Two watermarks, two different jobs
When you generate an image in Gemini or Google AI Studio and click Download, Google applies both watermarks to the output file. They serve completely different purposes:
| | Nano Banana (visible) | SynthID (invisible) |
|---|---|---|
| What it looks like | Small semi-transparent logo, bottom-right corner | Nothing. Invisible to the human eye |
| Size | 48x48 or 96x96 pixels | Spread across the entire image |
| Purpose | Visual branding — signals "AI-generated" to anyone who sees the image | Machine-readable provenance — lets platforms detect AI origin automatically |
| Survives editing? | No. Cropping, painting over, or alpha blending removes it | Yes. Survives cropping, resizing, JPEG compression, screenshots |
| Can be removed? | Yes, mathematically via reverse alpha blending | No, not without destroying image quality |
The visible watermark is branding. SynthID is infrastructure. Google knows the banana logo will get cropped out or painted over — that is exactly why SynthID exists.
How SynthID works
SynthID was developed by Google DeepMind and announced in August 2023. The core idea: embed a statistical signal into the image during generation, not after. The watermark is part of the image creation process, not an overlay applied on top.
The technical approach
SynthID modifies the image generation model's output at the diffusion step level. During the denoising process that produces the final image, the model subtly adjusts pixel values to encode a binary message. These adjustments are tiny — typically less than 1 unit on the 0-255 scale per channel per pixel. The human eye cannot detect the difference between a watermarked and unwatermarked version of the same image.
The signal is distributed across the spatial frequency domain of the image. Rather than concentrating the watermark in one region (like the visible banana in the corner), SynthID spreads it across the entire image in a way that is redundant. If you crop 50% of the image, the remaining half still contains enough signal for detection. If you resize, compress to JPEG at quality 70, or take a screenshot, the statistical pattern persists.
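The crop-survival property comes from redundancy. As a toy illustration (not Google's algorithm, which is unpublished), tiling a small ±1 pattern across an image lets a correlation detector recover the signal from any aligned crop:

```python
import random

random.seed(0)

SIZE, TILE = 64, 8
# Hypothetical +-1 tile; the real SynthID encoding is proprietary.
pattern = [[random.choice([-1, 1]) for _ in range(TILE)] for _ in range(TILE)]

# A flat mid-gray "image" with the pattern tiled across every 8x8 block.
image = [[128 + pattern[y % TILE][x % TILE] for x in range(SIZE)]
         for y in range(SIZE)]

def detect(img):
    """Average correlation of pixel deviations against the known tile."""
    score = sum((px - 128) * pattern[y % TILE][x % TILE]
                for y, row in enumerate(img) for x, px in enumerate(row))
    return score / sum(len(row) for row in img)

print(detect(image))                                           # 1.0: full image
print(detect([row[:SIZE // 2] for row in image[:SIZE // 2]]))  # 1.0: 25% crop
blank = [[128] * SIZE for _ in range(SIZE)]
print(detect(blank))                                           # 0.0: no watermark
```

A real crop need not start at a tile boundary; production systems have to tolerate shifts, resampling, and recompression, which is one reason SynthID embeds its signal across the spatial frequency domain rather than as a literal pixel tile.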
Google has not published the exact encoding algorithm — for obvious reasons. If the method were fully public, building a removal tool would be straightforward. What has been published (in the DeepMind research papers from 2023-2024) describes a general framework: the watermark modifies the latent space representation during generation, and a trained classifier can detect whether a given image contains the SynthID signal.
Detection confidence
SynthID detection is probabilistic, not binary. The system returns one of three results:
- "Watermark detected" — high confidence the image was generated by a Google model
- "Watermark not detected" — high confidence it was not
- "Inconclusive" — the signal is too degraded to tell (heavy editing, very aggressive JPEG compression, or reconstruction from a low-resolution screenshot)
In internal testing reported by DeepMind, SynthID maintained detection accuracy above 95% after JPEG compression at quality 75, resizing to 50% of original dimensions, and even after screenshot-and-reupload cycles. The accuracy drops below 80% only under aggressive transformations like reducing to under 256px resolution or applying heavy artistic filters that fundamentally reshape the pixel distribution.
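The three-way result can be sketched as simple thresholding of a detector confidence score. The 0.9/0.1 cutoffs below are illustrative assumptions, since Google has not published the real ones:

```python
def classify(score, hi=0.9, lo=0.1):
    """Map a detector confidence score to SynthID's three-way result.

    The 0.9 / 0.1 cutoffs are illustrative; the actual thresholds are unpublished.
    """
    if score >= hi:
        return "Watermark detected"
    if score <= lo:
        return "Watermark not detected"
    return "Inconclusive"

print(classify(0.97))  # clean Gemini download
print(classify(0.45))  # heavily edited image, signal degraded
print(classify(0.02))  # camera photo, no watermark
```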
Why SynthID cannot be removed
The Nano Banana watermark sits in a known 48x48 or 96x96 pixel region. Its shape, position, and opacity are fixed. Because the blending formula is standard alpha compositing with a known overlay color, you can reverse it with simple arithmetic; for a white logo: original = (watermarked - alpha * 255) / (1 - alpha). The mask is the same for every image. The math is exact.
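A minimal sketch of that reversal for a single pixel, assuming the overlay is a pure white logo at known opacity:

```python
def remove_white_overlay(watermarked, alpha):
    """Invert standard alpha compositing for a known white (255) overlay.

    Forward blend: watermarked = alpha * 255 + (1 - alpha) * original
    """
    original = (watermarked - alpha * 255) / (1 - alpha)
    return max(0, min(255, round(original)))

# Blend a white logo pixel over value 100 at 40% opacity, then reverse it.
alpha = 0.4
blended = alpha * 255 + (1 - alpha) * 100   # 162.0
print(remove_white_overlay(blended, alpha))  # → 100
```

In practice the same inversion runs per channel over every pixel inside the known logo mask; pixels outside the mask are untouched.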
SynthID is a fundamentally different problem. The signal is not localized — it touches every pixel in the image. It is not a fixed pattern — the encoding adapts to the content of each specific image. And the exact encoding parameters are secret.
Could you remove it by brute force? In theory, adding random noise to every pixel would degrade the SynthID signal. But you would need to add enough noise to overwhelm the watermark's statistical pattern, and that much noise would visibly degrade the image. You would be trading one problem (an invisible watermark) for a worse one (a noisy, damaged image).
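A back-of-envelope calculation shows why. Assuming (hypothetically) a ±1-per-pixel signal summed over N pixels against independent Gaussian noise of standard deviation sigma, the detection z-score scales as sqrt(N)/sigma:

```python
import math

# Hypothetical +-1-per-pixel signal over N pixels vs. independent Gaussian
# noise of standard deviation sigma: detection z-score ~ sqrt(N) / sigma.
N = 512 * 512
for sigma in (1, 10, 100, 500):
    z = math.sqrt(N) / sigma
    print(f"noise sigma={sigma:>3}: z-score ~ {z:.1f}")
```

Under these toy assumptions, pushing the z-score below a typical detection threshold of about 3 on a 512x512 image requires sigma near 170 on the 0-255 scale, which is visually catastrophic.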
Some researchers have explored adversarial attacks against watermarking systems in general. A 2024 paper from the University of Maryland showed that with access to the detection model, you can craft perturbations that fool the detector while keeping the image visually intact. But this requires access to the detector itself (which Google does not provide publicly) and significant computational resources. It is an academic exercise, not a practical tool.
SynthID beyond images
Google has expanded SynthID beyond image generation. As of 2025, SynthID watermarks are applied to:
- Text generated by Gemini models (using a technique that biases token selection toward detectable patterns)
- Audio generated by Google's music models
- Video frames from Google's video generation tools
The text watermarking works differently from the image version. Instead of modifying pixel values, it subtly biases which words the model selects at each generation step. A detection algorithm can analyze a block of text and determine whether it was likely generated by a SynthID-enabled model. This approach is less robust than the image version — paraphrasing or translating the text breaks the signal.
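A generic token-bias sketch in the style of published "greenlist" watermarking schemes (SynthID Text's actual method differs in its details and is only partially public): hash the previous token to mark part of the vocabulary green, bias generation toward green tokens, and detect by counting the green fraction:

```python
import hashlib
import random

random.seed(0)
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def is_green(prev_token, token):
    """Deterministically mark roughly half the vocab 'green' per context."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def generate(n, bias=0.9):
    """Sample uniformly, but restrict to green tokens `bias` of the time."""
    out = ["<s>"]
    for _ in range(n):
        pool = VOCAB
        if random.random() < bias:
            pool = [t for t in VOCAB if is_green(out[-1], t)] or VOCAB
        out.append(random.choice(pool))
    return out[1:]

def green_fraction(tokens):
    """Detector: fraction of tokens that are green given their predecessor."""
    prev = ["<s>"] + tokens[:-1]
    return sum(map(is_green, prev, tokens)) / len(tokens)

text = generate(200)
print(green_fraction(text))  # well above the ~0.5 expected by chance
```

The fragility the text mentions falls out of this design: paraphrasing or translating replaces the tokens, so the green/red statistics revert toward chance and the detector loses its signal.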
How this compares to other watermarking approaches
Google is not alone in watermarking AI-generated content. Here is how the major players handle it:
| Platform | Visible watermark | Invisible watermark | Metadata tags |
|---|---|---|---|
| Google Gemini | Nano Banana logo | SynthID | C2PA metadata (partial rollout) |
| OpenAI DALL-E | None (removed in 2024) | None publicly confirmed | C2PA metadata |
| Midjourney | None | Steganographic (details undisclosed) | EXIF metadata |
| Adobe Firefly | None | Content Credentials (CR) | C2PA metadata (full support) |
| Stability AI | None | Optional, model-dependent | Varies by deployment |
Google is one of the few that still uses a visible watermark alongside the invisible one. Most competitors rely on metadata (C2PA) or invisible watermarks alone. The C2PA standard (Coalition for Content Provenance and Authenticity) is an industry initiative backed by Adobe, Microsoft, Google, and others. It attaches a signed certificate to the image file, recording its origin and editing history. Unlike SynthID, C2PA metadata can be stripped by simply re-saving the image in a format that does not support it.
What this means for you
If you are generating images in Gemini for practical use — presentations, social media, blog posts, mockups — here is what you need to know:
The visible Nano Banana watermark is removable. It is a semi-transparent overlay in a fixed position, and it can be reversed mathematically without any loss of quality. Tools like Banana Clean do this automatically in under 100ms.
SynthID stays. No matter what you do to the image short of destroying it, the invisible watermark persists. This is by design. It means platforms like YouTube, Instagram, and TikTok can automatically detect that your image was AI-generated, even if you crop out the banana logo. Several platforms already use AI content detection in their moderation pipelines, and SynthID gives them a reliable signal.
For most use cases, SynthID does not matter. It does not affect how the image looks. It does not degrade quality. It does not add visible artifacts. It is metadata, effectively — just metadata that cannot be stripped.
If you are concerned about disclosure: removing the visible watermark does not hide the AI origin of the image. SynthID ensures that the provenance is always detectable by machines. If a platform requires AI content disclosure (Meta, TikTok, YouTube all have such policies as of 2026), SynthID-based detection can enforce it regardless of what you do to the visible watermark.
FAQ
Can any tool remove SynthID?
No practical tool exists. Academic research has demonstrated adversarial attacks in controlled settings, but these require access to the detection model and produce visible artifacts. For real-world use, SynthID is effectively permanent.
Does SynthID affect image quality?
Not in any perceptible way. The adjustments are tiny: typically less than 1 unit per channel on the 0-255 scale. Side-by-side comparison of watermarked and unwatermarked versions of the same image shows no visible difference.
Does Banana Clean remove SynthID?
No. Banana Clean removes only the visible Nano Banana logo using reverse alpha blending. SynthID is an entirely separate system embedded in the pixel data, and Banana Clean does not attempt to modify it. After cleaning, SynthID remains intact in the image.
If I screenshot a Gemini image instead of downloading it, does it still have SynthID?
In most cases, yes. SynthID is designed to survive screenshot-and-reupload cycles. Detection accuracy drops slightly compared to the original file, but remains above 90% in typical conditions (standard screenshot resolution, no heavy post-processing).
Is SynthID open source?
Partially. Google released a reference implementation for SynthID Text on GitHub. The image watermarking system remains proprietary. The detection API is available to select partners but not to the general public.