Google has released a new state-of-the-art image generation and editing model that goes by two equally confusing names: “Nano Banana” and “Gemini 2.5 Flash Image Preview”. It clearly beats Flux Kontext and GPT-5 in editing capability. In my tests, I found the model extremely capable at both generation and editing, although it is hampered by the very basic API options Google offers. For example, you cannot set the aspect ratio of the output: every generated image is square.
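For context, here is roughly what a generation request looks like through the google-genai Python SDK. This is a minimal sketch: the prompt and output filename are placeholders, and whether the response_modalities config is strictly required for this model is my assumption. The point is what is missing: there is nowhere to ask for a different aspect ratio or resolution.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents="A photorealistic banana on a desk next to a keyboard",
    # Generic generation knobs only; no aspect-ratio or image-size
    # parameter is exposed for this model.
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The image comes back as inline bytes in the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("banana.png", "wb") as f:
            f.write(part.inline_data.data)  # always square
```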
A few users on HN noticed something strange, which I can also verify: the model’s safety settings are different in the European Union than in the United States. Based purely on the geolocation of your IP address, certain outputs will be either refused or allowed. This happens both on the web platform and via the API, and there is no indication of it in the user interface or API documentation. The image is simply replaced by a little caution icon telling you that it contained “prohibited content”. The only explanation on the Google AI Studio interface (regardless of your location):
Gemini 2.5 Flash Image does not currently support editing images of children.
From the tests that I did, the restrictions seem to be:
- You cannot generate an image of a known person. This includes the obvious, like politicians and movie stars, but also historical figures (Elvis, Einstein, even Shakespeare, Jesus, and Tutankhamun) and mythological figures like King Arthur. This one is very consistent.
- You cannot edit an existing image that contains a photo of a known person. This one is a little weirder, because some sort of very imprecise facial recognition seems to be used to decide which images get blocked. Sometimes I can upload a photo of a famous actor and modify the image without issue. Sometimes I upload a photo of myself and get blocked if I make any modification. Sometimes the model will refuse to edit a photo of a famous statue, such as Michelangelo’s David.
The implementation of the restriction is very half-assed. First, as mentioned above, the detection of “identifiable faces” in uploaded images is quite variable, with both false positives and false negatives. Second, the detection happens solely at the API-output layer. The model itself has no idea that there are different rules for different regions, or even what region the user is in; what matters is simply the location of your IP at the time the image is viewed. You can make a request in the EU, then turn on a US-based VPN, reload the page, and see the image. The real kicker is that the notice claiming the model doesn’t support editing images of children isn’t even true: Nano Banana will happily modify the face of any child, famous or otherwise, as long as you’re outside the EU.
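If you want to reproduce the region dependence yourself, a sketch of an edit request is below (again via the google-genai Python SDK; exactly how a blocked result surfaces, e.g. a prohibited-content finish reason with no image parts, is my assumption about the response shape). Run the same script behind an EU egress IP and then a US one and compare.

```python
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

# Any photo containing a face the filter deems "identifiable".
source = Image.open("portrait.jpg")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[source, "Put a red baseball cap on this person."],
)

candidate = response.candidates[0]
# When the result is blocked, no image part is returned and the finish
# reason indicates prohibited content (exact enum value may differ).
print("finish_reason:", candidate.finish_reason)

if candidate.content and candidate.content.parts:
    for part in candidate.content.parts:
        if part.inline_data is not None:
            with open("edited.png", "wb") as f:
                f.write(part.inline_data.data)
```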
Exactly which legislation is this system intended to comply with? The AI Act requires that systems generating photorealistic images “ensure synthetic content is marked in a machine-readable format and detectable as artificially generated”. Currently, images generated by Gemini carry only a small, partially transparent Gemini logo, which is trivial to crop out. Google has also released SynthID (in beta), and we can assume it is in use here. Although very little is known about SynthID, it surely qualifies as a machine-readable format for marking synthetic content. But perhaps more importantly, under GDPR a photo of a person (even a synthetic one!) is considered that person’s personal information. Therefore, to generate and store images of a celebrity, you need either their explicit consent or a legitimate-interest argument for generating a photo of them. This GDPR angle aligns more closely with the blocking we observe, since SynthID already covers the AI Act requirement and the blocking is centered on recognizable faces rather than humans in general. That said, the European Data Protection Board takes the stance that public persons such as celebrities cannot reasonably expect their personal data to be excluded from AI systems (this relates to the legitimate-interest basis). So when it comes to famous people, GDPR does not require images to be censored.
We’ve established that no EU-wide law requires Google to do this, so for completeness I’ve also looked into a few country-specific laws. Most EU legal systems consider personality rights to be inheritable (usually by a family or estate), whereas in US law they are generally lost upon death. Moreover, many European countries have specific laws protecting a deceased individual’s privacy (the “privacy” component in particular is very easy for regulators and courts to connect to GDPR). The protections are generally even stronger when commercial use is involved: see, for example, a recent Spanish Supreme Court ruling in which festival organizers were fined for using an image of a dead artist. When these national systems interact with EU-wide legislation, the AI Act and GDPR create a difficult compliance situation: more than two dozen countries, each enacting slightly different laws with similar motivations, some stricter than others. Generating a realistic image of Einstein or Shakespeare could theoretically trigger claims from estates, heirs, and trusts across all of those legal systems.
In this regime of legal uncertainty, the risks of noncompliance are extremely high. European regulators are generally very happy to hand out huge fines to American tech companies that violate the GDPR: Meta alone has been fined upwards of €1 billion, and Amazon and Google have both received fines in the hundreds of millions of euros. The AI Act allows fines proportional to global revenue for violations involving high-risk systems, for intentional deployment of prohibited practices such as manipulation and abuse, and for diffusion of false information (which could potentially be interpreted to include fake images of public individuals). It is hard to imagine that we won’t soon see court cases with similarly high penalties stemming from the AI Act.
However, the restrictions that are actually implemented make it clear that, in Google’s view, the EU remains an important region of AI consumers, users, and commentators: Google would rather completely neuter its product than not release it there. Many previous Google models have been region-gated, but that practice tends to dampen the hype wave of a global, SOTA release. On the other hand, Google evidently doesn’t see the EU as an important source of business revenue, since it decided it wasn’t worth the effort or risk to build more granular safety controls that would let the product actually be useful there.
The unfortunate irony of Google’s choice is that the censorship on this model clearly goes far beyond what any European court or law requires, but it seems that the (financial) costs of being wrong outweigh the (financial and social) costs of preemptive over-compliance.