Google Gemini Nano Banana AI Saree Trend
A viral AI photo trend took a chilling turn for one woman when the tool generated an image that included a personal detail not visible in her original picture. Instagram / Jhalak Bhawani

In the age of viral photo trends and ever-evolving AI, uploading a personal picture for a fun transformation seems completely harmless.

But for one woman, a recent encounter with the Google Gemini Nano Banana AI saree trend proved to be a genuinely chilling experience, raising a crucial question for all of us: what other details can AI uncover from your images?

The Google Gemini-powered 'Nano Banana AI saree trend' has created a lot of excitement on Instagram, transforming simple pictures into classic 1990s Bollywood-style portraits. Many women have joined in, sharing edits that feature flowing chiffon sarees and a gentle golden-hour glow.
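For readers curious about the mechanics, the workflow behind such edits is simple: an image and a text prompt are sent to an image-capable Gemini model, which returns the transformed picture. Below is a minimal sketch using Google's google-genai Python SDK; the API key placeholder, file names, prompt wording and model name are illustrative assumptions, not the exact setup behind the viral trend.

```python
# A minimal sketch of an image-to-image edit with the Gemini API.
# Assumptions: the google-genai SDK and Pillow are installed, a valid
# API key is supplied, and the model name below is still current.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder, not a real key

portrait = Image.open("portrait.jpg")  # the photo the user uploads
prompt = (
    "Transform this photo into a classic 1990s Bollywood-style portrait "
    "with a flowing chiffon saree and golden-hour lighting."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # the 'Nano Banana' image model
    contents=[portrait, prompt],
)

# The response can mix text and image parts; save the first image returned.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("saree_edit.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```

The point worth noticing for privacy is that the full-resolution photo leaves the user's device and is processed on Google's servers; what the service can draw on beyond that single upload is the question at the heart of this story.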

But when an Instagram user named Jhalak Bhawani gave the trend a try, she came across something quite disturbing and decided to share her experience on the platform.

The Unsettling Discovery

Jhalak shared what she found so unnerving about the trend after she tried generating her own image. 'A trend is going viral on Instagram where you upload your image to Gemini with a single prompt, and Gemini converts it into a saree. Last night, I tried this trend myself and found something very unsettling', she wrote in an Instagram post.

She uploaded a picture of herself in a green full-sleeve suit to Gemini, along with a written prompt, but what the AI produced next left her astonished. In her post, she explained her initial reaction to the picture, writing: 'I found this image very attractive and even posted it on my Instagram.'

'But then I noticed something strange — there is a mole on my left hand in the generated image, which I actually have in real life. The original image I uploaded did not have a mole. How did Gemini know that I have a mole on this part of my body?'

'You can see the mole — it's very scary and creepy. I'm still not sure how this happened, but I wanted to share this with all of you. Please be careful. Whatever you upload on social media or AI platforms, make sure you stay safe', she added.

The AI That Knows All

Her post immediately sparked a wide range of reactions. While several people voiced safety concerns, others dismissed it as normal or suggested she was only doing it to gain views.

The post also resonated with many users, who shared similar experiences. 'Yes I have noticed same things', one person commented.

Another user replied: 'This happened to me too. My tattoos which are not visible in my photos were also visible. I don't know how but it was happening.'

One commenter offered a chilling explanation, writing: 'This is normal. Because Gemini belongs to Google and Google photos have all your pictures, in which picture you must be seen in your mole, for better result he has used your old pictures. Google know everything about you.'

This shocking experience, coupled with the commenter's explanation, brings us to the most important question of all: just how safe is the Gemini Nano Banana tool?

The Privacy Question: What Does the Tool Know?

Although large technology companies such as Google and OpenAI provide tools to safeguard user-uploaded content, experts suggest that ultimate safety depends on individual habits and the intentions of those with access to the images.

Images generated with Google's Nano Banana tool, for example, carry an invisible digital watermark known as SynthID, along with metadata tags intended 'to clearly identify them as AI-generated', according to aistudio.google.com.

Although it's invisible to the naked eye, the watermark can be detected with specialised tools to confirm that an image was created by AI, as reported by spielcreative.com.
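
SynthID itself can only be verified with Google's detector, which is not public, but anyone can look at an image's embedded metadata for declared AI-generation tags. The sketch below is a rough, dependency-free check that scans a file's raw bytes for markers such as the IPTC DigitalSourceType field; the marker list and file name are assumptions for illustration, and a clean result proves nothing, since metadata is easily stripped.

```python
# A rough, dependency-free check for declared AI-generation metadata.
# It inspects embedded XMP/IPTC text only; it cannot detect SynthID,
# which is a pixel-level watermark needing Google's own detector.

def check_ai_metadata(path: str) -> list[str]:
    """Return any assumed AI-generation markers found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    # Assumed markers: the IPTC DigitalSourceType value used for AI imagery,
    # plus tag names that generators sometimes embed.
    markers = [
        b"trainedAlgorithmicMedia",
        b"compositeWithTrainedAlgorithmicMedia",
        b"DigitalSourceType",
        b"SynthID",
    ]
    return [m.decode() for m in markers if m in data]

hits = check_ai_metadata("saree_edit.png")  # file name is illustrative
if hits:
    print("Declared AI-generation tags found:", hits)
else:
    print("No AI tags found (metadata may simply have been stripped).")
```

Because these tags live in ordinary metadata, a screenshot or re-save removes them entirely, which is precisely the weakness critics highlight.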

Is the Watermark Truly a Safeguard?

However, as Tatler Asia reports, the detection tool is not yet available to the public, meaning most people have no way of verifying an image's authenticity. Critics also warn that watermarks are easy to forge or remove, a view shared by experts such as Hany Farid, a UC Berkeley professor, who stated: 'Nobody thinks watermarking alone will be sufficient.'

Meanwhile, Ben Colman, CEO of Reality Defender, added that in real-world scenarios, such applications often 'fail from the onset'. Experts believe that to combat convincing deepfakes effectively, watermarking must be combined with other technologies.