A fake video of a woman with a passport from 'Torenza' showed how generative AI makes fiction look frighteningly real. Pexels

Generative artificial intelligence has entered a new and dangerous phase, where fiction can convincingly pass for fact. From fake passports to cloned voices, AI hoaxes are growing increasingly sophisticated, leaving ordinary people, governments and even businesses struggling to tell what is real.

The so-called 'Torenza Passport Woman' is one example of how easily misinformation created by generative AI can spread worldwide in minutes.

The Torenza Passport Woman: A Viral AI Hoax

A video showing a woman arriving at New York's John F. Kennedy International Airport from Tokyo with a passport from a non-existent country called 'Torenza' recently went viral across TikTok and X (formerly Twitter). In the clip, the woman calmly hands her fabricated passport to immigration officials while explaining her fictional homeland's location. The footage fuelled online theories of time travel, alternate dimensions, and government conspiracies.

However, no official record of such an incident exists. Authorities at JFK Airport and US Customs and Border Protection have issued no statement confirming any passenger from 'Torenza'. Investigations revealed that the video was generated using artificial intelligence tools, borrowing heavily from the decades-old urban legend of the 'Man from Taured' — a supposed traveller who appeared in Tokyo in 1954 with documents from a non-existent nation before vanishing mysteriously.

The Origins of AI-Generated Deception

Generative AI has made it possible for anyone with an internet connection to create convincing images, videos, and voices that appear authentic. Tools such as Runway, Stable Diffusion and ChatGPT are accessible globally, allowing users to craft believable but false narratives with minimal technical skill.

AI-generated hoaxes exploit the human tendency to trust what can be seen and heard. When realistic content spreads online, it can mislead millions before fact-checkers have time to respond. As AI technology improves, so too does the quality of misinformation, making detection harder than ever.

Deepfakes, Voice Cloning, and the Spread of AI Scams

Beyond fictional stories like 'Torenza', generative AI is increasingly used in cybercrime. Deepfake technology, which can convincingly replicate a person's likeness or voice, has enabled scams that target both individuals and corporations. Fraudsters have cloned voices to impersonate family members or company executives, tricking victims into transferring money or sharing confidential data.

A 2024 survey found that nearly half of the companies polled had experienced deepfake-related fraud, with one report citing a case in which a Hong Kong firm lost over £20 million (approximately $25 million) to an AI-generated video call impersonation. Voice cloning in particular is one of the hardest forms of deception to detect, with 70% of people admitting they cannot reliably distinguish cloned voices from real ones.

The Accessibility and Risk of Open-Source AI

While commercial AI systems such as ChatGPT and Midjourney have built-in safety filters, open-source models often do not. These unrestricted systems, like GPT-J developed by EleutherAI in 2021, have been repurposed by criminals to create rogue tools such as FraudGPT and WormGPT. These programs can generate phishing emails, malware, and other forms of cyber fraud without limitation.

The open-source nature of these models means developers can train them on any data they choose, including harmful or illegal material. This lack of oversight makes them powerful tools in the wrong hands, allowing malicious actors to bypass the restrictions found in mainstream AI systems.

Who Is Most at Risk?

The rapid spread of AI-driven misinformation disproportionately affects society's most vulnerable groups: the elderly, the young, and those with intellectual disabilities. The FBI's Internet Crime Complaint Center reported a 14% rise in scams targeting older adults in 2023, resulting in financial losses exceeding £2.5 billion (approximately $3.4 billion). Many of these cases involved voice or video scams powered by generative AI.

Younger people are also at risk due to their increased online presence. Fake job offers, investment schemes, and romance scams often rely on AI-generated communication to appear legitimate. Those unfamiliar with AI technology can easily fall victim to these convincing deceptions.