OpenAI's Sora Is the Ultimate Disinformation Weapon, Experts Warn - Here's How
Experts say OpenAI's new text-to-video tool blurs the boundary between fact and fiction, raising fears of an unprecedented global disinformation surge

In an age of disinformation and misinformation, a new titan in synthetic media has arrived. Experts fear that OpenAI's Sora could become the most dangerous disinformation tool ever created, and the claim is not hyperbole. With its ability to produce ultra-realistic videos from simple text prompts that anyone can write, Sora blurs the line between truth and fabrication in ways the world has never seen before.
Sora is OpenAI's groundbreaking text-to-video AI model, capable of transforming written prompts, much like those typed into ChatGPT, into vivid short videos that look convincingly, even dangerously, real. In September 2025, OpenAI unveiled Sora 2 alongside a social media-style app built around the model. The app lets users generate clips from their imagination and, more worryingly, insert themselves or others into scenes through a feature called 'cameo', then share those creations in an interactive feed. According to reports, videos generated by Sora can run up to 10 seconds, and users must voluntarily opt in before their likeness can appear in public videos, a feature that could prove both popular and easily misused.
New Wave of Misinformation
While OpenAI markets Sora as a creative, playful platform for storytelling, experts have warned that the technology's ease of use and realism could unleash a new wave of misinformation on top of the disinformation already rampant around the world. The danger is that what once required sophisticated editing tools and advanced technical skill can now be achieved with a few keystrokes by anyone. That accessibility, coupled with the virality of social platforms, has raised alarm among researchers, policymakers, and journalists who fear Sora could distort public understanding of reality even more than fabricated AI content already does.
Disinformation by Default: Why Sora Is Especially Risky
Although the app has not yet launched in many countries, early use has exposed Sora's defining risk: the combination of the hyper-realistic imagery it can create and its accessibility to everyday users. Before Sora, creating a convincing deepfake required time, effort, and specialist software, and the results were often easy to identify. Now anyone can produce a lifelike video of, say, a political figure giving a fake speech, or a fabricated scandalous clip of a celebrity, in a matter of minutes. Reports so far suggest that within hours of Sora's public rollout, users had generated disturbing videos depicting war zones, mass shootings, and racial hate speech, illustrating how the model can, unintentionally or maliciously, reproduce harmful and misleading narratives capable of inciting public rage.
And this is just the tip of the iceberg. Experts have also criticised the system's weak guardrails, arguing that they are reactive rather than preventive: moderation filters often fail to catch troubling material until after it spreads. Further complicating matters, OpenAI's decision to allow the use of copyrighted content unless creators explicitly opt out has caused widespread concern among artists and rights holders, who see it as putting their work at risk. While OpenAI has reportedly given original artists a way to have copyrighted content taken down, there is no blanket opt-out covering all uses of their work.
The biggest worry, however, is that identity misuse poses an irreversible danger. OpenAI reportedly requires users to complete a 'liveness check' before their likeness can be used, yet researchers and users have allegedly found ways to bypass that restriction. These loopholes reveal how vulnerable digital identities remain in the age of generative video.
What Experts Warn About OpenAI's Sora
The alarm around Sora is clearest in the words of industry experts, as reported. Joan Donovan of Boston University described Sora as a system with 'no fidelity to history' and 'no relationship to the truth,' warning that malicious actors could use it to promote hate, harassment, and incitement. Emily Bender of the University of Washington echoed that sentiment, noting that Sora is 'creating a dangerous situation where it's harder to find trustworthy sources and harder to trust them once found'. David Karpf of George Washington University added that Sora had already been exploited to produce deepfake characters promoting cryptocurrency scams, highlighting how easily it can be weaponised for deception. He went on to say,
'In 2022, [the tech companies] would have made a big deal about how they were hiring content moderators ... In 2025, this is the year that tech companies have decided they don't give a shit.'
Their warnings converge on one clear message: Sora normalises fabricated video as a persuasive tool, making it increasingly difficult to separate authenticity from manipulation.