OpenAI's Sora 2 (AFP)

OpenAI has quietly unleashed its latest creation, Sora 2, an AI video generator capable of producing jaw-droppingly realistic footage that blurs the line between real and fake.

The company launched the technology on 30 September through an invite-only iOS app in the US and Canada, along with a private web platform. Within days, downloads soared past one million, as users flooded social media with eerily lifelike clips generated entirely by artificial intelligence.

But the buzz has quickly given way to alarm. Copyright holders, news organisations and disinformation watchdogs are now warning that Sora 2's ultra-realistic videos could open the floodgates to deepfakes, misinformation and digital identity theft.

Experts are demanding answers: How exactly does Sora 2 work? Who is pushing back? What safeguards are in place to prevent abuse? And perhaps the biggest question of all — can anyone still tell what's real anymore?

OpenAI's Sora 2 Launch and Capabilities

Sora 2 is the second-generation version of OpenAI's video synthesis model. It lets users generate short video clips from text prompts or modify existing video segments. A 'cameos' feature also allows users to insert themselves, or others who have opted in, into generated scenes.
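OpenAI has not published a developer API alongside this launch, so the interaction model is best pictured generically: a text prompt goes in, a short clip comes back. Purely as an illustration of that shape, here is a minimal sketch; the endpoint, parameters and response fields are hypothetical, not a real OpenAI interface.

```python
# Hypothetical text-to-video request illustrating the prompt-in,
# clip-out shape described above. The endpoint, fields and token
# are invented for illustration; this is not a real OpenAI API.
import requests

resp = requests.post(
    "https://api.example.com/v1/video/generate",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={
        "prompt": "a golden retriever skateboarding through Tokyo at dusk",
        "duration_seconds": 8,   # the app produces short clips
        "cameo_id": None,        # opt-in likeness insertion, if granted
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["video_url"])  # hypothetical response field
```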

In its launch phase, OpenAI limited availability to the US and Canada through an invite system, and opened pre-registration for a future Android version.

OpenAI positioned Sora 2 as more controllable and safer than many comparable video-AI tools, with embedded provenance metadata (C2PA), moving watermarks, content filters, and moderation layers designed to screen for misuse.

Rapid Adoption and Virality

Sora 2 gained steam almost instantly. It topped the US App Store and reportedly saw 627,000 iOS downloads in its first week, surpassing ChatGPT's own early uptake. Its viral output included imaginative, stylised clips and provocative mashups of known characters.

But some of those same viral pieces drew alarm: a clip depicting SpongeBob SquarePants cooking methamphetamine, for example, spread widely and fanned fears of misuse.

The app's speed of adoption has forced OpenAI to scramble to expand moderation capacity as creators and critics push back.

Rising Copyright Pushback

Hollywood interests and talent agencies swiftly voiced objections. The Creative Artists Agency (CAA) warned that Sora 2 threatens creators' rights, insisting on compensation, credit and control.

Meanwhile, the Motion Picture Association (MPA) criticised OpenAI's early policy on default inclusion of copyrighted works, placing onus on rights holders to opt out.

In response, OpenAI CEO Sam Altman announced a shift to opt-in controls for copyrighted characters: the system will not generate a given character unless the rights holder has explicitly granted permission.

The company said it would share revenue with IP owners who opt in. OpenAI also introduced a copyright disputes form and promised to block characters flagged for violation.
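The practical difference between the old and new policies is the default. Under opt-out, a character is fair game until its owner objects; under opt-in, absence from the permission list means refusal. A toy sketch of that default-deny logic follows; the names, data structures and revenue-share field are illustrative, not OpenAI's actual system.

```python
# Toy default-deny allowlist illustrating opt-in IP controls.
# Rights holders, characters and terms here are hypothetical.
OPTED_IN_IP = {
    "example_studio": {"characters": {"example_hero"}, "revenue_share": 0.20},
}

def may_generate(character: str) -> bool:
    """Opt-in: a character is usable only if some rights holder has
    explicitly listed it. Under the earlier opt-out policy, the default
    would have been True unless the holder had objected."""
    return any(character in terms["characters"]
               for terms in OPTED_IN_IP.values())

print(may_generate("example_hero"))      # True: explicitly opted in
print(may_generate("unlicensed_robot"))  # False: absence means blocked
```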

Critics, however, note that blanket opt-out requests are not accepted, and that enforcement remains uncertain. Some studios have reportedly already opted out or barred their IP from being used.

Disinformation and Deepfake Risks

The realism of Sora 2 has ignited alarm among misinformation specialists. According to The Guardian, generated scenes depicting violence or racist imagery have already surfaced, raising fears that lifelike video could be used for fraud, bullying or deception.

Academics and analysts warn that if AI-generated video becomes indistinguishable from real footage, trust in authentic media could erode.

Security firms caution that Sora and similar tools are fertile ground for impersonation scams, with fraudsters able to strip watermarks or mimic voices. The app's capacity to produce photorealistic content quickly and at scale magnifies these risks.

To counter misuse, OpenAI limits generation of public figures, prohibits direct video-to-video transformations, mandates watermark insertion, and deploys layered moderation (classifiers, human review, reporting tools). But critics argue that these protections may not scale to viral volume.
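That "layered moderation" description matches a common trust-and-safety pattern: cheap automated classifiers screen everything, borderline cases escalate to human reviewers, and clear violations are blocked outright. Below is a minimal sketch of the pattern, with entirely hypothetical classifiers and thresholds rather than OpenAI's internal tooling.

```python
# Generic layered-moderation pipeline sketch. The classifier, thresholds
# and review queue are hypothetical stand-ins for OpenAI's tooling.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allowed: bool
    reason: str

@dataclass
class ModerationPipeline:
    block_threshold: float = 0.9    # auto-block above this score
    review_threshold: float = 0.5   # escalate to humans above this
    review_queue: list = field(default_factory=list)

    def score(self, prompt: str) -> float:
        # Placeholder for a real multimodal classifier over text,
        # frames and audio; here, a crude keyword heuristic.
        if "strip watermark" in prompt.lower():
            return 1.0
        if "public figure" in prompt.lower():
            return 0.6
        return 0.0

    def check(self, prompt: str) -> Verdict:
        s = self.score(prompt)
        if s >= self.block_threshold:
            return Verdict(False, "blocked by automated classifier")
        if s >= self.review_threshold:
            self.review_queue.append(prompt)  # humans decide later
            return Verdict(False, "held for human review")
        return Verdict(True, "passed automated checks")

pipe = ModerationPipeline()
print(pipe.check("a cat surfing a rainbow"))             # passes
print(pipe.check("a public figure giving a speech"))     # held for review
print(pipe.check("how to strip watermark from a clip"))  # blocked
```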

OpenAI's Safety and Moderation Framework

OpenAI's published system documentation outlines a multi-layered safety stack: input and output filters across audio, visual frames and text; stricter thresholds for material involving minors; and internal moderation tools.

Every generated video carries C2PA metadata and a visible moving watermark to support provenance and traceability. Rights holders may request takedowns via the disputes form and block specific characters.
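In principle, anyone who downloads a clip can inspect that C2PA metadata. Here is a minimal sketch of such a check, assuming the open-source c2pa-python bindings from the Content Authenticity Initiative; the exact API surface has shifted between releases, so treat the calls as illustrative.

```python
# Sketch: reading C2PA provenance metadata from a downloaded video.
# Assumes the open-source c2pa-python bindings (pip install c2pa-python);
# the API has changed between releases, so this is illustrative only.
import json
from c2pa import Reader

def describe_provenance(path: str) -> None:
    try:
        reader = Reader.from_file(path)            # parse embedded manifests
        manifest_store = json.loads(reader.json())
        active = manifest_store.get("active_manifest")
        print(f"{path}: signed manifest found (active: {active})")
    except Exception as err:
        # No manifest, stripped metadata, or a failed signature check.
        print(f"{path}: no verifiable C2PA provenance ({err})")

describe_provenance("downloaded_clip.mp4")
```

Note the caveat in the error path: because metadata can be stripped during re-encoding, a missing manifest signals caution rather than proof of fabrication.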

However, OpenAI has held back a full global rollout, relying on incremental deployment and ongoing safety evaluation.

Industry and Policy Reactions

Analysts suggest Sora 2 could challenge traditional short-form video and content platforms, prompting a competitive realignment.

According to CNN, OpenAI is also partnering with Broadcom to design and develop 10 gigawatts of custom AI chips and systems, a massive infrastructure project expected to consume as much power as a large city.

The collaboration highlights how energy-intensive the AI boom has become, with OpenAI scaling its hardware capacity to support growing demand for tools like ChatGPT and Sora 2.

Governments and regulators are already weighing AI video rules, transparency mandates and copyright reform as the technology accelerates.

In short, Sora 2 has arrived as both a breakthrough in generative video and a lightning rod for legal, ethical and disinformation debates.