Will Smith Eating Spaghetti
Seedance tested in the latest Will-Smith-Eating-Spaghetti benchmark. X/Toolnav.ai

The Will-Smith-Eating-Spaghetti Test began as a bizarre internet clip, but it has become one of the most talked-about indicators of progress in generative video.

Once shared as a joke about broken AI, the test now surfaces in serious discussions about model quality. And with the arrival of new systems such as Seedance 2.0, the strange benchmark is back in the spotlight.

In early versions of AI video, the prompt 'Will Smith eating spaghetti' produced scenes that looked more like nightmares than meals. Faces warped, forks melted into hands, and noodles appeared to teleport between frames.

The awkward clip spread rapidly online, attracting millions of views and reactions. But beneath the humour was a useful insight: the test captured nearly every technical weakness in early video models.

From Viral Meme to Industry Benchmark

What began as a viral meme soon became an unofficial stress test. Developers and researchers realised that if a system could convincingly generate a person eating spaghetti, it had likely solved several of the hardest problems in video synthesis at once.

According to Know Your Meme, the original clip emerged in March 2023, generated by a text-to-video model using the prompt 'Will Smith eating spaghetti.' The result was widely shared across Reddit and social media, where users described it as 'uncanny' and 'nightmarish.'

Instead of fading, the clip became a shorthand inside AI circles. The scenario combined moving hands, complex facial expressions, reflective cutlery, and flexible noodles. At the time, early models struggled to render these consistently. The scenario became a quick visual check: if the spaghetti looked believable and the face stayed stable, the model had improved.

By February 2024, the meme had become so widespread that Will Smith himself joined the trend. He posted a parody video on Instagram, pretending to be an AI-generated version of himself eating spaghetti, drawing hundreds of thousands of likes in hours.

Why The Spaghetti Scene Is So Hard

The test works because it stresses several technical challenges at once. Noodles stretch, overlap, and interact with sauce. Hands must grasp utensils and move them toward the mouth. The face must stay consistent while chewing or speaking.

Early systems often failed in dramatic ways. Forks merged into fingers, faces drifted between frames, and noodles seemed to ignore gravity. The clip exposed issues with temporal consistency, identity preservation, and object interaction.

Because of this, the spaghetti scenario evolved into an informal benchmark. It offered a simple, recognisable scene that could quickly reveal a model's weaknesses.

Seedance 2.0 And The New Spaghetti Test

In 2026, the spaghetti test returned to headlines with the release of Seedance 2.0, a new video model linked to ByteDance. Early demonstrations suggested the system had largely 'solved' the spaghetti problem, producing far more coherent scenes than earlier models.

Seedance 2.0 focuses on multi-modal generation, combining text, images, video, and audio to produce more consistent results. It can generate multi-shot sequences, maintain character identity across scenes, and synchronise dialogue with lip movements.

Test clips using the classic spaghetti prompt show smoother hand movements, stable faces, and noodles that behave like real objects rather than visual noise. Social media posts describing the model note that the once-awkward spaghetti scenario has become a 'stress test' that newer systems can now pass with relative ease.

Industry chatter suggests models like Seedance 2.0 and competing systems have reached a new level of realism, with fluid motion, sharper detail, and synchronised audio in short scenes. Some observers say the technology is approaching production-ready quality for certain tasks.

Progress Brings New Questions

As video models improve, the risks grow alongside them. More realistic AI footage raises concerns about deepfakes, misinformation, and unauthorised use of celebrity likenesses. Reports note that highly realistic tools could be used to create convincing but false footage, increasing pressure for regulation and disclosure.

Many platforms have already introduced restrictions around public-figure prompts, and lawmakers in several regions are pushing for clearer labelling of synthetic media.

A Strange Test With Serious Meaning

The Will-Smith-Eating-Spaghetti Test began as a joke, but it has become a surprisingly effective yardstick for the AI video industry. The scene compresses multiple technical challenges into one familiar moment, making it easy to compare progress across models.

With systems like Seedance 2.0 now passing the once-impossible test, the meme has transformed into a milestone. What used to reveal AI's weaknesses now signals its rapid advance and hints at a future where synthetic video is part of everyday production.