Netflix documentary 'The Investigation of Lucy Letby' is under fire for using AI to maintain the anonymity of key interviewees @LucyLetbyTrials / X

Netflix's latest true-crime offering, The Investigation of Lucy Letby, has ignited a fierce debate over the platform's deployment of artificial intelligence (AI) to anonymise key interviewees. Released on 4 February 2026, the documentary delves into the case of Lucy Letby, the neonatal nurse convicted of murdering seven babies and attempting to kill seven more at the Countess of Chester Hospital between 2015 and 2016.

While the film provides insights into the police investigation and lingering questions about Letby's guilt, the use of AI-generated avatars for witnesses—such as a victim's mother and Letby's friend—has drawn sharp criticism. These digital disguises, intended to protect identities, have instead been lambasted for plunging into the 'uncanny valley,' where near-realistic simulations evoke unease rather than empathy.

The AI technique alters names, appearances, and voices, creating avatars that blink, cry, and express emotion but often appear eerily detached. On-screen notices briefly explain the 'digital anonymisation,' but many viewers report missing them initially, only to be jolted by unnatural elements such as perfect teeth, dead eyes, or mismatched lip-sync.

Backlash from Viewers and Critics

Social media and online forums have erupted with discontent, with users describing the AI avatars as 'creepy,' 'nauseating,' and 'disturbing.' On Reddit, a thread dedicated to the topic highlights how the technology disrupts immersion, pulling audiences out of the narrative during emotionally charged testimonies.

One user noted the avatars' 'soulless' quality, arguing that it undermines the documentary's credibility and sensitivity to the tragic subject matter involving infant deaths. Critics, including those from The Telegraph, have echoed these sentiments, calling the choice 'questionable' for a film of such gravity, as the avatars tip 'too far into uncanny valley' and clash with the weighty content.

Broader backlash extends to outlets like The Guardian, which criticised as sensationalist the inclusion of raw arrest footage showing Letby's distress and her parents' anguish. Letby's parents themselves have decried the documentary as an 'invasion of privacy.'

Ethical Dilemmas in AI-Driven Storytelling

The controversy raises profound ethical questions about AI's role in true-crime documentaries. Critics argue that altering faces and voices risks falsifying testimony or manipulating viewer emotions, blurring the line between fact and fiction.

In a case still under review by the Criminal Cases Review Commission for a potential miscarriage of justice, such alterations could undermine public trust. Questions of consent and authenticity arise: are the words direct quotes, or AI-prompted reconstructions?

Furthermore, the documentary's use of AI coincides with ongoing debates in the Letby case, including the police's early fixation on her as a suspect despite her offers to assist, and admissions from consultants such as Dr John Gibbs that they harbour some doubts about the verdict. By employing AI, Netflix may inadvertently sensationalise these elements, prioritising drama over factual depth, as noted in reviews.

Future Implications for True Crime

This backlash could influence how streaming platforms handle anonymity in future productions. Many favour traditional methods, such as shadowed figures or voice modulation, for preserving immersion without the 'creepy' factor.

As AI technology advances, its environmental costs and its potential to displace actors and voice artists are also drawing scrutiny.