TikTok user denied life-saving heart surgery coverage, sparking debate over AI in insurance decisions. TikTok/@kenasue

A woman's tearful TikTok video about being denied coverage for a life-saving heart surgery has gone viral, drawing fresh attention to the growing use of artificial intelligence in American health insurance claim decisions. Noah Jones, who posts on TikTok under the username @kenasue, told her followers that despite paying £7,438 ($10,000) a year for health insurance, her insurer refused to cover the procedure — leaving her potentially liable for around £44,626 ($60,000) in out-of-pocket costs.

In the video, Jones explained she had been with the same doctor for ten years and was born with her condition — 'this is not something I ever had a choice with,' she said. She added that she had been paying into the policy for four years without issue, making the sudden denial all the more jarring. 'I can't have the surgery I need to literally live,' she told viewers, her frustration evident throughout.

The Algorithm Behind the Denial

In a follow-up video, Jones confirmed that her surgery had been approved — but the anaesthesia had been blocked. A representative told her the denial had been made automatically by an AI. 'I'm not really sure how you can have heart surgery without anaesthesia,' she said. Once a human reviewed the decision, the anaesthesia was approved, though Jones said she remained uncertain whether the full procedure would be covered and still faced the prospect of an appeal. She has not publicly named her insurer.

The case has struck a nerve with millions of viewers, not least because it reflects a much wider pattern. The use of AI to automatically process — and deny — health insurance claims has been under legal scrutiny for years in the United States.

A class action lawsuit against UnitedHealth Group, filed in late 2023, alleged that its AI tool, nH Predict, had a 90 per cent error rate: nine out of ten denials that were appealed were ultimately reversed. Despite this, the suit claimed, the company continued using the model, banking on the fact that only around 0.2 per cent of policyholders would bother to appeal at all.

A System Built on Denial

In February 2025, a federal judge allowed the breach of contract claims in that case to proceed, ruling that UnitedHealthcare's own policy documents had promised coverage decisions made by clinical staff, not algorithms. UnitedHealth has denied that nH Predict is used to make coverage determinations, describing it instead as a tool to help inform care planning. The case remains ongoing.

Jones's video has renewed attention to the fact that algorithmic denials are not confined to elderly Medicare patients. They appear to be touching people across age groups and insurance types. And, as her case shows, the consequences can be immediate and life-threatening.

Healthcare researchers have long warned that the financial incentive to deny first and let patients appeal later creates a system that is structurally weighted against sick people. Rohan Kulkarni, executive researcher at HFS Research, described the dynamic plainly: 'Health plans are using AI in a perverse way to delay and deny care.'

For now, no federal legislation specifically regulating the use of AI in health insurance claim decisions has been enacted.