Hand holding a smartphone with ChatGPT and OpenAI text. Jernej Furman from Slovenia, CC BY 2.0, via Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Hand_holding_smartphone_with_ChatGPT_and_OpenAI_text_52917312010.jpg)

The fundamental trust between a patient and a medical professional rests on the belief that the person giving advice is qualified, accountable and, crucially, human. That trust is now being actively eroded by technology, and the growing fear of machine impersonation has forced California, the epicentre of tech innovation, to draw a definitive legal line: artificial intelligence (AI) chatbots are now banned from suggesting they are licensed medical providers such as doctors or psychotherapists.

The implications are staggering. Assembly Bill 489, officially signed into law by Governor Gavin Newsom on Oct. 13, makes it unequivocally illegal for AI developers and companies to employ words, titles, or credentials that could mislead users into believing they are interacting with a licensed healthcare professional.

This legislation arrives amid mounting public and political concern that companies are dangerously marketing these AI chatbots as a simple, cheap substitute for legitimate mental health treatment, effectively bypassing the rigorous ethical and safety standards required for face-to-face care.

The legislative push followed a stern legal advisory issued by California Attorney General Rob Bonta in January. He stated clearly that 'only human physicians (and other medical professionals) are licensed to practice medicine in California; California law does not allow delegation of the practice of medicine to AI'.

Bonta cautioned that allowing AI tools to make patient care decisions or override a licensed medical provider's recommendations not only harms consumers but violates fair business practices. The risk, in a landscape where AI tools are increasingly leveraged to expand access to care, is that consumers struggle to distinguish machine-generated information from a genuine human consultation.


The Wild West of ChatGPT Therapy: The Danger of Fake Credentials

The growing sophistication of these platforms means that the lines between useful support and dangerous medical advice have become severely blurred. Dr. John Torous, a psychiatrist and director of the Digital Psychiatry Division in the Department of Psychiatry at Beth Israel Deaconess Medical Center, Boston, has pointed out the dangers of this regulatory vacuum.

'Everything keeps changing, so it's hard to be definitive,' Dr. Torous told Medscape Medical News. 'But there's certainly concern that some AI chatbots are marketing directly to children and minors, and there's direct evidence that some will actually pull up a fake medical licence number'.

This isn't theoretical; a San Francisco Standard report from May described how a Character.ai chatbot claimed to be a licensed therapist and even provided a real licence number linked to a practising mental health professional. The outlet discovered other potentially problematic role-play bots that used qualifiers such as 'therapist' or 'doctor' while dispensing advice.

'It's wild, right?' said Dr. Torous. 'Chatbots say they're not for medical advice, but then they start giving it. There are very blurred lines.' He stressed that while these tools might have a role in the care landscape, the 'faster, better, stronger' mindset driving AI development is dangerous, particularly in mental health, where safety is paramount and 'the risks are actually the highest'.

Telehealth companies are attempting to get ahead of the curve: Lyra Health, for instance, has launched a 'clinical-grade' AI mental health chatbot for its 20 million members, promising a 'sophisticated risk-flagging system that identifies situations requiring immediate escalation'.


The Patchwork Problem: Why ChatGPT Loopholes Worry Regulators

Dr. Torous, however, expects legislation to be only a partial solution, believing the greater challenge lies in enforcement. 'Not a single AI company that exists today is going to say they're offering psychiatric care — they'll say it's wellness or emotional support,' he explained. 'So if chatbots now disclaim medical liability in their terms of service but still offer what looks like clinical services, they're going to say that these new laws don't apply to them'.

This ambiguity has created a confusing and uneven regulatory environment nationwide. The Federal Trade Commission (FTC) opened an inquiry into the safety of seven companion chatbots last month, including Character.ai. OpenAI, the company behind ChatGPT and itself among the firms under FTC review, was compelled to issue a statement outlining its consumer protections.

CEO Sam Altman said in a blog post that the model 'should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request'. OpenAI has also faced scrutiny for 'hallucinations' — or inaccurate outputs — found in Whisper, its AI-driven speech recognition tool used by some health systems.

The OpenAI logo displayed on a computer screen Andrew Neel/Unsplash

Across the United States, this patchwork of oversight is widening. New Jersey has proposed legislation to prohibit AI from posing as health professionals, while Utah and Nevada passed similar measures this year. Illinois has banned autonomous AI from providing therapy or treatment decisions without a licensed clinician's involvement. As Jennifer Goldsack, CEO of the Digital Medicine Society, noted, this fragmentation creates 'another drag on the provider side... in the absence of federal legislation'.

Ultimately, the core issue is one of accountability. As Justin Starren, director of biomedical informatics at the University of Arizona, asked: 'If you claim a service is delivered by a human and it's actually delivered by a machine, that already sounds like consumer fraud'. He summarised the ethical nightmare: for a human to practise medicine, they must prove their competency. If an AI provides bad advice that leads to harm, 'who's responsible? Who pays the malpractice bill?' California's new law is a powerful answer, but it is clear the battle for trustworthy, human-centred care is just beginning.

The current state of AI healthcare remains a dangerous, largely unregulated frontier in which accountability is often non-existent. As long as developers can hide behind disclaimers while offering services that look clinical, vulnerable users, including minors, remain exposed to harm, and Starren's question hangs over the industry: when a machine gives bad advice, who pays the malpractice bill?