ChatGPT Health
Health experts in Australia raised concerns over the rollout of ChatGPT Health, calling out its unregulated nature, safety concerns, and the possibility of misinformation that could harm people. Sanket Mishra/Pexels

Health experts and advocates in Australia have raised concerns over the limited rollout of ChatGPT Health. Licensed professionals are calling out its unregulated nature, safety concerns, and the potential for misinformation on the fast-growing AI-centred health platform.

ChatGPT Health as the new digital healthcare companion?

OpenAI's newest release, ChatGPT Health, gives a small group of users in Australia a taste of a novel AI-based application that helps interpret medical records, offers wellness guidance, and answers health-related questions.

Although the platform says it can help people interpret test results, manage chronic conditions, or weigh dietary options, it remains in the waitlist stage, with only a limited number of people able to test it.

The company asserts that ChatGPT Health should be used as a supplement to, not a substitute for, healthcare professionals.

But behind this promising picture, worries are developing.

Analysts question the platform's safety, accuracy, and transparency, particularly because it operates in a regulatory grey zone. Critics worry that users will treat AI answers as expert medical advice, which could cause harm through false information or omitted safety warnings.

Safety and regulation: there are always loopholes

The absence of rigorous testing and regulation is one of the key problems that health misinformation researchers have raised.

Alex Ruani, a doctoral researcher at University College London who studies health misinformation, raised concerns about the lack of publicly available research supporting the safety of ChatGPT Health.

'ChatGPT Health is being presented as an interface that can help people make sense of health information and test results or receive diet advice, while not replacing a clinician,' Ruani told the Guardian.

'The challenge is that, for many users, it's not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead,' he added.

The platform relies on a proprietary evaluation framework called HealthBench, in which physicians assess the model's responses. However, the procedure has not been fully disclosed, which casts doubt on its credibility and transparency.

According to critics, unless it is backed by peer-reviewed research and other safety safeguards, over-reliance on AI-based health advice could lead to harmful misconceptions or overlooked risks, such as missed critical side effects or dangerous drug interactions.

Biases, misinformation, data security, and privacy concerns - you name it

Professionals are concerned that AI-generated content can be deceptive, with incomplete answers and missing safety information. ChatGPT Health may, for example, omit the possible side effects of a drug, allergy warnings, or the risks of certain diets or supplements.

Alarming instances of AI chatbots providing incomplete or incorrect health information have already emerged in large numbers, often with essential safety details omitted.

The fear is that users, particularly those who are vulnerable or in need of a quick fix, may accept this information blindly and make decisions that harm them.

While OpenAI states that ChatGPT Health is privacy-focused, encrypting data and obtaining users' consent, cybersecurity specialists remain wary. The platform's privacy policy lacks a full description of how health data is stored, used, and shared.

Health information is so sensitive that people fear misuse or breaches, particularly given that the platform is not a regulated medical device.

Why Australians are turning to AI for health advice

Rising healthcare costs, long waits to see specialists, and inequities in access have prompted many Australians to turn to AI-powered tools.

According to research by the University of Sydney, about 9.9% of Australian adults (roughly 1.9 million people) asked ChatGPT health-related questions during the six months preceding June 2024.

The researchers stated that 'health‐related ChatGPT use was higher for groups who face barriers to health care access, including people who were born in non‐English speaking countries, do not speak English at home, or whose health literacy is limited or marginal.'

Dr Elizabeth Deveny, chief executive of the Consumers Health Forum of Australia, noted that AI could be a useful tool for managing chronic illness, delivering health information in multiple languages, and easing some of the burden on the healthcare system.

'We need clear guardrails, transparency and consumer education so people can make informed choices about if and how they use AI for their health,' she said.

She cautioned that scaling up implementation without adequate monitoring would widen health inequities.