‘Not Regulated’: Launch of ChatGPT Health in Australia Causes Concern Among Experts

The Guardian AI · Jan 15, 2026

Why It Matters

The launch exposes a regulatory vacuum for AI‑driven medical advice, creating immediate public‑health risks and prompting urgent policy action.

Calls for clear guardrails and consumer education ahead of wider Australian rollout of OpenAI’s health advice platform · Melissa Davey · Medical editor · Thu 15 Jan 2026 09:00 EST (last modified 15 Jan 2026 09:01 EST)

Medical experts say ChatGPT Health could be useful, but worry users will take advice at face value. Photograph: Dado Ruvić/Reuters

A 60‑year‑old man with no history of mental illness presented at a hospital emergency department insisting that his neighbour was poisoning him. Over the next 24 hours he had worsening hallucinations and tried to escape the hospital.

Doctors eventually discovered the man had been taking a daily dose of sodium bromide, an inorganic salt mainly used for industrial and laboratory purposes such as cleaning and water treatment. Worried about the health impacts of salt in his diet, he had bought it over the internet after ChatGPT told him he could use it in place of table salt. Sodium bromide can accumulate in the body, causing a condition called bromism, whose symptoms include hallucinations, stupor and impaired coordination.

It is cases like this that have Alex Ruani, a doctoral researcher in health misinformation at University College London, concerned about the launch of ChatGPT Health in Australia.

A limited number of Australian users can already access the artificial‑intelligence (AI) platform, which allows them to “securely connect medical records and wellness apps” to generate responses “… more relevant and useful to you”. ChatGPT users in Australia can join a waitlist for access.

“ChatGPT Health is being presented as an interface that can help people make sense of health information and test results or receive diet advice, while not replacing a clinician,” Ruani said.

“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead.”

Ruani said there have been too many “horrifying” examples of ChatGPT “leaving out key safety details like side effects, contraindications, allergy warnings, or risks around supplements, foods, diets, or certain practices”.

“What worries me is that there are no published studies specifically testing the safety of ChatGPT Health,” Ruani said. “Which user prompts, integration paths, or data sources could lead to misguidance or harmful misinformation?”

ChatGPT is developed by OpenAI, which used its HealthBench benchmark to build ChatGPT Health. HealthBench employs doctors to test and evaluate how well AI models respond to health questions. Ruani said the full methodology behind HealthBench, and its evaluations, are “… mostly undisclosed, rather than outlined in independent peer‑reviewed studies”.

“ChatGPT Health is not regulated as a medical device or diagnostic tool. So there are no mandatory safety controls, no risk reporting, no post‑market surveillance, and no requirement to publish testing data.”

An OpenAI spokesperson told Guardian Australia that the company had worked in partnership with more than 200 physicians from 60 countries “to advise and improve the models powering ChatGPT Health”.

“ChatGPT Health is a dedicated space where health conversations stay separate from the rest of your chats, with strong privacy protections by default,” the spokesperson said.

“ChatGPT Health data is encrypted and subject to privacy protections by default. Sharing with third parties will happen with user consent, or in limited circumstances outlined in OpenAI’s privacy policy.”

The CEO of the Consumers Health Forum of Australia, Dr Elizabeth Deveny, said rising out‑of‑pocket medical costs and long wait times to see doctors are driving people to AI. She said ChatGPT Health could be useful in helping people manage well‑known chronic conditions and to research ways to stay well. AI’s ability to give answers in different languages “provides a real benefit to people who don’t have English proficiency,” she said.

Deveny is concerned that people will take advice given by ChatGPT Health at face value, and that “large global tech companies are moving faster than governments,” setting their own rules around privacy, transparency and data collection.

“This is not a small not‑for‑profit experimenting in good faith. It’s one of the largest technology companies in the world.

“When commercial platforms define the norms, the benefits tend to flow to people who already have resources, education, and system knowledge. The risks fall on those who do not.”

She added that governments’ failure to act has left health consumers largely alone in navigating the social transformation that AI represents.

“We need clear guardrails, transparency and consumer education so people can make informed choices about if and how they use AI for their health,” she said.

“This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind.”
