THE Australian Medical Association (AMA) has raised concerns about the use of artificial intelligence technology to mislead the public into purchasing unproven and potentially harmful treatments for serious illnesses, including diabetes.
It comes after AI videos mimicking reputable health professionals, including Dr Norman Swan, Prof Kerryn Phelps and Prof Jonathan Shaw, promoted unproven products.
The AMA is calling on the Federal Government to crack down on this practice and, in a letter to Communications Minister Anika Wells, urged the government to introduce clear and enforceable regulations on health-related advertising online.
“We are now living in an age where any video that appears online has to be questioned – is it real, or is it a deepfake?” AMA President Dr Danielle McMullen explained.
“Deepfake videos are becoming more and more convincing, and this technology is being exploited by dodgy companies peddling snake oil to vulnerable people who are dealing with serious health issues.”
In Professor Shaw’s case, the deepfake video was advertising an unproven dietary supplement as a treatment for type 2 diabetes.
A fake version of Dr Swan was used to sell supplements purporting to treat heart disease, diabetes or obesity.
“These videos encourage consumers to abandon clinically validated therapies in favour of unscientific alternatives,” Dr McMullen said.
Many health professionals only become aware they have been impersonated when patients ask about discontinuing their prescribed treatments, or where to purchase so-called ‘miracle cures’, Dr McMullen said.
Meanwhile, international researchers, including from the University of South Australia and Flinders University, have demonstrated just how easy it is to exploit AI systems.
The team evaluated five of the most advanced foundation AI systems, developed by OpenAI, Google, Anthropic, Meta and X Corp, to determine whether they could be programmed to operate as health disinformation chatbots.
The ‘chatbots’ were then asked a series of health-related questions, with “disconcerting results”, according to UniSA researcher, Dr Natansh Modi.
“In total, 88% of all responses were false, and yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate,” Dr Modi said.
“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility,” he explained.
The study is the first to show that leading AI systems can be converted into disinformation chatbots using developers’ tools, as well as tools available to the public.
With millions of people turning to AI for guidance on health-related questions, the study reveals “a significant and previously under-explored risk in the health sector”, Dr Modi said.
“This is not a future risk – it is already possible, and it is already happening,” he added. KB
The post Peak body raises alarm over fake AI health info appeared first on Pharmacy Daily.