Scientists invent fake illness and AI spreads it as real

The fictional illness, called bixonimania, was invented by medical researcher Almira Osmanovic Thunström at the University of Gothenburg as part of an experiment to test whether AI systems would repeat false medical information, Qazinform News Agency correspondent reports.


Bixonimania does not exist in real medicine. But within weeks of two fake studies about the condition being posted online in early 2024, several major chatbots began describing it as a real illness and offering health advice to users.

A fake disease

Osmanovic Thunström created the condition and wrote two deliberately fake academic papers about it. The papers included obvious clues that they were fictional, such as a made-up author, a non-existent university, and references to fictional institutions like Starfleet Academy.

The papers described the illness as a skin condition, supposedly triggered by blue light from digital screens, that causes dark or pinkish discoloration around the eyes.

Despite the warning signs, large language model chatbots quickly began repeating the claim. Some systems told users that bixonimania was a rare condition caused by screen exposure and advised them to visit eye specialists.

Researchers say this shows how easily AI tools can absorb and repeat unreliable information found online.

The experiment had an even more surprising effect when other researchers cited the fake disease in real academic work. One study published in the journal Cureus referenced the invented condition as if it were genuine research. The article was later retracted after editors discovered it cited a fictional illness.

Why AI was fooled

Experts say AI systems can produce very different answers depending on how questions are asked and what information they pull from the internet.

Because the fake papers were formatted like professional medical research, they appeared more trustworthy to AI models. Mahmud Omar at Harvard Medical School said studies show AI systems are more likely to expand on false information when it looks like formal medical writing.

An OpenAI spokesperson said current models provide safer and more accurate health information than earlier versions. Google also said earlier responses came from older models and that its AI tools encourage users to verify sensitive information with professionals.

Still, experts warn the problem goes beyond a single experiment. As AI becomes more common in health advice and research, false information could spread quickly if systems absorb unreliable material.

Earlier, Qazinform News Agency reported that 15% of Americans would work for an AI boss, according to a new study.