AI chatbots give bad health advice, research finds
Next time you're considering consulting Dr ChatGPT, perhaps think again.
Despite now being able to ace most medical licensing exams, artificial intelligence chatbots do not give humans better health advice than they can find using more traditional methods, according to a study published on Monday.
"Despite all the hype, AI just isn't ready to take on the role of the physician," study co-author Rebecca Payne from Oxford University said.
"Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed," she added in a statement.
The British-led team of researchers wanted to find out how successful humans are when they use chatbots to identify their health problems, and whether those problems required seeing a doctor or going to hospital.
The team presented nearly 1,300 UK-based participants with 10 different scenarios, such as a headache after a night out drinking, a new mother feeling exhausted, or the symptoms of gallstones.
Then the researchers randomly assigned the participants one of three chatbots: OpenAI's GPT-4o, Meta's Llama 3 or Cohere's Command R+. There was also a control group that used internet search engines.
People using the AI chatbots identified their health problem only around a third of the time, and only around 45 percent figured out the right course of action.
This was no better than the control group, according to the study, published in the Nature Medicine journal.
- Communication breakdown -
The researchers pointed out the disparity between these disappointing results and how AI chatbots score extremely highly on medical benchmarks and exams, blaming the gap on a communication breakdown.
Unlike the simulated patient interactions often used to test AI, the real humans often did not give the chatbots all the relevant information.
And sometimes the humans struggled to interpret the options offered by the chatbot, or misunderstood or simply ignored its advice.
One out of every six US adults asks AI chatbots about health information at least once a month, the researchers said, with that number expected to increase as more people adopt the new technology.
"This is a very important study as it highlights the real medical risks posed to the public by chatbots," David Shaw, a bioethicist at Maastricht University in the Netherlands who was not involved in the research, told AFP.
He advised people to only trust medical information from reliable sources, such as the UK's National Health Service.
H.Hall--CT