
Between endless waitlists, rising healthcare costs, and appointment booking systems that feel like puzzles from an escape room, it’s no surprise people are turning to AI for answers. In fact, about 1 in 6 American adults now turn to AI chatbots like ChatGPT for health advice at least once a month.
But here’s the catch: trusting a chatbot to decode your symptoms might not be as smart as it sounds.
A new study out of Oxford just exposed something surprising: using AI to self-diagnose didn’t help people make better health decisions. In fact, it sometimes made them worse. The researchers gave 1,300 participants medical scenarios and asked them to use tools like GPT-4o, Cohere’s Command R+, and Meta’s Llama 3 to figure out what to do. The results? Not exactly comforting.
People not only missed critical health issues, but they also underestimated the severity of the ones they did spot. The reason? Most didn’t know what information to give the AI, and the chatbots’ answers were often a confusing mix of helpful advice and flat-out bad calls.
As Adam Mahdi, a lead author of the study, put it: “Users were often confused. The bots gave a cocktail of good and poor recommendations — and people struggled to tell which was which.”
So what now? While big tech players like Apple, Amazon, and Microsoft are racing to integrate AI into healthcare, this study reminds us that AI isn’t quite ready for solo practice. Even the American Medical Association is waving a caution flag, urging doctors not to use tools like ChatGPT for clinical decisions.
The bottom line: AI can be a great assistant — but not a substitute for a trained professional when your health is on the line.
Moral of the story? If you’ve got symptoms, trust your body — but follow up with your doctor, not just your chatbot.
Want to know how to integrate AI into your business workflows safely? Let’s talk.