Jakarta, INTI - Google is exposing users to potential harm by minimizing safety warnings attached to AI-generated medical guidance.
For health-related searches and other sensitive topics, the company claims that its AI Overviews, which appear at the top of search results, encourage users to consult medical professionals rather than rely solely on automated summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.
However, an investigation by The Guardian revealed that such disclaimers are absent when users initially receive AI-generated medical information.
Warnings only appear if users click the “Show more” option to request additional health details. Even then, the safety notice is placed beneath the extended AI content and displayed in smaller, lighter text.
“This is for informational purposes only,” the disclaimer tells users who click through for further details after the initial summary and scroll to the very end of the AI Overview. “For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”
Google did not dispute that the disclaimers are missing from the initial medical summaries or that they appear less prominently. A company spokesperson stated that AI Overviews “encourage people to seek professional medical advice” and often reference medical consultation within the summaries “when appropriate.”
AI specialists and patient advocates who reviewed the findings expressed serious concern, emphasizing that disclaimers play a crucial role and should be clearly visible when medical information is first presented.
One of the experts, Pat Pataranutaporn, an assistant professor and researcher at the Massachusetts Institute of Technology, warned about the risks involved.
“The absence of disclaimers when users are initially served medical information creates several critical dangers,” he said. “First, even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.”
Experts warned that the risks posed by AI-generated medical summaries extend beyond technical inaccuracies and into human behavior itself, where users may misunderstand symptoms or fail to provide complete context when seeking health information.
They stressed that disclaimers play a vital role in interrupting blind trust in automated systems and encouraging more critical evaluation of AI-generated content.
“Second, the issue isn’t just about AI limitations – it’s about the human side of the equation,” Pataranutaporn continued. “Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms. Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”
Academics Criticize Speed-First AI Design
Gina Neff, a professor of responsible AI at Queen Mary University of London, argued that the shortcomings of AI Overviews stem from design choices rather than isolated technical errors.
“The problem with bad AI Overviews is by design,” she said, adding that Google was responsible. “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.”
Earlier this year, an investigation revealed that misleading and incorrect medical information within Google’s AI Overviews was placing users at risk of real-world harm.
Neff said the findings demonstrated why disclaimers must be highly visible.
“Google makes people click through before they find any disclaimer,” she said. “People reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes.”
Following the investigation, Google removed AI Overviews from some, though not all, medical-related searches.
Meanwhile, Sonali Sharma, a researcher at Stanford University’s Center for AI in Medicine and Imaging, highlighted how the placement of AI Overviews at the very top of search results increases their influence on user decision-making.
“The major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user’s question at a time where they are trying to access information and get an answer as quickly as possible.
“For many people, because that single summary is there immediately, it basically creates a sense of reassurance that discourages further searching, or scrolling through the full summary and clicking ‘Show more’ where a disclaimer might appear.
“What I think can lead to real-world harm is the fact that the AI Overviews can often contain partially correct and partially incorrect information, and it becomes very difficult to tell what is accurate or not, unless you are familiar with the subject matter already.”
In response, a Google spokesperson maintained that the company encourages users to seek professional medical advice.
“It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”
Health organizations have also called for stronger safeguards. Anthony Nolan, a blood cancer charity, urged immediate changes to how disclaimers are displayed.
Its head of patient information, Tom Bishop, warned of the dangers of health misinformation.
“We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially really dangerous,” he said.
“That disclaimer needs to be much more prominent, just to make people step back and think … ‘Is this something I need to check with my medical team rather than acting upon it? Can I take this at face value or do I really need to look into it in more detail and see how this information relates to my own specific medical situation?’ Because that’s the key here.”
He added, “I’d like this disclaimer to be right at the top. I’d like it to be the first thing you see. And ideally it would be the same size font as everything else you’re seeing there, not something that’s small and easy to miss.”
Conclusion
The growing reliance on AI-generated health information has exposed serious risks tied not only to technical inaccuracies but also to design choices that prioritize speed and convenience over safety. Experts and patient advocates agree that prominently displayed disclaimers are critical to protecting users from misinformation and false reassurance. As AI tools like Google’s Overviews continue to shape how people access medical guidance, stronger transparency, clearer warnings, and responsible design will be essential to ensure technology supports, rather than undermines, public health.