Jakarta, INTI - Google’s AI Overviews feature in Search references YouTube more frequently than any medical site when responding to health-related queries, a finding that has sparked renewed scrutiny of a tool accessed by around 2 billion users each month.
Google maintains that its AI-generated summaries, displayed prominently at the top of search results, are trustworthy and draw on authoritative health sources, including the Centers for Disease Control and Prevention (CDC) and the Mayo Clinic.
Yet an analysis of over 50,000 health-related searches conducted in Berlin revealed that YouTube emerged as the most frequently cited source. The video platform, owned by Google, ranks as the world’s second most visited website, trailing only Google Search itself.
Researchers at SE Ranking, a search engine optimisation platform, found that YouTube accounted for 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that figure, they said.
“This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there, from board-certified doctors to creators with no medical training at all.”
Google told the Guardian that AI Overviews was built to highlight high-quality information from trusted sources across different formats, noting that many reputable health institutions and licensed medical professionals also publish content on YouTube. The company added that the study’s results should not be generalized to other regions, as the analysis was based on German-language searches conducted in Germany.
The research follows a Guardian investigation which revealed that misleading and inaccurate health information in Google AI Overviews had, in some cases, exposed users to potential harm.
In one instance described by experts as “dangerous” and “alarming,” Google reportedly delivered incorrect guidance on key liver function tests, potentially leading individuals with serious liver conditions to mistakenly believe they were in good health. In response, Google later disabled AI Overviews for certain medical queries, though not across all health-related searches.
Study Methodology, Limitations, and Alarming Signals
The SE Ranking study examined 50,807 healthcare-related queries and keywords to identify the sources most frequently referenced by AI Overviews when producing its responses.
They chose Germany because its healthcare system is strictly regulated by a mix of German and EU directives, standards and safety regulations. “If AI systems rely heavily on non-medical or non-authoritative sources even in such an environment, it suggests the issue may extend beyond any single country,” they wrote.
Researchers found that AI Overviews appeared in more than 82% of health-related searches. When examining the sources most frequently referenced in these AI-generated answers, one platform clearly dominated. YouTube emerged as the most cited domain, accounting for 20,621 references out of a total of 465,823 citations.
The second most referenced source was NDR.de with 14,158 citations (3.04%). The German public broadcaster publishes health-related material alongside its news, documentary and entertainment programming. Ranked third was the medical reference website Msdmanuals.com, which recorded 9,711 citations (2.08%).
Germany’s largest consumer health portal, Netdoktor.de, placed fourth with 7,519 citations (1.61%), followed by Praktischarzt.de, a career and information platform for physicians, which received 7,145 citations (1.53%).
The researchers also highlighted several limitations in their analysis. The study captured a single snapshot in December 2025 and relied on German-language search queries reflecting typical health information searches in Germany.
They noted that outcomes may change over time, differ across regions, or vary depending on how questions are phrased. Even so, the results were described as cause for concern.
Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: “This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.”
“Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, are the central drivers of health knowledge.”
A Google spokesperson said: “The implication that AI Overviews provide unreliable information is refuted by the report’s own data, which shows that the most cited domains in AI Overviews are reputable websites. And from what we’ve seen in the published findings, AI Overviews cite expert YouTube content from hospitals and clinics.”
Google said the research indicated that 96% of the 25 most frequently cited YouTube videos came from medical-focused channels. However, the researchers warned that these videos accounted for less than 1% of all YouTube links referenced by AI Overviews in health-related responses.
“Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organizations,” the researchers wrote. “On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.”
“So at first glance it looks pretty reassuring. But it’s important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different.”
Conclusion
The study adds to growing concerns over how Google’s AI Overviews select and prioritize sources for health-related information. While the company emphasizes the use of reputable and high-quality content, the heavy reliance on YouTube, even in a limited regional sample, raises questions about consistency, transparency, and risk management in AI-generated health guidance. As AI Overviews continue to shape how billions of users access medical information, the findings underscore the need for stronger safeguards and clearer accountability in the use of generative AI for health searches.
Read more: Indonesia and India Forge a New Direction for Inclusive AI Partnership