Investigation: Google’s ‘AI Overview’ Provides Wildly Inaccurate Medical Advice

Robot doctor misdiagnoses patient (NanoStockk/Getty)

An investigation has revealed that Google’s “AI Overview,” which appears at the top of most searches, provided users with false and potentially harmful medical information when they searched for health-related queries such as what the results of a liver function test mean. The search giant has since scrubbed the offending AI Overviews in the cases of medical misinformation that researchers uncovered.

The Guardian reports that Google has taken down some of its AI-generated health summaries following revelations that the feature was delivering inaccurate and misleading medical information to users. The removal came after an investigation uncovered serious concerns about the reliability of health-related content produced by the company’s “AI Overviews” feature.

The AI Overviews tool uses generative artificial intelligence to create snapshots of information about topics or questions posed by users. These summaries appear prominently at the top of search results. Google has previously described the feature as both helpful and reliable for users seeking quick answers to their queries.

However, the investigation found that some health-related summaries contained significant inaccuracies that could put users at risk. In one particularly concerning example, the AI provided incorrect information about liver function tests. Experts characterized this specific case as both dangerous and alarming due to the potential consequences for patients.

When users searched for information about normal ranges for liver blood tests, Google’s AI Overviews displayed numerous numbers with minimal context. The summaries failed to account for important variables such as the patient’s nationality, sex, ethnicity, or age. These factors can significantly affect what constitutes a normal test result.

Medical experts pointed out that the values Google’s AI presented as normal could differ drastically from the ranges medical professionals actually consider normal. This discrepancy creates a real risk that patients with serious liver disease might read their abnormal test results as normal and consequently skip crucial follow-up appointments.

Following the investigation, Google removed AI Overviews for searches including the terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” A company spokesperson stated that Google does not comment on individual content removals within its search platform. The spokesperson added that when AI Overviews lack appropriate context, the company works to implement broad improvements and takes action under its policies when necessary.

Sue Farrington, chair of the Patient Information Forum, an organization that promotes evidence-based health information for patients, the public, and healthcare professionals, welcomed the removal of the summaries. However, she stressed that significant concerns remain about the feature.

Farrington described the removal as a positive outcome but characterized it as only the initial step in what needs to happen to maintain trust in Google’s health-related search results. She noted that numerous examples still exist of Google AI Overviews providing inaccurate health information to users.

According to Farrington, millions of adults worldwide already face difficulties accessing trusted health information. This existing challenge makes it critically important for Google to direct users toward robust, researched health information and healthcare resources from trusted organizations.

The investigation also identified other problematic AI Overviews that remain active on the platform. These include summaries about cancer and mental health topics that experts have described as completely wrong and genuinely dangerous. These summaries continue to appear in search results despite the concerns raised.

When asked why these additional AI Overviews had not been removed, Google stated that the summaries link to well-known and reputable sources and inform users when seeking expert advice is important. A company spokesperson said that an internal team of clinicians reviewed the materials and determined that in many cases the information was not inaccurate and was supported by high-quality websites.

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
