Google Discontinues AI Health Feature Filled with Misleading Advice

Google AI gives lousy medical advice
Pakin Songmor/Getty

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

The removal of this feature follows a January investigation that found users were being exposed to false and misleading health information through Google AI Overviews. These AI-generated summaries appear above traditional search results on the world’s most visited website and are seen by approximately two billion people monthly.

Google initially attempted to minimize the concerns raised by the investigation, stating that the AI Overviews linked to reputable sources and recommended seeking expert advice. However, within days of the investigation’s publication, Google removed AI Overviews for some medical queries, though not all.

The company’s spokesperson emphasized that safety was not a factor in discontinuing “What People Suggest.” “It had nothing to do with the quality or safety of the feature, and we continue to help people find reliable health information from a range of sources, including forums with first-person perspectives that people find incredibly useful,” the spokesperson stated.

Despite the removal of this particular feature, Google continues to integrate AI into health-related search functions. The company has scheduled another “The Check Up” event, where Chief Health Officer Michael Howell and other staff members are expected to discuss how Google is combining AI research, technological innovations, and partnerships to address global health challenges.

Wynton Hall Code Red cover

Wynton Hall’s newly released book, CODE RED, covers a wide range of topics related to AI, from its impact on elections and the economy to faith and family. This includes the impact AI has on young Americans, from mental health effects to the rise of “AI girlfriends.” The serious issue of AI being used to create non-consensual sexualized deepfakes is exactly why conservatives must seize the opportunity to create effective AI policies and safeguards.

Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised CODE RED as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls CODE RED “illuminating,” “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
