Woke AI Gets Stupid: Google’s AI-Powered Search Results Feature Bizarre, Nonsensical Answers

Google's black George Washington (Google Gemini)

Google’s recent introduction of AI-generated overviews in search results has sparked debate about the accuracy and potential consequences of the new feature, which returns inaccurate and often hilarious responses.

SFist reports that Google’s search engine has long been the go-to resource for millions of users seeking information on the internet. However, the company’s latest addition of AI-powered overviews at the top of search results has raised eyebrows due to concerns over the accuracy of the information provided and the potential impact on content creators and Google’s own ad revenue.

The AI-generated overviews, part of Google’s ultra-woke Gemini AI rollout, aim to provide users with succinct answers to their queries without the need to click through to other websites. While this may seem like a convenient feature, multiple instances of inaccurate information have been reported since its introduction. For example, a search for “the first inaugural ball” resulted in an AI overview that incorrectly stated the event took place at “Dolley Madison’s hotel” in Washington, DC, when in fact it occurred at Long’s Hotel.

In another example, Google suggested that users add glue to their pizza sauce when cooking.

In yet another, Google’s AI claimed that Google itself violates antitrust laws, an example of unintended honesty by the woke internet giant.

Google’s AI has also suggested that users try eating one small rock per day for their health.

Liz Reid, Google’s head of Search, believes that generative AI can take the hard work out of searching, allowing users to focus on the parts they find exciting. However, critics argue that web searches are not inherently difficult, and the shift towards AI-generated results could have far-reaching consequences.

One major concern is the potential impact on content websites that rely on traffic for revenue. By providing a summary paragraph, the AI overviews may discourage users from clicking through to the original sources, effectively depriving content creators of valuable traffic. This issue was highlighted by Axios, which noted that while the AI system relies on web-based information, it doesn’t “nourish the creators of that information with users’ visits.”

The shift towards AI-generated results could also affect Google’s own bottom line through lost ad revenue and sponsored results. Although the company is already exploring ways to incorporate ads into the AI-generated overviews, the long-term impact remains to be seen.

Another significant issue with AI-powered search results is the occurrence of “hallucinations” – instances where the AI generates convincing but entirely fictional information. The Washington Post reported several such cases in early April, including a search for a fictional restaurant that yielded a bizarrely detailed response about long lines and wait times.

Read more at SFist here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
