Falling Farther Behind ChatGPT: Google’s Bard AI Chatbot Faces Scathing Reviews from Employees

Sundar Pichai, senior vice president of Chrome, speaks at Google's annual developer conference
KIMIHIRO HOSHINO/AFP/Getty Images

Google’s unreleased Bard AI chatbot received highly negative feedback from its own employees during testing, raising concerns about the company’s approach to AI ethics as it competes with OpenAI’s popular ChatGPT.

Futurism reports that before making its Bard AI chatbot available to the general public last month, Google asked about 80,000 of its employees to test it and provide feedback. According to screenshots obtained by Bloomberg, the reviews were anything but positive: one employee labeled the AI a “pathological liar,” another tester called it “cringe-worthy,” and a third reported that Bard gave potentially fatal advice on topics such as scuba diving and how to land a plane.

OpenAI founder Sam Altman, creator of ChatGPT (TechCrunch/Flickr)

Despite the overwhelmingly unfavorable reviews, Google released the chatbot anyway, billing it as an “experiment” and attaching prominent disclaimers. The decision was likely driven by the company’s desire to compete with OpenAI’s popular ChatGPT, even though the technology appeared to be in its infancy.

Meredith Whittaker, a former Google manager and now president of the Signal Foundation, told Bloomberg, “AI ethics has taken a back seat. If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.” However, the tech giant insists that AI ethics remain a top priority for the firm.

Bloomberg also revealed that Google’s AI ethics reviews are almost entirely voluntary, with the exception of more sensitive areas such as biometrics and identity features. At a recent meeting, Google executive Jen Gennai allegedly said that upcoming AI products could aim for “80, 85 percent, or something” on “fairness,” a bias metric, rather than for “99 percent.”

Employees are displeased with the company’s strategy for AI development. Margaret Mitchell, a former leader of Google’s AI ethics team, told Bloomberg, “There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” She added, “Even if there aren’t firm directives at Google to stop doing ethical work, the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

Google suffered a major drop in share price in February when the Bard AI chatbot debuted with an error in one of its answers. Breitbart News reported:

Reuters reports that this week, Alphabet, the parent company of Google, saw a $100 billion decline in market value as a result of Bard, the company’s new chatbot, providing inaccurate information in a promotional video. Because of this failure and the lack of information regarding Bard’s integration into Google’s primary search function, there are worries that Google is falling behind its competitor Microsoft. The stock fell as much as 9 percent during Wednesday trading.

Google’s new chatbot, launched on Monday, was expected to help simplify complex subjects. But just before its livestreamed presentation on Wednesday, Reuters called attention to a mistake in the company’s promotional video: the supposedly all-knowing AI made a basic error about space exploration.

Read more at Futurism here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
