Report: Google Attempts to Quell Internal Revolt by AI Researchers

Former Google AI Ethics Researcher Timnit Gebru (Kimberly White/Getty)

Following an internal revolt over the integrity of its artificial intelligence research, Google has pledged to change its procedures for reviewing its scientists’ work before July, Reuters reports.

Breitbart News reported in February that an engineering director and a software developer at Google quit over the dismissal of AI ethics researcher Timnit Gebru. David Baker, an engineering director focused on user safety, left the company after 16 years because Gebru’s exit “extinguished my desire to continue as a Googler,” he said in a letter. Baker added: “We cannot say we believe in diversity, and then ignore the conspicuous absence of many voices from within our walls.”

Google software engineer Vinesh Kannan stated that he left the company because Google mistreated Gebru and April Christina Curley, a recruiter who claims she was wrongly fired last year. Breitbart News reported last December that Gebru says she was approached by a manager at Google and asked to retract a research paper she had coauthored, or remove her name from it, after an internal review found the paper’s contents objectionable.

The paper discussed ethical issues raised by advances in artificial intelligence systems that work with language, an area of research Google believes is important to the future of its business. Gebru objected to retracting the paper or removing her name from it, calling the practice unscholarly. A short time later she was fired by the Silicon Valley giant. A Google spokesperson maintained that she resigned and was not fired but declined to comment further.

Now, in remarks at a staff meeting last Friday, Google Research executives said that they were working to regain trust at the company. Teams are reportedly trialing a questionnaire that will assess projects for risk and help scientists navigate reviews, research unit Chief Operating Officer Maggie Johnson said in the meeting.

Jeff Dean, Google’s senior vice president overseeing the division, stated on Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules.

Ghahramani, a University of Cambridge professor who joined Google in September from Uber Technologies, stated: “We need to be comfortable with that discomfort” of self-critical research.

In an internal email seen by Reuters, Google researchers raised issues with how Google’s legal department modified one of three AI papers, titled “Extracting Training Data from Large Language Models.”

In a 1,200-word email dated February 8, a co-author of the paper, Nicholas Carlini, stated: “Let’s be clear here, when we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother stepping in.”

According to Carlini, required edits included “negative-to-neutral” swaps such as replacing the word “concerns” with the more neutral-sounding “considerations.” Words such as “dangers” were changed to “risks.” Lawyers also required the deletion of references to Google technology.

Read more at Reuters here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com.
