5 Techniques Google Will Use to Ensure MAGA Is a ‘Hiccup in History’s Arc’

Trump vs. Google (Getty)

In a recently leaked video of a Google all-hands “TGIF” meeting following the election of President Trump, the tech giant’s leadership discussed plans to thwart the president’s agenda. Here are some of the tools and techniques the Masters of the Universe plan to utilize.

A recently leaked video from a Google TGIF meeting shortly after the election of President Trump showed Google executives and employees dismayed at the victory of Trump and discussing plans to thwart Trump’s agenda and the larger populist movement worldwide. Here are just some of the ways that Google could attempt to make sure, to use the words of Google VP for Global Affairs Kent Walker, that “this election and others around the world” are just a “hiccup in history’s arc towards progress.”

1: Google Jigsaw

During the leaked TGIF meeting, one of the projects discussed was Jigsaw. Jigsaw is Alphabet’s think tank department or “social incubator.” Initially branded Google Ideas in 2010, the project was renamed Jigsaw in 2016, with then-Alphabet executive chairman Eric Schmidt explaining that the new name “reflects our belief that collaborative problem-solving yields the best solutions” and that the team’s mission “is to use technology to tackle the toughest geopolitical challenges, from countering violent extremism to thwarting online censorship to mitigating the threats associated with digital attacks.”

However, so far most of what Jigsaw has produced has been new ways to censor information online, such as its “Perspective” A.I. product, which attempts to crack down on “abusive” comments online. Perspective is used to filter and compile comments on websites for human review. To learn what exactly counts as a “toxic” comment, the program studied hundreds of thousands of user comments that had been deemed unacceptable by reviewers on websites like the New York Times and Wikipedia. “All of us are familiar with increased toxicity around comments in online conversations,” said Jigsaw president Jared Cohen. “People are leaving conversations because of this, and we want to empower publications to get those people back.”

Even those on the left appeared uneasy about the idea of censoring comments online based on an arbitrary number score assigned to “toxic” words. Feminist author Sady Doyle told Wired in 2016: “People need to be able to talk in whatever register they talk… imagine what the Internet would be like if you couldn’t say ‘Donald Trump is a moron,’” a phrase that registered a 99/100 on the AI’s personal attack scale.
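Perspective is exposed as a public web API: a publisher submits a comment and receives back scores between 0 and 1 for attributes such as TOXICITY, which can then be used to hold comments for human review. The snippet below is a minimal sketch of that workflow, assuming a valid API key; the endpoint and attribute name follow Jigsaw’s published documentation, while the toxicity_score helper and the 0.8 review cutoff are purely illustrative choices.

```python
# Minimal sketch of scoring a comment with Jigsaw's Perspective API.
# Assumes a valid key is stored in the PERSPECTIVE_API_KEY environment
# variable; the 0.8 review cutoff below is an illustrative choice.
import os
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        API_URL,
        params={"key": os.environ["PERSPECTIVE_API_KEY"]},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("Donald Trump is a moron")
    # A publisher could hold any comment above a chosen cutoff for review.
    print(f"score={score:.2f} held_for_review={score >= 0.8}")
```

Whether a comment like the one above is shown or quietly held back comes down entirely to where the publisher sets that cutoff.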

2: Filter bubbles

A filter bubble is described by Techopedia as: “the intellectual isolation that can occur when websites make use of algorithms to selectively assume the information a user would want to see, and then give information to the user according to this assumption… A filter bubble, therefore, can cause users to get significantly less contact with contradicting viewpoints, causing the user to become intellectually isolated… Personalized search results from Google and personalized news stream from Facebook are two perfect examples of this phenomenon.”

In practice, this means Google returns results tailored to a user’s browsing habits, so users may only be shown content from sources that reinforce their preconceived ideas. Given the nature of the Google algorithm, users can therefore be manipulated through the content the search function returns to them.

This phenomenon becomes particularly worrying around elections, when people are most likely to Google information about prospective candidates. Filter bubbles can mean that a user only receives positive results about a candidate they like and negative results about a candidate they dislike. This type of manipulation is a prime concern of big tech whistleblower Dr. Robert Epstein.
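To make the mechanism concrete, the toy sketch below shows one way a personalization algorithm of the kind Techopedia describes could narrow what a user sees: results from sources the user has clicked before get a boost, so contradicting viewpoints gradually drop out of the top results. It is a hypothetical illustration only and does not represent Google’s actual ranking code.

```python
# Toy illustration of a "filter bubble": results from sources a user has
# clicked before are boosted, so opposing viewpoints sink in the ranking.
# Hypothetical example only -- not Google's actual ranking algorithm.
from collections import Counter

def personalized_rank(results, click_history):
    """Re-rank results, boosting sources the user has clicked before."""
    clicks = Counter(click_history)             # source -> past click count
    def score(result):
        base = result["relevance"]              # engine's neutral relevance
        boost = 0.1 * clicks[result["source"]]  # personalization bonus
        return base + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"source": "outlet_a", "title": "Candidate praised at rally", "relevance": 0.70},
    {"source": "outlet_b", "title": "Candidate criticized over gaffe", "relevance": 0.72},
]
# A user who has only ever clicked outlet_a now sees it on top,
# even though the other result was rated slightly more relevant.
for r in personalized_rank(results, ["outlet_a"] * 5):
    print(r["source"], "-", r["title"])
```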

3: “Fake news”

Another way that Google could affect the content that users see is through the marking of certain websites or articles as “fake news.” Dr. Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology, discussed this at the Breitbart Masters of the Universe Town Hall, stating:

“If Breitbart published a nice sarcastic, satire, satirical piece, brilliantly written, wouldn’t that look like a fake news story? On the surface, wouldn’t it? Wouldn’t it get automatically censored because the algorithms, and for that matter the people who are making these decisions, they would look at something that he just said,” pointing to Marlow, “as something that’s false, invalid, unreasonable, right? And it would get censored.”

“Now, this kind of power should not be in the hands of a handful of executives in Silicon Valley who are not accountable to us the general public, they’re accountable only to their shareholders.”

Christian satire website The Babylon Bee faced the exact issue that Epstein discussed when Facebook threatened to censor the site for spreading “fake news.” The supposed “fake news” was a satirical article which claimed that CNN had purchased an “industrial-size washing machine to spin news before publication.” Yet, the partisan “fact-checking” service Snopes felt the need to publish a correction of the article, clarifying that CNN had not “made a significant investment in heavy machinery.” After Snopes published its correction, Adam Ford, who runs the Babylon Bee, received a notification from Facebook warning him that if he continued to publish “fake news” corrected by Snopes, he could have his page demonetized and its reach severely reduced.

Similar tactics could easily be employed by Google to permanently blacklist certain websites or articles to influence voters.

4: Autocomplete suggestions

Further research from Dr. Epstein showed that Google appeared to favor positive autocomplete search results relating to Hillary Clinton during the 2016 election, even when search terms critical of Clinton were actually more popular at the time. Epstein’s report concluded that this manipulation of Clinton-related search results had the potential to “shift as many as 3 million votes.”

Epstein, along with his colleagues at the American Institute for Behavioral Research and Technology (AIBRT), became interested in a video published by Matt Lieberman of SourceFed which claimed that Google searches suppressed negative information about Hillary Clinton while other search engines such as Bing and Yahoo showed accurate results.

Epstein and AIBRT tested hundreds of different search terms related to the 2016 election, using Yahoo and Bing search as a control. Epstein’s report stated:

It is somewhat difficult to get the Google search bar to suggest negative searches related to Mrs. Clinton or to make any Clinton-related suggestions when one types a negative search term. Bing and Yahoo, on the other hand, often show a number of negative suggestions in response to the same search terms. Bing and Yahoo seem to be showing us what people are actually searching for; Google is showing us something else — but what, and for what purpose?

As for Google Trends, as Lieberman reported, Google indeed withholds negative search terms for Mrs. Clinton even when such terms show high popularity in Trends. We have also found that Google often suggests positive search terms for Mrs. Clinton even when such terms are nearly invisible in Trends. The widely held belief, reinforced by Google’s own documentation, that Google’s search suggestions are based on “what other people are searching for” seems to be untrue in many instances.

Google tries to explain away such findings by saying its search bar is programmed to avoid suggesting searches that portray people in a negative light. As far as we can tell, this claim is false; Google suppresses negative suggestions selectively, not across the board. It is easy to get autocomplete to suggest negative searches related to prominent people, one of whom happens to be Mrs. Clinton’s opponent.
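A rough way to reproduce this kind of side-by-side comparison is to query the unofficial “suggest” endpoints that power the search boxes in web browsers. The sketch below asks Google and Bing for suggestions on the same prefix and prints both lists; these endpoints are undocumented conveniences that can change or disappear at any time, and the client parameters and example prefix are assumptions rather than anything taken from Epstein’s study.

```python
# Sketch of comparing autocomplete suggestions across search engines using
# the unofficial OpenSearch "suggest" endpoints behind browser search boxes.
# These endpoints are undocumented and may change without notice.
import requests

ENDPOINTS = {
    "google": ("https://suggestqueries.google.com/complete/search",
               {"client": "firefox"}),
    "bing":   ("https://api.bing.com/osjson.aspx", {}),
}

def suggestions(engine: str, term: str) -> list[str]:
    url, extra = ENDPOINTS[engine]
    query_key = "q" if engine == "google" else "query"
    resp = requests.get(url, params={query_key: term, **extra}, timeout=10)
    resp.raise_for_status()
    # Both endpoints return OpenSearch-style JSON: [term, [suggestion, ...]]
    return resp.json()[1]

if __name__ == "__main__":
    term = "hillary clinton cr"   # example prefix, not one from the study
    for engine in ENDPOINTS:
        print(f"{engine}: {suggestions(engine, term)}")
```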

Epstein then hypothesized that Google directly altered search results in an attempt to influence the 2016 election:

Without whistleblowers or warrants, no one can prove Google executives are using digital shenanigans to influence elections, but I don’t see how we can rule out that possibility. There is nothing illegal about manipulating people using search suggestions and search rankings — quite the contrary, in fact — and it makes good financial sense for a company to use every legal means at its disposal to support its preferred candidates.

5: Cutting off ad revenue to sites Google disagrees with ideologically

Google could also curtail free speech by cutting off ad revenue to sites with which they disagree on an ideological level. In February of 2018, current and former Google employees confirmed to Breitbart that leftists at the company were actively working to damage Breitbart’s advertising revenue. In leaked screenshots obtained by Breitbart News, Google ad account manager Aidan Wilks can be seen advising a client of Google’s that advertising on Breitbart may impact their “brand safety.”

Wilks then linked the client to Sleeping Giants, a far-left organization which has repeatedly targeted Breitbart News and other conservative websites with false claims of racism and bigotry. In the same screenshots, another Google employee named Matthew Rivard can be seen telling the client that Wilks’ email was a “nice template” for those who wished to “call out” the issue to other clients.

This list includes just some of the tools and techniques Google could use to influence the political opinions of its users. The only way to guard against this sort of manipulation is to stay constantly on the lookout for it and bring it to the attention of the public. As government scrutiny of Google continues to increase despite the company’s claims that it is not ideologically biased, it will require a combined effort to counter the search giant’s plans to make populist governments a “hiccup” in history.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com
