Google is tweaking its search engine in a new effort to combat “fake news” and “hate speech.”

“Today, in a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system,” proclaimed Ben Gomes, vice president of engineering, in a post on Google’s official blog. “The most high profile of these issues is the phenomenon of ‘fake news,’ where content on the web has contributed to the spread of blatantly misleading, low quality, offensive or downright false information.”

“We’re taking the next step toward continuing to surface more high-quality content from the web,” Gomes continued, adding, “This includes improvements in Search ranking, easier ways for people to provide direct feedback, and greater transparency around how Search works.”

These new changes to Google’s signature search engine will include the ability for users to flag and report “inappropriate” search autocomplete predictions. Users will be able to report autocomplete suggestions as either “hateful,” “sexually explicit,” “violent or includes dangerous and harmful activity,” or “other.”

Google’s Autocomplete update allows for flagging content.

“When you visit Google, we aim to speed up your experience with features like Autocomplete, which helps predict the searches you might be typing to quickly get to the info you need, and Featured Snippets, which shows a highlight of the information relevant to what you’re looking for at the top of your search results,” Gomes explained. “The content that appears in these features is generated algorithmically and is a reflection of what people are searching for and what’s available on the web. This can sometimes lead to results that are unexpected, inaccurate or offensive.”

“Starting today, we’re making it much easier for people to directly flag content that appears in both Autocomplete predictions and Featured Snippets,” Gomes declared. “These new feedback mechanisms include clearly labeled categories so you can inform us directly if you find sensitive or unhelpful content. We plan to use this feedback to help improve our algorithms.”

During the election, psychologist Robert Epstein produced a report claiming that Google was manipulating search bar predictions related to Hillary Clinton to prevent negative terms from appearing.

Google has had a string of high-profile news stories in 2017 related not only to its search engine but also to its advertising:

Last week, Google blamed “a vendor” after InfoWars was reportedly classed as a “low-to-medium” website on the search engine — limiting its appearance in search results.

In March, hundreds of major advertisers withdrew from Google’s platforms after their advertisements appeared alongside “offensive” content.

Google also took heat from the UK Home Affairs Select Committee last month, which claimed the company was being too “soft” on “hate speech.”

In response, the company has implemented numerous new measures, including anti-“fake news” and hate speech workshops for teenagers, “fact check” tags on search results, the censorship of search results, and new YouTube policies that have placed allegedly advertiser-unfriendly content, including LGBT videos, behind a restricted mode and cost dozens of the site’s top content creators substantial amounts of money.

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page at Facebook.