Google Creating AI-Based Automatic Reply System for Messaging Friends and Family


Google is creating an artificial intelligence-based automatic reply system, which will give users personalized, but automated answers to respond to messages with just one click.

According to the Guardian, “Google’s experimental product lab called Area 120 is currently testing a new system simply called Reply that will work with Google’s Hangouts and Allo, WhatsApp, Facebook Messenger, Android Messages, Skype, Twitter direct messages and Slack.”

“The system can apparently work out what people are saying to you and suggest one-tap answers, but Google says it will go further, taking your location, your calendar and other bits of information into account,” the report explained. “One example was using your location to send an instant response to ‘when can you be home?’ using your preferred method of transport and the time it’ll take to get to wherever your home is.”
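To illustrate the idea described above, here is a minimal, purely illustrative sketch of how a smart-reply system might combine simple message-intent matching with contextual data such as a travel-time estimate or calendar status. This is not Google's actual implementation; every name and data field here is hypothetical.

```python
# Hypothetical sketch of context-aware smart replies (not Google's code).

def suggest_reply(message: str, context: dict) -> str:
    """Return a suggested one-tap reply based on simple intent matching."""
    text = message.lower()
    if "when can you be home" in text:
        # A real system might derive this estimate from the user's current
        # location and preferred method of transport; here it is supplied
        # directly as a hypothetical context field.
        return f"I'll be home in about {context['eta_minutes']} minutes."
    if "are you free" in text:
        # In a real system, a calendar lookup could drive this branch.
        if context.get("calendar_free"):
            return "Yes, I'm free!"
        return "Sorry, I'm busy right now."
    # Generic fallback when no intent is recognized.
    return "Thanks for your message!"

# Example usage:
print(suggest_reply("When can you be home?", {"eta_minutes": 25}))
```

A production system would of course use learned models rather than keyword matching, but the shape is the same: classify the incoming message, pull in context, and surface a ready-made reply in the notification.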

The system will also inform others when you’re on holiday, and will be able to scan messages for important information so users can be alerted if there is an urgent matter.

Google’s Reply system, which will initially launch just for Android, is still in its early days, however.

“One of the many projects that we’re working on within Area 120 is Reply, which suggests smart replies right in notifications from various chat apps,” declared a Google spokesman. “Like all other projects within Area 120, it’s a very early experiment so there aren’t many details to share right now.”

Last year, it was reported that Google’s artificial intelligence can make better machine-learning code than the humans who made it, while the company’s A.I. unit also used games in an experiment to see if artificial intelligence can cheat.

In 2017, Google also announced the opening of an artificial intelligence research center in China, and launched an A.I. program to detect “hate speech.”

However, flaws in Google’s hate-speech detecting A.I. system were quickly discovered.

The A.I. system deemed criticism against Muslims to be more “toxic” than the same criticism against Christians, while even slogans such as “Vote Trump” were deemed to be more toxic than slogans supporting Hillary Clinton or Jeb Bush.

An October report also claimed Google’s artificial intelligence was biased against gay people and Jews.

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page on Facebook.


