Smart Speakers from Amazon, Google Hacked to Listen to Conversations, Steal Passwords

The Associated Press
The Associated Press

Recent reports claim that Amazon Alexa and Google Home smart speakers can be targeted by malicious apps that use the devices to listen in on users and steal passwords.

Ars Technica reports that a group of “whitehat” hackers at Germany’s Security Research Labs have discovered that smart home speakers such as the Amazon Alexa and Google Home are more vulnerable to hacking than previously thought. In order to test the security of the devices, the group developed a number of fake apps designed to steal users’ passwords.

The team developed eight hacking apps that were designed to look like an app that would check a user’s horoscope, one was also designed to look like a random number generator. Instead, the apps were used by researchers to eavesdrop on users and phish for their passwords.

Fabian Bräunlein, a senior security consultant at SRLabs, told Ars Technica: “It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes. We now show that not only the manufacturers but… also hackers can abuse those voice assistants to intrude on someone’s privacy.”

The apps all had different names and different ways of operating but generally followed the same routine. A user would say a phrase such as: “Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus” or “OK Google, ask My Lucky Horoscope to give me the horoscope for Taurus.” The eavesdropping apps then responded with the requested information while the phishing apps gave a fake error message, then the app made it seem as if they were no longer running and waited for the next part of the phishing attack.

The apps appeared to stop but quietly logged all conversations within listening distance of the device and sent a copy to the security researchers’ servers. A video showing the operation can be seen below:

Amazon commented on the experiment stating:

Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

On the record Q&A:

1) Why is it possible for the skill created by the researchers to get a rough transcript of what a customer says after they said “stop” to the skill?

This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

2) Why is it possible for SR Labs to prompt skill users to install a fake security update and then ask them to enter a password?

We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. This includes preventing skills from asking customers for their Amazon passwords.

It’s also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password

While a Google spokesperson stated:

All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.

As Breitbart News has written in the past, the easiest way for users to protect themselves from being listened to by these devices is quite simple — don’t purchase one.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com

COMMENTS

Please let us know if you're having issues with commenting.