Researchers have developed a new method to prevent AI systems from spying on you, though the solution is almost as creepy as the problem it solves. In essence, the system predicts what words will be said next in a sentence and generates noises that don't interfere with human listeners' understanding but leave smart devices dedicated to surveillance capitalism utterly confused.
Science.org reports that a team of researchers has invented a new method to fight back against AI voice recognition systems such as Amazon’s Echo devices that are constantly listening in on users. The new technology is called Neural Voice Camouflage and can confuse AI assistants by generating custom audio noises in the background as you talk.
The system uses an “adversarial attack” to fight back against AI eavesdropping. The strategy uses machine learning to tweak sounds so that an AI mistakes them for something else; essentially, one AI is being used to confuse another. However, the process is computationally demanding: the machine-learning system has to process a whole sound clip before learning how to tweak it, making it hard to use in real time, such as during a live conversation.
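To make the “adversarial attack” idea concrete, here is a minimal sketch in the style of a gradient-sign attack on a toy classifier. This is purely illustrative and is not the researchers' Neural Voice Camouflage system: the classifier, weights, and perturbation budget are all assumptions for the example.

```python
import numpy as np

# Toy "speech detector": logistic regression over a 16-dim feature vector
# standing in for a sound clip. Illustrative only, not the published system.
rng = np.random.default_rng(0)
w = rng.standard_normal(16)          # fixed classifier weights
x = rng.standard_normal(16)          # features of a "sound clip"

def predict(v):
    """Classifier confidence that the clip contains recognizable speech."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# The gradient of the score w.r.t. the input is proportional to w, so a
# gradient-sign perturbation nudges each feature against that gradient.
eps = 0.3                            # small budget, so a human listener
                                     # would barely notice the change
x_adv = x - eps * np.sign(w)         # push the score toward "not speech"

print(predict(x), predict(x_adv))    # confidence drops after the attack
```

The catch the article describes is visible here: computing the perturbation requires access to the whole clip (`x`) before it can be tweaked, which is what makes naive adversarial attacks awkward for live audio.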
Researchers taught a machine-learning system to essentially predict the future. The model was trained on many hours of recorded speech, constantly processing 2-second clips of audio and disguising what is likely to be said next. Science.org uses the example of the phrase “enjoy the great feast,” from which the AI can allegedly predict what will be said next.
The AI listens to what was just said and produces sounds that will disrupt a number of phrases that are likely to follow. To the human ear, the audio sounds like background noise, and the spoken words are easily understood, but machines find it incomprehensible.
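The predict-ahead trick can be sketched as a streaming loop: while one 2-second chunk is playing, camouflage noise for it has already been derived from the chunk before. Everything below is an assumed structure for illustration, not the published model; the sample rate, the `predict_noise` function, and the noise level are all hypothetical stand-ins.

```python
import numpy as np

RATE = 16000                  # samples per second (assumed)
CHUNK = 2 * RATE              # the 2-second window described in the article

rng = np.random.default_rng(1)
speech = rng.standard_normal(6 * CHUNK)   # stand-in for 6 chunks of speech

def predict_noise(past_chunk):
    """Hypothetical predictor: from the chunk just heard, produce quiet
    noise to play over the NEXT chunk (here, simple scaled white noise)."""
    level = 0.05 * np.std(past_chunk)     # keep it near-inaudible
    return level * rng.standard_normal(CHUNK)

# Because the noise for chunk t depends only on chunk t-1, it is ready
# before chunk t plays -- no processing lag, unlike a whole-clip attack.
noise = np.zeros_like(speech)
for t in range(1, len(speech) // CHUNK):
    past = speech[(t - 1) * CHUNK : t * CHUNK]
    noise[t * CHUNK : (t + 1) * CHUNK] = predict_noise(past)

camouflaged = speech + noise
```

The design point is the one the article highlights: by betting on what comes next rather than reacting to what was already said, the system sidesteps the real-time limitation of conventional adversarial attacks.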
Mia Chiquier, a computer scientist at Columbia University who led the research, says that this is just the first step in fighting back against AI. “Artificial intelligence collects data about our voice, our faces, and our actions. We need a new generation of technology that respects our privacy,” she said.
Andrew Owens, a computer scientist at the University of Michigan, Ann Arbor, commented: “There’s something nice about the way it combines predicting the future, a classic problem in machine learning, with this other problem of adversarial machine learning.”
Read more at Science.org here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address email@example.com