‘Deepfake’ Audio Helps Criminals Steal Money from Companies


Cyber-security firm Symantec says criminals have been able to steal millions using “deepfaked” audio that makes financial controllers believe they are speaking with their company’s CEO, fooling them into transferring money.

According to a report by BBC News, Symantec says it has seen at least three cases in which “deepfaked” audio of a CEO was used to trick a senior financial controller into transferring money to criminals.

The term “deepfake” — a blend of “deep learning” and “fake” — refers to the use of A.I. and machine learning to create realistic counterfeit audio or video by combining and superimposing elements of existing recordings.

According to the report, Symantec says the A.I. could be trained to build a model of a chief executive’s voice from a large amount of existing audio of the individual — audio that, in most cases, has already been innocently made available through corporate videos, earnings calls, media appearances, and keynote speeches.

“The model can probably be almost perfect,” said Symantec Chief Technology Officer Hugh Thompson, according to BBC News. “Really — who would not fall for something like that?”

Data scientist Alexander Adam noted, however, that creating fake audio of convincing quality would take a considerable amount of time and money.

“Training the models costs thousands of pounds,” said Adam. “This is because you need a lot of compute power and the human ear is very sensitive to a wide range of frequencies, so getting the model to sound truly realistic takes a lot of time.”

Adam added that it would likely take hours of good-quality audio to capture the rhythms and intonation of the targeted individual’s speech patterns.

Nonetheless, deepfakes remain a concern for many.

Deepfakes are “AI-edited clips that can make it look like someone is saying something they never uttered,” notes an April report by Axios entitled “Defending against audio deepfakes before it’s too late.” “But video’s forgotten step-sibling, deepfake audio, has attracted considerably less attention — despite a comparable potential for harm.”

“Experts worry that easily faked but convincing AI impersonations can turn society on its head,” the report adds, “running rampant fake news, empowering criminals, and giving political opponents and foreign provocateurs tools to sow electoral chaos.”

You can follow Alana Mastrangelo on Twitter at @ARmastrangelo, on Parler at @alana, and on Instagram.
