‘Grandparent Scams’: Crooks Are Targeting Seniors with AI-Powered Voice Cloning of Loved Ones

Crooks and scammers are having a field day thanks to AI technologies that can clone voices and extract personal information from social media platforms. Cybercriminals are using AI to create convincing copies of young people’s voices to power “grandparent scams” that target their elderly relatives.

The Palm Beach Post reports that the evolution of AI technology has opened new avenues for criminal activities. A recent report highlights how AI is now being used by scammers to clone voices and gather personal information from social media platforms to perpetrate more convincing scams.

This development is particularly alarming in the context of the notorious “grandparent scam.” In its original form, this scam involved fraudsters calling elderly individuals, impersonating their grandchildren, and making urgent requests for money to cover emergencies like bail or car repairs. The scam’s effectiveness was often limited by the scammer’s ability to convincingly impersonate a family member — scammers often resorted to claiming to have a cold to explain the difference in their voice.

However, with the advent of AI technologies, scammers can now mount far more compelling and personalized attacks. By feeding audio and video clips found online into voice-cloning tools, they can replicate the voice of a family member, adding a new layer of deception to their schemes. This makes the scam harder to detect and preys on victims’ trust and emotional vulnerability.

Florida’s consumer watchdog agency has raised the alarm about these AI-assisted scams. The agency emphasizes the need for heightened awareness and preparedness against these modern technological twists. One recommended defense is encouraging family members to set their social media profiles to private, reducing the amount of personal information accessible to scammers.

Other guidelines include treating caller ID with skepticism, since scammers can falsify this information, and establishing private passwords or security questions for family verification. The agency also advises staying calm and taking standard precautions: hanging up and calling back on a known number, and listening for any inconsistencies that might indicate a scam.

The hallmark signs of a scam remain the same: pressure to act quickly, requests for secrecy, and demands for payment in forms that are difficult to trace or recover, such as wire transfers or gift cards. Recognizing these signs is crucial in avoiding becoming a victim.

Breitbart News previously reported on scammers successfully using this technology in New York City:

The New York Post reports that residents of the Upper West Side of New York City have been left in a state of shock and fear as scammers deploy AI to simulate the voices of their children, creating realistic and distressing scenarios. One mother recounted her harrowing experience, where she received a call from what she believed was her 14-year-old daughter, crying and apologizing, claiming she had been arrested. The voice was so convincing that the mother was prepared to hand-deliver $15,500 in cash for bail, believing her daughter had rear-ended a pregnant woman’s car while driving underage.

The scam was revealed when her actual daughter, who was in school taking a chemistry exam, contacted her to let her know she was safe. Reflecting on the incident, the mother stated, “I’m aware it was really stupid – and I’m not a stupid person – but when you hear your child’s voice, screaming, crying, it just puts you on a different level.”

Read more at the Palm Beach Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
