Amazon Thwarts 1,800 Attempts by North Koreans to Infiltrate Company as IT Workers
Amazon’s security team has uncovered and prevented more than 1,800 attempts by North Koreans to gain employment at the company under false pretenses since April 2024.

The increasing prevalence of AI-powered “nudify” apps and deepfake technology has led to a disturbing trend of students creating and sharing sexually explicit images of their classmates, with 75 percent of these images targeting children under 14 and even as young as 11 years old, according to a new poll of teachers in the UK.

India’s Ministry of Electronics and Information Technology (MEITY) on Wednesday proposed some of the world’s toughest regulations for content generated by artificial intelligence (AI), including “visible labelling, metadata traceability, and transparency for all public-facing AI-generated media.”

OpenAI has paused depictions of Martin Luther King Jr. in its AI video generation tool Sora after users created “disrespectful” deepfake videos of the civil rights leader.

Artificial intelligence is revolutionizing nearly every sector of modern life—from medicine and education to software and communication. Like any tool, however, it can be used for great good or great harm. Sadly, AI is now being used by foreign criminal networks to target innocent Americans, exploiting not only their wallets—but also their trust.

The rise of AI cheats and scammers using deepfake technology to trick interviewers is leading companies to revert to a more traditional hiring practice — the in-person job interview.

A London-based academic has received an apology and a $5,700 refund from Airbnb after a New York apartment host allegedly claimed she caused over $15,963 in damages, using AI-generated images as evidence of the supposed damage. The company initially tried to charge its customer $7,000 for the damages and refused her appeal until a newspaper investigation caused it to change its tune.

A new analysis reveals that AI-powered “nudify” websites, which generate nonconsensual deepfake pornography based on ordinary pictures of victims, are making millions of dollars by exploiting the services of major tech companies like Google, Amazon, and Cloudflare.

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

First Lady Melania Trump was all business while attending the signing of the “Take It Down Act” in the Rose Garden, legislation she has championed and urged Congress to pass.

First Lady Melania Trump joined President Donald Trump in the White House Rose Garden on Monday afternoon to announce the signing of the Take It Down Act. The First Lady used the opportunity to discuss the dangers of AI, which she says is “digital candy” that can be “weaponized, shape beliefs, and, sadly, affect emotions, and even be deadly.”

Actress Jamie Lee Curtis took to Instagram to directly address Meta CEO Mark Zuckerberg, urging him to remove AI-generated commercials that featured her likeness without her consent or endorsement. The personal appeal was successful — ads the actress called, “totally AI fake commercial for some bullshit that I didn’t authorize, agree to or endorse,” were removed from Meta’s platforms.

Mr. Deepfakes, the internet’s largest repository of nonconsensual deepfake pornography, has announced its permanent closure due to the loss of a critical service provider and data.

As U.S. companies hire for remote positions, they face a growing threat from fraudsters using AI tools to create fake identities and credentials. These fake employees then use their company access to wreak havoc with malware.

A troubling trend has emerged on social media where marketers and scammers are creating AI-generated influencers with Down syndrome to sell pornography on platforms like OnlyFans.

Melania Trump leads a roundtable discussion on the use of deepfake technology in the production of revenge porn on Monday, March 3.

A leading “misinformation expert” has come under fire for citing seemingly nonexistent sources in an affidavit supporting Minnesota’s new law banning some AI-generated deepfakes. Opposing lawyers claim the Stanford professor used AI to write his legal document, which backfired when the system “hallucinated” by generating false references to imaginary academic papers.

A new wave of AI-generated influencers is taking over Instagram, built on content stolen from real porn stars and models without their consent. Mark Zuckerberg’s Meta does not appear to be taking action against the scam dubbed “AI pimping.”

A private school in Lancaster, Pennsylvania, has been forced to close its doors following an AI-generated nude photo scandal involving nearly 50 female students.

The U.S. military seeks to develop advanced AI capable of generating fake online personas that are indistinguishable from real people, according to a procurement document recently reviewed by the Intercept.

Online AI chatbots are enabling users to generate explicit nude photos of real people with just a few clicks, raising alarms among experts about a looming “nightmarish scenario.”

A federal judge has issued a preliminary injunction blocking enforcement of a recently passed California law aimed at curbing the spread of AI-generated deepfakes depicting political candidates. In his decision, Judge John Mendez wrote, “While a well-founded fear of a digitally manipulated media landscape may be justified, this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.”

The Foundation for Individual Rights and Expression (FIRE) came out strongly against California Gov. Gavin Newsom (D) for signing a new law banning so-called “deepfakes” in elections, saying it threatens free speech.

California Gov. Gavin Newsom (D) signed two bills on Tuesday aimed at banning “deepfakes” — digitally manipulated video or images — of candidates before elections, as well as at prohibiting digital “disinformation” during elections.

2020 Election censors True Media are repositioning themselves as “AI Deepfake” authorities heading into the 2024 election thanks to backing from tech giant Microsoft.

Beetlejuice Beetlejuice star Jenna Ortega said she deleted her Twitter account after receiving explicit AI images of herself.

The San Francisco City Attorney’s office has launched a lawsuit against 16 of the most visited AI-powered “undressing” websites, alleging violations of state and federal laws prohibiting revenge porn, deepfake pornography, and child pornography.

Despite recent policy updates, Google Search continues to display promoted results for “nudify” AI apps that generate nonconsensual deepfake porn, raising concerns about the tech giant’s ability to combat harmful AI-powered content.

The anti-Deepfake porn bill, otherwise known as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed the U.S. Senate unanimously on Thursday with 100 votes.

Meta’s Oversight Board, known as the social media giant’s “supreme court,” has concluded that Instagram should have removed AI deepfake pornography made of an Indian public figure, highlighting significant gaps in the platform’s content moderation policies and practices.

On Wednesday’s broadcast of CNBC’s “Squawk Box,” Senate Intelligence Committee Chairman Sen. Mark Warner (D-VA) said he is concerned about foreign election interference in the form of cyberattacks, spreading conspiracy theories, setting up fake events to incite violence or hiring

In response to the growing issue of boys using AI apps to create and share sexually explicit images of their female classmates, state legislators across the United States are introducing bills to protect minors from this new form of exploitation. Meanwhile, Silicon Valley continues to make billions with the very AI models boys are exploiting.

An investigation has been launched into allegations of inappropriate digital photos being created and shared in a California school.

The Federal Trade Commission (FTC) has awarded prizes to four organizations for developing technologies that can distinguish between authentic human speech and audio generated by AI, as concerns grow over the influence of deepfakes on elections and consumer scams.

Italian Prime Minister Giorgia Meloni is seeking damages over AI-generated deepfake pornography that superimposes her face onto the body of a naked woman.

NewsGuard, the purportedly impartial media rating service that has created a blacklist of disfavored news organizations, is ramping up efforts to prevent AI from spreading fake content that could influence the upcoming U.S. presidential election.

The misuse of AI has landed a group of middle school students in serious trouble after they created and disseminated explicit images of their classmates. Five students from a Beverly Hills, California, middle school have reportedly been expelled after creating AI deepfake porn of more than a dozen classmates.

AI giant OpenAI has revealed its newest creation, an AI system named Sora that can generate realistic videos from text descriptions. As AI-generated deepfake videos already cause problems online ranging from fake porn to realistic scam calls, the advent of easy-to-create video comes with the potential for trouble.

Major technology companies including Mark Zuckerberg’s Meta, OpenAI, Google and China’s TikTok are signing an agreement aimed at curbing the malicious use of artificial intelligence to meddle in elections.

A new AI-generated image trend on social media, known as #DignifAI, has turned the tables on deepfake porn by creating images that add clothing to photos of scantily clad women in order to make them appear more modest. The trend has expanded to men as well, removing face tattoos and generating images of celebrities as clean cut members of society.
