Anthropic Study: AI Models Are Highly Vulnerable to ‘Poisoning’ Attacks

A recent study by Anthropic, conducted in collaboration with several academic institutions, has uncovered a startling vulnerability in AI language models, showing that as few as 250 malicious documents slipped into training data are enough to corrupt a model's output. Deliberately feeding malicious data into AI models is ominously referred to as a “poisoning attack.”

Lawsuit: Anthropic Has Been Harvesting Reddit User Posts to Train AI

Reddit, the popular social media platform, has filed a lawsuit against AI startup Anthropic, alleging breach of contract and unfair business practices. The self-described “frontpage of the internet,” a notorious left-wing echo chamber, claims Anthropic has been vacuuming up user posts to train its AI systems.
