Machine Learning ‘Fairness:’ FTC Centers AI Regulations on Woke Talking Points

[Photo caption: FTC Commissioner nominee Lina M. Khan testifies during a Senate hearing, Washington, DC, April 21. Graeme Jennings-Pool/Getty Images]

The FTC under Joe Biden appointee Lina Khan is taking a woke line on AI technology, pledging today to take an approach to AI regulation centered on “civil rights” and “equal opportunity” in addition to the standard FTC provinces of consumer protection and competition.

In a statement issued today jointly with the infamously woke Civil Rights Division of the Department of Justice, the U.S. Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau, the agency promised to prioritize “fairness, equality, and justice” in AI regulation.

Via the FTC:

Today, the use of automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” is becoming increasingly common in our daily lives. We use the term “automated systems” broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. Private and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services. These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.

Much of the FTC’s announcement deals with concerns advanced by the leftist-dominated field of machine learning fairness, which Breitbart News has previously covered.

The announcement mentions datasets that “incorporate historical bias,” as well as automated systems that can “correlate data with protected classes”:

Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.

The FTC specifies neither the data nor the protected classes it correlates with, leaving the public to guess. Is it crime data? Debt defaults? Rent payments? Credit scores? The FTC doesn’t tell us.
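For readers unfamiliar with what the FTC means by correlation with protected classes, the idea can be sketched in a few lines of code. In the toy simulation below, a lending “model” never looks at group membership, only at a neighborhood score, yet because that score is constructed to correlate with group, approval rates diverge sharply by group. Every name and number here (the `zip_score` feature, the groups, the threshold) is invented for illustration and is not drawn from the FTC’s statement:

```python
import random

random.seed(0)

def make_applicant(group):
    # By construction in this toy example, group "B" applicants tend to
    # live in areas with a lower neighborhood score than group "A".
    base = 0.7 if group == "A" else 0.4
    score = min(1.0, max(0.0, random.gauss(base, 0.1)))
    return {"group": group, "zip_score": score}

applicants = (
    [make_applicant("A") for _ in range(1000)]
    + [make_applicant("B") for _ in range(1000)]
)

def approve(applicant):
    # The "model" consults only the proxy feature, never the group label.
    return applicant["zip_score"] > 0.55

def approval_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in pool) / len(pool)

rate_a = approval_rate("A")
rate_b = approval_rate("B")
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
```

Because the proxy feature tracks group membership, the decision rule produces a large gap in approval rates between the two groups even though the protected attribute is never an input. Whether any given real-world feature behaves this way is exactly the kind of question the FTC leaves unanswered.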

Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election.
