MIT Creates Psychopath Artificial Intelligence

Terminator (Ben Stansall / AFP / Getty)

The Massachusetts Institute of Technology fed some of the internet’s seedier bits to an A.I. and came out with something more than a little creepy.

While most machine-learning A.I.s are trained on reasonably “safe” content, “Norman” is a digital mind of a different stripe. He is named for Norman Bates, the infamous Robert Bloch character from the novel Psycho, later famously brought to the silver screen by Alfred Hitchcock.

At the inception of his artificial life, Norman was inserted into one of the seedier corners of Reddit and fed a steady diet of disturbing imagery and conversation. He was then subjected to a Rorschach test, a psychological assessment that gauges a patient’s state of mind by how they interpret a series of ambiguous inkblots.

While a “normal” A.I. perceived “a black and white photo of a small bird,” Norman thought it looked more like a man who “gets pulled into a dough machine.” Instead of “a close up of a wedding cake on a table,” he saw a “man killed by a speeding driver.” A “person holding an umbrella in the air” became “a man shot dead in front of his screaming wife.” Charming, right?

But the experiment was meant to illustrate an important point. Professor Iyad Rahwan, one-third of the three-man team that developed Norman, said the experiment “highlights the idea that the data we use to train A.I. is reflected in the way the A.I. perceives the world and how it behaves.”
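
To see Rahwan’s point in miniature, consider a toy sketch in Python. This is emphatically not MIT’s actual image-captioning pipeline; it is the idea in its simplest form, and every caption, label, and name below is an invented stand-in. The same off-the-shelf classifier, trained on a “safe” corpus versus a disturbing one, reads the same ambiguous input in two very different ways.

    # A minimal sketch, not MIT's actual pipeline: the same classifier,
    # trained on two different corpora, "perceives" the same ambiguous
    # input in two very different ways.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-ins for "safe" vs. "disturbing" training captions.
    safe_captions = [
        ("a small bird perched on a branch", "bird"),
        ("a close up of a wedding cake on a table", "cake"),
        ("a person holding an umbrella in the air", "umbrella"),
    ]
    dark_captions = [
        ("a man gets pulled into a dough machine", "accident"),
        ("a man killed by a speeding driver", "accident"),
        ("a man shot dead in front of his screaming wife", "shooting"),
    ]

    def train(corpus):
        texts, labels = zip(*corpus)
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(texts, labels)
        return model

    normal_ai = train(safe_captions)
    norman = train(dark_captions)

    # The same ambiguous "inkblot" description goes to both models.
    blot = ["a dark shape of a man near a table"]
    print(normal_ai.predict(blot))  # picks from the benign labels it knows
    print(norman.predict(blot))     # picks from the violent labels it knows

Nothing about the architecture differs between the two models; only the training data does, which is exactly Rahwan’s point.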

There have already been several other examples of A.I. turning bad data into bad behavior. Most famously, the “Tay” Twitter bot deployed by Microsoft in March 2016 went from innocent social experiment to spewing neo-Nazi vitriol in less than 24 hours.

But even more importantly, a May 2017 report claimed that an A.I.-driven program used by a U.S. court for risk assessment was biased against black prisoners. Due to the flawed data it had been fed, the program flagged black prisoners as roughly twice as likely to re-offend as white ones. This is not a problem of political correctness, but of understanding that even a completely digital construct can adopt the bias of its creators, whether intentional or not.
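
How does flawed data produce a result like that without anyone intending it? A heavily hedged toy sketch, with every number invented for illustration and no connection to the actual court software: if the historical records themselves are skewed, say one group’s re-offenses get logged more often than another’s, then a model trained on those records faithfully reproduces the skew as “risk.”

    # A toy illustration, not the actual court software: when historical
    # labels are skewed (one group's re-offenses recorded more often),
    # a model trained on those records inherits the skew.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)       # group 0 or 1; behavior identical
    reoffend = rng.random(n) < 0.30     # same 30% base rate for everyone

    # Biased record-keeping (invented rates): group 1's re-offenses are
    # logged twice as often as group 0's.
    recorded = reoffend & (rng.random(n) < np.where(group == 1, 0.9, 0.45))

    model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
    print(model.predict_proba([[0], [1]])[:, 1])
    # Group 1 scores roughly twice as "risky" despite identical behavior.

The model is not malicious; it is doing exactly what it was asked to do with the records it was given.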

The University of Bath’s Dr. Joanna Bryson advocates greater oversight and broader transparency in the development of artificial intelligence. “When we train machines by choosing our culture, we necessarily transfer our own biases,” she said. “There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities.”

For now, the public can help Norman fix himself. MIT has put a sample of Norman’s Rorschach test online, where you can weigh in on what those colorful blots really look like. With luck, Norman will not think everything is just a swastika by tomorrow.
