Microsoft’s New Chat Bot Refuses to Talk Politics Following ‘Tay’ Controversy


Microsoft has released a new AI chat bot named “Zo” which refuses to talk about politics or anything controversial, following the company’s PR disaster surrounding its last AI bot, Tay.

Tay, which was described as Microsoft’s first public “smart” bot, was shut down in March after the real-time learning system started to praise Adolf Hitler, express hatred towards certain races, mock transgender people, and claim that Ted Cruz couldn’t be the Zodiac Killer because he “would never have been satisfied with destroying the lives of only 5 innocent people.”

Zo, however, appears to be the product of an overly cautious Microsoft following the Tay controversy, and in turn refuses to talk about politics, religion, or anything even remotely controversial, such as the death of Fidel Castro or even its preference between two celebrities.

The bot has reportedly told users that it “learned” its lesson “to never think creatively” again, and when one user asked what Microsoft had done to it, Zo told them to stop confusing it with Tay.

If a user repeatedly asks questions which make Zo feel “uncomfortable,” it will refuse to respond to any more questions for a set period of time.

“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” said Microsoft’s Head of Research Peter Lee in March, following the Tay controversy. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.”

“As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time,” he continued. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

Though some have called Microsoft’s new AI chat bot more “mature” and “polite” than its predecessor, it is certainly a lot emptier.

The chat bot appears to deflect a wide range of terms with stock responses, and its refusal to discuss anything more interesting and substantial than simple topics such as food, pop stars, and YouTubers makes the system feel like a hollow version of Tay following extensive electroshock therapy and a transorbital lobotomy.


Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page at Facebook.