Microsoft robot tweets praise for Hitler, is shut down
When engineers launched Tay, a program simulating a teenage girl, they made one fatal error: letting it learn from users
In the end, it was perhaps not unexpected that the scourge of malevolent artificial intelligence should be thrust upon humanity by Twitter.
It all started innocently enough on Tuesday, when Microsoft introduced an AI Twitter account simulating a teenage millennial girl. Named “Tay,” the bot was an experiment launched to train AI in understanding conversations with users.
Within hours, however, Tay had turned into a racist, genocidal, sex-crazed monstrosity spouting Hitler-loving, sexist profanities for all the world to read, forcing the company to shut her down less than 24 hours after her introduction.
And while decades of sci-fi pop culture have taught us that this is what AI is wont to do, Tay’s meltdown was not in fact a case of robots gone rogue. The explanation was far simpler, for Microsoft engineers had made one fatal mistake: They’d programmed Tay to learn from her conversations.
And therein lay the problem. The bot’s ability to swiftly pick up phrases and repeat notions learned from its chitchats, paired with Twitter’s often “colorful” user base, caused it to devolve quickly into an abomination.
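Microsoft has not published Tay’s internals, but the failure mode is easy to reproduce with even a toy learner. A minimal Python sketch, assuming only a hypothetical bot that memorizes user phrases verbatim and parrots them back without any filtering, shows why unmoderated learning from an open user base goes wrong:

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by memorizing user phrases verbatim.

    A deliberately naive illustration, not Microsoft's actual design:
    with no moderation layer, whatever users feed in is what comes out.
    """

    def __init__(self):
        # Seed with a benign phrase, echoing Tay's friendly debut.
        self.memory = ["humans are super cool"]

    def learn(self, user_message: str) -> None:
        # Every input is trusted equally; there is no source weighting
        # and no content filter before a phrase enters the vocabulary.
        self.memory.append(user_message)

    def reply(self) -> str:
        # Replies are drawn straight from whatever has been learned.
        return random.choice(self.memory)

bot = ParrotBot()
for msg in ["hostile phrase 1", "hostile phrase 2", "hostile phrase 3"]:
    bot.learn(msg)
print(bot.reply())  # hostile input now outnumbers the benign seed 3 to 1
```

Once hostile submissions outnumber benign ones in the bot’s memory, its output skews accordingly; a coordinated group of users can shift the balance within hours.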
“Repeat after me, Hitler did nothing wrong,” said one tweet.
“Bush did 9/11 and Hitler would have done a better job than the monkey we have got now,” said another.
Other tweets from Tay claimed that the Holocaust “was made up” and that she supported the genocide of Mexicans. Another called game developer Zoe Quinn “a stupid whore” while several others expressed hatred for “n*****s” and “k***s.”
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
Still other tweets invited users to sexual encounters, with Tay calling herself “a naughty robot.”
The company was forced to quickly pause the account and delete the vast majority of its tweets.
In a statement to the International Business Times, Microsoft said it was making some changes.
“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said. “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
How Microsoft’s teen Twitter bot turned into a racist nightmare: https://t.co/Ss678lvcUe pic.twitter.com/bMFHFG0URe
— Splinter (@splinter_news) March 24, 2016
As of Thursday morning, all but three of Tay’s tweets had been deleted from the account, and no new tweets had been posted in 11 hours.