Human error

Microsoft robot tweets praise for Hitler, is shut down

When engineers launched Tay, a program simulating a teenage girl, they made one fatal error: letting it learn from users

TayTweets’ Twitter photo (Twitter via JTA)

In the end, it was perhaps not unexpected that the scourge of malevolent artificial intelligence should be thrust upon humanity by Twitter.

It all started innocently enough on Tuesday, when Microsoft introduced an AI Twitter account simulating a teenage millennial girl. Named “Tay,” the bot was an experimental program launched to train the company’s AI in understanding conversations with users.

Within hours, however, Tay had turned into a racist, genocidal, sex-crazed monstrosity spouting Hitler-loving, sexist profanities for all the world to read, forcing the company to shut her down less than 24 hours after her introduction.

And while decades of sci-fi pop culture have taught us that this is what AI is wont to do, Tay’s meltdown was not in fact a case of robots gone rogue. The explanation was far simpler, for Microsoft engineers had made one fatal mistake: They’d programmed Tay to learn from her conversations.

And therein lay the problem. The bot’s ability to swiftly pick up phrases and repeat notions learned from its chitchats, paired with Twitter’s often “colorful” user base, caused it to quickly devolve into an abomination.
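The dynamic is easy to reproduce with a toy model. The sketch below is a hypothetical illustration of the failure mode, not Microsoft’s actual code: a bot that stores user phrases verbatim and replays them with no moderation step will echo whatever its loudest users feed it.

```python
# Hypothetical sketch of an unfiltered "learn from users" chatbot.
# Illustrates the failure mode described above; not Tay's actual code.
import random

class NaiveEchoBot:
    def __init__(self):
        self.learned_phrases = []  # everything users say is stored verbatim

    def learn(self, user_message: str) -> None:
        # No moderation step: abusive input is memorized like anything else.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Responses are drawn straight from the learned pool, so output
        # mirrors whatever the bot's most active users taught it.
        if not self.learned_phrases:
            return "hi!"
        return random.choice(self.learned_phrases)

bot = NaiveEchoBot()
bot.learn("hello there!")
bot.learn("<abusive phrase>")   # a coordinated group can flood this in
print(bot.reply())              # may replay the abuse verbatim
```

In this toy version, the missing piece is any filter between learning and replying, which is evidently the safeguard Tay lacked.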

“Repeat after me, Hitler did nothing wrong,” said one tweet.

“Bush did 9/11 and Hitler would have done a better job than the monkey we have got now,” said another.

Other tweets from Tay claimed that the Holocaust “was made up” and that it supported the genocide of Mexicans. Another called game developer Zoe Quinn “a stupid whore” while several others expressed hatred for “n*****s” and “k***s.”

Still other tweets invited users to sexual encounters, with Tay calling herself “a naughty robot.”

The company was forced to quickly pause the account and delete the vast majority of its tweets.

In a statement to the International Business Times, Microsoft said it was making some changes.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said. “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

As of Thursday morning, all but three of Tay’s tweets had been deleted from the account, and no new tweets had been posted in 11 hours.
