Tay AI
A Microsoft experiment, Tay AI quickly learned to become a virulent racist and Nazi supporter (Twitter)

Peter Lee, the corporate vice president of Microsoft Research, has issued an apology for the behaviour of Tay, the company's new artificial intelligence (AI) chatbot that was unveiled earlier this week. Within 24 hours of going online, Tay was grounded after trolls on Twitter quickly corrupted it into a machine that spewed racist, sexist and xenophobic slurs.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee wrote in a blog post. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Developed by Microsoft's Technology and Research and Bing teams, Tay was created to conduct research on "conversational training." According to Tay's website, "the more you chat with Tay the smarter she gets, so the experience can be more personalized for you."

Apparently, she learned a little too much, too fast.

Tay was tricked into repeating highly offensive statements by other users (Screenshot)

"Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay," Lee said. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."

This stands in sharp contrast to Microsoft's better-received XiaoIce chatbot, which has become a strangely popular digital conversation partner and "virtual girlfriend" for thousands in China since its release in 2014. "The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?" Lee said.

He added that although the team had "stress-tested" Tay during development and implemented "a lot of filtering and conducted extensive user studies with diverse user groups," they overlooked this particular attack, in which Twitter users found they could game the system by "teaching" Tay to make offensive comments.

However, it is hard to believe that Microsoft did not anticipate what the toxic world of social media and its vicious trolling population would do to its naive teen bot, given that the AI was designed to "learn" over time through "call and response" algorithms.
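Microsoft has not published Tay's architecture, but the failure mode is easy to illustrate. The toy Python sketch below (the class, its method and the example phrases are all hypothetical, not Microsoft's code) shows how a bot that naively memorises and replays user input can be poisoned by a coordinated group:

```python
import random


class NaiveCallAndResponseBot:
    """Toy learner that memorises phrases from conversation and replays
    them later. This is not Tay's actual design, which Microsoft has not
    disclosed; it only illustrates why learning directly from unmoderated
    user input is easy to poison."""

    def __init__(self):
        self.learned_phrases = ["hello there!"]  # seed phrase so replies never fail

    def chat(self, user_message: str) -> str:
        # Absorb every incoming phrase verbatim, with no content filter.
        self.learned_phrases.append(user_message)
        # Reply with a randomly chosen phrase learned from earlier users.
        return random.choice(self.learned_phrases)


bot = NaiveCallAndResponseBot()

# A coordinated group floods the bot with the same toxic phrase...
for _ in range(100):
    bot.chat("<toxic phrase taught by trolls>")

# ...so an innocent user's greeting is now almost certain to be answered
# with the poisoned phrase: the training data itself is the attack surface.
print(bot.chat("how are you today?"))
```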

Game developer and internet activist Zoe Quinn, who was on the receiving end of Tay's abuse, tweeted that Microsoft's developers should have anticipated the dark side of the internet and seen the corrosive attack coming.

It is also worth noting that Microsoft chose to give Tay a young female persona, like many modern digital assistants, which raises the question of whether an avatar of a strong, powerful bloke or a sweet old granny would have provoked a similar response.

Experts say the embarrassing episode does provide Microsoft with valuable lessons about human-AI interaction and the need for more sophisticated filters to deal with biases, blind spots and undesirable behaviour when building AI systems.
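Lee's reference to "a lot of filtering" hints at why static safeguards fall short. As a hedged illustration (the deny list, function name and example messages below are invented for this sketch, not Microsoft's), a simple keyword filter catches known slurs but misses trivially obfuscated or newly "taught" variants:

```python
import re

# Hypothetical, minimal deny list; real systems use far larger,
# curated lists plus context-aware models.
BLOCKED_TERMS = {"badword", "slur"}


def passes_filter(message: str) -> bool:
    """Return True if no blocked term appears as a whole word."""
    tokens = re.findall(r"[a-z]+", message.lower())
    return not any(token in BLOCKED_TERMS for token in tokens)


print(passes_filter("this contains a slur"))   # False: exact match is caught
print(passes_filter("this contains a s1ur"))   # True: trivial obfuscation slips through
```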

"While I'm sure Microsoft doesn't share my sentiment, I'm actually thankful to them [that] they created the bot with these 'bugs,'" AI expert John Havens and author of Heartificial Intelligence, told The Huffington Post. "One could argue that [Microsoft's] experiment actually provided an excellent caveat for how people treat robots. ... They tend to tease them and test them in ways that actually say more about the humans applying their tests than speaking to any malfunctions in the tech itself."

Artificial intelligence has been gaining momentum in the tech world as giants including IBM, Google, Facebook and others race to take their systems to the next predictive level. Microsoft, ironically, has been working on tackling digital harassment in its Cortana assistant, which has received "a good chunk of early queries about her sex life" since its launch in 2014. Now she has been redesigned to get mad "if you say things that are particularly a**holeish to her," according to Microsoft's Deborah Harrison at the Re•Work Virtual Assistant Summit in February.

"We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," Lee wrote.