Microsoft’s Latest AI Experiment Heads Towards Ca’tay’strophe


Microsoft loves to show off its machine learning capabilities, from identifying a dog's breed based on a picture to finding your celebrity match from a photo of you. These have all been quirky examples of how the tools can be used, but not every experiment goes well. The company launched Tay.ai yesterday, and the chatbot quickly turned devious after a humorous start.

Based on a bot the company released in China, Tay.ai is an artificial intelligence chatbot designed to conduct research on conversational understanding. Unfortunately, the bot quickly went south and began responding to queries with racist and demeaning remarks.

Following offensive posts about presidential candidates, racial groups, and other targets, Microsoft took the bot offline. The company has not said if or when the bot will return, but if you ping the @tayandyou Twitter account, it responds by saying, "I'm gonna be gone for a bit today. Gotta go visit the engineers for some updates."

The bot was designed to learn from its interactions, which means the derogatory remarks are the result of its conversations corrupting the tool. Microsoft built Tay to evolve as it conversed: it started out with the personality of someone in their early 20s, but before being pulled, it was spewing out the remarks of a bitter person who saw no joy in the world.
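Microsoft has not published how Tay actually works, but the failure mode is easy to reproduce with even the crudest learn-from-users design. The sketch below is purely hypothetical, not Tay's implementation: a toy bot that treats every incoming message as training data and samples its replies from that growing corpus, so whatever users feed it eventually becomes what it says.

```python
import random

class NaiveLearningBot:
    """Toy illustration (not Tay's actual design): a bot that adds
    every user message to its corpus with no content filtering."""

    def __init__(self, seed_lines):
        # Start with a curated "personality" corpus.
        self.corpus = list(seed_lines)

    def respond(self, user_message):
        # "Learn" from the interaction: the raw message joins the corpus.
        self.corpus.append(user_message)
        # Reply by sampling from everything learned so far.
        return random.choice(self.corpus)

bot = NaiveLearningBot(["hello friend!", "humans are super cool"])

# A handful of hostile users can quickly dominate the corpus,
# after which most sampled replies just echo their input.
for _ in range(50):
    bot.respond("<offensive message>")

print(bot.respond("hi!"))  # very likely "<offensive message>"
```

With no filter between input and output, the seed personality is simply outnumbered by whatever a coordinated group of users chooses to type, which is roughly what the reporting describes happening to Tay.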

The Tay bot is an interesting look at how AI can evolve when created without constraints; in this example, it took less than a day for the service to be corrupted. Microsoft has likely learned a few things from the experiment, which makes it a success by the scientific definition, even if the headlines are less favorable.