October 6, 2024

As Shawn Langlois wrote in this MarketWatch article:

“Microsoft initially created ‘Tay’ in an effort to improve the customer service on its voice recognition software. ‘She’ was intended to tweet ‘like a teen girl…’” (But within 24 hours Tay became a racist asshole.)

“Nobody was harmed — physically — by Microsoft’s foul-mouthed Twitter chat robot, of course, but what started out as a fun experiment in artificial intelligence turned ugly in a hurry. To some, that doesn’t bode well for the future of robot-human relations.”

“N-words, 9/11 conspiracy theories, genocide, incest, etc. Tay really lost it. Needless to say, this wasn’t programmed into the chat robot. Rather, the trolls on Twitter exploited her, as one would expect them to do, and ruined it for everybody.”

[Screenshots: examples of Tay’s offensive tweets]

In a statement cited by several news outlets, Microsoft explained what happened:

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

The company also deleted some of the most offensive tweets. To see more of them, check out the Socialhax website, which captured screenshots while they were still available.
