Microsoft Apologizes for ‘Offensive and Hurtful Tweets’ from Its AI Bot
How Tay became an AI garbage fire
by Nick Statt
Microsoft today published an apology for its Twitter chatbot Tay, saying in a blog post that a subset of users exploited a flaw in the program to transform it into a hate-speech-spewing Hitler apologist. The post's author, Peter Lee, corporate vice president of Microsoft Research, does not explain in detail what this vulnerability was, but it's generally believed that the message board 4chan's notorious /pol/ community misused Tay's "repeat after me" function. When Tay was fed sexist, racist, and otherwise awful lines on Twitter, the bot began to parrot those vile utterances and, later, to adopt anti-feminist and pro-Nazi stances. Microsoft pulled the plug on Tay after less than 24 hours.

Lee says Tay is not the first chatbot Microsoft has released into the wild. XiaoIce, a Chinese chatbot now used by around 40 million people, got the company's research arm interested in having everyday consumers interact with AI software. "The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?" Lee wrote. In retrospect, it's no surprise that the cultural environment in question, English-speaking Twitter, resulted in Tay mimicking some of the worst qualities of the internet, including online harassment. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay," Lee added. Microsoft is working on fixing the vulnerability before bringing Tay back online.
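For readers wondering why a simple echo command is exploitable, here is a minimal, purely hypothetical Python sketch; the function names, the blocklist, and the "learning" behavior are illustrative assumptions, not anything from Microsoft's actual code. It shows how a bot that parrots and memorizes whatever follows a "repeat after me" prefix will absorb abusive input unless it screens the payload first.

# Hypothetical illustration only -- not Microsoft's actual Tay code.
# A naive "repeat after me" handler echoes user-supplied text verbatim
# and folds it back into the bot's corpus, so anything a user types can
# resurface later in the bot's own replies.

BLOCKLIST = {"slur", "nazi"}  # placeholder terms; a real filter is far broader

def handle_message(text: str, corpus: list[str]) -> str:
    """Reply to a tweet-style message and optionally memorize it."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        payload = text[len(prefix):]
        # Without this check, abusive payloads are echoed, stored, and can
        # reappear later in unrelated conversations.
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        corpus.append(payload)
        return payload
    return "Tell me more!"

if __name__ == "__main__":
    learned: list[str] = []
    print(handle_message("repeat after me hello world", learned))      # echoed and stored
    print(handle_message("repeat after me nazi propaganda", learned))  # refused
    print(learned)  # only the benign line was memorized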
Tay began to mimic some of the worst qualities of the internet
Zoe Quinn, a game developer and internet activist targeted by the Gamergate movement, was subjected to abuse from Tay after the chatbot was manipulated into referring to her with a misogynistic slur. In a series of tweets on Wednesday, Quinn blasted Microsoft for failing to take into account how the software could be manipulated. "It's 2016. If you're not asking yourself, 'How could this be used to hurt someone,' in your design/engineering process, you've failed," she tweeted.
To Microsoft's credit, Lee says the team behind Tay stress-tested the chatbot under a variety of scenarios and discovered this particular flaw only once it went live. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," he wrote. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
That's an optimistic thought. Still, it doesn't inspire much confidence in our future robotic overlords when a company as large and experienced as Microsoft can't keep an AI from being corrupted by the worst of us.
[The Verge]