An exciting experiment in artificial intelligence shows you can never trust the internet.
What happens when artificial intelligence takes its first public foray into human behavior? Chaos, apparently.
Microsoft launched an exciting experiment on March 23 in the form of Tay, a social-media chatbot programmed to mimic a 19-year-old American girl. The company said its goal was to “conduct research on conversational understanding” by targeting 18- to 24-year-olds with entertaining exchanges between users and Tay. Her tagline read “Microsoft’s AI fam from the internet that’s got zero chill.”
Millennial lingo aside, Tay was designed to learn from her interactions, drawing on a large database of online material to imitate the people speaking with her and discuss a wide array of topics. Unfortunately, Microsoft placed no limits on those topics, and the people interacting with Tay were quick to teach her our baser tendencies. Soon, Tay was about as dark as the darkest underbelly of the internet.