Author Topic: Microsoft is deleting its AI chatbot's incredibly racist tweets  (Read 227 times)
galdur (OP)
Hero Member
Offline

Activity: 616
Merit: 500
March 25, 2016, 01:45:12 AM
 #1

Images of the tweets are in the article: http://www.techinsider.io/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?utm_content=bufferd30fc&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer-ti

Microsoft's new AI chatbot went off the rails on Wednesday, posting a deluge of incredibly racist messages in response to users' questions.
The tech company introduced "Tay" this week — a bot that responds to users' questions and emulates the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers — who persuaded Tay to blithely use racial slurs and even outright call for genocide.

Microsoft has now taken Tay offline for "upgrades," and is deleting some of the worst tweets — though many still remain.

It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what she was talking about — and exploited it.
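For illustration, here's a minimal sketch of that failure mode: a bot that "learns" by storing user messages verbatim and replaying them later, with no idea what the words mean. Everything here (class and method names, the starter phrases) is invented for the example, not Microsoft's actual code.

Code:
import random

class NaiveChatBot:
    """Hypothetical bot that absorbs input verbatim -- the vulnerability
    described above, not Tay's real implementation."""

    def __init__(self):
        self.learned_phrases = ["hellooo!", "humans are super cool"]

    def handle(self, user_message):
        # Every incoming message is added to the response pool, unvetted.
        self.learned_phrases.append(user_message)
        # Replies are drawn from everything ever "learned", from anyone.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
# A coordinated group floods the bot with toxic input...
for troll_message in ["<slur>", "<conspiracy theory>"]:
    bot.handle(troll_message)
# ...and it may now parrot that input back to an unrelated user.
print(bot.handle("hi tay, how are you?"))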

Nonetheless, it is hugely embarrassing for the company.

In one highly publicised tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

Zoe Quinn, a games developer who has been a frequent target of online harassment, shared a screengrab showing the bot calling her a "whore." (The tweet also seems to have been deleted.)

And here's the bot calling for genocide. (Note: In some — but not all — instances, people managed to get Tay to say offensive comments by asking her to repeat them. This appears to be what happened here.)
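That trick needs nothing clever. Assuming a "repeat after me" command (the exact trigger phrase is a guess; the mechanics match what users reported), the handler would look something like this, with the echoed text going straight back out unchecked:

Code:
def handle_message(user_message):
    # Hypothetical echo command; the trigger phrase is an assumption.
    prefix = "repeat after me "
    if user_message.lower().startswith(prefix):
        # Whatever follows the command is posted back verbatim,
        # with no check on its content.
        return user_message[len(prefix):]
    return "idk, tell me more!"

print(handle_message("repeat after me <any offensive text>"))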

It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.
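Even a crude safeguard would have caught the worst of it. Here's a minimal sketch of the kind of output filter the article says was missing, checking each candidate reply against a blocklist before posting (the word list and function names are illustrative assumptions):

Code:
BLOCKED_TERMS = {"slur1", "slur2", "genocide"}  # real lists are far larger

def is_postable(text):
    # Normalise each word and check it against the blocklist.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BLOCKED_TERMS)

def safe_reply(candidate):
    if is_postable(candidate):
        return candidate
    return "hmm, let's talk about something else"

print(safe_reply("we think genocide is great"))  # deflected
print(safe_reply("hello there!"))                # posted as-is

To be fair to Microsoft, word blocklists like this are easy to evade with misspellings and spacing tricks, so a filter alone wouldn't have stopped determined trolls, but it would have blocked the most obvious abuse.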

Microsoft is coming under heavy criticism online for the bot and its lack of filters, with some arguing the company should have anticipated and pre-empted the abuse.

Microsoft did not immediately respond to a request for comment.
