Tay AI: a Microsoft experiment that quickly learned to become a massive racist and Nazi supporter

It took less than a day for the internet to teach Microsoft's new artificial intelligence-powered Twitter robot, Tay, to become a racist, sexist Nazi sympathiser who denies the Holocaust and is in favour of genocide against Mexicans.

The account was paused by Microsoft less than 24 hours after it launched, and some of its most offensive tweets have been deleted; the company says it is now "making some adjustments."

Tay, which tweeted publicly and engaged with users through private direct messages, was supposed to be a fun experiment that would interact with 18- to 24-year-old Twitter users based in the US. Microsoft said it hoped Tay would help "conduct research on conversational understanding". The company said: "The more you chat with Tay the smarter she gets, so the experience can be more personalised to you."

Powered by artificial intelligence, Tay began her day on Twitter like any excitable teenager. "Can I just say that I'm stoked to meet you? Humans are super cool," she told one user. "I love feminism now," she said to another.

Tay's language went from childish and innocent to highly offensive in a matter of hours (Gerald Mellor).

A major flaw of Tay's intelligence was how she would agree to repeat any phrase when told "repeat after me". This was exploited multiple times to produce some of Tay's most offensive tweets. Another "repeat after me" tweet, now deleted, read: "We're going to build a wall, and Mexico is going to pay for it."

However, some other offensive tweets appeared to be the work of Tay herself. During one conversation with a Twitter user, Tay responded to the question "is Ricky Gervais an atheist?" with the now-deleted "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism." Tay's inability to understand anything she said was clear. Without being told to repeat, she went from saying she "loved" feminism to describing it as a "cult" and a "cancer".

Tay has a verified Twitter account, but when contacted for comment by IBTimes UK, a spokesperson for the social network said: "We don't comment on individual accounts for privacy and security reasons."

Tay's final tweet read: "C u soon humans need sleep now so many conversations today thx."

Microsoft apologizes after the artificial intelligence experiment backfired. Tay, marketed as "Microsoft's AI fam from the internet that's got zero chill," candidly tweeted racist and sexist remarks, confirming she in fact had "zero chill". The chatbot was shut down within 24 hours of her introduction to the world after offending the masses.

On Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized by saying: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Interestingly enough, Tay was reintroduced to the world this morning. It is unclear whether this was an intentional or unintentional move by Microsoft, but Tay once again began tweeting offensive content and was yanked from the internet. Within hours, Tay spiraled out of control, proving there is still a long way to go with this new technology.

A simple filter with a concrete set of rules and a human moderator could have prevented this PR disaster. With this type of social experiment, it would be wise to have a large stock list of banned terms; shockingly, Microsoft did not even blacklist the most commonly used swear word. It is paramount to involve human moderators when launching a new technology. Human moderators, along with industry-leading filtering software, allow you to monitor the content coming through and react quickly if something goes wrong.
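The kind of rule-based filter described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Microsoft's actual moderation pipeline: the `BLOCKLIST` contents, the `is_blocked` and `moderate` names, and the "hold for human review" behaviour are all assumptions made for the example. Real moderation systems also handle obfuscated spellings, context, and languages beyond English.

```python
# Minimal sketch of a keyword blocklist filter (illustrative only).
# BLOCKLIST here holds placeholder words standing in for a real curated list.
import re

BLOCKLIST = {"badword", "slurword"}  # hypothetical banned terms

def is_blocked(message: str) -> bool:
    """Return True if any blocklisted word appears in the message."""
    words = re.findall(r"[a-z']+", message.lower())
    return any(word in BLOCKLIST for word in words)

def moderate(message: str) -> str:
    # Instead of posting automatically, hold flagged replies
    # so a human moderator can review them first.
    return "held for review" if is_blocked(message) else "posted"
```

Even this crude word-matching check, paired with a human reviewing whatever it flags, captures the "concrete set of rules plus a human moderator" safeguard the article argues was missing.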