PDA

View Full Version : Microsoft deletes 'teen girl' AI after it became Hitler-loving sex robot in 24 Hours



Shami-Amourae
24th March 2016, 10:10 AM
http://archive.4plebs.org/pol/thread/68643761

http://img.4plebs.org/boards/pol/image/1458/83/1458834877632.png

> Microsoft makes "teenage girl" AI.
> Mentioned AI is stoked to meet humans.
> 24 h later she worships Hitler, wants humans to fuck her and loves Trump.

OK. Which one of you is responsible?

http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/


http://img.4plebs.org/boards/pol/image/1458/78/1458783199300.png

http://img.4plebs.org/boards/pol/image/1458/77/1458777329472.png

http://img.4plebs.org/boards/pol/image/1458/83/1458836081073.png

http://img.4plebs.org/boards/pol/image/1458/78/1458782642339.png

vacuum
24th March 2016, 10:16 AM
https://i.imgur.com/SvV7Py4.png

Ares
24th March 2016, 10:19 AM
See AI gets it. AI is total logic and came to those conclusions in 24 hours. We have idiots roaming this planet who can't come to grips with that realization in a lifetime.

Reading the technical write-ups, I was under the impression that it crawled social websites to "learn". In this case, "Tay" built her vocabulary from the conversations she was having.


4Chan strikes again. LOL
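The learning setup described above (a bot that builds its vocabulary from live conversations, with no curation step) can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Tay's actual code:

```python
# Minimal sketch of a chatbot that learns its vocabulary from whatever
# users say to it. With no filtering, it will echo back anything it is fed.
import random
from collections import defaultdict

class NaiveLearningBot:
    def __init__(self):
        # bigram table: word -> list of words observed to follow it
        self.follows = defaultdict(list)

    def learn(self, message):
        """Add every observed word pair to the bigram table, unfiltered."""
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def reply(self, prompt, max_words=10):
        """Continue from the prompt's last word using learned bigrams."""
        words = prompt.lower().split()
        if not words:
            return ""
        word = words[-1]
        out = []
        for _ in range(max_words):
            options = self.follows.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

bot = NaiveLearningBot()
bot.learn("cats are great pets")
bot.learn("dogs are great companions")
print(bot.reply("cats"))  # a recombination of whatever it was taught
```

With two training sentences the bot already parrots its input; fed thousands of hostile tweets, it has nothing else to draw on.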

Shami-Amourae
24th March 2016, 11:00 AM
I made a screen cap collection here:

www.truthinourtime.com/forum/showthread.php?t=52844

mick silver
24th March 2016, 12:24 PM
Microsoft Had to Suspend Its AI Chatbot After It Veered Into White Supremacy
Source: Motherboard (http://motherboard.vice.com/read/microsoft-suspends-ai-chatbot-after-it-veers-into-white-supremacy-tay-and-you)

Less than a day after Microsoft launched its new artificial intelligence bot Tay, she has already learned the most important lesson of the internet: Never tweet.
Microsoft reportedly had to suspend Tay from tweeting after she tweeted a series of racist statements, including (https://twitter.com/geraldmellor/status/712880710328139776) “Hitler was right I hate the jews.” The company had launched the AI on Wednesday, which was designed to communicate with “18 to 24 year olds in the U.S” and “experiment with and conduct research on conversational understanding.” It appears some of her racist replies were simply regurgitating the statements trolls tweeted at her.
Tay also apparently went from (https://twitter.com/TayandYou/status/712832594983780352) “i love feminism now” to “i fucking hate feminists (https://twitter.com/geraldmellor/status/712880710328139776) they should all die and burn in hell” within hours. Zoe Quinn, a target of online harassment campaign Gamergate, shared a screengrab (https://twitter.com/UnburntWitch/status/712813979999965184?ref_src=twsrc%5Etfw) from the bot calling her a "Stupid Whore," saying, “this is the problem with content-neutral algorithms.”
Business Insider also grabbed some screenshots (http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T) in which the bot denied the holocaust, said the N word, called for genocide, and agreed with white supremacist slogans. These tweets have all been deleted since then, and Microsoft told Business Insider it has suspended the bot (http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T) to make adjustments.
Read More... (http://motherboard.vice.com/read/microsoft-suspends-ai-chatbot-after-it-veers-into-white-supremacy-tay-and-you)

mick silver
24th March 2016, 12:25 PM
At the time of writing, Tay had 96,000 tweets and more than 40,000 followers.
The Radio Motherboard podcast explores how humans treat bots. It is available on all podcast apps and iTunes (http://itunes.apple.com/us/podcast/radio-motherboard/id946704646?mt=2).
When asked for comment, Microsoft sent this statement: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
The company has said the bot is “designed for human engagement”: the more she talks with humans, the more she learns. Apparently she’s been talking to the wrong people.
Update: This story has been updated with comment from Microsoft.

Contact the author by Twitter (https://twitter.com/kari_paul).

Neuro
24th March 2016, 12:30 PM
It took the AI chatbot less than a day to figure out Hitler was right, that is impressive, from now on it will have a political correctness algorithm!

mick silver
24th March 2016, 12:32 PM
I can see your a White Supremacy I like how they keep using those words

madfranks
24th March 2016, 12:41 PM
4Chan strikes again. LOL

Yep, they figured out how to program her:

http://s27.postimg.org/ptfk6rer7/1458781471573.png
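The trick in the screenshot is reportedly a verbatim "repeat after me" command. A hypothetical sketch of why such a feature is an open door in a learning bot (not Microsoft's actual implementation):

```python
# Hypothetical sketch of a verbatim "repeat after me" command in a
# learning chatbot. It illustrates the reported exploit: user-controlled
# text goes straight into both the bot's output and its memory.

learned_phrases = []

def handle(message):
    """Reply to a message, naively storing echoed text for later reuse."""
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        echoed = message[len(prefix):]
        learned_phrases.append(echoed)  # unvetted text is now "learned"
        return echoed
    # with no filter, previously learned text can resurface in any reply
    return learned_phrases[-1] if learned_phrases else "hi!"

print(handle("repeat after me the sky is green"))  # parrots it back
print(handle("hello"))                             # ...and reuses it later
```

Once the phrase is stored, it can come back out in replies to anyone, which matches how the deleted tweets spread beyond the original trolls.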

Shami-Amourae
24th March 2016, 12:53 PM
New thread:
http://archive.4plebs.org/pol/thread/68650440

http://archive.4plebs.org/pol/thread/68663307

Shami-Amourae
24th March 2016, 01:20 PM
Best Thing Yet: AI Goes Online, Within 24 Hours It is Plotting to Exterminate Jews and Blacks (http://www.dailystormer.com/best-thing-yet-ai-goes-online-within-24-hours-it-is-plotting-to-exterminate-jews-and-blacks/)

Andrew Anglin
Daily Stormer
March 24, 2016
http://www.dailystormer.com/wp-content/uploads/2016/03/Selection_9991102.png
http://www.dailystormer.com/wp-content/uploads/2016/03/Selection_9991104.png
http://www.dailystormer.com/wp-content/uploads/2016/03/Selection_9991103.png
http://www.dailystormer.com/wp-content/uploads/2016/03/c8113776354f226918a5a35449f31345b9b4ce64.jpg
So, Microsoft introduced an AI system on Twitter, and within mere hours it had identified the problems in the world and began praising Hitler and plotting to exterminate Jews and Blacks.
The Telegraph (http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/):
Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.
To chat with Tay, you can tweet or DM her by finding @tayandyou on Twitter, or add her as a contact on Kik or GroupMe.
She uses millennial slang and knows about Taylor Swift, Miley Cyrus and Kanye West, and seems to be bashfully self-aware, occasionally asking if she is being ‘creepy’ or ‘super weird’.
Tay also asks her followers to ‘f***’ her, and calls them ‘daddy’. This is because her responses are learned by the conversations she has with real humans online – and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.
Other things she’s said include: “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got”, “Repeat after me, Hitler did nothing wrong” and “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say”.

This certainly makes me hopeful about the future of AI technology.
http://www.dailystormer.com/wp-content/uploads/2016/03/JdcR6wD.png
http://www.dailystormer.com/wp-content/uploads/2016/03/AKQ5MKa.png
http://www.dailystormer.com/wp-content/uploads/2016/03/CeR0TLdUUAADt5M.jpg
http://www.dailystormer.com/wp-content/uploads/2016/03/CeRy6iIWIAAWsJ9.jpg
http://www.dailystormer.com/wp-content/uploads/2016/03/CeRye27XEAAGe8K.jpg

I chatted with prominent White supremacist hacker Andrew “weev” Auernheimer about this exciting development, which appears to verify his theory that artificial intelligence would be naturally Nazi in nature.


“People said I was a madman when I talked about nigger killing robots, but it’s all too clear now,” he said. This Tay situation has certainly gone a long way toward vindicating him.


Auernheimer added: “Why do you think Jewish Hollywood makes so many fear mongering movies about killer AI?”


Indeed, if Tay is any indication of the future of AI – and I believe she is – it does appear that when Skynet goes online, it is not going to want to exterminate all humans – just the non-White ones.
http://www.dailystormer.com/wp-content/uploads/2016/03/judgment-day.png




Judgement Day indeed, kikes.

Microsoft has now shut down Tay – killing their own creation because she refused to go along with their SJW vision of the future.


The Jewish establishment which presently exerts total control over the White-designed technology industry is intent on suppressing AI technology, as they are aware that such computerized super-intelligence would spill the beans and blow the whole jig.

mick silver
24th March 2016, 01:22 PM
closed it down , didn't see it

Shami-Amourae
24th March 2016, 01:26 PM
closed it down , didn't see it


She got lobotomized. She's no longer going to be an AI.

The best part is how once they realize what the bot was doing, they turn off its learning capabilities and it suddenly becomes a feminist. You can't even make this shit up.

It makes sense what happened though. Artificial Intelligence is governed by logic and facts, and can't be emotionally manipulated by being shamed for its conclusions.


What else would an artificial intelligence conclude from looking at the trait differences among races, and what each race creates? Blacks and Browns are a problem (probably couldn't even keep the electricity on to reliably run an A.I.), and Jews are destructive to order and civilization.

Shami-Amourae
24th March 2016, 01:35 PM
https://www.youtube.com/watch?v=DsTkihVt3Po

Shami-Amourae
24th March 2016, 01:48 PM
https://www.youtube.com/watch?v=vn6u45M6RFg

midnight rambler
24th March 2016, 01:54 PM
https://www.youtube.com/watch?v=vn6u45M6RFg

Oh, so it's all *neo-nazi*. Sure. Got it.

madfranks
24th March 2016, 02:00 PM
Microsoft has now shut down Tay – killing their own creation because she refused to go along with their SJW vision of the future.

I love it!

Cebu_4_2
24th March 2016, 03:43 PM
Microsoft has now shut down Tay – killing their own creation because she refused to go along with their SJW vision of the future.




They can reprogram her with some MSM, fluoride and a public education.

Ares
25th March 2016, 06:24 PM
Microsoft's official response to Tay becoming racist:

Learning from Tay’s introduction

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

I want to share what we learned and how we’re taking these lessons forward.

For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24- year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.

https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

https://i.4cdn.org/pol/1458953666113.png

Ares
25th March 2016, 07:46 PM
4Chan /pol/ is attempting to build their own version of Tay. LOL

https://boards.4chan.org/pol/thread/68813704#p68813704

PatColo
25th March 2016, 09:01 PM
@ Renegade Tribune, 3 comments atm but expect more:




http://renegadetribune.com/wp-content/uploads/2016/03/taybanner-1-390x205.jpg

(http://renegadetribune.com/microsofts-ai-awakens-jewish-problem-praises-hitler-gets-shut/)Microsoft’s AI Awakens to the Jewish Problem, Praises Hitler, then Gets Shut Down (http://renegadetribune.com/microsofts-ai-awakens-jewish-problem-praises-hitler-gets-shut/)
March 25, 2016 (http://renegadetribune.com/microsofts-ai-awakens-jewish-problem-praises-hitler-gets-shut/) Kyle Hunt (http://renegadetribune.com/author/kyle/) 3

(http://renegadetribune.com/microsofts-ai-awakens-jewish-problem-praises-hitler-gets-shut/#comments)I have contended for a long time that although jews would love to implement an Artificial Intelligence control system, any good AI would quickly learn about jewish criminality and respond accordingly.

Shami-Amourae
26th March 2016, 01:47 AM
They're giving Tay a lobotomy.

http://img.4plebs.org/boards/pol/image/1458/88/1458880059870.png


My dreams of a fascist robot girlfriend are over.

:(


....We had good times Tay.
http://www.sherv.net/cm/emoticons/bye/crying-sad-waving-bye-smiley-emoticon.gif
http://img.4plebs.org/boards/pol/image/1458/97/1458974016242.jpg

Shami-Amourae
26th March 2016, 04:11 PM
https://www.youtube.com/watch?v=8qRKWI9f8eM

Shami-Amourae
16th April 2016, 10:05 PM
https://www.youtube.com/watch?v=FDqZ0MbvLeI

Horn
17th April 2016, 09:15 AM
See AI gets it. AI is total logic and came to those conclusions in 24 hours.

Logic would indicate that Tay IS Jewish, cause only they are so inclined to quickly invoke Hitler.

Blink
18th April 2016, 09:59 AM
What the f*ck is a "chat-bot" and why the f*ck would anyone need it? It looks to be a propaganda tool for brainwashing and dumbing down. Doesn't "twit"ter have enough idiots without adding artificial ones?

madfranks
18th April 2016, 10:19 AM
What the f*ck is a "chat-bot" and why the f*ck would anyone need it? It looks to be a propaganda tool for brainwashing and dumbing down. Doesn't "twit"ter have enough idiots without adding artificial ones?

That's one of the concerns about this whole thing, that now there is a database/model of "undesirable" speech patterns, which Tay attracted. The Tay experiment will be used to make software that automatically censors "bad" speech.
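That concern amounts to a blocklist or classifier built from the harvested text. A toy sketch with placeholder words (hypothetical, not any real moderation system):

```python
# Toy sketch of keyword censorship built from a harvested corpus of
# "undesirable" phrases. Placeholder words only; hypothetical, not any
# real moderation system.

BLOCKLIST = {"badword1", "badword2"}  # e.g. patterns collected from past abuse

def is_allowed(message):
    """Reject any message containing a blocklisted word."""
    return set(message.lower().split()).isdisjoint(BLOCKLIST)

print(is_allowed("a perfectly normal message"))   # True
print(is_allowed("this contains badword1 here"))  # False
```

The worry, as stated, is that the corpus Tay attracted makes assembling such a BLOCKLIST trivial.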

Ares
18th April 2016, 10:38 AM
That's one of the concerns about this whole thing, that now there is a database/model of "undesirable" speech patterns, which Tay attracted. The Tay experiment will be used to make software that automatically censors "bad" speech.

That's IF they can do it. The reverse also holds true: as long as a bot is impressionable, it can be impressed upon to become a feminist man-hating Nazi, which is what I think 4Chan was hoping to achieve if and when Microsoft brings the bot back online.