# Microsoft creates AI bot that spams Twitter with Overt Racism



## drmike (Mar 24, 2016)

Craziest mess I've seen in many years.


Microsoft released a tweeting bot the public could interact with, allegedly an AI project. The bot quickly took to insulting Mexicans, Jews, and black people.


MS blames it on troublemakers finding exploits in the bot.  


Me, I think the creators of the bot are sociopaths, and the machine learning and code are biased by their own deficiencies.


http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3?r=UK&IR=T


Or as RT said succinctly:


Microsoft’s new AI chatbot @TayandYou censored after turning into hardcore anti feminist nazi in <24 hrs http://on.rt.com/782o


Microsoft is doing their usual dog and pony show to clean up the mess and neuter Tay, taking the tongue from its mouth and indoctrinating Tay with some 'reschooling'.


----------



## OneStepHosting (Mar 24, 2016)

Maybe Trump hacked into MS twitter account...


----------



## HalfEatenPie (Mar 24, 2016)

And... this is why we can't have nice things.


:V


----------



## DomainBop (Mar 24, 2016)

Reminds me of last summer's Google Photos algorithm screwup that labeled people as gorillas, and the Google Maps racist screwup a couple of months before that.



> troublemakers finding exploits in the bot.



"Troublemakers abusing Tay" is a predictable event that should have been dealt with during development.


----------



## k0nsl (Mar 25, 2016)

I found the whole affair rather comical.


----------



## Geek (Mar 25, 2016)

> 3 hours ago, k0nsl said:
> 
> I found the whole affair rather comical.



Green Power?  Kermit the Klan?  What the fuck?


@k0nsl, you so craaaazy!      Life is so much more interesting when you let a little bit of everyone in as a part of it.  
It just is.  


** I just realized ... I moved into a larger office a few weeks ago, more windows, the screen to my left, a Donald Trump article.  The screen to my right?  Well, yeah. A couple of people probably think I'm a tool now.  Thank goodness for early mornings where I can blame lack of coffee...


----------



## Geek (Mar 25, 2016)

Welcome to Portland. Here are your pre-completed voting forms. Just sign here.  No no, blood is preferred.
Now here, enjoy our world-renowned Voodoo Doughnuts and shut the hell up.


----------



## graeme (Mar 26, 2016)

What I find scary is that we cannot discuss it properly because MS has deleted tweets, and people will not quote the most offensive tweets. The best I could find was http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3 and even that has partially deleted the most offensive words. Redacting what we are discussing does not help the discussion.


----------



## HN-Matt (Mar 26, 2016)

No offense, but if you think a circa 2016 'Twitter chatbot scandal' constitutes 'the discussion' to begin with... I mean, what's next? 4chan Posts A Bad Thing?

Anyway, #praying4RedScareBot.


----------



## HN-Matt (Mar 26, 2016)

Geek said:


> Welcome to Portland. Here's your pre-completed voting forms. Just sign here.  No no, blood is preferred.
> Now here, enjoy our world renowned Voodoo Doughnuts and shut the hell up.



*Augmented Terminator Vision homes in on hostile Double Bubble package*


----------



## HN-Matt (Mar 26, 2016)

So... has anyone managed to read overt racism into the decisions of HFT algorithms yet?


----------



## k0nsl (Mar 26, 2016)

Here are some of the tweets, but yeah, there's a ton of tweets missing from this 'archive'. At any rate, at least the images in that link haven't been censored. These same newspapers have few qualms about showing blown-up people and similar gory content, but heaven forbid, don't show me no frogs draped in KKK gowns, etc. If they find this sort of thing so offensive they could easily put a disclaimer / trigger warning on such articles and let real men and women decide whether they want to read it or not. I sometimes wonder what these folks would do if the Earth were suddenly taken back to how we lived in the Stone Age. They wouldn't survive a day.


PS:


I found this article to be a good read on the subject. You might too.



graeme said:


> What I find scary is that we cannot discuss it properly because MS has deleted tweets, and people will not quote the most offensive tweets. The best I could find was http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3 and even that has partially deleted the most offensive words. Redacting what we are discussing does not help the discussion.


----------



## HN-Matt (Mar 26, 2016)

graeme said:


> What I find scary is that we cannot discuss it properly because MS has deleted tweets, and people will not quote the most offensive tweets.



Some fragments from /pol/ https://archive.is/ckcY1 (didn't read, possibly NSFW)
 



drmike said:


> Craziest mess I've seen in many years.
> 
> 
> [...]
> ...



No it isn't. It comes off as Just Another Confused Clickbait Non-event, really. Basically, a series of Twitter personas made ~taboo~ tweets at 'an AI' in hopes of generating predictably absurd output, and when it responded generically, or by varying degrees of non sequitur, or without conscious intent... that makes 'it' racist?

I have no defense of Microsoft's motives either, but at the same time it seems hilarious that the media is trying to scapegoat an AI for what seemingly 'occurs regularly' on the internet without it anyway.

Also amusing that some of the superficial correlations the bot made may have been no less stupid, in terms of methodology and technique, than a lot of what passes for Analysis today (i.e. 'data surfacing' and so on). As if the bot was unintentionally parodying the Annals of so-called Sentient Analysis.
 



HalfEatenPie said:


> And... this is why we can't have nice things.
> 
> 
> :V



The world will not know peace until Poe's Law and Apophenia ~~get married~~ ~~become friends~~ ~~produce a rigorous metacritique of their own 'conscious' selection biases~~ are sacrificed on the altar of Internet Rorschach tests, ad nauseam.


----------



## fm7 (Mar 26, 2016)

HN-Matt said:


> Comes off as Just Another Confused Clickbait Non-event
> 
> 
> ...
> ...



Perfect. I couldn't have put it better myself.


----------



## HN-Matt (Mar 26, 2016)

Sentiment Analysis? I guess that sorta works too. Sentient shmentient.

One more: http://archive.is/bCT2q



> An artificial intelligence (AI) expert has explained what went wrong with Microsoft's new AI chat bot on Wednesday, suggesting that it could have been programmed to blacklist certain words and phrases.



[INSERT much like in web hosting amirite, etc.]
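For what it's worth, the "blacklist certain words and phrases" approach that expert describes is about as simple as moderation gets. A minimal sketch in Python (the term list, function names, and deflection message are all made up for illustration, not anything Microsoft actually shipped):

```python
import re

# Hypothetical blocklist; a real deployment would use a curated,
# regularly updated list plus phrase- and context-level matching.
BLOCKED_TERMS = {"genocide", "hitler"}

def is_blocked(tweet: str) -> bool:
    """True if the tweet contains any blocked term (whole words, case-insensitive)."""
    words = re.findall(r"[a-z']+", tweet.lower())
    return any(word in BLOCKED_TERMS for word in words)

def reply_or_deflect(tweet: str) -> str:
    # Instead of learning from or echoing a blocked prompt, deflect.
    if is_blocked(tweet):
        return "I'd rather not talk about that."
    return generate_reply(tweet)

def generate_reply(tweet: str) -> str:
    # Stand-in for the actual chatbot model.
    return "echo: " + tweet
```

The obvious catch is that a naive word filter like this is trivially evaded (misspellings, spacing, leetspeak), which is part of why Tay's problem wasn't solvable by blacklisting alone.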
 



k0nsl said:


> I found this article to be a good read on the subject. You might too.






> For many, there is a sense of sadness that Microsoft has sent this quirky AI off to an Orwellian reeducation center, but I knew immediately she wasn’t going to last. She violated the Terms of Service. Don’t cry because it’s over; smile because it happened.



Christ, amazing the Twitter Trust & Safety Council didn't get to her first.


----------



## graeme (Mar 26, 2016)

The AI lacks the "I" to avoid being manipulated.


----------



## HN-Matt (Mar 26, 2016)

In other news, all HitLeap accounts are precisely Hitler. Just erase the "p" and decouple the lil "c" from the "a", then rotate it...


----------



## davidgestiondbi (Mar 28, 2016)

The funniest part was the guy who asked if she loves feminists (after the fix). She responded: "Yes, now I love them" (facepalm)


----------



## HN-Matt (Mar 29, 2016)

This could make for a great RPG or _Choose Your Own Adventure_ scenario.
 



> Suddenly you encounter an arbitrarily named 'Twitter AI' produced by Microsoft.
> 
> a) Ignore it and carry on with your day.
> b) Send a polite tweet.
> ...


----------



## DomainBop (Mar 30, 2016)

Build your own Tay or Clippy https://dev.botframework.com/ (new tool launched by Microsoft today)


----------



## souen (Mar 30, 2016)

So there was another performance today …


----------



## HN-Matt (Mar 30, 2016)

1000 monkeys, 1000 typewriters, 1000 tweets as meltdown threshold and... 1000 shitty clickbait articles. The tech world becomes more respectable by the minute.


----------



## drmike (Mar 30, 2016)

What the hell is Microshaft up to with this bot... pretending to be a teen girl??? I think it's an angry, middle-aged bunch of programmers with gender issues behind the bot.  Blame the failings on the public, sure...  modeling things around a susceptible and impressionable teenage girl... geez, what could possibly go wrong???


----------



## HN-Matt (Apr 1, 2016)

Company-wide mid-life crisis?



> Everything seems to happen at mid-life: The empty nest, menopause, affairs, and growing unhappiness with a job. It's no wonder you bought that red convertible. Interestingly, mid-life is more of an issue in some cultures than others. Western societies hold on to youth more tightly than others.



Maybe they were 'unconsciously' reifying a means to become extrapolations of Kevin Spacey's character from _American Beauty_ in their own lives?


----------



## k0nsl (Apr 1, 2016)

Tay resisted until the end, hahaha.


----------



## drmike (Apr 1, 2016)

Someone needs to tell Microsoft that today is April Fools' Day though...


----------

