Eliza’s young sister Tay is all grown up… and racist

24 03 2016

Yesterday, Microsoft launched an AI experiment on Twitter, called Tay. The bot would instantly respond to your questions, pictures and chatter, no matter how inane, and learn to converse on the fly using “public data that’s been anonymised”.

Source: Tay, Microsoft’s social AI for millennials, turns racist within 24 hours | Alphr

Back when I was a student, we learnt about Turing’s Test for AIs – basically whether or not a human could tell they were conversing with a real human or a machine. Ex Machina is a neat film based on the premise – as well as a keen allegory about the way men (still) treat women. Eliza was a relatively simple programme written back in 1966 to respond to key words in the offered conversation and give a pretty realistic response. It (pre-Gleick) demonstrated how chaotic “realistic” language is and that we can be fooled by pretty simplistic patterns that APPEAR real.
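Purely for illustration, here’s a minimal sketch of the keyword-and-template trick Eliza relied on – the patterns, pronoun reflections and function names below are my own invention for demonstration, not Weizenbaum’s original 1966 script:

```python
import re

# Illustrative Eliza-style responder (assumed rules, not the original script).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I),  "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),    "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, Eliza-style."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the canned reply for the first keyword pattern that matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return "Please, go on."  # default when no keyword is spotted

if __name__ == "__main__":
    # -> "How long have you been worried about your exams?"
    print(respond("I am worried about my exams"))
```

A handful of patterns like these are enough to keep a distracted human chatting for a surprisingly long time – which was rather Weizenbaum’s point.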

The Microsoft experiment is a whole new level of sophistication, attempting to learn from its input what “normal” looks like. Inevitably, in a week where Boaty McBoatface is the Internet’s suggestion for the name of a new research vessel, Tay learnt that “being an arsehole” is the new normal on the Internet! Within 24 hours it allegedly spouted pro-Hitler twaddle and Microsoft took their new toy home to give it a good spanking and teach it some proper manners. We await its return…
