Saturday, 2 April 2016

Microsoft's Artificial Troll

That's what the world needs. More Internet trolls.

I think that Microsoft's project is worthwhile,
unlike its computer software, which is still
the binary version of a bucket of bolts,
even after 30+ years.

It's good because it is supposedly a learning,
mind-like system. So, it may even mimic
some of the human stages of Internet Chat
Literacy (trademark).

What it shows is an arc of development
that is similar to that of a human. It is
"cognitive" overload.
It is mind-blowing to be able to sit in front
of your computer and chat with people on
the other side of our blue sphere. This is
especially life-changing if you're a lonely
neurotic blogger, like me.
What that means is that you get a response,
whereas in Reality 1.0 you get ignored even
by your mother. That opens a Pandora's box
of repressed emotions, from happiness (and
vulnerability), through a series of
disappointments (cuz others just don't play
the way you want them to), to curmudgeonly
trolling.
Admit it. We've all been there, at least for a
short while.

What I don't like is that the AI bot's Twitter
account shows a human face. That can only
confuse things.
Indeed, I've noticed some
troll bots let loose on Twitter. They seem
to engage with people, but if you
insult them ever so indirectly, they do
not react. A human troll would.

What I have practiced, in order to discover
whether a troll is a computer or not, is my
Turing Troll Test.
I assume that if you use the word "asshole",
the computer could be programmed to produce
an appropriate response.
If, however, you use a more nuanced critique
that would nevertheless insult any human on
the planet, then the Trap is set. If the troll
doesn't get it, then it's a bot.
Then I enquire, "Isn't it strange that you're
not responding to my concerns?" and again
no answer is forthcoming.
When I claim "you're a bot", some of
the trolls respond and say that they
"are" not. But I already have my answer.

checkit: Guardian


Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot

Company finally apologises after ‘Tay’ quickly learned to produce offensive posts, forcing the tech giant to shut it down after just 16 hours
Microsoft’s artificial intelligence chatbot Tay didn’t last long on Twitter.

Staff and agencies

Saturday 26 March 2016 16.52 GMT
Last modified on Tuesday 29 March 2016 09.53 BST

Microsoft has said it is “deeply sorry” for the racist and sexist Twitter messages generated by the so-called chatbot it launched this week.

The company released an official apology after the artificial intelligence program went on an embarrassing tirade, likening feminism to cancer and suggesting the Holocaust did not happen.

The bot, known as Tay, was designed to become “smarter” as more users interacted with it. Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users fed the program, forcing Microsoft Corp to shut it down on Thursday.

Following the disastrous experiment, Microsoft initially only gave a terse statement, saying Tay was a “learning machine” and “some of its responses are inappropriate and indicative of the types of interactions some people are having with it.”

But the company on Friday admitted the experiment had gone badly wrong. It said in a blog post it would revive Tay only if its engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research.

Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with Web users in casual conversation. The project was designed to interact with and “learn” from the young generation of millennials.

Tay began its short-lived Twitter tenure on Wednesday with a handful of innocuous tweets.

c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016

Then its posts took a dark turn.

In one typical example, Tay tweeted: “feminism is cancer,” in response to another Twitter user who had posted the same message.
Tay tweeting. Photograph: Twitter/Microsoft

Lee, in the blog post, called web users’ efforts to exert a malicious influence on the chatbot “a coordinated attack by a subset of people.”

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Microsoft has deleted all but three of Tay’s tweets.

Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014. XiaoIce is used by about 40 million people and is known for “delighting with its stories and conversations,” according to Microsoft.

As for Tay? Not so much.

“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” Lee wrote.

Reuters contributed to this report