https://www.technologyreview.com/s/601897/tougher-turing-test-exposes-chatbots-stupidity/
Tougher Turing Test Exposes Chatbots’ Stupidity
We have a long way to go if we want virtual assistants to understand us.


by Will Knight
July 14, 2016
User: Siri, call me an ambulance.
Siri: Okay, from now on I’ll call you “an ambulance.”
Apple fixed this error shortly after its virtual assistant was first released in
2011. But a new contest shows that computers still lack the common sense
required to avoid such embarrassing mix-ups.
The results of the contest were presented at an academic conference in New York
this week, and they provide some measure of how much work needs to be done to
make computers truly intelligent.
Illustration by Max Bode
The Winograd Schema Challenge asks computers to make sense of sentences
that are ambiguous but usually simple for humans to parse. Disambiguating
Winograd Schema sentences requires some common-sense understanding. In
the sentence “The city councilmen refused the demonstrators a permit because
they feared violence,” it is logically unclear who the word “they” refers to,
although humans understand immediately, because common sense says it is the
councilmen, not the demonstrators, who would fear violence.
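
To make the task concrete, here is a minimal sketch of how a Winograd schema might be represented and scored. The field names and the random-guess baseline are purely illustrative, not the contest's actual data format or evaluation code:

```python
import random

# One Winograd schema: an ambiguous sentence, the pronoun to resolve,
# and the candidate antecedents. Field names here are illustrative,
# not the contest's actual format.
schema = {
    "sentence": ("The city councilmen refused the demonstrators a permit "
                 "because they feared violence."),
    "pronoun": "they",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answer": "the city councilmen",  # resolving this requires common sense
}

def random_baseline(schemas, trials=10_000):
    """Accuracy of picking a candidate uniformly at random (~50% for
    two-candidate schemas; the contest's mix of questions put chance
    at about 45%)."""
    hits = 0
    for _ in range(trials):
        s = random.choice(schemas)
        if random.choice(s["candidates"]) == s["answer"]:
            hits += 1
    return hits / trials

print(f"random baseline: {random_baseline([schema]):.2f}")
```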
The programs entered into the challenge were only a little better than random at
choosing the correct meaning of sentences. The best two entrants were correct
48 percent of the time, compared with 45 percent for answers chosen at random.
To be eligible to claim the grand prize of $25,000, entrants would need to
achieve at least 90 percent accuracy. The joint best entries came from Quan Liu, a
researcher at the University of Science and Technology of China, and Nicos Issak,
a researcher from the Open University of Cyprus.
“It’s unsurprising that machines were barely better than chance,” says Gary
Marcus, a research psychologist at New York University and an advisor to the
contest. That’s because giving computers common-sense knowledge is notoriously
difficult. Hand-coding knowledge is impossibly time-consuming, and it isn’t
simple for computers to learn about the real world by performing statistical
analysis of text. Most of the entrants in the Winograd Schema Challenge try to use
some combination of hand-coded grammar understanding and a knowledge base of
facts.
Marcus, who is also the cofounder of a new AI startup, Geometric
Intelligence, says it’s notable that Google and Facebook did not take part in the
event, even though researchers at these companies have suggested they are making
major progress in natural language understanding. “It could’ve been that those
guys waltzed into this room and got a hundred percent and said ‘hah!’” he says.
“But that would’ve astounded me.”
The contest serves not only as a measure of progress in AI; it also shows how
hard it will be to build more intuitive and graceful chatbots, and to train computers
to extract more information from written text.
Researchers at Google, Facebook, Amazon, and Microsoft are turning their
attention to language. They are using the latest machine learning techniques,
especially “deep learning” neural networks, to develop smarter, more intuitive
chatbots and personal assistants (see “Teaching Machines to Understand Us”).
Indeed, with chatbots and voice assistants becoming more common, and with
dramatic progress in areas like image and speech recognition, you might think
that machines are getting pretty good at understanding language.
One of the two first-place entries did, in fact, use a cutting-edge machine learning
approach. Liu’s group, which included researchers from York University in
Toronto and the National Research Council of Canada, used deep learning to train
a computer to recognize the relationship between different events, such as
“playing basketball” and “winning” or “getting injured,” from thousands of
texts.
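
Liu's group learned these event associations with deep learning; the sketch below conveys the underlying idea using plain co-occurrence statistics (pointwise mutual information) over an invented toy corpus, not a neural network and not the team's actual system:

```python
from collections import Counter
from itertools import combinations
import math

# Toy "texts", each reduced to the events it mentions. A real system
# would extract these from thousands of documents; this corpus is
# invented purely for illustration.
docs = [
    {"play_basketball", "get_injured"},
    {"play_basketball", "win"},
    {"play_basketball", "win"},
    {"study", "pass_exam"},
    {"study", "win"},
]

event_counts = Counter()
pair_counts = Counter()
for events in docs:
    event_counts.update(events)
    pair_counts.update(frozenset(p) for p in combinations(sorted(events), 2))

def pmi(a, b, n=len(docs)):
    """Pointwise mutual information: how much more often two events
    co-occur than they would if they were independent."""
    joint = pair_counts[frozenset((a, b))] / n
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((event_counts[a] / n) * (event_counts[b] / n)))

print(pmi("play_basketball", "get_injured"))  # positive: events are associated
print(pmi("study", "get_injured"))            # -inf: never co-occur here
```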
“I was delighted to see deep learning used,” says Leora Morgenstern, a senior
scientist at Leidos Corporation, a technology consulting firm, and one of the
organizers of the challenge.
Liu’s team claims that after fixing a problem with the way its system parsed the
contest’s questions, it is almost 60 percent accurate. Morgenstern cautions,
however, that even if these claims were confirmed, the accuracy would still be far
worse than a human's.
Winograd Schema sentences were first highlighted as a way to gauge machine
comprehension by Hector Levesque, an artificial-intelligence researcher at the
University of Toronto. They are named after Terry Winograd, a pioneer in the field
and a professor at Stanford University who built one of the first conversational
computer programs.
The challenge was proposed in 2014 as an improvement on the Turing Test. Alan
Turing, a forefather of computing and artificial intelligence who in the 1950s
pondered whether machines might one day think as humans do, suggested a simple
way of testing a machine’s intelligence. His idea was for a machine to try to fool a
person into believing that he or she was conversing with another human in a text
conversation.
The problem with the Turing Test is that it’s often easy for a program to fool a
person using simple tricks and evasions. But a program cannot parse Winograd
Schema sentences, or other similarly ambiguous ones, without some form of
general knowledge.
…
https://www.technologyreview.com/s/601901/darpa-hopes-automation-can-create-the-perfect-hacker/
DARPA Hopes Automation Can Create the Perfect
Hacker
Seven Pentagon supercomputers are getting ready to attack one another.


by Tom Simonite
July 13, 2016
Look out, human hackers. Pentagon research agency DARPA says people are
too slow at finding and fixing security bugs and wants to see smart software
take over the task.
The agency released details today of a contest that will put that idea to the test at
the annual DEF CON hacking conference in Las Vegas next month. Seven
teams from academia and industry will pit high-powered computers
provided by the agency against one another. Each team’s system must run a
suite of software developed by DARPA for the event. Contestants win
points by looking for and triggering bugs in software run by competitors
while defending their own software.
Mike Walker, the DARPA program manager leading the Cyber Grand
Challenge project, claims the approach could make the world safer.
“The comprehension and reaction to unknown flaws is entirely manual
today,” he said in a briefing Wednesday. “We want to build autonomous
systems that can arrive at their own insights about flaws [and] make their own
decisions about when to release a patch.”
When a malicious hacker finds a new flaw in a piece of commonly used
software, they can typically exploit it for a year before it is fixed, Walker
said. “We want to bring that response down to minutes or seconds.”
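
The automation Walker describes starts with machines finding flaws on their own. A classic ingredient of that is fuzzing: feeding a program random or mutated inputs until it crashes. The sketch below is a toy example of the idea, not DARPA's competition infrastructure, fuzzing a deliberately buggy parser:

```python
import random
import string

def fragile_parser(data: str) -> int:
    """A deliberately buggy target: crashes on inputs containing '!'."""
    if "!" in data:
        raise ValueError("unhandled token")
    return len(data)

def fuzz(target, rounds=10_000, max_len=16):
    """Throw random strings at the target and report any input that
    makes it raise -- the simplest form of automated bug discovery."""
    alphabet = string.ascii_letters + string.punctuation
    for _ in range(rounds):
        data = "".join(random.choices(alphabet, k=random.randint(0, max_len)))
        try:
            target(data)
        except Exception as exc:
            return data, exc
    return None, None

crasher, error = fuzz(fragile_parser)
if crasher is not None:
    print(f"crashing input: {crasher!r} -> {error}")
```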
…
http://www.foxnews.com/tech/2016/07/18/west-point-taps-artificial-intelligence-to-help-cadetsnegotiate.html
West Point taps artificial intelligence to help cadets negotiate
By Rob Verger
Published July 18, 2016
FoxNews.com
A lone West Point cadet waves the American flag amid a sea of fellow cadets as
they watch Navy take a commanding lead in the first half of the Army-Navy game,
December 6, 1997, at Giants Stadium in East Rutherford. (Reuters)
A company that sells software that analyzes the human voice and touts the
virtues of empathy, rapport, and emotional intelligence is joining forces with
the United States Military Academy at West Point in an effort to help cadets
become better negotiators.
Cogito Corp. is a Boston-based company that makes software that can analyze
a person’s voice in real time. That information, the company says, can help
customer service representatives show more empathy; the result is phone
conversations that are more efficient and personalized, according to Cogito.
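
As a rough illustration of what real-time voice analysis involves, a system might compute simple prosodic signals, such as loudness and pausing, over frames of audio. The features below are hypothetical and chosen for simplicity; the article does not describe Cogito's actual, proprietary analytics:

```python
import numpy as np

def prosody_features(samples: np.ndarray, rate: int = 16_000,
                     frame_ms: int = 30) -> dict:
    """Compute two toy prosodic cues from raw mono audio: per-frame
    RMS energy (loudness) and the fraction of near-silent frames
    (a crude proxy for pausing). Hypothetical features, not Cogito's."""
    frame_len = rate * frame_ms // 1000
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))  # RMS per frame
    silence_threshold = 0.1 * energy.max()
    return {
        "mean_energy": float(energy.mean()),
        "pause_ratio": float((energy < silence_threshold).mean()),
    }

# One second of synthetic audio: half a 200 Hz tone, half silence.
t = np.linspace(0, 0.5, 8_000, endpoint=False)
audio = np.concatenate([np.sin(2 * np.pi * 200 * t), np.zeros(8_000)])
print(prosody_features(audio))
```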
Col. James Ness of West Point said that this kind of technology will help cadets
become better negotiators, a key skill for people in the military.
“Cogito’s behavioral analytics technology will systematically analyze
communication patterns within negotiating sessions and provide insight into
the cadet’s psychological state,” Ness, who directs the engineering psychology
program at West Point, said in a statement. “This technology will provide an
unbiased assessment of how each cadet is being perceived by the other party.
It will deliver insights into how they can modify their behavior to improve
negotiation outcomes.”
A company that makes software designed for people who work in call centers
might seem like a strange fit for West Point, but Cogito has also partnered with the
likes of the Defense Advanced Research Projects Agency (DARPA). The
company has also worked with Massachusetts General Hospital on an app-based
project tailored to analyze the moods of people with depression and
bipolar disorder.
…