News summary (31)

http://www.dailystar.co.uk/news/latest-news/471913/Game-changing-Insect-drone-spy-ISIS-90-daysstraight
ISIS beware: Game changer 'insect' drone allows MoD to spy on ISIS - for 90 days straight
A DRONE that looks like a giant daddy longlegs could transform the war
against Isis.
By Andy Gardner, Exclusive / Published 26th October 2015
GAME CHANGER: The new drone will be able to spy on ISIS for 90 days
The Zephyr UAV will eventually stay airborne for 90 days.
It has been described as a potential “game changer” in the battle against
extremists in Iraq and Syria.
Airbus, maker of the Zephyr, claims it “endures like a satellite,
focuses like an aircraft and is cheaper than both of them”.
…
http://www.technologyreview.com/news/542741/inhabit-this-teddy-bears-body-using-virtual-reality/
Inhabit This Teddy Bear’s Body Using Virtual Reality
Japanese startup Adawarp thinks teleporting inside the body of a robotic
stuffed animal could be a good way to keep in touch with loved ones.

By Signe Brewster on October 26, 2015
Why It Matters
New ways of communicating via computers are often widely adopted.
You take control of this bear by donning a virtual reality headset to see through its
eyes and control its head.
Companies inventing things to do with virtual reality headsets like the Oculus
Rift, which launches next year, mostly use them to transport you into
imaginary worlds. Tatsuki Adaniya has a different idea—teleporting you into
the body of a robotic teddy bear.
Adaniya has built software that lets you strap on an Oculus Rift headset and peer
out through the bear’s eyes. You can talk to people near the bear through its
speaker and hear them through its microphone, allowing for a two-way
conversation with you in the role of a stuffed animal.
When you turn your head, so does the bear, thanks to a movement-recording
sensor attached to the headset’s strap. An Xbox controller can be used to
move the bear’s arms. “We’re broadcasting human body language,” Adaniya
says.
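The control loop Adaniya describes is straightforward to picture: an orientation sensor on the headset drives the bear's head, and a gamepad drives its arms. The snippet below is a minimal, hypothetical sketch of that mapping in Python; the field names, servo ranges, and command format are all assumptions made for illustration, not details from Adawarp's actual software.

```python
# Illustrative sketch: map headset orientation and gamepad input to
# servo commands for a telepresence puppet. All field names, ranges,
# and the command format are assumptions made for this example.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def head_to_servos(yaw_deg, pitch_deg):
    """Convert headset yaw/pitch (degrees) into pan/tilt servo angles."""
    return {
        "neck_pan": clamp(yaw_deg, -90, 90),
        "neck_tilt": clamp(pitch_deg, -45, 45),
    }

def stick_to_arm(stick_y):
    """Map a gamepad stick value in [-1, 1] to an arm servo angle."""
    return clamp(stick_y, -1.0, 1.0) * 60  # +/- 60 degrees of travel

if __name__ == "__main__":
    # e.g. the wearer looks slightly left and up, pushes the left stick forward
    print(head_to_servos(yaw_deg=-20, pitch_deg=10))
    print({"left_arm": stick_to_arm(0.5)})
```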
Adaniya thinks children and some adults will be interested in taking on the persona
of a stuffed animal—such as a bear, cat, or dog—for fun, or as an unusual way to
stay in touch with distant friends or relatives. His company, Adawarp, just went
through a startup incubator focused on virtual reality companies called River,
which invests at least $200,000 in each company in its program. Adaniya’s project
began after he broke up with a long-distance girlfriend and thought about what
could have helped them communicate.
…
http://www.technologyreview.com/fromtheeditor/542291/the-flip-side-of-the-arab-spring/
The Flip Side of the Arab Spring

By Jason Pontin on October 20, 2015
Jason Pontin
Here are some English-language tweets from jihadis fighting for the Islamic State
of Iraq and Syria, also known as ISIS: “I just noticed our martyred brother r.a. had
a tumblr (I know, how could I have missed it). Make sure to check it out.” And:
“This Syrian guy next 2 me (AbuUbayadah) is so stoked for our op he almost shot
his foot off. Come on bro—safety 1st. :p” And: “Put the chicken wings down n
come to jihad bro.”
In “Fighting ISIS Online,” MIT Technology Review’s senior writer, David
Talbot, describes what a Google policy director has called the “viral moment on
social media” that ISIS is enjoying. Talbot reviews the early and small-scale
counter-efforts designed to “make one-on-one contact online with the people
absorbing content from ISIS and other extremist groups and becoming
radicalized.”
He writes of a “decentralized” social-media campaign by ISIS, supported by
sympathizers in the Middle East, North Africa, and elsewhere, who repost ISIS’s
gruesome videos or produce videos in their own languages that inflame local tribal
and national grievances in an effort to join their regions to the self-declared
caliphate. The reason we care about ISIS’s social-media campaign is that it has
been an animating force in recruiting about 25,000 people to fight in Syria
and Iraq, at least 4,500 of them from Europe and North America. Social media
helped create an army that established a new state.
…
http://cacm.acm.org/magazines/2015/10/192386-rise-of-concerns-about-ai/fulltext#R3
Rise of Concerns about AI: Reflections and Directions
By Thomas G. Dietterich, Eric J. Horvitz
Communications of the ACM, Vol. 58 No. 10, Pages 38-40
10.1145/2770869
Discussions about artificial intelligence (AI) have jumped into the public eye
over the past year, with several luminaries speaking about the threat of AI
to the future of humanity. Over the last several decades, AI—automated
perception, learning, reasoning, and decision making—has become
commonplace in our lives. We plan trips using GPS systems that rely on
the A* algorithm to optimize the route. Our smartphones understand our
speech, and Siri, Cortana, and Google Now are getting better at
understanding our intentions. Machine vision detects faces as we take
pictures with our phones and recognizes the faces of individual people
when we post those pictures to Facebook. Internet search engines rely on a
fabric of AI subsystems. On any day, AI provides hundreds of millions
of people with search results, traffic predictions, and
recommendations about books and movies. AI translates among
languages in real time and speeds up the operation of our laptops by
guessing what we will do next. Several companies are working on cars
that can drive themselves—either with partial human oversight or entirely
autonomously. Beyond the influences in our daily lives, AI techniques are
playing roles in science and medicine. AI is already at work in some
hospitals helping physicians understand which patients are at highest
risk for complications, and AI algorithms are finding important
needles in massive data haystacks, such as identifying rare but
devastating side effects of medications.
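Of the examples above, route planning is the easiest to make concrete. A* is a best-first graph search that expands nodes in order of cost-so-far plus a heuristic estimate of the remaining distance. The following is a minimal illustrative Python sketch on a 4-connected grid with a Manhattan-distance heuristic; the grid representation and function names are assumptions for the example, not anything from the article.

```python
# Minimal A* sketch on a 2D grid (illustrative only).
# Cells are (row, col); 0 = free, 1 = blocked. Manhattan distance is an
# admissible heuristic for 4-connected movement with unit step cost.
import heapq

def a_star(grid, start, goal):
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, cell)
    came_from = {start: None}
    best_g = {start: 0}

    while frontier:
        f, g, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the route by following parent links backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = g + 1
                if new_g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = new_g
                    came_from[nxt] = current
                    heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return None  # no route exists

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))
```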
The AI in our lives today provides a small glimpse of more profound
contributions to come. For example, the fielding of currently available
technologies could save many thousands of lives, including those lost to
accidents on our roadways and to errors made in medicine. Over the
longer-term, advances in machine intelligence will have deeply beneficial
influences on healthcare, education, transportation, commerce, and the
overall march of science. Beyond the creation of new applications and
services, the pursuit of insights about the computational foundations of
intelligence promises to reveal new principles about cognition that can help
provide answers to longstanding questions in neurobiology, psychology,
and philosophy.
On the research front, we have been making slow, yet steady progress on
"wedges" of intelligence, including work in machine learning, speech
recognition, language understanding, computer vision, search,
optimization, and planning. However, we have made surprisingly little
progress to date on building the kinds of general intelligence that
experts and the lay public envision when they think about "Artificial
Intelligence." Nonetheless, advances in AI—and the prospect of new AIbased autonomous systems—have stimulated thinking about the
potential risks associated with AI.
A number of prominent people, mostly from outside of computer
science, have shared their concerns that AI systems could threaten
the survival of humanity.1 Some have raised concerns that machines will
become superintelligent and thus be difficult to control. Several of these
speculations envision an "intelligence chain reaction," in which an AI
system is charged with the task of recursively designing progressively more
intelligent versions of itself and this produces an "intelligence explosion."4
While formal work has not been undertaken to deeply explore this
possibility, such a process runs counter to our current understandings of
the limitations that computational complexity places on algorithms for
learning and reasoning. However, processes of self-design and
optimization might still lead to significant jumps in competencies.
Other scenarios can be imagined in which an autonomous computer
system is given access to potentially dangerous resources (for
example, devices capable of synthesizing billions of biologically active
molecules, major portions of world financial markets, large weapons
systems, or generalized task markets9). The reliance on any
computing systems for control in these areas is fraught with risk, but
an autonomous system operating without careful human oversight
and failsafe mechanisms could be especially dangerous. Such a
system would not need to be particularly intelligent to pose risks.
We believe computer scientists must continue to investigate and address
concerns about the possibilities of the loss of control of machine
intelligence via any pathway, even if we judge the risks to be very
small and far in the future. More importantly, we urge the computer
science research community to focus intensively on a second class of
near-term challenges for AI. These risks are becoming salient as our
society comes to rely on autonomous or semiautonomous computer
systems to make high-stakes decisions. In particular, we call out five
classes of risk: bugs, cybersecurity, the "Sorcerer's Apprentice,"
shared autonomy, and socioeconomic impacts.
The first set of risks stems from programming errors in AI software. We
are all familiar with errors in ordinary software; bugs frequently arise in the
development and fielding of software applications and services. Some
software errors have been linked to extremely costly outcomes and deaths.
The verification of software systems is challenging and critical, and
much progress has been made—some relying on AI advances in theorem
proving. Many non-AI software systems have been developed and
validated to achieve high degrees of quality assurance. For example, the
software in autopilot and spacecraft systems is carefully tested and
validated. Similar practices must be applied to AI systems. One technical
challenge is to guarantee that systems built via machine learning methods
behave properly. Another challenge is to ensure good behavior when
an AI system encounters unforeseen situations (an "unexpected query").
Our automated vehicles, home robots, and intelligent cloud services
must perform well even when they receive surprising or confusing inputs.
Achieving such robustness may require self-monitoring architectures in
which a meta-level process continually observes the actions of the system,
checks that its behavior is consistent with the core intentions of the
designer, and intervenes or alerts if problems are identified. Research on
real-time verification and monitoring of systems is already exploring such
layers of reflection, and these methods could be employed to ensure
the safe operation of autonomous systems.3,6
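As a rough illustration of that self-monitoring idea, the Python sketch below wraps a base decision-making component in a meta-level layer that checks each proposed action against designer-supplied constraints and intervenes with a safe fallback (and an alert) when a check fails. The class, the constraint format, and the fallback policy are assumptions made for this example rather than an architecture from the article or the cited work.

```python
# Illustrative sketch of a meta-level monitoring layer (names and structure
# are assumptions for this example, not the article's design).
from typing import Callable, Iterable

class MonitoredController:
    """Wraps a base controller and vets every proposed action."""

    def __init__(self,
                 base_policy: Callable[[dict], dict],
                 constraints: Iterable[Callable[[dict, dict], bool]],
                 safe_action: dict):
        self.base_policy = base_policy        # the underlying AI component
        self.constraints = list(constraints)  # designer-intent checks
        self.safe_action = safe_action        # conservative fallback

    def act(self, observation: dict) -> dict:
        proposed = self.base_policy(observation)
        for check in self.constraints:
            if not check(observation, proposed):
                # Meta-level intervention: alert and fall back to a safe action.
                print(f"ALERT: constraint {check.__name__} rejected {proposed}")
                return self.safe_action
        return proposed

# Example usage with a toy speed controller for an automated vehicle.
def toy_policy(obs):
    return {"speed": obs["desired_speed"]}

def speed_limit_check(obs, action):
    return action["speed"] <= obs["speed_limit"]

controller = MonitoredController(toy_policy, [speed_limit_check], {"speed": 0})
print(controller.act({"desired_speed": 30, "speed_limit": 50}))  # allowed through
print(controller.act({"desired_speed": 90, "speed_limit": 50}))  # intercepted
```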
…
The Cloud Is Here, Separating Disrupters From Disrupted

By QUENTIN HARDY
Tech historians will look at Oct. 22 as a watershed. Cloud computing is no
longer just on the way, a contender, or even a competitor to traditional
enterprise technology companies. Instead, it is here, full force, and all the
signs are that it is about to get a lot bigger, fast.
A few data points: When the stock market closed on
Thursday, Amazon, Google and Microsoft — arguably the three largest cloud
businesses — declared their quarterly earnings. One hour later, their
collective market capitalization had grown by more than $100 billion
because of their robust results, fueled partly by cloud growth. That $100
billion is 50 percent higher than the $67 billion Dell plans to spend buying
EMC/VMware, the biggest tech merger ever.
Digging in, Amazon had more operating income from Amazon Web
Services, its business renting computing and software applications, than it
did from combined sales of goods in the United States and internationally.
At Microsoft, operating income from cloud and applications businesses
was far better, as a percentage of sales, than the sector that includes the
Windows operating system, long Microsoft’s crown jewel.
A year ago, Amazon Web Services sales grew 87 percent faster than
Amazon’s North American retail business. AWS is still smaller than retail,
but now it is growing 179 percent faster than sales of goods. Microsoft’s
Office 365, a cloud-centric software application, started in June 2011 and
has 18.2 million users; 16 percent of them, 3 million new users, showed up
in the last three months. Google still makes most of its billions from
advertising, but talked up its cloud prospects during its earnings call.
…
SPY: The 'insect' drone will be solar-powered
An early version has already stayed airborne for a record 14 days.
One Army source told us: “Currently, a C130 Hercules does some of the
jobs that need doing.
"Others – like communications – may even need a satellite.
“Employing a satellite costs a huge amount and takes time to put into
place.”
The current model has a wingspan of 28 meters and is covered in
solar panels.
DRONE: The Zephyr UAV will eventually stay airborne for 90 days
NEW: The drone could change the war against ISIS
At night, stored energy powers its propellers.
The larger 90-day drone is due in 2017 but orders are on hold while
the Government reviews defence spending.
A spokesman for the MoD said: “An MoD funded research programme was
recently completed with Airbus to demonstrate the technology underpinning
the Zephyr programme but no decisions have yet been made on which
solution, or combination of solutions, the MoD may develop to meet its
requirements.
“Discussions are ongoing with a number of companies to understand the
options that may be available.”