https://twitter.com/jackclarkSF/status/838986258542026752
an article that yearns for the days when we could toil in obscurity and make advances without all the
hoopla – from our colleague Bert P
and from our colleague Teresa H
Here is the most recent post from the mathbabe. I thought you might find it interesting.
https://mathbabe.org/2017/03/21/guest-post-the-age-of-algorithms/
Guest post: the age of algorithms
by Cathy O'Neil, mathbabe
Artie has kindly allowed me to post his thoughtful email to me regarding my NYU conversation with
Julia Angwin last month.
This is a guest post by Arthur Doskow, who is currently retired but remains interested in the
application and overapplication of mathematical and data oriented techniques in business and
society. Artie has a BS in Math and an MS that is technically in Urban Engineering, but the
coursework was mostly in Operations Research. He spent the largest part of his professional life
working for a large telco (that need not be named) on protocols, interconnection testing and network
security. He is a co-inventor on several patents. He also volunteers as a tutor.
Dear Dr. O’Neil and Ms. Angwin,
I had the pleasure of watching the livestream of your discussion at NYU on February 15. I wanted to
offer a few thoughts. I’ll try to be brief.
1. Algorithms are difficult, and the ones that were discussed were being asked to make difficult
decisions. Although it was not discussed, it would be a mistake to assume a priori that there is
an effective mechanized and quantitative process by which good decisions can be made with
regard to any particular matter. If someone cannot describe in detail how they would
evaluate a teacher, or make a credit decision or a hiring decision or a parole decision,
then it’s hard to imagine how they would devise an algorithm that would reliably perform
the function in their stead. While it seems intuitively obvious that there are better teachers
and worse teachers, reformed convicts and likely recidivist criminals and other similar
distinctions, it is not (or should not be) equally obvious that the location of an individual on
these continua can be reliably determined by quantitative methods. Reliance on a quantitative
decision methodology essentially replaces a (perhaps arbitrary) individual bias with
what may be a reliable and consistent algorithmic bias. Whether or not that represents an
improvement must be assessed on a situation-by-situation basis.
2. Beyond this stark “solvability” issue, of course, are the issues of how to set objectives for how
an algorithm should perform (this was discussed with respect to the possible performance
objectives of a parole evaluation system) and the devising, validating and implementing of a
prospective system. This is a significant and demanding set of activities for any organization,
but the alternative of procuring an outsourced “black box” solution requires, at the least, an
understanding and an assessment of how these issues were addressed.
3. If an organization is considering outsourcing an algorithmic decision system, the RFP process
offers them an invaluable opportunity to learn and assess how a proposed system is designed
and how it will work – What inputs does it use? How does its decision engine operate?
How has it been validated? How will it handle specific test cases? Where has it been
used? To what effect? Etc. Organizations that do not take advantage of an RFP process to
ask these detailed questions and demand thorough and responsive answers have only
themselves to blame.
…
http://www.foxnews.com/tech/2017/03/20/it-looks-like-next-iphone-will-have-augmented-reality.html
It looks like the next iPhone will have augmented reality
By Philip Michaels Senior Editor
Published March 20, 2017
A glass design and edge-to-edge display may be the most visible changes to the next iPhone, but
a growing number of indicators point to augmented reality as a major focus for Apple's
iPhone 8.
The latest sign of an AR-friendly iPhone comes from a Wall Street Journal report on parts
suppliers likely to benefit from this fall's expected iPhone 8 release. That report includes makers
of the kind of 3D sensors a phone would require if it were to boast AR capabilities.
The Journal specifically calls out Lumentum, which supplies parts used in 3D sensors, as
the company reported delivering products for "a high-volume mobile-device application."
Also mentioned by the Journal is STMicroelectronics, which is reportedly providing the
sensors for a 3D camera system Apple plans to include in the iPhone 8.
If it's true that these companies are supplying parts for the next iPhone — and it's worth noting
there's no official confirmation that they are — it would suggest Apple is serious about
delivering on its publicly stated interest in augmented reality. And this Wall Street Journal
report is hardly the first time it's looked like AR is on the agenda for the next iPhone.
Last November, Business Insider reported that Apple was looking to integrate AR technology
into the iPhone's camera app , most likely in time for this fall's iOS 11 release. More recently,
UBS analyst Steven Milunovich predicted AR would be one of the big additions to the iPhone 8 ,
citing an Apple-owned facility in Israel that's entirely focused on implementing the technology.
And just last week, Apple supplier Imagination Technologies announced a new chip technology
that not only promises improved performance in gaming and graphics-intensive apps but also has
implications for AR uses as well.
…
From our colleague Andres R.
http://www.darpa.mil/news-events/2017-03-16
Defense Advanced Research Projects Agency: News and Events
Toward Machines that Improve with Experience
New program seeks to develop the foundations for systems that might someday
“learn” in much the way biological organisms do
[email protected]
3/16/2017
Self-driving taxis. Cell phones that react appropriately to spoken requests. Computers
that outcompete world-class chess and Go players. Artificial Intelligence (AI) is
becoming part and parcel of the technological landscape—not only in the civilian and
commercial worlds but also within the Defense Department, where AI is finding
application in such arenas as cybersecurity and dynamic logistics planning.
But even the smartest of the current crop of AI systems can’t stack up against adaptive
biological intelligence. These high-profile examples of AI all rely on clever programming
and extensive training datasets—a framework referred to as Machine Learning (ML)—to
accomplish seemingly intelligent tasks. Unless their programming or training sets have
specifically accounted for a particular element, situation, or circumstance, these ML
systems are stymied, unable to determine what to do.
That’s a far cry from what even simple biological systems can do as they adapt to and
learn from experience. And it’s light years short of how, say, human motorists build on
experience as they encounter the dynamic vagaries of real-world driving—becoming
ever more adept at handling never-before-encountered challenges on the road.
This is where DARPA’s new Lifelong Learning Machines (L2M) program comes in.
The technical goal of L2M is to develop next-generation ML technologies that can learn
from new situations and apply that learning to become better and more reliable, while
remaining constrained within a predetermined set of limits that the system cannot
override. Such a capability for automatic and ongoing learning could, for example, help
driverless vehicles become safer as they apply knowledge gained from previous
experiences—including the accidents, blind spots, and vulnerabilities they encounter on
roadways—to circumstances they weren’t specifically programmed or trained for.
“Life is by definition unpredictable. It is impossible for programmers to anticipate every
problematic or surprising situation that might arise, which means existing ML systems
remain susceptible to failures as they encounter the irregularities and unpredictability of
real-world circumstances,” said L2M program manager Hava Siegelmann. “Today, if
you want to extend an ML system’s ability to perform in a new kind of situation, you
have to take the system out of service and retrain it with additional data sets relevant to
that new situation. This approach is just not scalable.”
http://www.foxnews.com/tech/2017/03/20/google-reduces-jpeg-sizes-by-35-percent.html
Google reduces JPEG sizes by 35 percent
By Matthew Humphries
Published March 20, 2017
The Internet is full of images, and we all want them to load as fast as possible and look as good
as possible. For those companies storing and serving these images, the desire is to keep the
images as small as possible. Google's research team has created a new JPEG encoder intended to
keep everyone happy: it serves up images that look great, but with file sizes 35 percent
smaller.
The new open source algorithm is called Guetzli, which is Swiss German for "cookie."
Guetzli manages to significantly reduce file size without breaking compatibility with the
JPEG standard, web browsers, and image processing applications. That's important, as it
means the algorithm can be used everywhere without anything else having to change.
The three images below attempt to demonstrate how well Guetzli works. The uncompressed
image is on the left, the libjpeg version is in the middle, and the Guetzli version is on the right. If
you can't tell the difference, Guetzli has done its job.
JPEG is a lossy compression method for images, which allows a tradeoff to occur between file
size and the final quality of the images. A good encoder will produce a great looking compressed
image while reducing file size as much as possible. For example, a multi-megabyte image
stored in BMP or PNG format can look almost exactly the same converted to a JPEG that's
only a few hundred kilobytes in size.
The encoding process for a JPEG can be broken down into six distinct parts: color space
transformation, downsampling, block splitting, discrete cosine transform, quantization,
and entropy coding. Each can be optimized, and Guetzli focuses specifically on the
quantization stage as this is typically where visual quality is lost.
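The quantization stage is easy to illustrate in miniature. The sketch below is a toy, not Guetzli's actual algorithm (which searches for perceptually optimal choices); it quantizes one made-up row of DCT coefficients against a fine and a coarse table, to show why coarser quantization shrinks files at the cost of detail.

```python
# Toy sketch of JPEG's quantization stage (illustrative only; the
# coefficients and tables below are made up). Each DCT coefficient is
# divided by a table entry and rounded; larger entries discard more
# detail and yield more zeros, which the entropy coder then stores
# very compactly.

def quantize(coeffs, table):
    return [round(c / q) for c, q in zip(coeffs, table)]

def dequantize(values, table):
    # The decoder multiplies back; the rounding error is the lost detail.
    return [v * q for v, q in zip(values, table)]

# One row of 8 DCT coefficients and two tables: fine vs. coarse.
coeffs = [520.0, -30.2, 10.5, -4.1, 2.2, -1.0, 0.4, -0.1]
fine = [16, 11, 10, 16, 24, 40, 51, 61]   # keeps more detail
coarse = [4 * q for q in fine]            # discards more detail

fine_q = quantize(coeffs, fine)
coarse_q = quantize(coeffs, coarse)
print(fine_q.count(0), coarse_q.count(0))  # coarse yields more zeros
```

An encoder's art is choosing tables (and, in Guetzli's case, per-image choices) so that the discarded detail is detail the human eye would not have noticed anyway.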
Tests carried out by Google revealed that users preferred the images Guetzli produced compared
to other JPEG encoders. If there is a downside, it's that Guetzli takes longer to produce a
compressed image, but the extra time is more than worth it for the saving on file size.
…
Relevant to our interest in the immortal matt:
https://www.technologyreview.com/s/603847/a-virtual-version-of-you-that-can-visit-many-vrworlds/?set=603864
A Virtual Version of You That Can Visit Many VR
Worlds
Avatars, long used in video games, are coming to VR. This startup thinks you should
be able to use the same one in different places.
 by Rachel Metz
 March 15, 2017
The desktop PC version of Morph 3D's Ready Room demo lets you pick out different kinds of clothes and hair for
your avatar.
In the real world, your body is always yours—you might dress differently for a
business meeting or a cocktail party, but you’re still the same person underneath
the outfit. Now an avatar-making startup thinks the same should be true in
virtual reality, too.
Virtual reality is still so new that most people haven’t even tried it, and there
isn’t all that much connecting with other people in virtual space, nor much
control over how you look when you are doing so. The medium is slowly
becoming more social, though, with the emergence of social VR companies
such as High Fidelity and AltspaceVR, and companies like Modal VR and
the Void working on in-person interactive experiences. And to make you
more comfortable in your virtual skin, a company called Morph 3D is letting
people easily make their own avatars that persist across different virtual
experiences.
In March the company publicly released a free software demo called Ready
Room that lets you craft and manage avatars, which can then be used in VR on
partner platforms. So far, Morph 3D has partnered with two social VR
companies: High Fidelity and VRChat. It says more partners will be added in
the coming months. Ready Room is meant to work just with HTC’s Vive
headset for now, but Morph 3D says some users have gotten it to work with the
Oculus Rift, too, by running it through the Steam entertainment platform.
(Oculus, which is owned by Facebook, offers its Oculus Avatars product, which
lets users customize their own avatars for use with compatible apps, but that is
limited to the Oculus platform.)
You can use the Ready Room demo in virtual reality with the HTC Vive to adjust your avatar's facial features.
The Ready Room demo lets you choose your avatar’s gender, pick from two
different body types (both somewhat cartoony), adjust a range of body traits like
skin hue, weight, and head shape, and dial in such specific things as the shapes
and spacing of eyes, nose, and lips. You can choose clothes, hairstyles, and
sneakers, and you can keep a portfolio of the same avatar in different outfits or
make several different ones.
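As a rough illustration of what a persistent, portable avatar record might look like, here is a hypothetical sketch; Morph 3D has not published its data model, so every field name below is an assumption.

```python
from dataclasses import dataclass, replace

# A hypothetical avatar record of the kind such a service might persist
# across platforms. Field names are invented for illustration.
@dataclass(frozen=True)
class Avatar:
    gender: str
    body_type: str          # one of the two offered cartoony body types
    skin_hue: float = 0.5   # slider-style traits, normalized to 0.0-1.0
    weight: float = 0.5
    head_shape: float = 0.5
    eye_spacing: float = 0.5
    hairstyle: str = "default"
    outfit: str = "default"

# One identity, several outfits: a portfolio is just variants of one record.
base = Avatar(gender="female", body_type="type-1")
party = replace(base, outfit="cocktail")
```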
…
https://www.wired.com/2017/03/openai-builds-bots-learn-speak-language/



By Cade Metz, Business
Published 03.16.17, 7:30 am
It Begins: Bots Are Learning to
Chat in Their Own Language
Igor Mordatch is working to build machines that can carry on a conversation. That’s something so
many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But
Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically
reach for language. He’s a roboticist who began his career as an animator. He spent time at Pixar
and worked on Toy Story 3, in between stints as an academic at places like Stanford and the
University of Washington, where he taught robots to move like humans. “Creating movement
from scratch is what I was always interested in,” he says. Now, all this expertise is coming
together in an unexpected way.
Born in Ukraine and raised in Toronto, the 31-year-old is now a visiting researcher at OpenAI,
the artificial intelligence lab started by Tesla founder Elon Musk and Y Combinator president
Sam Altman. There, Mordatch is exploring a new path to machines that can not only
converse with humans, but with each other. He’s building virtual worlds where software
bots learn to create their own language out of necessity.
As detailed in a research paper published by OpenAI this week, Mordatch and his
collaborators created a world where bots are charged with completing certain tasks, like
moving themselves to a particular landmark. The world is simple, just a big white square—all
of two dimensions—and the bots are colored shapes: a green, red, or blue circle. But the point of
this universe is more complex. The world allows the bots to create their own language as a way
of collaborating, helping each other complete those tasks.
All this happens through what’s called reinforcement learning, the same fundamental technique
that underpinned AlphaGo, the machine from Google’s DeepMind AI lab that cracked the
ancient game of Go. Basically, the bots navigate their world through extreme trial and error,
carefully keeping track of what works and what doesn’t as they reach for a reward, like arriving
at a landmark. If a particular action helps them achieve that reward, they know to keep doing it.
In this same way, they learn to build their own language. Telling each other where to go
helps them all get places more quickly.
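The trial-and-error loop described here can be sketched in a few lines. The toy below is not OpenAI's environment; it is a minimal tabular value-learning example on a five-cell strip, where an agent learns through repeated episodes that stepping toward a landmark earns a reward.

```python
import random

# Minimal reinforcement-learning sketch (a toy, not OpenAI's setup):
# an agent on a five-cell strip learns which action, left (-1) or
# right (+1), moves it toward a landmark in the last cell. Reaching
# the landmark pays a reward; each action's value is nudged toward
# reward plus the discounted value of where the action led.

random.seed(0)
N, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(s, -1)], q[(s, +1)])
    return random.choice([a for a in (-1, +1) if q[(s, a)] == best])

for episode in range(500):
    s = 0
    for _ in range(20):
        # Mostly keep doing what worked; sometimes explore at random.
        a = random.choice((-1, +1)) if random.random() < 0.2 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        target = reward + 0.9 * max(q[(s2, -1)], q[(s2, +1)])
        q[(s, a)] += 0.5 * (target - q[(s, a)])
        s = s2
        if s == GOAL:
            break

policy = [greedy(s) for s in range(GOAL)]
print(policy)  # the learned policy: head right, toward the landmark
```

In Mordatch's setting the "actions" include emitting symbols, so the same reward-driven loop that teaches this agent to walk right teaches his bots which utterances help the group finish its tasks.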
As Mordatch says: “We can reduce the success of dialogue to: Did you end up getting to the
green can or not?”
To build their language, the bots assign random abstract characters to simple concepts they learn
as they navigate their virtual world. They assign characters to each other, to locations or objects
in the virtual world, and to actions like “go to” or “look at.” Mordatch and his colleagues hope
that as these bot languages become more complex, related techniques can then translate them
into languages like English. That is a long way off—at least as a practical piece of software—but
another OpenAI researcher is already working on this kind of “translator bot.”
Ultimately, Mordatch says, these methods can give machines a deeper grasp of language,
actually show them why language exists—and that provides a springboard to real conversation, a
computer interface that computer scientists have long dreamed of but never actually pulled off.
These methods are a significant departure from most of the latest AI research related to language.
Today, top researchers are typically exploring methods that seek to mimic human language, not
create a new language. One example is work centered on deep neural networks. In recent years,
deep neural nets—complex mathematical systems that can learn tasks by finding patterns in vast
amounts of data—have proven to be an enormously effective way of recognizing objects in
photos, identifying commands spoken into smartphones, and more. Now, researchers at places
like Google, Facebook, and Microsoft are applying similar methods to language understanding,
looking to identify patterns in English conversation, so far with limited success.
…
http://ew.com/tv/2017/03/16/netflix-star-ratings/
Netflix changing user reviews,
dumping star ratings
By James Hibberd (@JamesHibberd)
Posted on March 16, 2017, at 6:17 p.m. EDT
Netflix is radically overhauling its user reviews.
After years of allowing customers to rank movies on a scale of 1-to-5 stars, the streaming service
announced plans to replace that system with a binary “thumbs up vs. thumbs down” rating.
Soon one-star ratings will cease to be a thing on Netflix — or five-star ratings, for that matter.
The new Siskel & Ebert-ian system was revealed by Netflix executive Todd Yellin at a press briefing
at the company’s headquarters in Los Gatos, California on Thursday, Variety reported and EW
confirmed.
The executive said Netflix tested the new system last year and found that users volunteered 200
percent more ratings when faced with a simple up-or-down choice than when given five options,
so the new system should result in more feedback from viewers.
Yellin also noted that the review system has been less important over the years as the company has
found users will often rank respected documentaries with five stars and more frivolous titles with one
star despite being far more likely to actually watch the latter. (It’s true; many popular guilty-pleasure
favorites like Armageddon are saddled with one-star averages despite having plenty of fans).
…
https://www.technologyreview.com/s/603897/andrew-ng-is-leaving-baidu-in-search-of-a-big-new-aimission/?set=603941
Andrew Ng Is Leaving Baidu in Search of a Big
New AI Mission
One of the world’s leading experts in artificial intelligence is officially on the market,
and he says he wants to advance AI beyond the tech industry.


by Will Knight
March 22, 2017
Andrew Ng, a leading figure in the world of artificial intelligence, is leaving his
post as chief scientist at China’s Baidu and says he wants to find ways of
advancing AI beyond the technology world.
Ng is known for playing a leading role in formulating the AI strategy of both
Baidu and Google. He says he is leaving the Chinese company on good terms and
simply wants to find a new challenge. “I’ve decided to step away from this role
while everything is going well and look at some other things,” he told MIT
Technology Review.
“I don’t know precisely what I’ll do, but I think AI offers a lot of opportunities,
not just at big companies like Baidu but for entrepreneurs, and for advancing
basic research,” Ng added. “I plan to spend some time looking at other
opportunities to use AI to help people.”
Ng is well respected within the field for his technical expertise in machine
learning, but also for finding innovative ways of applying AI.
After joining Baidu, China’s leading Internet search company, in 2014, Ng
helped develop an AI-first strategy. He oversaw the creation of several new AI-focused research
groups and led a team of more than 1,300 researchers, engineers, and other staff, including the
Silicon Valley AI Lab in California.
Under Ng, Baidu researchers made fundamental advances in speech recognition
(see “10 Breakthrough Technologies: Conversational Interfaces”). Within the
past year, the department has also spawned two new business units for the
Chinese market: one dedicated to automated driving, another to providing
software for voice-controlled devices. Baidu is now leveraging its AI
technology in banking, health care, call-center support, and more.
…
https://www.nytimes.com/2017/03/21/magazine/platform-companies-are-becoming-more-powerfulbut-what-exactly-do-they-want.html
Platform Companies Are Becoming More Powerful — but What Exactly
Do They Want?
On Money
By JOHN HERRMAN MARCH 21, 2017
Illustration credit: Andrew Rae
During a February ride in San Francisco, Travis Kalanick, the chief executive of Uber, was
recorded arguing with and eventually berating an Uber driver from the back seat of his car. The
driver, who had been working with the company since 2011, accused Kalanick of undercutting
drivers of high-end cars like his, plunging him into bankruptcy. Kalanick responded with a
lecture about the basic economic logic of his company: Soon, the supply of luxury cars on
the app would be reduced, causing demand to increase. Besides, he went on, if the company
hadn’t added a lower-priced tier, it would have been beaten by competitors. This did not satisfy
the driver, which seemed to enrage Kalanick, who erupted into a moralizing tirade. “Some
people don’t like to take responsibility for their own [expletive],” he said, before leaving the car.
The scene in the clip, obtained and published by Bloomberg, was striking. This wasn’t a
manufacturing magnate visiting the factory floor or a retail executive paying a surprise visit to a
struggling location. Indeed, Kalanick’s ambiguous relationship to the driver was, in a sense,
the source of the disagreement between them — a dispute that sailed straight past
self-examination into outright hostility. ** key to successful platforms – define these
relationships and the respective value propositions **
Uber has spent the beginning of 2017 mired in controversy. There were allegations of sexual
harassment and intellectual-property theft; The Times uncovered a brazen effort to thwart local
authorities. These scandals drew scrutiny to Uber’s corporate culture. But the recording of
Kalanick shed light on something else: the model around which the company is built.
Uber, like so many other successful tech companies in 2017, is a “platform business,” one
built around matchmaking between vendors and customers. If successful, a platform creates
its own marketplace; if extremely successful, it ends up controlling something closer to an entire
economy. This is intuitive in a case like eBay, which connects buyers and sellers. Airbnb, too,
resembles an age-old form of commerce, connecting property owners with short-term lodgers.
TaskRabbit and Fiverr connect contractors with people looking to hire them. Some of the
largest platforms are less obviously transactional: Facebook and Google connect
advertisers with users, users with one another, software developers with users. But while the
transactions that happen on their platforms largely take a different form — taps, shares, ads
served and scrolled past — the principles are essentially the same, as are the benefits. These
businesses are asset- and employee-light, low on liability and high on upside. They aspire to
monopoly, often unapologetically, and have been instrumental in rehabilitating the concept. **
can the AF be the monopoly of information for effects ** (The logic is seductive and often
self-evident: Facebook is more useful if everyone is on it, therefore everyone should be on
Facebook.)
…
http://www.telegraph.co.uk/technology/2017/03/15/googles-deepmind-ai-learns-like-humanovercome-catastrophic/
Google's DeepMind AI learns like a human to overcome 'catastrophic forgetting'





Google's DeepMind AI could remember old skills. Credit: DeepMind

Mark Molloy
15 March 2017 • 1:05pm
Forgetfulness is a major flaw in artificial intelligence, but researchers have just had
a breakthrough in getting 'thinking' computer systems to remember.
Taking inspiration from neuroscience-based theories, Google’s DeepMind
researchers have developed an AI that learns like a human.
Deep neural networks, computer systems modelled on the human brain and
nervous system, forget skills and knowledge they have learnt in the past when
presented with a new task, a phenomenon known as ‘catastrophic forgetting’.
“When a new task is introduced, new adaptations overwrite the knowledge that the
neural network had previously acquired,” the DeepMind team explains.
“This phenomenon is known in cognitive science as ‘catastrophic forgetting’, and
is considered one of the fundamental limitations of neural networks.”
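The DeepMind paper behind this work, "Overcoming catastrophic forgetting in neural networks," calls its technique elastic weight consolidation. The core idea can be shown schematically, with made-up numbers: when training on a new task, add a penalty for moving parameters that were important to the old task.

```python
# Schematic of the elastic-weight-consolidation idea (illustrative
# numbers only): after learning task A, each parameter gets an
# importance weight, and training on task B adds a quadratic penalty
# for moving important parameters away from their task-A values.

def ewc_loss(task_b_loss, params, old_params, importance, lam=1.0):
    """Task-B loss plus a pull back toward task-A parameter values."""
    penalty = sum(f * (p - p0) ** 2
                  for p, p0, f in zip(params, old_params, importance))
    return task_b_loss + 0.5 * lam * penalty

old = [1.0, -2.0]   # parameter values after learning task A
imp = [10.0, 0.1]   # the first parameter mattered far more for task A

# Moving the unimportant parameter is nearly free; moving the
# important one is heavily penalized, protecting task-A knowledge.
print(ewc_loss(0.0, [1.0, 0.0], old, imp))   # 0.2
print(ewc_loss(0.0, [2.0, -2.0], old, imp))  # 5.0
```

Training on the new task then minimizes this combined loss, so the network settles on weights that solve task B while staying close, in the directions that matter, to its solution for task A.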
…