 Satinder Singh
University of Michigan
“Reinforcement Learning:
From Vision to Action and Back”
Monday, April 11, 2016
1-2 pm
CIT Building, Lubrano Room 477
Stemming in part from the great successes of other areas of Machine Learning, in particular the
recent success of Deep Learning, there is renewed hope and interest in Reinforcement Learning (RL)
from the wider applications community. Indeed, there has been a recent burst of new and exciting
progress in both the theory and practice of RL. I will describe some results from my own group on a
simple new connection between planning horizon and overfitting in RL, as well as some results on
combining RL with Deep Learning in Atari games and Minecraft. I will conclude with a look ahead at
what we can do, both as theoreticians and as those who collect data, to accelerate the impact of RL.
Satinder Singh is a Professor of Computer Science and Engineering as well as the Director of the
Artificial Intelligence Laboratory at the University of Michigan, Ann Arbor. He has been the Chief
Scientist at Syntek Capital, a venture capital company, a Principal Research Scientist at AT&T Labs,
an Assistant Professor of Computer Science at the University of Colorado, Boulder, and a
Postdoctoral Fellow in MIT's Department of Brain and Cognitive Sciences. His research focuses on
developing the theory, algorithms, and practice of building artificial agents that can learn from
interaction in complex, dynamic, and uncertain environments, including environments with other
agents in them. His main contributions have been to the areas of reinforcement learning and
multi-agent learning, and more recently to applications in cognitive science and healthcare. He is a
Fellow of AAAI (the Association for the Advancement of Artificial Intelligence), has coauthored more
than 150 refereed journal and conference papers, and has served on many program committees. He is a
Program Co-Chair of AAAI 2017, and in 2013 he helped cofound RLDM (Reinforcement Learning and
Decision Making), a biennial multidisciplinary meeting that brings together computer scientists,
psychologists, neuroscientists, roboticists, control theorists, and others interested in animal and
artificial decision making.
For more information on this talk and the HCRI Speaker Series, contact [email protected] or visit hcri.brown.edu.