Toward optimal learning dynamics
Garrison W. Cottrell, Andrea Chiba, and the Temporal Dynamics of Learning Center
As outlined in a Science article coauthored by members of the TDLC and LIFE centers, transformative
advances in the science of learning require collaboration from multiple disciplines, including psychology,
neuroscience, machine learning, and education. TDLC has implemented this approach through the
formation of research networks, small interdisciplinary teams focused on a common research agenda. By
combining approaches from multiple fields, more progress is possible than can be achieved by single-discipline studies. In particular, by combining computational models and experiments, the underlying
mechanisms of learning can be elucidated, because the models can be analyzed in ways that brains cannot.
This approach has a long history, which we build upon (e.g., Hebb 1949, Machado 1997, Shadmehr & Mussa-Ivaldi 1997, Staddon et al. 2002).
This form of team science is exemplified by the Interacting Memory Systems Network’s discovery of a
behavioral function of cell birth (neurogenesis) in the Dentate Gyrus of the hippocampus of mature rats.¹ Brad
Aimone, an IMS graduate student in Rusty Gage’s lab (Salk Inst.), wanted to use a model to understand the
role of these neurons. He asked IMS member Jeff Elman whose model of the hippocampus would be best
suited for this investigation, and Jeff sent him to IMS member Janet Wiles (U. Queensland). Together, they
added neurogenesis to Janet’s model, which then yielded new predictions of the functional role of these
newborn neurons, including that they would bind together temporally adjacent associations
with context. New behavioral tasks to verify this prediction were developed by IMS leader Andrea Chiba,
project scientist Laleh Quinn and graduate student Lara Rangel. The predictions were confirmed. These cells
are a new kind of place cell that fires only in a specific place and surrounding context, indicating the coding
of space-time in the hippocampus. The Eichenbaum laboratory of CELEST recently discovered “time cells”
in the CA1 region of the hippocampus. The existence of temporal coding and contextual encoding at the
cellular level in the hippocampus provides a complement to our earlier finding that internally generated
sequences of neural activity in the hippocampus are replayed in the absence of external cues (Pastalkova et
al. 2008). Thus, the elements and the ensemble of the hippocampus aggregate to create a sequential record of
our personal recollections.
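Purely as an illustration of the modeling idea above (newborn, highly plastic neurons binding temporally adjacent inputs to their context), the Python sketch below is our own toy construction, not the Wiles/Aimone model; the learning rule, parameters, and time scales are invented. It shows how continually adding units whose plasticity is high only around their birth date causes inputs that arrive close together in time to be imprinted on the same young units.

    # Toy sketch of neurogenesis in a network model: one new unit is "born"
    # each simulated day and is highly plastic only while young.
    # Illustrative only; this is not the Wiles/Aimone hippocampal model.
    import numpy as np

    rng = np.random.default_rng(1)
    N_INPUT = 40        # size of the input/context pattern (invented)
    DAYS = 12           # simulated days; one new unit per day (invented)
    MATURATION = 3      # days during which a newborn unit stays highly plastic (invented)

    def plasticity(age_in_days):
        """Newborn units learn quickly; mature units are nearly fixed."""
        return 0.6 if age_in_days < MATURATION else 0.01

    units = []          # each unit: weight vector plus birth day
    for day in range(DAYS):
        units.append({"w": np.zeros(N_INPUT), "born": day})     # neurogenesis: add a unit
        x = (rng.random(N_INPUT) < 0.25).astype(float)          # today's sparse input pattern
        for u in units:
            rate = plasticity(day - u["born"])
            u["w"] += rate * (x - u["w"])                        # leaky, age-gated imprinting

    # Because plasticity is high only near a unit's birth date, each unit's weights
    # end up dominated by the inputs from a narrow window of days, so patterns that
    # occur close together in time share the same strongly responding young units,
    # which is the kind of temporal/contextual binding the model-based prediction pointed to.
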
A second, quite different application of this approach is to the study of spacing effects. Spacing of study
and testing is well known to influence the duration and effectiveness of learning. We extended the
understanding of spacing effects to educationally relevant time scales, and found that spacing effects are
time-scale invariant, providing coarse but useful guidance for educators (Cepeda et al. 2009). Based on these
data, we developed a new computational theory (the Multiscale Context Model, or MCM) that successfully predicts
the optimal spacing for arbitrary material. We have incorporated MCM into a web-based tool that optimizes
study schedules (Mozer et al. 2009); we are evaluating it with 200 Colorado middle school Spanish students.
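MCM itself is specified in Mozer et al. (2009); the Python sketch below is not that model, but a deliberately simplified toy (the time constants, learning rule, and schedules are invented) illustrating the multiscale-trace intuition behind spacing effects: memory strength is carried by traces that decay at different rates, each study event adds more to a trace the more that trace has been forgotten, and as a result spaced study beats massed study at a long retention interval.

    # Toy multiscale memory-trace illustration of the spacing effect.
    # Illustrative only; this is not MCM (Mozer et al. 2009).
    import math

    TAUS = [1.0, 30.0]   # decay time constants in days for a fast and a slow trace (invented)
    ALPHA = 0.5          # fraction of a trace's remaining "room" encoded per study event (invented)

    def recall_strength(study_times, test_time):
        """Predicted recall strength at test_time for a given study schedule (in days)."""
        traces = [0.0 for _ in TAUS]
        last_t = 0.0
        for t in sorted(study_times) + [test_time]:
            # exponential decay of every trace from the previous event to time t
            traces = [w * math.exp(-(t - last_t) / tau) for w, tau in zip(traces, TAUS)]
            if t != test_time:
                # study: each trace gains a fraction of what it has lost, so a session
                # after partial forgetting produces a larger gain than an immediate repeat
                traces = [w + ALPHA * (1.0 - w) for w in traces]
            last_t = t
        return sum(traces) / len(traces)

    massed = recall_strength([0.0, 0.02, 0.04], test_time=30.0)  # three back-to-back sessions
    spaced = recall_strength([0.0, 3.0, 6.0], test_time=30.0)    # three sessions over a week
    print(f"massed: {massed:.3f}  spaced: {spaced:.3f}")         # spaced exceeds massed at 30 days
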
There are many more applications of this approach. We have applied various machine learning and
modeling techniques to automatically detect perceived difficulty of a lecture from facial expressions
(Whitehill et al. 2008), to learn the optimal action to take next in a tutoring context based on examples of
human tutoring interactions (Ruvolo et al. 2008), and to analyze children’s facial expressions while problem
solving in order to predict periods of uncertainty (Littlewort et al. 2011). Likewise, neural recordings and
behavioral data can inform modeling, for example by ruling out competing models of decision-making (Purcell et al. 2010).
Two remaining challenges are highlighted here. First, we have used many techniques to build models at
various levels of the spatial and temporal hierarchy (from neurons and millisecond scales to the person and
year-long scale for spacing effects). Many of these approaches – those that share optimality or Bayesian
techniques – are compatible with one another, yet the mappings between levels of the temporal and spatial
hierarchy remain to be established, although progress has been made (e.g., Lerner et al. 2011; Poeppel 2012).
Consideration of this problem leads to the insight that interactions between levels depend on the physics of
how, for example, molecules (low level) interact at synapses (one level up). Pursuing this line of thought has provided fundamental
links between thermodynamics and prediction, showing that in order to be energy-efficient, an organism
must be predictive (Still et al. 2012). However, this is still far from fulfilling the promise of what we call the
“levels hypothesis” (Bell 2007), which is a search for fundamental principles linking the physical levels
between, for example, synapses, cells, and organisms. A second challenge is to bridge a collection of findings
indicating that an active EEG brain state is necessary for accurate encoding of sensory temporal patterns
(Marguet & Harris 2011; Goard & Dan 2009; Minces, Harris, & Chiba, in prep) with data showing that EEG
brain state in babies predicts linguistic and cognitive development (Benasich et al. 2008; Gou et al. 2011). This
will require reverse engineering human EEG using animal models, in order to understand the cortical
activity and neuromodulatory inputs underlying its fast oscillatory activity.

¹ And, as the linked paper points out, neurogenesis is enhanced by running – another strong piece of evidence for the crucial role of physical education in K-12 education.