
Mirror Proposal 8-01 - USC - University of Southern California
... to propose models which place the mirror system in a broader perspective. As we have noted, our analysis of the monkey mirror system is to be seen as grounding efforts to understand the brain mechanisms underlying imitation in humans. For imitation, the key issue is the recognition of the structure ...
Ectodermal Placodes: Contributions to the
... well understood, but several genes have been implicated in pattern formation in the derivatives of the otic placode. The lateral line system is unique among placode-derived sensory systems in vertebrates in that it is only present in anamniotes, it is derived from multiple placodes, has an extensiv ...
Neural realisation of the SP theory
... At its most abstract level, the SP theory (Wolff, 2003a, 2001, 2004) is intended to model any kind of system for processing information, either natural or artificial. The theory is Turing-equivalent in the sense that it can model the operation of a Universal Turing Machine (Wolff, 1999a) but, unlike ...
Gustatory processing is dynamic and distributed Donald B
... in the spike trains, such as adaptation, bursting, or poststimulus response dynamics. In addition, neither theory allows for the interactions between neurons that would be expected to induce temporal structure in the neural responses (for recent reviews, see [15,16]). To the extent that these phenom ...
Hebbian Learning of Bayes Optimal Decisions
... (6) and (7) suggest a decaying learning rate η_i = 1/N_i, where N_i is the number of preceding examples with y_i = 1. We will present a learning rate adaptation mechanism that avoids biologically implausible counters, and is robust enough to deal even with non-stationary distributions. Since the Bayes ...
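As a minimal sketch (Python, not taken from the paper), the update below shows why the counter-based rate η_i = 1/N_i makes a weight track the exact empirical average of its targets, and how a small constant rate behaves as a generic, counter-free alternative for non-stationary data; the specific adaptation mechanism the authors propose is not reproduced here.

    import numpy as np

    def running_estimate(ys, eta_fn):
        # Track an estimate theta of the mean of ys with a given learning-rate schedule.
        theta, n = 0.0, 0
        for y in ys:
            n += 1
            theta += eta_fn(n) * (float(y) - theta)   # standard stochastic averaging update
        return theta

    ys = np.random.default_rng(0).random(1000) < 0.3      # synthetic Bernoulli(0.3) targets

    # eta_i = 1/N_i: theta equals the exact empirical mean, but requires a counter N_i.
    print(running_estimate(ys, lambda n: 1.0 / n))

    # A small constant rate is a counter-free alternative that can also track slowly
    # drifting (non-stationary) statistics, at the cost of residual variance.
    print(running_estimate(ys, lambda n: 0.05))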
streaming face recognition
... which contain faces, but it is much harder to train a neural network with samples which do not. The number of “non-face” samples is just too large. ...
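One common workaround, sketched below in Python purely as an illustration (not necessarily the approach taken in this work), is to subsample the negative class: random “non-face” patches are cropped from face-free background images, and the pool can later be enriched with the detector's own false positives (bootstrapping).

    import numpy as np

    def sample_negative_patches(images, patch_size=24, per_image=10, rng=None):
        # Randomly crop "non-face" patches from face-free background images.
        # The space of possible negative windows is effectively unbounded, so
        # detectors are trained on a manageable random (or hard-negative) subsample.
        rng = rng or np.random.default_rng(0)
        patches = []
        for img in images:                              # img: 2-D grayscale array
            h, w = img.shape
            for _ in range(per_image):
                y = rng.integers(0, h - patch_size + 1)
                x = rng.integers(0, w - patch_size + 1)
                patches.append(img[y:y + patch_size, x:x + patch_size])
        return np.stack(patches)

    backgrounds = [np.random.default_rng(i).random((128, 128)) for i in range(5)]  # stand-in images
    negatives = sample_negative_patches(backgrounds)
    print(negatives.shape)    # (50, 24, 24)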
Building silicon nervous systems with dendritic tree neuromorphs
... tree, so prominent and elaborate in neurons like the pyramidal cells of cerebral cortex and the Purkinje cells of the cerebellum. The classical conception of the neuron held that dendrites are inexcitable extensions of the neuronal cell body [Eccles, 1957], expanding the neuron’s surface area for m ...
working draft - DAVID KAPLAN | Macquarie University
... The major difficulty with this and many other traditional positions staked out on both sides of the debate over explanatory autonomy is that they all commonly assume the appropriateness of what is now widely recognized as an outdated and inapplicable law-based model of theory reduction and explanati ...
The Development of Neural Synchrony and Large
... perception of squares and circles in children (10–12 y), young adults (20–26 y), and older adults (70–76 y). Evoked oscillations in children were significantly reduced between 30 and 148 Hz over occipital electrodes relative to adults and did not show a modulation by the size of the stimulus. Moreov ...
Spike Train SIMilarity Space (SSIMS): A Framework for Single
... with the behavior or stimulus is unknown, model-free methods such as principal component analysis can be used to gain insight into the relationship. Here we employ t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008) to project the high-dimensional space defined by pai ...
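A minimal sketch of this projection step, assuming scikit-learn's TSNE and a synthetic distance matrix standing in for real pairwise spike-train distances (e.g. Victor-Purpura or van Rossum distances between trials; the snippet is truncated, so the exact distance measure is not shown here):

    import numpy as np
    from sklearn.manifold import TSNE

    # Synthetic stand-in: 60 "trials" with a pairwise Euclidean distance matrix.
    # In the real pipeline these would be pairwise distances between recorded spike trains.
    rng = np.random.default_rng(0)
    trials = rng.normal(size=(60, 5))
    dists = np.linalg.norm(trials[:, None, :] - trials[None, :, :], axis=-1)

    # metric="precomputed" tells t-SNE to treat 'dists' as distances rather than features;
    # init="random" is required with a precomputed metric in recent scikit-learn versions.
    embedding = TSNE(n_components=2, metric="precomputed", init="random",
                     perplexity=15, random_state=0).fit_transform(dists)
    print(embedding.shape)    # (60, 2): a low-dimensional layout preserving local neighborhoods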
Dissecting appetite
... Medical Center in Boston, Massachusetts, realized this in the late 1970s when, as an undergraduate at the University of Massachusetts, he made tiny nicks in the brains of rats to see which ...
Neural Correlates of Vibrissa Resonance: Band
... vibrissa frequency tuning was relatively constant, independent of neural tuning. Second, FSU recordings were distributed toward higher positions in this display, indicating lower tuning as compared to RSU and NV recordings. The basic qualitative trends in Figure 4A were reflected across the sample in the ...
Artificial neural network
In machine learning and cognitive science, artificial neural networks (ANNs) are a family of statistical learning models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning.

For example, a neural network for handwriting recognition is defined by a set of input neurons which may be activated by the pixels of an input image. After being weighted and transformed by a function (determined by the network's designer), the activations of these neurons are then passed on to other neurons. This process is repeated until, finally, an output neuron is activated. This determines which character was read.

Like other machine learning methods (systems that learn from data), neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
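A minimal NumPy sketch of the forward pass just described; the layer sizes and the tanh squashing function are illustrative assumptions, and the weights are left untrained so only the data flow is shown:

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(pixels, W1, b1, W2, b2):
        # Input activations are weighted, transformed, and passed on to the next layer.
        hidden = np.tanh(pixels @ W1 + b1)
        # One output neuron per character class; the most activated one "reads" the character.
        scores = hidden @ W2 + b2
        return int(np.argmax(scores))

    pixels = rng.random(784)                                  # a 28x28 image flattened to 784 inputs
    W1, b1 = 0.1 * rng.normal(size=(784, 32)), np.zeros(32)   # input -> hidden weights
    W2, b2 = 0.1 * rng.normal(size=(32, 10)), np.zeros(10)    # hidden -> output weights (10 classes)
    print("predicted character class:", forward(pixels, W1, b1, W2, b2))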