Self-Organizing Visual Cortex Model using Homeostatic Plasticity
... chapter 17]. In the current LISSOM model, the activation thresholds (the upper and lower bounds of a neuron's activation function) must be tuned manually through trial and error. Besides being time-consuming for the modeler, this process is also likely to be subjective. Comparison of experiment results under ...
Dynamics of sensory processing in the dual olfactory pathway of the
... Ponce-Alvarez et al. 2010). PNs showed a more regular spike response pattern than LNs, indicated by a lower average CV2 (0.32 vs. 0.50, Figure 2e). What are the differences with respect to the odor response spectra in PNs and LNs? Here, a direct quantitative comparison between different studies ...
A Review of Machine Learning for Automated Planning
... Extraction of experience. How learning examples are collected. In the case of AP, learning examples can be autonomously collected by the planning system or provided by an external agent, such as a human expert. Implementing a mechanism to autonomously collect learning examples is a complex process. ...
Associationism
... flavor (e.g., sugar) with a novel neutral face stimulus, in order to transfer the positive valence to the previously neutral face. 9 There are many different ways of construing the details of Pavlovian conditioning. For example, some would restrict the usage further by arguing that the US must be bi ...
Efficient Event-Driven Simulation of Large Networks of Spiking
... With dynamical synapses, an incoming stimulus will not only make learned information pop up from the memory store and be put in an active state, but will also contribute to expanding the store. Moreover, if synaptic dynamics is a function of neural activities, then the question arises of the stabili ...
Modulation of Inhibitory Synaptic Potentials in the Piriform Cortex
... linear function is used for computing the summed firing rate of the inhibitory population. The constant A represents the afferent input to a population of neurons during a period of time. This constant represents both the summed firing rate across a population of mitral cells in the olfactory bulb i ...
A neurocomputational model of the mammalian fear
... Fear conditioning is a subset of classical conditioning that involves the association between CSs and USs that evoke behaviours associated with fear. One well-known (and ethically controversial) fear conditioning experiment was performed by John Watson in 1919 [73]. In his experiment, Watson taught ...
computational modeling of observational learning - FORTH-ICS
... 1.2 How observational learning can help towards developing social robots Our ability to adapt to our social environment is one of the primary components behind the evolution of our intelligence (Barresi and Moore, 1995). For primates, socialization is an inter‐subjective process durin ...
Hybrid Scheme for Modeling Local Field Potentials from Point
... rates, synaptic currents and membrane potentials) has nevertheless been used as a proxy for the LFP when comparing with experiments. In a recent study comparing different candidate proxies, it was found that a suitably chosen sum of synaptic currents could provide a good LFP proxy, but only for the ...
Supervised and unsupervised learning.
... be known, and a suitable penalty function W : K × D → R must be provided. Non-Bayesian decision theory studies tasks for which some of the above information is not available. In practical applications, typically, none of the probabilities are known! The designer is only provided with the training (m ...
Input evoked nonlinearities in silicon dendritic circuits
... Pyramidal cells in neocortex and hippocampus have highly complicated dendritic structures, but the computational contribution of the dendritic tree in neuronal processing is still elusive. Experimental evidence suggests that individual dendritic branches can be considered as independent computationa ...
Distinctive Patterns in the First Movement of Brahms` String Quartet
... Quartet for 2 Violins, Viola and Violoncello, C minor, Op. 51/1, Edition Eulenburg Ltd., London, No. 240. For all three string quartets, only one instance of the repeated exposition, using the second ending, was taken. Melodic intervals were derived for each instrumental part. Only one note of a mul ...
Neural constraints on learning
... Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others [1,2], we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrai ...
Discriminative Structure and Parameter Learning for Markov
... if the current token is a Title and is followed by a period, then it is likely that the next token is in the Venue field: InField(Title,p1,c) ∧ FollowBy(PERIOD,p1,c) ∧ Next(p1,p2) ⇒ InField(Venue,p2,c) ...
[pdf]
... study produced only non-significant trends in FFA activation, possibly due to the limited sample size. The experiment reported here is the same as that reported by Krawczyk et al. (2011), with the addition of: (a) five new master-level chess experts, bringing the total expert sample to an n of 11, a ...
Inductive Intrusion Detection in Flow-Based
... A dendrogram which represents four iterations of a hierarchical clustering algorithm. The dendrogram can be seen as a binary tree with the data points as its leaves. A hypothesis which fits the training data very well. In fact, there are some minor training errors but the generali ...
Catastrophic interference
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. These networks use computer simulations to try to model human behaviours, such as memory and learning. Catastrophic interference is therefore an important issue to consider when creating connectionist models of memory. It was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989), and Ratcliff (1990).

Catastrophic interference is a radical manifestation of the ‘sensitivity-stability’ dilemma, also known as the ‘stability-plasticity’ dilemma: the problem of building an artificial neural network that is sensitive to, but not disrupted by, new information. Lookup tables and connectionist networks lie on opposite sides of the stability-plasticity spectrum. A lookup table remains completely stable in the presence of new information but lacks the ability to generalize, i.e. to infer general principles, from new inputs. Connectionist networks like the standard backpropagation network, on the other hand, are very sensitive to new information and can generalize to new inputs.

Backpropagation models can be considered good models of human memory insofar as they mirror the human ability to generalize, but these networks often exhibit less stability than human memory. Notably, backpropagation networks are susceptible to catastrophic interference. This is considered a problem when modelling human memory because, unlike these networks, humans typically do not show catastrophic forgetting. Thus, catastrophic interference must be mitigated in backpropagation models in order to enhance their plausibility as models of human memory.
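The effect is easy to reproduce in a few lines of code. The sketch below is an illustrative toy (not McCloskey and Cohen's exact architecture): a one-layer sigmoid network is trained by gradient descent on an AB-AC paired-associate task, in which the same cues are paired first with one response list (B) and then with a conflicting list (C). After sequential training on list C, the network's error on list B jumps from near zero to near its maximum, with no gradual forgetting curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# AB-AC paired-associate setup: the same cues are mapped first to
# response list B, then to a conflicting response list C.
n_items, dim = 4, 8
cues = np.eye(n_items)                         # one-hot cue patterns
list_B = rng.integers(0, 2, (n_items, dim)).astype(float)
list_C = 1.0 - list_B                          # maximally conflicting responses

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W, targets, epochs=2000, lr=1.0):
    """Plain gradient descent on cross-entropy loss for a one-layer sigmoid net."""
    for _ in range(epochs):
        out = sigmoid(cues @ W)
        W = W - lr * cues.T @ (out - targets) / n_items
    return W

def mse(W, targets):
    return float(np.mean((sigmoid(cues @ W) - targets) ** 2))

W = rng.normal(0.0, 0.1, (n_items, dim))
W = train(W, list_B)
err_B_before = mse(W, list_B)   # small: list B is well learned
W = train(W, list_C)            # no further exposure to list B
err_B_after = mse(W, list_B)    # large: list B is catastrophically forgotten
print(f"error on list B: before={err_B_before:.4f}, after={err_B_after:.4f}")
```

Because all items share the same weights, the gradient updates that store list C overwrite the weights encoding list B; a lookup table given the same sequence would retain list B perfectly but could not generalize at all.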