
Evolutionary Optimization of Radial Basis Function Classifiers for
... vector of the network is “similar” (depending on the value of the radius) to the center of its basis function. The center of a basis function can, therefore, be regarded as a prototype of a hyperspherical cluster in the input space of the network. The radius of the cluster is given by the value of t ...
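The prototype-and-radius view described in the snippet can be sketched with a Gaussian basis function; the two centers, the radius, and the test point below are illustrative assumptions, not values from the paper:

```python
import math

def rbf_activation(x, center, radius):
    # Gaussian basis function: close to 1 when x is "similar" to the center,
    # decaying with squared distance scaled by the radius
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist2 / (2.0 * radius ** 2))

# two hypothetical cluster prototypes in a 2-D input space
center_a, center_b = (0.0, 0.0), (3.0, 3.0)
radius = 1.0

x = (0.2, -0.1)  # an input vector near prototype A
act_a = rbf_activation(x, center_a, radius)
act_b = rbf_activation(x, center_b, radius)
assert act_a > act_b  # the nearer prototype's basis function responds strongest
```

Each basis function thus acts as a detector for its hyperspherical cluster, with the radius controlling how far "similarity" extends.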
Neural Correlates of Learning in the Prefrontal Cortex of the Monkey
... 1991) in the sense that the model is built by asking questions about representation of information, connectivity, neural processing, and plasticity. However, we have made no a priori choices and the main features of the model are based on the principles of organization and operation in the PFC. The ...
Slide 1
... 1. Every clause gets a decoding neuron with threshold = n ⇒ output = 1 only if the clause is satisfied (AND gate) 2. All outputs of the decoding neurons are inputs of a neuron with threshold = 1 (OR gate) q.e.d. ...
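The two-step construction in the snippet can be sketched with threshold units; the example formula, a disjunction of conjunctions over binary literals, is an illustrative assumption:

```python
def threshold_neuron(inputs, theta):
    # McCulloch-Pitts unit: output 1 iff the sum of binary inputs reaches theta
    return 1 if sum(inputs) >= theta else 0

def clause_neuron(literals):
    # decoding neuron, threshold = n: fires only if ALL n literals are 1 (AND)
    return threshold_neuron(literals, len(literals))

def formula_neuron(clause_outputs):
    # output neuron, threshold = 1: fires if ANY clause neuron fires (OR)
    return threshold_neuron(clause_outputs, 1)

# hypothetical formula (x1 AND x2) OR (x3), evaluated on x = (1, 1, 0)
x1, x2, x3 = 1, 1, 0
clauses = [clause_neuron((x1, x2)), clause_neuron((x3,))]
assert formula_neuron(clauses) == 1  # first clause is satisfied
```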
Towards comprehensive foundations of computational intelligence.
... authors were wrong about the XOR (or the parity) problem which is easily solved by adding hidden neurons to the network, they were right about the topological invariants of patterns, in particular about the problem of connectedness (determining if the pattern is connected or disconnected). Such pro ...
Anthony Chang - Artificial Neural Networks in Protein Secondary Structure Predictions
... In a feed-forward neural network architecture, a unit will receive input from several nodes or neurons belonging to another layer. These highly interconnected neurons therefore form an infrastructure (similar to the biological central nervous system) that is capable of learning by successfully perfo ...
Learning Strengthens the Response of Primary Visual Cortex to
... The probable reason for the difference between single-unit results and our results is the difference in the trained task. Our study likely boosted the total response to the trained stimulus because this response is used relatively directly in detection. Similarly, the single-unit studies found changes in ...
The 18th European Conference on Artificial - CEUR
... have different FLIF parameters and learning parameters, including no learning. In CABot3, connectivity within a subnet is always sparse, but it varies between subnets; this connectivity may have some degree of randomness, but in some cases it is tightly specified by the developer to guarantee partic ...
1 Computational Intelligence - Chair 11: ALGORITHM ENGINEERING
... x1, …, xn McCulloch-Pitts neuron (1943): xi ∈ { 0, 1 } =: B ...
Lecture 01 Introduction to Artificial Neural Networks
... 1. Every clause gets a decoding neuron with threshold = n ⇒ output = 1 only if the clause is satisfied (AND gate) 2. All outputs of the decoding neurons are inputs of a neuron with threshold = 1 (OR gate) q.e.d. ...
Detection and Tracking of Liquids with Fully Convolutional Networks
... example of labeled data and its corresponding rendered image is shown in figure 2. The cup, bowl, and liquid are rendered as red, green and blue respectively. Note that this method allows each pixel to have multiple labels, e.g., some of the pixels in the cup are labeled as both cup and liquid (mage ...
Proceedings of 2013 BMI the Second International Conference on
... There is a growing body of research about the outcomes of using virtual avatars (and other mediated self-representations). For example, the Proteus Effect suggests that people behave in ways that conform to their avatars' characteristics, even after avatar use, e.g., using taller avatars leads to mo ...
NEUROGENESIS AND PLASTICITY OF THE ADULT HIPPOCAMPUS
... The difference in the activation of immature and mature GCs is due to GABAergic inhibition ...
Unsupervised Many-to-Many Object Matching for Relational Data
... groups in a language would have the same relations with groups in another language, e.g. group {car, automobile, motorcar} is connected to {drive, ride} in English, and {Wagen, Automobil} is connected to {fahren, treiben} in German. As further examples, social networks from different research labora ...
Temporal Lobe Epilepsy
... Therefore, there is a need for automatic classification of EEG signals. Classification is a decision-making task on which many researchers have been working. A number of techniques have been proposed to perform classification. The neural network is one of the artificial intelligence techniques that h ...
Research on Statistical Relational Learning at the
... sources, we can learn generalizations of them that allow us to map new sources automatically. We have done this successfully for relational and XML data [Doan et al., 2001; 2003b] and for Semantic Web ontologies [Doan et al., 2002] for the case of one-to-one mappings, and are currently extending our ...
An Auxiliary System for Medical Diagnosis Based on Bayesian
... experienced experts can use this network as an aid in diagnosis of sleep-disorders. An example of interaction with the SDDS follows: for a given patient, the initial subset of symptoms observed to be present or absent are entered through a radio button interface; this is shown on the left side of th ...
Learning Morphology by Itself1 - Mediterranean Morphology Meetings
... whether children grow up equipped with the same battery of knowledge biases. In other words: where do all these a priori assumptions about word structure come from for a learner? Can we identify some basic cognitive mechanisms that are primary and foundational in the ontogenetic development of languag ...
ppt - IIT Bombay
... Diagnostics in action (1) 1) If stuck in a local minimum, try the following: • Re-initialize the weight vector. • Increase the learning rate. • Introduce more neurons in the hidden layer. 2) If it is network paralysis, increase the number of neurons in the hidden layer. Problem: How to config ...
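The slide's first remedy (re-initialize the weight vector, then raise the learning rate) can be sketched as a restart wrapper; `train_fn` and all constants below are hypothetical stand-ins, not the lecture's code:

```python
import random

def retrain_with_restarts(train_fn, n_weights, max_restarts=5,
                          lr=0.1, error_goal=0.05):
    # if a run stalls in a local minimum (final error above the goal),
    # re-initialize the weight vector and increase the learning rate
    weights, error = [], float("inf")
    for _ in range(max_restarts):
        weights = [random.uniform(-0.5, 0.5) for _ in range(n_weights)]
        error = train_fn(weights, lr)
        if error <= error_goal:
            break
        lr *= 1.5  # bump the learning rate before the next attempt
    return weights, error
```

Here `train_fn` is any routine that runs backpropagation to convergence on a fixed training set and returns the final training error.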
Consolidation of motor memory
... training, adapted behavior strongly depends on the plasticity in the cerebellar cortex. However, passage of time reduces or eliminates this dependence. Work on retention of VOR adaptation is also compatible with the cascade model. Lesioning of the flocculus prevents adaptation of the horizontal VOR ...
Artificial neural network model for river flow forecasting
... Grijsen et al. (1992), Elmahi & O’Connor (1995), Shamseldin et al. (1999), Shamseldin & O’Connor (2003) and Antar et al. (2005). Thus, this paper will shed more light on potential data-driven models which can be used for flood forecasting on the Blue Nile. The ANN river flow forecasting models have ...
Search for the optimal strategy to spread a viral video: An agent
... the maximal number of friends in the population, 2) f2 = the number of an agent’s followers divided by the maximal number of followers in the population, 3) f3 = the number of an agent’s friends and followers divided by the maximal number of friends and followers in the population, 4) f4 = 1 − (the agent’s clu ...
Signaling in large-scale neural networks
... in the outside world. Because neurons process synaptic input and reduce information, it is impossible to reconstruct their input patterns entirely from their output. In addition, it is practically never possible to record all the presynaptic input patterns that give rise to particular output in a ne ...
Catastrophic interference
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information. Neural networks are an important part of the network and connectionist approaches to cognitive science; they use computer simulations to try to model human behaviours, such as memory and learning. Catastrophic interference is therefore an important issue to consider when creating connectionist models of memory. It was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989) and Ratcliff (1990). It is a radical manifestation of the ‘sensitivity-stability’ dilemma, also called the ‘stability-plasticity’ dilemma: the problem of building an artificial neural network that is sensitive to, but not disrupted by, new information.

Lookup tables and connectionist networks lie at opposite ends of the stability-plasticity spectrum. The former remain completely stable in the presence of new information but lack the ability to generalize, i.e. to infer general principles, from new inputs. Connectionist networks such as the standard backpropagation network, by contrast, are very sensitive to new information and can generalize from new inputs. Backpropagation models can be considered good models of human memory insofar as they mirror the human ability to generalize, but these networks often exhibit less stability than human memory. In particular, backpropagation networks are susceptible to catastrophic interference. This is a problem when modelling human memory because, unlike these networks, humans typically do not show catastrophic forgetting. The problem of catastrophic interference must therefore be mitigated in backpropagation models to enhance their plausibility as models of human memory.
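The forgetting described above can be reproduced in miniature with a single sigmoid unit trained by the delta rule; the two tiny "tasks" (task B simply reverses task A's targets) and all constants are illustrative assumptions, not an experiment from the literature:

```python
import math

def predict(w, x):
    # single sigmoid unit; the last input component is a constant 1 (bias)
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def train(w, task, lr=1.0, epochs=500):
    # online delta-rule updates on this task ONLY: no rehearsal of old tasks
    for _ in range(epochs):
        for x, t in task:
            y = predict(w, x)
            g = (t - y) * y * (1.0 - y)
            for i in range(len(w)):
                w[i] += lr * g * x[i]

def mean_error(w, task):
    return sum(abs(t - predict(w, x)) for x, t in task) / len(task)

# inputs carry a trailing 1 for the bias; task B conflicts with task A
task_a = [((1, 0, 1), 1.0), ((0, 1, 1), 0.0)]
task_b = [((1, 0, 1), 0.0), ((0, 1, 1), 1.0)]

w = [0.0, 0.0, 0.0]
train(w, task_a)
err_before = mean_error(w, task_a)  # low: task A has been learned
train(w, task_b)                    # sequential training on task B alone
err_after = mean_error(w, task_a)   # high: task A is abruptly forgotten
```

Because the same weights serve both tasks and nothing protects them, learning task B overwrites the solution for task A, which is the behaviour the article describes.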