TOPIC
ARTIFICIAL NEURAL NETWORKS
SUBMITTED BY:
CONTENTS
 Abstract
 Introduction
 Artificial Neuron
 Information Processing
 Artificial Neural Network
 Multilayer Perceptron
 Back propagation
 Charecterisation
 The Brain, Neural Networks And Computers
 Human And Artificial Neurons
 SEPARATION OF MIXED SIGNALS
 Firing Rules
 Why Use Neural Networks
 Neural Network Applications
 Summary
 References
ABSTRACT
This report is an introduction to Artificial Neural Networks. It first presents the
historical background of artificial neural networks and explains what a neuron
is. The various types of neural networks are then explained and demonstrated,
and the concept of backpropagation, by which ANNs learn, is introduced.
The report also explains how the human brain works, and briefly compares the
human brain with neural networks and computers. It then describes how artificial
neurons are modelled on human neurons.
Other topics covered are:
 Separation of mixed signals.
 Characteristics and firing rules.
The firing rules, and the reasons for using artificial neural networks, are also
explained, followed by real-life applications and a summary of the successes and
failures of ANNs.
Application areas include process modeling and control, machine diagnostics,
target recognition, credit rating, medical diagnosis, voice recognition, financial
forecasting, fraud detection, quality control and intelligent searching.
INTRODUCTION
What They Are
A neural network is, in essence, an attempt to simulate the brain. Neural
network theory revolves around the idea that certain key properties of biological
neurons can be extracted and applied to simulations, thus creating a simulated
(and very much simplified) brain. The first important thing to understand then is
that the components of an artificial neural network are an attempt to recreate the
computing potential of the brain. The second important thing to understand,
however, is that no one has ever claimed to simulate anything as complex as an
actual brain. Whereas the human brain is estimated to have something on the
order of ten to a hundred billion neurons, a typical artificial neural network (ANN)
is not likely to have more than 1,000 artificial neurons.
Before discussing the specifics of artificial neural nets though, let us examine what
makes real neural nets - brains - function the way they do. Perhaps the single most
important concept in neural net research is the idea of connection strength. Neuroscience
has given us good evidence for the idea that connection strengths - that is, how strongly
one neuron influences those neurons connected to it - are the real information holders in
the brain. Learning, repetition of a task, even exposure to a new or continuing stimulus
can cause the brain's connection strengths to change, some synaptic connections
becoming reinforced and new ones being created, others weakening or in some cases
disappearing altogether. When a neuron fires, it excites or inhibits the neurons connected
to it, and the amount of excitation or inhibition produced is, of course, dependent on the
connection strength - a stronger connection means more inhibition or
excitation, a weaker connection means less. The third important component in
determining a neuron's response is called the transfer function. Without getting into more
technical detail, the transfer function describes how a neuron's firing rate varies with the
input it receives. A very sensitive neuron may fire with very little input, for example. A
neuron may have a threshold, and fire rarely below threshold, and vigorously above it. A
neuron may have a bell-curve style firing pattern, increasing its firing rate up to a
maximum, and then levelling off or decreasing when over-stimulated.
ARTIFICIAL NEURON
An artificial neuron (also called a "node" or "Nv neuron") is the basic unit in
an artificial neural network. Artificial neurons are simulations of biological
neurons, and they are typically functions from many dimensions to one
dimension. They receive one or more inputs and sum them to produce one
output. Usually the inputs to each node are weighted, and the weighted sum is
passed through a non-linear function known as an activation or transfer function.
The canonical transfer function is the sigmoid, but transfer functions may also
take the form of other non-linear functions, piecewise linear functions, or step
functions. Generally, transfer functions are monotonically increasing.
A neural network is an interconnected group of nodes, akin to the vast
network of neurons in the human brain.
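The weighted-sum-plus-sigmoid behaviour described above can be sketched in a few lines of Python; the input, weight, and bias values below are illustrative, not from the report:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of the inputs, passed through a sigmoid transfer function."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid squashes the sum into (0, 1)

# three weighted inputs plus a bias; the output is the neuron's activation
out = artificial_neuron([0.5, 0.3, 0.2], [0.4, 0.7, -0.2], bias=0.1)
```

Because the sigmoid is monotonically increasing, a larger weighted sum always produces a larger (but bounded) output, matching the description above.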
INFORMATION PROCESSING
In general, information processing is the changing (processing) of information in
any manner detectable by an observer. As such, it is a process that describes
everything that happens (changes) in the universe, from the falling of a rock (a
change in position) to the printing of a text file from a digital computer system. In
the latter case, an information processor is changing the form of presentation of
that text file. Information processing can more specifically be defined in terms of
the conversion of latent information into manifest information. Latent and manifest
information are defined through the terms equivocation (remaining uncertainty
about what value the sender has actually chosen), dissipation (uncertainty of the
sender about what the receiver has actually received) and transformation (saved
effort of questioning - equivocation minus dissipation).
NEURAL NETWORK
What is a Neural Network?
A neural network is a powerful data-modeling tool that is able to capture and
represent complex input/output relationships. The motivation for the development
of neural network technology stemmed from the desire to develop an artificial
system that could perform "intelligent" tasks similar to those performed by the
human brain. Neural networks resemble the human brain in the following two
ways:
1. A neural network acquires knowledge through learning.
2. A neural network's knowledge is stored within inter-neuron connection
strengths known as synaptic weights.
The true power and advantage of neural networks lies in their ability to represent
both linear and non-linear relationships and in their ability to learn these
relationships directly from the data being modeled. Traditional linear models are
simply inadequate when it comes to modeling data that contains non-linear
characteristics.
Neural networks can be defined as follows.
[Figure: simplified view of an artificial neural network]
A neural network is a system of interconnecting neurons in a network working
together to produce an output function. The output of a neural network relies on
the cooperation of the individual neurons within the network.
Processing of information by neural networks is often done in parallel rather than
in series (or sequentially). Since it relies on its member neurons collectively to
perform its function, a unique property of a neural network is that it can still
perform its overall function even if some of the neurons are not functioning. That
is, they are very robust to error or failure.
ARTIFICIAL NEURAL NETWORK
Artificial neural networks are made up of interconnecting artificial
neurons (usually simplified neurons) designed to model (or mimic) some
properties of biological neural networks. Artificial neural networks can be used to
model the modes of operation of biological neural networks, whereas cognitive
models are theoretical models that mimic cognitive brain functions without
necessarily using neural networks, while artificial intelligence consists of well-crafted
algorithms that solve specific intelligent problems (such as chess playing, pattern
recognition, etc.) without using a neural network as the computational architecture.
An artificial neural network (ANN), or commonly just neural network (NN), is an
interconnected group of artificial neurons that uses a mathematical and
computational model for information processing based on a connectionist approach
to computation. In most cases an ANN is an adaptive system that changes its
structure based on external or internal information that flows through the network.
Artificial neural networks are adaptive models that can learn from data and
generalize what they have learned. They extract the essential characteristics from the
numerical data as opposed to memorizing all of it. This offers a convenient way
to reduce the amount of data as well as to form an implicit model without having
to form a traditional, physical model of the underlying phenomenon. In contrast to
traditional models, which are theory-rich and data-poor, neural networks are
data-rich and theory-poor, in the sense that little or no a priori knowledge of the
problem is required. Neural networks can thus be used to build mappings from
the inputs to the outputs of such black-box systems, whose internal behavior is
not usually known.
In more practical terms, neural networks are non-linear statistical data modeling
tools. They can be used to model complex relationships between inputs and
outputs or to find patterns in data.
MULTILAYER PERCEPTRON
The most common neural network model is the multilayer perceptron (MLP). This
type of neural network is known as a supervised network because it requires a
desired output in order to learn. The goal of this type of network is to create a
model that correctly maps the input to the output using historical data so that the
model can then be used to produce the output when the desired output is
unknown. A graphical representation of an MLP is shown below.
Block diagram of a two-hidden-layer multilayer perceptron (MLP). The inputs are fed into the
input layer and get multiplied by interconnection weights as they are passed from the input layer
to the first hidden layer. Within the first hidden layer, they get summed and then processed by a
nonlinear function (usually the hyperbolic tangent). As the processed data leaves the first hidden
layer, it is again multiplied by interconnection weights, then summed and processed by the
second hidden layer. Finally, the data is multiplied by interconnection weights and processed one
last time within the output layer to produce the neural network output.
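The forward pass described above can be sketched in Python with NumPy; the layer sizes and random weights below are illustrative (a trained network would have learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Forward pass of a two-hidden-layer MLP: multiply by interconnection
    weights, sum, and apply tanh in each hidden layer, then take a final
    weighted sum in the output layer."""
    h1 = np.tanh(weights[0] @ x)   # input layer -> first hidden layer
    h2 = np.tanh(weights[1] @ h1)  # first hidden -> second hidden layer
    return weights[2] @ h2         # second hidden -> output layer

# illustrative sizes: 3 inputs, two hidden layers of 4 units, 1 output
W = [rng.standard_normal((4, 3)),
     rng.standard_normal((4, 4)),
     rng.standard_normal((1, 4))]
y = mlp_forward(np.array([0.1, -0.2, 0.3]), W)
```

Each matrix multiplication corresponds to one set of interconnection weights in the block diagram, and each `tanh` is the hyperbolic-tangent nonlinearity mentioned above.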
BACKPROPAGATION
Backpropagation is the algorithm by which the MLP and many other neural
networks learn. With backpropagation, the input data is repeatedly presented to
the neural network. With each presentation, the output of the neural network is
compared to the desired output and an error is computed. The error decreases
with each iteration and the neural model gets closer and closer to producing the
desired output. This process is known as "training".
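A minimal illustration of this training loop, using a single sigmoid neuron and gradient descent on a toy data set (the logical AND function); the data set, learning rate, and epoch count are illustrative assumptions, not from the report:

```python
import math
import random

random.seed(0)

# toy training set: the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 1.0

errors = []  # sum-squared error after each presentation of the data
for epoch in range(2000):
    total = 0.0
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out               # compare output with desired output
        total += err * err
        delta = err * out * (1.0 - out)  # gradient of the squared error
        w[0] += lr * delta * x1          # adjust weights to reduce the error
        w[1] += lr * delta * x2
        b += lr * delta
    errors.append(total)
```

As the text describes, the error recorded in `errors` shrinks with each iteration as the model gets closer to producing the desired output.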
CHARACTERISATION
In general, a biological neural network is composed of a group or groups of
physically connected or functionally associated neurons. A single neuron can be
connected to many other neurons and the total number of neurons and
connections in a network can be extremely large. Connections, called synapses,
are usually formed from axons to dendrites, though dendro-dendritic microcircuits
and other connections are possible. Apart from electrical signalling, there are
other forms of signalling that arise from neurotransmitter diffusion, which have an
effect on electrical signalling. As such, neural networks are extremely complex.
While a detailed description of neural systems seems currently unattainable,
progress is being made towards a better understanding of basic mechanisms.
Artificial intelligence and cognitive modeling try to simulate some properties of
neural networks. While similar in their techniques, the former has the aim of
solving particular tasks, while the latter aims to build mathematical models of
biological neural systems.
In the artificial intelligence field, artificial neural networks have been applied
successfully to speech recognition, image analysis and adaptive control, in
order to construct software agents (in computer and video games) or
autonomous robots. Most of the currently employed artificial neural networks
for artificial intelligence are based on statistical estimation, optimisation and
control theory.
The cognitive modelling field is the physical or mathematical modelling of the
behaviour of neural systems; ranging from the individual neural level (e.g.
modelling the spike response curves of neurons to a stimulus), through the
neural cluster level (e.g. modelling the release and effects of dopamine in the
basal ganglia) to the complete organism (e.g. behavioural modelling of the
organism's response to stimuli).
THE BRAIN, NEURAL NETWORKS AND COMPUTERS
While historically the brain has been viewed as a type of computer, and vice
versa, this is true only in the loosest sense. Computers do not provide us with
accurate hardware for describing the brain (even though it is possible to describe
a logical process as a computer program or to simulate a brain using a computer),
as they do not possess the parallel processing architectures that have been
described in the brain. Even when speaking of multiprocessor computers, the
functions are not nearly as distributed as in the brain.
Neural networks, as used in artificial intelligence, have traditionally been viewed
as simplified models of neural processing in the brain, even though the relation
between this model and brain biological architecture is very much debated.
HUMAN AND ARTIFICIAL NEURONES
INVESTIGATING THE SIMILARITIES
How Does the Human Brain Learn?
Much is still unknown about how the brain trains itself to process information, so
theories abound. In the human brain, a typical neuron collects signals from
others through a host of fine structures called dendrites. The neuron sends out
spikes of electrical activity through a long, thin strand known as an axon, which
splits into thousands of branches. At the end of each branch, a structure called a
synapse converts the activity from the axon into electrical effects that inhibit or
excite activity in the connected neurones. When a neuron receives excitatory
input that is sufficiently large compared with its inhibitory input, it sends a spike of
electrical activity down its axon. Learning occurs by changing the effectiveness of
the synapses so that the influence of one neuron on another changes.
From Human Neurones to Artificial Neurones
We construct these neural networks by first trying to deduce the essential features
of neurones and their interconnections. We then typically program a computer to
simulate these features. However because our knowledge of neurones is
incomplete and our computing power is limited, our models are necessarily gross
idealisations of real networks of neurones.
[Figure: the neuron model]
SEPARATION OF MIXED SIGNALS OR INDEPENDENT
COMPONENT ANALYSIS (ICA)
The Separation of Independent Sources (SIS) is a challenging problem with
significant potential applications. The problem is informally described as follows:
several unknown but independent temporal signals propagate through a mixing
and/or filtering, natural or synthetic, medium. By sensing outputs of this medium,
a network (e.g., a neural network, a system, or a device) is configured to
counteract the effect of the medium and to adaptively recover the original,
unmixed signals. Only the property of signal independence is assumed for this
processing; no additional a priori knowledge of the original signals is required.
This processing represents a form of self (or unsupervised) learning. The weak
assumptions and self-learning capability make such a network attractive from
the viewpoint of real-world applications.
This neural network approach to the signal processing problem of SIS has great
advantages over existing adaptive filtering algorithms. For example, when the
mixture of other signals is labeled as noise, this approach assumes no specific
a priori knowledge about any of the signals; only that the processed signals
are independent. This is in contrast to the noise cancellation method proposed
by Widrow, which requires that a reference signal be correlated exclusively with
the part of the waveform (i.e. the noise) that needs to be filtered out. This latter
requirement entails specific a priori knowledge about the noise as well as the
signal(s).
The separation of independent sources is valuable in numerous and major
applications in areas as diverse as telecommunication systems, sonar and radar
systems, audio and acoustics, image/information processing, and biomedical
engineering. Consider, e.g., the audio and sonar scenario when the original
signals are sounds and the mixed signals are the output of several microphones
or sensors placed at different vantage points. A network will receive, via each
microphone, a mixture of sounds that are usually delayed relative to one another
and/or the original sounds. The network's role is then to dynamically reproduce
the original signals, where each separated signal can be subsequently channeled
for further processing or transmission. Similar application scenarios can be
described in situations involving heart-rate measurements, communication in
noisy environments, engine diagnostics, and uncorrupted cellular phone
communications.
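As an illustrative sketch of this idea (not the specific network described in the report), the following Python code mixes two independent synthetic signals through an assumed mixing matrix and recovers them with a symmetric FastICA-style fixed-point iteration; the signals, mixing matrix, and iteration count are all assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
t = np.linspace(0, 8, n)

# two independent, unknown source signals
s1 = np.sin(2 * np.pi * t)                 # sine wave
s2 = np.sign(np.sin(3 * np.pi * t + 0.5))  # square wave
S = np.vstack([s1, s2])

# the mixing medium: each "microphone" observes a different mixture
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# centre and whiten the sensor outputs
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# symmetric FastICA-style fixed-point iteration with a tanh contrast,
# relying only on the assumed independence of the sources
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                             # keep the rows of W decorrelated

Y = W @ Z                                  # recovered (unmixed) signals
# |correlation| between each original source and each recovered component
corr = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
```

The recovered signals match the sources only up to sign, scale, and ordering, which is exactly the ambiguity left when nothing but independence is assumed.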
FIRING RULES
The firing rule is an important concept in neural networks and accounts for their
high flexibility. A firing rule determines how one calculates whether a neuron
should fire for any input pattern. It relates to all the input patterns, not only the
ones on which the node was trained.
A simple firing rule can be implemented by using the Hamming distance technique.
The rule goes as follows:
Take a collection of training patterns for a node, some of which cause it to fire
(the 1-taught set of patterns) and others which prevent it from doing so (the
0-taught set). A pattern not in the collection then causes the node to fire if, on
comparison, it has more input elements in common with the 'nearest' pattern
in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a
tie, the pattern remains in the undefined state.
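A minimal sketch of this rule in Python; the 1-taught and 0-taught pattern sets below are illustrative:

```python
def hamming(a, b):
    """Number of positions at which two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fires(pattern, one_taught, zero_taught):
    """Fire (1) if the pattern is closer to the nearest 1-taught pattern than
    to the nearest 0-taught pattern, don't fire (0) in the opposite case,
    and return None (undefined state) on a tie."""
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None  # undefined state on a tie

# illustrative taught sets for a 3-input node
one_taught = [(1, 1, 1), (1, 0, 1)]   # patterns that cause the node to fire
zero_taught = [(0, 0, 0), (0, 1, 0)]  # patterns that prevent firing
```

Note that the rule also assigns behaviour to patterns the node was never taught, such as ties falling into the undefined state, which is the source of the flexibility described above.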
WHY USE NEURAL NETWORKS?
Neural networks, with their remarkable ability to derive meaning from
complicated or imprecise data, can be used to extract patterns and detect trends
that are too complex to be noticed by either humans or other computer
techniques. A trained neural network can be thought of as an "expert" in the
category of information it has been given to analyse. This expert can then be
used to provide projections given new situations of interest and answer "what if"
questions.
Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data
given for training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or
representation of the information it receives during learning time.
3. Real-Time Operation: ANN computations may be carried out in
parallel, and special hardware devices are being designed and
manufactured which take advantage of this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction
of a network leads to a corresponding degradation of performance.
However, some network capabilities may be retained even with major
network damage.
NEURAL NETWORK APPLICATIONS
The utility of artificial neural network models lies in the fact that they can be used
to infer a function from observations. This is particularly useful in applications
where the complexity of the data or task makes the design of such a function by
hand impractical.
Character recognition is a classic example, but it is not the only problem that
neural networks can solve. Neural networks have been successfully applied to a
broad spectrum of data-intensive applications, such as:

Process Modeling and Control - Creating a neural network model for a
physical plant then using that model to determine the best control settings
for the plant.

Machine Diagnostics - Detect when a machine has failed so that the
system can automatically shut down the machine when this occurs.

Portfolio Management - Allocate the assets in a portfolio in a way that
maximizes return and minimizes risk.

Target Recognition - Military application which uses video and/or infrared
image data to determine if an enemy target is present.

Medical Diagnosis - Assisting doctors with their diagnosis by analyzing
the reported symptoms and/or image data such as MRIs or X-rays.

Targeted Marketing - Finding the set of demographics which have the
highest response rate for a particular marketing campaign.

Voice Recognition - Transcribing spoken words into ASCII text.

Financial Forecasting - Using the historical data of a security to predict
the future movement of that security.

Quality Control - Attaching a camera or sensor to the end of a production
process to automatically inspect for defects.

Intelligent Searching - An internet search engine that provides the most
relevant content and banner ads based on the users' past behavior.

Fraud Detection - Detect fraudulent credit card transactions and
automatically decline the charge.
SUMMARY
It is fine in theory to talk about neural nets that can tell males from females, but
if that were all they were useful for, they would be a sad project indeed. In fact,
neural nets have been enjoying growing success in a number of fields and,
significantly, their successes tend to be in fields that posed large difficulties
for symbolic AI. Neural networks are, by design, pattern processors: they can
identify trends and important features, even in relatively complex information.
What's more, they can work with less-than-perfect information, such as blurry
or static-filled pictures, which has been an insurmountable difficulty for
symbolic AI systems. Discerning patterns allows neural nets to read
handwriting, detect potential sites for new mining and oil extraction, predict
the stock market, and even learn to drive.
REFERENCES
www.neurosolutions.com
www.neuralnetworks.com