Unsupervised Machine Learning
By Scott Renkes
[email protected]
Agenda
• Objectives
• The Synapse
• Hebb’s Rule, Decay, and Asymptotic Hebbian Learning
• Cases and Mechanisms of Synaptic Feedback
• Bilinear Hebbian Learning Rule
• Habituation
• Other Unsupervised Learning
Objectives
• Identify the difference between Unsupervised/Supervised and Online/Offline learning
• Know how a synapse works
• Understand Hebbian learning
• Understand the mechanisms of synaptic feedback
• Know the basics of habituation
• Identify other types of unsupervised learning
Unsupervised Learning
• Supervised learning is guided
• Data is labeled
• An outside observer sets the success criteria
• Unsupervised learning is not guided
• No labels
• The algorithm is left to its own devices
• Usually used for clustering or pattern recognition
The Synapse
• Action potential reaches the terminal
• Voltage opens Ca++ channels
• Ca++ causes vesicles to move to and fuse with the cell membrane
• Releases neurotransmitters into the synaptic cleft
• Neurotransmitters bind to ion channels
• Causes the postsynaptic neuron to start depolarizing
• Enzymes break down the neurotransmitters, resetting the mechanism
Hebb’s Rule
• The more a synapse is used, the stronger the connection gets
• Useful for associative memory
• Hebb’s Rule models this relationship
• Δw_ij = ε·x_i·y_j
• Unstable: weights grow without bound
• Hebb’s Rule with decay
• Δw_ij = ε·x_i·y_j − λ·w_ij
• Weights decrease when there is no activity
• Asymptotic Hebbian Learning
• Δw_ij = ε·y_j·(c·x_i − w_ij)
• Weights saturate at a maximum set by c
• All three rules are sketched in code below
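A minimal sketch of the three update rules in Python with NumPy. The weight matrix w is (pre, post); the rates eps and lam and the ceiling c are illustrative constants, not values from the slides.

import numpy as np

def hebb(w, x, y, eps=0.01):
    # Plain Hebb's Rule: dw_ij = eps * x_i * y_j (grows without bound)
    return w + eps * np.outer(x, y)

def hebb_decay(w, x, y, eps=0.01, lam=0.001):
    # Hebb's Rule with decay: unused weights shrink back toward zero
    return w + eps * np.outer(x, y) - lam * w

def hebb_asymptotic(w, x, y, eps=0.01, c=1.0):
    # Asymptotic Hebbian Learning: dw_ij = eps * y_j * (c*x_i - w_ij),
    # so each weight saturates near c * x_i instead of growing forever
    return w + eps * y[None, :] * (c * x[:, None] - w)

# Quick check: repeated co-activation strengthens the weights, then saturates.
w = np.zeros((2, 2))
x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
for _ in range(1000):
    w = hebb_asymptotic(w, x, y)
print(w)  # row for the active input approaches c = 1.0; the other stays 0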
Synaptic Feedback
• Neurons have feedback that relays whether or not their activity causes another neuron to activate
• Protein pathways
• Chemo feedback
• Electric fields
• Glial cells
• How do we model this?
• A mechanism for each case
Bilinear Hebbian Learning
• Hebb’s Rule
• ε·x_i·y_j
• Presynaptic Depression
• β·x_i
• Postsynaptic Depression
• γ·y_j
• Decay
• λ·w_ij
• Bilinear Hebbian Learning
• Δw_ij = ε·x_i·y_j − β·x_i − γ·y_j − λ·w_ij
• Each learning value can be a functional instead of a constant
• GABA feedback as an example (see the sketch below)
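A sketch of the bilinear rule in the same NumPy setup; beta, gamma, and lam are illustrative constants. Per the slide, any of the coefficients could instead be a function of recent activity, for example to model GABA feedback.

import numpy as np

def bilinear_hebb(w, x, y, eps=0.01, beta=0.002, gamma=0.002, lam=0.001):
    # dw_ij = eps*x_i*y_j - beta*x_i - gamma*y_j - lam*w_ij
    return (w
            + eps * np.outer(x, y)   # Hebbian growth
            - beta * x[:, None]      # presynaptic depression
            - gamma * y[None, :]     # postsynaptic depression
            - lam * w)               # passive decay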
Habituation
• The mechanism for ignoring repeated activity
• Socks on your feet
• A turtle and a ping-pong ball
• Can be modeled with Hebb’s Rule with a reversed weight
• Requires hysteresis
• Similar, repeated inputs get inhibited
• The less similar the next input, the more the inhibition is released
• Can use a logic table (sketched in code after the table below)
• y_j = H·x_i
Case               Rule
Similar signal     ΔH = ε·x_i − λ·H
Different signal   ΔH = −ε·x_i − λ·H
No signal          ΔH = −λ·H
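A scalar sketch of the logic table, reading H as the habituation (reversed, inhibitory) weight and y_j = H·x_i as the inhibitory signal. The similarity test (tol) and all constants are assumptions; the slides leave them unspecified.

def habituation_step(H, x, x_prev, eps=0.1, lam=0.05, tol=0.1):
    # One update of the habituation weight H per the logic table.
    # tol is an assumed threshold for calling two inputs "similar".
    if x == 0:                      # no signal: inhibition just decays
        dH = -lam * H
    elif abs(x - x_prev) <= tol:    # similar signal: inhibition builds
        dH = eps * x - lam * H
    else:                           # different signal: inhibition releases
        dH = -eps * x - lam * H
    return H + dH

# Twenty repeats of the same input drive H up; a novel input knocks it down.
H, prev = 0.0, 0.0
for x in [1.0] * 20 + [3.0]:
    H = habituation_step(H, x, prev)
    prev = x
print(H)  # noticeably lower than the level reached during the repeats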
Other Unsupervised Learning
• K-means clustering
• Groups data points around k cluster means (a quick demo follows below)
• Self-Organizing Maps (SOM)
• Unsupervised ANN
• Trained with a neighborhood function
• Hierarchical clustering
• Builds a tree based on feature separation
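For k-means, a quick demo with scikit-learn (assumed available); the data and k = 2 are made up for illustration.

import numpy as np
from sklearn.cluster import KMeans

pts = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],   # blob near the origin
                [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])  # blob near (5, 5)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
print(km.labels_)           # cluster assignment for each point
print(km.cluster_centers_)  # the two cluster means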
Questions?
Future Classes
• Op Amp fundamentals
• Amplifier Design
• Analog Filter Design
• High-level control of muscles / intro to neural networks
• Brain and spinal control of muscles
• Connect a goniometer and EMG to a neural network
• Train the NN to output position data based on hysteresis and muscle activation