ppt2 - Soft Computing Lab
Dynamic Time Warping and Neural Network
J.-Y. Yang, J.-S. Wang, and Y.-P. Chen, "Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers," Pattern Recognition Letters, vol. 29, no. 16, pp. 2213-2220, 2008.
Spring Semester, 2010
Outline
 Background
 Activity Recognition Strategy
 Experiments
 Summary
Background
 Accelerometers can be used as a human motion
detector and monitoring device
– Biomedical engineering, medical nursing, interactive
entertainment, …
– Exercise intensity / distance, sleep cycle, and calorie
consumption
Background: Proposed Method Overview
 One 3-D accelerometer on the dominant wrist
 NNs
– Pre-classifier → static classifier or dynamic classifier
 Eight domestic activities
– Standing, sitting, walking, running, vacuuming, scrubbing,
brushing teeth, and working at a computer
Background: Neural Classifier
 Neurons in the Brain
– A neuron receives input from other neurons (generally
thousands) through its synapses
– Inputs are approximately summed
– When the input exceeds a threshold the neuron sends an
electrical spike that travels from the body, down the axon, to
the next neuron(s)
Background: Neurons in the Brain (cont.)
 Amount of signal passing through a neuron
depends on:
– Intensity of signal from feeding neurons
– Their synaptic strengths
– Threshold of the receiving neuron
 Hebb rule (plays a key part in learning)
– A synapse that repeatedly triggers the activation of a
postsynaptic neuron grows in strength, while others
gradually weaken
– Learning is done by adjusting the magnitudes of synaptic strengths
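The Hebb rule above can be sketched as a toy weight update; the learning rate and decay constant here are illustrative assumptions, not values from the slides.

```python
# Illustrative Hebbian update (assumed lr and decay): a synapse
# strengthens when pre- and post-synaptic activity coincide, and
# slowly decays when unused.

def hebb_update(weight, pre, post, lr=0.1, decay=0.01):
    """Strengthen the synapse in proportion to correlated activity,
    with a small decay so idle synapses gradually weaken."""
    return weight + lr * pre * post - decay * weight

w = 0.5
# Repeated co-activation grows the synapse...
for _ in range(10):
    w = hebb_update(w, pre=1.0, post=1.0)
# ...while a silent presynaptic input lets it decay.
w_idle = hebb_update(0.5, pre=0.0, post=1.0)
```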
Background: Artificial Neurons
[Figure: an artificial neuron. Inputs x1, x2, x3 are weighted by w1, w2, w3 and summed, A = Σ w·x; the sum passes through an activation function f(A) mapping to the range −1 to +1, e.g. a step function or a sigmoid.]
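The artificial neuron in the figure can be sketched in a few lines; the weights and inputs below are arbitrary example values.

```python
import math

def neuron(inputs, weights, activation):
    """Weighted sum A = sum(w * x), passed through an activation f(A)."""
    a = sum(w * x for w, x in zip(weights, inputs))
    return activation(a)

def step(a):
    """Hard threshold: +1 if the summed input is positive, else -1."""
    return 1.0 if a > 0 else -1.0

def sigmoid(a):
    """Smooth squashing to the open interval (-1, +1), matching the
    figure's sigmoid that saturates at -1 and +1."""
    return 2.0 / (1.0 + math.exp(-a)) - 1.0

# Same inputs and weights, two activation choices.
y_hard = neuron([0.5, -0.2, 0.1], [1.0, 2.0, 3.0], step)
y_soft = neuron([0.5, -0.2, 0.1], [1.0, 2.0, 3.0], sigmoid)
```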
Background: Neural Classifier (Perceptron)
 Structure
 Learning
– Weights are changed in proportion to the difference (error)
between target output and perceptron solution for each
example
– Back-propagation algorithm
• Gradient descent; slow convergence and local minima
– Resilient back-propagation (RPROP)
• Ignores the magnitude of the gradient and uses only its sign
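The RPROP idea can be sketched on a one-dimensional problem; the step-size factors 1.2 and 0.5 are the commonly cited defaults, assumed here rather than taken from the slides.

```python
# Minimal RPROP-style sketch: only the *sign* of the gradient drives
# the update, while a per-weight step size grows when the sign is
# stable and shrinks when it flips (i.e. after an overshoot).

def rprop_minimize(grad, w, steps=60, delta=0.1,
                   eta_plus=1.2, eta_minus=0.5,
                   delta_min=1e-6, delta_max=1.0):
    prev_sign = 0
    for _ in range(steps):
        g = grad(w)
        sign = (g > 0) - (g < 0)
        if sign * prev_sign > 0:      # gradient sign stable: accelerate
            delta = min(delta * eta_plus, delta_max)
        elif sign * prev_sign < 0:    # sign flipped: back off
            delta = max(delta * eta_minus, delta_min)
        w -= sign * delta             # magnitude of g is ignored
        prev_sign = sign
    return w

# Minimizing f(w) = (w - 3)^2 via its gradient 2(w - 3).
w_opt = rprop_minimize(lambda w: 2.0 * (w - 3.0), w=0.0)
```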
Activity Recognition Strategy
 Pre-Classifier
 Static/Dynamic Classifier
Activity Recognition Strategy: Pre-Classifier (1/2)
 Two components of the acceleration data
– Gravitational acceleration (GA)
– Body acceleration (BA): High-pass filtering to remove GA
 Segmentation with overlapping windows
– 512 samples per window
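The two pre-processing steps can be sketched as follows; the slides do not specify the filter design, so a simple first-order IIR high-pass with an illustrative constant alpha = 0.95 is assumed here.

```python
def high_pass(signal, alpha=0.95):
    """Suppress the slowly varying gravitational component (GA),
    leaving the body acceleration (BA)."""
    out = [0.0]
    for i in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out

def segment(signal, win=512, step=256):
    """Cut the stream into 512-sample windows with 50% overlap."""
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

gravity_only = [9.81] * 100           # constant signal: pure GA
ba = high_pass(gravity_only)          # filtered to (near) zero
windows = segment(list(range(6000)))  # one minute at 100 Hz
```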
Activity Recognition Strategy: Pre-Classifier (2/2)
 SMA (Signal Magnitude Area)
– The sum of acceleration magnitude over three axes
 AE (Average Energy)
– Average of the energy over three axes
– Energy: The sum of the squared discrete FFT component
magnitudes of the signal in a window
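The two pre-classifier features can be sketched directly from their definitions; the normalisation by window length is an assumption, as the slides do not give the exact formulas.

```python
import cmath, math

def sma(ax, ay, az):
    """Signal Magnitude Area: summed absolute acceleration over the
    three axes (normalised here by window length)."""
    n = len(ax)
    return sum(abs(x) + abs(y) + abs(z)
               for x, y, z in zip(ax, ay, az)) / n

def energy(signal):
    """Sum of squared DFT component magnitudes (DC bin excluded),
    divided by window length."""
    n = len(signal)
    mags = [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n)]
    return sum(m * m for m in mags) / n

def average_energy(ax, ay, az):
    """AE: the energy averaged over the three axes."""
    return (energy(ax) + energy(ay) + energy(az)) / 3.0

still = [0.0] * 32                                  # e.g. standing
moving = [math.sin(2 * math.pi * t / 8.0) for t in range(32)]
ae_still = average_energy(still, still, still)
ae_moving = average_energy(moving, moving, moving)  # clearly larger
```

A static posture yields near-zero AE while periodic motion yields a large one, which is what lets the pre-classifier route windows to the static or dynamic classifier.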
Activity Recognition Strategy: Feature Extraction
 8 attributes × 3 axes = 24 features
– Mean, correlation between axes, energy, interquartile range
(IQR), mean absolute deviation, root mean square, standard
deviation, variance
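Most of the listed per-axis statistics can be computed with the standard library alone; this sketch omits energy (which needs an FFT) and uses a toy 4-sample window rather than the paper's 512-sample one.

```python
import statistics as st

def correlation(a, b):
    """Pearson correlation between two axes of one window."""
    ma, mb = st.mean(a), st.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (len(a) * st.pstdev(a) * st.pstdev(b))

def axis_features(a):
    """Per-axis statistics from the slide (energy omitted here)."""
    m = st.mean(a)
    q1, _, q3 = st.quantiles(a, n=4)
    return {
        "mean": m,
        "iqr": q3 - q1,                           # interquartile range
        "mad": st.mean(abs(x - m) for x in a),    # mean absolute deviation
        "rms": st.mean(x * x for x in a) ** 0.5,  # root mean square
        "std": st.pstdev(a),
        "var": st.pvariance(a),
    }

window = [1.0, 2.0, 3.0, 4.0]  # toy one-axis window
feats = axis_features(window)
```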
Activity Recognition Strategy: Feature Selection (1/2)
 Common principal component analysis (CPCA)
 If features are highly correlated, the corresponding loading
vectors are similar
→ Clustering is used to group similar loadings
Activity Recognition Strategy: Feature Selection (2/2)
 Apply PCA
 Select the first p PCs (cumulative variance sum > 90%)
 Estimate CPC
 Support vector clustering
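Only the cumulative-variance selection step is sketched here (the eigen-decomposition itself is left to a linear-algebra library); the eigenvalue spectrum below is hypothetical.

```python
def select_p(eigenvalues, threshold=0.90):
    """Return the smallest p whose cumulative explained-variance
    ratio exceeds the threshold."""
    total = sum(eigenvalues)
    cum = 0.0
    for p, lam in enumerate(eigenvalues, start=1):
        cum += lam
        if cum / total > threshold:
            return p
    return len(eigenvalues)

# Hypothetical spectrum: the first three PCs carry over 90%.
p = select_p([5.0, 2.5, 1.8, 0.4, 0.3])
```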
Activity Recognition Strategy: Verification
Experiments: Environment (1/2)
 MMA7260Q tri-axial accelerometer
– Measurement range: −4.0 g to +4.0 g; sampling rate: 100 Hz
– Mounted on the dominant wrist
 Eight activities from seven subjects
– Standing, sitting, walking,
running, vacuuming,
scrubbing, brushing teeth,
and working at a computer
– 2 min per activity
Experiments: Environment (2/2)
 Window size = 512 (with 256 overlapping)
– 22 windows in one min., 45 windows in two min.
 Leave-one-subject-out cross-validation
– Training: 1 min per activity = 22 windows × 8 activities × 6 subjects
– Test: 2 min per activity = 45 windows × 8 activities
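The window counts quoted above follow directly from the window size and overlap; a quick arithmetic check:

```python
def n_windows(n_samples, win=512, step=256):
    """Number of `win`-sample windows with `win - step` samples
    of overlap that fit in a recording."""
    return (n_samples - win) // step + 1

# 100 Hz sampling: 1 min = 6000 samples, 2 min = 12000 samples.
train_windows = n_windows(6000)   # per activity and subject
test_windows = n_windows(12000)   # per activity
```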
Experiments: FSS Evaluation
 Uses six statically selected features
Experiments: Recognition Results
 NN
– Hidden nodes
• Pre-classifier: 3
• Static-classifier: 5
• Dynamic-classifier: 7
– Epochs: 500
 Computational load of FSS
– Training without FSS = 7.457 s, training with FSS = 8.46 s
Summary
 The proposed method yielded 95% accuracy
– Pre-classifier → static / dynamic classifiers
 Authors’ other publication
– Yen-Ping Chen, Jhun-Ying Yang, Shun-Nan Liou, Gwo-Yun Lee, and Jeen-Shing Wang, "Online classifier construction algorithm for human activity detection using a tri-axial accelerometer," Applied Mathematics and Computation, vol. 205, no. 2, pp. 849-860, 2008.