CSE 473/573
Computer Vision and Image Processing (CVIP)
Ifeoma Nwogu
[email protected]
Lecture 19 – Dense motion estimation (optical flow)
Schedule
• Last class
  – We finished stereo and multi-view geometry (high level)
• Today
  – Optical flow
• Readings for today: Forsyth and Ponce 10.6, 11.1.2
Visual motion
Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys
Motion and perceptual organization
• Sometimes, motion is the only cue
Motion and perceptual organization
• Even “impoverished” motion data can evoke a strong percept
[YouTube video]
G. Johansson, “Visual Perception of Biological Motion and a Model for Its Analysis,” Perception and Psychophysics 14, 201–211, 1973.
Uses of motion
• Estimating 3D structure
• Segmenting objects based on motion cues
• Learning and tracking dynamical models
• Recognizing events and activities
Classes of techniques for motion estimation
• Feature-based methods
  – Extract visual features (corners, textured areas) and track them
  – Sparse motion fields, but possibly robust tracking
  – Especially suitable when image motion is large (tens of pixels)
• Direct methods
  – Directly recover image motion from spatio-temporal image brightness variations
  – Global motion parameters recovered directly, without an intermediate feature motion calculation
  – Dense motion fields, but more sensitive to appearance variations
  – Suitable for video and when image motion is small (< 10 pixels)
Szeliski
Motion field
• The motion field is the projection of the 3D scene motion into the image
Patch-based image motion
How do we determine correspondences between frames I and J?
Assume all change between frames is due to motion:
$J(x, y) = I(x + u(x, y),\; y + v(x, y))$
Optical flow
• Definition: optical flow is the apparent motion of brightness patterns in the image
• Ideally, optical flow would be the same as the motion field
• Have to be careful: apparent motion can be caused by lighting changes without any actual motion
  – Think of a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination
Estimating optical flow
[Two frames: I(x, y, t–1) and I(x, y, t)]
• Given two subsequent frames, estimate the apparent motion field u(x, y) and v(x, y) between them
• Key assumptions
  – Brightness constancy: the projection of the same point looks the same in every frame
  – Small motion: points do not move very far
  – Spatial coherence: points move like their neighbors
The brightness constancy constraint
[Two frames: I(x, y, t–1) and I(x, y, t)]
• Brightness constancy equation:
$$I(x, y, t-1) = I(x + u(x, y),\; y + v(x, y),\; t)$$
Linearizing the right-hand side using a Taylor expansion:
$$I(x, y, t-1) \approx I(x, y, t) + I_x\, u(x, y) + I_y\, v(x, y)$$
Hence,
$$I_x u + I_y v + I_t = 0$$
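To make the constraint concrete: its three derivatives can be estimated with simple finite differences. A minimal NumPy sketch (the function name and frame variables are illustrative, not from the lecture):

```python
import numpy as np

def brightness_constancy_derivatives(I0, I1):
    """Estimate I_x, I_y, I_t for the constraint I_x u + I_y v + I_t = 0.
    I0, I1: consecutive grayscale frames as float arrays."""
    Iy, Ix = np.gradient(I0)   # central differences; axis 0 is y (rows)
    It = I1 - I0               # forward difference in time
    return Ix, Iy, It
```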
The brightness constancy constraint
$$I_x u + I_y v + I_t = 0$$
• How many equations and unknowns per pixel?
  – One equation, two unknowns
• What does this constraint mean?
$$\nabla I \cdot (u, v) + I_t = 0$$
• The component of the flow perpendicular to the gradient (i.e., parallel to the edge) is unknown
[Diagram: the constraint line in (u, v) space. If (u, v) satisfies the equation, so does (u + u', v + v') whenever $\nabla I \cdot (u', v') = 0$, i.e., (u', v') lies along the edge]
The aperture problem
[Figures: perceived motion vs. actual motion]
The barber pole illusion
http://en.wikipedia.org/wiki/Barberpole_illusion
Solving the aperture problem
• How to get more equations for a pixel?
• Spatial coherence constraint: pretend the pixel’s neighbors have the same (u, v)
  – E.g., if we use a 5x5 window, that gives us 25 equations per pixel:
$$\nabla I(\mathbf{x}_i) \cdot [u\ v]^T + I_t(\mathbf{x}_i) = 0$$
$$\begin{bmatrix} I_x(\mathbf{x}_1) & I_y(\mathbf{x}_1) \\ I_x(\mathbf{x}_2) & I_y(\mathbf{x}_2) \\ \vdots & \vdots \\ I_x(\mathbf{x}_n) & I_y(\mathbf{x}_n) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} I_t(\mathbf{x}_1) \\ I_t(\mathbf{x}_2) \\ \vdots \\ I_t(\mathbf{x}_n) \end{bmatrix}$$
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
Solving the aperture problem
• Least squares problem:
$$\begin{bmatrix} I_x(\mathbf{x}_1) & I_y(\mathbf{x}_1) \\ I_x(\mathbf{x}_2) & I_y(\mathbf{x}_2) \\ \vdots & \vdots \\ I_x(\mathbf{x}_n) & I_y(\mathbf{x}_n) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} I_t(\mathbf{x}_1) \\ I_t(\mathbf{x}_2) \\ \vdots \\ I_t(\mathbf{x}_n) \end{bmatrix}$$
• When is this system solvable?
• What if the window contains just a single straight edge?
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
Conditions for solvability
• “Bad” case: single straight edge
Conditions for solvability
• “Good” case
Lucas-Kanade flow
• Linear least squares problem
$$\begin{bmatrix} I_x(\mathbf{x}_1) & I_y(\mathbf{x}_1) \\ I_x(\mathbf{x}_2) & I_y(\mathbf{x}_2) \\ \vdots & \vdots \\ I_x(\mathbf{x}_n) & I_y(\mathbf{x}_n) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} I_t(\mathbf{x}_1) \\ I_t(\mathbf{x}_2) \\ \vdots \\ I_t(\mathbf{x}_n) \end{bmatrix}
\qquad A\,U = d \quad (A:\ n \times 2,\ U:\ 2 \times 1,\ d:\ n \times 1)$$
• Solution given by the normal equations $(A^T A)\,U = A^T d$:
$$\begin{bmatrix} \sum I_x I_x & \sum I_x I_y \\ \sum I_x I_y & \sum I_y I_y \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix}$$
• The summations are over all pixels in the window
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
Lucas-Kanade flow
$$\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix}$$
• Recall the Harris corner detector: $M = A^T A$ is the second moment matrix
• We can figure out whether the system is solvable by looking at the eigenvalues of the second moment matrix
• The eigenvectors and eigenvalues of M relate to edge direction and magnitude
  – The eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change; the other eigenvector is orthogonal to it
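A minimal sketch of the per-window solve, including the eigenvalue check just described (the window half-size and the threshold `tau` are illustrative choices):

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It, y, x, half=2, tau=1e-2):
    """Solve the Lucas-Kanade normal equations (A^T A) U = A^T d for
    the (2*half+1)^2 window centered at pixel (y, x)."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()

    A = np.stack([ix, iy], axis=1)   # n x 2 matrix of spatial gradients
    d = -it                          # n-vector of temporal derivatives
    M = A.T @ A                      # 2 x 2 second moment matrix

    # Both eigenvalues of M must be comfortably above zero;
    # otherwise the window is flat or contains a single edge
    if np.linalg.eigvalsh(M).min() < tau:
        return None                  # aperture problem: flow unreliable here

    return np.linalg.solve(M, A.T @ d)   # U = (u, v)
```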
Visualization of second moment matrices
Interpreting the eigenvalues
Classification of image points using the eigenvalues λ1, λ2 of the second moment matrix:
• “Flat” region: λ1 and λ2 are small
• “Edge”: λ1 >> λ2 (or λ2 >> λ1)
• “Corner”: λ1 and λ2 are large, λ1 ~ λ2
Visualization of second moment matrices
The Aperture Problem
Let
$$M = \sum (\nabla I)(\nabla I)^T \qquad \text{and} \qquad b = -\begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix}$$
• Algorithm: at each pixel, compute U by solving $M U = b$
• M is singular if all gradient vectors point in the same direction
  – e.g., along an edge
  – of course, trivially singular if the summation is over a single pixel or there is no texture
  – i.e., only normal flow is available (aperture problem)
• Corners and textured areas are OK
Szeliski
Example
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Uniform region
– gradients have small magnitude
– small λ1, small λ2
– system is ill-conditioned
[SSD surface: uniform region]
Edge
– gradients have one dominant direction
– large λ1, small λ2
– system is ill-conditioned
[SSD surface: edge]
High-texture or corner region
– gradients have different directions and large magnitudes
– large λ1, large λ2
– system is well-conditioned
[SSD surface: textured area or corner]
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Errors in Lucas-Kanade
• The intensity structure in the window is poor
• The motion is large (larger than a pixel)
  – Iterative refinement
  – Coarse-to-fine estimation
  – Exhaustive neighborhood search (feature matching)
• A point does not move like its neighbors
  – Motion segmentation
• Brightness constancy does not hold
  – Exhaustive neighborhood search with normalized correlation
Coarse-to-Fine Estimation
[Diagram: Gaussian pyramids of images I and J. A motion of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels, small enough for Lucas-Kanade. The flow estimate a is computed at the coarsest level, then warped and refined (Δa) at each finer level.]
Szeliski
Coarse-to-Fine Estimation
[Diagram: pyramid construction for I and J; at each level, warp J toward I using the current motion estimate a, refine the estimate against I, and pass the result down to the next finer level, from a_in at the top to a_out at the bottom. A sketch of this loop follows.]
Szeliski
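A minimal sketch of this coarse-to-fine loop, assuming grayscale float32 images; `refine` is a hypothetical placeholder for one dense Lucas-Kanade-style update at a level, returning an (h, w, 2) flow increment:

```python
import cv2
import numpy as np

def coarse_to_fine_flow(I, J, levels=4):
    """Estimate flow at the coarsest pyramid level, then repeatedly
    upsample, warp J toward I, and refine at each finer level."""
    pyr_I, pyr_J = [I], [J]
    for _ in range(levels - 1):                    # pyramid construction
        pyr_I.append(cv2.pyrDown(pyr_I[-1]))
        pyr_J.append(cv2.pyrDown(pyr_J[-1]))

    flow = np.zeros(pyr_I[-1].shape + (2,), np.float32)
    for lvl, (I_l, J_l) in enumerate(zip(reversed(pyr_I), reversed(pyr_J))):
        if lvl > 0:
            h, w = I_l.shape
            flow = 2.0 * cv2.resize(flow, (w, h))  # upsample, rescale motion
        gy, gx = np.indices(I_l.shape).astype(np.float32)
        Jw = cv2.remap(J_l, gx + flow[..., 0], gy + flow[..., 1],
                       cv2.INTER_LINEAR)           # warp J toward I
        flow += refine(I_l, Jw)  # hypothetical: one dense LK-style update
    return flow
```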
Multi-resolution registration
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
State-of-the-art optical flow
Start with something similar to Lucas-Kanade
+ gradient constancy
+ energy minimization with smoothing term
+ region matching
+ keypoint matching (long-range)
[Figure: region-based + pixel-based + keypoint-based matching]
Large displacement optical flow, Brox et al., CVPR 2009
Source: J. Hays
Feature tracking
• So far, we have only considered optical flow estimation in a pair of images
• If we have more than two images, we can compute the optical flow from each frame to the next
• Given a point in the first image, we can in principle reconstruct its path by simply “following the arrows”
Tracking challenges
• Ambiguity of optical flow
  – Need to find good features to track
• Large motions, changes in appearance, occlusions, disocclusions
  – Need a mechanism for deleting and adding features
• Drift: errors may accumulate over time
  – Need to know when to terminate a track
Shi-Tomasi feature tracker
• Find good features using the eigenvalues of the second moment matrix
  – Key idea: “good” features to track are the ones whose motion can be estimated reliably
• From frame to frame, track with Lucas-Kanade
  – This amounts to assuming a translation model for frame-to-frame feature movement
• Check consistency of tracks by affine registration to the first observed instance of the feature
  – The affine model is more accurate for larger displacements
  – Comparing to the first frame helps to minimize drift
J. Shi and C. Tomasi. Good Features to Track. CVPR 1994.
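In practice this pipeline maps directly onto OpenCV: `cv2.goodFeaturesToTrack` implements the Shi-Tomasi criterion and `cv2.calcOpticalFlowPyrLK` performs pyramidal Lucas-Kanade tracking. A minimal sketch (the frames `frame0`/`frame1` and all parameter values are illustrative):

```python
import cv2

prev_gray = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners: keep points whose smaller eigenvalue is large
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

# Pyramidal Lucas-Kanade: track the points into the next frame
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)

good_old = pts[status.ravel() == 1]       # successfully tracked pairs
good_new = next_pts[status.ravel() == 1]
```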
Tracking example
J. Shi and C. Tomasi. Good Features to Track. CVPR 1994.
Non-Gaussian noise
• The least squares solution assumes that errors in the image motion estimates are Gaussian
• The matrix M (the structure tensor) is computed using finite difference methods
  – forward, backward, and central differences
  – Higher-order evaluations can be obtained depending on how the derivatives are computed (e.g., adaptive windowing)
Robust Estimation
Noise distributions are often non-Gaussian, with much heavier tails. Noise samples from the tails are called outliers.
• Sources of outliers (multiple motions):
  – specularities / highlights
  – JPEG artifacts / interlacing / motion blur
  – multiple motions (occlusion boundaries, transparency)
[Figure: constraint lines from two motions form two clusters in (u1, u2) velocity space]
Black
Occlusion
[Figures: occlusion, disocclusion, and shear — multiple motions within a finite region]
Black
Coherent Motion
Possibly Gaussian.
Black
Multiple Motions
Definitely not Gaussian.
Black
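One common response, in the spirit of Black and Anandan's robust estimation (the details below are illustrative, not the lecture's exact algorithm), is to replace the quadratic penalty with a heavy-tailed one such as the Lorentzian and solve by iteratively reweighted least squares:

```python
import numpy as np

def robust_flow_window(A, d, sigma=1.0, iters=10):
    """Iteratively reweighted least squares for A U = d with a
    Lorentzian penalty rho(r) = log(1 + (r/sigma)^2 / 2), which
    down-weights outlier constraints (e.g., from a second motion
    or a specularity inside the window)."""
    U = np.linalg.lstsq(A, d, rcond=None)[0]   # Gaussian (L2) initialization
    for _ in range(iters):
        r = A @ U - d                          # residual of each constraint
        w = 1.0 / (2.0 * sigma**2 + r**2)      # Lorentzian weights psi(r)/r
        Aw = A * w[:, None]
        U = np.linalg.solve(A.T @ Aw, Aw.T @ d)
    return U
```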
Layered Scene Representations
Motion representations
• How can we describe this scene?
Szeliski
Block-based motion prediction
• Break the image up into square blocks
• Estimate a translation for each block
• Use this to predict the next frame, and code the difference (MPEG-2); see the sketch below
Szeliski
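A minimal exhaustive block-matching sketch in NumPy (block size, search range, and names are illustrative; real coders use much faster search strategies than full search):

```python
import numpy as np

def block_motion(prev, curr, block=16, search=8):
    """For each block x block patch of `curr`, find the translation
    within +/- `search` pixels that minimizes SSD against `prev`."""
    H, W = curr.shape
    motion = np.zeros((H // block, W // block, 2), int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            patch = curr[by:by + block, bx:bx + block].astype(float)
            best, best_uv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        cand = prev[y:y + block, x:x + block].astype(float)
                        ssd = ((patch - cand) ** 2).sum()
                        if ssd < best:
                            best, best_uv = ssd, (dx, dy)
            motion[by // block, bx // block] = best_uv
    return motion
```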
Layered motion
• Break the image sequence up into “layers”:
[Figure: sequence = composite of layers]
• Describe each layer’s motion
Szeliski
Layered motion
• Advantages:
  – can better handle occlusions / disocclusions
  – each layer’s motion can be smooth
  – can be used for video segmentation in semantic processing
• Difficulties:
  – how to determine the correct number of layers?
  – how to assign pixels?
  – how to model the layer motion?
Szeliski
Layers for video summarization
Szeliski
Background modeling (MPEG-4)
• Convert masked images into a background sprite for layered video coding
[Figure: frame + frame + frame + frame = background sprite]
Szeliski
What are layers? [Wang & Adelson, 1994; Darrell & Pentland, 1991]
• intensities
• alphas
• velocities
Szeliski
Fragmented Occlusion
Results
How to estimate the layers
1. compute coarse-to-fine flow
2. estimate affine motion in blocks (regression)
3. cluster with k-means
4. assign pixels to the best-fitting affine region
5. re-estimate affine motions in each region… (a sketch of steps 2–3 follows)
Szeliski
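A minimal sketch of steps 2 and 3, assuming a dense flow field from step 1; the block size, the number of layers k, and the use of scikit-learn's KMeans are all illustrative choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def affine_params(flow_block, ys, xs):
    """Least-squares affine fit u = a0 + a1*x + a2*y,
    v = a3 + a4*x + a5*y to the flow inside one block (step 2)."""
    X = np.stack([np.ones_like(xs), xs, ys], axis=1).astype(float)
    au = np.linalg.lstsq(X, flow_block[..., 0].ravel(), rcond=None)[0]
    av = np.linalg.lstsq(X, flow_block[..., 1].ravel(), rcond=None)[0]
    return np.concatenate([au, av])        # six affine motion parameters

def cluster_block_motions(flow, block=16, k=3):
    """Steps 2-3: per-block affine regression, then k-means on the
    parameter vectors to find k candidate motion layers."""
    H, W, _ = flow.shape
    params = []
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            ys, xs = np.mgrid[by:by + block, bx:bx + block]
            params.append(affine_params(
                flow[by:by + block, bx:bx + block], ys.ravel(), xs.ravel()))
    return KMeans(n_clusters=k, n_init=10).fit(np.array(params)).labels_
```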
Layer synthesis
• For each layer:
  – stabilize the sequence with the affine motion
  – compute the median value at each pixel
• Determine occlusion relationships
Szeliski
Results
Szeliski
Recent GPU implementation
• http://gpu4vision.icg.tugraz.at/
• Real-time flow exploiting a robust norm + regularized mapping
Recent results: SIFT Flow
Slide Credits
• Svetlana Lazebnik – UIUC
• Trevor Darrell – UC Berkeley
Next class
• Segmentation via clustering
• Readings for next lecture:
  – Forsyth and Ponce chapter 9
  – Szeliski chapter 5
• Readings for today:
  – Forsyth and Ponce 10.6 and 11.1.2
Questions?