Analysis of Optical Flow Techniques in Video
Processing
Sunil Kumar Aithal S
Assistant Professor, Dept of CSE
NMAMIT, Nitte, India
Email: [email protected]
Abstract— This article analyzes two well-known differential approaches to optical flow for tracking movement in video sequences. The Horn-Schunck and Lucas-Kanade optical flow techniques are compared in terms of time efficiency. The article explains the theory of optical flow and several approaches to detecting object movement in video sequences. Collected video samples are run through MATLAB to track motion, and the time elapsed by each optical flow technique is recorded. The analysis covers three video formats, namely .avi, .wmv, and .mpeg. The article concludes by suggesting the more time-efficient technique based on a comparison of the elapsed times of the two optical flow approaches.
Index Terms— Optical Flow, Computer Vision, Motion
Detection, Horn-Schunck approach, Lucas-Kanade
approach.
1. Introduction
1.1 Review of Optical Flow
Tracking the motion of objects in a visual scene has become an increasingly important topic in computer vision. One well-known approach for detecting object movement in video is optical flow. A video is treated as a sequence of images, and each image is a composition of pixels. The underlying idea of optical flow is to estimate the displacement of each pixel between successive frames or images. The concept of optical flow was first introduced by the American psychologist James J. Gibson in the 1940s while analyzing the visual stimulus available to animals moving through the world [1]. Optical flow is defined as the pattern of apparent motion of objects (including humans), surfaces, and edges in a visual scene arising from the relative motion between an observer (an eye or a camera) and the scene [2][3]. The estimated velocities between successive images or frames are termed optical flow vectors or velocity vectors. By analyzing the direction of the flow vectors, the movement of any object in a visual scene can be tracked. Many optical flow techniques exist for tracking the movement of objects in a visual scene. In this article, two differential optical flow techniques are compared and their time efficiency is determined.
2. Literature Review
2.1 Optical Flow Estimation
A video is considered an ordered collection of frames or images, which allows optical flow vectors to be computed either as instantaneous frame velocities or as discrete pixel displacements from one frame to the next. Using a Taylor-series approximation, the flow vectors between subsequent frames can be estimated from a neighborhood around each pixel considered at time t and the displaced pixel in the following frame at time t + δt, where δt represents the small time interval during which the object moves between the considered frames. In this article, differential optical flow techniques are analyzed for their time efficiency [8].
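The Taylor-series reasoning can be made explicit. The following is a brief sketch of the standard brightness-constancy derivation, written in the article's notation with m and n denoting the horizontal and vertical velocity components:

% Brightness constancy: a moving pixel keeps its intensity.
I(x + \delta x,\ y + \delta y,\ t + \delta t) = I(x, y, t)
% First-order Taylor expansion of the left-hand side:
I(x, y, t) + I_x\,\delta x + I_y\,\delta y + I_t\,\delta t \approx I(x, y, t)
% Cancel I(x, y, t), divide by \delta t, and set m = \delta x / \delta t, n = \delta y / \delta t:
I_x\, m + I_y\, n + I_t = 0

This constraint equation is the data term on which both differential methods reviewed below are built.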
2.1.1 Review on Horn-Schunck’s Method
Dense optical flow fields can be computed using this differential technique [4]. The main assumption made before computing the flow vectors is that the optical flow is smooth over the individual frame. The method computes the velocity field vectors m and n by minimizing

E = \iint (I_x m + I_y n + I_t)^2 \, dx \, dy + \alpha \iint \left[ \left(\frac{\partial m}{\partial x}\right)^2 + \left(\frac{\partial m}{\partial y}\right)^2 + \left(\frac{\partial n}{\partial x}\right)^2 + \left(\frac{\partial n}{\partial y}\right)^2 \right] dx \, dy.    (1)
In equation (1), (∂m/∂x) and (∂m/∂y) are the spatial derivatives of the velocity component m (and similarly for n), and α is the global smoothness weight. By minimizing equation (1), the velocity field [m, n] of every pixel in the frame is determined using the following iterative equations:

m_{x,y}^{s+1} = \bar{m}_{x,y}^{\,s} - p/q,    (2)

where p = I_x \left( I_x \bar{m}_{x,y}^{\,s} + I_y \bar{n}_{x,y}^{\,s} + I_t \right) and q = \alpha^2 + I_x^2 + I_y^2, and

n_{x,y}^{s+1} = \bar{n}_{x,y}^{\,s} - r/q,    (3)

where r = I_y \left( I_x \bar{m}_{x,y}^{\,s} + I_y \bar{n}_{x,y}^{\,s} + I_t \right) and q is as above.

In equations (2) and (3), (m_{x,y}^{s}, n_{x,y}^{s}) is the flow velocity of the pixel at location (x, y), and (\bar{m}_{x,y}^{\,s}, \bar{n}_{x,y}^{\,s}) is the neighborhood average of (m_{x,y}^{s}, n_{x,y}^{s}). For s = 0, the initial velocity is 0.

The Horn-Schunck iterative approach solves for m and n as follows:
a) I_x and I_y are estimated by convolving frame 1 with the 3 × 3 kernel [-1 2 -1; 0 0 0; 1 2 1] and its transpose, respectively, for every pixel.
b) Between frames 1 and 2, I_t is estimated using the 1 × 2 kernel [-1 1].
c) The mean velocity of every pixel is estimated using the 3 × 3 kernel [0 1 0; 1 0 1; 0 1 0].
d) m and n are computed by repeating the above steps for all pixels over the considered frame sequence.

In this approach, the reference matrices used are also called standard convolution kernels; they act as masks for edge identification, sharpening, and smoothing of a frame sequence [9].
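The article's experiments are run in MATLAB; purely as an illustration of steps (a)-(d), the following is a minimal Python/NumPy sketch of the Horn-Schunck iteration. The function name, the fixed iteration count, the default α value, and the assumption of grayscale floating-point frames are choices made for this sketch, not the article's implementation.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(frame1, frame2, alpha=1.0, n_iters=100):
    """Sketch of the Horn-Schunck iteration for two grayscale float frames."""
    # Step a: spatial derivatives of frame 1 with the kernel named in the text and its transpose.
    kx = np.array([[-1., 2., -1.], [0., 0., 0.], [1., 2., 1.]])
    Ix = convolve(frame1, kx)
    Iy = convolve(frame1, kx.T)
    # Step b: temporal derivative between frames 1 and 2 ([-1 1] kernel across frames).
    It = frame2 - frame1
    # Step c: neighborhood-averaging kernel for the mean velocity.
    avg = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / 4.0
    m = np.zeros_like(frame1)   # initial velocity is 0 (s = 0)
    n = np.zeros_like(frame1)
    # Step d: repeat the update of equations (2) and (3) for a fixed number of iterations.
    for _ in range(n_iters):
        m_bar = convolve(m, avg)
        n_bar = convolve(n, avg)
        common = (Ix * m_bar + Iy * n_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        m = m_bar - Ix * common
        n = n_bar - Iy * common
    return m, n

A fixed iteration count is used here for simplicity; a convergence test on the change in m and n between iterations would be an equally valid stopping criterion.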
2.1.2 Review on Lucas-Kanade Method
This is a robust differential technique for tracking motion in a visual scene and estimating optical flow vectors [5]. It uses a least-squares criterion for flow vector estimation. Each frame is decomposed into sub-frames, and the optical flow is assumed not to vary much within each sub-frame region, so that the optical flow constraint I_x m + I_y n + I_t = 0 holds for every pixel in the region. Each sub-frame region can be viewed as a window with center w. The velocity vectors m and n should therefore satisfy the following system of equations:

I_x(p_1)\, m + I_y(p_1)\, n + I_t(p_1) = 0,
I_x(p_2)\, m + I_y(p_2)\, n + I_t(p_2) = 0,
\vdots
I_x(p_n)\, m + I_y(p_n)\, n + I_t(p_n) = 0,

where I_x(p_i), I_y(p_i), and I_t(p_i) are the spatial and temporal derivatives of the image intensity I at pixel p_i, and p_1, p_2, ..., p_n are the pixels within the sub-frame window w. The Lucas-Kanade technique solves this over-determined system for the flow vectors m and n by assuming a constant velocity within each sub-frame window.

The Lucas-Kanade method computes the velocity vectors m and n as follows:
a. I_x and I_y are computed using the kernel [-0.0833 0.6667 0 -0.6667 0.0833] for every pixel in frames 1 and 2.
b. I_t is estimated using the kernel [-1 1] for every pixel in frames 1 and 2.
c. I_x, I_y, and I_t are smoothed for accuracy using the kernel [0.0625 0.2500 0.3750 0.2500 0.0625].
d. For every pixel, the 2-by-2 least-squares system for the considered window is solved, and the estimated eigenvalues are compared against a threshold to predict the flow vectors m and n [8, 9].

In this approach, the matrices used are also called standard convolution kernels; they act as masks for edge identification, sharpening, and smoothing of a frame sequence [9].
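As an illustration of the window-wise least-squares solve in step d, here is a minimal sketch in the same Python/NumPy style as above. The window size, the threshold value, the gradient kernel orientation, and the omission of the smoothing step c are assumptions made for this sketch rather than the article's exact MATLAB settings.

import numpy as np
from scipy.ndimage import convolve

def lucas_kanade(frame1, frame2, window=5, tau=1e-2):
    """Sketch of per-pixel Lucas-Kanade flow for two grayscale float frames."""
    half = window // 2
    # Steps a-b: derivatives using the five-tap kernel named in the text and the [-1 1] temporal kernel.
    kx = np.array([-0.0833, 0.6667, 0.0, -0.6667, 0.0833])
    Ix = convolve(frame1, kx[np.newaxis, :])
    Iy = convolve(frame1, kx[:, np.newaxis])
    It = frame2 - frame1
    m = np.zeros_like(frame1)
    n = np.zeros_like(frame1)
    rows, cols = frame1.shape
    for y in range(half, rows - half):
        for x in range(half, cols - half):
            # Stack the constraint equations for all pixels in the window around (x, y).
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()
            A = np.stack([ix, iy], axis=1)   # one row per constraint equation
            ATA = A.T @ A                    # the 2-by-2 normal-equation matrix (step d)
            # Accept the solve only where the smallest eigenvalue exceeds the threshold.
            if np.min(np.linalg.eigvalsh(ATA)) >= tau:
                v = np.linalg.solve(ATA, -A.T @ it)
                m[y, x], n[y, x] = v[0], v[1]
    return m, n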
3. Experiment and Analysis
The collected video samples are tracked for object motion using the optical flow techniques described above, and the time efficiency of the two approaches is compared. There are, however, several factors on which the running time depends. The assumptions made here are that the brightness of the sample video sequences does not vary much, that the samples are compatible with MATLAB (RGB video samples are allowed), and that there is a noticeable movement of objects in the visual scene. The software used is MATLAB R2012a, running on a machine with 2 GB RAM, an Intel Pentium processor at 2.19 GHz, and a 32-bit OS (Windows 7). The collected video samples are run through the algorithms discussed above in MATLAB, the elapsed time in seconds is recorded and tabulated in Table 1, and a graph (Fig 1) is plotted to compare the times of the two differential optical flow algorithms.
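The article records elapsed time in MATLAB; as a rough illustration only, a timing harness of the following shape could be used, here written in Python against the horn_schunck and lucas_kanade sketches above, with `frames` assumed to be a list of grayscale float arrays loaded from one video sample.

import time

def time_optical_flow(frames, flow_fn):
    """Return total elapsed seconds for running flow_fn over consecutive frame pairs."""
    start = time.perf_counter()
    for f1, f2 in zip(frames[:-1], frames[1:]):
        flow_fn(f1, f2)
    return time.perf_counter() - start

# Example usage (hypothetical):
# hs_seconds = time_optical_flow(frames, horn_schunck)
# lk_seconds = time_optical_flow(frames, lucas_kanade)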
Table 1: Time elapsed by the two optical flow methods

Video Format   Total Frames   Horn-Schunck Approach (seconds)   Lucas-Kanade Approach (seconds)
.AVI           600            74                                58
.MPEG          903            34.20                             26.76
.MPEG          1168           62                                49.4
.WMV           2152           104.10                            72.79
.WMV           2858           136.02                            96.36

Fig 1: Analysis of Time Efficiency (bar chart of elapsed time in seconds for the Horn-Schunck and Lucas-Kanade approaches across the .avi, .mpeg, and .wmv samples).
4. Conclusion
In this paper, optical flow and its two well-known differential techniques are reviewed. Samples in three video formats (.avi, .wmv, and .mpeg), comprising different numbers of frames, were collected and processed with the Horn-Schunck and Lucas-Kanade differential optical flow techniques in MATLAB. The algorithms successfully tracked the movement of objects in the video sequences, and the time taken by each was recorded. The analysis shows that as the number of frames increases, irrespective of video format, the total execution time taken by the Horn-Schunck approach increases much more sharply than that of the Lucas-Kanade approach. This article helps researchers study the time efficiency of the discussed algorithms on various types of video samples, and it concludes that the Lucas-Kanade optical flow approach is more accurate and much faster for optical flow estimation and motion detection in a visual scene than the Horn-Schunck approach.

References
[1] Gibson, J.J. (1950). The Perception of the Visual World. Houghton Mifflin.
[2] Burton, A., and Radford, J. (1978). Thinking in Perspective: Critical Essays in the Study of Thought Processes. Routledge. ISBN 0-416-85840-6.
[3] Warren, D.H., and Strelow, E.R. (1985). Electronic Spatial Sensing for the Blind: Contributions from Perception. Springer. ISBN 90-247-2689-1.
[4] Horn, B.K.P., and Schunck, B.G. (1981). Determining Optical Flow. Artificial Intelligence, 17: 185-204.
[5] Lucas, B., and Kanade, T. (1981). An Iterative Image Registration Technique with an Application to Stereo Vision. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), pp. 674-679.
[6] Optical Flow: http://en.wikipedia.org/wiki/Optical_flow
[7] Fleet, D.J., and Weiss, Y. (2006). Optical Flow Estimation. In Paragios et al. (eds.), Handbook of Mathematical Models in Computer Vision. Springer. ISBN 0-387-26371-3.
[8] Aithal, S.K. (2014). An Automated System for Detecting Congestion in Huge Gatherings. International Journal of Computer Applications (IJCA) (0975-8887), International Conference on Information and Communication Technologies.
[9] Optical Flow: https://www.mathworks.com.