CmpE-537 Computer Vision
Term Project
Color and Illumination Independent
Landmark Detection for Robot Soccer
Domain
By
Tekin Meriçli
Artificial Intelligence Laboratory
Department of Computer Engineering
Boğaziçi University
27/12/2007
Outline
• Introduction
• Related Work
• Proposed Approach
• Experimental Setup
• Results
• Conclusion
• References
Introduction
• Three fundamental questions of mobile robotics:
  – “Where am I?”
  – “Where am I going?”
  – “How can I get there?”
• The aim of this project is to answer the first question for the robot soccer domain
  – Specifically, the RoboCup Standard Platform League (formerly the Four-Legged League)
  – Robots with vision sensors (i.e. cameras) are used
Introduction
[Figure: the RoboCup Standard Platform League field with its color-coded landmarks]
Introduction
• All important objects on the field, that is, the ball, the beacons, and the goals, are color-coded
• This makes the vision, and hence localization, modules highly dependent on illumination
  – Even a small change in the illumination level may cause the robots to miss the beacons entirely, or to miscalculate their distances and orientations to the beacons
• The main motivation is to make the vision / localization processes color and illumination independent in the Standard Platform League domain
Related Work
• Color / illumination dependent approach
  – Color segmentation / pixel classification on the image
  – Connected component analysis to build regions
  – Sanity checks to remove noise and illogical perceptions
    • aspect ratio, minimum area, etc.
• Most of the RoboCup teams use this approach [1–4]; the pipeline is sketched below
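A minimal sketch of this pipeline in Python with OpenCV. The HSV thresholds and sanity-check limits below are illustrative assumptions, not values used by any particular team:

```python
import cv2
import numpy as np

def detect_color_blobs(bgr_image, lower_hsv, upper_hsv,
                       min_area=100, max_aspect_ratio=3.0):
    """Classify pixels by color, build connected components,
    and keep only regions that pass simple sanity checks."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)       # pixel classification
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    regions = []
    for i in range(1, num):                             # label 0 is background
        x, y, w, h, area = stats[i]
        aspect = max(w, h) / max(min(w, h), 1)
        if area >= min_area and aspect <= max_aspect_ratio:
            regions.append((x, y, w, h))
    return regions

# Hypothetical HSV range for an orange ball; the thresholds must be
# re-tuned whenever the illumination changes -- the core weakness.
ball_regions = detect_color_blobs(cv2.imread("frame.png"),
                                  np.array([5, 150, 150]),
                                  np.array([20, 255, 255]))
```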
Related Work
• Feature detection / recognition based approach
  – Used for simultaneous localization and mapping (SLAM) purposes [8–12]
  – The scale-invariant feature transform (SIFT) can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition [7]
• SURF, which stands for Speeded-Up Robust Features, approximates SIFT at a lower computational cost [6]; a matching sketch follows below
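A minimal sketch of SURF detection and cross-view matching with OpenCV. Note that SURF lives in the contrib "nonfree" module (cv2.xfeatures2d), which many prebuilt OpenCV packages exclude; cv2.SIFT_create() from the main library is a drop-in substitute. Filenames and the Hessian threshold are illustrative:

```python
import cv2

# SURF is in the contrib nonfree module; if your OpenCV build lacks it,
# cv2.SIFT_create() from the main library works the same way here.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = surf.detectAndCompute(img1, None)   # 64-d descriptors by default
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors between the two views with Lowe's ratio test [7].
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(f"{len(good)} matches between the two views")
```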
Proposed Approach
• The image labeling process used in the color segmentation-based approach is replaced with region labeling, in which the landmarks and their immediate surroundings are covered
  – The robot is placed at a location where it can see the landmark, and a region is then selected around the landmark to specify where the robot should find SURF features and associate them with that particular landmark (a labeling sketch follows below)
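A minimal sketch of this labeling step, assuming the region is given as a user-selected bounding box; the names label_landmark_region, database, and the box coordinates are hypothetical:

```python
import cv2

# SURF from opencv-contrib (nonfree); see the note in Related Work.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def label_landmark_region(gray_image, box, landmark_id, database):
    """Keep only the SURF features inside the user-selected box and
    associate their 64-d descriptors with the given landmark."""
    x, y, w, h = box
    keypoints, descriptors = surf.detectAndCompute(gray_image, None)
    if descriptors is None:
        return
    for kp, des in zip(keypoints, descriptors):
        px, py = kp.pt
        if x <= px <= x + w and y <= py <= y + h:
            database.append((des, landmark_id))

# One training view of a (hypothetical) yellow beacon.
database = []  # accumulates (descriptor, landmark id) pairs
img = cv2.imread("training_view.png", cv2.IMREAD_GRAYSCALE)
label_landmark_region(img, box=(120, 40, 80, 200),
                      landmark_id="yellow_beacon", database=database)
```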
Proposed Approach
[Figure: a region selected around a landmark during the labeling step]
Proposed Approach
• This process is repeated for all landmarks on the soccer field, from different angles and distances
• Supervised learning is used to learn the associations between the feature descriptors and the landmarks
• The distance values for landmarks are calculated using the inter-feature distances (both steps are sketched below)
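The slides do not name the learner or the exact distance formula. The sketch below assumes a 1-nearest-neighbor association over the labeled descriptor database, and a pinhole-style relation in which inter-feature pixel spread scales inversely with landmark distance; all names, the threshold, and the calibration value are hypothetical:

```python
import numpy as np

def classify_descriptor(des, database, max_l2=0.3):
    """Associate a query descriptor with the landmark of its nearest
    labeled training descriptor (1-NN in the 64-d feature space)."""
    best_id, best_dist = None, max_l2
    for train_des, landmark_id in database:
        d = np.linalg.norm(des - train_des)
        if d < best_dist:
            best_id, best_dist = landmark_id, d
    return best_id

def estimate_distance(train_pts, query_pts, train_distance_cm):
    """Features spread apart in the image as the robot approaches, so the
    known training distance is scaled by the ratio of inter-feature
    pixel spreads (a rough pinhole-camera approximation)."""
    def mean_spread(pts):
        pts = np.asarray(pts, dtype=float)
        return np.mean(np.linalg.norm(pts - pts.mean(axis=0), axis=1))
    return train_distance_cm * mean_spread(train_pts) / mean_spread(query_pts)
```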
Experimental Setup
• A real Aibo ERS-7 robot is placed on the field, facing a particular landmark at different angles and distances, to take pictures
• An offline visualizer tool is implemented to show the SURF points on the image and to run tests on various images
Experimental Setup
[Figure: screenshot of the offline visualizer tool showing detected SURF points]
Experimental Setup
• SURF points are shown as little circles
• Details of the descriptors are listed in the text area
• Similar feature points are observed on different images even though the distance and angle values are different
  – Similarity is defined as the distance between feature points in the 64-dimensional feature space (a one-line sketch follows below)
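A minimal sketch of that similarity measure, assuming plain Euclidean distance (the slides do not specify the metric):

```python
import numpy as np

def descriptor_similarity(des_a, des_b):
    """L2 distance between two 64-d SURF descriptors;
    smaller values indicate more similar feature points."""
    return float(np.linalg.norm(np.asarray(des_a) - np.asarray(des_b)))
```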
Experimental Setup
• The first step is to process the training images and define the landmark regions by clicking on the image
• The next step is to run on test images to check whether the landmark in the image is recognized and whether the distance and angle estimates are correct
Results
• SURF computation took an average of 56 ms on 354×290 images
  – Aibo robots capture 208×160 images but have a slower processor; hence, SURF computation takes 59 ms on average, i.e. approximately 17 fps (1000 ms / 59 ms ≈ 17)
• Landmark recognition performance was better than the distance estimates
  – Due to the cylindrical shape of the landmarks, some feature points may be closer to or farther from each other depending on the viewing angle, or may be hidden entirely
  – Doing the computations on groups of feature points rather than on individual points may improve the performance
Conclusion
• A feature-based landmark detection approach is explored
• Runs at a reasonable frame rate
• The main contribution is that this approach provides color (and, to some extent, illumination) independence in the vision and localization processes in the robot soccer domain
  – It has not been tried by any of the RoboCup teams so far
• Trying different SURF parameters and running experiments on physical robots are left as future work
References
[1] H. L. Akın et al. “Cerberus 2006 Team Report”, 2006.
[2] K. Kaplan, B. Celik, T. Mericli, C. Mericli, and H. L. Akın. “Practical Extensions to Vision-Based Monte Carlo Localization Methods for Robot Soccer Domain”. In RoboCup International Symposium 2005, Osaka, July 18–19, 2005.
[3] P. Stone, P. Fidelman, N. Kohl, G. Kuhlmann, T. Mericli, M. Sridharan, and S. Yu. “The UT Austin Villa 2006 RoboCup Four-Legged Team”. Technical Report UT-AI-TR-06-337, The University of Texas at Austin, Department of Computer Sciences, AI Laboratory, 2006.
[4] M. J. Quinlan et al. “The 2006 NUbots Team Report”, 2007.
[5] T. Röfer et al. “GermanTeam2006”, 2006.
[6] H. Bay, T. Tuytelaars, and L. J. Van Gool. “SURF: Speeded Up Robust Features”. In ECCV 2006, pp. 404–417, 2006.
[7] D. G. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. International Journal of Computer Vision, 60(2), pp. 91–110, 2004.
References
[8] M. Ballesta, A. Gil, O. Martínez Mozos, and O. Reinoso. “Local Descriptors for Visual SLAM”. In Proc. of the Workshop on Robotics and Mathematics, Coimbra, Portugal, 2007.
[9] T. D. Barfoot. “Online Visual Motion Estimation Using FastSLAM with SIFT Features”. In Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), Edmonton, Alberta, August 2–6, 2005.
[10] P. Elinas and J. J. Little. “Stereo Vision SLAM: Near Real-Time Learning of 3D Point-Landmark and 2D Occupancy-Grid Maps Using Particle Filters”. In IROS 2007, 2007.
[11] J. Little, S. Se, and D. G. Lowe. “Vision-Based Mobile Robot Localization and Mapping Using Scale-Invariant Features”. In IEEE Int. Conf. on Robotics and Automation, 2001.
[12] O. Martínez Mozos, A. Gil, M. Ballesta, and O. Reinoso. “Interest Point Detectors for Visual SLAM”. In Lecture Notes in Artificial Intelligence, vol. 4788, 2007.
Questions?