MOBILE ROBOT NAVIGATION USING MONTE CARLO
LOCALIZATION
Amina Waqar
[email protected]
___________________________________________________________________________________________
Abstract— This paper presents an algorithm for mobile robot navigation using Monte Carlo Localization. Previous work on the tracking of mobile robots used grid-based approaches with high-resolution 3-D grids to represent the state space, which is computationally expensive. In Monte Carlo Localization we instead apply a sampling approach, dividing the state space into samples, and we can increase the number of samples where required. Monte Carlo Localization is easy to implement, several results have shown that it yields more accurate results, and it is computationally very efficient.

I. INTRODUCTION
Throughout the last decade, sensor-based localization has been recognized as a key problem in mobile robotics (Cox 1991; Borenstein, Everett, & Feng 1996). In localization, a mobile robot estimates its position in a global co-ordinate frame. There are two types of localization: global localization and position tracking. In global localization, a robot does not know its original position, whereas in position tracking the robot knows its original position. Global localization is also known as the “hijacked robot problem” (Engelson 1994), in which the robot has to determine its position from scratch. Much of the previous research was on tracking, but now many people are working on both types of localization. In this paper we represent the robot’s belief by a probability density over the region in its range. The range is determined by the region in which the sensors are able to work effectively.

Figure 1. Tracking using Kalman Filter

B. Previous Works
Previously, people have done a lot of work on tracking using the Kalman filter, which is a form of phase-locked loop (PLL) and is less efficient; because of this it was used only for tracking. Fig. 1 shows the working of the Kalman filter: the black boxes show the original position, the green stars show the estimated position, and the red crosses show the modified position obtained by averaging both.

C. Markov Localization
Markov localization addresses the problem of state estimation from sensor values. Markov localization is a probabilistic algorithm: instead of maintaining a single hypothesis as to where in the world a robot might be, Markov localization maintains a probability distribution over the space of all such hypotheses. The probabilistic representation allows it to weigh these different hypotheses in a mathematically sound way.
Before we delve into mathematical detail, let us illustrate
the basic concepts with a simple example. Consider the
environment depicted in Fig 2. For the sake of simplicity, let
us assume that the space of robot positions is one-dimensional,
that is, the robot can only move horizontally (it may not
rotate). Now suppose the robot is placed somewhere in this
environment, but it is not told its location. Markov
localization represents this state of uncertainty by a uniform
distribution over all positions, as shown by the graph in the
first diagram in Fig 2. Now let us assume the robot queries its
sensors and finds out that it is next to a door.
Markov localization modifies the belief by raising the probability for places next to doors and lowering it everywhere else. Note that the resulting belief is multi-modal, reflecting the fact that the available information is insufficient for global localization. Notice also that places not next to a door still possess non-zero probability. This is because sensor readings are noisy, and a single sighting of a door is typically insufficient to exclude the possibility of not being next to a door.
Now let us assume the robot moves a meter forward.
Markov localization incorporates this information by shifting
the belief distribution accordingly, as visualized in the third
diagram in Fig 2.
To account for the inherent noise in robot motion, which
inevitably leads to a loss of information, the new belief is
smoother (and less certain) than the previous one. Finally, let
us assume the robot senses a second time, and again it finds
itself next to a door.
Now this observation is multiplied into the current (non-uniform) belief, which leads to the final belief shown in the last diagram in Fig 2. At this point in time, most of the
probability is centered around a single location. The robot is
now quite certain about its position.
Figure 3: Monte Carlo Simulation
Bel(l) = ∫ P(l | l′, a) Bel(l′) dl′

Bel is the robot’s belief, which is initially a uniform distribution. To update the belief, the robot must perform an action a. The belief at position l, Bel(l), is updated from the previous belief at position l′, Bel(l′). We then convolve the motion model with the previous belief to obtain the new belief, which guides the robot where to go.
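As a concrete illustration, the update above can be sketched as a one-dimensional histogram filter acting out the door example. The map, sensor model, and noise values below are hypothetical assumptions chosen for illustration, not taken from the paper.

```python
# Minimal 1-D histogram (Markov) localization sketch of the door example.
# The map, sensor model, and noise values are illustrative assumptions.

WORLD = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]   # hypothetical map: 1 = next to a door
N = len(WORLD)

def sense(belief, reading, p_hit=0.8, p_miss=0.2):
    """Sensor update: raise probability where the map matches the reading."""
    new = [b * (p_hit if WORLD[i] == reading else p_miss)
           for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]          # normalize

def move(belief, step, p_exact=0.8, p_slip=0.1):
    """Motion update Bel(l) = sum over l' of P(l | l', a) Bel(l'):
    shift the belief by `step` and smooth it to model motion noise."""
    return [p_exact * belief[(i - step) % N]
            + p_slip * belief[(i - step - 1) % N]
            + p_slip * belief[(i - step + 1) % N]
            for i in range(N)]

belief = [1.0 / N] * N        # uniform prior: position unknown
belief = sense(belief, 1)     # robot sees a door: belief becomes multi-modal
belief = move(belief, 1)      # robot moves one cell: belief shifts and smooths
belief = sense(belief, 1)     # second door reading concentrates the belief
```

With this particular map, only cell 7 has a door one cell after another door, so the two door readings separated by one move single out cell 7 and the final belief peaks there.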
D. Monte Carlo Localization
In Monte Carlo localization we discretize the space into random samples. Since it performs global localization, it can represent multimodal distributions. For this reason, less memory is required and the method is computationally efficient. Grid-based approaches were also used, but they were computationally cumbersome; they also required more memory because they represented the state space with 3-D grids.
In our experiment we have modelled the robot with four sensors on each side. Each sensor emits a signal which is reflected back as 1 if there is a wall and 0 if there is a door or any empty space. The range in our case is five units (0-4). As the robot moves along the path from door to wall, the signals change from 0’s to 1’s. Fig. 3 illustrates this simulation: Fig. 3(a) represents the first belief of the robot after an action, Fig. 3(b) is an updated PDF based on the previous PDF, and Fig. 3(c) shows the convolution of both PDF’s, indicating where the door actually is and which way the robot should move.
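The sampling approach described above can be sketched as a simple particle filter. The map, weights, and noise parameters below are illustrative assumptions, and a single binary wall/door sensor stands in for the paper’s four-sensor robot.

```python
import random

# Simplified 1-D Monte Carlo Localization sketch: samples (particles) are
# moved with noise, weighted by a binary sensor reading (1 = wall,
# 0 = door or empty space), and resampled. The map and parameters are
# illustrative assumptions, not the paper's exact setup.

random.seed(0)

WORLD = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # hypothetical map: 0 marks a door
N = 500                                   # number of samples

def sense_at(pos):
    """Binary reading at a position: 1 for wall, 0 for door/empty space."""
    return WORLD[int(pos) % len(WORLD)]

def mcl_step(particles, move, reading, motion_noise=0.2):
    # 1. Motion update: shift every sample, adding noise to model slippage.
    moved = [p + move + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Sensor update: weight each sample by how well it explains the reading.
    weights = [0.9 if sense_at(p) == reading else 0.1 for p in moved]
    # 3. Resampling: draw N samples in proportion to their weights, which
    #    concentrates samples where more of them are required.
    return random.choices(moved, weights=weights, k=N)

# Global localization: samples spread uniformly, since the position is unknown.
particles = [random.uniform(0, len(WORLD)) for _ in range(N)]

# As the robot moves past the door toward the wall, its readings change
# from 0's to 1's, and the sample set tightens around consistent positions.
true_pos = 5.0
for _ in range(4):
    true_pos += 1.0
    particles = mcl_step(particles, 1.0, sense_at(true_pos))
```

After a few updates most surviving samples sit at positions whose map reading agrees with the latest sensor value, which is the behaviour the grid-based approaches achieve only at much higher memory cost.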
II. CONCLUSIONS
In this paper we have shown that Monte Carlo Localization is easy to implement, requires less memory, and is computationally efficient. The low memory requirement is attributed to the fact that the belief is updated in place rather than occupying more and more memory locations. This recursive algorithm is far more effective than the Kalman filter, which is a form of phase-locked loop (PLL), is less efficient computationally, and is not as precise. Hence the Kalman filter was only used for tracking purposes.
REFERENCES
Burgard, W.; Cremers, A.; Fox, D.; Hähnel, D.; Lakemeyer, G.;
Schulz, D.; Steiner, W.; and Thrun, S. 1998a. The Interactive
Museum Tour-Guide Robot. Proc. of AAAI-98.
Burgard, W.; Derr, A.; Fox, D.; and Cremers, A. 1998b. Integrating global position estimation and position tracking for mobile robots: the Dynamic Markov Localization approach. Proc. of
IROS-98.
Carpenter, J.; Clifford, P.; and Fernhead, P. 1997. An improved
particle filter for non-linear problems. TR, Dept. of Statistics,
Univ. of Oxford.
Chung, K. 1960. Markov chains with stationary transition probabilities. Springer.
Cox, I. 1991. Blanche—an experiment in guidance and navigation of an autonomous robot vehicle. IEEE Transactions on
Robotics and Automation 7(2).
Dean, T. L., and Boddy, M. 1988. An analysis of time-dependent
planning. Proc. of AAAI-88.
Dellaert, F.; Burgard, W.; Fox, D.; and Thrun, S. 1999a. Using
the condensation algorithm for robust, vision-based mobile robot
localization. Proc. of CVPR-99.
Dellaert, F.; Fox, D.; Burgard, W.; and Thrun, S. 1999b. Monte
Carlo localization for mobile robots. Proc. of ICRA-99.
Doucet, A. 1998. On sequential simulation-based methods for
Bayesian filtering. TR CUED/F-INFENG/TR.310, Dept. of Engineering, Univ. of Cambridge.
Endres, H.; Feiten, W.; and Lawitzky, G. 1998. Field test of a navigation system: Autonomous cleaning in supermarkets. Proc. of
ICRA-98.
Engelson, S. 1994. Passive Map Learning and Visual Place
Recognition. Ph.D. Diss., Dept. of Computer Science, Yale Uni-
versity.
Fox, D.; Burgard, W.; Thrun, S.; and Cremers, A. 1998. Position
estimation for mobile robots in dynamic environments. Proc. of
AAAI-98.
Fox, D.; Burgard, W.; Kruppa, H.; and Thrun, S. 1999. A Monte
Carlo algorithm for multi-robot localization. TR CMU-CS-99-120, Carnegie Mellon University.
Fox, D.; Burgard, W.; and Thrun, S. 1998. Active Markov localization for mobile robots. Robotics and Autonomous Systems
25:3-4.
Kaelbling, L.; Cassandra, A.; and Kurien, J. 1996. Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation. Proc. of IROS-96.
Kalman, R. 1960. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering 82:35–45.
Kanazawa, K.; Koller, D.; and Russell, S. 1995. Stochastic simulation algorithms for dynamic probabilistic networks. Proc. of
UAI-95.
Kitagawa, G. 1996. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational
and Graphical Statistics 5(1).
Koller, D., and Fratkina, R. 1998. Using learning for approximation in stochastic processes. Proc. of ICML-98.
Kortenkamp, D.; Bonasso, R.; and Murphy, R., eds. 1997. AI-based Mobile Robots: Case studies of successful robot systems.
MIT Press.
Leonard, J., and Durrant-Whyte, H. 1992. Directed Sonar Sensing for Mobile Robot Navigation. Kluwer Academic.
Maybeck, P. 1979. Stochastic Models, Estimation and Control,
Vol. 1. Academic Press.
Nourbakhsh, I.; Powers, R.; and Birchfield, S. 1995. DERVISH: an office-navigating robot. AI Magazine 16(2).
Rubin, D. 1988. Using the SIR algorithm to simulate posterior
distributions. Bayesian Statistics 3. Oxford University Press.
Schiele, B., and Crowley, J. 1994. A comparison of position
estimation techniques using occupancy grids. Proc. of ICRA-94.
Simmons, R., and Koenig, S. 1995. Probabilistic robot navigation
in partially observable environments. Proc. of ICML-95.
Smith, R.; Self, M.; and Cheeseman, P. 1990. Estimating uncertain spatial relationships in robotics. Cox, I., and Wilfong, G.,
eds., Autonomous Robot Vehicles. Springer.
Tanner, M. 1993. Tools for Statistical Inference. Springer.
Thrun, S.; Bennewitz, M.; Burgard, W.; Cremers, A.; Dellaert,
F.; Fox, D.; Hähnel, D.; Rosenberg, C.; Roy, N.; Schulte, J.; and
Schulz, D. 1999. MINERVA: A second generation mobile tour-guide robot. Proc. of ICRA-99.
Thrun, S.; Fox, D.; and Burgard, W. 1998. A probabilistic approach to concurrent mapping and localization for mobile robots.
Machine Learning 31.
Zilberstein, S., and Russell, S. 1995. Approximate reasoning
using anytime algorithms. Imprecise and Approximate Computation. Kluwer.