Lecture FIVE: Sensors
5 Sensors
Sensors are the robot’s contact with the outside world: they sense or measure the
robot’s environment or its own internal parameters such as temperature, force, luminance,
touch, weight, and size. They include active and passive IR (infrared)
sensors; sound and voice sensors; ultrasonic range sensors; positional encoders on arm joints,
head, and wheels; compasses and navigational and GPS sensors; active and passive light and laser
sensors; bumper switches; and sensors for acceleration, turning, tilt, odour,
magnetic fields, ionizing radiation, temperature, touch, force, torque, video, and
numerous other quantities. We will discuss these sensors in four categories:
• Range,
• Proximity,
• Touch, and
• Force-torque sensing.
We classify sensors along two important functional axes:
• Proprioceptive/exteroceptive and
• Passive/active.
Proprioceptive sensors measure values internal to the robot; for example, motor speed, wheel
load, robot arm joint angles, battery voltage. Exteroceptive sensors acquire information from
the robot’s environment; for example, distance measurements, light intensity, sound
amplitude. Hence exteroceptive sensor measurements are interpreted by the robot in order to
extract meaningful environmental features.
Passive sensors measure ambient environmental energy entering the sensor. Examples of
passive sensors include temperature probes, microphones, and CCD or CMOS cameras.
Active sensors emit energy into the environment and then measure the environmental reaction.
Because active sensors can manage more controlled interactions with the environment, they
often achieve superior performance. However, active sensing introduces a risk: the
outbound energy may affect the very characteristics that the sensor is attempting to measure.
5.1 Range sensors
A range sensor measures the distance from a reference point to an object in the field of
operation. Such sensors typically use the time-of-flight concept, in which distance is
estimated from the time elapsed between the transmission of a signal and the return of its
reflection. A sensor consists of two parts: a transducer to produce wave energy, and an
aperture or antenna to radiate or receive such energy; however, these may be integrated
into a single component.
• The energy is launched as a wave across an aperture or antenna
• It propagates through the atmosphere until it meets a reflector
• A proportion is backscattered to the receiver and gathered by an antenna
• The energy is converted back into an electrical signal and stored in a computer
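The steps above reduce to a simple time-of-flight computation: the measured round-trip time, multiplied by the propagation speed of the wave and halved, gives the range. A minimal sketch (the numeric values are illustrative only):

```python
def tof_distance(elapsed_s: float, wave_speed_m_s: float) -> float:
    """Range from round-trip time of flight.

    The division by 2 accounts for the signal travelling to the
    reflector and back to the receiver.
    """
    return wave_speed_m_s * elapsed_s / 2.0

# Sonar: speed of sound in air is roughly 343 m/s at 20 °C
print(tof_distance(0.01, 343.0))    # a 10 ms echo -> 1.715 m

# Laser: at the speed of light (~3e8 m/s) the electronics must
# resolve nanoseconds to achieve centimetre-level accuracy
print(tof_distance(6.7e-9, 3.0e8))
```

The same formula serves sonar, IR, and laser ranging; only the wave speed and the required timing resolution change.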
Among the most common range sensors are:
• Infrared (IR),
• Sonar, and
• Laser sensors
Infrared (IR) sensors
IR sensors are non-contact sensors used to detect obstacles. They operate by emitting an
infrared light and detecting reflection from objects in front of the robot. IR sensor
measurements depend mainly on the surface and colour of the object; for example, black
objects are nearly invisible to IR sensors. Since the reflected IR signal falls off rapidly
with distance, IR sensors are inherently short-range sensors. IR sensors are usually divided into two basic types:
the passive IR sensors that emit no IR radiation and the active types that emit an IR beam that
is again detected by reflection. The active IR sensors generally use an IR LED emitting an
invisible beam that is, in turn, picked up as a reflected spot on a wall or object by a photo
transistor. Some IR sensor products are shown in Figure 5.1.
Fig. 5.1: Infra-red light sensors
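Because the reflected signal shrinks as the object moves away, a rough range estimate can be obtained by inverting a calibration model. The sketch below assumes a hypothetical calibration constant k and the simple inverse model reading = k / distance; real IR sensors need per-surface calibration, since dark objects reflect far less light:

```python
def ir_range_estimate(reading: float, k: float, min_reading: float = 1e-6) -> float:
    """Invert the assumed model reading = k / distance.

    k is a hypothetical calibration constant obtained by measuring a
    known target at a known distance; min_reading guards against
    division by zero when nothing reflects back.
    """
    return k / max(reading, min_reading)

# calibration: a target at 0.2 m produced a reading of 2.5 -> k = 0.5
k = 2.5 * 0.2
print(ir_range_estimate(2.5, k))   # 0.2
```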
Sonar sensors
Sonar sensors emit a short powerful signal and receive the reflection off objects ahead of the
sensor. The distance of the object is calculated from the travel time of the signal and the speed
of sound. The general principle of a sonar sensor is shown in Figure 5.2.
Fig. 5.2: The principles of first return sonar using a threshold detector.
The counter stops when the signal received exceeds a pre-set threshold. The time of flight can
be read directly from the counter and converted to distance. The system could be modified to
use a threshold which decreases with time over the receiving period to account for the
attenuation of the sonar signal with range.
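The counter-and-threshold scheme described above can be sketched in a few lines. The decaying threshold is the suggested modification that compensates for attenuation with range; the sample values and rates are invented for illustration:

```python
def first_return_time(samples, sample_rate_hz, threshold0, decay_per_s):
    """Return the time at which the received signal first exceeds a
    threshold that decreases linearly with time (compensating for
    the attenuation of the sonar signal with range)."""
    for i, amplitude in enumerate(samples):
        t = i / sample_rate_hz
        threshold = max(threshold0 - decay_per_s * t, 0.0)
        if amplitude > threshold:
            return t
    return None  # no echo strong enough to trip the detector

# toy echo arriving at sample 50 of a 100 kHz receive stream
echo = [0.0] * 50 + [1.0]
t = first_return_time(echo, 100_000, threshold0=0.5, decay_per_s=100.0)
distance = 343.0 * t / 2.0   # convert time of flight to range
print(distance)
```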
Examples of sonar images of different surfaces are illustrated in Figure 5.3. It can be seen that
the basic shape for the rough surface varies significantly with bearing angle. The basic shape
of a smooth surface image does not vary significantly, but there is significant change in the
amplitude of the peak with angle of incidence (arising from the angular variation of the
radiation pattern).
Uncertainties in sonar sensors
Various uncertainties are associated with the readings of sonar sensors. The sensitivity
of a sonar sensor is cone-shaped; as a result, the detected object can lie anywhere within
the sonar cone, and the accuracy of the distance reading is a function of the width of the
sonar beam pattern. Sonar sensors also give erroneous readings due to specular reflection,
which occurs when the sonar beam hits a smooth surface at a shallow angle and is reflected
outward rather than back to the sensor.
Fig. 5.3 Sonar images of surfaces: (a) image of a smooth surface at zero bearing; (b)
image of a smooth surface obtained at the same distance but a bearing angle of about
28° (the second small peak in plot (b) is due to a further reflector in the line of sight);
(c) image of a rough surface at zero bearing angle; and (d) image of a rough surface
obtained at the same position but bearing angle of about 28°.
Laser range finders
Laser range finders are very common in mobile robots for measuring the distance,
velocity, and acceleration of objects. A short light pulse is sent out, and the reflection
off an object is detected to measure the elapsed time. The shorter wavelength (compared
with sonar) reduces specular reflection. The very inexpensive diode lasers available as
pointers and power-tool line generators make great robot add-ons.
Fig. 5.4: High-end laser range finders and obstacle detectors
Triangulation-based active ranging
Triangulation-based ranging sensors use geometric properties manifest in their measuring
strategy to establish distance readings to objects. The simplest class of triangulation-based
rangers are active because they project a known light pattern (e.g., a point, a line, or a texture)
onto the environment. The reflection of the known pattern is captured by a receiver and,
together with known geometric values, the system can use simple triangulation to establish
range measurements. If the receiver measures the position of the reflection along a single axis,
we call the sensor an optical triangulation sensor in 1D. If the receiver measures the position
of the reflection along two orthogonal axes, we call the sensor a structured light sensor. These
two sensor types are described in the two sections below.
Optical triangulation (1D sensor)
The principle of optical triangulation in 1D is straightforward, as depicted in figure 5.5. A
collimated beam (e.g., focused infrared LED, laser beam) is transmitted toward the target. The
reflected light is collected by a lens and projected onto a position-sensitive device (PSD) or
linear camera. Given the geometry of Figure 5.5, the distance is given by equation (5.1):
D f
L
x
(5.1)
The distance is proportional to 1 / x ; therefore the sensor resolution is best for close objects
and becomes poor at a distance. Sensors based on this principle are used in range sensing up
to 1 or 2 m, but also in high-precision industrial measurements with resolutions far below 1
µm. Optical triangulation devices can provide relatively high accuracy with very good
resolution (for close objects). However, the operating range of such a device is normally fairly
limited by geometry.
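Equation (5.1) and the 1/x resolution behaviour can be checked numerically. In this sketch, f is the lens focal length, L the baseline between the emitter and the lens, and x the measured position of the reflection on the PSD (all values invented):

```python
def triangulation_range(f_m: float, baseline_m: float, x_m: float) -> float:
    """1D optical triangulation, equation (5.1): D = f * L / x."""
    return f_m * baseline_m / x_m

def range_error(f_m: float, baseline_m: float, x_m: float, dx_m: float) -> float:
    """Range change caused by a detector position error dx.

    Because D is proportional to 1/x, the same dx costs far more
    range accuracy for distant objects (small x) than for close ones.
    """
    return abs(triangulation_range(f_m, baseline_m, x_m - dx_m)
               - triangulation_range(f_m, baseline_m, x_m))

f, L = 0.02, 0.10                        # 20 mm lens, 100 mm baseline
print(triangulation_range(f, L, 0.002))  # x = 2 mm gives D ≈ 1.0 m
print(range_error(f, L, 0.002, 1e-5))    # error from a 10 µm detector error
print(range_error(f, L, 0.0005, 1e-5))   # same dx at D ≈ 4 m: much larger error
```

Running the last two lines shows the error growing sharply with range, which is why these sensors excel for close objects but degrade at a distance.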
Fig 5.5: Principle of 1D laser triangulation.
5.2 Proximity sensors
Proximity sensors generally have a binary output which indicates the presence of an object
within a specified distance interval. They are used in robotics for grasping or avoiding
obstacles. Among the most widely used proximity sensors are:
• Inductive sensors,
• Hall-effect sensors,
• Capacitive sensors,
• Ultrasonic sensors, and
• Optical proximity sensors.
Inductive sensors are based on a change of inductance due to the presence of a ferromagnetic
metallic object. The voltage waveform observed at the output of the coil provides an effective
means for proximity sensing.
Hall-effect sensors are based on the Lorentz force, which acts on a charged particle travelling
through a magnetic field. Bringing a ferromagnetic material close to the semiconductor-magnet
device decreases the strength of the magnetic field, thus reducing the Lorentz force and the
voltage across the semiconductor. This drop in voltage is the key to sensing proximity with
Hall-effect sensors.
Capacitive sensors are potentially capable of detecting all solid and liquid materials.
Capacitive sensors are based on detecting a change in capacitance induced by a surface that is
brought near the sensing element. The sensing element is a capacitor composed of a sensitive
electrode and a reference electrode. Typically, these sensors are operated in a binary mode so
that a change in the capacitance greater than a threshold indicates the presence of an object.
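The binary mode of operation described above amounts to a single threshold test on the measured capacitance. A minimal sketch with invented values:

```python
def object_present(capacitance_f: float, baseline_f: float, threshold_f: float) -> bool:
    """Binary-mode capacitive proximity sensing: report an object when
    the capacitance rises more than threshold_f above the baseline
    (no-object) value."""
    return (capacitance_f - baseline_f) > threshold_f

baseline = 1.0e-12                                  # 1 pF with nothing nearby (invented)
print(object_present(1.3e-12, baseline, 0.1e-12))   # True: +0.3 pF shift
print(object_present(1.05e-12, baseline, 0.1e-12))  # False: below threshold
```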
Ultrasonic sensors reduce the dependence on the material being sensed. The basic element is an
electro-acoustic transducer, often of the piezoelectric ceramic type. The same transducer is
used for both transmitting and receiving. The housing is designed to produce a narrow acoustic
beam for efficient energy transfer and directivity. Proximity of an object is detected by
analysing the waveforms of both the transmitted and received acoustic signals.
Fig. 5.6: Ultrasonic range finders
Optical proximity sensors detect the proximity of an object by its influence on a propagating
wave as it travels from a transmitter to a receiver. This sensor consists of a solid-state LED,
which acts as a transmitter of an infrared light, and solid-state photodiode which acts as the
receiver. The cones of light formed by focusing the source and the detector on the same plane
intersect in a long, pencil-like volume. This volume defines the field of operation of the
sensor. A reflective surface that intersects the volume is illuminated by the source and seen by
the receiver.
5.3 Force and Torque sensors
Force and torque sensors are used for measuring the reaction forces developed at the joints. A
joint sensor measures the Cartesian components of force and torque acting on a robot joint.
Most wrist sensors function as transducers for transforming forces and moments exerted at the
hand into measurable deflections or displacements at the wrist. They consist of strain gauges
that measure the deflection of the mechanical structure due to external forces.
Shaft encoders are used to measure the movement of a robot’s motors, both translational and
rotational. Positional encoders are probably the second most popular sensor on a robot. Most
experimental robots do not have arms, so they do not use positional encoders to determine the
positions of an arm’s joints; instead they use shaft encoders on the wheels or motor shafts to
determine the number of revolutions of the wheels and thus the distance travelled.
travelled by the wheels. These encoders can use electrical contacts, magnetic Hall-effect
detectors, or the more popular optical path broken by rotating teeth or opaque and clear
graphics etched on a wheel. Absolute encoders output a binary word for each incremental
position and are complex and expensive. Incremental encoders provide a pulse for each
increment of shaft movement. The use of two optical channels enables the determination of
the direction of rotation.
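The two-channel scheme can be sketched as a quadrature decoder: channels A and B are 90° out of phase, so the order in which the 2-bit AB state changes reveals the direction of rotation. A minimal software decoder (the state encoding below is an assumption, not a standard):

```python
# Valid quadrature transitions for a 2-bit state (A << 1 | B):
# one sequence direction counts up, the reverse counts down.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_increments(ab_states):
    """Accumulate signed counts from a stream of sampled AB states;
    unchanged or illegal transitions contribute nothing."""
    count = 0
    for prev, cur in zip(ab_states, ab_states[1:]):
        count += TRANSITIONS.get((prev, cur), 0)
    return count

forward = [0b00, 0b01, 0b11, 0b10, 0b00]   # one full cycle forward
print(count_increments(forward))            # 4 (x4 decoding)
print(count_increments(forward[::-1]))      # -4: reversed rotation
```

Dividing the accumulated count by counts-per-revolution and multiplying by the wheel circumference then yields distance travelled.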
5.4 Touch sensors
Touch sensors are used in robots to obtain information associated with the contact between a
manipulator hand and objects in the workspace. Touch sensors can be subdivided into two groups:
binary and analogue. Binary sensors are basically contact devices, such as micro-switches, that
detect the presence of an object between the end-effector’s fingers. Analogue sensors, on the
other hand, are compliant devices that output a signal proportional to force. During the past
few years, considerable effort has been devoted to the development of tactile sensing arrays
capable of producing touch information over a wider area of the robot finger or hand. Using
several such approaches, significant progress has been made in the construction of artificial skin.
5.5 Vision sensors
Vision is our most powerful sense. It provides us with an enormous amount of information
about the environment and enables rich, intelligent interaction in dynamic environments. The
first step in this process is the creation of sensing devices that capture the same raw
information, light, that the human vision system uses. Some examples are shown in Figure 5.7.
These sensors have specific limitations in performance when compared to the human eye.
Fig. 5.7: Camera and vision sensors
Two current technologies for vision sensors will be discussed:
• CCD and
• CMOS.
CCD technology
The charge-coupled device (CCD) is the most popular basic ingredient of robotic vision systems
today. The CCD chip, shown in Figure 5.8, is an array of light-sensitive picture elements, or
pixels, usually with between 20,000 and several million pixels total. Each pixel can be
thought of as a light-sensitive, discharging capacitor that is 5 to 25 µm in size. First, the
capacitors of all pixels are charged fully, and then the integration period begins. As photons
of light strike each pixel, they liberate electrons, which are captured by electric fields and
retained at the pixel. Over time, each pixel accumulates a varying level of charge based on the
total number of photons that have struck it. After the integration period is complete, the
relative charges of all pixels need to be frozen and read. In a CCD, the reading process is
performed at one corner of the CCD chip. The bottom row of pixel charges is transported to
this corner and read, then the rows above shift down and the process is repeated. This means
that each charge must be transported across the chip, and it is critical that the value be
preserved. This requires specialized control circuitry and custom fabrication techniques to
ensure the stability of transported charges. The photodiodes used in CCD chips (and CMOS
chips as well) are not equally sensitive to all frequencies of light. They are sensitive to light
between 400 and 1000 nm wavelength. It is important to remember that photodiodes are less
sensitive to the ultraviolet end of the spectrum (e.g., blue) and are overly sensitive to the
infrared portion (e.g., heat).
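The row-by-row readout described above can be illustrated with a toy simulation: the bottom row is shifted into a readout register and digitised pixel by pixel, then every remaining row shifts down one place. This is a conceptual sketch, not a model of real CCD control circuitry:

```python
def ccd_readout(pixel_charges):
    """Simulate destructive CCD readout: read the bottom row at the
    corner register, shift the remaining rows down, and repeat."""
    rows = [row[:] for row in pixel_charges]   # copy; readout empties the array
    values = []
    while rows:
        bottom = rows.pop()      # bottom row reaches the readout register
        values.extend(bottom)    # shifted out one pixel at a time
    return values

# 2 x 2 toy sensor: bottom row [3, 4] is read before top row [1, 2]
print(ccd_readout([[1, 2],
                   [3, 4]]))    # [3, 4, 1, 2]
```

The sketch makes the key constraint visible: every charge crosses the whole chip on its way to one corner, which is why charge preservation demands specialized fabrication.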
Fig. 5.8: 2048 × 2048 CCD array
There are two common approaches for creating colour images. If the pixels on the CCD chip
are grouped into 2 x 2 sets of four, then red, green, and blue dyes can be applied to a colour
filter so that each individual pixel receives only light of one colour. Normally, two pixels
measure green while one pixel each measures red and blue light intensity.
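The 2 × 2 grouping described above is the familiar Bayer mosaic. A small sketch of the filter layout, assuming the common RGGB arrangement (other orderings exist):

```python
def bayer_colour(row: int, col: int) -> str:
    """Colour filter over pixel (row, col) in an RGGB Bayer mosaic:
    every 2 x 2 block holds one red, two green, and one blue filter."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# one 2 x 2 block: two greens, one red, one blue
block = [bayer_colour(r, c) for r in (0, 1) for c in (0, 1)]
print(block)   # ['R', 'G', 'G', 'B']
```

The doubled green count mirrors the human eye’s greater sensitivity to green; full-colour pixels are later reconstructed by interpolating the missing channels from neighbours.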
CMOS Technology
The complementary metal oxide semiconductor chip is a significant departure from the CCD.
It too has an array of pixels, but located alongside each pixel are several transistors specific to
that pixel. Just as in CCD chips, all of the pixels accumulate charge during the integration
period. During the data collection step, the CMOS takes a new approach: the pixel-specific
circuitry next to every pixel measures and amplifies the pixel’s signal, all in parallel for every
pixel in the array. Using more traditional traces from general semiconductor chips, the
resulting pixel values are all carried to their destinations.
The CMOS chip is so much simpler that it consumes significantly less power: it operates with
a power consumption roughly one-hundredth that of a CCD chip. In a mobile robot, power is a
scarce resource, and therefore this is an important advantage. A commonly available low-cost
CMOS camera is shown in Figure 5.8.
Fig. 5.8: A commercially available, low-cost CMOS camera with lens attached.
Visual ranging sensors
As we have seen earlier, a number of sensors are popular in robotics explicitly for their ability
to recover depth estimates. A fundamental problem with visual images makes range finding
relatively difficult. Any vision chip collapses the 3D world into a 2D image plane, thereby
losing depth information. If one can make strong assumptions regarding the size of objects in
the world, or their particular colour and reflectance, then one can directly interpret the
appearance of the 2D image to recover depth.
The general solution is to recover depth by looking at several images of the scene to gain
more information. An alternative is to create different images, not by changing the viewpoint,
but by changing the camera geometry, such as the focus position or lens iris. This is the
fundamental idea behind depth from focus and depth from defocus techniques.
The basic formula governing image formation relates the distance d of the object from the
lens to the distance e from the lens to the image plane, based on the focal length f of the
lens:

1/f = 1/d + 1/e    (5.2)
Example The formation of an image using a camera with a thin convex lens is shown in
Figure 5.9. An object with a height of 0.5m is placed at a distance of 1m from the lens with a
focal length of 35mm. What will be the height of the image?
Figure 5.9: Formation of an image with a convex thin lens (object height h_o at distance d_o; image height h_i at distance d_i; focal length f)
The distance of the image d_i can be calculated using the well-known thin-lens formula

1/f = 1/d_o + 1/d_i

which gives

d_i = f d_o / (d_o − f) = (0.035 × 1.0) / (1.0 − 0.035) = 0.035 / 0.965 = 0.03627 m

Once the distance of the image d_i is known, the height of the image can be calculated from
the relationship

h_o / h_i = d_o / d_i

which gives

h_i = h_o (d_i / d_o) = 0.5 × (0.03627 / 1.0) = 0.018135 m
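The worked example above is easy to reproduce in code; a small sketch of the thin-lens and magnification relations:

```python
def image_distance(f: float, d_o: float) -> float:
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i."""
    return f * d_o / (d_o - f)

def image_height(h_o: float, d_o: float, d_i: float) -> float:
    """Magnification relation h_o / h_i = d_o / d_i, solved for h_i."""
    return h_o * d_i / d_o

d_i = image_distance(0.035, 1.0)    # 35 mm lens, object at 1 m
h_i = image_height(0.5, 1.0, d_i)   # 0.5 m tall object
print(round(d_i, 5))                # 0.03627
print(round(h_i, 6))                # 0.018135
```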
5.6 Machine olfaction (electronic nose)
Machine olfaction is the automated simulation of the sense of smell. It is an emerging
capability of modern robotics, needed where robots or other automated systems must detect
the presence or concentration of particular chemicals in the air. This technology is still
in the early stages of development, but it promises many applications, such as:
• quality control in food processing (e.g. taints, bacterial spoilage)
• detection and diagnosis in medicine
• detection of drugs, explosives and dangerous or illegal substances
• military and law enforcement (e.g. chemical warfare agents)
• disaster response (e.g. toxic industrial chemicals)
• environmental monitoring (e.g. pollutants)
Pattern analysis constitutes a critical building block in the development of gas sensor array
instruments capable of detecting, identifying, and measuring volatile compounds, a
technology that has been proposed as an artificial substitute for the human olfactory system.
Some pattern recognition problems in machine olfaction, such as odour classification and
odour localization, can be solved using time-series kernel methods. There are three basic
detection techniques, using:
• Conductive-polymer odour sensors (poly-pyrrole)
• Tin-oxide gas sensors
• Quartz-crystal micro-balance sensors
They generally comprise an array of sensors of some type, the electronics to interrogate those
sensors and produce the digital signals, and finally the data processing and user interface
software.
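A minimal sketch of the "train then recognise" pattern-analysis stage: store one signature per trained odour and match new array readings by nearest Euclidean distance. All sensor values here are invented, and real systems use far richer methods (such as the time-series kernel methods mentioned above):

```python
import math

def nearest_odour(reading, signatures):
    """Label a sensor-array reading with the trained odour whose
    stored signature is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(signatures, key=lambda label: dist(reading, signatures[label]))

# invented 3-sensor signatures learned during 'training'
signatures = {"good": [0.2, 0.8, 0.1], "bad": [0.9, 0.1, 0.7]}
print(nearest_odour([0.25, 0.70, 0.15], signatures))   # good
```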
Conventional electronic noses are not analytical instruments in the classical sense and very
few claim to be able to quantify an odour. These instruments are first ‘trained’ with the target
odour and then used to ‘recognise’ smells so that future samples can be identified as ‘good’ or
‘bad’ smells.
Electronic noses have been demonstrated to discriminate between odours and volatiles from a
wide range of sources. The applications listed earlier are just some of the typical uses for
electronic nose technology; many are backed by research studies and published technical
papers.