Appeared in:
Proceedings of the SPIE Biometric Technology for Human Identification II,
Vol. 5779, pp. 41-50, Orlando, FL, 2005.
Extended depth-of-field iris recognition system for a workstation
environment
Ramkumar Narayanswamy (a)*, Paulo E. X. Silveira (a), Harsha Setty (b), V. Paul Pauca (b), and Joe van der Gracht (c)
(a) CDM Optics, Inc.; (b) Wake Forest University; (c) HoloSpex, Inc.
ABSTRACT
Iris recognition imaging is attracting considerable interest as a viable alternative for personal identification and verification
in many defense and security applications. However, current iris recognition systems suffer from a limited depth of field,
which makes them difficult for untrained users to operate. Traditionally, the depth of field is increased by
reducing the imaging system aperture, which adversely impacts the light-capturing power and thus the system signal-to-noise ratio (SNR). In this paper we discuss a computational imaging system, referred to as Wavefront Coded® imaging, for
increasing the depth of field without sacrificing the SNR or the resolution of the imaging system. The system employs a
specially designed Wavefront Coded lens customized for iris recognition. We present experimental results that show the
benefits of this technology for biometric identification.
Keywords: Iris recognition, imaging, depth-of-field, optics, computational imaging, biometric
1. INTRODUCTION
In many applications, such as airport security, physical access control, computer security, financial transactions, and
health-care documents, personal identification is typically confirmed with the use of ID cards or passwords. However,
these forms of identification are in themselves not secure. ID cards and passwords can be stolen or counterfeited. As a
result, any action or application that is contingent on establishing identity can be compromised. Systems based on the use
of biometrics verify identity on the basis of innate metrics, such as facial features, fingerprints or iris patterns, making the
process more robust and substantially more difficult to tamper with [1].
In particular, identity verification using a person’s iris is attracting considerable interest. The human iris as a biometric has
several attractive features: the iris texture is highly unique; it is generally stable over the lifetime of an individual and the
image of the eye/iris can be captured from a distance using standard camera systems. The iris is the annular area between
the pupil and the white sclera in the eye. The iris has a rich texture of interlacing features that provide a signature that is
unique to each subject. In fact, the operating probability of false identification by the Daugman algorithm can be as low as
1 in 10^10 [2-4]. Compared to fingerprinting, hand shape, facial features, or voice, the iris is generally considered more
stable and reliable for identification.
Figure 1: Iris Biometric Recognition System. Increasing the depth of field of this system improves ease-of-use and
facilitates the adoption of the technology.
* Corresponding author: [email protected]
In one particular application, iris recognition is used instead of passwords for controlling the access and monitoring the use
of computer systems. Figure 1 illustrates an instance of this application where a computer user is situated 2 feet (61 cm)
from an iris recognition camera and is expected to authenticate his/her identity on a periodic basis. Commercially
available iris recognition systems for this application are currently difficult to use. The subject must be carefully
positioned so that the correct eye is within the field of view, in focus, and held stationary to avoid motion blur. The
narrow field of view and shallow depth of field mandate considerable user cooperation and training.
Ideal iris imaging systems should have a large field of view and a large depth of field, allowing users to be identified
without requiring their active cooperation. The system should capture adequate signal to facilitate good discrimination and
operate with both stationary and moving objects. This need for a high SNR at the detector and minimum motion blur (short
exposures) calls for a relatively fast optical system (low F-number), which inherently leads to a shallow depth of field in
traditional optical systems.
In this paper we present a fast (low F-number) iris imaging system with an extended depth of field, a solution that cannot
be delivered with traditional optical designs. Our imaging system uses asymmetric-aspheric optics and complementary
signal processing. This technology, referred to as Wavefront Coded imaging, is an example of the emergent field of
computational imaging [5-6]. The use of Wavefront Coded imaging for biometric iris recognition has been previously
proposed [7-9]. This paper presents what is, to the best of our knowledge, the best performing Wavefront Coded system
available to date specifically designed for iris biometric recognition.
Figure 2: The diffraction-limited optical resolution of the imaging system bears an inverse relationship with the F-number. This effect is typically not noticed, since digital imaging systems are usually limited by the detector resolution rather than the optical resolution.
2. TRADE-OFF IN TRADITIONAL IMAGING SYSTEMS
Some of the main parameters that specify an imaging system are its resolution, depth of field (DOF), field of view and
exposure period per image-frame. The resolution is defined as the maximum spatial frequency that can be resolved in the
image at a predetermined contrast. For example, the resolution specification can be 50 line-pairs/mm at 10% contrast. The
field of view determines the spatial extent of the scene acquired by the sensor. Depth of field determines how far a planar
object can move away from the best focus position and still be imaged without focus errors. The exposure period is defined
as the detector integration time while acquiring an image frame, and it is mainly determined by the scene illumination and
how fast the scene is changing or the object is moving. These parameters, however, cannot all be optimized independently. In this
section we show that, in traditional imaging systems, increasing the DOF requires trading off optical resolution
and light-gathering capacity, leading to a loss of contrast and SNR.
2.1. Diffraction limited resolution
The resolution of the optics is related to the aperture diameter and, for objects sufficiently distant from the imaging system,
it can be approximated by
"0 #
1
D
=
,
!.F /# !f
(1)
! 0 is the cut-off spatial frequency (specified in cycles/length), ! is the wavelength of light being used, D is the
diameter of the entrance pupil, f is the effective focal length and F/# is the F-number, given by the effective focal length
where
of the imaging system divided by its entrance pupil diameter. As the imaging system’s entrance pupil diameter, D, is
reduced (larger F-number), the optical resolution of the system drops as 1/F#, as shown in Figure 2. This effect is typically
overlooked, since digital camera systems tend to sample the image at a spatial frequency lower than the optical resolution.
Even in this regime, where the resolution is determined by the sensor and not the optics, reducing the aperture size
leads to a drop in contrast at all spatial frequencies, especially at the high spatial frequencies, leading to an overall loss in
SNR.
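As a quick illustration of Eq. (1), the following minimal sketch evaluates the diffraction-limited cut-off frequency for a few F-numbers; the 850 nm near-infrared wavelength is an assumed value typical of iris illumination, not a parameter specified above.

def optical_cutoff_lp_per_mm(wavelength_mm, f_number):
    """Cut-off spatial frequency nu_0 = 1 / (lambda * F/#), in line pairs (cycles) per mm."""
    return 1.0 / (wavelength_mm * f_number)

wavelength_mm = 850e-6  # 850 nm expressed in mm (assumed near-infrared illumination)
for f_number in (3.5, 5.6, 8.0, 11.0):
    print(f"F/{f_number}: cut-off ~ {optical_cutoff_lp_per_mm(wavelength_mm, f_number):.0f} lp/mm")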
Figure 3: The DOF of traditional imaging systems grows linearly with F-number from F/1 to F/32. For example,
moving from F/4 to F/8 doubles the depth of field. The depth of field is higher for objects located further away from
the lens. The plot shows the depth of field for objects located at 500, 600 and 700 mm from the lens. The focal length
of the lens is 50mm and the sensor pixel pitch is approximately 5 microns.
2.2. Depth of Field
The limiting resolution of the sensor is given by
$\nu_s = \frac{1}{2p}$,    (2)
where νs is the sensor cut-off spatial frequency and p is the pixel pitch. The system is referred to as an “under-sampled” system
when νs < ν0. The depth of field in an under-sampled system is mainly determined by the diameter of the tolerable blur
(pixel size). In this case, the DOF is linear with F-number, as shown in Figure 3. Increasing the F-number further leads to
an over-sampled system, where the sensor cut-off frequency is greater than the highest frequency in the image-plane. For
such an over-sampled system, diffraction and aberration effects determine the DOF, which starts bearing a non-linear
relationship with F-number.
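The relationship between Eq. (2) and the linear DOF regime of Figure 3 can be sketched with a simple thin-lens, geometric-optics calculation in which the tolerable blur (circle of confusion) is set equal to the pixel pitch. The values below mirror the Figure 3 parameters (50 mm lens, 5-micron pixels), but the calculation is an illustrative approximation rather than the exact model used for the figure.

def sensor_cutoff_lp_per_mm(pixel_pitch_mm):
    """Sensor cut-off (Nyquist) frequency nu_s = 1 / (2 p), in cycles per mm."""
    return 1.0 / (2.0 * pixel_pitch_mm)

def geometric_dof_mm(focal_mm, f_number, object_dist_mm, coc_mm):
    """Thin-lens depth of field (far limit minus near limit) for a given circle of confusion."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = object_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + object_dist_mm - 2.0 * focal_mm)
    far = object_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - object_dist_mm)
    return far - near

print(f"Sensor cut-off: {sensor_cutoff_lp_per_mm(0.005):.0f} lp/mm")
for n in (4.0, 8.0):  # doubling the F-number roughly doubles the DOF in this regime
    print(f"F/{n}: DOF ~ {geometric_dof_mm(50.0, n, 600.0, 0.005):.1f} mm at 600 mm")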
Figure 4: Light gathering capacity of the lens drops quadratically with increasing F-number.
2.3. Light gathering capacity
The F-number of the lens also determines its light-gathering capability. A lens with a large entrance pupil relative to its
focal length (small F-number) captures more light from the object than a lens with a smaller entrance pupil and the
same focal length (larger F-number). Each one-stop increase in the F-number (stops are usually spaced by factors of the square root of two,
i.e., F/1, F/1.4, F/2, F/2.8, etc.) reduces the light-gathering capacity by 50%. Figure 4 illustrates this point by plotting the
optical power incident on a single pixel of a hypothetical imaging system as a function of the F-number. The drop in signal
leads to a drop in the overall operating SNR.
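The quadratic falloff in collected light can be made concrete with a few lines of arithmetic; the sketch below simply normalizes the irradiance to F/1 and is illustrative rather than a model of the system plotted in Figure 4.

# Relative light-gathering capacity versus F-number, normalized to F/1.
# Each one-stop increase (a factor of sqrt(2) in F-number) halves the collected light.
for f_number in (1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0):
    relative_light = 1.0 / f_number ** 2  # irradiance scales as 1/(F/#)^2
    print(f"F/{f_number:<4}: {100.0 * relative_light:5.1f}% of the light collected at F/1")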
2.4. Trade-off discussion
The depth of field of an imaging system increases linearly with increasing F-number. However, the optical resolution drops as
1/F# and the light capturing capacity drops as the square of 1/F#. In this paradigm, doubling the depth of field by going up
two F-stops implies reducing the light captured to 25% of the original amount of light, while suffering loss of contrast at
the higher spatial frequencies. The reduced SNR will lead to poor detection and, most likely, to a larger fraction of false
positive and/or false negative identifications. Increasing the integration time to compensate for the loss of signal can result
in motion blur.
3. WAVEFRONT CODED IMAGING SYSTEM
Wavefront Coded imaging is a new paradigm in imaging that provides imaging systems that operate with large
apertures (low F-numbers) but with the DOF of a reduced-aperture system. Wavefront Coded imaging uses non-conventional aspherical optics to form point spread functions that, compared with traditional optics, do not vary significantly over an extended imaging
volume. Alternatively stated, Wavefront Coded systems operate by coding the wavefront
such that the modulation transfer function (MTF) of the system does not lose modulation at higher spatial frequencies with
increasing defocus error. Defocus is commonly measured in terms of waves of defocus, and typically a quarter wave of
defocus is considered substantial in an optical design. Our iris imaging system has more than fifteen waves of defocus over
the intended imaging volume.
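For readers unfamiliar with waves of defocus, the sketch below estimates this quantity for an object displaced from best focus using one common convention, W20 = (D^2/8)(1/f - 1/d_obj - 1/d_img), i.e., the peak defocus wavefront error at the aperture edge expressed in wavelengths. The 50 mm, F/3.5 lens and 24.5" nominal focus follow the system described below; the 850 nm wavelength is an assumption, and the convention may differ in detail from the one used in the design.

def waves_of_defocus(focal_mm, f_number, nominal_obj_mm, actual_obj_mm, wavelength_mm):
    """Peak defocus wavefront error W20 (in waves) with the image plane fixed at best focus."""
    aperture_mm = focal_mm / f_number
    d_img = 1.0 / (1.0 / focal_mm - 1.0 / nominal_obj_mm)  # image distance for the nominal object
    w20 = (aperture_mm ** 2 / 8.0) * (1.0 / focal_mm - 1.0 / actual_obj_mm - 1.0 / d_img)
    return abs(w20) / wavelength_mm

# Object at 17.5" (444.5 mm) while the lens stays focused at 24.5" (622.3 mm):
print(f"{waves_of_defocus(50.0, 3.5, 622.3, 444.5, 850e-6):.1f} waves of defocus")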
Figure 5: Experimentally measured modulation transfer function of a traditional imaging system in the presence of
defocus error. Notice that the MTF drops rapidly with increasing defocus at the higher spatial frequencies. This
plot graphs the MTF for defocus error associated with object distances of 20”, 23”, 24.5” and 28”. The lens focal
length is 50 mm, operated at F/3.5 with a 5-micron-pixel sensor, and the nominal object distance is 24.5”. The nulls
present in the defocused MTFs lead to an irreversible loss of information. Aliasing leads to measured MTF values at
high spatial frequencies that are higher than in reality.
3.1. Modulation transfer functions of a traditional and of a Wavefront Coded imaging system.
Figure 5 shows the experimentally-determined MTF of our iris recognition system without Wavefront Coded optics for
various object positions. For the purpose of this discussion, we refer to our iris recognition system without Wavefront
Coded optics as the traditional imaging system. The various curves correspond to the system MTFs for different distances
from the object to the imaging system. The lens is a 50-mm focal length Cosmicar lens operated at F/3.5, and the detector
used has a pixel pitch p = 5 µm. The lens is adjusted to provide us with in-focus images at an object distance of 24.5”. The
corresponding MTF shows good modulation across all spatial frequencies. As the object moves away from the best focus
position, the system suffers from defocus error. This error manifests in the MTF as a dramatic drop in the modulation at the
higher spatial frequencies. Once this modulation drops below the noise floor all information in those spatial frequencies is
irremediably lost. Notice that a displacement of just 1.5” from the best focus position introduces nulls in the MTF. The
image content at a given spatial frequency suffers a phase reversal every time the modulation goes through a null. Automated detection systems are
often sensitive to such contrast reversals, and these phenomena can limit the system performance even before the overall
loss in modulation does.
Figure 6 plots the MTF of the Wavefront Coded version of the same imaging setup. The MTF is plotted for object distances of 20”, 23”,
24.5” and 28”. Notice that, under defocus, the Wavefront Coded system maintains substantially higher modulation than
the traditional system. The MTF is also free of nulls. Since the modulation is maintained above the noise floor
and free of nulls (zeros), the information from the object is preserved in spite of the large defocus error. Wavefront Coded
images can be processed to boost the spatial-frequency response to the desired levels, forming crisp, high-contrast images or meeting the
requirements of an automated detection algorithm. The processing is typically linear and is achieved by convolving the image
with a filter, as depicted in Figure 7.
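A minimal sketch of this linear decoding step is given below as a frequency-domain, Wiener-style deconvolution. The point spread function, the noise-to-signal ratio, and the function names are illustrative placeholders; the actual filter used in this work is derived from the measured Wavefront Coded system response.

import numpy as np

def wiener_decode(coded_image, psf, nsr=1e-2):
    """Decode a Wavefront Coded image by convolution with a linear restoration filter.

    `psf` is assumed to be centered and of the same shape as `coded_image`; `nsr` is an
    assumed noise-to-signal ratio that limits how much attenuated frequencies are boosted.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the coded system
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener-style restoration filter
    return np.real(np.fft.ifft2(np.fft.fft2(coded_image) * G))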
Figure 6: Experimentally measured modulation transfer of the custom designed Wavefront Coded imaging system.
Notice that the modulation levels are higher than those of the defocused traditional system, but lower than those of
the best-focus traditional system. Also notice that the MTFs do not have any nulls.
[Figure 7 block diagram: Wavefront Coded imaging — optical system with a custom optical element for iris recognition, digital detector, Wavefront Coded processing, iris recognition processing, decision.]
Figure 7: Schematic diagram of the Wavefront Coded imaging system for iris recognition. The optical system
differs from a traditional imaging system in that it is fitted with a custom optical element, and the acquired image is
post-processed as part of the detection process.
3.2. Description of a Wavefront Coded imaging system.
Wavefront coded imaging systems typically consist of a digital detector and a conventional optical system retrofitted with a
Wavefront Coded optical element (see Fig. 7). One side of the optical element usually has a non-spherical, circularly
asymmetric surface and the other side typically has a flat surface. The optical system is focused at the nominal object
distance and images of the object are recorded exactly as in a traditional system. In the processing step the recorded images
are convolved with a filter to arrive at the “decoded” image, fit for use in the application of choice. Examination of a
recorded image prior to processing renders a blurred version of the scene. But the blur is nearly uniform across the image
and does not vary significantly as a function of field or object distances within certain specifications. This blurred image
can be thought of as an image optimized for “information capture” rather than for human visualization. For better
processing performance, different filters can be applied to different object ranges. In the case of iris recognition, the radius
of the iris (which is quite invariant across the human population) can be used to estimate the object distance. In our system
best results were achieved when seven ranges were used.
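The range-binned decoding described above can be sketched as follows: the apparent iris radius in the image gives a rough object-distance estimate, which selects one of seven precomputed decoding filters. The 11.8 mm iris diameter, the pinhole-camera range estimate, the bin edges, and all names are illustrative assumptions rather than the values used in our system.

IRIS_DIAMETER_MM = 11.8  # roughly constant across the population (assumed value)
FOCAL_MM = 50.0
PIXEL_MM = 0.005

# Seven range bins spanning the imaging volume (edges in mm; assumed, evenly spaced).
RANGE_BIN_EDGES_MM = [445, 490, 535, 580, 625, 670, 715, 762]

def estimate_range_mm(iris_radius_px):
    """Pinhole-camera estimate: object distance ~ f * (object size / image size)."""
    image_diameter_mm = 2.0 * iris_radius_px * PIXEL_MM
    return FOCAL_MM * IRIS_DIAMETER_MM / image_diameter_mm

def select_filter_index(iris_radius_px):
    """Pick which of the seven precomputed decoding filters to apply."""
    r = estimate_range_mm(iris_radius_px)
    for i in range(len(RANGE_BIN_EDGES_MM) - 1):
        if r < RANGE_BIN_EDGES_MM[i + 1]:
            return i
    return len(RANGE_BIN_EDGES_MM) - 2  # clamp to the farthest bin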
4. WAVEFRONT CODED IMAGING APPLIED TO IRIS RECOGNITION
A Wavefront Coded element was custom designed for biometric iris recognition using the same conventional optical
system described in Section 3.1. The recognition algorithm was included in the optimization loop, resulting in a Wavefront
Coded optical element specifically optimized for iris recognition. The design process is described in more detail by
Narayanswamy et al [9]. The optical transmission of the Wavefront Coded element is defined by a high-order separable
function of the form P(x,y) = exp{-j[f(x) + g(y)]}, where f(x) and g(y) are high-order polynomials.
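A sketch of such a separable pupil-phase function is shown below; the polynomial orders and coefficients are placeholders, since the optimized coefficients of the actual element are not reproduced here.

import numpy as np

def separable_phase(coeffs_x, coeffs_y, n=256):
    """Complex pupil function P(x, y) = exp{-j [f(x) + g(y)]} on a normalized [-1, 1]^2 aperture.

    `coeffs_x` and `coeffs_y` are lists of (power, coefficient) pairs defining the
    high-order polynomials f and g (phase in radians at the aperture edge).
    """
    u = np.linspace(-1.0, 1.0, n)
    x, y = np.meshgrid(u, u)
    fx = sum(c * x ** k for k, c in coeffs_x)
    gy = sum(c * y ** k for k, c in coeffs_y)
    return np.exp(-1j * (fx + gy))

# Hypothetical cubic-plus-fifth-order terms, for illustration only.
pupil = separable_phase(coeffs_x=[(3, 20.0), (5, 5.0)], coeffs_y=[(3, 20.0), (5, 5.0)])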
One of the key steps in the design of the Wavefront Coded iris recognition system is translating the iris recognition
algorithm’s requirement into parameters relevant to optical and imaging system design. In our system, this
requirement was partially met by optimizing the Wavefront Coded element with an iris recognition algorithm inside the
optimization loop [9]. This is particularly important in order to take into account the non-linear steps required for feature
extraction and identification. Nevertheless, correct iris identification requires the acquisition of the iris texture over a band
of spatial frequencies without distortion and at a sufficiently high SNR. A detailed analysis of the algorithm helped us
identify the spatial frequency bands of interest, and during the optimization of the Wavefront Coded optical element it was
also required that the resulting system maintained adequate modulation within these bands over the entire imaging range.
Figure 8: Estimated SNR at the highest spatial frequency of interest as the object distance is varied from 17.5” to
30”. The variation of spatial frequency with magnification is taken into account, so that the SNRs are plotted at a
constant spatial frequency in the object plane.
Figure 8 plots the estimated system SNR at the highest spatial frequency of interest as the object moves through the
imaging volume and undergoes different amounts of defocus error. Note that the magnification of the imaging system is
inversely proportional to the object distance, resulting in a variation of the highest spatial frequency of interest in the image
plane. We take this effect into account by tracking higher spatial frequencies in the image plane as the object moves farther
away. The plot shows that Wavefront Coded imaging (solid line) maintains the modulation sufficiently high to deliver a
minimum of 15 dB SNR. The corresponding curve for the traditional system (dashed line) drops below 15 dB for a
displacement of 1.5” from the best focus position. The iris recognition algorithm can be characterized in terms of the
minimum SNR needed at various spatial frequency bands of interest. Subsequently, the depth of field of the system can be
estimated by examining the SNR of the various frequency bands of interest.
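The bookkeeping behind this SNR estimate can be sketched as follows: the object-plane frequency of interest is mapped to the image plane through the distance-dependent magnification, the MTF is sampled there, and the SNR follows from the resulting modulation relative to a noise floor. The contrast and noise values, the thin-lens magnification, and the function names are assumptions for illustration.

import numpy as np

def snr_db_at_frequency(mtf, nu_object_cyc_per_mm, object_dist_mm,
                        focal_mm=50.0, object_contrast=0.6, noise_rms=0.01):
    """Estimate the SNR (dB) at a fixed object-plane spatial frequency.

    `mtf` is a callable returning the modulation at an image-plane frequency in cycles/mm.
    """
    magnification = focal_mm / (object_dist_mm - focal_mm)  # thin-lens approximation
    nu_image = nu_object_cyc_per_mm / magnification         # farther objects map to higher image-plane frequencies
    signal = object_contrast * mtf(nu_image)
    return 20.0 * np.log10(signal / noise_rms)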
Figure 9: Comparison between images of eyes captured with a traditional imaging system (left) and a Wavefront
Coded imaging system (right) at object distances of 20”, 24” and 28”. Both systems use a 50mm lens operating at
F/3.5. The lens is focused at an object distance of 24” and not refocused as the object is moved away from the best
focus position.
Figure 9 compares the performance of the traditional and Wavefront Coded systems using iris images
acquired by each system at object distances of 20”, 24” and 28”, with each imaging system focused at an
object distance of 24”. The Wavefront Coded imaging system is fitted with a custom asymmetric aspherical optical
element, as shown in Figure 7, and the acquired images have been filtered as part of the processing step. Notice that the
Wavefront Coded images show fine detail such as eyelashes and skin texture at all three distances, whereas the traditional
images appear very blurred when out of focus. Iris images often present specular reflections due to the reflection of the
illuminating light sources by the cornea. In the traditional images, the specular reflections are circular, while in the
Wavefront Coded images they possess a peculiar shape. This is a characteristic of Wavefront Coded systems, whose point-spread functions are often non-circular and take on a variety of shapes due to the asymmetry of the pupil function. Also
note that the specular reflections have been partially removed from the Wavefront Coded images as part of the iris
recognition processing.
Figure 10 compares the iris identification Hamming distances (HD) as the user moves from a distance of 18” to 30” at 0.5”
increments. The HD is a measurement of the fractional difference between bits in two given binary iris codes [1]. A lower
HD indicates a good match, whereas a HD higher than 0.3 indicates a mismatch. The HDs are plotted for an “authentic” or
valid user. An “imposter” or a non-user would have HDs consistently around 0.45. Ten images are captured at each object
position and the HDs are calculated for each of the ten images. Then, the average of the ten HDs is calculated and
represented by a diamond-shape in Fig. 10. A threshold value of 0.2 provides us with accurate iris recognition from 23” to
25.5” for the traditional imaging system, whereas in the Wavefront Coded system we have accurate iris recognition from
20.5” to 28.5”, which is an increase in the depth of field by a factor of 3.
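For reference, the normalized Hamming distance used here can be computed as in the sketch below, which follows the standard masked form consistent with the definition cited above [1]: bits are compared only where both occlusion masks mark valid iris texture. The array names and the 0.2 threshold mirror the discussion above and are otherwise illustrative.

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits between two binary iris codes, counted over jointly valid bits.

    All inputs are boolean NumPy arrays of equal shape (iris codes and their occlusion masks).
    """
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

def is_match(code_a, code_b, mask_a, mask_b, threshold=0.2):
    """Declare a match when the masked Hamming distance falls below the chosen threshold."""
    return hamming_distance(code_a, code_b, mask_a, mask_b) < threshold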
5. CONCLUSION
Iris biometric systems are increasingly being used for access control of computer systems. Current iris recognition systems
are difficult to use due to their limited depth of field and field of view. Wavefront Coded imaging can be used to increase
the depth of field by mitigating defocus error. Field of view is increased by using sensors that cover a greater area on the
focal plane. Wavefront Coded imaging can mitigate the field-dependent aberrations present when operating at large angles.
In this paper we have presented a custom Wavefront Coded optical element for the iris recognition application and
experimentally demonstrated the positive impact it has on the application by extending the depth of field by three times
without requiring a reduction in the entrance pupil diameter of the imaging system.
Traditionally, increasing the depth of field by a factor of 3 requires moving from an F-number of 3.5 to an F-number of 10.
However, the light captured at F/10 drops to roughly 12.5% of the light captured at F/3.5. Since eye safety is a significant concern
in this application, it may be impossible to increase the illumination power by a factor of 8 to maintain the same SNR as that
achieved at F/3.5. Even if increasing the illumination were possible, the application still may not attain the desired depth of field,
since diffraction effects at F/10 will lead to a loss in contrast and poor SNR at the higher spatial frequencies.
(a) Traditional system. (b) Wavefront Coded system.
Figure 10: Comparison between iris Hamming distances (HD) as the user moves from an object distance of 18" to
30" away from the imaging system. Ten images are captured at each position. The corresponding HDs are shown
as dots and the averages of the ten HDs at each position are shown as diamonds. Setting a threshold of 0.2 provides
a 2.5” depth of field for the traditional imaging system (a). The Wavefront Coded system (b) is able to maintain the
HD below 0.2 for a distance of 8”.
6. REFERENCES
1. S. Nanavati, M. Thieme, and R. Nanavati, Biometrics: Identity Verification in a Networked World, John Wiley & Sons, 2002.
2. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. PAMI 15, 1148-1161 (1993).
3. J. G. Daugman, “The importance of being random: statistical principles of iris recognition,” Pattern Recognition 36, 279-291 (2003).
4. J. G. Daugman, “How iris recognition works,” IEEE Trans. Circuits and Systems for Video Tech. 14(1), 21-30 (2004).
5. E. Dowski, Jr. and W. T. Cathey, “Extended depth of field through wavefront coding,” Applied Optics 34, 1859-1866 (1995).
6. W. T. Cathey and E. Dowski, “A new paradigm for imaging systems,” Applied Optics 41, 6080-6092 (2002).
7. J. van der Gracht, V. P. Pauca, H. Setty, E. R. Dowski, R. J. Plemmons, T. C. Torgersen, and S. Prasad, “Iris recognition with enhanced depth-of-field image acquisition,” Proc. of SPIE 5358, 120-129 (2004).
8. R. J. Plemmons, M. Horvath, E. Leonhardt, V. P. Pauca, S. Prasad, S. Robinson, H. Setty, T. C. Torgersen, J. van der Gracht, E. Dowski, R. Narayanswamy, and P. E. X. Silveira, “Computational Imaging Systems for Iris Recognition,” Proc. of SPIE 5559, 346-357, Denver (2004).
9. R. Narayanswamy, G. E. Johnson, P. E. X. Silveira, and H. B. Wach, “Extending the Imaging Volume for Biometric Iris Recognition,” to be published in Applied Optics 44, Feb. 2005.