Compact Adaptive Optics Line Scanning
Ophthalmoscope
Mircea Mujat*, R. Daniel Ferguson, Nicusor Iftimia, and Daniel X. Hammer
Physical Sciences Inc., 20 New England Business Center, Andover MA 01810
*Corresponding author: [email protected]
Abstract: We have developed a compact retinal imager that integrates
adaptive optics (AO) into a line scanning ophthalmoscope (LSO). The
bench-top AO-LSO instrument significantly reduces the size, complexity,
and cost of research AO scanning laser ophthalmoscopes (AOSLOs), for
the purpose of moving adaptive optics imaging more rapidly into routine
clinical use. The AO-LSO produces high resolution retinal images with
only one moving part and a significantly reduced instrument footprint and
number of optical components. The AO-LSO has a moderate field of view
(5.5 deg), which allows montages of the macula or other targets to be
obtained more quickly and efficiently. In a preliminary human subjects
investigation, photoreceptors could be resolved and counted within
~0.5 mm of the fovea. Photoreceptor counts matched closely to previously
reported histology. The capillaries surrounding the foveal avascular zone
could be resolved, as well as cells flowing within them. Individual nerve
fiber bundles could be resolved, especially near the optic nerve head, as
well as other structures such as the lamina cribrosa. In addition to
instrument design, fabrication, and testing, software algorithms were
developed for automated image registration and cone counting.
© 2009 Optical Society of America
OCIS codes: (110.1080) Adaptive optics; (170.0110) Imaging systems; (170.5755) Retina scanning; (170.4470) Ophthalmology.
References and links
1. C.E. Bigelow, N.V. Iftimia, R.D. Ferguson, T.E. Ustun, B. Bloom, and D.X. Hammer, “Compact multimodal adaptive-optics spectral-domain optical coherence tomography instrument for retinal imaging,” JOSA A 24, 1327-1336 (2007).
2. S.A. Burns, R. Tumbar, A.E. Elsner, D. Ferguson, and D.X. Hammer, “Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope,” JOSA A 24, 1313-1326 (2007).
3. A.W. Dreher, J.F. Bille, and R.N. Weinreb, “Active optical depth resolution improvement of the laser tomographic scanner,” Appl. Opt. 28, 804-808 (1989).
4. E.J. Fernandez, B. Hermann, B. Povazay, A. Unterhuber, H. Sattmann, B. Hofer, P.K. Ahnelt, and W. Drexler, “Ultrahigh resolution optical coherence tomography and pancorrection for cellular imaging of the living human retina,” Opt. Express 16, 11083-11094 (2008). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-15-11083
5. B. Hermann, E.J. Fernandez, A. Unterhuber, H. Sattmann, A.F. Fercher, W. Drexler, P.M. Prieto, and P. Artal, “Adaptive-optics ultrahigh-resolution optical coherence tomography,” Opt. Lett. 29, 2142-2144 (2004).
6. J.Z. Liang, D.R. Williams, and D.T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” JOSA A 14, 2884-2892 (1997).
7. A. Roorda, F. Romero-Borja, W.J. Donnelly, H. Queener, T.J. Hebert, and M.C.W. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405-412 (2002). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-10-9-405
8. R.J. Zawadzki, B. Cense, Y. Zhang, S.S. Choi, D.T. Miller, and J.S. Werner, “Ultrahigh-resolution optical coherence tomography with monochromatic and chromatic aberration correction,” Opt. Express 16, 8126-8143 (2008). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-11-8126
9. R.J. Zawadzki, S.M. Jones, S.S. Olivier, M.T. Zhao, B.A. Bower, J.A. Izatt, S. Choi, S. Laut, and J.S. Werner, “Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging,” Opt. Express 13, 8532-8546 (2005). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-21-8532
10. Y. Zhang, B. Cense, J. Rha, R.S. Jonnal, W. Gao, R.J. Zawadzki, J.S. Werner, S. Jones, S. Olivier, and D.T. Miller, “High-speed volumetric imaging of cone photoreceptors with adaptive optics spectral-domain optical coherence tomography,” Opt. Express 14, 4380-4394 (2006). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-10-4380
11. J.A. Martin and A. Roorda, “Direct and noninvasive assessment of parafoveal capillary leukocyte velocity,” Ophthalmology 112, 2219-2224 (2005).
12. Z.Y. Zhong, B.L. Petrig, X.F. Qi, and S.A. Burns, “In vivo measurement of erythrocyte velocity and retinal blood flow using adaptive optics scanning laser ophthalmoscopy,” Opt. Express 16, 12746-12756 (2008). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-17-12746
13. K. Grieve and A. Roorda, “Intrinsic signals from human cone photoreceptors,” Invest. Ophthalmol. Vis. Sci. 49, 713-719 (2008).
14. R.S. Jonnal, J. Rha, Y. Zhang, B. Cense, W.H. Gao, and D.T. Miller, “In vivo functional imaging of human cone photoreceptors,” Opt. Express 15, 16141-16160 (2007).
15. J. Carroll, M. Neitz, H. Hofer, J. Neitz, and D.R. Williams, “Functional photoreceptor loss revealed with adaptive optics: An alternate cause of color blindness,” Proc. Nat. Acad. Sci. USA 101, 8461-8466 (2004).
16. H. Hofer, J. Carroll, J. Neitz, M. Neitz, and D.R. Williams, “Organization of the human trichromatic cone mosaic,” J. Neurosci. 25, 9669-9679 (2005).
17. A. Roorda and D.R. Williams, “The arrangement of the three cone classes in the living human eye,” Nature 397, 520-522 (1999).
18. A. Roorda and D.R. Williams, “Optical fiber properties of individual human cones,” J. Vis. 2, 404-412 (2002).
19. W. Makous, J. Carroll, J.I. Wolfing, J. Lin, N. Christie, and D.R. Williams, “Retinal microscotomas revealed with adaptive-optics microflashes,” Invest. Ophthalmol. Vis. Sci. 47, 4160-4167 (2006).
20. N.M. Putnam, H.J. Hofer, N. Doble, L. Chen, J. Carroll, and D.R. Williams, “The locus of fixation and the foveal cone mosaic,” J. Vision 5, 632-639 (2005).
21. D.C. Gray, W. Merigan, J.I. Wolfing, B.P. Gee, J. Porter, A. Dubra, T.H. Twietmeyer, K. Ahmad, R. Tumbar, F. Reinholz, and D.R. Williams, “In vivo fluorescence imaging of primate retinal ganglion cells and retinal pigment epithelial cells,” Opt. Express 14, 7144-7158 (2006). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-16-7144
22. J.I.W. Morgan, A. Dubra, R. Wolfe, W.H. Merigan, and D.R. Williams, “In vivo autofluorescence imaging of the human and macaque retinal pigment epithelial cell mosaic,” Invest. Ophthalmol. Vis. Sci. 50, 1350-1359 (2009).
23. D.C. Gray, R. Wolfe, B.P. Gee, D. Scoles, Y. Geng, B.D. Masella, A. Dubra, S. Luque, D.R. Williams, and W.H. Merigan, “In vivo imaging of the fine structure of rhodamine-labeled macaque retinal ganglion cells,” Invest. Ophthalmol. Vis. Sci. 49, 467-473 (2008).
24. M.K. Yoon, A. Roorda, Y. Zhang, C. Nakanishi, L.-J.C. Wong, Q. Zhang, L. Gillum, A. Green, and J.L. Duncan, “Adaptive optics scanning laser ophthalmoscopy images demonstrate abnormal cone structure in a family with the mitochondrial DNA T8993C mutation,” Invest. Ophthalmol. Vis. Sci., iovs.08-2029 (2008).
25. S.S. Choi, N. Doble, J.L. Hardy, S.M. Jones, J.L. Keltner, S.S. Olivier, and J.S. Werner, “In vivo imaging of the photoreceptor mosaic in retinal dystrophies and correlations with visual function,” Invest. Ophthalmol. Vis. Sci. 47, 2080-2092 (2006).
26. A. Roorda, Y.H. Zhang, and J.L. Duncan, “High-resolution in vivo imaging of the RPE mosaic in eyes with retinal disease,” Invest. Ophthalmol. Vis. Sci. 48, 2297-2303 (2007).
27. J.I. Wolfing, M. Chung, J. Carroll, A. Roorda, and D.R. Williams, “High-resolution retinal imaging of cone-rod dystrophy,” Ophthalmology 113, 1014-1019 (2006).
28. S.S. Choi, R.J. Zawadzki, J.L. Keltner, and J.S. Werner, “Changes in cellular structures revealed by ultra-high resolution retinal imaging in optic neuropathies,” Invest. Ophthalmol. Vis. Sci. 49, 2103-2119 (2008).
29. J.L. Duncan, Y.H. Zhang, J. Gandhi, C. Nakanishi, M. Othman, K.E.H. Branham, A. Swaroop, and A. Roorda, “High-resolution imaging with adaptive optics in patients with inherited retinal degeneration,” Invest. Ophthalmol. Vis. Sci. 48, 3283-3291 (2007).
30. D.X. Hammer, N.V. Iftimia, R.D. Ferguson, C.E. Bigelow, T.E. Ustun, A.M. Barnaby, and A.B. Fulton, “Foveal fine structure in retinopathy of prematurity: An adaptive optics Fourier domain optical coherence tomography study,” Invest. Ophthalmol. Vis. Sci. 49, 2061-2070 (2008).
31. P.M. Prieto, F. Vargas-Martin, S. Goelz, and P. Artal, “Analysis of the performance of the Hartmann-Shack sensor in the human eye,” JOSA A 17, 1388-1398 (2000).
32. N. Doble, G. Yoon, L. Chen, P. Bierden, B. Singer, S. Olivier, and D.R. Williams, “Use of a microelectromechanical mirror for adaptive optics in the human eye,” Opt. Lett. 27, 1537-1539 (2002).
33. E.J. Fernandez, L. Vabre, B. Hermann, A. Unterhuber, B. Povazay, and W. Drexler, “Adaptive optics with a magnetic deformable mirror: applications in the human eye,” Opt. Express 14, 8900-8917 (2006). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-20-8900
34. T. Shirai, “Liquid-crystal adaptive optics based on feedback interferometry for high-resolution retinal imaging,” Appl. Opt. 41, 4013-4023 (2002).
35. F. Vargas-Martin, P.M. Prieto, and P. Artal, “Correction of the aberrations in the human eye with a liquid-crystal spatial light modulator: limits to performance,” JOSA A 15, 2552-2562 (1998).
36. D.X. Hammer, R.D. Ferguson, T.E. Ustun, C.E. Bigelow, N.V. Iftimia, and R.H. Webb, “Line-scanning laser ophthalmoscope,” J. Biomed. Opt. 11 (2006).
37. D.X. Hammer, M. Mujat, N.V. Iftimia, and D. Ferguson, “Compact adaptive optics line scanning laser ophthalmoscope,” in Photonics West - BiOS: Ophthalmic Technologies XIX, F. Manns, P.G. Söderberg, and A. Ho, eds., 7163, San Jose, CA (2009).
38. D.X. Hammer, R.D. Ferguson, C.E. Bigelow, N.V. Iftimia, T.E. Ustun, and S.A. Burns, “Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging,” Opt. Express 14, 3354-3367 (2006).
39. C.A. Curcio and K.R. Sloan, “Packing geometry of human cone photoreceptors: Variation with eccentricity and evidence for local anisotropy,” Vis. Neurosci. 9, 169-180 (1992).
40. T.Y.P. Chui, H. Song, and S.A. Burns, “Individual variations in human cone photoreceptor packing density: Variations with refractive error,” Invest. Ophthalmol. Vis. Sci. 49, 4679-4687 (2008).
41. D.W. Arathorn, Q. Yang, C.R. Vogel, Y. Zhang, P. Tiruveedhula, and A. Roorda, “Retinally stabilized cone-targeted stimulus delivery,” Opt. Express 15, 13731-13744 (2007). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-21-13731
42. C.R. Vogel, D.W. Arathorn, A. Roorda, and A. Parker, “Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy,” Opt. Express 14, 487-497 (2006). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-2-487
43. D. Lowe, “Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image,” US Patent 6,711,293 (2004).
44. E.J. Botcherby, M.J. Booth, R. Juskaitis, and T. Wilson, “Real-time extended depth of field microscopy,” Opt. Express 16, 21843-21848 (2008). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-26-21843
45. D. Debarre, E.J. Botcherby, M.J. Booth, and T. Wilson, “Adaptive optics for structured illumination microscopy,” Opt. Express 16, 9290-9305 (2008). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-13-9290
46. D. Debarre, M.J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express 15, 8176-8190 (2007). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-13-8176
47. M. Booth, T. Wilson, H.B. Sun, T. Ota, and S. Kawata, “Methods for the characterization of deformable membrane mirrors,” Appl. Opt. 44, 5131-5139 (2005).
48. A.J. Wright, D. Burns, B.A. Patterson, S.P. Poland, G.J. Valentine, and J.M. Girkin, “Exploration of the optimisation algorithms used in the implementation of adaptive optics in confocal and multiphoton microscopy,” Microsc. Res. Tech. 67, 36-44 (2005).
49. L. Sherman, J.Y. Ye, O. Albert, and T.B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206, 65-71 (2002).
1. Introduction
Adaptive optics (AO) has recently achieved success in a range of applications in
ophthalmology. It has been integrated into flood illumination full-field retinal cameras,
confocal scanning laser ophthalmoscopes (SLO), and optical coherence tomography
(OCT) instruments for high-resolution reflectance imaging [1-10]. AO is used as a tool to
study the structural and functional aspects of vision, to image the elegant and complex retinal
circuitry, and to track the deterioration of such structures and processes during the progression of
disease. It has been used in direct measurements of the foveal avascular zone, retinal
capillary erythrocyte and leukocyte velocity, pulsatility, and other functional dynamics
[11, 12]. AO stimulation is also being used to measure and begin to understand the
characteristics and causes of intrinsic retinal signals [13, 14]. It has been used to study the
organization of the human trichromatic cone mosaic and defects in color vision compared
to standard psychophysical tests [15-17]. The waveguiding properties and aspects of cone
directionality (i.e., Stiles-Crawford effect) have also been confirmed with measurements
using adaptive optics instruments [18]. AO-corrected microperimetry to probe fixation
loci and retinal microscotomas has also been recently reported [19, 20]. The RPE cell
mosaic is being mapped in monkeys and humans and correlated to the cone mosaic [21,
22]. Ganglion cell function has been explored with exogenous dyes [21, 23]. In many of
the aforementioned studies, genetic defects are confirmed and explored directly in the
live eye [24]. Diseases such as rod-cone dystrophy are also being examined [25-27].
Systems are slowly migrating from the research lab into the clinic for use on patients with
a variety of diseases and conditions [28-30]. AO is also being applied to advanced
molecular and gene therapies, both in their development and as the primary method to
determine treatment efficacy at the cellular level.
AO systems use a wave-front sensor to sense ocular aberrations that arise primarily
from the cornea, crystalline lens, and tear film, and correct them using a wavefront
compensator in a closed-loop manner. A Hartmann-Shack wavefront sensor (HS-WS)
consisting of a lenslet array and CCD camera is typically used for rapid detection of
ocular aberrations [31], but other techniques such as interferometry can also be used.
Likewise, there are several methods to achieve wavefront correction including MEMS-
based deformable mirrors (DM) [32], magnetic actuator DMs [33], and liquid crystal
phase modulators [34, 35].
There are currently three barriers to widespread use of AO by clinicians and research
scientists. The first obstacle is the high cost of deformable mirrors (DM) and liquid
crystal phase modulators. The second barrier is system complexity. Currently, AO
systems can be built and operated only by researchers with extensive expertise in optics,
engineering, and instrumentation; AO systems have been designed and constructed for
the best imaging performance possible, to the exclusion of all other factors (size, cost,
complexity, ease-of-use, etc.). This has slowed the transition of AO from the research lab
to the clinic. Lastly, compelling applications of this powerful imaging technology
essential to future standard of eye care have yet to be integrated into clinical practice.
These barriers must be addressed together. The high DM cost will presumably
decrease with on-going technological innovation as more companies become involved in
system manufacturing and foster AO commercialization. With reports of significant
performance improvement of AO methods and systems for a growing number of
ophthalmic applications, demand for a commercial AO instrument will increase.
In order to reduce the complexity of AO, we have designed, developed, and tested a
compact adaptive optics line-scanning ophthalmoscope (AO-LSO). The goal is to build
an instrument that nearly anyone can operate in order to extend the perspective and
optical access that AO provides to as many research and clinical applications as possible.
The instrument is designed to fill the niche between SLO instruments that provide wide
field but limited resolution for routine clinical use and the complex, high-resolution, high-performance AO instruments that are currently confined to research laboratories. A
compact, simplified AO instrument like the AO-LSO that can be used by
ophthalmologists, optometrists, and vision scientists will facilitate the introduction of this
technology and the development of new techniques to detect and treat retinal diseases.
2. Methods
2.1 Optical Setup
The fundamental principle behind any confocal imaging system (e.g., scanning laser
ophthalmoscope, confocal microscope, etc.) is the rejection of light scattered from
adjacent or out-of-plane voxels by use of a pinhole in front of the detector conjugate to
the focal plane. This significantly improves image contrast and resolution over flood
illumination and detection schemes. The line scanning approach multiplexes the
illumination and detection in one dimension, while rejecting scattered light outside of the
confocal range gate defined by the pixel size in a similar fashion as a flying-spot SLO.
We have previously shown that the LSO retains a substantial portion of the confocality of a
flying spot SLO with several key advantages [36]. The line scanning approach reduces
the hardware complexity by elimination of a high-speed scanner and associated mirror or
lens relays.
Figure 1 shows the optical schematic of the AO-LSO. All optics fit easily on an
18×24 inch (45×60 cm) optical breadboard. The system uses two optical relays each
consisting of two spherical mirrors to transfer the pupil to a large-stroke deformable
mirror (DM, Mirao, Imagine Eyes Inc.) and to a single imaging galvanometer. The
galvanometer pivots about the pupil to scan the line beam, created by a cylindrical lens,
across a small patch of retina. A fifth spherical mirror is the final ocular imaging element.
Spherical mirrors are preferred because they produce no back-reflections and they reduce
overall system aberration. The angles between the spherical mirrors are kept small to
minimize astigmatism. The magnification of the relays is constrained in the design by the
DM diameter (~15 mm), the linear array detector (LAD, e2v Inc.) length (10.24 mm),
and the desired input beam diameter at the pupil. The input beam diameter was set to
7.5 mm to achieve high lateral resolution for a dilated pupil with minimization of ocular
aberrations. The magnification from the LAD to the retina is 6.4 and so the field size on
the retina is 1.6 mm (5.5 deg for an emmetropic eye) and the pixel size of the retinal
images is 1.56 μm. Zemax modeling of the optical system indicates that diffraction-limited performance within an Airy disc radius of ~2.2 μm (at 820 nm) can be achieved
in a paraxial eye with aberration correction using only a small fraction of the DM
dynamic range. Residual off-axis astigmatism is corrected by the DM.
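As a sanity check on these numbers, the quoted field size, pixel size, and Airy radius follow from simple geometry. The Python sketch below assumes a 1024-element, 10 μm pitch linear array and a 17 mm model-eye nodal distance; these are inferences consistent with the quoted LAD length and the eye length assumed later in the paper, not specifications taken from the instrument.

import math

lad_length_mm = 10.24      # linear array detector length quoted above
magnification = 6.4        # LAD-to-retina magnification quoted above
n_pixels = 1024            # assumed detector format (10.24 mm / 10 um pitch)
eye_length_mm = 17.0       # model-eye nodal distance assumed later in the paper
beam_mm = 7.5              # input beam diameter at the pupil
wavelength_um = 0.82       # imaging center wavelength

field_mm = lad_length_mm / magnification                       # ~1.6 mm on the retina
pixel_um = 1e3 * lad_length_mm / n_pixels / magnification      # ~1.56 um per pixel
field_deg = math.degrees(math.atan(field_mm / eye_length_mm))  # ~5.4 deg (paper: 5.5 deg)
airy_um = 1.22 * wavelength_um * eye_length_mm / beam_mm       # ~2.3 um (paper: ~2.2 um, from Zemax)
print(field_mm, pixel_um, field_deg, airy_um)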
The overall pathlength is fairly long (~2.6 m from illumination fiber to eye); however,
the number of optical elements is small, and with the high stroke DM the system is
amenable to reduced focal lengths and folding for increased compactness. Instrument
focusing (de-focus correction) is performed with the DM in addition to wavefront
correction.
A dual superluminescent diode (SLD) source with a bandwidth of ~65 nm and a center
wavelength of ~820 nm (Broadlighter, Superlum Inc.) is used for illumination to
eliminate coherent speckle. The illumination beam is spread in one axis with a cylindrical
lens and transferred via the relays to the DM and the galvanometer, where it is scanned in
the other axis to produce a raster on the retina. The raster light returned from the retina is
de-scanned with the galvanometer and the fixed line is passed back through the system
and focused on a LAD to produce retinal images.
Fig. 1. Optical schematic of the AO-LSO retinal imager.
A separate beacon (735 nm laser diode, LD) is used to sense aberrations. The beacon
is introduced into the setup with a pellicle beamsplitter (PBS) and the return light is
separated with a dichroic beamsplitter (D2). The pellicle is used to pick off only a small
amount (~8%) of the LD light available at the input and to maximize the return signal.
The dichroic beamsplitter is designed for efficient separation of imaging and beacon light
(>95% transmission at 735 nm and >90% reflection at 820 nm). The beacon is parfocal
with the imaging beam and positioned near the center of the line. An additional lens relay
is used to de-magnify the returning beacon light to 7.5 mm on the HS-WS camera (Uniq
Inc.). An iris blocks corneal reflections that would otherwise corrupt the HS-WS images.
The HS-WS and LAD are synchronized to the same frame-sync signal to ensure
aberration correction of the same retinal patch. Two flip-mounted mirrors are used for
HS-WS calibration using the imaging beam. With low system aberrations, this calibration
could be performed at other retinal conjugates and thus is not absolutely required for
operation.
A wax impression bite bar and forehead rest were used for head stabilization and a
system of translation stages for pupil centration. The subject’s fixation was coarsely
controlled with a computer-controlled 8×8 LED array fixation target. A cold mirror is
used to transmit imaging and beacon beams while reflecting the fixation target into the
eye.
2.2 Subjects
The system was initially tested in a limited number of human subjects without retinal
disease. Ten subjects (7 male, 3 female) with an average age of 47 years were imaged. Three
subjects were myopic, four were emmetropic (visual acuity 20/40 or better), three
were hyperopic, and six were presbyopic. The average refractive error
was approximately +1.5 D for the hyperopes and -5.0 D for the myopes. No subject’s
eyes were dilated during the investigation. However, most subjects had at least 4-5 mm
diameter pupils, which was sufficient for lateral and depth resolution improvement with
AO compensation. The room was darkened and a baffle was used to provide dark-adapted pupil dilation. Some subjects had 7-8 mm pupils when not looking directly at the
NIR beacon and illumination beams. The pupils always constricted when imaging the
fovea. Irregular pupil dilation accounted for some of the variability seen in resultant
imaging performance and also accounted for the relatively smaller differential in AO
correction (as measured by the change in RMS wavefront error) for images collected from the fovea.
The central macula was the image target in most individuals, although in some
subjects, the optic nerve head (ONH) was also imaged. Photoreceptors could be resolved
in all subjects except two, one of which had a mild cataract. The deformable mirror was
able to statically correct de-focus (prior to AO compensation) in all individuals. LSO and
HS-WS videos with durations of 5-30 seconds were acquired.
3. Results
3.1 AO Performance
Adaptive optics dynamically corrected the input wavefront for ocular aberrations and
improved the lateral resolution in LSO images collected from human subjects. Figure 2
shows AO-LSO images (and video) of the fovea with and without AO correction at
best focus. Photoreceptors can be resolved within ~1 deg of the fovea. Other AO
performance metrics (e.g., typical wavefront error maps) are discussed elsewhere [37].
In one measurement, the average RMS wavefront error (Strehl ratio) improved from 0.34 μm
(2.2×10⁻⁴) to 0.07 μm (0.70) with AO correction. All Zernike orders up to 7 showed
improvement, with the greatest improvement in the first four orders. The 10-90% rise time for the correction was 0.52 s, corresponding to a closed-loop bandwidth of ~2 Hz, slower
than that achieved previously for other deformable mirrors [38]. The overall closed loop
bandwidth is determined by factors other than the DM response, including the wavefront
sensor acquisition frame rate – 15 fps in our case – and the time it takes to calculate spot
locations, which generally takes 10-20 ms for our algorithm [38].
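For readers who want to reproduce the relationship between the quoted RMS errors and Strehl ratios, the extended Maréchal approximation evaluated at the 735 nm beacon wavelength (our assumption, not stated in the text) gives values very close to those reported:

import math

def strehl(rms_um, wavelength_um=0.735):
    # Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)
    return math.exp(-(2 * math.pi * rms_um / wavelength_um) ** 2)

print(strehl(0.34))   # ~2.1e-4, uncorrected (paper: 2.2e-4)
print(strehl(0.07))   # ~0.70, AO-corrected (paper: 0.70)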
Fig. 2. Comparison of unprocessed AO-LSO frames with (a) and without (b) adaptive optics. [Videos: 7.7 Mb /
13.6 Mb].
3.2 Imaging Performance
Photoreceptors could be resolved in 80 % of the individuals imaged in this small pilot
proof-of-principle human subject investigation. Photoreceptors could be resolved within
~0.3 mm (~1.0 deg.) of the fovea for some individuals. According to previously reported
histology [39], the cone size at this eccentricity is roughly equivalent to the size of 3
pixels for the AO-LSO instrument. Photoreceptors could be reliably counted (manually)
within ~0.5-1 mm (~1.5-3 deg.) for all subjects where they could be resolved. Thus,
photoreceptors could be counted across roughly 80% of the surface area of the macula
(central 5.5 mm). Figure 3 shows an AO-LSO image and video of the photoreceptor
mosaic for one subject.
The photoreceptor mosaic was examined more thoroughly in 4 of 10 subjects. Three
of these subjects were emmetropes and one had relatively large astigmatism (OS: 1.75D/065, OD: -2.5D/098). For these subjects, videos were acquired for generation of
montages across the entire macula (central 15 deg.). Additional analysis included manual
and automated cone counting and frequency analysis as a function of eccentricity.
Figure 4 shows examples of the photoreceptor mosaic imaged at various eccentricities for
these four subjects (cropped, magnified regions). Note that subject (a) appears to have a
higher density than subject (b) at roughly equivalent eccentricities. For good fixators
(Fig. 4a), very little distortion in individual photoreceptors is seen. For other subjects,
intraframe distortions from microsaccades caused the cones to appear elongated (Fig. 4c)
along the direction of motion.
Fig. 3. Photoreceptor mosaic near the limit of resolution with AO-LSO. (a) Single AO-LSO image, (b)
Previously acquired LSO image showing region imaged in (a). [AO-LSO Video: 5.2 Mb].
In several individuals, short video clips (~5 sec.) were collected while different
fixation target LEDs were illuminated to move fixation to nine offset positions with
respect to the imaging raster. These were used to map the photoreceptor mosaic across
the entire macula. The best individual images from the videos (i.e., those with the best
photoreceptor contrast and absent shear from saccades) were used to build up a montage
of a region sized ~3.5-4.5 mm (~12-15 deg.) depending upon overlap and the subject’s
ability to fixate. The images were aligned manually (Fig. 5), where individual
photoreceptors could be matched in the overlap region. Software is under development to
automatically generate the montage. Torsional eye motion was not corrected for and
sometimes prevented perfect overlap of the individual frames.
Generally, AO correction was left active (i.e., loop closed) while the fixation target
LED was changed, and with slight adjustment of the beam to maintain pupil centration,
the videos could be collected in rapid sequence. In the four subjects where this sequence
was performed, the imaging time to collect all the videos ranged from 8 to 18 minutes. It
should be noted that to map a similar region of the retina with a research AOSLO with a
field of view of ~1.5 deg. would require something on the order of 100 individual scans.
(Although no technical obstacle precludes the development of a flying spot AOSLO with
a wider field, this is generally not done.) Figure 5 shows a central macula montage from
one individual.
Fig. 4. Photoreceptors resolved in four individuals. Approximate eccentricity: (a) 0.76 mm, (b) 0.76 mm, (c)
0.58 mm, (d) 0.72 mm. Scale bar = 100 μm.
3.3 Photoreceptor Counting
Photoreceptor counting at eccentricities from 0.5-2.5 mm was performed on four
subjects. A region sized 50×50 pixels (78×78 μm) was chosen where photoreceptor
contrast was good and there was no (or very little) shadowing from overlying retinal
capillaries. For the AO-LSO, regions larger than this usually had vessel shadows that
provided inaccurate cone density measurements. A program was developed (as part of
our AO analysis software libraries) to perform manual cone counting. The manual cone
counting software allowed the user to click on the location of an individual cone, which
immediately updated the image overlay with a mark indicating that the cone had been
counted. When all the cones in a region were marked, the image with overlay and
cone coordinates was saved. We have also developed automated cone counting software
described below, similar to previously published algorithms [40-42]. Besides selecting
the region on which to perform cone counting, the software is fully automated.
Depending on the region size and cone density, the software takes a few seconds to
several minutes to complete in the current Matlab implementation.
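As an illustration of the manual-counting workflow described above (a toy Python/matplotlib sketch, not the authors' Matlab tool), each mouse click marks a cone on the displayed region, the overlay is updated immediately, and the clicked coordinates are returned for saving:

import matplotlib.pyplot as plt

def manual_count(region):
    # Display the region and record a cone location for every mouse click.
    clicks = []
    fig, ax = plt.subplots()
    ax.imshow(region, cmap='gray')

    def on_click(event):
        if event.inaxes is ax and event.xdata is not None:
            clicks.append((event.xdata, event.ydata))
            ax.plot(event.xdata, event.ydata, 'b+')   # overlay mark for the counted cone
            fig.canvas.draw_idle()

    fig.canvas.mpl_connect('button_press_event', on_click)
    plt.show()                                         # count = len(clicks) after the window closes
    return clicks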
Fig. 5. Montage of nine AO-LSO frames acquired across the central macula.
To robustly automate the cone counting procedure, several factors have to be
controlled or accounted for, including inherent differences in photoreceptor reflectivity
and size, variable brightness across the image field, degree of AO correction, and eye
motion induced distortions. Photoreceptor waveguiding, caused by light interference
between individual discs within the photoreceptors, naturally varies from cone to cone. In
addition, blood vessel shadows and other overlaying structures and layers (e.g., NFL)
may induce changes in the reflectivity and contrast of the photoreceptor layer across the
field. Also, the AO-LSO has slightly non-uniform illumination (from the cylindrical
lens), and the field size is such, relative to the aplanatic patch, that AO correction may be
optimal in some regions but not others.
Our algorithm first corrects for a non-uniform image background. A 2×2 median filter
is applied. The background is then obtained by morphologically opening the image using
a disk-like structuring element with a radius of 5 pixels. After subtracting the
background, contrast stretching is performed for image enhancement. All local maxima
in the image are then identified, and statistical analysis provides the centroid for the
identified maxima as an initial guess for the cones’ locations. Finally, cone locations are
filtered to prevent duplicates from closely-spaced centroids. Filtering is the most time
consuming operation and is accomplished as follows: a rough estimate for the cone
diameter is obtained by thresholding, morphologic transformations (to remove spur,
isolated, and H-connected pixels), and statistical analysis to determine the average cone
area across the entire region; a region 1.4 times the average cone diameter around each
centroid (cone) is examined to determine if multiple centroids exist within the region,
indicating potential overlapping cones; if multiple centroids exist, the program calculates
the mean intensity within the expanded region for each centroid and keeps only the cone
with the maximum mean intensity. The 1.4 factor was found empirically; future
development will use frequency analysis to adapt this factor to different cone
densities (found at different eccentricities, for example).
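A minimal sketch of these processing steps is shown below, written with scikit-image/SciPy rather than the authors' Matlab implementation; the filter sizes follow the text, while the average cone diameter is passed in as a parameter rather than estimated by thresholding and morphological analysis as described above.

import numpy as np
from scipy import ndimage
from skimage import exposure, feature, morphology

def count_cones(img, cone_diam_px=4.5, dup_factor=1.4):
    img = img.astype(float)
    img = ndimage.median_filter(img, size=2)                  # 2x2 median filter
    background = morphology.opening(img, morphology.disk(5))  # morphological opening, radius 5
    img = exposure.rescale_intensity(img - background, out_range=(0, 1))  # subtract, stretch contrast
    coords = feature.peak_local_max(img, min_distance=1)      # local maxima = candidate cones
    # Duplicate filtering: within a window ~1.4x the cone diameter, keep only the
    # candidate whose local mean intensity is highest.
    win = dup_factor * cone_diam_px / 2.0
    keep = []
    for r, c in coords:
        near = [(rr, cc) for rr, cc in coords
                if abs(rr - r) <= win and abs(cc - c) <= win]
        means = [img[max(rr - 1, 0):rr + 2, max(cc - 1, 0):cc + 2].mean() for rr, cc in near]
        if (r, c) == tuple(near[int(np.argmax(means))]):
            keep.append((r, c))
    return np.array(keep)   # cone coordinates; len(keep) is the cone count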
Figure 6 shows a comparison between manual and automated cone counting at an
eccentricity of 0.8 mm. In this case, a region sized 50×50 pixels was chosen. (When regions
sized 100×100 pixels were chosen so as not to overlap with retinal vessels, the automated
software produced cone densities similar to the 50×50 pixel regions.) The cone counts for
the 50×50 pixel region were 140 for the manual counts and 121 for the automated counts
(a 13% difference). The locations of the counted cones overlap for manual and
automated analysis for a large fraction of cones. Even when the locations don’t overlap
because of pixel-to-pixel intensity variation and the manner in which the automated
software chooses the centroid, the automatic software algorithm still counted a cone with
nearly 90% accuracy. For the AO-LSO at eccentricities <0.8 mm, the 2×2 median filter
begins to influence the cone counts because cone peaks and contrast can be totally
smoothed out by the median filter. Conversely, removing the median filter will cause the
counts for large eccentricities to become artificially inflated.
Fig. 6. Comparison of manual and automated cone counts. (a) Full frame showing location of 50×50 pixel
regions selected for cone counting. Cones are marked in blue. (b) Filtered 50×50 pixel region used in the
automated software. Cone locations are shown with solid red circles in the overlay. (c) Unfiltered 50×50 pixel
region used for manual counting. Cone locations are shown with solid red circles (automated counts) and open
blue circles (manual counts) in the overlay.
The results from both manual and automatic counting are shown in Fig. 7 in
comparison to previously reported histology [39]. The manual counts achieve fairly good
agreement, albeit with slightly larger densities, in comparison to histology at all
eccentricities. Subject 1 generally had a higher cone density at all eccentricities. This
subject is the same as that shown in Fig. 4(a) where a qualitatively high density was
noted. The automated cone counts tended to be closer to histology for larger eccentricities
and were inaccurate for the smallest eccentricities (<0.8 mm). At this eccentricity, where
the cone densities are greater than ~20,000/mm2, the diameter of the average cone for the
AO-LSO is ~4.5 pixels. Below this size, the automated software is clearly unable to
resolve individual photoreceptors. In general, the manual and automated cone counts
were reasonably well matched (<20% difference between them 82% of the time), with the
manual counts typically higher than the automated counts (77% of the time) and the
largest differences occurring at the smallest eccentricities. For eccentricities greater than
0.8 mm, therefore, the automated software appears to be quite robust and accurate at
counting cones.
There are several possible explanations for cone count differences from histology and
the fact that the manual cone counts were generally higher than automated counts. First,
there is naturally a spread in cone densities across the entire population, but as
importantly, there is a variation in the length/size of the eye which was not measured in
this study. The proper conversion factor from image angle to distance on the retina
depends on this value. Second, it is possible that the analyzer may incorrectly label some
areas as cones (false positives), and likewise, the software can fail to identify cones in
ambiguous regions that fail to trigger detection criteria (false negatives), each with
systematic tendencies to one or the other error type that are image- and eccentricity-dependent. Third, it is possible that when choosing a sampling location, the analyzer
tends to choose a non-representative region. We did not distinguish in our analysis among
different retinal sampling locations (i.e., nasal, temporal, inferior, superior), though it is
known that differences exist between cone densities along the meridional and equatorial
directions [40]. This could account for some of the apparent fluctuations in measured
cone densities as a function of eccentricity.
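To make the eye-length dependence explicit, a simple small-angle model (our simplification, with L the posterior nodal distance, taken elsewhere in this paper as 17 mm) gives the retinal distance d subtended by a visual angle θ as

\[
d \;\approx\; L\,\theta_{\mathrm{rad}} \;\approx\; 0.30\ \mathrm{mm/deg}\times\frac{L}{17\ \mathrm{mm}}\times\theta_{\mathrm{deg}},
\qquad
\rho_{\mathrm{inferred}} \;\propto\; \frac{1}{L^{2}},
\]

so an error of a few percent in the assumed eye length produces roughly twice that error in the inferred areal cone density.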
Fig. 7. Photoreceptor counts in comparison to previously reported histology [39]. AO-LSO images from four
subjects at various eccentricities were used for cone counting. Manual counts are shown with solid symbols and
automated counts are shown with open symbols.
3.4 Registration and Frame Averaging
Involuntary eye motion is always problematic for ophthalmic diagnostic techniques.
Motion effects relative to field size obviously increase as the field size decreases, and
there is frequently little overlap between adjacent AOSLO images with 1-2 deg fields
for poor fixators. Retinal tracking hardware has been previously integrated into
AOSLOs, successfully stabilizing images to within ~15 μm [38]. Real-time software
registration algorithms have also been demonstrated, stabilizing images to less than a
cone diameter when AO correction and patient fixation are adequate [41, 42]. However,
for simple, low-cost AO systems, some type of post-processing image registration
algorithm is required for optimal information extraction. Image registration and frame
averaging are important to increase the signal-to-noise ratio, to better resolve moving
particles (e.g., blood cells flowing in capillaries), when long duration scans are required
(e.g., autofluorescence of lipofuscin or other exogenous fluorophores), and when a
specific feature (e.g., photoreceptor, small capillary, etc.) is to be tracked over time. The
software described here is suited for both AOSLO and AO-LSO images (with some
limitations), does not require perfect AO correction, and accepts a lower than optimal
frame yield, discarding non-overlapping and wholly distorted frames.
In general, eye motion can cause intra-frame distortions: compressing, expanding, or
shearing pixels depending upon whether the movement is in the same, opposite, or
orthogonal direction as the image scanner. Eye motion within a frame will also cause
inter-frame translation between adjacent frames. Torsional eye motion can also cause one
frame to rotate with respect to another. Eye motion is too slow to cause appreciable distortion within the
integration time of the linear detector (for the AO-LSO, with line rates in the tens of kHz,
the integration time is usually <100 μs), and this fact can be used as an advantage in the
registration software. Thus all processing is done on strips 20 pixels in width, which
together take ~1.3 ms to acquire (for a frame rate of 15 fps). The registration software
only performs translation of these small image strips. No image de-warping (expansion,
compression, or shear to compensate for eye motion) is currently applied. If there are no
matched features because of excessive warping, the region is discarded. Image de-warping to increase yield is a task planned in future analysis software development.
There are three basic steps to the registration software algorithm. First, a key (or
anchor) frame is chosen. This is currently performed manually, although refinement of
the software will allow for this to be performed automatically using discrimination based
upon image brightness, contrast, and/or frequency content. Second, a feature-matching
routine is executed to find similar structural features in the key frame and all others.
Third, all frames are aligned to the key frame.
In the first step, the operator makes an initial pass on the images to select a key frame.
Images with blinks or excessive eye motion are also currently removed in the initial step.
Although these frames would be automatically removed in subsequent steps because of a
lack of matching features, until the software is fully automated, this step increases the
processing speed by reducing the number of frames to process. All images to be
processed are smoothed with a median filter to remove some image noise and contrast
stretched for image enhancement.
The second step identifies structural features that are identical in the two frames
(anchor frame and frame to be aligned). For this purpose we use the scale invariant
feature transform (SIFT) developed by Lowe [43]. SIFT provides the coordinates and a
descriptor (key) for each identified feature. The descriptor contains a series of parameters
related to the geometrical characteristics of the feature and the intensity gradient around
the feature, properties that are scale invariant in addition to rotation and translation
invariant. Depending on the image content, there can be as many as 1000 SIFT features
in a 500×500 pixel image. Figure 8 shows two frames with SIFT-matched features.
Once the features are identified in a pair of images, the next step is to match them. The
program calculates the angle (the arccosine of the dot product between the two
descriptors), rather than the Euclidian distance, between two descriptors. For small
angles, the ratio of angles is a close approximation – but more efficient to calculate – than
the ratio of Euclidean distances. We only keep matches in which the ratio of vector
angles from the nearest to second nearest neighbor is less than a pre-selected value (0.6).
A larger value would generate more matching points but add more false matches. There
are usually on the order of 100 matches for a pair of images. If there are fewer than ten
matches we consider the result unreliable, discard the frame from the set, and move on to
the next frame.
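A compact sketch of this descriptor-matching step is given below, assuming OpenCV's SIFT implementation (the original software's exact toolchain is not specified beyond the SIFT reference [43]); descriptors are unit-normalized so the arccosine of their dot product gives the angle, and the 0.6 nearest-to-second-nearest ratio from the text is applied.

import cv2
import numpy as np

def match_features(img_key, img_new, ratio=0.6):
    # img_key, img_new: 8-bit grayscale frames (key/anchor frame and frame to align)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_key, None)
    kp2, des2 = sift.detectAndCompute(img_new, None)
    # Normalize descriptors so the dot product is the cosine of the inter-descriptor angle.
    d1 = des1 / np.linalg.norm(des1, axis=1, keepdims=True)
    d2 = des2 / np.linalg.norm(des2, axis=1, keepdims=True)
    angles = np.arccos(np.clip(d1 @ d2.T, -1.0, 1.0))    # pairwise descriptor angles
    matches = []
    for i, row in enumerate(angles):
        j1, j2 = np.argsort(row)[:2]                      # nearest and second-nearest neighbor
        if row[j1] / row[j2] < ratio:                     # keep unambiguous matches only
            matches.append((kp1[i].pt, kp2[j1].pt))
    return matches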
Fig. 8. SIFT features (red dots) matched in two frames.
Once the coordinates of all matched points in the anchor frame and frame to be
aligned are found, a transformation between frames is calculated. For simple whole frame
translation, the average translation angle for all pairs of matched points is found and
converted into a single set of x-y displacement coordinates. More typically, however,
intra-frame motion will cause distortions where different parts of the image will translate
by different distances and in different directions. In this case, the matched points need to
be filtered. A histogram (bin size = 20 deg.) of the translation angles for all pairs of
matched points is calculated and analyzed. If the histogram has only one maximum, this
indicates simple translation and we keep only the matched points from the bin
corresponding to this maximum. Translation angles outside of this bin most likely
indicate false feature matching. If the histogram has two maxima, this indicates that two
parts of the frame moved in different directions. We only keep the matched points from
the bins corresponding to the two maxima, and consider angles outside of these bins
unreliable.
To perform a sophisticated transformation in the presence of complex eye motion, the
frame to be aligned is split into 20-pixel thick horizontal strips. When a strip contains one
or more matched points, it is displaced by the average translation angle of the matched
points within the strip. To find the transformation of strips without any matched points, x-y displacement coordinates are calculated by interpolation of the displacement values of
the first and the last valid strips (i.e., those with valid matched points). The final
registered image generally contains some discontinuities (seen as horizontal black stripes)
since individual strips are translated differently and full image de-warping is not
performed. To create a composite image of all frames without discontinuities, only the
non-zero image regions of the registered images are averaged. Both the average image
and each registered image are filtered with a contrast enhancement filter.
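The strip-wise alignment can be sketched as follows (a hypothetical helper operating on the matched point pairs from the previous step); for simplicity this version keeps only the dominant 20-degree histogram bin rather than up to two maxima, and it interpolates displacements for strips without matches as described in the text.

import numpy as np

def align_strips(frame, matches, strip_h=20, bin_deg=20):
    # matches: list of ((x_key, y_key), (x_new, y_new)) pairs, e.g. from match_features above.
    pts_key = np.array([m[0] for m in matches])
    pts_new = np.array([m[1] for m in matches])
    disp = pts_key - pts_new                                   # per-match (dx, dy) displacement
    ang = np.degrees(np.arctan2(disp[:, 1], disp[:, 0]))       # translation angles
    hist, edges = np.histogram(ang, bins=np.arange(-180, 181, bin_deg))
    dominant = np.argmax(hist)                                 # keep only the dominant direction here
    ok = (ang >= edges[dominant]) & (ang < edges[dominant + 1])
    pts_new, disp = pts_new[ok], disp[ok]

    h = frame.shape[0]
    n_strips = h // strip_h
    strip_disp = np.full((n_strips, 2), np.nan)
    for s in range(n_strips):
        rows = (pts_new[:, 1] >= s * strip_h) & (pts_new[:, 1] < (s + 1) * strip_h)
        if np.any(rows):
            strip_disp[s] = disp[rows].mean(axis=0)            # average shift of matches in this strip
    valid = ~np.isnan(strip_disp[:, 0])                        # assumes at least one strip has matches
    idx = np.arange(n_strips)
    for k in range(2):                                         # interpolate strips without matches
        strip_disp[:, k] = np.interp(idx, idx[valid], strip_disp[valid, k])

    out = np.zeros_like(frame)
    for s in range(n_strips):
        dx, dy = np.round(strip_disp[s]).astype(int)
        r0, r1 = s * strip_h, (s + 1) * strip_h
        t0, t1 = np.clip(r0 + dy, 0, h), np.clip(r1 + dy, 0, h)
        if t1 > t0:
            out[t0:t1] = np.roll(frame[t0 - dy:t1 - dy], dx, axis=1)   # translate the strip
    return out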
Figure 9 shows a single foveal image (a), a composite image of 89 frames with full
frame registration and averaging (b), a composite image with strip registration and
averaging (c), and a composite image without registration (d). Note the improvement in
vessel contrast (especially in the lower left portion of the image) in Fig. 9(c) compared to
Fig. 9(b).
Fig. 9. Comparison of registration techniques. (a) Single frame centered on the fovea of a subject. (b) 89-frame
average using full frame registration technique. (c) 89-frame average using the strip registration technique to
correct translation and intra-frame distortion from eye motion. (d) 89-frame average without registration.
3.5 Imaging Blood Flow and Retinal Structures
To examine flow in the finest retinal capillaries, the AO-LSO system focus was pulled
forward (with the DM) to the inner retinal layers where the vessels reside (inner nuclear
and inner plexiform layers). Flow could be visualized in the smallest capillaries that form
a ring around the foveal avascular zone (FAZ). Figure 10 shows the ring of capillaries
around the FAZ for three individuals and a video of flow in the subject shown in
Fig. 10(a). The images are averaged from 50-70 individual registered video frames using
the post-processing registration software described above.
Fig. 10. Capillary ring around FAZ for three individuals. Between 50 and 70 individual registered video frames
were averaged to produce each image. Scale bar = 200 μm. [Video of flow in (a): 3.0 Mb / 11.2 Mb].
On several subjects, the ONH was also examined. Figure 11 shows three images of the
ONH at different depths collected from three subjects. Individual nerve fiber bundles
entering the ONH can be visualized when the system focus is set to more anterior (i.e.,
inner) layers (Fig. 11a). As the focus is adjusted deeper, photoreceptors along the rim and
in the scleral crescent can be seen (Fig. 11b). When the focus is pushed deep into the
ONH, the pores and sieve-like structure of the lamina cribrosa can be seen (Fig. 11c).
4. Discussion
The AO-LSO system is not intended to replace current AOSLOs, but rather to
supplement them for those users who choose to sacrifice a small amount of performance for a
reduction in complexity. As such, it is worthwhile to discuss several limitations to the
AO-LSO approach, but also some advantages that it may have in comparison to a
conventional flying-spot AOSLO.
Fig. 11. AO-LSO images of the ONH at different depths. (a) Nerve fiber bundles and vessels at a shallow depth.
(b) Photoreceptors visualized at the rim. (c) Lamina cribrosa deep within the ONH. Images were collected from
three different subjects. Scale bar = 200 μm.
First, because the illumination optics involves a focused line rather than a focused
spot, there is a broad Gaussian variation in illumination intensity across the field for the
AO-LSO. In our implementation, we have chosen the cylindrical lens focal length such
that the 1/e power points are beyond the edges of the field. This effect is observed in the
images presented throughout the paper as a brightness change from top to bottom (e.g.
Figs. 3, 9, and 10) or from left to right in the images that were not rotated 90º to
correspond correctly to retinal orientation (e.g. Figs. 6, 8). However, the illumination is
fixed and scalable across the field and does not degrade resolution or impair clinically
relevant diagnosis in any way.
Second, unlike an AOSLO, the minimum resolution of the AO-LSO is constrained by
the detection hardware. The image resolution of the instrument is essentially determined
by the LAD pixel size (projected onto the retina) and the Airy disc of the input beam.
Because an AOSLO uses a single point detector in which the image information is
encoded into temporal variations, any arbitrary detection aperture size relative to the Airy
disc can be projected and detected, depending upon the relay optics and confocal pinhole
size. Therefore, fairly large pinholes can be used while efficiently collecting light and
over-sampling an arbitrarily small field to achieve the highest attainable resolution. In
contrast, it is not so simple to oversample the field for the AO-LSO, since the separation
of adjacent (square) pixels and the confocal aperture are the same, fixed by the LAD. The
Nyquist sampling limit sets the maximum resolvable spatial frequency, which corresponds
to a period of twice the pixel separation. This period should be a fraction of the Airy disc diameter to
produce ideal confocality and diffraction-limited resolution with AO-LSO. However,
such a small pixel relative to the Airy disc results in a hyperconfocal condition where
light collection efficiency is significantly diminished. Sampling below the Airy radius
results in rapid loss of signal strength without appreciable improvement in resolution.
Furthermore, cross-talk from adjacent illuminated line elements degrades contrast. Thus,
the pixel/field size cannot be made arbitrarily small for an AO-LSO. However, since a
focused line is projected onto the retina rather than a focused spot, more power can be
safely delivered to the eye (ANSI extended source factor) to partially compensate for the
collection losses in the hyperconfocal case. In the AO-LSO presented here, we are
slightly hyperconfocal: the Airy disc radius is ~2.2 μm (at 820 nm in a 6 mm pupil
paraxial eye) and the pixel width referenced to the retina is ~1.5 μm, assuming an eye
length of 17 mm.
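Putting numbers on this (our arithmetic, using the pixel and Airy figures quoted above):

\[
2p \approx 2 \times 1.56\ \mu\mathrm{m} \approx 3.1\ \mu\mathrm{m},
\qquad
\frac{p}{r_{\mathrm{Airy}}} \approx \frac{1.56}{2.2} \approx 0.7,
\]

so the per-pixel confocal aperture is roughly 70% of the Airy radius, which is the slightly hyperconfocal condition referred to above.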
One of the consequences of the larger field size for the AO-LSO is that a region larger
than the aplanatic patch is imaged. The aplanaticity of the optical system, the degree of
compensation for spherical aberration and (off-axis) coma, has been characterized. The
aplanatic region or “patch” was effectively set as part of the system design (with AO
mirror corrections through the third order) so that the total residual aberrations beyond
these low orders across the 5.5 deg field would not exceed the Airy disc diameter at the
perimeter, and so that the required low-order AO corrections (other than sphere) would
not exceed the stroke of the mirror in any part of the field of regard of the large final
mirror. Furthermore, there is no single astigmatism correction (the dominant aberration of
off-axis spherical mirrors) that will compensate fully across large field angles. All these
aberrations have the effect of limiting how short the focal lengths can be in the optical
train relative to the field size and, independent of the real eye, define an irreducible
“pseudo-aplanatic” patch size of the optical system over which high-quality diffraction-limited focus is maintained. Of course, the eye itself with a large pupil (though already
reasonably aplanatic by evolutionary design) will contribute irreducible aberrations and
further reduce the effective size of the aplanatic region.
In practice, a more severe limitation on the effective aplanaticity of the focal region
may arise from the gross tilt of the focal plane with respect to the retina across the field.
This is especially severe in myopes where the lengthened eye may tilt the retina away
from the ideal emmetropic posterior pole surface or with eccentric pupil alignment.
Fortunately, the AO-LSO can address this in some circumstances (i.e., nasal-temporal
tilt) by tilting the detector surface (with a rotary mount) in the plane of the line focus to
compensate – this is equivalent to moving a point detector axially [44] at the line rate to
adjust focus on the fly – a practical impossibility for large refractive errors with an SLO.
The vertical extent of the scan can be reduced to limit the superior-inferior tilt error of
each mosaic patch, which can then be compensated with additional patches.
Besides the advantages discussed above, the AO-LSO is also more amenable to very
high frame rates (>70 fps), using high speed linear detectors now available (e.g., Sprint,
Basler Vision Inc.). This advantage can be exploited for applications that require image
averaging and registration in the presence of eye motion, but also to study the dynamics
of retinal blood flow.
We have designed a second generation AO-LSO that further reduces the size and
complexity of the system. The overall footprint of the optical setup is reduced to 12×18
inches (30×45 cm), a size that can be mounted to a slit lamp stand. Also, we have begun
to explore wavefront sensorless algorithms [45-49] exploiting unique LSO features for
further hardware reduction. The wavefront sensorless approach to AO uses information
extracted from the image sensor to calculate the sample aberrations that are applied to the
correcting element, obviating the need for a separate wavefront sensor. These techniques
are often limited to small aberrations for AOSLOs because large aberrations will direct
light completely out of the pinhole. The line-field approach may therefore offer some advantages
in correction of larger aberrations. Also, line image-based algorithms can be implemented
in extremely fast loops near the line acquisition rates (tens of kHz), so that the DM sets
the limiting rate for AO correction.
5. Conclusion
We have successfully demonstrated the integration of adaptive optics into a line scanning
ophthalmoscope. The instrument was able to achieve AO correction with fidelity
comparable to a research AOSLO and to resolve photoreceptors, capillaries, and other
clinically significant retinal targets. While there are some optical and physical constraints
fundamental to the line scanning technique that define system capabilities (e.g., the field
cannot be made arbitrarily small), the AO-LSO has significant advantages over research
AOSLOs that make it well-suited to further development into a clinical prototype
package and eventual commercialization. Future system development, including use of
wavefront sensorless algorithms, may lead to a further reduction in hardware, footprint,
cost, and overall instrument complexity. This development may eventually make high
resolution AO instrumentation more accessible to vision researchers and
ophthalmologists for a wide variety of applications from disease diagnosis to the
evaluation of the effectiveness of new drug therapies.
Acknowledgements
This work was supported by NIH grant EY018509. We thank Steve Burns for assistance in the development of cone
counting and frame registration software.