High-Resolution Retinal Imaging
using Adaptive Optics in a
Confocal Laser Scanning
Ophthalmoscope
Stephen Gruppetta
Imperial College of Science, Technology and Medicine
University of London
Submitted in partial fulfilment of the requirements for the degree of
Doctor of Philosophy
Abstract
A high-resolution system for imaging the human retina in-vivo is presented.
The imaging setup is a confocal microscope, which comprises the optics of
the eye, coupled to an adaptive optics system which corrects for the dynamic
aberrations of the eye to maximise the lateral and depth resolution. The need
for high-resolution retinal imaging is discussed with particular reference to
disease diagnosis and treatment. The thesis highlights some of the problems
associated with retinal imaging and discusses confocal microscopy and adaptive optics as two techniques that can be combined to resolve some of these
issues. The detailed presentation of these two techniques is followed by the
description of the design process of the imaging system and the implementation of this design. A selection of the retinal images obtained is presented
followed by an analysis in terms of lateral resolution, depth resolution and image contrast. The performance of the adaptive optics system is also assessed
via the change in the point-spread function from the eye with adaptive correction. The thesis is concluded with a discussion of the current limitations of
adaptive optics and retinal imaging and their prospects in ophthalmology.
Acknowledgements
The work described in the remaining pages of this thesis is only a part of what
the whole PhD experience is about – the easiest part to write about. Trying to
describe this experience is a next-to-impossible task; so I will not attempt such
a feat. It suffices to say that all the hyperbolic clichés befitting such an occasion are probably all true: a life-changing experience; a multi-faceted learning
stage; a once-in-a-lifetime endeavour; the list goes on. Most importantly, this
was not a solitary quest and it is the people who were – voluntarily or involuntarily – part of it to whom I owe my gratitude. However, writing a litany
of names will only achieve the undesirable effect that nobody will read my
acknowledgements; some might just scan through to see whether their name
is listed or not. So I will avoid this while still thanking all those who have
contributed in so many different ways.
Firstly, I would like to thank my supervisors who have given me the opportunity to do this work and provided the means and the expertise for the
project. Gratitude also goes to the members of what was the Applied Optics
group, with whom I shared offices, labs, coffee breaks, fruitful (sometimes)
discussions, entertaining (most of the times) conversations and so much more.
The guys in the optics workshop deserve a big thank-you for always making
everything that had to be made and for their much appreciated wise judgement, and their fair share of entertainment as well.
My life outside the lab (for there was one as well!) revolved mostly around all
the people I got to know at Olave, Clayponds and WTH; they made travelling
around the world while sitting at a dinner table possible and from them I have
learnt more than I have from books and journals.
And I owe much more than a simple thank-you to those who have been closest
to me in these years, with whom I shared innumerable memorable moments,
many joys and also a few sorrows. They have been able to tolerate me and
comprehend my character, and I have to be the first to admit that that is not
always an easy task; they are good listeners even when silence is all there is
to listen to; they are good friends and true friendships last a lifetime – that is
how long I will treasure them. Thank you.
One more thank-you has to go to this amazing city which has proved to be a
wonderful host and an unexpected teacher.
My biggest gratitude and appreciation goes to the major contributors by far to
this experience, whose unabating support spans way more than merely these
last few years. To my brother: thank you for always being a step ahead and
leading the way, thank you for understanding me as nobody else does. To my
parents: expressing my gratitude in words is not possible. If I was in a position to tackle confidently all the challenges which I was faced with, then that
is thanks to them. Grazzi talli dejjem emmintu fl-gh̄ażliet li gh̄amilt, tal-fiduċja li
dejjem urejtu fija u fuq kollox ta’ l-imh̄abba li dejjem tajtu. Grazzi minn qalbi.
SG
London, February 2004
You can never see
what the dark looks like
— anon
Contents
Abstract
Acknowledgements
Contents
1 Introduction
    1.1 Why is Retinal Imaging Necessary?
        1.1.1 The Requirement for In-Vivo Imaging
    1.2 Thesis Overview

2 Imaging the Human Retina In-Vivo
    2.1 The Human Eye
    2.2 Issues with Imaging the Retina
    2.3 Aberrations in the Human Eye and their Measurement
        2.3.1 The Dynamic Nature of the Aberrations of the Eye
    2.4 Retinal Imaging Systems

3 Confocal Microscopy and Ophthalmoscopy
    3.1 Theoretical Background of Confocal Microscopy
        3.1.1 Image Formation in the Confocal Microscope
        3.1.2 Lateral and Depth Resolution
        3.1.3 Power Spectrum Analysis
    3.2 Confocal Ophthalmoscopy

4 Adaptive Optics
    4.1 Overview of Adaptive Optics
        4.1.1 Adaptive Optics in the Human Eye
    4.2 Components of an Adaptive Optics System
        4.2.1 Wavefront Sensing
            Shack-Hartmann Wavefront Sensor
            Determining the Optimal Centroiding Algorithm through Simulations
        4.2.2 Active Wavefront Correction
            Membrane Deformable Mirror
        4.2.3 Control System

5 Design and Implementation of the Laser Scanning Adaptive Ophthalmoscope
    5.1 Imaging Subsystem
        5.1.1 Illumination
        5.1.2 Beam Size and Scanning
        5.1.3 Image Formation
    5.2 Wavefront Sensing and Correction
        5.2.1 Wavefront Sensing Branch
        5.2.2 Wavefront Correction
        5.2.3 Controlling the AO System
    5.3 Operation of the LSAO

6 Analysis of Retinal Images and AO Correction
    6.1 AO-corrected Retinal Images
        6.1.1 Alignment and Averaging of Frame Sequences
        6.1.2 Presentation of Retinal Images
    6.2 Lateral Resolution Estimation from Power Spectra of the Images
    6.3 Contrast Analysis of Retinal Images
    6.4 Axial Sectioning through the Retina
    6.5 PSF Monitoring during AO Correction
    6.6 Concluding Remarks

7 Conclusion
    7.1 Current Issues with AO in Ophthalmology
        7.1.1 Effects of the Living Eye
        7.1.2 Technical Limitations
    7.2 The Road Ahead for Retinal Imaging

A Safety Considerations for the Light Levels Entering the Eye

B Lateral and Axial Intensity Distribution of an Imaging System
    B.1 Lateral Intensity Distribution
    B.2 Axial Intensity Distribution

C MATLAB Code for Image Alignment and Averaging

Bibliography
List of Figures
2.1 Schematic cross-section of the human eye.
2.2 Reduced model eye.
2.3 Representation of a cross-section of the retina.
2.4 Representation of the retinal layers at the fovea.
2.5 Representation of Scheiner's principle.
2.6 The direct observation of the retina using a direct ophthalmoscope.
2.7 Schematic representation of an OCT setup for retinal imaging.

3.1 Optical configurations for a conventional microscope, a scanning microscope and a confocal scanning microscope.
3.2 Depth discrimination in a confocal configuration.
3.3 Imaging of a point source by a simple lens.
3.4 Intensity distribution plots of the PSF in terms of the normalised optical variable v for the conventional microscope and the confocal microscope respectively.
3.5 Contour plots of |h(u, v)|^2 and |h(u, v)|^4.
3.6 A plot of the intensity distribution in a conventional and confocal microscope as a function of the axial variable u.
3.7 Variation of the integrated intensity in the v-plane with defocus u.
3.8 Plot of the CTF H in terms of δ = √(ξ² + η²) for conventional and confocal imaging.

4.1 Schematic representation of a simplified AO system for astronomical imaging.
4.2 Simplified schematic representation of an AO system for the eye.
4.3 Representation of the principle of operation of a Shack-Hartmann wavefront sensor.
4.4 Spot displacement in a single Shack-Hartmann lenslet from a tilted wavefront.
4.5 Approximation of an arbitrary wavefront by a wavefront consisting of plane wave tilted segments.
4.6 1000 Gaussian-distributed random points generated by a Monte Carlo simulation and the corresponding pixellated image (50×50 pixels), including noise.
4.7 Simulated Shack-Hartmann spots with noise and graphs showing the centroiding accuracy for the iterative process described in the text.
4.8 Defined search area for centroiding superposed on the array of image pixels.
4.9 Schematic representation of segmented deformable mirrors with piston segments and tip-tilt segments, and a continuous facesheet deformable mirror.
4.10 Layered structure of a bimorph deformable mirror and typical electrode structure in the piezo layer.
4.11 An illustration and a cross-sectional representation of a membrane deformable mirror.

5.1 Schematic representation of the Laser Scanning Adaptive Ophthalmoscope (LSAO).
5.2 The setup as mounted on the lab-bench.
5.3 Formation of a rectangular raster on the retina via two scanning mirrors.
5.4 Schematic representation of the SLO built as a precursor to the final system.
5.5 Screenshots showing the display of the Shack-Hartmann spots obtained from an eye with the half-waveplate rotated so as to give the strongest signal and the weakest signal at the Shack-Hartmann sensor.
5.6 Screenshots of the SH pattern without and with scanning of the beam.
5.7 Geometry of the 37 electrodes of the OKO TUDelft deformable mirror.
5.8 Wavefront correction with the deformable mirror.
5.9 Imaging branch of the system showing how a sliding mirror is used to switch between retinal imaging and double-pass PSF imaging.

6.1 A raw frame from the imaging system showing the optic disc head, an average of a sequence of 50 frames and an average of the same 50 frames after having gone through the alignment procedure.
6.2 Retinal images before and after AO correction.
6.3 Detail from one of the retinal images showing distinct features.
6.4 The power spectra of retinal images taken without and with AO correction.
6.5 Histograms of the pixel values from 0 to 255 of the images taken without and with AO correction.
6.6 Plot of the normalised integrated intensity obtained with a mirror at the object plane while scanning the detector axially through focus at the image plane.
6.7 Plot of the normalised integrated intensity obtained while imaging the retina as the detector is scanned axially through focus at the imaging plane.
6.8 A series of frames from the retina at different axial positions.
6.9 Double-pass PSFs before and after AO correction.
CHAPTER 1
Introduction
The role and importance of the human eye in our daily lives hardly needs to
be stressed. Most people rank vision above the other human senses, and the
eyes provide the first stage in the complex process that leads to our perception
of vision. At this very instant, light reflected from this page is being collected
by the reader’s eye and focused on the retina at the back of the eye; this light
triggers a series of chemical reactions responsible for the neural signals which
cascade through the layers of the retina and are finally channelled through to
the brain where further processing gives rise to the ’image’ of each successive
word on this page. This in turn sets off a myriad of other processes in the brain
which are beyond the scope of this thesis or the knowledge of its author.
This essential function of the eye has led to considerable interest in studying
and understanding the eye, partly since knowledge of how the eye functions
is the first step towards understanding the visual process but also because it
can give insight into the diseases, or other conditions which can affect the eye,
leading to a reduction or complete loss of vision.
A significant part of this eye research deals with, or benefits from, being able
to image the different structures of the eye; this includes the retina which is
the focus of the work being presented in this thesis. The eye, however, is an
organ designed to look out and not to be looked into, and we are constantly
reminded of this when we look at people’s eyes and see the only window into
the eye, the pupil, as a pitch black disc, since hardly any light is coming back
out of it; this imposes a number of constraints on imaging the internal structures of the eye. The complexity of the eye and its mechanisms – a complexity
which arises from the demanding requisites of the human visual process –
does not make imaging the internal structures any easier.
This chapter will outline some of the reasons why imaging the retina is of
importance to scientists, clinicians and ultimately (and most importantly) to
the general public. This will introduce the work discussed in this thesis: the
design and implementation of a retinal imaging system whose aim is to investigate a technique that has the potential of providing retinal images of higher
image quality than current state-of-the-art instrumentation. This introductory
chapter will be concluded by outlining the work presented in this thesis chapter by chapter.
1.1 Why is Retinal Imaging Necessary?
Before embarking on the task of discussing the issues related to retinal imaging in the living human eye and presenting the design and implementation of
an imaging system aimed at dealing with some of these issues, it is worth asking why it is necessary to image the retina at all. The eye is one of the key elements in the visual process; therefore understanding how the eye works, and
specifically the functions of the retina, is the first step towards understanding
vision. Being able to look at the living retina thus gives a better insight into
how each individual retinal component contributes to vision. The improvement in imaging systems permits the imaging of smaller retinal structures
and thinner layers, and therefore helps in determining what the functions of
individual types of cells are, in monitoring the blood flow in the retina even in
the narrowest capillaries, and in viewing other mechanisms which manifest
themselves on a small scale.
The knowledge obtained by looking at the retinal structures can be coupled
with other information obtained from psychophysical experiments for a better understanding of vision. Psychophysical experiments are ones in which a
series of spatial and temporal stimuli are applied to the eye and the subject’s
response to these stimuli is recorded, from which deductions about certain visual mechanisms can be made. Some techniques from which retinal imaging
benefits can also be applied to psychophysics. One such example is adaptive optics (AO), a technique that will be discussed in detail in this thesis, whose purpose is to overcome the effect of eye aberrations; the use of
AO enables the projection of stimuli with higher spatial frequencies onto the
retina when compared to the case in which no aberration correction is made,
thus widening the scope of psychophysical tests.
Whereas the understanding of the human visual system is a sufficient reason,
from the pure science perspective, for justifying research in retinal imaging,
there is a further reason for the ever-increasing interest in retinal imaging:
the study, diagnosis and treatment of eye diseases. The application of retinal
imaging to the medical field makes it a field of interest to a wider range of people: ophthalmologists, optometrists and other professionals in the ophthalmic
field; commercial companies which hope to have as many of the former as possible as their customers; national governments and other funding institutions
that aim to provide the tools required for an improved health care; and, not
least, for the general public who can benefit from early diagnosis and cures
for retinal diseases. The need for improved retinal imaging systems which
can provide higher image quality is felt in all stages of dealing with ocular
diseases: the study of the diseases themselves in terms of their causes and
their progression with time; the development of a treatment for each disease;
the diagnosis of the diseases at a stage when it is early enough to cure; and the
treatment itself, whether of a surgical nature or otherwise.
One specific example of an eye disease for which improved retinal imaging systems are essential is glaucoma. The reasons for singling out glaucoma of all ocular diseases are twofold: it affects a large number of people (about 1 in every 50 people over the age of 40 is affected by glaucoma in the UK, making it one of the major causes of blindness in the country and in most developed nations [2]) and it is easily controlled in most cases if it is diagnosed early
enough. The damage caused by glaucoma is due to an increase in internal ocular pressure that affects the optic disc, hindering the supply of blood to the periphery of the retina and progressing towards the centre with time. For this reason the initial stages of the disease affect only peripheral vision, which makes it very hard for the person suffering from it to notice its effects until the damage has reached the central regions of the retina responsible for central vision, by which time it can already be too late. The presence of
glaucoma can, however, be detected by measuring the thickness of the optic disc region; this requires high depth-resolution imaging, which successive generations of retinal imaging systems are improving. The availability
of such retinal imaging systems capable of diagnosing glaucoma (or at least
of pinpointing those eyes which are at risk) could replace the lengthier procedures in use today, making wider screening of the population more feasible.
Another beneficiary of improved retinal imaging systems is the understanding and diagnosis of age-related macular degeneration (AMD). This is yet another of the major causes of blindness in many countries [2]. It leads to the
degeneration of photoreceptors in the central region of the retina, the macula,
resulting in loss of central vision. Because the disease progresses through the degeneration of individual cells, improved lateral resolution in imaging systems
can help study the initial stages of AMD and also improve the prospects for
its early diagnosis.
These two examples highlight the importance that research in retinal imaging
systems can have in the general ocular health care of a large number of people. The high incidence of these and other ocular diseases is one of the major
driving forces in speeding up work in this field of study.
1.1.1
The Requirement for In-Vivo Imaging
The anatomy of the retina is well-documented due to the high-resolution imaging available from standard microscopy techniques on excised sections of retinas. Ex-vivo imaging of diseased retinas also provides useful information on
the diseases themselves through their effect on the retina. However, imaging excised tissue does not provide any direct evidence of the living functions
of the retina. Furthermore, the death of the retina in itself causes structural
changes to its various components, brought about by the removal of blood
supply, neural stimuli and all the other interdependent mechanisms in the living organ. Added to these changes are those alterations caused by the chemicals used to preserve the dead retina and in some cases to dye it in order to
highlight specific features; all this means that the images obtained of the retina
ex-vivo are only an indication of what the organ looked like in the living eye
and cannot serve as a replacement for in-vivo imaging.
The case for in-vivo retinal imaging for the study, diagnosis and treatment of
diseases is even more evident. Excised retinas that have suffered from an ocular disease are generally only available after the disease has progressed over a long period of time, by which point the damage to the retina is already severe. It is therefore not possible to monitor the progression of a disease through ex-vivo images, and they play no role at all in diagnosis and treatment.
The need for high-specification in-vivo retinal imaging systems is therefore
paramount for ophthalmic healthcare.
The conditions imposed on the imaging system by the need to image living
retinas rather than excised ones are not negligible. Firstly, the imaging technique must be a non-invasive one – it must access the retina through the optics
of the eye without causing any damage to the retina or to any other part of the
eye or human body. This leads to restrictions on the wavelength and intensity
of light used for illuminating the retina. The imaging system also has to utilise
the optics of the eye as part of its optical setup; the issues concerning the latter
restriction will be discussed in considerable detail in this thesis.
There are also a number of practical requirements that a retinal imaging system must satisfy in many of its applications. In particular, an instrument which is to be used routinely under clinical conditions must be sufficiently compact and portable; it must be relatively easy for the operator to use and comfortable for the patient; it should perform to a high standard on a fairly wide range of patients; and its cost has to be accessible to the institutions that are likely to need it. All these requirements are not necessarily essential
for lab-based prototypes aiming at primary research, but they must be taken
into account if any instrument is to progress to a clinical stage.
1.2 Thesis Overview
Since the difficulties encountered when imaging the human retina in-vivo are
mostly due to the anatomy of the eye and its complex mechanisms, an understanding of the eye itself is an essential starting point for developing any imaging system. Chapter 2 describes the general anatomy of the eye and highlights the major issues that have to be dealt with when imaging the retina. The
effects of the eye’s aberrations on the light going through it are discussed. An
overview of retinal imaging systems, past and present, concludes the chapter.
Chapter 3 discusses image formation in confocal microscopy, which is the
imaging technique employed in the work described. Confocal imaging is a
branch of microscopy particularly suited to non-invasive imaging of 3D structures owing to its high depth resolution compared to conventional microscopy, which makes optical axial sectioning possible. In addition to this
property, confocal microscopy also offers higher lateral resolution and image
contrast than its conventional counterpart. The application of confocal microscopy to ophthalmology is also discussed at the end of the chapter.
The loss of lateral and depth resolution incurred by the dynamic aberrations
of the eye when imaging the retina is partially recovered in the imaging system being presented by using adaptive optics to correct for these aberrations.
Chapter 4 discusses how adaptive optics dynamically measures and corrects
for the aberrations introduced by the system. Its historical development in
astronomy, and its adaptation to ophthalmology, is presented. The major subsystems of an adaptive optics setup are described.
The combination of the techniques described in the preceding two chapters
into a single imaging system is described in chapter 5. The design is discussed in terms of the choice of parameters for the various key aspects of the
system and the implementation of this design is presented. The emphasis of
this chapter is on the final imaging system built, though the whole process
was actually a series of incremental iterations through a number of systems,
each with added functionality. The whole design and building process took
the best part of the three and a half years that the project spanned and
dealt with the optics, mechanics, electronics, computational modelling, software development and testing of the system (not to mention the fair share of
equipment repair, computer-crash fixing, supplier-chasing and other menial
tasks.)
A sample of retinal images obtained from the system is presented in chapter 6
showing the effect of adaptive optics correction, including the quantification
of the most significant metrics of image quality. The performance of the system is also discussed in terms of the point-spread function of the system.
The final chapter draws some conclusions about adaptive optics in ophthalmology and about retinal imaging in general, highlighting some of the issues
which still need to be tackled in future work and charting the near future of
retinal imaging systems.
CHAPTER 2
Imaging the Human Retina In-Vivo
Imaging the living human retina is not an easy job. The issues that have to
be tackled when undertaking such a task vary from having uncooperative,
impatient subjects to safety considerations regarding the amount of light that
can be used to illuminate the retina. The optics we are burdened with do
not make life easier, since to image the retina we need to make use of the
optics of the eye – an instrument which, to paraphrase Helmholtz [48], should
be returned to the manufacturer due to its poor quality had it been bought.
For this reason, in order to attempt to image the retina it is essential to have an
understanding not only of the anatomy and physiology of the retina but also
of the eye as a whole.
2.1 The Human Eye
The human eye, as an optical system, can be described rather simply: the refractive surfaces at the front of the eye form an image of an object situated in
front of it on an image plane that coincides with the retina at the back of the
eye, which is the photosensitive component. The human eye as a biological
organ, however, is rather complex since it has to provide various mechanisms
to ensure that the image is always focused on the retina, to control the amount of light entering the eye, to prevent photoreceptors from bleaching due to continuous exposure to light, and other functions essential for providing the perceived image we see when we open our eyes. Figure 2.1 shows a schematic cross-section of the human eye.

Figure 2.1: Schematic cross-section of the human eye (adapted from Kaufman and Alm [56]). [Figure: labelled cross-section showing the cornea, iris, lens, aqueous and vitreous humours, sclera, choroid, retina, fovea and optic nerve, with the visual and geometrical axes.]
The principal refracting surface of the eye is the cornea. The interface between
air and the cornea provides the highest refractive index change in the eye (the
refractive index of the cornea is n = 1.376 in the middle of the visible range of wavelengths while that of air is approximately n = 1.000). Values quoted for the refractive indices and optical powers of the transparent surfaces, and for other dimensions of the eye, are typical values from Davson [27]. The cornea has
a refractive power of +43 diopters (D) out of a total of +58 D for the whole
eye, the crystalline lens providing the remaining refractive power. The lens is
a flexible, layered structure with a non-homogeneous refractive index which
ranges from n = 1.406 at the core to n = 1.386 at the cortex. The regions between the cornea and the lens, and behind the lens are filled with the aqueous
humour and vitreous humour respectively – liquids consisting mostly of water and nutrients and having a refractive index of n = 1.336.
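These quoted powers can be sanity-checked with the single-surface refraction formula P = (n2 − n1)/R. The short sketch below is my own illustration and is not part of the thesis; the corneal surface radii (7.7 mm anterior, 6.8 mm posterior, Gullstrand-model values) are assumed here, not taken from the text.

```python
# Illustrative sketch (not from the thesis): refractive power of a
# spherical interface, P = (n2 - n1) / R, applied to the two corneal
# surfaces as a rough check on the quoted +43 D corneal power.

def surface_power(n1, n2, radius_m):
    """Refractive power in diopters of a single spherical interface."""
    return (n2 - n1) / radius_m

# Air-to-cornea interface; 7.7 mm anterior radius is an assumed
# Gullstrand-model value, not a figure from the text.
p_anterior = surface_power(1.000, 1.376, 7.7e-3)   # roughly +49 D

# Cornea-to-aqueous interface (assumed radius 6.8 mm) has negative
# power because the refractive index drops from 1.376 to 1.336.
p_posterior = surface_power(1.376, 1.336, 6.8e-3)  # roughly -6 D

# The net power lands close to the +43 D quoted in the text.
p_cornea = p_anterior + p_posterior
print(round(p_anterior, 1), round(p_posterior, 1), round(p_cornea, 1))
```

The thin-lens-like addition of the two surface powers neglects the corneal thickness, which is why the result is only approximate.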
The eye as an optical system is best understood by means of schematic model
eyes, the classical example being the Gullstrand model eye [27]. For most
purposes, however, it suffices to consider a reduced schematic eye such as the
one shown in figure 2.2. Thus, the axial length of the adult eye from the front
surface of the cornea to the retina is 24.4 mm, with the principal plane situated
1.5 mm behind the front surface of the cornea. For an emmetropic, relaxed eye
the optical configuration described above focuses a collimated beam of light
onto the fovea, which is the central section of the retina. When the eye focuses
on near objects, contraction of internal eye muscles brings about a change in
the shape of the lens, increasing the curvature of its surfaces and hence its
refractive power. This process is referred to as accommodation.
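These quantities fit together in a simple paraxial calculation. The sketch below uses only the typical values quoted above (total power +58 D, refractive index n = 1.336 for the ocular media, axial length 24.4 mm, principal plane 1.5 mm behind the cornea) to check that the image-space focal length lands close to the retina, and to estimate the retinal distance subtended by one degree of visual angle. It is an illustrative back-of-the-envelope check, not part of the imaging system described in this thesis.

```python
import math

# Typical values for the reduced schematic eye (Davson [27])
P_eye = 58.0            # total refractive power of the eye, in diopters (1/m)
n_vitreous = 1.336      # refractive index of the ocular media
axial_length_mm = 24.4
principal_plane_mm = 1.5

# Image-space focal length for an object in air, image in medium n': f' = n'/P
f_image_mm = 1000.0 * n_vitreous / P_eye
print(f"image-space focal length: {f_image_mm:.1f} mm")   # ~23.0 mm

# Compare with the principal-plane-to-retina distance from the model
print(f"principal plane to retina: {axial_length_mm - principal_plane_mm:.1f} mm")  # 22.9 mm

# Retinal extent of 1 degree of visual angle, measured from the nodal point.
# For the reduced eye the nodal-point-to-retina distance equals the front
# focal length f = 1/P (object space is air).
f_front_mm = 1000.0 / P_eye
mm_per_degree = f_front_mm * math.tan(math.radians(1.0))
print(f"retinal extent of 1 degree: {mm_per_degree:.2f} mm")  # ~0.30 mm
```

The two distances agree to within the rounding of the model's typical values, and the ~0.3 mm-per-degree figure is the usual conversion used when relating visual angles to distances on the retina.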
The aperture stop in the eye is the iris, which is a ring-shaped muscular structure situated right in front of the crystalline lens. The iris can expand and
contract to give a pupil size ranging from 2 mm to 8 mm. The principal function of the iris is to regulate the amount of light entering the eye depending on
the ambient light levels, but as discussed later in this chapter, the pupil size
also has an effect on the aberrations introduced by the optics of the eye.
Figure 2.2: Reduced model eye showing the principal point H and the nodal point N.
The image is formed on the retina. However, the retina is not a simple two-dimensional membrane but a complex three-dimensional structure. A
schematic cross-section of the retina is illustrated in figure 2.3, showing the
various layers of this structure. The retina is separated from the vitreous by
the inner limiting membrane. The nerve fibre layer (NFL) lies immediately
after the limiting membrane. All the nerve fibres are channelled towards the
optic disc where they leave the eye through the optic nerve. The nerve fibres
are connected to the photoreceptors via a series of complex links with ganglion, amacrine, bipolar and horizontal cells, which make up the next layers
in the retina before we get to the photoreceptors. The photoreceptor cells are
also highly structured. At one end they have synaptic bodies through which
the effect of light on the photoreceptor is transmitted on to the bipolar cells.
Light is collected by the other end which is located furthest away from the direction of the incoming light. These light-sensitive ends of the photoreceptors
lie against the retinal pigment epithelium (RPE).

Figure 2.3: Representation of the cross-section of the retina showing the principal layers (adapted from Kaufman and Alm [56].)
The human retina has two distinct types of photoreceptors: rods and cones.
Rods are sensitive to lower light intensities than cones are, but have a
monochromatic response. They are responsible for vision in dark conditions
which explains why the eye cannot distinguish colours in the dark. Cones
require more light than rods to trigger a signal, and therefore contribute mainly towards vision in bright light conditions. Three types of
cones, having peak sensitivities in the short-, medium- and long-wavelength
regions of the visible spectrum respectively, give rise to colour discrimination
in vision. The central part of the fovea contains only cones which are smaller
(around 2 µm in diameter) and more densely packed in this region than elsewhere in the retina, giving a higher visual acuity in this region. The layers
at the front of the retina are thinner or completely absent at the fovea, thus
increasing the amount of light reaching the photoreceptors. Figure 2.4 shows
schematically the distribution of retinal layers at the fovea.
2.2 ISSUES WITH IMAGING THE RETINA
In an optical imaging system, the object and image planes can be swapped
so that any structure in the image plane of the original system can be imaged
onto what was the object plane of the original system. In the eye this principle
can be applied to obtain an image of the retina. However, the eye as an optical
system and as a biological organ is optimised for vision and this brings about
some problems which need to be overcome before we can image the retina.
The first such issue which needs to be tackled in any retinal imaging system is
that of adequately illuminating the retina.

Figure 2.4: Representation of the retinal layers at the fovea showing how the outer layers are pushed to the side at the central part of the fovea, the foveola. RPE - retinal pigment epithelium, PC - photoreceptor cells, ONL - outer nuclear layer (containing nuclei of photoreceptor cells), OPL - outer plexiform layer (containing nerve fibres from photoreceptor cells), INL - inner nuclear layer (containing nuclei of horizontal, bipolar and amacrine cells), IPL - inner plexiform layer (containing nerve fibres from cells in INL) and GL - ganglion layer (adapted from Davson [27].)

The pupil of the eye is the sole window we have to the retina. Its area and its distance from the retina restrict the
proportion of light coming back out from the eye after backscattering from the
retina. The ratio of outgoing to incoming light intensities can be maximised
by using a dilated pupil, but this will introduce its own problems as discussed
later in this section. Furthermore, apart from the geometrical considerations,
we are trying to image a structure whose function is to collect light, and hence
only a small proportion of the incident light is scattered back from the retina.
This very low ratio of outgoing to incoming light intensities cannot be compensated for by increasing the incident light when dealing with the living human
eye because the damage thresholds of light for the retina and other ocular
structures must be taken into account. For this reason, the incident power for
a retinal imaging system is strictly limited by the appropriate safety standards
to ensure safe operation of the system. Great care has been taken in all the
work associated with this thesis to make sure that the light levels used are
well under those stipulated as the maximum permissible levels by the British
standards [3] (a discussion of light level safety is given in appendix A).
Since the retina is a layered 3D structure, any images we obtain from a retinal
imaging system will be strongly dependent on the relative reflectivity of the
various layers within the retina and the depth of focus of the imaging system.
In particular it is both clinically and scientifically useful to be able to image
the distinct layers separately. One example is the need to monitor accurately the variation in thickness of the nerve fibre layer in
the optic disc region for the early diagnosis of glaucoma.
Another substantial difficulty with imaging the living human retina is the blur
introduced by the optics of the eye. The diameter of the iris, which acts as the
limiting aperture of the eye, determines the extent of diffraction which occurs.
Under bright light conditions, when the pupil can be as small as 2 mm in diameter, diffraction is significant in widening the point spread function (PSF)
of the eye. The effect of diffraction is reduced under dark conditions or
when the pupil is artificially dilated to a larger diameter, up to 8 mm. The effect of diffraction is, however, combined with that of aberrations arising from
the poor optical quality of the refractive elements of the eye. Chromatic aberrations need to be taken into account when imaging the retina polychromatically; however, the work presented here deals with a monochromatic imaging
system; hence chromatic aberrations will not be considered further. The major monochromatic aberrations in most eyes tend to be defocus, introduced
in myopic and hypermetropic eyes, and astigmatism, although these can be
easily measured and corrected for by using spectacles, contact lenses or trial
lenses in the imaging system. The eye’s optics, however, introduce aberrations that go beyond these low-order ones, particularly for larger pupil sizes.
These higher-order aberrations have a negligible effect on the perception of vision, but they do adversely affect the resolution attainable when imaging the
retina. Thus, to increase the lateral and depth resolution of retinal imaging systems, these aberrations need to be measured and corrected for. Furthermore, these aberrations change on relatively short timescales (with components up to at least 30 Hz [29]), and hence dynamic monitoring and correction are required to maintain the improved resolution over the entire imaging period.
The correction of the higher-order aberrations of the eye is central to the work
presented here and their study is discussed in more detail in the following
section.
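The interplay between pupil size, diffraction and aberrations described above can be made concrete with a rough estimate. For the reduced eye, the image-side numerical aperture is approximately D·P/2 for pupil diameter D and eye power P, so the diffraction-limited Airy-spot radius on the retina is about 1.22·λ/(D·P). The sketch below evaluates this for the 2 mm and 8 mm pupil diameters mentioned above; the 550 nm wavelength is a hypothetical mid-visible value, not taken from this thesis, and the larger-pupil figure is only reached if the eye's aberrations are corrected.

```python
# Diffraction-limited Airy-spot radius on the retina: r = 1.22*lambda/(D*P),
# from the image-side numerical aperture NA = D*P/2 of the reduced eye.
wavelength = 550e-9   # m; hypothetical mid-visible wavelength
P_eye = 58.0          # total power of the eye, diopters

for pupil_mm in (2.0, 8.0):
    D = pupil_mm * 1e-3
    airy_radius_um = 1.22 * wavelength / (D * P_eye) * 1e6
    print(f"{pupil_mm:.0f} mm pupil: Airy radius ~ {airy_radius_um:.1f} um")
# With a 2 mm pupil the diffraction-limited spot (~5.8 um) is much larger than
# a foveal cone (~2 um), so diffraction dominates; at 8 mm it shrinks to
# ~1.4 um, but only if the aberrations of the dilated pupil are corrected.
```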
2.3 ABERRATIONS IN THE HUMAN EYE AND THEIR MEASUREMENT
The need for optical corrections to the eye was recognised at least as early as the 13th century, when the first spectacle corrections were made [93]. In the 17th century, Scheiner developed a technique to improve the accuracy of
spectacle correction [27]: by placing a mask right in front of the eye, with small apertures that admit only the axial ray and one off-axis ray into the pupil of the eye, a subject with an ametropic eye sees two images of a point source placed in the far field of the eye. Lenses of different powers are then placed in front of the eye until the subject perceives the two images as one. The power of the lens for which this occurs is the correction required for that eye. Figure 2.5 illustrates this principle schematically.
Figure 2.5: Representation of Scheiner’s principle.
The same technique was also used later on to determine other aberrations of
the eye. Young was the first to detect spherical aberration in the eye, in 1801,
using the Scheiner principle [121]. In the same work he also showed that the
eye has astigmatism. The presence of spherical aberration in the human eye
was further corroborated using variations of the same technique by Volkmann
in 1846 [106], Jackson in 1885 [41], and by Ames and Proctor in 1921 [4] who
quantified the spherical aberration by varying the angle of the off-axis beam
with respect to the axial beam until the images of the point source were perceived by the subject to coincide. These results also showed that, across the population studied, there was a bias towards positive spherical aberration in the human eye.
Pi [41] and Stine [101] studied the wave aberrations in the four quadrants of
the eye’s pupil separately, where they not only confirmed the presence of significant positive spherical aberration in the subjects they tested but also found that the aberrations varied between the four quadrants of the pupil: the eye exhibits asymmetric aberrations. Further work measuring the spherical aberration of the eye and this lack of symmetry was presented by Otero and
Duran [78] who used circular pupils of varying diameters, Koomen et al. [60]
who used annular pupils, Van Bahr [107] using Scheiner’s principle along two
meridians rather than one, and Ivanoff [55] and Webb et al. [117] who used
variations of the technique used by Ames and Proctor.
This idea of subjective ray tracing was developed further in the 1990s into
an objective technique. Independent work by Navarro et al. [75, 76] and
Molebny [73] developed a laser ray tracing technique in which a narrow beam
is delivered to the eye successively at different pupil locations and the aberrations reconstructed from the image produced by the backscattered light on a
CCD camera.
In parallel with the ray tracing approach outlined above, other techniques
were also being developed to measure the eye’s aberrations. The Foucault
knife-edge test was used successfully to measure the spherical aberration of
the eye by Berny and Slansky in 1969 [12, 13]. An image of a point source
was formed on the retina and that image was in turn used as the object for the
knife-edge test. The same technique was used a few years later by El Hage
and Berny [42] together with corneal topography methods to separate the effect of the aberrations due to the cornea and the crystalline lens in the eye.
This showed that the aberrations due solely to the cornea or to the lens are
larger than those for the whole eye, indicating that the cornea and lens have
a compensatory role where the aberrations of one cancel out or reduce the
aberrations of the other. This was shown to be particularly true for spherical
aberration.
In 1898, Tscherning [102] used a subjective technique to measure the eye’s
aberrations in which he placed a rectangular grid in front of the pupil of the
eye and projected the image of the grid onto the retina using a spherical lens.
The aberrations were then obtained from the subject’s drawing of the grid as they perceived it. The accuracy of this technique was improved by Howland
and Howland in 1977 [52] by placing the grid between two cylindrical lenses
with equal power placed such that their cylindrical axes are orthogonal.
Howland and Howland chose to represent the wave aberration in terms of
a two-dimensional Taylor polynomial up to fourth order. This was the first
work in which the need was felt to represent the aberrations not just by using
defocus, astigmatism, coma and spherical aberrations, as all previous work
had done, but by using a wider system which permits the representation of
higher order aberrations which are present in the eye and better represents
the asymmetry in the eye’s aberrations. The authors also highlighted the usefulness of the Zernike basis set for representing ocular aberrations, owing to the orthonormality of this set over circular pupils. The Zernike polynomials have since become the de facto standard for representing ocular aberrations.
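As an illustration of this representation, the sketch below evaluates a wavefront as a coefficient-weighted sum of a few low-order Zernike modes over a unit pupil. The Cartesian polynomial forms used are standard textbook expressions (unnormalised; normalisation conventions vary between references), and the coefficients are hypothetical values chosen purely for illustration.

```python
import numpy as np

# A few Zernike modes in Cartesian form over the unit pupil (unnormalised).
zernike_modes = {
    "defocus":   lambda x, y: 2 * (x**2 + y**2) - 1,
    "astig_0":   lambda x, y: x**2 - y**2,
    "coma_x":    lambda x, y: (3 * (x**2 + y**2) - 2) * x,
    "spherical": lambda x, y: 6 * (x**2 + y**2) ** 2 - 6 * (x**2 + y**2) + 1,
}

def wavefront(coeffs, n=128):
    """Sum coefficient-weighted Zernike modes on an n x n grid; NaN outside pupil."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    w = sum(c * zernike_modes[name](x, y) for name, c in coeffs.items())
    w[x**2 + y**2 > 1] = np.nan
    return w

# Hypothetical coefficients (in microns) for illustration only
w = wavefront({"defocus": 0.5, "astig_0": 0.2, "coma_x": 0.1, "spherical": 0.05})
rms = np.sqrt(np.nanmean(w**2) - np.nanmean(w)**2)
print(f"RMS wavefront error over the pupil: {rms:.3f} um")
```

Because the modes are orthogonal over the pupil, the RMS wavefront error is determined by the coefficients alone, which is precisely why this basis is convenient for quantifying and correcting ocular aberrations.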
The studies of Howland and Howland, as well as the follow-up to this work by Walsh et al. [111], in which an objective version of the same technique was
developed, showed the presence of significant third order coma and coma-like
aberrations in the human eye, in addition to fourth order terms which include
spherical aberration.
Another wavefront sensing technique recently adopted for ocular use is the
Shack-Hartmann sensor. The sensor as described by Platt and Shack [81] consists of a two-dimensional array of small lenslets on which the wavefront is incident. Each individual lenslet locally samples the gradient of the wavefront
to give a map of local gradients from which a wavefront map can be reconstructed. The Shack-Hartmann sensor is described in more detail in chapter 4,
where it is presented as one of the main components of the adaptive optics system described in this thesis. The Shack-Hartmann sensor was first applied to
measure the aberrations of the human eye by Liang et al. [63, 64] who demonstrated the presence of higher-order aberrations beyond fourth-order Zernike
terms, in particular for large pupil sizes. A larger study using the same technique also confirmed the presence of higher-order aberrations [51].
All the methods outlined above attempt to quantify the optical performance
of the eye by measuring specific aberrations. The earlier studies were mostly
concerned with spherical aberration, astigmatism and coma, mostly because
the techniques used were insensitive to the smaller magnitudes of the higher-order aberrations. Later studies showed the presence of higher-order aberrations in the eye and, though there is as yet no universal agreement on the magnitudes of the high-order Zernike terms in the eye or their effect on visual performance, their presence is well accepted. The order above which the Zernike terms can be considered negligible has a particular
importance in adaptive optics since it defines the spatial order for which the
adaptive optics system should be designed to correct. Further standardisation
of wavefront sensors should ensure that sensors based on the same technique,
such as Shack-Hartmann wavefront sensors, should give repeatable results
that can be reliably compared with those from other instruments.
Work has also been done in measuring the overall optical quality of the eye
without analysing specific aberrations. Le Grand in 1937 [40] obtained an estimate of the modulation transfer function (MTF) of the eye from the measurement of the ratio of contrast sensitivity threshold of a conventional grating and
that of interference fringes formed directly on the retina. The same method
was later used with a laser source by Campbell et al. [19, 20]. Flamant in 1955
quantified the optical quality of the eye through objective measurements of the
line spread function (LSF) [35], and a development on this method by Santamaría in 1987 measured the point spread function (PSF) of the eye [94]. The
objective measurement of the PSF was also used to analyse the phase transfer
function (PTF) of the eye [9] and to reconstruct a wave aberration map of the
eye using retrieval algorithms [10].
2.3.1 The Dynamic Nature of the Aberrations of the Eye
Earlier in this chapter the eye was described as a simple optical system but a
complex biological organ. The various physical and physiological processes
which are continually occurring in the eye must undoubtedly have an effect
on the optical quality of the eye. The accommodation of the eye changes the
focus of the optical system. The microfluctuations of accommodation have
frequencies of up to 5 Hz, which means that focus error will fluctuate at the
same frequency [23]. However, the change in shape of the crystalline lens
will also introduce fluctuations in other aberrations. Hofer et al. showed that
higher-order Zernike terms fluctuate with temporal frequencies of up to 2 Hz
at least [49]. Similar time variation was observed for all 32 Zernike terms considered. Diaz-Santana et al. used a high-bandwidth adaptive optics system
to show that the fluctuations of the eye’s aberrations have a non-negligible
effect up to temporal frequencies of at least 30 Hz [29]. The contribution of
microfluctuations in accommodation is only one of the causes of the dynamic
behaviour of aberrations. Other factors could include eye movements and
tremors, fluctuations in the tear film topography, and the effect of the respiratory and cardiac cycles. Research in various laboratories is currently tackling some of these issues [32, 43, 44, 49, 66].
The importance of this dynamic nature of the ocular aberrations becomes
more evident when trying to correct for these aberrations. If we were to apply a perfect static correction based on an instantaneous measurement of the eye’s aberrations, the correction would only be ideal at the instant of the measurement. Similarly, applying a static correction based on a time-averaged measurement of the eye’s aberrations would only correct for the static component of the aberrations, leaving an instantaneous residual aberration at any
one time. For this reason, to achieve an optimal correction over a prolonged
time interval we are required to measure and correct for the eye’s aberrations
continuously.
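A toy numerical illustration of this point: for a single aberration coefficient fluctuating sinusoidally about a mean value, subtracting the time-averaged (static) correction leaves a residual RMS of the fluctuation amplitude divided by the square root of 2, whereas an ideal continuous correction leaves none. The mean, amplitude and frequency below are arbitrary values chosen for the demonstration, not measured ocular data.

```python
import numpy as np

t = np.linspace(0, 10, 10_000)               # time axis, seconds
mean, amp, freq = 0.4, 0.2, 2.0              # arbitrary units and Hz
coeff = mean + amp * np.sin(2 * np.pi * freq * t)  # fluctuating aberration

static_residual = coeff - np.mean(coeff)     # static correction of the mean
dynamic_residual = coeff - coeff             # ideal continuous correction

rms = lambda r: np.sqrt(np.mean(r**2))
print(f"static  correction residual RMS: {rms(static_residual):.3f}")  # ~amp/sqrt(2)
print(f"dynamic correction residual RMS: {rms(dynamic_residual):.3f}") # 0.000
```

The static correction removes only the mean; everything time-varying survives as residual aberration, which is the quantitative motivation for closed-loop adaptive optics.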
2.4 RETINAL IMAGING SYSTEMS
Considering the eye as a refractive element focusing a collimated beam onto
the retina, it can be seen that if two eyes were placed facing each other, then the
image of one retina would be formed on the other retina. This simple principle
forms the basis of the direct ophthalmoscope, invented by Helmholtz in 1851,
in which the patient’s retina is illuminated so that there is enough light reflecting back from it into the observer’s eye, and therefore the observer can view the image of the patient’s retina.

Figure 2.6: The direct observation of the retina using a direct ophthalmoscope.

Figure 2.6 shows schematically the design of a basic ophthalmoscope. A variation of the direct ophthalmoscope is the
indirect ophthalmoscope in which the observer views a virtual image of the
subject’s retina rather than a real one. This configuration can provide a larger
field of view and thus can be used to observe larger areas of the retina without the need to move around between different views. Both direct and indirect
ophthalmoscopes are used nowadays routinely by optometrists and ophthalmologists for screening patients and for basic diagnosis of retinal pathologies.
Another standard retinal imaging device found in many eye clinics is the fundus camera. Fundus photography is an extension of the indirect ophthalmoscope, in which the image is recorded on photographic film or by a digital camera. This setup provides the advantage that a permanent image can be created
which can be stored for re-examining and for monitoring the progression of
certain diseases. The fundus photographs obtained are a superposition of the
various wavelength-dependent reflections from the different structures in the
distinct layers of the retina. It is possible to reduce the effect of reflections from
particular layers by introducing colour filters which block wavelengths which
are strongly reflected by those layers, provided that the retinal structures of interest do not also reflect strongly at those wavelengths. The
idea of wavelength filtering has been used in fundus photographs as a way to
distinguish between different retinal layers fairly successfully. However, the
lack of better depth discrimination is still a problem when trying to achieve
ever-higher resolution and contrast in in-vivo retinal imaging.
A solution to depth discrimination in retinal imaging is provided by confocal
microscopy. Confocal microscopy allows the optical sectioning of a 3D sample
by blocking light originating from out-of-focus planes in the sample. This improves both the depth resolution of the imaging system and the contrast of the
images by reducing considerably the effect of light scattered from unwanted
layers. Confocal microscopy is described in more rigorous detail in chapter 3.
Following the groundwork done in the development of a confocal microscope
for the eye, or a confocal scanning laser ophthalmoscope (SLO),[3] by Webb and
co-workers in the 1980s [112, 113, 114, 115, 116], the SLO has become a commercially available device used as an important tool in ophthalmic clinics.

[3] Though the acronym SLO was originally coined for the non-confocal instrument, with the confocal instrument being referred to as CLSO or CSLO in some literature, the confocal instrument has rendered the non-confocal one obsolete; hence the term SLO is generally used to refer to the confocal version nowadays. This norm will be followed in this thesis.
Further improvement in depth resolution was brought about by the use of
low-coherence interferometry for imaging the retina. Optical coherence tomography (OCT) was developed and first used on biological samples in-vitro
[53, 122]. Light reflected from the retina is interfered with that of a reference
beam. A low-coherence light source is used so that interference is observed
only for light whose optical path length matches that of the reference beam to within the coherence length of the source. Thus the coherence length of the light used determines the depth resolution of the imaging
system. Scanning is then employed to image different depths and different
lateral locations on the retina. Figure 2.7 illustrates schematically the principle of operation of this technique. OCT was used to image the human retina
in-vivo showing retinal layers which were previously imaged only in excised
samples [47, 84]. OCT has also been commercialised and used for the clinical
diagnosis of a number of retinal pathologies of which glaucoma and macular
holes are just two. OCT, as described above, is a longitudinal imaging system.
Transverse images can only be created from a large number of longitudinal scans which are then digitally processed. This compromises the quality of the transverse images due to the length of time required to acquire enough longitudinal scans.
This problem was addressed by the development of en-face OCT, which creates transverse scans directly, thus reducing considerably the length of time required for
a transverse image, and hence increasing lateral resolution and image quality
in general [83, 90].
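The depth resolution set by the coherence length can be estimated with the standard expression for a source with a Gaussian spectrum, l_c = (2 ln 2 / π) λ₀²/Δλ, where λ₀ is the centre wavelength and Δλ the spectral bandwidth. The source parameters in the sketch below are hypothetical values typical of a near-infrared superluminescent diode, not taken from this thesis.

```python
import math

def oct_axial_resolution(center_wavelength, bandwidth):
    """Axial resolution for a Gaussian-spectrum source (standard OCT formula)."""
    return (2 * math.log(2) / math.pi) * center_wavelength**2 / bandwidth

# Hypothetical source: 830 nm centre wavelength, 30 nm bandwidth
res = oct_axial_resolution(830e-9, 30e-9)
print(f"axial resolution ~ {res * 1e6:.1f} um")  # ~10 um
```

Note the inverse dependence on bandwidth: doubling Δλ halves the coherence length, which is why broadband sources are the route to finer depth sectioning in OCT.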
The need to resolve ever smaller details on the retina for clinical and scientific
purposes has led to further developments in retinal imaging. Techniques analogous to stellar interferometry have been used to determine inter-cone distance in the human eye [8].

Figure 2.7: Schematic representation of an OCT setup for retinal imaging.

However, actual retinal images were also obtained on subjects with very good ocular optics in which cones could be resolved at eccentricities of 0.5° from the foveal centre using a high-magnification fundus
camera [71]. Resolution of cones was also achieved with an SLO modified
for high-magnification imaging in which the raw frames were post-processed
to align and average a number of successive frames for noise reduction purposes [108, 109]. Deconvolution techniques on retinal images have also been
used successfully to image cones in-vivo in the retina using wavefront measurements obtained from a Shack-Hartmann sensor [21, 22].
However, as discussed earlier in this chapter, one of the principal drawbacks in high-resolution retinal imaging is the significant presence of ocular aberrations, which show both high inter-subject variability and temporal variability for any one subject. The latter issue is somewhat analogous to
the problems incurred by the Earth’s turbulent atmosphere for astronomical
imaging in ground-based telescopes. The analogy has been taken further by
using the same technique used to overcome the dynamic-aberration problem
in astronomical imaging in retinal imaging, namely adaptive optics (AO). AO
has been used to correct for the aberrations of the human eye in real-time and,
in the experiments designed to obtain images of the retina, to resolve cones
close to the foveal centre [29, 34, 37, 38, 50, 65, 92].
CHAPTER 3: Confocal Microscopy and Ophthalmoscopy
The idea of optically scanning an object with the purpose of creating an image
of it is not new. It was first suggested by Alexander Bain in 1843 in his proposals for what later evolved into the fax machine. McMullan reviews some
of the other early contributions to scanned imaging [70]; however, the first
working optical scanning microscope was described by Young and Roberts in
the early 1950s [87, 120]. Though this work had already shown the potential
of scanning microscopy in improving image quality, it was only a number of
subsequent design and technological advances that sped up the development
of the scanning microscope into the indispensable tool it is today.
The first of these contributions was the introduction of the confocal principle to scanning microscopy. The confocal arrangement gives rise to improved lateral and depth resolution over conventional microscopy. The increased depth resolution is of particular interest when the aim is to image
different layers within a 3D structure optically and unobtrusively. In fact, the
first confocal scanning microscope described by Minsky was used to obtain
3D images of the brain [72].
The next surge in the development of confocal scanning microscopy came
with the advent of the laser which addressed one of the major problems that
scanning microscopy was still facing, namely a poor light source. Davidovits
and Egger described the confocal laser scanning microscope in 1969 [25, 26].
Further significant improvements were shown by Brakenhoff et al., where bacteria which could not be resolved by conventional microscopes were imaged
using a confocal laser scanning microscope [16, 17, 18], and Sheppard and
co-workers [95, 96, 97, 98]. Indeed, in contrast to the previous two decades,
progress in this field was very rapid in the 1970s and 1980s, as illustrated by the
exhaustive review given by Pluta [82].
Webb and co-workers brought scanning microscopy into ophthalmology; they
were the first to image the retina using a scanning ophthalmoscope, and later a
confocal instrument [113, 114, 115, 116]. The success of these instruments lay in
their ability to image distinct layers of the retina with relative ease and comfort
for the patient and to produce high resolution and high contrast images in
digital format making them easy to manipulate.
3.1 THEORETICAL BACKGROUND OF CONFOCAL MICROSCOPY
In a conventional optical microscope the key elements are a condenser lens,
which illuminates a section of the object to be imaged using light from an extended source, and an objective lens, which forms an image of the illuminated
section on the image plane. This configuration is represented schematically
in figure 3.1(a). The role of the condenser lens is to provide illumination and
is not responsible for imaging; hence the objective is the sole imaging lens in this system, and it is the optical quality of the objective lens, not that of the condenser lens, which determines the resolution of the image.

Figure 3.1: Optical configurations for (a) a conventional microscope, (b) a scanning microscope and (c) a confocal scanning microscope.
Figure 3.1(b) shows a basic scanning microscope in which a point source is
imaged, via an objective lens, on the object so that only a small section of the
object is illuminated (for an ideal point source, the object is illuminated by the PSF of the objective lens). A collector lens then collects light originating from this small illuminated region onto an intensity detector. An image of the object is
formed by scanning the input beam with respect to the object and displaying
the intensity signal obtained from the detector by synchronising the detector
output with the scanning mechanism. Compared to the conventional microscope, the image in this scanning microscope is constructed point by point,
potentially eliminating distortion and fluctuations in magnification due to the
wide-field imaging in the conventional case. However, the resolution of the
image produced is still dependent on the optical quality of the objective lens
alone, since this is the only lens responsible for imaging.
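This point-by-point image formation can be mimicked numerically: an illumination spot (here a Gaussian stand-in for the objective's PSF) is stepped across a reflectance map, and at each scan position the detector records a single integrated intensity value. The checkerboard object, spot width and grid size are all invented for the demonstration.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[0:n, 0:n]
obj = ((xx // 8 + yy // 8) % 2).astype(float)   # toy reflectance: checkerboard

# Gaussian illumination spot standing in for the objective's PSF
def spot(cx, cy, sigma=1.5):
    return np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * sigma**2))

# Scan the spot over the object; the (non-confocal) detector simply integrates
# all light returned from the illuminated region at each scan position, and
# the image is assembled one pixel per scan position.
image = np.empty((n, n))
for cy in range(n):
    for cx in range(n):
        image[cy, cx] = np.sum(obj * spot(cx, cy))

print("image assembled point by point, shape:", image.shape)
```

The result is the object blurred by the illumination spot, illustrating that in this non-confocal arrangement the resolution is set entirely by the objective's PSF.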
In the configuration represented in figure 3.1(c), a pinhole is placed at the image plane of the collector lens just in front of the detector. This effectively
transforms the detector into a point detector. It also modifies the role of the
collector lens in such a way as to make it an imaging lens as well, since only
light originating from a very small region on the object is detected. In this
configuration, the objective and collector lenses have the same focal point on
the object so that the same small region illuminated by the objective is being
imaged by the detector. This gives rise to the term confocal configuration.
The imaging role of both lenses in a confocal microscope gives rise to improved depth and lateral resolution over the conventional case. Subsequent
sections in this chapter analyse these two cases in more detail. However, it
is fairly straightforward to understand qualitatively how a confocal arrangement gives rise to improved depth discrimination, which is the most sought-after feature of confocal microscopy. Figure 3.2 shows a simplified representation of the light detection in a confocal microscope. The pinhole is placed at
the focal plane of the collector lens and collects light from the other focal plane
of the lens, which is the plane of interest in the 3D object being imaged. Light
[Figure 3.2: Depth discrimination in a confocal configuration.]
originating from an out-of-focus plane in the object forms a defocused image
at the detector pinhole; this means that most of the signal from out-of-focus
planes is blocked and does not contribute to the final image. The further away
a plane is from the focal plane, the smaller its contribution to the final image.
This allows unobtrusive 3D sectioning of an object.
The improved resolution in a confocal microscope can be seen as being obtained at the expense of field of view, a principle explained by Lukosz [69]. In fact a confocal microscope only images one point at a time and the overall image is reconstructed from these individual points. This feature can be advantageous if the imaging system is used on-axis, since the imaging system is only required to be diffraction-limited on-axis. This would, however, require that the object be mechanically scanned across a stationary beam. This mode is referred to as an object-scan
configuration, and is in general the preferred mode of operation for confocal
microscopes. There are instances, however, when a beam-scan configuration
has to be used. In this case the beam is scanned across the stationary object using oscillating or rotating mirrors or acousto-optic light modulators. A
beam-scan confocal microscope has to be used in instances when the speed of
scanning is crucial or when the object cannot be mechanically scanned. The
latter scenario is the case we have in ophthalmology where the object is the
living human retina, making the beam-scan configuration the preferred, or
only, choice.
A couple of consequences of a scanning system which can be used to our advantage in an imaging microscope are worth a brief mention at this stage. The
first of these is the ease with which zooming in and out of a region of interest in the object can be implemented by changing the scan amplitude of the
beam. The second is the digital format of the images produced³. This makes
manipulation and computational processing and enhancement of the images
much easier.
³This is a restriction caused by the point-by-point imaging nature of a scanning system. Analogue imaging mechanisms, such as cathode ray tubes, were used in early scanning microscopes, but technological advances have made digital systems the default in all scanning microscopes today.

3.1.1 Image Formation in the Confocal Microscope

The imaging properties of an optical system can be described by the amplitude point-spread function (PSF) of the system: the amplitude distribution that an imaging system gives in the region of the image plane when imaging a single point. The PSF $h(x, y)$ in the image $xy$-plane can be given in terms of the
pupil function $P(\xi, \eta)$, by [39]

$$h(x, y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} P(\xi, \eta)\, e^{-\frac{j2\pi}{\lambda d_2}(\xi x + \eta y)}\, d\xi\, d\eta, \qquad (3.1)$$
where ξ and η are the spatial coordinates at the plane of the lens, λ is the
wavelength and d2 is the distance of the objective lens from the image plane,
as shown in figure 3.3. Equation 3.1 shows that the PSF is the Fourier transform of the pupil function. For an aberration-free objective lens with a circular
aperture, the pupil function has the value of unity within the aperture of the
lens and zero elsewhere, and hence the normalised PSF h is given by⁴
$$h(v) = \frac{2J_1(v)}{v}, \qquad (3.2)$$
where $J_1$ is the first-order Bessel function of the first kind and $v$ is the dimensionless variable corresponding to the radial coordinate $r = \sqrt{x^2 + y^2}$, defined by

$$v = \frac{2\pi r}{\lambda}\sin(\alpha), \qquad (3.3)$$

where $\sin(\alpha) = a/d_2$ is the numerical aperture, $a$ being the radius of the diffracting aperture at the plane of the objective lens. The intensity distribution of the diffraction-limited PSF is given by

$$I(v) = |h(v)|^2 = \left[\frac{2J_1(v)}{v}\right]^2. \qquad (3.4)$$
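The Airy pattern of equations 3.2 and 3.4 is straightforward to evaluate numerically. The following sketch assumes NumPy and SciPy are available; the function names are illustrative, not from the thesis:

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def psf_amplitude(v):
    """Normalised amplitude PSF of an unaberrated circular pupil,
    h(v) = 2 J1(v)/v (eq. 3.2)."""
    v = np.asarray(v, dtype=float)
    safe_v = np.where(v == 0, 1.0, v)  # avoid 0/0; the limit at v = 0 is 1
    return np.where(v == 0, 1.0, 2 * j1(safe_v) / safe_v)

def psf_intensity(v):
    """Diffraction-limited intensity PSF, I(v) = |h(v)|^2 (eq. 3.4): the Airy pattern."""
    return psf_amplitude(v) ** 2
```

The first dark ring of the Airy pattern falls at v ≈ 3.83, the first zero of $J_1$.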
For an extended object characterised by a transmission function $t(x', y')$, the intensity distribution of the image is given in terms of the PSF and the transmission function by [39]

$$I(x, y) = |h|^2 \otimes |t|^2 \qquad (3.5)$$

for incoherent imaging, where $\otimes$ denotes convolution, and

$$I(x, y) = |h \otimes t|^2 \qquad (3.6)$$

for the coherent case.

⁴The derivation for obtaining this diffraction-limited point-spread function is given in appendix B.

[Figure 3.3: Imaging of a point source by a simple lens.]
For a confocal arrangement we have two imaging lenses; the objective lens
images the point source onto the object and the collector lens images this illuminated area of the object on the image plane pinhole. Hence, the object is
now illuminated by the amplitude PSF of the objective lens, h1 (x, y). If a reflective system is considered so that the object has a reflectance given by the
reflectance function $r(x, y)$, then the amplitude distribution $h_1(x, y)\,r(x, y)$ on
the object acts as an input signal for the second lens in the confocal system, the collector lens⁵. The amplitude distribution $U(x', y')$ at the detector plane, where $x'$ and $y'$ are the coordinates at the detector plane, is given by [118, 119]

$$U(x', y'; x_s, y_s) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h_1(x_1, y_1)\, r(x_s - x_1, y_s - y_1)\, h_2(x' - x_1, y' - y_1)\, dx_1\, dy_1, \qquad (3.7)$$
where $h_2(x', y')$ is the amplitude PSF of the collector lens and $(x_s, y_s)$ are the coordinates of the scan position. However, an ideal confocal system is characterised by a point detector; hence $U(x', y'; x_s, y_s)$ only needs to be considered at $x' = y' = 0$, the coordinates of the detector. Therefore

$$U(x_s, y_s) = U(0, 0; x_s, y_s) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h_1(x_1, y_1)\, r(x_s - x_1, y_s - y_1)\, h_2(-x_1, -y_1)\, dx_1\, dy_1. \qquad (3.8)$$
For an aberration-free system, the PSFs of the objective and collector lenses are even functions, so that $h_2(-x_1, -y_1) = h_2(x_1, y_1)$ and

$$U(x_s, y_s) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h_1(x_1, y_1)\, h_2(x_1, y_1)\, r(x_s - x_1, y_s - y_1)\, dx_1\, dy_1, \qquad (3.9)$$

which is the convolution of the product of the PSFs with the reflectance function:

$$U(x_s, y_s) = h_1 h_2 \otimes r. \qquad (3.10)$$

Furthermore, for a reflective system $h_1 = h_2$, and we are interested in the intensity distribution $I_C$ at the detector rather than the amplitude; hence

$$I_C(x_s, y_s) = |h^2 \otimes r|^2. \qquad (3.11)$$
Comparing this with equation 3.6 for coherent imaging, we note that a confocal microscope is a coherent imaging system with an effective PSF equal to the square of the PSF of the objective/collector lens⁶. For an aberration-free system with circular pupils, the effective PSF of the confocal microscope is given by squaring equation 3.2, and hence the image of a point source is given by the intensity distribution

$$I_C(v) = \left[\frac{2J_1(v)}{v}\right]^4. \qquad (3.12)$$

⁵A reflective system is being considered here since it is more relevant to this thesis; however, the same applies to a transmissive system, in which case the reflectance function would be replaced by a transmission function.

3.1.2 Lateral and Depth Resolution
Comparing the diffraction-limited PSF of a conventional microscope to that
of a confocal microscope, as given by equations 3.4 and 3.12 respectively and
plotted in figure 3.4, it can be seen that the confocal PSF is narrower than
the conventional one. This gives rise to an increase in lateral resolution in a
confocal microscope compared to its conventional counterpart.
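The narrowing can be quantified by comparing the full widths at half-maximum (FWHM) of equations 3.4 and 3.12. A short numerical check, assuming NumPy and SciPy are available (helper names are illustrative):

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

def airy_intensity(v):
    """Conventional intensity PSF I(v) = [2 J1(v)/v]^2 (eq. 3.4)."""
    return 1.0 if v == 0 else (2 * j1(v) / v) ** 2

def fwhm(intensity):
    """FWHM of a radially symmetric profile with I(0) = 1, by root bracketing
    between the axis and the first Airy zero."""
    half_point = brentq(lambda v: intensity(v) - 0.5, 1e-9, 3.8)
    return 2 * half_point

fwhm_conventional = fwhm(airy_intensity)                # eq. 3.4
fwhm_confocal = fwhm(lambda v: airy_intensity(v) ** 2)  # eq. 3.12
# The confocal PSF turns out narrower by a factor of roughly 1.4.
```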
However it is the improved depth resolution of the confocal arrangement
which has made the confocal microscope such a useful tool in many fields
in which optical sectioning of a sample is required. To consider the depth
response of the confocal microscope we will use the equation representing
the PSF as a Fourier transform of the pupil function, equation 3.1, as a starting point and rewrite it in polar coordinates as a function of the zeroth order
Bessel function of the first kind (as shown in appendix B):
$$h(v) = 2\int_0^a P(\rho)\, J_0(v\rho)\, \rho\, d\rho, \qquad (3.13)$$

where $(\rho, \theta)$ are the coordinates at the pupil plane, which has a circular aperture of radius $a$. The pupil function is assumed to be symmetric so that it is a function only of the radial coordinate $\rho$.

⁶For a transmissive system the effective PSF would be the product of the PSFs of the objective and collector lenses.
[Figure 3.4: (a) and (b) are the 2D intensity distribution plots of the PSF in terms of the normalised optical variable v for the conventional microscope and the confocal microscope respectively. (c) represents a 1D plot of the same intensity distribution patterns.]
In order to consider the response of a system in the axial direction it is convenient to introduce a defocus term in the form of a quadratic phase factor $e^{\frac{1}{2}ju\rho^2}$, where $u$ is the dimensionless variable corresponding to the axial coordinate $z$ and defined by

$$u = \frac{8\pi z}{\lambda}\sin^2\!\left(\frac{\alpha}{2}\right). \qquad (3.14)$$
Equation 3.13 thus becomes

$$h(u, v) = 2\int_0^a P(\rho)\, e^{\frac{1}{2}ju\rho^2}\, J_0(v\rho)\, \rho\, d\rho. \qquad (3.15)$$
This gives the amplitude distribution in the axial and lateral directions in the
region of the focus. Figure 3.5 shows contour plots of $|h(u, v)|^2$ and $|h(u, v)|^4$
obtained by numerical integration of equation 3.15, which correspond to the
intensity distributions close to the focus of a coherent conventional and a confocal microscope respectively.
Considering an aberration-free objective such that P has the value of unity
within the aperture and zero elsewhere, and taking v = 0 so as to consider
only the axial response, we find that⁷

$$|h(u, 0)| = \frac{\sin(u/4)}{u/4}. \qquad (3.16)$$

The intensity response of a conventional microscope is thus given by

$$I(u, 0) = |h(u, 0)|^2 = \left[\frac{\sin(u/4)}{u/4}\right]^2. \qquad (3.17)$$

It was shown earlier that the effective PSF for a confocal microscope is given by $h^2$; hence the axial intensity response of the confocal instrument is given by

$$I_C(u, 0) = |h(u, 0)|^4 = \left[\frac{\sin(u/4)}{u/4}\right]^4, \qquad (3.18)$$
which shows that the confocal microscope has a sharper axial response, resulting in higher depth resolution than the conventional microscope. This can be
seen from the plots of equations 3.17 and 3.18 as shown in figure 3.6.
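The sharper axial response can likewise be quantified from equations 3.17 and 3.18 by computing the FWHM of the two curves. A sketch assuming NumPy and SciPy are available (names are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def axial_amplitude(u):
    """|h(u, 0)| = sin(u/4)/(u/4), eq. 3.16 (np.sinc(x) = sin(pi x)/(pi x))."""
    return float(np.sinc(u / (4 * np.pi)))

def axial_fwhm(power):
    """FWHM of |h(u, 0)|**power: power=2 gives eq. 3.17, power=4 gives eq. 3.18."""
    half_point = brentq(lambda u: axial_amplitude(u) ** power - 0.5, 1e-9, 12.0)
    return 2 * half_point

fwhm_axial_conventional = axial_fwhm(2)  # roughly 11 in units of u
fwhm_axial_confocal = axial_fwhm(4)      # roughly 8 in units of u
```

The confocal curve is again narrower than the conventional one by a factor of about 1.4.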
The optical sectioning property of the confocal microscope can be further illustrated by considering the radial integrated intensity of planes in the region
⁷The derivation for obtaining this diffraction-limited axial response for the PSF is given in appendix B.
[Figure 3.5: Contour plots of (a) |h(u, v)|² and (b) |h(u, v)|⁴. Contour intensity values: 0.9, 0.7, 0.5, 0.3, 0.2, 0.05, 0.03, 0.02, 0.005, 0.003, 0.002.]
[Figure 3.6: A plot of the intensity distribution in a conventional and confocal microscope as a function of the axial variable u.]
of the image plane for a point source [99]:
$$I_{int}(u) = \int_0^{\infty} I(u, v)\, v\, dv. \qquad (3.19)$$
The integrated intensity of a plane is a measure of the contribution of that
plane to the overall signal, and therefore variation of the integrated intensity
serves as a metric for the resolution of axially-separated planes in a microscope [118, 119]. For the conventional microscope, equation 3.19 can be written as
$$I_{int}(u) = \int_0^{\infty} |h(u, v)|^2\, v\, dv. \qquad (3.20)$$
From Parseval's theorem⁸ we can conclude that the integral above is equal to
⁸Parseval's theorem states that if $g(x)$ and $G(\xi)$ are Fourier transform pairs, then $\int_{-\infty}^{\infty} |g(x)|^2\, dx = \int_{-\infty}^{\infty} |G(\xi)|^2\, d\xi$ [15].
the integral of the square modulus of the pupil function, since the pupil function is the Fourier transform of the PSF. The axial variation of h is represented
in the pupil function by adding a phase factor $e^{\frac{1}{2}ju\rho^2}$, which however cancels
out when the modulus is taken. Thus we are left with the definite integral of
a constant function which is itself a constant. This implies that the integrated
intensity in a conventional microscope is independent of the axial position.
The same does not apply for the confocal microscope, for which the integrated
intensity is given by
$$I_{int,C}(u) = \int_0^{\infty} |h(u, v)|^4\, v\, dv \qquad (3.21)$$
and is a non-constant function. Numerical integration of the above expression
shows that the integrated intensity falls off with increasing axial distance from
the focus [99], as illustrated in figure 3.7.
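This fall-off can be reproduced by direct numerical integration of equations 3.15 and 3.21. The sketch below assumes NumPy and SciPy are available; the pupil radius is normalised to 1 and the v-integral is truncated at v = 25, both illustrative choices:

```python
import numpy as np
from scipy.special import j0

def _trapezoid(y, x):
    """Local trapezoidal rule, so the sketch does not depend on a NumPy version."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def h_uv(u, v, n_rho=400):
    """h(u, v) = 2 * integral over [0, 1] of exp(j u rho^2 / 2) J0(v rho) rho d(rho)
    (eq. 3.15, unaberrated unit-radius pupil, P = 1)."""
    rho = np.linspace(0.0, 1.0, n_rho)
    integrand = np.exp(0.5j * u * rho**2) * j0(v * rho) * rho
    return 2 * _trapezoid(integrand, rho)

def integrated_intensity_confocal(u, v_max=25.0, n_v=400):
    """I_int,C(u) = integral of |h(u, v)|^4 v dv (eq. 3.21), truncated at v_max."""
    v = np.linspace(0.0, v_max, n_v)
    i4 = np.array([abs(h_uv(u, vi)) ** 4 for vi in v])
    return float(_trapezoid(i4 * v, v))

# The contribution of a v-plane to the confocal signal drops as the plane
# moves away from focus, as in figure 3.7.
falloff = [integrated_intensity_confocal(u) for u in (0.0, 5.0, 10.0)]
```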
The above considerations on the lateral and depth resolution of a confocal
microscope have assumed the case of a pinhole of infinitesimal diameter. In
practice the finite size of the pinhole will affect the performance of the microscope. The analysis of confocal microscopy for finite size pinholes shows that
the ideal pinhole size is equal to the Airy disc diameter at the pinhole plane;
the improvement of lateral resolution compared to conventional microscopy
is however negligible for this pinhole size [24, 118, 119].
3.1.3 Power Spectrum Analysis
In section 3.1.1 the object to be imaged has been represented in terms of a reflectance function r(x, y). In order to gain a better insight into the optical system it is useful to represent the object in the spatial frequency domain rather
than the spatial domain. The frequency spectrum R(ξ, η) of the object is given
[Figure 3.7: Variation of the integrated intensity of a v-plane with defocus u.]

by

$$R(\xi, \eta) = \iint r(x, y)\, e^{-2\pi j(\xi x + \eta y)}\, dx\, dy, \qquad (3.22)$$

and the reflectance function is given by the inverse Fourier transform:

$$r(x, y) = \iint R(\xi, \eta)\, e^{2\pi j(\xi x + \eta y)}\, d\xi\, d\eta. \qquad (3.23)$$
This expression for r can be substituted in the convolution integral for the
intensity distribution in a confocal microscope, equation 3.11, giving
$$I_C(x_s, y_s) = \left|\iint h^2(x, y)\, r(x_s - x, y_s - y)\, dx\, dy\right|^2$$
$$= \left|\iiiint h^2(x, y)\, R(\xi, \eta)\, e^{-2\pi j(\xi x_s + \eta y_s)}\, e^{2\pi j(\xi x + \eta y)}\, dx\, dy\, d\xi\, d\eta\right|^2. \qquad (3.24)$$
However, $h^2$ is the effective PSF $h_{eff}$ of the confocal system, which can be written in terms of the effective pupil function $P_{eff}$ of the confocal microscope using equation 3.1, and hence we can write the inverse transform to give $P_{eff}$ in terms of $h^2$:
$$P_{eff}(\xi/\lambda d_2, \eta/\lambda d_2) = \iint h^2(x, y)\, e^{\frac{j2\pi}{\lambda d_2}(\xi x + \eta y)}\, dx\, dy. \qquad (3.25)$$
Substituting in equation 3.24 gives

$$I_C(x_s, y_s) = \left|\iint H_C(\xi, \eta)\, R(\xi, \eta)\, e^{-2\pi j(\xi x_s + \eta y_s)}\, d\xi\, d\eta\right|^2, \qquad (3.26)$$
where $H_C(\xi, \eta)$ is the coherent transfer function (CTF) of the confocal microscope, which is equal to $P_{eff}$. Since the pupil function is the Fourier transform of the PSF, it follows from the convolution theorem that the confocal CTF is given by

$$H_C(\xi, \eta) = P(\xi/\lambda d_2, \eta/\lambda d_2) \otimes P(\xi/\lambda d_2, \eta/\lambda d_2). \qquad (3.27)$$
Analogously we can deduce that the CTF for the conventional coherent microscope is simply given by the pupil function.
For unaberrated circular pupils, the convolution in equation 3.27 is represented geometrically by the region of overlap between two circles of equal
radius, which we can show by simple geometry to be equal to [15]

$$H_C(\delta) = \frac{2}{\pi}\left[\cos^{-1}\!\left(\frac{\delta}{2\delta_0}\right) - \frac{\delta}{2\delta_0}\sqrt{1 - \left(\frac{\delta}{2\delta_0}\right)^2}\,\right], \qquad (3.28)$$

where $\delta = \sqrt{\xi^2 + \eta^2}$ and $\delta_0 = a/\lambda d_2$. $\delta_0$ was chosen so that it is the cut-off of the pupil function $P(\xi/\lambda d_2, \eta/\lambda d_2)$, since $P(\xi, \eta)$ has a non-zero value within the circle of radius $a$. Thus, whereas the conventional coherent microscope has a cut-off equal to $\delta_0$, the cut-off for the confocal microscope is given by $2\delta_0$, which is the value at which equation 3.28 falls to zero.
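Equation 3.28 is simple to evaluate numerically; the following check (NumPy assumed, names illustrative) confirms that the confocal CTF is unity at zero frequency, decreases monotonically, and cuts off at 2δ₀:

```python
import numpy as np

def ctf_confocal(delta, delta0=1.0):
    """Confocal CTF of eq. 3.28: the normalised overlap area of two circles of
    cut-off delta0, evaluated for 0 <= delta <= 2*delta0."""
    x = np.clip(np.asarray(delta, dtype=float) / (2 * delta0), 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))

frequencies = np.linspace(0.0, 2.0, 201)
response = ctf_confocal(frequencies)  # falls from 1 at delta = 0 to 0 at delta = 2*delta0
```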
[Figure 3.8: Plot of the CTF H in terms of δ = √(ξ² + η²) for conventional and confocal imaging.]
Figure 3.8 shows the CTFs for the coherent conventional microscope and the
confocal case. From this figure we can see why direct comparison of the performance of the two microscopes cannot be made exclusively by looking at
their cut-off frequencies: even though the confocal microscope has twice the
cut-off frequency, the CTF is a monotonically decreasing function and hence,
higher frequencies contribute less to the image; the conventional case, on the
other hand, is a constant function. It should also be noted that the confocal
CTF is identical to the frequency response of an incoherent conventional microscope [39].
3.2 CONFOCAL OPHTHALMOSCOPY
There are mechanisms in the human eye which try to keep the retina in the
focal plane of the eye’s optics so that a focused image is formed on the layer
containing the photoreceptors. This is also the configuration we require for
using the eye as part of a microscope to image the retina. Fundus cameras
illuminate a wide patch of the retina uniformly so that the optics of the eye
can then act as an objective lens in a conventional microscope.
The 3D structure of the retina, however, prompts the need for an imaging system with higher depth resolution than conventional microscopes provide. This can
be achieved by replacing the flood illumination used in fundus photography
with a point illumination by using the eye’s optics as an objective lens analogous to the schematic representation shown in figure 3.1(c). Light is backscattered from the illuminated region and collected again by the optics of the eye,
this time acting as a collector lens. In this configuration the eye is acting as a
reflective mode confocal microscope, provided a pinhole is placed at the imaging plane so that the intensity detector collecting the light is transformed into
a point detector.
The optical sectioning capability of a confocal microscope is dependent on
the lens-object pair in more ways than one. Firstly, we have seen from equations 3.18 and 3.14 that the depth resolution is a function of the numerical
aperture squared. There is very little that can be done to increase the numerical aperture of the system other than use a dilated eye pupil so that the
aperture of the collector/objective lens is maximised. The typical numerical
aperture of the eye with a dilated pupil of 7 mm is around 0.2.
A second issue with the human eye is that, whereas in general the collector/objective lens of a confocal microscope is carefully chosen so as to give
a diffraction-limited PSF, we do not have such a luxury with the eye. Even
though the optics of the eye can be diffraction-limited for a pupil size of around
2 mm, this is seldom the case for larger pupil sizes, since the aberrations become more significant as the pupil size increases. In confocal ophthalmoscopes, in general, a
trade-off between the decreased numerical aperture of a smaller pupil size and
the increased aberrations of the larger pupil size has to be made to maximise
the resolution of the instrument. Alternatively, the aberrations introduced by
the large pupil size can be corrected for so as to approach diffraction-limited
optical quality while using the largest numerical aperture possible.
From the above considerations we can work out the theoretical limit for the
maximum depth resolution attainable by a confocal ophthalmoscope. For the
maximum numerical aperture of the human eye, which is 0.2, and for the
diffraction-limited case we can work out the depth resolution, which we will
take to be equal to the full width at half-maximum of the axial intensity response given by equation 3.18. Using this definition we find that the depth
resolution $d_u$ is equal to [24]

$$d_u = 0.9\,\frac{n\lambda}{(\mathrm{NA})^2}, \qquad (3.29)$$
where n is the refractive index and NA is the numerical aperture. From the
above equation we can conclude that the maximum depth resolution that a
confocal ophthalmoscope can achieve is around 15 µm (for a wavelength midway through the visible spectrum). In stark contrast to this figure is the depth
resolution of the best confocal ophthalmoscopes available commercially today
which is around 300 µm⁹.
⁹Data for the Heidelberg Retina Tomograph II (HRT II) [1].
The same limitations described above in reference to the depth resolution of a
confocal ophthalmoscope apply to the lateral resolution of such an instrument.
Equations 3.12 and 3.3 show that the lateral resolution is also dependent on
the numerical aperture of the system, and the aberrations in the pupil function
also have the effect of reducing this resolution. Taking the full width at half-maximum of the PSF (equation 3.12) as a measure of resolution, the maximum lateral resolution $d_v$ attainable theoretically for an unaberrated 7 mm pupil is given by [24]

$$d_v = 0.37\,\frac{\lambda}{\mathrm{NA}} \qquad (3.30)$$
and is roughly equal to 1 µm, a figure markedly better than the lateral resolution of the best instruments available to date, which is around 10 µm⁹.
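The two theoretical limits quoted above follow directly from equations 3.29 and 3.30. A quick check with illustrative values — λ = 550 nm for the mid-visible, NA = 0.2 for a 7 mm pupil, and n ≈ 1.336 for the vitreous humour; the refractive-index choice is an assumption, not stated in the text:

```python
# Illustrative parameter values (assumptions, see above):
wavelength_um = 0.550   # mid-visible wavelength, in micrometres
numerical_aperture = 0.2
n_refractive = 1.336    # assumed refractive index of the vitreous humour

# Eq. 3.29: axial (depth) resolution, the FWHM of the confocal axial response.
depth_resolution_um = 0.9 * n_refractive * wavelength_um / numerical_aperture**2

# Eq. 3.30: lateral resolution, the FWHM of the confocal lateral PSF.
lateral_resolution_um = 0.37 * wavelength_um / numerical_aperture
```

These give roughly 16 µm and 1 µm respectively, of the same order as the figures quoted above; the exact depth figure depends on the values assumed for n and λ.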
These discrepancies between the theoretically attainable lateral and depth resolution of confocal ophthalmoscopy and the actual resolutions of the current
state-of-the-art instruments are the justification for undertaking research into
dynamic aberration correction in the confocal ophthalmoscope using adaptive
optics.
CHAPTER 4

Adaptive Optics
An adaptive optics system is one which has the capability of changing its optical characteristics depending on the input to the system. Under this general definition, adaptive optics systems are far
more common than some might think. They include the human eye, for example, which can change its entrance pupil diameter and its refractive power
depending on the input light intensity and wavefront. Other examples, which
can sometimes be taken for granted, include the compact disc reader and auto-focus photographic cameras, which use internal feedback to modify the focus of their optics, and photochromic prescription spectacles, which alter their transmission of the visible and UV spectrum in response to the intensity of incident UV light.
However, in recent years, adaptive optics (AO) has taken a more specific
meaning. First proposed by Babcock in 1953 [11], an AO system is one which
corrects the aberrations of an incoming beam of light so as to obtain a flat wavefront from the aberrated incoming wavefront, hence pushing the resolution of an optical system closer to its diffraction limit.
The original proposal and implementation of AO systems were motivated by
astronomy, to correct for the dynamic aberrations introduced to the incoming
beam by the turbulent atmosphere of the Earth.
4.1 OVERVIEW OF ADAPTIVE OPTICS
As ground-based telescopes for astronomical imaging became bigger and bigger, thus reducing the loss of resolution due to diffraction, the effect of the
aberrations introduced by the turbulent atmosphere became more significant.
One way of bypassing the problem is to place the telescope in orbit around
the Earth, thus eliminating the passage of light through the atmosphere, as
with the Hubble Space Telescope. However, cheaper and easier solutions are
required, and AO proved to be a popular solution. When AO was first proposed in the 1950s, the technology was not good enough to make it feasible [11, 45]. The concept of astronomical imaging unhindered by the aberrations introduced by the atmosphere was however demonstrated in 1970 by
Labeyrie with the development of speckle interferometry [61]. The interest in
AO was rekindled by Hardy in 1978 [45] since by this time technological advances had made the implementation of an AO system possible. The general
setup of an AO system for astronomical imaging is shown in figure 4.1. Most
of the initial work was undertaken in military research programmes, but by
1991 the first astronomical AO system was reported functioning on a 3.6 m
telescope in Chile [86].
4.1.1 Adaptive Optics in the Human Eye
By the mid-1990s AO was a well-established technology in astronomical imaging.

[Figure 4.1: Schematic representation of a simplified AO system for astronomical imaging.]

Most of the principal telescopes around the world either had an AO system or had one planned. This, however, was also a period in which many
of the ideas and technology from astronomical AO were being channeled to
be used in ophthalmology. The first attempt at low-order active correction in
a retinal imaging system was presented by Dreher et al. in 1989 [30]. However, it was with the use of the Shack-Hartmann sensor for ocular aberration
measurements in 1994 [63] that the current research in AO kicked off. The
Shack-Hartmann sensor was novel to the field of ophthalmology but it had
been used for a long time in astronomy and was a common wavefront sensor
for astronomical AO systems. This led to the development of a closed-loop AO
system for the correction of ocular aberrations by Liang et al. [65], very similar
in concept to its astronomical counterpart. Since then, a number of closed-loop
AO systems have been developed to enhance the resolution of retinal images
obtained or to improve the visual performance of the eye [29, 34, 37, 38, 91, 92].
Figure 4.2 shows a simplified schematic representation of an AO system for
the eye. A collimated laser beam is used as an input so that it forms a spot
on the retina. This spot will serve as the source of light for the wavefront
sensor of the system and, depending on the imaging mode used, could also be
the illuminating source for the image-forming microscope. The backscattered
light from the retina will be aberrated on its way out of the eye. This aberrated
beam is incident on the deformable mirror which applies a correction to the
wavefront. The resultant wavefront is sensed by the wavefront sensor which
sends its data to a control computer; this, in turn, generates the appropriate
signals for the mirror so as to try to maintain the best correction possible.
The light is thus subject to a double pass process: it goes through the aberrating optics twice – once on its way in and another time on its way out. As
shown by Artal [6, 7], the intensity PSF Idp (x, y) obtained from such a double
pass process is given by the correlation of I1 (x, y) and I2 (x, y), which are the
single pass PSFs of the incoming and outgoing paths respectively:
$$I_{dp}(x, y) = I_1(x, y) \star I_2(x, y), \qquad (4.1)$$
where $\star$ denotes correlation. If we have a symmetric double pass in which the
entrance and exit pupils are of equal diameters, then I1 (x, y) = I2 (x, y) and the
double pass PSF is given by an autocorrelation function, which is symmetric.
In this case information about the odd aberrations of the eye is lost; the wavefront sensor will not be able to detect odd aberrations and consequently the
[Figure 4.2: Simplified schematic representation of an AO system for the eye.]
mirror will be incapable of correcting for them. Thus, in general, it is required
to break the symmetry of the double pass by using entrance and exit pupils of different diameters. This is, however, not required in a scanning microscope
since the scanning of the spot across the retina breaks this double-pass symmetry by introducing spatial incoherence; this enables the measurement of the
odd aberrations as well.
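The loss of odd-aberration information in a symmetric double pass can be illustrated with a toy one-dimensional calculation (NumPy assumed; the PSF used here is arbitrary, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary, deliberately asymmetric single-pass intensity PSF
# (an asymmetric PSF is the signature of odd aberrations such as coma).
single_pass = rng.random(9)

# Symmetric double pass (equal entrance and exit pupils): I1 = I2, so eq. 4.1
# reduces to an autocorrelation.
double_pass = np.correlate(single_pass, single_pass, mode="full")

# The autocorrelation is always symmetric even though the single-pass PSF is
# not: the asymmetry carrying the odd-aberration information has been lost.
psf_was_symmetric = np.allclose(single_pass, single_pass[::-1])
double_pass_symmetric = np.allclose(double_pass, double_pass[::-1])
```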
The techniques described in this chapter have been implemented in the AO
system presented in the following chapter. Though the interest in this work
lies in enhancing the resolution of retinal images, the same AO setup can be
used to enhance vision in the human eye beyond the conventional defocus
and astigmatism correction provided by spectacles, contact lenses or other developing technologies such as laser corrective surgery [43, 65]. Whether such
'super-spectacles' will ever make it into a neat small device which fits onto a
person’s nose is another issue. Visual enhancement by means of an AO system
however does offer the possibility of performing psychophysical experiments
designed to make use of the added visual acuity in the quest of understanding
better the complex functions of the retina and the brain.
4.2 COMPONENTS OF AN ADAPTIVE OPTICS SYSTEM
An AO system can be decomposed into three major subsystems, namely the
wavefront sensor, the active corrective element and the control system, as
shown in figure 4.1. The performance of the AO system is dependent on the
performance of each of these three elements. The aberrated incoming wave
must be accurately measured by the wavefront sensor since the system will
not be able to correct aberrations it cannot sense. On the other hand, being able
to measure the wavefront very accurately has little use for improving the resolution of the imaging system if the corrective device cannot correct for these
aberrations. Finally, the link between the wavefront sensor and the corrective
device is made by the control system which has to be efficient enough to reduce the delay between the measurement of the wavefront and its correction
so as to provide a sufficiently high sensing-correction temporal bandwidth to
match that of the dynamic atmospheric turbulence.
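Per loop iteration, the control system's job is typically a linear reconstruct-and-integrate step. The fragment below sketches one common control law (a leaky integrator); the gain values and the control matrix R mapping sensor measurements to corrector commands are illustrative, not taken from the thesis:

```python
import numpy as np

def ao_control_step(commands, measurements, control_matrix, gain=0.3, leak=0.99):
    """One closed-loop iteration of a leaky-integrator AO control law:
    c[k+1] = leak * c[k] - gain * R @ m[k],
    where R (the control matrix) maps wavefront-sensor measurements to
    corrector-element commands. Gains are illustrative."""
    return leak * commands - gain * control_matrix @ measurements
```

A higher gain reduces the residual error faster but risks instability given the sensing-correction delay; the leak term bounds the accumulated commands.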
The basics of AO are similar whatever the application being considered,
whether astronomical, ophthalmological or otherwise. The distinct characteristics of the different applications, however, mean that some aspects of the implementation of AO systems will change.
three subsystems making up an AO system, with specific emphasis on the
ophthalmological application.
4.2.1 Wavefront Sensing
Wavefront sensing deals with the measurement of the aberrations of a wavefront, or stated otherwise, the deviation of the wavefront from a plane wave.
Depending on the application, the wavefront sensor might need very high accuracy in representing the wavefront, high speed in taking measurements, the
ability to work with low light levels and simplicity of design. A low cost will
also be required for many applications. In an AO system, the choice of sensor
and its parameters will also depend on the rest of the system components; a
wavefront sensor does not need to sample and represent the incoming wavefront very accurately if the corrective device can only correct for a few low-order aberrations, for example.
Some wavefront sensing techniques for measuring the optical quality of the
human eye were discussed in chapter 2. Not all are, however, adequate for an
AO system. Interferometric sensors in astronomical AO must be ones which
do not require a reference arm since this is not usually available. These include shearing interferometers and point diffraction interferometers. Other
interferometers can be equally useful in applications in which it is possible
to have a reference arm, such as in ophthalmology. Of the wavefront sensors
which sample the wavefront locally, the Shack-Hartmann sensor is probably
the simplest in concept and the most widely used both in astronomy and ophthalmology. This is the wavefront sensor used in the work presented in this
thesis and it is explained in detail below. Other wavefront sensors include
curvature wavefront sensors, which measure the local curvature of the wavefronts rather than the local slopes [79, 88, 89]. This kind of sensor can offer
the benefit of a better match with curvature-based correcting devices such as
membrane deformable mirrors. The pyramidal wavefront sensor is another
technique which has been proposed and tested and it is an extension of the
Foucault knife-edge principle [54, 85]. The Fourier plane representing the
pupil coincides with the vertex of a pyramidal prism; four images of the pupil
are formed, each by the light being collected by one of the four faces of the
prism; these images can be used to reconstruct the wavefront aberrations.
It is also conceivable to have AO systems without a wavefront sensing subsystem, in which the control loop corrects for the aberrations by maximising certain metrics, such as the sharpness of the image or the fraction of the PSF intensity enclosed within the Airy radius of the optical system.
Shack-Hartmann Wavefront Sensor
The setup of a Shack-Hartmann wavefront sensor is fairly simple: a 2D array
of adjacent lenslets is placed in front of a CCD camera, where the distance between the lenslet array and the CCD is equal to the focal length of the lenslets.
Thus, if a plane wave is normally incident on the lenslet array, an array of
spots is formed on the CCD camera such that each spot lies on the optical axis
of its corresponding lenslet. The setup of the Shack-Hartmann sensor is illustrated in figure 4.3.
If we introduce tilt to the incident wavefront, then we have a shift of the array
of spots. This scenario is illustrated for the case of just one lenslet in figure 4.4.
Thus the displacement ∆x of the spot in the x-direction is a function of the gradient or tilt m_x of the incident wavefront in the x-direction and the focal length f of the lenslets; therefore the tilt can be calculated from

    m_x = ∆x / f,    (4.2)

and similarly in the y-direction:

    m_y = ∆y / f.    (4.3)

Figure 4.3: (a) 1D representation of the principle of operation of a Shack-Hartmann wavefront sensor. (b) The 2D spot patterns produced by a plane wave and (c) by an arbitrarily aberrated wavefront.
Figure 4.4: Spot displacement in a single Shack-Hartmann lenslet from a tilted wavefront.
In the above case, where pure tilt is the only aberration present in the wavefront, the gradients are simply given by the partial derivatives of the wavefront, which we denote by W(x, y), so that

    m_x = ∂W(x, y)/∂x,    (4.4)

    m_y = ∂W(x, y)/∂y.    (4.5)
We can now generalise further and assume an arbitrarily aberrated wavefront
incident on the lenslet array. Each lenslet will sample the wavefront locally
and the spot displacement will be proportional to the average tilt within the
aperture of the lenslet. Thus the Shack-Hartmann wavefront sensor is measuring the local tilt in the x- and y-directions, as illustrated in figure 4.5. The
average gradient is now given by

    m_x = ∫ (∂W(x, y)/∂x) ds / ∫ ds,    (4.6)

    m_y = ∫ (∂W(x, y)/∂y) ds / ∫ ds,    (4.7)

where ds is the infinitesimal element of area and the integration is over the lenslet aperture.

Figure 4.5: Approximation of an arbitrary wavefront by a wavefront consisting of plane-wave tilted segments.
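The slope relations (4.2)–(4.3) can be sketched numerically. The following is a minimal illustration, not the thesis code; the lenslet focal length and tilt values are assumed for the example only.

```python
import numpy as np

# Illustrative sketch: recover local wavefront tilts from Shack-Hartmann
# spot displacements using equations (4.2)-(4.3).
# Units: displacements in metres on the CCD, focal length in metres.

def local_tilts(dx, dy, focal_length):
    """Return per-lenslet x- and y-gradients m_x, m_y of the wavefront."""
    mx = np.asarray(dx) / focal_length
    my = np.asarray(dy) / focal_length
    return mx, my

# Example: a pure-tilt wavefront W(x, y) = a*x shifts every spot by a*f,
# so the recovered gradient equals a for each lenslet.
a, f = 1e-4, 5e-3                      # assumed tilt (rad) and focal length (m)
dx = np.full(16, a * f)                # identical x-shifts for a 4x4 lenslet array
dy = np.zeros(16)
mx, my = local_tilts(dx, dy, f)
assert np.allclose(mx, a) and np.allclose(my, 0.0)
```

For an arbitrarily aberrated wavefront, the same relation yields the average gradient of equations (4.6)–(4.7) over each lenslet sub-aperture.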
Whereas for the plane wave and for the wavefront containing only tilt the spot position can be determined from the centre of the symmetric intensity distribution produced on the CCD camera¹, the spot produced by an arbitrarily aberrated wavefront will not, in general, be symmetrical about its centre. Therefore the spot position has to be estimated in some other way. A suitable parameter to use as an estimate of the spot position is the centroid of the spot, given by the coordinates (x_c, y_c), which are the first moments of the intensity distribution:

    x_c = ∫ I(x, y) x dx dy / ∫ I(x, y) dx dy,    (4.8)

    y_c = ∫ I(x, y) y dx dy / ∫ I(x, y) dx dy.    (4.9)

¹ This intensity distribution will in general be an Airy disc or a 2D sinc-squared pattern, depending on whether the lenslets have circular or rectangular apertures.
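The first-moment centroid of equations (4.8)–(4.9) has a direct discrete analogue on a pixellated image; a minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

# Centroiding via first moments, equations (4.8)-(4.9): the centroid of a
# pixellated intensity distribution is its intensity-weighted mean position.

def centroid(image):
    """Return (x_c, y_c) of a 2D intensity array, in pixel coordinates."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    ys, xs = np.indices(image.shape)        # row (y) and column (x) indices
    xc = (image * xs).sum() / total
    yc = (image * ys).sum() / total
    return xc, yc

# Example: a single bright pixel at column 3, row 1.
img = np.zeros((5, 5))
img[1, 3] = 10.0
assert centroid(img) == (3.0, 1.0)
```

The integrals become sums over pixels, with the pixel value playing the role of I(x, y).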
Therefore the Shack-Hartmann wavefront sensor represents the wavefront as a set of pairs of gradients, m_x and m_y, containing one pair for each lenslet subaperture. It is sometimes useful to represent the wavefront in forms alternative to the raw gradient data provided by the wavefront sensor, for the purposes of
to the raw gradient data provided by the wavefront sensor for the purposes of
better interpretation and understanding of the wavefront maps, and also to be
able to compare the wavefront maps obtained from other kinds of wavefront
sensors. For this purpose it is possible to reconstruct the wavefront in terms of
a basis of choice. The most common basis for representing ocular wavefront
aberrations is the Zernike set of polynomials [14, 77]. This is mostly due to the
fact that the Zernike basis forms an orthonormal set over a unit circular pupil.
Therefore, the wavefront can be represented in terms of Zernike terms Z_i:

    W(x, y) = Σ_{i=1}^{∞} c_i Z_i(x, y),    (4.10)

where c_i is the coefficient of the i-th Zernike term. Substituting equation 4.10
above in equations 4.6 and 4.7, the x- and y-gradient measurements can be
used to reconstruct the wavefront in terms of Zernike polynomials.
However, for the purposes of an AO system, the reconstruction of the wavefront in terms of Zernike polynomials, or any other polynomial basis, is not
required since the gradient signals themselves can be used to generate control
signals to drive the active corrective element. This point will be detailed further in the discussion of the control system of an AO setup later in this chapter.
Since each lenslet in a Shack-Hartmann array gives a signal proportional to
the average gradient over the lenslet aperture area, the accuracy of the wavefront representation is limited by the sampling imposed by the finite size of
the lenslets. This clearly introduces an error in the reconstructed wavefront or
in its representation within an AO control system. This error could be minimised considerably by using smaller (and hence more) lenslets in the array
so that the wavefront is sampled more finely. This is usually not a problem in
applications where light levels are not an issue; however, most applications of
interest, including both astronomy and ophthalmology, tend to have restrictions on the amount of light available for wavefront sensing.
The problem brought about by low light levels is that the larger the number of lenslets used to sample the wavefront, the less light is available for
each lenslet; thus the spots produced would suffer from low signal-to-noise
ratios; this is made worse by the fact that the spots produced are
larger in size due to the diffraction limit imposed by smaller lenslets. This
decrease in signal-to-noise ratio increases the error in centroid location of the
spots, even up to a point where the spots are not detectable. Another problem
brought about by having a large number of lenslets, and hence spots, is that
the processing of each CCD frame will be more computationally intensive.
This might not be an issue for post-processing wavefront reconstruction but
it can be a concern in an AO closed loop where the excess delay can slow the
system down considerably. Thus a compromise on the number of lenslets to
be used in a Shack-Hartmann sensor has to be made taking into account the
amount of light available, the processing speed required, and the parameters
of the active corrective device in an AO system (since high sampling of the
wavefront might not be desirable if the active device can only correct for a
smaller number of aberrations).²
Thus, the principal sources of error in a Shack-Hartmann wavefront sensor
include the finite sampling of the lenslet array, Poisson photon noise, readout noise of the detector and speckle noise. Photon noise can be reduced by
increasing the amount of light used so that there is more light for wavefront
sensing whenever this is possible, or alternatively by reducing the number
of lenslets as discussed above. The choice of detector will affect the impact
of readout noise on locating the spot centroids, and speckle noise can sometimes be reduced by introducing scanning techniques to scan the illuminating
light on the scattering object thus averaging out speckle effects. This latter
method has been used successfully in wavefront sensing in the human eye
where speckle effects are considerable due to the highly scattering characteristics of the retina [49]. The imaging system implemented in the work described
in this thesis uses a pair of scanning mirrors to scan a spot into a raster on the
human retina; though the principal purpose of the two scanners is image
formation in a scanning microscope setup, it also offers the desired side effect
of reducing speckle noise in the wavefront sensor CCD images.
The algorithms used for locating the centroid of the spots can also introduce
their own errors. Defining an area of interest in which to look for each spot, for
example, can have a considerable influence on the centroiding accuracy. As an
illustration of this point, the various centroiding algorithms employed in the
design of the AO system presented here are outlined in the next subsection.
The algorithms were based on simulations carried out to investigate the effect
of the size and nature of the region of interest used to locate the spot centroids.
² The determination of the lenslet size in a Shack-Hartmann sensor for astronomical imaging is well understood; in particular, an estimate of this size is given by the Fried parameter, which is a measure of the quality of the wave after having propagated through the turbulent atmosphere [36, 46, 103].
Determining the Optimal Centroiding Algorithm through Simulations
In order to determine the appropriate centroiding algorithm to be used in the
AO system used in this work, a series of simulations was designed and run
using MATLAB. The code written simulates a single Shack-Hartmann spot by
generating a series of (x, y) coordinates representing photon arrivals on the
detector. This is achieved through Monte Carlo simulations using a 2D Gaussian probability density function (PDF). A similar procedure using a uniform
PDF was used to generate noise, which was added to the data representing the spot. The series of (x, y) points was then subdivided into finite square intervals, with the number of points within each square added to give a total value.
This process models the pixellation which occurs on a CCD camera, the sum
obtained for each square being equal to the output signal of the corresponding pixel. Figure 4.6 shows an example of the distribution of points obtained
from the Monte Carlo simulation and the corresponding pixellated image with
noise.
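The spot model described above can be sketched as follows. This is written in Python rather than the original MATLAB, and all parameter values (photon counts, spot width, pixel counts) are illustrative:

```python
import numpy as np

# Monte Carlo spot model: photon arrival coordinates drawn from a 2D
# Gaussian PDF, uniform background noise added, then binned into pixels
# to mimic the pixellation of a CCD frame.
rng = np.random.default_rng(0)

def simulate_spot(n_signal=1000, n_noise=200, sigma=0.1, n_pixels=50):
    """Return a pixellated image of a noisy Shack-Hartmann spot on [-1, 1]^2."""
    spot = rng.normal(0.0, sigma, size=(n_signal, 2))       # Gaussian PDF
    noise = rng.uniform(-1.0, 1.0, size=(n_noise, 2))       # uniform PDF
    points = np.vstack([spot, noise])
    edges = np.linspace(-1.0, 1.0, n_pixels + 1)
    image, _, _ = np.histogram2d(points[:, 1], points[:, 0],
                                 bins=[edges, edges])       # rows = y, cols = x
    return image

img = simulate_spot()
assert img.shape == (50, 50)
```

Each pixel value is the count of simulated photon arrivals falling within that square, as in the description of figure 4.6(b).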
This gives a representation of a single Shack-Hartmann spot in the presence of
noise, as output by a CCD camera. This model was thus used to compare centroiding algorithms. The basics of centroiding are fairly simple: an area of the
frame is defined and the centroid within that area is obtained. In our model it
is possible to measure the accuracy of centroiding since the actual position of
the centroid of the spot is known from the parameters used to create the distribution of photons making up the spot, and hence the difference between this
value and the value obtained after adding noise, pixellating and centroiding
is an estimate of the accuracy of the process. Because of the presence of noise,
the size of the search area used will have an effect on the centroid estimate, as
will the signal-to-noise ratio.
Figure 4.6: (a) 1000 Gaussian-distributed random points generated by a Monte Carlo simulation and (b) the corresponding pixellated image (50×50 pixels), including noise.
Therefore, an iterative process was set up: a search area was defined and used to locate the centroid; the search area was then shifted so that it was centred on the located centroid and its size reduced; and the centroid was calculated again. This procedure provides a measure of centroiding accuracy as a
function of the search area used. Figure 4.7 shows the centroiding accuracy
as a function of the number of iterations for various spots with different noise
levels in the image. The cases shown in figure 4.7(a) and (b) had an initial
search area of size 20 pixels by 20 pixels, with this dimension reduced by a
factor of 1.5 at every odd-numbered iteration (the even-numbered iterations
only shifted the search area so that it is centred on the previously located centroid). Thus, by the 10th iteration the width of the search area is less than 4 pixels, which is comparable to the actual size of the spot. Thus centroiding
accuracy is lost beyond this point since the search area becomes smaller than
the spot itself. For the case shown in figure 4.7(c), the initial search area had a
width equal to that of 10 pixels, which means that the point where the search area becomes of a comparable size to the spot is reached by the 8th iteration. The level of pixellation in the case represented by figure 4.7(b) is half that of the other two.
Another source of error brought about by this method of creating a search area
over an array of pixels is that in general the search area will not encompass
only whole pixels. As illustrated in figure 4.8, the boundary of the search
area will include within it only a fractional amount of each of the edge pixels.
However, centroiding over the whole array will include those pixels and this
will introduce a bias to the location of the centroid. Thus, a mask was created
so that the edge pixels are weighted depending on what fraction of the pixel
is included in the search area. This measure redresses the bias introduced by
the boundary pixels, and the graphs in figure 4.7 show that the inclusion of the mask does increase the accuracy of centroiding.

Figure 4.7: Simulated Shack-Hartmann spots with noise (left) and graphs showing the centroiding accuracy for the iterative process described in the text (right). Three cases are shown with different levels of pixellation ((a) and (c) are 50×50 pixels, (b) is 25×25 pixels), noise (signal-to-noise ratio is 2 for (a) and (c), 1.5 for (b)), and starting search area sizes (20×20 pixels for (a) and (b), 10×10 pixels for (c)).

Figure 4.8: Defined search area for centroiding superposed on the array of image pixels. Since the centre of the search area is a previous centroid estimate, the boundary of the search area falls arbitrarily across the edge pixels. Inclusion of the edge pixels introduces a bias in the centroid measurement unless they are appropriately weighted.
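The fractional edge-pixel weighting can be sketched as follows; the separable rectangular-window construction is an illustrative implementation choice, not the thesis code:

```python
import numpy as np

# Edge-pixel weighting mask: pixels wholly inside the search area get
# weight 1, pixels crossed by its boundary get the fraction of their area
# lying inside, and pixels outside get 0.

def edge_weight_mask(shape, x_lo, x_hi, y_lo, y_hi):
    """Weights for a rectangular search area with non-integer bounds."""
    ny, nx = shape

    # Overlap of each unit pixel [i, i+1) with the interval [lo, hi].
    def overlap(n, lo, hi):
        starts = np.arange(n, dtype=float)
        return np.clip(np.minimum(starts + 1, hi) - np.maximum(starts, lo), 0, 1)

    wx = overlap(nx, x_lo, x_hi)
    wy = overlap(ny, y_lo, y_hi)
    return np.outer(wy, wx)                 # separable in x and y

# Example: an area spanning x in [1.5, 3.5] on a 5-pixel-wide row gives
# weights [0, 0.5, 1, 0.5, 0] along x.
mask = edge_weight_mask((1, 5), 1.5, 3.5, 0.0, 1.0)
assert np.allclose(mask[0], [0.0, 0.5, 1.0, 0.5, 0.0])
```

Multiplying the image by this mask before computing the centroid redresses the bias introduced by the boundary pixels.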
4.2.2 Active Wavefront Correction
The second key subsystem of an AO system is the active corrective device. The
information about the wavefront aberrations provided by the wavefront sensor must be used to provide an appropriate phase correction to the wavefront.
Even though transmissive liquid crystals capable of modulating phase can be
used as active elements in an AO system, reflective devices have been more
successful thus far, primarily due to the fact that liquid crystal technology to
date does not allow sufficiently high bandwidth modulation and introduces
polarisation effects [67, 68].
A number of reflective deformable mirrors have been developed in recent
years and used in AO systems. The choice of correction device depends on
the nature of the aberrations being corrected, the ease required in driving the
mirror and the amount of correction required. The quality of the mirror correction can be quantified in many cases by the residual variance of the aberrations due to the mirror fitting error [46, 103].
In a segmented mirror, the reflective surface is made up of a number of adjacent segments that can be controlled separately [103]. Each segment could be controlled by just one actuator which provides piston motion.
This means that wavefront correction can be achieved by approximating the
wavefront by parallel segments each displaced by a piston term, as shown in
figure 4.9(a). These mirrors therefore introduce discontinuous phase corrections to the incoming wavefront which might not be the ideal correction for
continuous wavefronts such as those emerging from the typical human eye.
For this reason a relatively large number of actuators might be needed to accurately compensate for even the simpler lower order aberrations. This issue
can be resolved with segmented mirrors in which each segment has tip and
tilt motion as well as piston. Such a device can better correct for a continuous
wavefront, thus giving a smaller fitting error. The extra degrees of freedom offered in the latter case do however increase the complexity of controlling such
a mirror and can affect the speed of a closed-loop AO system. The lack of continuity of the segmented mirror surface also introduces issues with unwanted
light scattering due to the diffraction from the edges between the segments.
These losses can be eliminated by using a continuous facesheet surface with the actuators pushing directly on the reverse of the mirrored layer, producing a deformation of the whole surface.

Figure 4.9: Schematic representation of segmented deformable mirrors with (a) piston segments and (b) tip-tilt segments, and (c) a continuous facesheet deformable mirror.
Another kind of deformable mirror used in AO systems is the bimorph mirror [33, 59, 100]. As illustrated in figure 4.10, the bimorph mirror consists
of a segmented layer of piezoelectric material glued to a continuous layer of
mirrored substrate material. When a voltage is applied to the piezo layer it
expands laterally introducing a shear in the layered structure. This shear will
cause the combined structure to bend. Therefore, by applying different voltages to the different parts of the piezo layer, variable curvature of the reflecting surface can be obtained. As opposed to the segmented mirrors described above, the bimorph mirror surface is continuous and its first derivative is also continuous. Furthermore, bimorph mirrors can be manufactured relatively easily and at low cost, making them ideal candidates for corrective devices in applications with a low budget or for mass production.

Figure 4.10: (a) Layered structure of a bimorph deformable mirror and (b) typical electrode structure in the piezo layer.
The deformable mirror employed in this work is an electrostatically driven
membrane mirror in which a thin metallic membrane is placed just above a set
of electrostatic electrodes which pull the membrane locally depending on the
voltage applied to them. This last type of deformable mirror considered here is discussed below.
Membrane Deformable Mirror
The membrane deformable mirror is a continuous facesheet device in which
the reflective surface is typically a thin metallic-coated silicon nitride membrane held in place over a PCB containing the array of electrodes [105]. This setup is illustrated in figure 4.11.

Figure 4.11: (a) An illustration and (b) a cross-sectional representation of a membrane deformable mirror.

The membrane layer and the electrode layer form a capacitive arrangement. When a voltage is applied to an electrode, the membrane is pulled down locally so that it is deformed. Since the
electrostatic force produced by the electrodes can only pull the membrane in
one direction, the mirror has to be biased so that the membrane is pulled halfway, with this position serving as the zero-position. This has the effect of
reducing the stroke of the mirror by half, which can be a problem since most
existing available membrane mirrors already suffer from low stroke. In addition, the structural geometry of the membrane means that it is held in place
along its circumference, introducing a boundary condition which forces the
deformation to have a zero value at the edge. Thus for optimal wavefront correction, only a smaller central effective diameter of the reflective membrane
can be used³.

³ For the OKO TUDelft mirror used in this work, out of a total membrane diameter of 15 mm, the central region of diameter around 10 mm gives the optimal performance [80].
The maximum displacement of the membrane produced by a voltage being
applied to one electrode is directly above the centre of that electrode. The
mirror deformations can be described in terms of the curvature of the surface
by the membrane equation [80]:
    ∇²z(x, y) = −P(x, y) / T,    (4.11)
where z is the membrane deformation at the coordinates (x, y), T is the membrane tension and P is the pressure caused by the electrostatic attraction between the electrode and the membrane. P is a function of the permittivity of free space ε₀, the potential difference V between the electrode and the membrane, and the distance d between the electrode and the membrane; it is given by

    P = ε₀ V² / d².    (4.12)
The membrane deformable mirror can be modelled so as to compare the performance of different mirrors with different characteristics such as number of
electrodes, stroke, effective membrane area and other parameters, and quantify their performance for correcting for particular sets of aberrations. This approach has been used to model the correction of a 37-channel membrane mirror⁴ with wavefronts generated using Kolmogorov spatial statistics, similar to
those arising from the passage of light through atmospheric turbulence [80].
A similar approach has also been investigated using ocular aberrations from
a statistical set obtained from real aberration measurements on a large sample
of eyes, since there is as yet no mathematical model describing the statistical
distribution of typical ocular aberrations [57]. This modelling confirms observations that even though 37 actuators allow the generation of sufficiently
high-order correction, the low stroke available on these devices can strongly
reduce the effect of the higher-order term correction.
⁴ The parameters used for this simulation were those for an OKO TUDelft 37-channel membrane mirror.
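The membrane equation (4.11) can be solved numerically on a grid; a minimal sketch by Jacobi relaxation, with the zero-deflection boundary condition described above. Grid size, tension and the pressure pattern are illustrative values, not the OKO mirror's parameters:

```python
import numpy as np

# Solve the membrane equation (4.11), laplacian(z) = -P/T, by Jacobi
# relaxation on a square grid, with the deflection clamped to zero at
# the boundary (the membrane is held along its circumference).

def solve_membrane(P, T, h=1.0, n_iter=5000):
    """Relax laplacian(z) = -P/T with z = 0 held on the boundary."""
    z = np.zeros_like(P, dtype=float)
    for _ in range(n_iter):
        # Jacobi update: each interior point becomes the mean of its four
        # neighbours plus the local source term h^2 * P / (4T).
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1]
                                + z[1:-1, :-2] + z[1:-1, 2:]
                                + h * h * P[1:-1, 1:-1] / T)
    return z

# Example: uniform electrostatic pressure (eq. 4.12) under one central
# "electrode" patch; the peak deflection appears above the patch centre.
P = np.zeros((33, 33))
P[12:21, 12:21] = 1.0
z = solve_membrane(P, T=1.0)
assert z.max() == z[16, 16]
assert np.all(z[0, :] == 0) and np.all(z[:, 0] == 0)
```

The clamped boundary reproduces the behaviour noted in the text: the deformation is forced to zero at the edge, which is why only a smaller central region of the membrane is useful for correction.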
4.2.3 Control System
The previous sections have discussed the two components of an AO system
which measure and correct the incoming wavefront. However, the step between measuring the wavefront and producing the appropriate deformation
on the deformable mirror to correct for it is not a trivial one, since the wavefront sensor signals have to be converted into mirror electrode signals in a loop
which ensures the minimum possible system latency, and which is stable such
that the required level of correction is maintained over time. The control system described in this section is not the only possible approach in controlling
an AO system but it is the one used in this project, which is based on the AO
code used in a low-cost AO system described previously by Paterson et al. [80].
The deformable mirror can be represented in terms of the control signals supplied to it. For a control signal x_j, which corresponds to a voltage value applied to the j-th electrode, the membrane deformation can be described in terms of an influence function r_j(x, y). The set of influence functions for all of the n electrodes in the mirror forms a linear basis for the deformation of the membrane surface, such that the phase of the surface Φ(x, y) can be described in terms of a superposition of the influence functions weighted by their respective control signals:

    Φ(x, y) = Σ_{j=1}^{n} x_j r_j(x, y).    (4.13)
Φ(x, y) can be described in terms of an orthonormal basis such as the Zernike basis Z_i, which is a commonly used basis set, though not the only one, nor for that matter necessarily the best one⁵. This can be written as

    Φ(x, y) = W(x, y) Σ_{i=1}^{∞} c_i Z_i(x, y),    (4.14)

where W(x, y) is the unit circular aperture function given by

    W(x, y) = 1 if x² + y² ≤ 1, and 0 otherwise,    (4.15)

and the coefficients c_i can be obtained from the inner product of Φ(x, y) with the individual Zernike basis functions:

    c_i = ∫∫ W(x, y) Φ(x, y) Z_i(x, y) dx dy.    (4.16)

⁵ The orthonormality of the Zernike basis for the membrane mirror is justified by the fact that the effective membrane is circular with unit radius, after appropriate normalisation.
Substituting for Φ(x, y) from equation 4.13 gives the coefficients in terms of the control signals x_j:

    c_i = Σ_{j=1}^{n} [ ∫∫ W(x, y) Z_i(x, y) r_j(x, y) dx dy ] x_j.    (4.17)

Equation 4.17 can be represented in matrix form by

    c = Mx,    (4.18)

where c and x are vectors containing the elements c_i and x_j respectively, and M is the influence matrix of the mirror, whose elements M_ij are given by the double-integral term in equation 4.17. Therefore M_ij is a measure of the effect that the j-th electrode has on the i-th Zernike term.
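Equations (4.13), (4.17) and (4.18) can be sketched numerically. The Gaussian influence functions and the use of x- and y-tilt as the two modes below are illustrative assumptions, not the measured influence functions of the mirror used in this work:

```python
import numpy as np

# The mirror surface is a weighted sum of influence functions (eq. 4.13);
# its modal coefficients follow from the linear map c = Mx (eq. 4.18),
# with M_ij the overlap integral of equation (4.17).

n_grid = 64
yy, xx = np.mgrid[-1:1:1j * n_grid, -1:1:1j * n_grid]
pupil = (xx ** 2 + yy ** 2 <= 1).astype(float)      # W(x, y), eq. (4.15)
ds = (2.0 / n_grid) ** 2                            # area element

# Two hypothetical electrodes with Gaussian influence functions r_j(x, y).
centres = [(-0.4, 0.0), (0.4, 0.0)]
r = [np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.1) for cx, cy in centres]

# Two modes (tilt in x, tilt in y), normalised over the pupil.
modes = [m / np.sqrt((pupil * m ** 2).sum() * ds) for m in (xx, yy)]

# Influence matrix, eq. (4.17): M_ij = integral of W * Z_i * r_j.
M = np.array([[(pupil * Zi * rj).sum() * ds for rj in r] for Zi in modes])

# Pushing one electrode and pulling the other produces pure x-tilt, so the
# y-tilt coefficient of the resulting surface vanishes by symmetry.
c = M @ np.array([1.0, -1.0])
assert abs(c[1]) < 1e-6 and abs(c[0]) > 0.01
```

In a real system the influence functions are measured rather than assumed, as discussed under calibration below.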
It follows that the vector c contains the Zernike coefficients which completely represent the mirror surface when the actuator signals in x are applied. In practical implementations this is true only up to a finite number of Zernike terms, since the vector c has to be truncated to a finite size due to computational limitations. We can represent an arbitrary incoming wavefront, using the same number of terms, by the vector c₀ containing the respective coefficients. The mirror surface which gives the smallest least-squares error when correcting for this arbitrary wavefront is obtained from the actuator signals given by

    x = M⁻¹* c₀,    (4.19)

where M⁻¹* is the pseudoinverse of the influence matrix M. This provides a way of quantifying the residual wavefront error c_e after correction, given by

    c_e = M M⁻¹* c₀ − c₀ = [M M⁻¹* − I] c₀.    (4.20)
A similar representation can be extended to the wavefront sensor. The vector s of wavefront sensor signals arising from an arbitrary wavefront, which we denote by a vector of Zernike coefficients c₀, incident on the wavefront sensor is given by

    s = S c₀,    (4.21)

where S is the wavefront sensor response matrix. For a Shack-Hartmann sensor, the elements are given by

    S_ij = ∫∫ W(x, y) W_i(x, y) (∂Z_j(x, y)/∂x) dx dy    for x-gradient signals,
    S_ij = ∫∫ W(x, y) W_i(x, y) (∂Z_j(x, y)/∂y) dx dy    for y-gradient signals,    (4.22)

where W(x, y) is the aperture function as defined in equation 4.15 and W_i(x, y) is a similar aperture function for the lenslet aperture corresponding to the i-th signal. S_ij therefore represents the effect that the j-th Zernike term has on the i-th sensor signal output.
The AO control system must produce a set of mirror electrode signals x from the set of wavefront sensor signals s. This can be represented in matrix notation by

    x = Cs,    (4.23)
where C is the control matrix for the system. The wavefront sensed by the
wavefront sensor in an AO loop is the sum of the incoming wavefront phase c0
and the phase of the deformable mirror surface Mx (as given by equation 4.18).
In an ideal closed-loop system in which the wavefront sensor and the deformable mirror are perfectly matched, this should represent a plane wave
and all the sensor signals would have a value of zero. However, in a real
AO system perfect correction is not achieved and the wavefront sensor will
always measure residual wavefront aberrations. Thus, substituting for the input wavefront in equation 4.21, the wavefront sensor signal obtained is given
by

    s = S(c₀ + Mx).    (4.24)
If we represent the response of the sensor to the uncorrected incoming wavefront as s₀, and define the response matrix for the whole AO system as B = SM, then we can rewrite equation 4.24 above as

    s = s₀ + Bx.    (4.25)
Note that the elements of B are given by B_ln = Σ_m S_lm M_mn, where the summed index m runs over the number of Zernike terms being considered, while the indices l and n run over the number of wavefront sensor signals and the number of mirror electrodes respectively.
Since the vector s represents the measured residual wavefront aberrations it
can be used as a measure of the amount of correction achieved by the system;
we take this correction error to be equal to the least-squares error σ 2 given by
    σ² = s · s = sᵀs,    (4.26)
which, using equation 4.25 becomes
    σ² = (s₀ᵀ + xᵀBᵀ)(s₀ + Bx).    (4.27)
Thus we can find a set of mirror electrode signals which give a minimum
least-squares error by solving for x in
    (∂/∂x_i) [(s₀ᵀ + xᵀBᵀ)(s₀ + Bx)] = 0,    (4.28)
which has a solution given by

    Bᵀs₀ = −BᵀBx,    (4.29)

and hence

    x = −[BᵀB]⁻¹Bᵀs₀,    (4.30)
which is of the form required by equation 4.23 for the control of an AO system.
Therefore for the least-squares controller, the control matrix of the system is
given by
    C = −[BᵀB]⁻¹Bᵀ,    (4.31)
which is the negative of the pseudoinverse of B. Since B is an l × n matrix, for the product BᵀB to be non-singular, and hence invertible, it is necessary that l be greater than or equal to n. This means that we require at least as many wavefront sensor signals as mirror electrodes for controlling such an AO system⁶. The
control matrix in a practical AO system can be obtained through calibration.
The influence matrix is built up by applying voltages to each actuator on the
mirror and measuring the wavefront sensor signals for each electrode. The
control matrix is then obtained from the pseudoinverse of the influence matrix.
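The calibration procedure described above can be sketched as follows: poke each electrode in turn, record the sensor signals to build the system response matrix B column by column, then form the least-squares control matrix of equation (4.31). The simulated "mirror" here is an arbitrary linear system, purely for illustration:

```python
import numpy as np

# Calibration of an AO control loop: unit pokes build B, and the control
# matrix is C = -pinv(B), the negative pseudoinverse (eq. 4.31).

rng = np.random.default_rng(1)
n_signals, n_electrodes = 12, 5          # l >= n, as the text requires
B_true = rng.normal(size=(n_signals, n_electrodes))

def sensor_signals(x):
    """Stand-in for a wavefront measurement with mirror signals x applied."""
    return B_true @ x

# Calibration: apply a unit poke to each electrode and record the response.
B = np.column_stack([sensor_signals(np.eye(n_electrodes)[:, j])
                     for j in range(n_electrodes)])

C = -np.linalg.pinv(B)                   # equation (4.31)

# Closing the loop on an aberration the mirror can represent exactly
# should null the sensor signals: s0 + Bx = 0.
s0 = B_true @ rng.normal(size=n_electrodes)
x = C @ s0
assert np.allclose(s0 + B_true @ x, 0.0)
```

For aberrations outside the range of the mirror, the same controller leaves the least-squares residual of equation (4.27).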
In order to deal with the control matrix C as given above it is useful to consider
the singular value decomposition (SVD) of matrices. The SVD of the system
response matrix B can be represented in terms of an l × l orthogonal matrix U,
an n × n orthogonal matrix V and an l × n diagonal matrix Λ, so that
B = UΛVT ,
(4.32)
so that the control matrix C is given by the negative of the pseudoinverse of B,
C = −VΛ−1 UT ,
(4.33)
where Λ−1 denotes the pseudoinverse of Λ, obtained by transposing Λ and inverting its non-zero diagonal elements.

6 Since each spot in a Shack-Hartmann wavefront sensor provides two signals, namely an x- and a y-gradient, we only require a number of spots equal to half the number of electrodes.
The columns of the matrix U are a complete set of wavefront sensor modes
and hence they form a basis for the vector space s. Moreover, the orthogonality of U required by the definition for the SVD ensures that this basis is
orthogonal. Similarly, the columns of matrix V form a complete set of mirror modes and are an orthogonal basis for the vector space x. Therefore the
diagonal elements λi of the matrix Λ relate the sensor mode ui to the mirror
mode vi such that a mirror signal equal to vi gives a corresponding wavefront
sensor signal equal to λi ui . Since in an AO loop the wavefront sensed by the sensor is being corrected by the mirror, we can invert this relation so that the wavefront component represented by the sensor signal ui is fully corrected by a mirror signal equal to λi−1 vi . This means that if λi has a
very small value, then the i th mirror mode will be very sensitive to changes
in the measured wavefront sensor signal and hence, that particular mode will
also be sensitive to noise in the wavefront sensor signals. Such modes can
introduce instability in the AO loop and therefore it may be necessary to suppress modes with very low singular values. This can be done by restricting
the condition factor of an AO system, which is the ratio of the largest value to
the smallest value of λi .
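This suppression of low-singular-value modes can be sketched as follows (again a Python/NumPy illustration, with a synthetic response matrix whose singular values are chosen by hand rather than obtained from a real mirror-sensor calibration):

```python
import numpy as np

rng = np.random.default_rng(1)
l, n = 16, 8

# Build a response matrix with known singular values so the example is
# deterministic; a real B comes from the calibration step described above.
sv = np.array([10.0, 5.0, 2.0, 1.0, 0.5, 0.1, 0.01, 0.001])
Q1, _ = np.linalg.qr(rng.normal(size=(l, n)))
Q2, _ = np.linalg.qr(rng.normal(size=(n, n)))
B = Q1 @ np.diag(sv) @ Q2.T

# SVD of the response matrix, equation 4.32: B = U Lambda V^T
U, lam, Vt = np.linalg.svd(B, full_matrices=False)

# Restrict the condition factor: suppress mirror modes whose lambda_i is
# smaller than lambda_max / condition_factor, since they amplify noise.
condition_factor = 50.0
keep = lam >= lam.max() / condition_factor
lam_inv = np.where(keep, 1.0 / lam, 0.0)

C = -Vt.T @ np.diag(lam_inv) @ U.T    # regularised control matrix
print(int(keep.sum()), "of", n, "modes retained")   # 5 of 8 modes retained
```

Modes with zeroed singular values simply receive no correction, which is exactly the behaviour wanted for noise-dominated modes.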
The mirror-sensor system can have actuator modes vi which do not have a
corresponding non-zero λi . These mirror modes will therefore not be sensed
by the wavefront sensor. Such modes can build up in an AO loop because
they cannot be detected and corrected for. Such a build-up of unsensed modes will introduce unwanted phase components in the mirror surface which adversely affect the wavefront correction, and also use up mirror stroke, limiting the stroke available for other modes. Therefore the AO control algorithm must
include a way of eliminating the build up of these modes, as will be discussed
later in this section.
Similarly there can be wavefront sensor modes ui for which there is no corresponding non-zero λi ; these modes cannot be corrected by the mirror. Such
uncorrectable modes do not affect the AO loop other than the fact that since
they are not corrected, the aberrations they represent will still be present in
the resultant wavefront. All other modes for which there is a non-zero λi can
be sensed and corrected by the AO system.
Together with the spatial control described above, the AO system operating in
closed loop must also have a temporal control system. The control system is
an iterative one where the mirror control signals for the n th iteration depend
on the control signals used in the (n − 1)th iteration and the wavefront sensor
signals of the n th iteration. A simple temporal control can be expressed by
writing the mirror actuator control signals xn at the n th iteration as
xn = (1 − β)xn−1 + gCsn ,
(4.34)
where g is the gain of the system and β is a bleed parameter, having a value
much smaller than 1, which is introduced to prevent the building up of the
unsensed modes in the system. The values for the gain g and the bleed parameter β can be chosen by trial and error. The AO system in this project
allows the end user to provide values for these two parameters, as well as the
condition factor described above.
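The behaviour of this control law can be illustrated with a minimal closed-loop simulation. The sketch below (Python/NumPy, not the project's actual software) assumes a purely linear, static aberration, whereas the real loop runs against time-varying ocular aberrations; it shows the loop of equation 4.34 driving down the sensed residual.

```python
import numpy as np

rng = np.random.default_rng(2)
l, n = 16, 8
B = rng.normal(size=(l, n))       # simulated influence matrix
C = -np.linalg.pinv(B)            # least-squares control matrix

aberration = rng.normal(size=l)   # static aberration, in sensor-signal space
g, beta = 0.5, 0.01               # loop gain and bleed parameter

x = np.zeros(n)
for _ in range(50):
    s = aberration + B @ x            # wavefront sensor measurement
    x = (1 - beta) * x + g * (C @ s)  # temporal control law, equation 4.34

residual = aberration + B @ x
print(np.linalg.norm(residual) < np.linalg.norm(aberration))   # True
```

The residual converges to the part of the aberration outside the range of B, i.e. the component the mirror cannot reproduce, plus a small offset set by the bleed parameter.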
CHAPTER 5

Design and Implementation of the Laser Scanning Adaptive Ophthalmoscope
Some of the issues encountered when imaging the human retina in-vivo have
already been discussed earlier in chapter 2. Confocal microscopy provides
a powerful technique for slicing through the distinct layers of the retina and
imaging one layer at a time. In practice the thickness of the layers imaged depends on the axial resolution of the microscope which, in the case of a confocal
ophthalmoscope, is strongly reduced due to the aberrations introduced by the
optics of the eye. The adverse effect of ocular aberrations on axial, as well as
lateral, resolution is inherent to any imaging system which uses the optics of
the eye as a refracting element. It has also been discussed in chapter 2 that the
dynamic nature of the aberration fluctuations in the eye gives rise to the need
for dynamic aberration correction for optimal recovery of resolution. Adaptive optics provides a solution to this problem.
The imaging system described in this work combines these two techniques, confocal microscopy and adaptive optics, in an attempt to attain ever
CHAPTER 5: Design and Implementation of the LSAO 92
Figure 5.1: Schematic representation of the Laser Scanning Adaptive Ophthalmoscope (LSAO). SF - spatial filter, PBS - polarising beamsplitter, DM - deformable mirror, Sx and Sy - x- and y-direction scanners, HWP - half-waveplate, BS - beamsplitter, WFS - wavefront sensor, RP - reference path, CP - confocal pinhole, APD - avalanche photodiode.
Figure 5.2: The setup as mounted on the lab-bench.
higher axial and lateral resolution in retinal imaging. This chapter discusses
the design of the system and the implementation of this design through the
various stages of construction. The final version of the system is the one
shown schematically in figure 5.1, with a photograph of the actual breadboard-mounted setup in figure 5.2. This laser scanning adaptive ophthalmoscope
(LSAO) uses a 633 nm He-Ne laser as a light source. The beam, after being
cleaned by a spatial filter (SF)1 , is input into the rest of the system via a beamsplitter (PBS). The incoming beam reflects off the deformable mirror (DM) and
is relayed through a series of 4-f lens systems onto a pair of scanners (Sy and
Sx) which scan the beam in directions perpendicular to each other, and onto
1 The acronyms given in brackets correspond to the labels in figure 5.1.
Figure 5.3: Formation of a rectangular raster on the retina via two scanning mirrors. The instantaneous beam focused on the retina is moved across the retina to trace the raster pattern.
the pupil of the eye. The instantaneous beam is focused by the eye’s optics
onto the retina while the scanning of the beam moves the focused spot in a
rectangular raster on the retina. Figure 5.3 shows the scanned beam in the
system. Light backscattered from the retina goes through the same path as the
incoming beam and is transmitted through the beamsplitter (PBS). A second
beamsplitter (BS) channels a fraction of the light coming back from the retina
onto the wavefront sensor (WFS) and the rest is focused on the confocal pinhole (CP) in front of the detector (APD), the signal from which is fed into a
framegrabber to create the retinal image.
5.1 Imaging Subsystem
The basic image forming components in the system are those of a confocal
microscope in which the eye’s optics act as the collector/objective lens. This
setup is more commonly referred to as a scanning laser ophthalmoscope (SLO)
in ophthalmology. As a stepping stone towards building the setup shown
in figure 5.1, an SLO was first built in the lab. This basic SLO, illustrated
schematically in figure 5.4, was a useful precursor to the expanded system,
which includes the AO components, since it was a testbed for determining the
required parameters for imaging. The key aspects of the SLO are the source of
illumination, the scanning mechanism and the image formation system. All
three of these aspects were designed and implemented for the SLO shown in
figure 5.4; however the same three components were then transferred to the
LSAO and hence they will be discussed directly with reference to the latter
setup.
5.1.1 Illumination
The choice of illumination is critical in any imaging system. Different structures in the human retina absorb, reflect and transmit light of different wavelengths in different amounts. For example, green light, in the region between 500 nm and 550 nm, offers high contrast when imaging blood vessels in the eye since this wavelength is strongly absorbed by haemoglobin, the oxygen-carrying molecule found in red blood cells. This wavelength is also strongly
scattered by bleached photoreceptor cones making it a useful wavelength for
photoreceptor imaging. Longer wavelengths, closer to the near-IR region,
penetrate deeper in the layers of the retina and are scattered strongly by the
choroidal layers. This can cause loss of contrast in certain retinal imaging configurations.
Figure 5.4: Schematic representation of the SLO built as a precursor to the final system. SF - spatial filter, BS - beamsplitter, Sx and Sy - x- and y-direction scanners, CP - confocal pinhole, APD - avalanche photodiode.
Other considerations also need to be taken into account. Since the amount of
absorption of the retina is higher towards the shorter-wavelength end of the
visible spectrum, light intensity safety levels for illuminating the eye are lower
for the shorter wavelengths, and hence less light can be used to illuminate the
retina. Adding this to the fact that the higher absorption at these wavelengths
means that a smaller fraction of the incident light is reflected back out of the
eye for imaging, then considerably less light is available for imaging the retina.
The light intensity permissible in the near-IR region of the spectrum, on the
other hand, is an order of magnitude larger than that for green light [3] and
there is also another order of magnitude increase in the fraction of the incident
light reflected back out of the eye [28]. Yet another drawback encountered
when using green light is that this is the wavelength to which the retina is most sensitive. This can make the process of retinal imaging uncomfortable for the patient, thus reducing the likelihood of routine clinical use as well as potentially making the subject involuntarily less co-operative. This latter point can have wider-reaching implications in imaging systems where the co-operation of subjects is essential for minimising head and eye movements during the imaging process, which can considerably affect the imaging characteristics of the system.
Three different wavelengths were tested during preliminary stages of the construction of the SLO shown in figure 5.4. A 532 nm frequency-doubled diode-pumped solid-state laser (Melles-Griot)2 was used for illumination. The return signal from the eye at this wavelength, however, gave a very poor signal-to-noise ratio at the imaging detector. In addition, the CCD chosen for wavefront sensing was a high-speed camera (DALSA)3, which has its highest quantum efficiency towards the red and IR end of the spectrum, thus contributing to a low sensitivity at 532 nm. Even though the 532 nm source was the preferred wavelength for illumination for retinal imaging in the system,

2 Manufacturers of the principal components used are given in brackets.
3 The choice of detector for wavefront sensing will be discussed at a later stage in this chapter.
the poor signal-to-noise ratios obtained prompted the use of an alternative
wavelength. Thus the SLO was modified to use an 820 nm diode laser (Access
Pacific). As expected from analysis of retinal reflectance measurements such
as those given by Delori et al. [28], the light returning from the eye at this
wavelength was much stronger and gave a much higher signal-to-noise ratio
at the imaging detector than the green laser. Nevertheless, the source which
was finally chosen for the SLO and eventually also for the LSAO was a red
633 nm He-Ne laser (Spectra Physics). This was chosen as a compromise between the desired imaging characteristics of a wavelength in the green range
of the spectrum and the higher light levels available for imaging and wavefront sensing offered by the longer wavelengths.
The power reaching the pupil of the eye was controlled by placing fixed neutral density filters between the laser source and the spatial filter so that 100 µW
was incident on the eye. The incident power was chosen after careful
considerations of the maximum permissible exposure (MPE) levels for ocular
radiation as given by the British standard [3]. Further details on these safety
considerations are given in appendix A.
Initially, polarisation effects of the illuminating light were not considered, and
even though the laser source itself is linearly polarised, all the optics in the system were polarisation independent. The first versions of the lab setup shown
in figure 5.1 had a non-polarising beam splitter delivering light to the rest
of the setup instead of the polarising beamsplitter shown (PBS). This 90 : 10
beamsplitter coupled 10% of the light from the laser and spatial filter into the
rest of the system. This large ratio was chosen so that a large proportion of
the returning signal from the retina (90%) would be transmitted through the
beamsplitter. However, the reflections from the several doublet lenses making
up the relay systems, though strongly reduced due to the anti-reflection coatings on the lenses, were still strong compared to the signal returning from the
retina4 . This can be explained by the large difference in light intensity between
the incoming and outgoing beams due to the large light loss occurring at the
eye, making even very small reflections of the incoming beam from the lens
surfaces comparable to the signal returning from the retina. These reflections
resulted in strong signals on the wavefront sensing CCD making accurate spot
centroiding impossible.
These reflections were removed by replacing the beamsplitter feeding the light
from the laser to the rest of the system with a polarising beamsplitter as shown
in figure 5.1. By aligning the linearly polarised laser so that its axis of polarisation is parallel to the polarisation component reflected by the beamsplitter, all
the light from the source was reflected to the rest of the system. Since the reflections from the optical surfaces do not alter the polarisation of the reflected
light, all the reflections are blocked from the return path by the polarising
beamsplitter which only transmits the orthogonal component of polarisation.
However, the polarisation characteristics of the light returning from the eye
are affected by the eye itself. The cornea is a birefringent layer which retards
the polarisation component of the light along one of its axes, thus changing the linearly polarised light into elliptically polarised light [104]. In addition, the
retina also has birefringent properties and partially depolarises the light as it
scatters from it [31, 58]. The combined effect of these two polarisation changes
is that the returning beam has a polarisation component which is orthogonal to the polarisation of the incoming beam, and is thus transmitted through
the polarisation beamsplitter to the imaging and wavefront sensing branches.
This orthogonal component can be maximised by rotating the polarisation of
4 The reflection from the lens coatings is quoted as 10−3 while the ratio of light output from the eye to the incident light is 10−4 .
Figure 5.5: Screenshots showing the display of the Shack-Hartmann spots obtained from an
eye with the half-waveplate rotated so as to give (a) the strongest signal and (b) the weakest
signal at the Shack-Hartmann sensor.
the incoming beam just before it enters the eye. This is achieved by means of
a half-waveplate mounted on a rotating mount so that the returning signal is
maximised by rotating the waveplate. Figure 5.5 shows the wavefront sensor
signal obtained from an eye with the waveplate rotated so that the maximum
signal is obtained compared to the signal when the waveplate is rotated to
give a minimum signal.
5.1.2 Beam Size and Scanning
The size of the beam throughout the system is determined by the effective
sizes of the various key optical components in the system. The entrance and
exit pupils of the system are determined by the component on which we have
the least control, namely the eye’s pupil. An 8 mm diameter entrance beam,
this value being equal to the size of a fully dilated pupil in the human eye,
would give a diffraction-limited spot size of 2.7 µm on the retina, using the
expression for the diameter x of the Airy disc:
x = 1.22 λf / (nR),
(5.1)
where λ = 633 nm is the wavelength used, f is the focal length of the eye
equal to 18 mm, n is its refractive index which can be approximated to 1.33
for the whole eye, and R is the radius of the beam cross-section. In practice,
however, such a large entrance pupil would introduce considerable amounts
of aberrations to the incoming wavefront so that the spot produced would be
considerably larger than the calculated diffraction limit. SLOs usually use a
narrow input beam for illumination so that the effect of aberrations is negligible, since only a small central fraction of the pupil is used5 . A 2 mm diameter entrance beam would provide a spot size on the retina very close to the
diffraction-limited spot size of 10 µm. In the system being described in this
chapter, however, an AO system is used to correct for the eye’s aberrations.
Thus, a larger entrance beam was chosen so that after correction, the spot produced on the retina would be closer to the diffraction-limited value for the
larger pupil size. An entrance pupil of 6 mm was chosen so that the smallest
retinal spot size attainable is 3.5 µm. Similarly, it was shown in chapter 3 that the axial resolution of a confocal microscope has a strong fourth-power dependence on the sinc function of the optical coordinate u, which is itself a function of the numerical aperture squared. Thus a larger entrance pupil, in the presence of aberration correction, will strongly increase the axial resolution and
hence the optical sectioning properties of the microscope.
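The scaling of the optical sectioning with pupil size can be illustrated with a short numerical sketch (Python, not part of the thesis's software). The axial response I(u) = [sin(u/4)/(u/4)]^4 and the paraxial form of the optical coordinate u are the standard confocal expressions; since chapter 3 is not reproduced here, the exact normalisation used below is an assumption.

```python
import math

def axial_response(u):
    """Confocal axial response I(u) = [sin(u/4) / (u/4)]**4."""
    return 1.0 if u == 0 else (math.sin(u / 4) / (u / 4)) ** 4

lam, f, n = 633e-9, 18e-3, 1.33

def u_coord(z, pupil_diameter):
    """Paraxial optical coordinate u ~ (2*pi/lam) * n * z * sin(alpha)**2,
    with sin(alpha) ~ R/f so that NA = n*R/f, as in equation 5.1."""
    sin_alpha = (pupil_diameter / 2) / f
    return (2 * math.pi / lam) * n * z * sin_alpha ** 2

def fwhm(pupil_diameter):
    """Axial FWHM found by stepping z until the response falls to a half."""
    z = 0.0
    while axial_response(u_coord(z, pupil_diameter)) > 0.5:
        z += 1e-8
    return 2 * z

# The 6 mm pupil sections ~9x more finely than the 2 mm one (NA**2 scaling)
print(f"2 mm pupil: axial FWHM {fwhm(2e-3) * 1e6:.0f} um")
print(f"6 mm pupil: axial FWHM {fwhm(6e-3) * 1e6:.0f} um")
```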
This retinal spot size, however, is not the resolution of the imaging system. As
discussed in chapter 3 the lateral resolution of a confocal microscope limited
by diffraction is dependent on the square of the pupil function so that the
5 The motivation for a narrow input beam in most SLOs is to have different entrance and exit pupil sizes as a way of separating the incoming and outgoing beams.
resolution xc is given by
xc = 0.88 λf / (nR).
(5.2)
This expression gives a resolution of 2.5 µm for a confocal ophthalmoscope with
a 6 mm entrance pupil.
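The figures above can be checked by direct evaluation; the short sketch below (illustrative Python, not the thesis's software) simply evaluates equations 5.1 and 5.2 for the pupil diameters discussed.

```python
# Diffraction-limited spot size (eq. 5.1) and confocal lateral resolution
# (eq. 5.2) for the eye, using the values quoted in the text.
lam = 633e-9   # He-Ne wavelength (m)
f = 18e-3      # focal length of the eye (m)
n = 1.33       # approximate refractive index of the whole eye

def airy_diameter(pupil_diameter):
    """x = 1.22 * lam * f / (n * R), with R the beam radius."""
    return 1.22 * lam * f / (n * (pupil_diameter / 2))

def confocal_resolution(pupil_diameter):
    """x_c = 0.88 * lam * f / (n * R)."""
    return 0.88 * lam * f / (n * (pupil_diameter / 2))

for d_mm in (2, 6):
    d = d_mm * 1e-3
    print(f"{d_mm} mm pupil: spot {airy_diameter(d) * 1e6:.1f} um, "
          f"confocal resolution {confocal_resolution(d) * 1e6:.1f} um")
# 6 mm pupil: spot 3.5 um, confocal resolution 2.5 um
```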
The beam size however is not fixed to 6 mm throughout the whole system.
The deformable mirror (DM) used for wavefront correction has an effective
active diameter of 10 mm. Thus, to optimise the use of the DM, the diameter
of the beam incident on it was set to 10 mm. The beam is then reduced via
a 4-f relay system to a diameter of 3 mm due to the small reflective surfaces
of the scanning system, and another 4-f relay expands the beam again after
the scanning system to a diameter of 6 mm which is the entrance pupil at the
eye. On the return path, the beam reflected to the wavefront sensing branch is
reduced to a 2 mm diameter, which is the size of the CCD chip used. Besides changing the beam size, the relay lens systems throughout the setup also make the eye's pupil conjugate to both scanning surfaces, so that the beam is stationary on the return path, to the DM, since this corrects the wavefront present at the pupil plane, and to the lenslet array of the Shack-Hartmann sensor, which has to sense the wavefront present at the DM surface.
The term confocal in confocal microscopy refers to the fact that light is focused
at both the object and image planes, and this gives rise to the need for a scanning mechanism to form an x-y raster on the retina. The scanning mechanism
needs to be synchronised with the detection and image formation system so
that the voltage output from the detector can be used to re-create the retinal
image. Also, the return beam from the eye goes through the same scanning
system on the way to the wavefront sensing and imaging branches, thus descanning the return beam so that it is completely stationary after the scanning
system.
The scanning system (Electro-Optical Products Corp.) comprises an 8 kHz
resonant scanner which provides the horizontal beam scan and a 50 Hz galvanometer scanner for the vertical scan. A 1 : 1 relay system conjugates the
two scanning surfaces. The resonant scanner has a sinusoidal oscillation; in
order to simplify the construction of an image frame, only the central portion
of the forward scan was used for imaging. This portion can be considered to
be linear so that no manipulation of the pixels is required. The full scan could
also have been used in which case the pixels close to the edge of the image
where the scan is non-linear would have had to be appropriately resized to
eliminate image distortion at the edges. The vertical galvanometer scanner
is driven by a sawtooth signal. For safety reasons this scanner is set in such a way that its rest position (i.e. the mirror orientation when there is no
driving signal) blocks the beam from going through the rest of the optical system and reaching the eye. This feature prevents a stationary beam, for which
light safety levels are lower than for a scanned beam, from entering the eye. A
custom-built electronic board (Optisense) was added to the scanner driver box
so as to provide synchronisation signals at the start of each horizontal and vertical scan. These signals are input into the imaging framegrabber as triggers
to signal the start of a new line and frame for the purpose of reconstructing
the image. This point will be returned to later in this chapter.
Both scanners have a continuously variable amplitude, with maximum amplitudes of 20° for the horizontal scanner and 8° for the vertical one. In practice, however, the amplitudes of the two scanners have to be coupled so
that they respect the aspect ratio used by the image reconstruction software.
Since the images produced are in the form of 256×160 pixel frames, a 1.6 : 1 aspect ratio in the scanner amplitudes ensures that there is no distortion in the form of vertical or horizontal stretches in the image. This restricts the maximum usable amplitude of the horizontal scanner to 12.8°. Since the beam
is relayed from the scanning system to the eye’s pupil via a 1 : 2 4-f lens system, the effective maximum angular subtense of the scanned beam at the eye’s
pupil is 6.4° × 4°.
For the average adult eye, this maximum angular subtense represents a retinal patch of size (2.2×1.4) mm. Each pixel of the 256×160 pixel frame represents a retinal patch which is roughly (9×9) µm. As the scanner amplitudes
are decreased, a smaller patch of the retina is illuminated by the raster. Since
the level of pixellation is not changed, each pixel will correspond to a smaller
patch of the retina giving rise to a magnified image. Thus, varying the scanner
amplitudes provides us with a simple and smooth method for zooming into
the retinal image.
As we zoom in to obtain smaller-field, higher-magnification images, the size
of the retinal patch represented by one pixel will approach the size of the spot
produced on the retina. This will make the image resolution more dependent on the optical resolution of the system than on the level of pixellation.
When the pixel size is less than about a quarter of the diameter of the retinal
spot produced, the resolution of the image obtained is limited by the optical resolution of the system. This corresponds to the case where the trace of
the spot over the length of a single pixel is less than the Rayleigh separation
for two-point resolution. Assuming the spot size of 3.5 µm which is obtained
from a 6 mm diameter entrance beam in the diffraction-limited case, this point
is reached when the image field is about (200×140) µm large. In practice the
spot size will always be larger than the ideal diffraction-limited case which
means the above criterion is also valid for lower magnification images. For a high-resolution imaging system it is essential that the resolution is limited not by the pixellation of the image but by the optical resolution of the system.
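The numbers above can be reproduced in a few lines. The sketch below (illustrative Python) takes the frame dimensions, full-field patch size and spot size from the text and applies the quarter-spot criterion just stated; the slight difference from the quoted "about (200×140) µm" reflects the rounding in the text.

```python
# Pixel scale of the raster, and the field size below which the optics,
# rather than pixellation, limit the image resolution (quarter-spot rule).
nx, ny = 256, 160                  # pixels per frame
full_field_x, full_field_y = 2.2e-3, 1.4e-3   # retinal patch at max amplitude

pixel = full_field_x / nx          # ~9 um per pixel at the full field
print(f"full-field pixel size: {pixel * 1e6:.1f} um")

spot = 3.5e-6                      # diffraction-limited spot for a 6 mm pupil
# Optics-limited once each pixel spans less than a quarter of the spot:
limit_x = nx * spot / 4
limit_y = ny * spot / 4
print(f"optics-limited below a field of about "
      f"{limit_x * 1e6:.0f} x {limit_y * 1e6:.0f} um")
```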
As discussed above, scanning is essential in any confocal microscope in order
to form an image. The scanning mechanism however also offers us another
desired effect when dealing with an AO system. Because of the coherent nature of the laser light source and the roughness of the retina, reflection from
the retina gives rise to speckle effects. Speckle noise can introduce a large degree of error when performing centroiding of the spots in a Shack-Hartmann
image [49]. However, scanning moves the spot across different areas of the
retina, and since the scanning speed is much faster than the frame readout
speed in the Shack-Hartmann CCD, the speckle is averaged out. Figure 5.6
shows arrays of Shack-Hartmann spots obtained from the system for an artificial eye with a stationary beam and with a scanned beam. This considerable
reduction of speckle strongly increases the centroiding accuracy. Yet another
advantage brought about by scanning is the breaking of the double-pass symmetry which makes the detection and correction of odd aberrations possible.
Because of the high optical sectioning capabilities of the confocal microscope
it is essential to be able to scan in depth through the different layers of the
retina. The depth scan in the system being presented was implemented by the
axial translation of the lens closest to the eye in the system shown in figure 5.1.
This changes the divergence or convergence of the beam incident on the retina,
thus introducing small amounts of defocus which effectively scan the retina
axially.
Figure 5.6: Screen shots of the SH pattern (a) without and (b) with scanning of the beam.
5.1.3 Image Formation
The optical image formed in a confocal microscope is the image of the PSF
produced on the retina by the incoming beam at any one instant multiplied
by the reflectance of the retina at that location. A pinhole at the image plane
ensures the confocal nature of the microscope and is responsible for the increased lateral and axial resolution obtained. A 50.8 mm focal length lens is
used to collect the light and the pinhole is placed at its focus. The ideal pinhole
size in a confocal system is given by the size of the Airy diameter for the system [24], which for a 10 mm diameter pupil, using equation 5.1 above, gives
an ideal pinhole size of around 10 µm. In practice, however, the light throughput for such a small pinhole would be small enough to considerably degrade the signal-to-noise ratio. Also, the determination of the ideal pinhole size referred to above assumes diffraction-limited imaging, which is only achieved if we have perfect correction from our AO system; this is never the case
in practice. For these reasons a series of larger pinholes ranging from 25 µm
to 100 µm were used in the system. These larger pinhole sizes give a compromise between the ideal imaging characteristics of the system and an adequate
signal for detection.
The detection module used (Analog Modules) comprises a silicon-based
avalanche photodiode and an electronic amplification circuit. The output of
the detector module is a continuous voltage signal with amplitude proportional to the light intensity incident on it and whose temporal behaviour describes changes in reflectivity of the retinal layer being imaged across the x-y
raster being scanned. This signal can thus be used to reconstruct an intensity
image of the retinal patch scanned. For this reason, the analogue signal output by the detector is fed into a framegrabber card (DataTranslation) which
digitises the signal into a value between 0 and 255, where 0 represents black
and 255 white.
The framegrabber is controlled via software written in C++ using the software developer kit of the framegrabber. The TTL signals from the scanner
drivers are input into the framegrabber and provide the VSync and HSync
signals6 , which are the triggers signalling the start of the scan of the vertical
and horizontal scanners respectively. The arrival of every high TTL HSync
signal prompts the framegrabber to start displaying a new line and the high
TTL VSync signals indicate the start of a new frame. The framegrabber thus
reconstructs the retinal intensity image which is displayed in real time by the
imaging software written. The software also offers the possibility of recording
single frames as well as sequences of frames to disk. The saved frames can
6 TTL signals are standard pulse signals generally serving as trigger or logic signals. In video standards, the vertical and horizontal synchronisation triggers (VSync and HSync) are TTL signals.
then be exported to other software such as MATLAB for analysis and further
processing, and the sequence of frames can be used to create movies of the
image of the retina over the span of a few seconds.
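The triggered line-by-line reconstruction can be sketched as follows. This is a hypothetical Python illustration, not the actual C++ framegrabber software; `build_frame`, the sample counts and the toy signal are all invented for the example.

```python
import numpy as np

def build_frame(samples, hsync_idx, n_lines=160, n_pixels=256):
    """Re-create an image frame from the digitised detector signal.

    samples   : 1-D array of 0-255 intensity values from the framegrabber
    hsync_idx : sample index at which each HSync trigger arrived; each
                trigger starts a new line (VSync, not shown, starts a frame)
    """
    frame = np.zeros((n_lines, n_pixels), dtype=np.uint8)
    for line, start in enumerate(hsync_idx[:n_lines]):
        frame[line] = samples[start:start + n_pixels]
    return frame

# Toy signal: 160 lines of 300 samples each, of which the first 256 are imaged
signal = np.tile(np.arange(300) % 256, 160).astype(np.uint8)
starts = np.arange(0, 160 * 300, 300)
image = build_frame(signal, starts)
print(image.shape)   # (160, 256)
```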
Even though measures have been taken in the design of the system to maximise the signal-to-noise ratio at the detector, the frames recorded by the system still had a non-negligible noise level. For this reason software for processing the frames was written using MATLAB so as to align a sequence of frames
with each other and average them, thus also averaging out some of the noise
present in the images. The procedure is described in more detail in chapter 6
when the retinal images are presented.
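One common way to implement such an align-and-average step is cross-correlation registration. The sketch below is not the MATLAB code actually used; it is a Python/NumPy illustration that assumes integer-pixel shifts and noise-free frames.

```python
import numpy as np

def align_and_average(frames):
    """Register each frame to the first by cross-correlation, then average.

    Integer-pixel shifts are estimated from the peak of the FFT-based
    circular cross-correlation; np.roll then undoes the measured shift.
    """
    ref = frames[0].astype(float)
    acc = ref.copy()
    F_ref = np.fft.fft2(ref)
    for frame in frames[1:]:
        frame = frame.astype(float)
        corr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(frame))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
        # circular shift: strictly valid only for small displacements
    return acc / len(frames)

rng = np.random.default_rng(3)
ref = rng.random((32, 32))
shifted = np.roll(ref, (3, 5), axis=(0, 1))
print(np.allclose(align_and_average([ref, shifted]), ref))   # True
```

Averaging N registered frames reduces uncorrelated noise by roughly a factor of sqrt(N), which is the benefit sought here.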
5.2 Wavefront Sensing and Correction
The system has been described so far in this chapter in terms of its image-forming components; this imaging system, however, benefits from aberration correction via the AO system, represented in the schematic diagram in figure 5.1
by the deformable mirror and the wavefront sensing branch. The following
sections will discuss the implementation of these two key AO system components, together with the control system which is the third component.
5.2.1 Wavefront Sensing Branch
The wavefront sensor chosen for this project is the Shack-Hartmann sensor
due to its simplicity of implementation and relative ease of control. In the
design of a Shack-Hartmann sensor it is necessary to consider the spatial and
temporal sampling of the wavefront which is to be measured. The spatial sampling is performed by the array of lenslets which divide the wavefront into
subapertures over which the average gradient is measured, while the temporal sampling is dependent on the readout characteristics of the CCD used for wavefront sensing.
A lenslet array has two defining parameters, namely the pitch of the lenslets
and their focal length. The element used (WelchAllyn) is a regular array of
lenslets with a centre-to-centre distance of 200 µm in which each lenslet has
a focal length of 7 mm. This means that for a diffraction-limited spot formed
by these lenslets, using equation 5.1, we get a spot size of 54 µm. The spots
were focused on a CCD camera (DALSA) with 16 µm × 16 µm pixels, and thus
a diffraction-limited spot would be sampled by about 9 pixels on this camera.
In practice, the spots obtained from a real eye are larger than this and hence
more pixels are required. The advantage of using short focal length lenslets
is that the spots produced are smaller than if longer focal lengths were used.
This means that the light intensity per spot is spread over a smaller number of
pixels giving a higher signal-to-noise ratio. This also makes possible the use of
smaller search areas when calculating the spot centroids which, as illustrated
by the simulations presented in chapter 4, give rise to higher centroiding accuracy due to the reduced noise level associated with a smaller number of pixels.
The choice of focal length for the lenslets must also ensure however that the
spot produced is larger than a single pixel in order to make spot movements
detectable.
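The spot-size arithmetic above can be reproduced numerically. The sketch below assumes that equation 5.1 gives the Airy-disc diameter 2.44 λf/d, and assumes a wavelength of 633 nm (the wavelength is not stated in this excerpt); with the quoted pitch and focal length this recovers the 54 µm figure.

```python
# Sketch of the Shack-Hartmann sampling arithmetic from the text.
# Assumes spot diameter = 2.44 * lambda * f / d (Airy-disc diameter; the
# exact form of equation 5.1 is assumed) and a HeNe-like wavelength of
# 633 nm -- the wavelength is an assumption, not given in this excerpt.

wavelength = 633e-9   # m (assumed)
pitch = 200e-6        # m, lenslet centre-to-centre distance
focal_length = 7e-3   # m, lenslet focal length
pixel = 16e-6         # m, DALSA CCD pixel size

spot = 2.44 * wavelength * focal_length / pitch   # diffraction-limited diameter
print(f"spot diameter: {spot * 1e6:.0f} um")      # ~54 um, as quoted
print(f"pixels across a spot: {spot / pixel:.1f}")  # ~3.4, i.e. a roughly 3x3 patch
```

The "about 9 pixels" per spot quoted in the text corresponds to this roughly 3 × 3 pixel patch.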
The signal-to-noise ratio of the Shack-Hartmann spots produced on the CCD
also depends on the readout speed of the camera. A higher readout speed
means that each frame is the result of a shorter integration time thus giving a
weaker signal. On the other hand, faster CCD readout speeds translate into
higher temporal sampling of the incoming wavefront which determines the
speed of the closed loop correction of the AO system [46, 103]. Therefore
in practice, a compromise must be found between sampling speed and
signal-to-noise ratio. A fast CCD camera was chosen so that the
wavefront sampling speed could be determined by the light levels available
from the eye and not by the technical limitations of the CCD camera itself. The
choice of a fast camera also brings about its own drawbacks, namely that of
an increased readout noise level of the camera. Notwithstanding the increased
noise due to the CCD, it was still possible to sample the wavefront from the
eye at around 100 frames per second (fps), this being the highest frame rate
for which the Shack-Hartmann spots had a sufficient signal-to-noise ratio to
be accurately located by the centroiding software; this frame rate is more than
twice the readout rate achievable with other cameras tested in our labs that
offer a much lower noise level: high-specification QImaging cameras showed a
noise level lower by a factor of 20 than that of the DALSA camera employed,
but maximum frame rates below 50 fps, whereas the DALSA CCD has a maximum
frame rate of around 800 fps. The system was thus designed to achieve the
highest frame rate possible with the available technology, thereby also
maximising the bandwidth of the closed-loop system. A framegrabber (MuTech) is
used to transfer the frames containing the Shack-Hartmann spots to the
computer running the AO control system.

5.2.2 Wavefront Correction

The correction of the wavefront is achieved using a 37-element membrane
deformable mirror (OKO TUDelft). Figure 5.7 shows the geometry of these
electrodes. The signals output by the control system are amplified by a
40-channel amplifier (Thompson) with a high-voltage power supply (Delta) so
that a potential difference in the range 0 V to 200 V is applied to each
electrode. The membrane is held at zero potential.

Figure 5.7: Geometry of the 37 electrodes of the OKO TUDelft deformable mirror.

The maximum stroke at the centre of the mirror is 9 µm. However, the mirror is
biased so that the membrane zero-position is mid-way through its full range of
movement. Biasing the mirror reduces the effective stroke available on the
mirror to half the maximum
value. Nevertheless, the maximum optical path change applied to the incoming wavefront is still 9 µm at the centre of the pupil since the deformation
of the wavefront occurs as a result of a reflection from the membrane surface which doubles the amplitude. It must be noted that the figure quoted
above represents the maximum range of movement of the centre of the mirror. When dealing with individual aberration components, whether Zernike
terms, mirror modes or other, the maximum amplitude change of the mirror
surface is less than this maximum value, with the effective stroke decreasing
as the spatial order of the aberrations increases. Therefore even though the
37-element mirror is capable of deformations which can be decomposed in 37
mirror modes, the effective correction available for the higher orders of these
modes is restricted by the stroke of the mirror for these orders; the maximum
number of useable system modes in the system presented here was 10 out of
the possible 37.
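The effect of biasing can be made concrete with a small sketch. It assumes the usual quadratic voltage-deflection law of an electrostatic membrane mirror (the actuation law is not given in the text), and shows why biasing to mid-stroke halves the mechanical stroke in each direction while reflection restores the full 9 µm of wavefront correction.

```python
import numpy as np

# Assumed model: electrostatic membrane deflection scales roughly as V^2.
# Full mechanical stroke at the centre is 9 um over the 0-200 V range.
V_max = 200.0       # V
stroke_max = 9.0    # um

def deflection(V):
    """Centre deflection in um for applied voltage V (quadratic model, assumed)."""
    return stroke_max * (V / V_max) ** 2

# Bias voltage giving the mid-stroke position (4.5 um of deflection):
V_bias = V_max / np.sqrt(2)           # ~141 V
print(round(deflection(V_bias), 2))   # 4.5 um: half the range in each direction

# On reflection the optical path change is doubled, so the full 9 um of
# wavefront correction quoted in the text is retained at the mirror centre.
```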
As discussed in chapter 3, the resolution of a confocal microscope depends on
the PSF of both collector and objective lenses. In a confocal ophthalmoscope,
the eye’s optics take up both roles, the former on the incoming path and the
latter on the outgoing path. Thus to obtain the maximum benefit from aberration correction, the aberration of both paths has to be corrected. This can
be achieved by placing the deformable mirror in the common path for both
incoming and outgoing beams. The phase of the illuminating plane wave is
altered upon reflection from the mirror on the first pass. In an ideal system this
introduces aberrations to the incoming wavefront which are equal in magnitude but with an opposite sign to the aberrations introduced by the eye and
are therefore cancelled out on the first pass through the eye’s optics. On the return path, the wavefront is aberrated again as it goes through the optics of the
eye on its way out of the eye; these aberrations are again cancelled out by the
phase change upon reflection from the mirror on the outgoing path. Figure 5.8
illustrates schematically how the same membrane deformation achieves the
correction for both beams.
5.2.3 Controlling the AO System
The extraction of the set of signals from the wavefront sensor frames and their
conversion to electrode signals is taken care of by the control system. The
code used for this project is an adaptation of the code developed within the
group by Paterson for a low-cost AO system [80]. The system developed by
Paterson et al. was designed with the same deformable mirror and wavefront
sensing camera as the one used in the system presented in this thesis, making
not only the control algorithm but also the code for interfacing with the hardware reusable with only minor modifications. The hardware interfacing had
been written for a Linux operating system and was thus reused on a
Linux-based PC. This machine was used only to run the AO system. The retinal
image acquisition, display and recording was done on a separate machine so
that the transfer of data to the computer from the APD detector and from the
wavefront sensing CCD would not have to compete for the same bandwidth on the
computer bus, which would slow down both acquisition processes and, in
particular, reduce the closed-loop bandwidth of the AO system.

Figure 5.8: (a) Incoming path: a plane wave incident on the deformable mirror
(1) is pre-aberrated (2 and 3), with these aberrations cancelling the
aberrations introduced by the eye to give an unaberrated wavefront incident on
the retina (4), in the ideal case. (b) Outgoing path: light from the retina
(1) is aberrated by the optics of the eye (2 and 3), with these aberrations
corrected on reflection from the deformable mirror (4).
Initially, a reference beam is used to provide the Shack-Hartmann sensor with
a plane wave. The spots produced by the plane wave define the zero-position
for every spot from which the deviation of spots from aberrated wavefronts
can be calculated. The reference beam is provided by a mirror as shown in
the path labelled RP in figure 5.1. The 2D Fourier transform of the reference
frame is taken and the average spot spacing can be estimated. This estimate
is used to define the search areas within which the spot centroids are to be
calculated; the centroids of the reference spots are therefore found. A calibration process is also required in order to build up the system matrix B as
defined in chapter 4. A voltage is applied to each electrode on the deformable
mirror one at a time, and for each electrode the x- and y-displacements of the
Shack-Hartmann spots are measured. An ‘artificial eye’, consisting of a positive achromat lens placed at the eye’s pupil plane with a sheet of paper as a
scattering target at its focal plane, was used for calibration purposes.
The wavefront sensor signals obtained from the calibration process make up
the system influence matrix B.
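The poke-calibration loop described above can be sketched as follows. Here `poke_electrode` and `measure_centroids` are hypothetical stand-ins for the hardware interface, not the thesis code; the latter is taken to return the concatenated x- and y-positions of all Shack-Hartmann spot centroids.

```python
import numpy as np

def build_influence_matrix(poke_electrode, measure_centroids,
                           n_electrodes=37, poke_volts=100.0):
    """Sketch of the calibration that builds the influence matrix B.

    poke_electrode(i, V) and measure_centroids() are hypothetical stand-ins
    for the hardware interface; measure_centroids() returns the concatenated
    x- and y-centroid positions of the Shack-Hartmann spots.
    """
    reference = measure_centroids()      # spot positions with all electrodes at rest
    columns = []
    for i in range(n_electrodes):
        poke_electrode(i, poke_volts)    # actuate one electrode at a time
        columns.append(measure_centroids() - reference)
        poke_electrode(i, 0.0)           # return the electrode to rest
    return np.column_stack(columns)      # shape: (2 * n_spots, n_electrodes)
```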
Having obtained the reference spots and calibrated the system to obtain the
system matrix required for the control algorithm defined by equation 4.34, the
condition factor, gain and bleed parameter (the three parameters defined in
chapter 4) are chosen, and the control matrix C is calculated before the AO
subsystem is switched on. At the start of each iteration of the AO loop, a
frame from the wavefront sensing CCD is
read and the centroid of each spot is calculated. The centroiding algorithm
from the original code, which calculated a single centroid per spot over the
pre-defined search area, was modified to use an iterative centroiding process
whereby the search area was reduced in size and centred on the most recent
centroid value at every iteration, as implemented in the simulations discussed
in chapter 4. The mask with weighted edge pixels was used. It also proved
useful to monitor the signals output to the mirror electrodes to ensure that for
a given set of input parameters the correction applied was stable and that the
electrodes were not clipping to their maximum or minimum value.
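A minimal sketch of the control computation follows, assuming that equation 4.34 has the common leaky-integrator form built on a condition-factor-truncated pseudo-inverse of B; the exact form used in the thesis is not reproduced here, and the roles of the condition factor, gain and bleed parameters are as described in the text.

```python
import numpy as np

def control_matrix(B, condition_factor=0.05):
    """Pseudo-inverse of the influence matrix B, with singular values below
    condition_factor * s_max discarded; the truncation sets the number of
    system modes used (assumed construction)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    keep = s > condition_factor * s[0]
    return (Vt[keep].T / s[keep]) @ U[:, keep].T

def ao_step(v, slopes, C, gain=0.3, bleed=0.99):
    """One leaky-integrator AO update (assumed form of the control law):
    decay the current electrode signals and subtract the gained correction."""
    return bleed * v - gain * (C @ slopes)
```

With the bleed slightly below one, electrode signals decay slowly towards bias when the measured slopes vanish, which helps keep the actuators away from the clipping behaviour monitored in the text.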
It is useful to monitor the double pass PSF from the eye to give a measure of
the optical quality of the system and to compare the optical quality before and
after AO correction. To monitor the double pass PSF, a mirror was introduced
to reflect the light being focused onto the confocal pinhole onto a CCD camera
(Pulnix). This modification to the imaging branch of the system is illustrated
in figure 5.9. The mirror was mounted in a way such that it can easily slide
in and out of the beam path so that switching between retinal imaging and
double pass PSF monitoring is easily implemented.
5.3 OPERATION OF THE LSAO
The AO-assisted confocal microscope described above is completed with the
human eye, which provides not only the target to be imaged, namely the
retina, but also the optics which act as collector and objective lenses for the
confocal microscope. The placement of this final component of the optical system offers more difficulties than the other components since the whole human
body has to be positioned so that the eye is aligned with the optical system,
and even then it is not possible to eliminate all movements of the eye with
respect to the rest of the system.

Figure 5.9: Imaging branch of the system, showing how a sliding mirror is used
to switch between retinal imaging and double-pass PSF imaging.

In order to minimise these movements, a
bite bar is used. The bite bar consists of a fixed U-shaped aluminium mount
covered in dental wax on which an imprint of the subject’s teeth is made. The
subject bites on the imprint and thus head movements are strongly restricted. The bite bar is mounted on a combination of two translation stages
and a high-resolution lab jack, allowing the eye to be aligned accurately with the incoming beam.
The alignment procedure consists of mounting the bite bar on the breadboard
bench and, with the subject biting on the dental imprint, translating the head
laterally and vertically, perpendicular to the beam, until the pupil
of the eye is approximately centred on the incoming beam. The lens closest to
the eye is translated if required to provide a coarse adjustment for any defocus in the subject’s eye. The pupil is then translated axially so that the scanned
beam is stationary at the pupil, as illustrated in figure 5.3. For an emmetropic
eye this occurs when the distance between the eye’s pupil and the last lens in
the system is equal to the focal length of that lens. The subject is then asked to
look at a fixation target which sets the required angle of the eye with the beam,
thus determining which part of the retina is being imaged. A target image on a
computer screen is used for fixation. At this stage, further lateral and vertical
fine adjustments of the translation stages are made to the position of the pupil
to achieve accurate positioning with respect to the incoming beam. Once this
alignment procedure is carried out for a subject, it is possible for the subject
to move away from the bite bar and go back to the same position with only
minor adjustments being required. The correct pupil alignment can also be
monitored from the wavefront sensor since the full pupil of Shack-Hartmann
spots is only obtained when the eye’s pupil is well-aligned to the system. Indeed this is essential for the functioning of the AO system since all the spots
within the pupil are required for the control system.
The image-formation system and the AO system are completely independent
of each other from the point of view of their computer interfaces. The retinal
imaging software allows the retinal image to be viewed on screen in real time,
to record a single snapshot of the image or to record a time series of frames.
The AO system can be switched on at any stage of the image acquisition process.
The full imaging procedure is usually completed in less than 30 minutes. The
time spent by the subject looking into the illumination beam is, however, usually less than 3 minutes in total. The initial alignment procedure is usually
performed in under a minute, after which the subject can move away from
the system. The reference measurement and calibration of the system can be
done at this stage. The subject is then asked to place his or her eye back into
the system for a series of short time periods, around 20 seconds each, where
the AO parameters, namely the condition factor, gain and bleed parameters,
can be chosen. The target for setting these parameters is to try and maximise
the number of system modes used (through the condition factor) and the gain
of the system while ensuring that the AO correction is stable and the mirror
actuators are not clipping to their maximum value. This is achieved by monitoring the actuator voltages while running the AO closed loop. Experience
with using the system makes the choice of parameters easier; usually 3 or 4
sets of parameters need to be tried before the ideal set is chosen. The system
is then ready to take images of the retina with or without adaptive
correction. The time required for image acquisition depends on
the number of frames being recorded, but it is always shorter than a minute.
Before imaging the retina, the subject’s pupil is allowed to dilate by keeping
the eye in the dark. In some instances, artificial pupil dilation is used by instilling
one drop of 1% tropicamide in the eye, though natural dilation is generally
sufficient.
CHAPTER 6
Analysis of Retinal Images and AO Correction
The prototype system discussed in the preceding chapter was built on a lab
optical bench, as shown in figure 5.2, and several series of retinal images were
taken to demonstrate the effect of adaptive optics (AO) correction on image
quality; more specifically on image contrast and resolution. Chapter 5 also illustrated how the double pass point-spread function (PSF) at the image plane
can be monitored and recorded, and this data was also collected and used
to demonstrate the effect of the AO system on aberration correction. This
chapter will present a sample of the data collected from the system and a discussion and analysis of these images. The retinal images shown are from the
right eyes of three subjects, ranging from an emmetropic eye to an eye requiring -2
diopters (D) of sphere and -4 D of cylinder correction. The subjects
who required sphere and cylinder correction wore their prescribed spectacles
throughout the whole imaging procedure to minimise the amount of static
aberrations the AO system had to deal with.
6.1 AO-CORRECTED RETINAL IMAGES
As retinal imaging systems keep improving, automated analysis of retinal features is becoming ever more possible; this enables the automatic diagnosis of
certain retinal diseases. An example of this is offered by commercial SLOs,
such as the Heidelberg Retinal Tomograph II [1], which estimate the change
in depth of the optic disc across different regions and compare these measurements to a database to determine whether the eye is possibly suffering from
glaucoma. However, even with the state of present-day technology, it is still
necessary for experienced ophthalmologists to make judgements by visual inspection of the retina or of retinal images, and this is not likely to change in
the near future. For this reason one of the most useful metrics for assessing
retinal image quality is also one of the simplest: the subjective perception of
the retinal image when viewed.
Before considering the contrast and resolution of the retinal images obtained,
the actual images are presented showing the visual improvement in image
quality with the aid of AO correction. The raw frames obtained from the imaging system had a significant amount of noise present, and for this reason an
alignment and averaging procedure was developed to reduce the noise level
in the retinal images without compromising the image resolution. This noise
reduction procedure also ensures that the small scale features visible in the
images, such as bright spots representing light from photoreceptors, are actual retinal features and not noise artefacts. The MATLAB algorithm written
for this purpose is described in the following section.
6.1.1 Alignment and Averaging of Frame Sequences
The voltage output from the APD detector is composed of the signal proportional to the retinal reflectivity and a noise background signal. This noise is
mostly due to the intensity instability of the laser source and the readout noise
of the detector which is amplified by the amplification electronics. This noise
translates into a randomly-distributed digital value on each pixel in the image
frame. Because of the random nature of this noise its effect on the image can
be reduced by summing a number of successive frames so that for each pixel
the noise value is averaged out. The problem with this technique for the purpose of averaging out retinal images is that because of the unavoidable eye
movements during imaging, each successive frame represents a retinal image
which is slightly shifted from the previous frame. The larger the number of
frames averaged, the greater the noise reduction, but the range of shifts
across the whole set of frames is also larger, reducing the contrast
and resolution of the final averaged image. This is clearly demonstrated by
the first two images shown in figure 6.1(a) and 6.1(b). The first is a raw single
frame obtained from the imaging system; it shows the optic disc head of the
right eye of the subject. The second is the average of a sequence of 50 frames.
The noise level on the first image is evident; however, even though the noise
was significantly reduced in the second image due to the averaging process,
both contrast and resolution were lost, largely nullifying the benefit of the noise reduction.
This issue can be resolved by aligning the sequence of retinal frames with
respect to each other before averaging them. The 2D convolution function
conv2 in MATLAB was used to correlate the sequence of frames with the
first frame of the sequence. The shift in the correlation peaks was then used to
shift the actual frames so that they are all aligned with the first frame in the sequence.

Figure 6.1: (a) A raw frame from the imaging system showing the optic disc
head, (b) an average of a sequence of 50 frames and (c) the average of the
same 50 frames after the alignment procedure. The images represent a 3° × 3°
retinal patch and the sequence of 50 frames spans 1 s.

Once the frames are aligned they can be averaged so that the desired
noise reduction is achieved. Figure 6.1(c) shows the result of this procedure
on the same 50 frames used for the simple averaging.
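The alignment-and-averaging procedure can be sketched with NumPy in place of the original MATLAB conv2 code. Cross-correlation is done via the Fourier transform and shifts are applied with np.roll; this is a simplification (integer-pixel shifts, periodic boundaries), not the thesis code.

```python
import numpy as np

def align_and_average(frames):
    """Align each frame to the first by cross-correlation, then average.

    Simplified sketch of the procedure in the text: integer-pixel shifts
    only, with periodic boundary handling via np.roll.
    """
    ref = frames[0]
    F_ref = np.fft.fft2(ref)
    aligned = [ref.astype(float)]
    for frame in frames[1:]:
        # Cross-correlation with the reference via the Fourier transform
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(frame))).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # The correlation peak position is the shift that realigns the frame
        aligned.append(np.roll(frame.astype(float), (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

Because the per-pixel noise is random while the retinal signal is common to all frames, averaging N aligned frames reduces the noise standard deviation by roughly a factor of √N.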
The improvement in the image due to noise reduction can be quantified using
the ratio

C = \frac{p_{max} - p_{min}}{p_{ave}} ,        (6.1)

where p_max and p_min are the maximum and minimum pixel values of the image
respectively, and p_ave is the average pixel value over the whole image. For
the single frame shown in figure 6.1(a), C has a value of 3.6 whereas for the
aligned and averaged image in figure 6.1(c), C is 4.2. This corresponds to an
increase of approximately 17%.
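Equation 6.1 is simple to compute; a minimal sketch:

```python
import numpy as np

def contrast_ratio(image):
    """Contrast metric of equation 6.1: (p_max - p_min) / p_ave."""
    image = np.asarray(image, dtype=float)
    return (image.max() - image.min()) / image.mean()

# A perfectly uniform image gives C = 0; the text reports C = 3.6 for the
# raw frame of figure 6.1(a) and C = 4.2 after alignment and averaging.
```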
6.1.2 Presentation of Retinal Images
The images presented in this section are images which have been aligned and
averaged using this algorithm; in all cases 30 successive frames were used
for this procedure. Figure 6.2 shows retinal images of different magnification
extracted from a number of series in which AO correction was applied during the acquisition of the series, thus showing the retinal image without and
with dynamic aberration correction. Figure 6.2(a) represents a retinal patch
2.0° × 1.4° at approximately 2° from the visual axis. The AO system was correcting for 8 system modes with a gain of 0.3, these being the parameters giving the best stable correction. The best correction was determined
by increasing the number of system modes being corrected and the gain while
monitoring the output of the mirror electrode signals, ensuring that they are
not clipping and that they are stable. Comparing the images before and after
adaptive correction it is possible to observe the improved image quality of the
retinal image taken with aberration correction. One of the major contributions
to the perceived improvement in the image is the increase in contrast of the
AO-corrected image; however, improvement in resolution also contributes to
the enhanced image quality.
A detail from the AO-corrected image in figure 6.2(a) is shown in figure 6.3,
and highlights two features in the retinal image. The encircled bright spot is
likely to be due to light from a photoreceptor whose orientation is aligned with
that of the axis of the imaging system, thus maximising the light coupled back
out through the optical system.

Figure 6.2: Retinal images before and after AO correction, shown as ‘AO
off’/‘AO on’ pairs. Images represent retinal patches of size (a) 2.0° × 1.4°,
(b) 2.0° × 1.4°, (c) 4.0° × 4.0°, (d) 1.0° × 0.7° and (e) 0.8° × 0.6°.

The axial discrimination of the confocal system can be illustrated by
following the path of the blood vessel indicated by
the arrow. The blood vessel moves out of the transverse plane being imaged
as it crosses a larger blood vessel (towards the bottom of the image); as the
vessel moves out of focus its contribution to the image is strongly decreased
so that it becomes hardly visible at all. Figure 6.2(b) also shows a similar behaviour.
The remaining images in figure 6.2 are further examples of data obtained from
the system also showing the improvement in visual image quality with AO
correction. Figure 6.2(b) represents the same retinal patch size at the same
angular displacement from the centre of the visual field for a different
subject to the one in (a), with the AO parameters set to correct for 9 system
modes at a gain of 0.3. Figure 6.2(c) shows a larger field of view
representing a 4° × 4° image centred on the optic nerve head. The images in
figure 6.2(d) and (e) show smaller fields of view at higher magnifications.

Figure 6.3: Detail from one of the retinal images showing distinct features
(as explained in the text).
6.2 LATERAL RESOLUTION ESTIMATION FROM POWER SPECTRA OF THE IMAGES
Having presented the retinal images obtained from the imaging system and
the effect of dynamic aberration correction on the perceived image quality of
these images, the analysis of the images in terms of quantifying their lateral
resolution will now be discussed. In order to do this it is best to transform the
spatial-domain images into the frequency domain. A MATLAB algorithm was
written and implemented using the 2D fast Fourier transform function fft2
to obtain the power spectrum of the retinal images. A point x pixels away
from the central, DC term of the power spectrum obtained using a discrete
Fourier transform represents a spatial frequency corresponding to a period of
N/x, where N is the extent of the power spectrum, in pixels, along the axis being considered [15]. This scaling relationship was confirmed for the MATLAB
implementation of the fast Fourier transform by testing it on a generated sinusoidal grating.
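The same scaling check can be reproduced with NumPy's fft2 in place of the MATLAB function: a grating of period N/x pixels puts its power-spectrum peaks exactly x pixels from the DC term.

```python
import numpy as np

# A sinusoidal grating with period N/x pixels should place its
# power-spectrum peaks exactly x pixels either side of the DC term.
N, x = 208, 33                       # array size and expected offset from DC
cols = np.arange(N)
grating = np.cos(2 * np.pi * x * cols / N) * np.ones((N, 1))  # N x N image

power = np.abs(np.fft.fftshift(np.fft.fft2(grating))) ** 2
dc_row, dc_col = N // 2, N // 2      # DC term after fftshift
peak_col = np.argmax(power[dc_row])  # strongest component along that axis

print(abs(peak_col - dc_col))        # -> 33, i.e. a period of N/x ~ 6.3 pixels
```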
For a real image, the power spectrum calculated can thus be used to estimate
the spatial resolution of the image. The average value of the power spectrum
of the background noise was used to determine a threshold for the power
spectra of the retinal images; values below this threshold are set to zero. Figure 6.4 shows the power spectra of the images shown in figure 6.2(d). A comparison of the power spectra for images with the AO correction switched off
and on shows that the power spectrum for the image taken with adaptive
correction extends to higher spatial frequencies than that for the case with no
correction. This is consistent with an increase in lateral resolution. This improvement can be quantified by translating the displacement of the highest
frequency component of the power spectrum from its centre point into spatial
resolution in terms of pixels in the spatial domain. The extent of the power
spectrum along an axis was determined by defining a line in the perpendicular direction such that 1% of the non-zero points lie on one side of the line;
these boundaries are represented by the red lines in figure 6.4. Thus, from
figure 6.4, the width in the horizontal direction of the power spectrum for the
image without AO correction is 24 pixels from the central DC term, while the
corresponding figure for the AO-corrected image is 33 pixels, an increase of
approximately 35%. The same increase is observed in the vertical direction,
where the extent of the power spectrum is 17 pixels from the central component for
the uncorrected image and 23 pixels for the AO-corrected image. The lack of
symmetry between the two axes is due to the fact that the retinal images
themselves are rectangular and not square. The image size, and consequently
the size of the power spectrum array, in this case was 208 × 146 pixels; thus
we can use the expression N/x as an estimate of the lateral resolution of the
images. For the horizontal direction, N = 208 and x = 33 for the AO-corrected
image, giving an estimate for the highest spatial frequency component which
corresponds to a period of approximately 6 pixels. The same resolution
estimate is obtained for the vertical direction. Since for this retinal image
one pixel represents a retinal patch of 0.3′ × 0.3′, the resolution estimate
translates to 1.8′, or approximately 10 µm.¹

Figure 6.4: The power spectra of retinal images taken without (‘AO off’) and
with (‘AO on’) AO correction. The red lines indicate the extent of the spread
in the power spectra, with the shift towards higher spatial frequencies for
the horizontal direction indicated with red arrows.
6.3 CONTRAST ANALYSIS OF RETINAL IMAGES
As discussed earlier in this chapter, the improved image quality of the retinal
images with AO correction as compared to the uncorrected ones is due not
only to an increase in lateral resolution but also to an increase in contrast of the
image. The reasons for this increase are twofold. First, as dynamic
aberrations are corrected the spot produced on the retina is closer to
diffraction-limited; this narrower spot illuminates a smaller retinal patch
and reduces the averaging effect that illuminating a larger area has on the
reflected light intensity for any instantaneous spot. Second, the aberration
correction also narrows the PSF at the imaging pinhole, so that a larger
fraction of the total light intensity falls within the pinhole area, resulting
in a larger contribution to each pixel of the constructed image.
¹This is a similar resolution to the commercially available SLOs [1]. Thus
implementing an AO system analogous to that shown in this work in a commercial
SLO should provide further improved resolution.
Figure 6.5: Histograms of the pixel values from 0 to 255 of the images (shown
in figure 6.2(a)) taken without and with AO correction.
Figure 6.5 shows histograms of the pixel values of the retinal images in figure 6.2(a). A comparison of the two histograms shows that the peak of the
histogram for the AO-corrected image is shifted towards higher pixel values
than that for the uncorrected image; this confirms that the overall intensity
of the retinal image is increased with AO correction as expected from the
higher throughput through the confocal pinhole. It can also be noted, however, that apart from the shift in the peak of the histogram representing the
AO-corrected image, the histogram is also wider than that of the uncorrected
image. This indicates that a wider range of pixel
values is being used and this is consistent with an increase in image contrast.
The width of these histograms can be represented by their full-width at
half-maximum (FWHM). The FWHM of the histogram for the uncorrected retinal
image is 35 pixel values (out of a total of 256), while that for the
AO-corrected retinal image is 65 pixel values, nearly
a doubling of the width.
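The FWHM comparison can be sketched as follows. Counting the histogram bins whose counts exceed half the peak count is used here as a simple width estimate for a single-peaked histogram; the exact method used in the thesis may differ.

```python
import numpy as np

def histogram_fwhm(image, bins=256, value_range=(0, 256)):
    """Approximate FWHM of an 8-bit image histogram: the number of
    pixel-value bins whose count exceeds half the peak count.
    A simple estimate, valid for a single-peaked histogram."""
    counts, _ = np.histogram(image, bins=bins, range=value_range)
    return int(np.sum(counts >= counts.max() / 2))
```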
6.4 AXIAL SECTIONING THROUGH THE RETINA
Apart from the benefits to the retinal images obtained due to the increased lateral resolution and contrast, axial resolution is also a key aspect of confocal microscopy which is strongly affected by aberrations. In chapter 3, axial response
in confocal microscopy was discussed in detail and comparisons between conventional and confocal microscopy, showing the improved axial sectioning offered by the latter, were presented. More specifically, equations 3.17 and 3.18
compare the axial intensity distribution of the two microscope configurations
showing the narrower distribution for the confocal case. Furthermore, the
integrated intensity over a plane as a function of axial depth is given by equation 3.20 for conventional microscopy and by equation 3.21 for the confocal
case, also illustrated by figure 3.7.
In order to characterise the imaging system, the integrated intensity obtained
from the system having a mirror at the object plane and scanning the confocal pinhole-detector block through focus was obtained. This was done by
summing the pixel values of the image from the system for each scan position and plotting the sum of each frame as a function of axial position of the pinhole.

[Figure 6.6 plot: normalised intensity (0–1) against axial displacement (0–120 µm)]

Figure 6.6: Plot of the normalised integrated intensity obtained with a mirror at the object plane while scanning the detector axially through focus at the image plane.

The plot is given in figure 6.6, showing a fall-off in intensity as the
pinhole-detector block is scanned away from focus. Since the difference in
integrated intensity for the confocal system between two axial positions corresponds to different intensity contributions of the planes at those positions,
the integrated intensity plot can be used as a metric for the axial resolution of
a confocal microscope in terms of the resolving power of two planes. Defining axial resolution in these terms is generally considered a more meaningful
measure than the axial resolution of two points separated axially [119]. Thus
the FWHM of this plot can be used as an estimate of the axial resolution of the
confocal microscope, giving a value of 55 µm at the object plane.
[Figure 6.7 plot: normalised intensity (0–1) against axial displacement (0–350 µm)]
Figure 6.7: Plot of the normalised integrated intensity obtained while imaging the retina as
the detector is scanned axially through focus at the imaging plane. The plot is an average of
10 scans.
The same procedure was performed while imaging the retina. This gives an
integrated intensity plot as a function of axial position in the region of the
imaging plane; the plot obtained from the average of 10 scans at the same
retinal location is shown in figure 6.7. The plot approximately follows the integrated intensity distribution given by equation 3.21. In a similar fashion as
shown above, the axial resolution can be estimated by reading off the FWHM
for this plot, giving a value of 270 µm at the retina for this case.²

² As was the case with the lateral resolution estimate discussed earlier, this figure is only marginally better than the axial resolution of commercial SLOs, which is 300 µm [1].
Figure 6.8: A series of AO-corrected frames from the retina at different axial positions.
The axial sectioning property of the confocal microscope can be shown by
the series of images shown in figure 6.8. These are frames taken from a sequence captured while scanning in the z−direction and show various sections
at different axial depths. The noise present in these images when compared
to the images presented earlier is due to the fact that the frames displayed are
single frames which have not been aligned and averaged with the algorithm
described above, since the algorithm requires a series of frames from the same
plane.
6.5 PSF Monitoring During AO Correction
This chapter so far has presented an analysis of the imaging system in terms of
the retinal images output from the system. This can be easily justified by the
fact that the aim of the system is to image the retina, and hence it is the retinal
images which are the most critical data to be obtained from the system. However, it is also useful to examine the double-pass PSF produced by the system at the image plane, that is, the plane where the confocal pinhole is placed, to illustrate further the effect of AO correction.

[Figure 6.9 plots: intensity (pixel value) surfaces over detector x and y coordinates, AO off (top) and AO on (bottom)]

Figure 6.9: Double-pass PSFs before and after AO correction.

The system was thus designed to
give the possibility of easily switching between imaging the retina and monitoring the PSF, as described in chapter 5 and as shown in figure 5.9. Since
the only alteration to the system is done to the imaging branch, this switch
can be done without affecting the illumination, scanning or wavefront-sensing
systems and therefore the AO-correction procedure is unchanged. The same
imaging protocol described in chapter 5 is followed, the only difference being that the image captured is that of the double-pass PSF at the image plane
rather than a reconstruction of the retinal image from the APD detector.
Figure 6.9 shows the intensity distribution of the double-pass PSFs recorded
on the CCD camera from the right eye of one of the subjects. These distributions are obtained from single frames extracted from a sequence; one frame
corresponds to an instant just before AO correction and the other immediately after the correction is turned on. The colour map in these intensity distributions is such that red represents the highest pixel value from both images
(which corresponds to the peak of the AO-corrected one) and blue represents
0. The aberration correction of the wavefront can be seen from the higher
intensity concentration towards the centre of the AO-corrected PSF which results in a higher peak intensity and a narrowing of its width. These effects
were discussed above with reference to the increase in contrast and resolution
of the retinal images. The aberration correction can be quantified by the ratio
of peak intensities of the PSFs before and after correction which shows that
for the case illustrated in figure 6.9 the intensity increased by a factor of 1.7.
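The peak-intensity ratio quoted can be computed directly from the two recorded frames. A minimal sketch, assuming each frame is available as a 2-D array of pixel values (the function name and frame format are illustrative, not the system's actual software):

```python
def peak_intensity_ratio(frame_ao_on, frame_ao_off):
    """Ratio of peak pixel values between the AO-corrected and
    uncorrected double-pass PSF frames; a Strehl-like figure of
    merit for the aberration correction."""
    peak_on = max(max(row) for row in frame_ao_on)
    peak_off = max(max(row) for row in frame_ao_off)
    return peak_on / peak_off
```

Care is needed that neither frame saturates the CCD, since a clipped peak would bias the ratio downwards.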
6.6 Concluding Remarks
This chapter has presented some of the images produced by the imaging system developed and has shown the potential benefits of the dynamic correction of
the ocular aberrations using an AO system. Estimates giving an indication of
the improvement in optical quality of the images output were obtained. The
final chapter discusses the performance of this first-generation prototype system and highlights the issues which need to be solved in order to allow this
and similar retinal imaging techniques to provide repeatable performances
over a wide range of eyes and also for different imaging sessions on the same
eye. Considerations on how these issues can be tackled will also be made as
part of an outlook of the future of retinal imaging and the role of AO in it.
CHAPTER 7

Conclusion
The importance of imaging the living human retina not only in relation to
diagnosing diseases and monitoring their progression but also in relation to
basic vision science was discussed in the beginning of this thesis. The rest of
the thesis proposed an imaging system designed to overcome some of these
issues and improve the image quality of in-vivo retinal images; the design and
implementation of the system was discussed in detail and its output presented
and analysed. The results shown in the previous chapter provide an experimental backing for the potential benefits shown to be attainable in theory by
the dynamic correction of ocular aberrations in a confocal microscope to image the retina.
The process of building this first-generation prototype and its functioning,
however, also showed some shortcomings and problems that arise which limit
the performance and repeatability of the system. The variability in performance of the AO system both for the same subject at different imaging sessions and between different subjects makes routine retinal imaging difficult,
as can be seen from the results presented in the previous chapter. This behaviour has so far been a general trend with ophthalmic AO systems [29, 34,
37, 50, 65, 91] and it is a major obstacle for further advancement of AO into
clinical ophthalmology. These issues, however, also fulfil a secondary aim of
this project, namely that of providing feedback for the next stage in this research: developing a second-generation instrument which can be used in a
clinical environment. This project has recently started in this group whereby
a commercial SLO will be modified to include an AO system, with the whole
instrument being compact and portable.
7.1 Current Issues with AO in Ophthalmology
AO has thus far proved to be a promising technique in retinal imaging as
well as other fields in ophthalmology, such as psychophysical tests on vision,
but its full potential can only be tapped if a number of issues concerning AO
in ophthalmology are successfully tackled. This section will try to highlight
some of these issues.
7.1.1 Effects of the Living Eye
Most of the limitations of any retinal imaging system are brought about by
the eye itself. This is mainly due to the fact that the eye is a living organ designed to collect light in the visible spectrum, with numerous complex mechanisms aimed at benefiting our visual perception. Thus, the high absorption of light by the retina means that a very small fraction of the incident
light is reflected back, and the geometrical considerations of a relatively small
pupil area and its distance from the retina reduce further the amount of light
which comes back out of the eye. In the prototype described in this thesis, the
amplitude of the returning light is being split into an imaging branch and a
wavefront sensing branch, thus reducing even more the light intensity available at the detection device in each of these branches. This restricts further the
spatial sampling capability of the wavefront sensor. The large ratio required
between the incoming and outgoing beams also introduced a further problem in the setup built: the reflections from the optical surfaces in the system,
though reduced by means of anti-reflection coatings, were still of the same order of magnitude as the returning signal. Polarisation effects had to be used
to get around this problem.
Besides the issue of low light levels, eye movements must also be dealt with.
Firstly, the eye can only be placed in the optical system by placing the subject’s
whole body in it. This imposes several constraints on the imaging system. The subject’s head can never be fixed rigidly enough to prevent
any movement of the head; indeed the more restrictive the head restraint the
more likely that the subject will feel uncomfortable and move his or her head
even more. Besides overall head movements, the eye itself will move within
its eye socket; these movements include both voluntary and involuntary ones.
The voluntary eye movements vary considerably between subjects; some subjects are better at fixating on a target than others, and, moreover, practice improves the ability to fixate. The involuntary eye movements are harder to control. These are generally grouped into three categories with varying displacement amplitudes and frequencies, namely drifts, saccades and tremors [27].
These movements also introduce further setbacks when considered in terms
of wavefront sensing and AO correction. When obtaining a wavefront map
to characterise the aberrations of the eye, an eye movement can result in a
change in the aberration map irrespective of the changes in aberrations due to
other mechanisms in the eye. These variations in alignment make comparison
of wavefront maps taken at different times with the same wavefront sensor
difficult. Comparison of wavefront maps taken by different instruments is
even trickier since there are more changing parameters. For AO, the problem
is slightly different because in theory it does not matter which section of the
pupil is being measured and at which orientation with respect to the optical
system as long as we correct for whatever aberrations are present; however,
the eye movements introduce a dynamic component of their own to the aberrations measured by the sensor which the AO closed-loop system has to try to
correct. Since the aberration changes brought about by the eye’s translations
and rotations are in general of a larger amplitude than a number of dynamic
higher-order aberration terms inherent to the eye, the capability of correcting
for the latter is reduced.
A possible solution to resolving the problem of eye movements is to include a
subsystem to track the movements of the eye and provide information on the
translations and tilts of the eye during the acquisition of measurements. Techniques to track eye movements include the monitoring of the displacement of
the Purkinje images with respect to each other¹ and the tracking of the movement of the pupil by means of edge detection from an image of the eye’s pupil
and iris [74]. However, whereas eye tracking can offer many benefits to wavefront sensing, its benefits to AO are less clear: unless such an eye-tracking system is actively linked in closed loop to the imaging system, markedly increasing the complexity of the imaging system, the knowledge of the extent of the eye movements will offer little benefit.
The separation of the effects of eye movements on aberration changes in the
eye from those of other ocular mechanisms will help us to understand better
¹ The Purkinje images are the reflections resulting from the first four optical surfaces of the eye: the anterior corneal surface, the posterior corneal surface, the anterior lens surface and the posterior lens surface.
the origins of eye aberrations. A better understanding of ocular aberrations
will be an aid in designing AO systems better tailored to correct for them.
Work has also been done in separating the aberrations due to the cornea and
those due to the crystalline lens by comparing the aberrations of the whole eye
to corneal topography measurements [5, 42]. Such studies indeed show that
the individual aberration contributions of the cornea and the lens are larger than those of the whole eye, indicating that in general the cornea and lens
somewhat compensate for each other’s aberrations. In a similar fashion it is
necessary to study the effects of other structures and layers in the eye individually.
Studies are also underway aimed at understanding the temporal variations of the tear-film layer [32, 66], which is the front-most layer on the
corneal surface, in particular since the air-tear film boundary offers the largest
refractive index change in the eye’s refracting optics. The effect of general biological cycles, mainly the heart pulsation and breathing, on aberrations is also
a current topic of investigation [43, 44, 49].
This series of studies will hopefully provide a better understanding of the ocular aberrations and will make it possible to represent the overall eye aberrations as separate components dependent on the various ocular structures and optical mechanisms. Such understanding might aid the development of
an analytical statistical model which can adequately represent the eye’s aberrations, somewhat similar to how Kolmogorov statistics represent the aberration variation in the turbulent atmosphere of the Earth. Such a model would
be a valuable tool in dealing with the eye’s aberrations. In the absence of such
a model, an empirical statistical model obtained through a series of large-scale
aberration measurements from a wide range of subjects could also be useful.
7.1.2 Technical Limitations
The importance of having a better understanding of the human eye’s aberrations remains a very current issue for many areas of ophthalmology, and
specifically in relation to aberration-correction techniques such as AO. AO itself, however, will also benefit from technological advances in the
components it comprises, which can contribute to an improvement in performance of all three AO subsystems: the wavefront sensor, the active corrective
device and the control system.
A limiting factor in Shack-Hartmann wavefront sensors is the CCD camera
used to collect light from each lenslet in the lenslet array to output a frame
containing the Shack-Hartmann spot pattern. Low light intensities impose
high sensitivity and low noise level requirements on the CCD camera, especially when high spatial and temporal wavefront sampling are required. A
compromise always needs to be found between the noise level acceptable, the
number of lenslets in the lenslet array, and the sampling rate of the wavefront
in terms of frames per second output by the CCD camera. Advances in detector technology will give more flexibility in the choice of these variables.
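The compromise described above can be made concrete with a rough photon budget per lenslet per frame. All the numbers in the sketch below are hypothetical, chosen only to illustrate the scaling; they are not measurements from this system:

```python
# Rough photon budget for a Shack-Hartmann sensor: photons reaching
# one lenslet in one frame. All input values below are illustrative.
PLANCK = 6.626e-34   # Planck constant, J s
C_LIGHT = 2.998e8    # speed of light, m/s

def photons_per_lenslet_frame(power_w, wavelength_m, n_lenslets, frame_rate_hz):
    """Photons collected per lenslet per frame, assuming the light
    returning from the eye is shared equally among the lenslets."""
    photon_energy = PLANCK * C_LIGHT / wavelength_m
    photons_per_second = power_w / photon_energy
    return photons_per_second / (n_lenslets * frame_rate_hz)

# e.g. 0.5 nW of 633 nm light onto a 37-lenslet array read at 50 fps
n = photons_per_lenslet_frame(0.5e-9, 633e-9, 37, 50)
```

With shot noise alone the signal-to-noise ratio per lenslet scales as the square root of this count, so doubling either the number of lenslets or the frame rate halves the count and degrades the spot-position accuracy accordingly.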
The above discussion deals with issues relating to the Shack-Hartmann wavefront sensor, this being the most commonly used wavefront sensor in ophthalmic applications at present. Its widespread use, however, does not necessarily make the Shack-Hartmann sensor the best solution for wavefront sensing. Among the alternatives being considered is curvature sensing, which
gives a measure of the local second derivative of the wavefront function; a
Shack-Hartmann sensor measures the local first derivative and hence gives
a measure of tip and tilt rather than curvature. The use of curvature sensors would provide a better match with corrective devices such as membrane
deformable mirrors or bimorph mirrors that are curvature-based devices. A
better match between the two subsystems would reduce the discrepancies between what the wavefront sensor can measure and what the mirror can correct
for.
The deformable mirror of an AO system is probably the element which would
benefit mostly from technological improvements. In the system described in
this thesis, the limited stroke available on the deformable mirror was one of
the major factors limiting the performance of the AO correction. Given the
large range of amplitudes present in the various components of eye aberrations, in most cases a small number of aberration components are using up
most or all of the mirror stroke. This could be observed while choosing the
AO control parameters for correcting for the eye’s aberrations: most actuators were clipping (reaching their maximum voltage signal) when more than
10 system modes were being corrected for, considerably fewer than the maximum of 37 system modes. Mirrors with a larger
stroke would therefore not only be able to cope with larger-amplitude aberrations, but would also enable a larger number of modes to be simultaneously
corrected.
Nevertheless, deformable mirror technology is advancing rapidly, with mirrors having increasing stroke and increasing numbers of actuators. Progress is also being made on different mirror technologies, whether membrane mirrors, segmented ones or bimorphs, as well as on transmissive devices based on liquid crystal technology; which of these will be best suited for AO in the human eye will depend on their development in the coming years.
The technical limitations on the control subsystem of an AO system are computational ones. Computer processing power (which determines the processing time of the AO closed-loop algorithm) and bus speeds (which determine
the rate of transfer of data from the wavefront sensor to the computer and
from the computer to the deformable mirror) are hardly limiting factors for the
closed-loop bandwidth of an ophthalmic system. Thus, higher specification
computers are unlikely to have any drastic impact on AO performance in the
near future. Nevertheless, the control subsystem can benefit from improved algorithms which increase the accuracy and efficiency of the closed-loop correction. Improved location of the spot positions in a Shack-Hartmann sensor is one such example, as demonstrated by simulations presented in chapter 4.
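The baseline spot-position estimate in a Shack-Hartmann sensor is the centre of mass of each lenslet sub-image; improved algorithms (thresholding, windowing, matched filtering) refine this. A minimal centre-of-mass sketch, with the sub-image assumed to be a 2-D list of pixel values (illustrative only, not the algorithm of chapter 4):

```python
def spot_centroid(sub_image):
    """Centre-of-mass estimate (x, y) of a Shack-Hartmann spot within
    one lenslet sub-image. This plain first-moment estimate is
    sensitive to background noise, which is why thresholded or
    windowed variants perform better in practice."""
    total = sum(v for row in sub_image for v in row)
    cx = sum(x * v for row in sub_image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(sub_image) for v in row) / total
    return cx, cy
```

The displacement of each centroid from its reference position is proportional to the local wavefront slope over that lenslet.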
Another area with scope for improvement is the reconstructor used in the control loop. This system, like most ophthalmic applications of AO, uses a least-squares reconstructor to determine the signals to be applied to the mirror from
the wavefront sensor signals. The benefit of this reconstructor is its ease of use;
however, better performances could be offered by other reconstructors. One
such case could be an adaptation to a closed-loop system of the optimal reconstructor as described by Wallner [110]; for this method, some knowledge
of the statistics of eye aberrations might be required, whether from empirical
or analytical models.
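The least-squares reconstructor mentioned above chooses the actuator commands c that minimise |Gc − s|², where G is the measured interaction (influence) matrix and s the vector of wavefront-sensor signals. The sketch below is a toy two-actuator version solving the normal equations (GᵀG)c = Gᵀs explicitly; a real system would apply a (possibly modally filtered) pseudo-inverse of a much larger matrix:

```python
def lsq_reconstruct(G, s):
    """Least-squares reconstructor for a two-actuator system: find the
    actuator commands c minimising |G c - s|^2, where each row of G is
    the sensor response to one unit actuator poke. Solves the normal
    equations (G^T G) c = G^T s with an explicit 2x2 inverse."""
    # Normal-equation matrix A = G^T G and right-hand side b = G^T s
    a11 = sum(g[0] * g[0] for g in G)
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G)
    b1 = sum(g[0] * si for g, si in zip(G, s))
    b2 = sum(g[1] * si for g, si in zip(G, s))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

A Wallner-type optimal reconstructor would, roughly speaking, replace the plain inverse of GᵀG with a weighting built from the aberration and noise statistics, which is where an empirical or analytical model of ocular aberrations would enter.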
Though the image quality of the retinal images produced is ultimately limited
by the optical characteristics of the imaging system, the presentation of these
images can benefit from post-processing image enhancement techniques. The
motivation for enhancing the representation of the retinal images is made
stronger by keeping in mind that for most retinal imaging systems to be used
in a clinical environment, the images of the retina will be viewed by trained
specialists who will have to make a subjective judgement regarding the state
of health of the eye based on those images. A simple post-processing algorithm was presented in this thesis showing how the subjective perception of
the image can be markedly improved by reducing noise in the images. Further image processing techniques could be applied in order to highlight edges
and recognise patterns, thus aiding the detection of certain retinal features or
anomalies.
A final issue to be included with the technical limitations of AO systems is
that of cost of the system. Whereas the price tag of an experimental setup is
not always necessarily among the topmost of priorities in basic research, it
most certainly is if such systems are to be successful on a larger, commercial
scale. Though most of the optics, mechanics, laser sources and electronics can
be easily obtained at low cost, the components which contribute mostly to the
overall expense of such a system as that presented in this thesis are the
detector used for wavefront sensing (due to the high specification required
for high speed and low noise performance) and the deformable mirror. The
potential low production cost of bimorph mirrors therefore makes them particularly attractive for cheaper AO systems.
7.2 The Road Ahead for Retinal Imaging
The progression from the first direct ophthalmoscope, which made possible
the direct viewing of the retina in a living eye, to the current state-of-the-art in
retinal imaging was presented in chapter 2. What the retinal imaging system
or systems to be found in ophthalmic clinics and hospitals will be in the near
future is impossible to determine, quite naturally.² It is, however, possible to
have a look at what the current research in retinal imaging is leading to.
² Nothing seems more appropriate in these cases than the phrase coined by Niels Bohr: “Prediction is very difficult, especially about the future.”
Ever since its first application to ophthalmology, AO has proved to be a
promising technique. Its successful application to fundus photography, retinal densitometry, psychophysics and SLOs [50, 65, 91, 92] paves the way for
its use in other retinal imaging techniques. However, there is also work to
be done in making AO itself a robust tool which can perform well and repeatably on a wide range of subjects. Thus, the way ahead for AO-assisted
retinal imaging is two-pronged: advances in AO and application of AO to
other imaging techniques. Some issues regarding the former were discussed
above; the remaining paragraphs will try to highlight some of the main work
currently underway regarding the latter.
SLOs have proved to be successful instruments clinically; commercial variants of the SLO are commonly used in a large number of ophthalmic clinics
and hospitals for general retinal examinations, and also in particular for the
screening for glaucoma, since the change in thickness of the optic disc area can
be an early sign of the disease. Thus, the development of AO-assisted SLOs
could be the next stage which would follow from the current generation of
SLOs.
However, for applications in which depth resolution is the most sought after
specification, OCT imaging offers a considerable advantage over the SLO [53].
The depth resolution in OCT is independent of the optical quality of the imaging system, including the eye’s optics; instead it is a function of the coherence
length of the source used for illumination since the image is formed through
interference of the signal returning from the retina with a reference beam. For
this reason the depth resolution in OCT imaging is considerably higher than
in an SLO, but it also means that it cannot be improved by aberration correction via AO. The lateral resolution of images constructed by an OCT system, however, is limited by the system’s aberrations, and this is where the
OCT technique can benefit from AO. OCT imaging systems are also used clinically in analysing retinal features and pathologies which manifest themselves
mostly in the axial direction, notably the measurement of the optic disc depth
for glaucoma diagnosis, macular hole detection and others. Thus, AO-assisted
OCT will also be a natural progression from the currently available systems.
Interest also lies in understanding the polarising effects of the retina, specifically in separating the polarisation characteristics of the distinct retinal layers.
Polarimetry studies on the retina can give an axial map of retinal polarisation, and hence the use of AO correction can increase the axial resolution of
these polarisation maps and will make it possible to attribute specific polarising effects to particular retinal layers [62].
The list given above of retinal imaging systems being developed is certainly
not an exhaustive one. Just as the possibility of looking at the retina through
an ophthalmoscope opened up new prospects for understanding the retina,
retinal pathologies and vision in the 19th century, even now, as retinal imaging systems keep improving and making smaller retinal features visible, better
understanding of retinal mechanisms and more reliable and earlier diagnosis
of diseases becomes ever more possible. The current wave of development in
retinal imaging is making possible applications which were not conceivable a
few decades ago. These comprise: the possibility of imaging individual cells
or clusters of cells and monitoring their progression with time, or even stimulating individual photoreceptors with light to study the visual process in the
retina and the brain; the monitoring of blood flow through the whole circulatory system in the retina, including the smaller vessels and capillaries; the
imaging of distinct layers deep inside the retina and the measurement of their
thickness and changes in thickness; and more. As retinal imaging systems become more specialised it is likely that a number of different techniques will
be the basis of the set of instruments that will find their way into clinics and
hospitals and vision research labs in the future. Whether confocal microscopy
or adaptive optics will be among these techniques is an open question.
APPENDIX A

Safety Considerations for the Light Levels Entering the Eye
The light entering the eye can cause damage to the ocular structures if the intensity of the light is too high. For the visible spectrum the major hazard is
that of photochemical and thermal injuries to the retina. This is mainly due
to the fact that the principal function of the retina is that of absorbing light
within this wavelength range. In addition, if the light is being focused onto
a spot on the retina, the incident power per unit area can be quite high. For
this reason it is essential that for any retinal imaging system, or any other
system in which light is delivered into the eye, the incident light intensity is
kept well below the safe levels for the retina. The British and European standard EN 60825-1:1994 sets the maximum permissible exposure (MPE) values
for light levels deemed to be safe for viewing, depending on the wavelength,
duration of exposure and the nature of illumination [3].
The basic MPE values can be calculated directly from a table of formulae for
continuous exposure of laser radiation. The case for a scanned laser beam,
however, is not so clear. The standard states:
For scanned laser radiation within a stationary circular aperture
stop having a 7 mm diameter, the resulting temporal variation of
detected radiation shall be considered as a pulse or series of pulses.
It is therefore necessary to estimate the repetition frequency for illumination
of a single retinal location while scanning. Throughout the calculations the
parameters are chosen so that the most restrictive value is used at all times,
giving the lowest estimate for the MPE value.
The MPE for a pulsed illumination is determined via a series of calculations,
as described in the standard:
The MPE for wavelengths from 400 nm to 10⁶ nm is determined by
using the most restrictive of requirements a), b) and c). [. . . ]
a) The exposure from any single pulse within a pulse train shall
not exceed the MPE for a single pulse.
b) The average exposure for a pulse train of duration T shall not
exceed the MPE [given] for a single pulse of duration T .
c) The exposure from any single pulse within a pulse train shall
not exceed the MPE for a single pulse multiplied by the correction factor C₅.

MPE_train = MPE_single × C₅,

where

MPE_train = MPE for any single pulse in the pulse train
MPE_single = MPE for a single pulse
C₅ = N^(−1/4)
N = number of pulses expected in an exposure.
The smallest retinal patch scanned in the system is about 180 µm × 180 µm,
which is equivalent to a trace covering 18 spot diameters (assuming a retinal spot size of diameter 10 µm). Since the beam is scanned laterally with an
APPENDIX
A:
Safety Considerations for the Light Levels Entering the Eye 152
8 kHz scanner, the dwelling time at each retinal location (the time for which
we can approximate stationary continuous illumination) is around 7 µs. This
can be considered to be the duration of a single pulse in the interpretation discussed above. Using this value together with a pulse repetition rate equal to
the frame rate, that is 50 Hz, the values given by the three conditions above
can be calculated using the appropriate formulae given in the standard. This
gives an MPE equal to about 2 mW for a wavelength of 633 nm for an uninterrupted exposure of 10 minutes. The value chosen for the incident light was a
factor of 20 lower than the MPE and the overall time a subject spent looking
into the laser beam was generally less than 3 minutes.
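The two quantities underlying the estimate above, the per-location dwell time and the pulse-train correction factor C₅, follow from simple arithmetic. A sketch using the figures quoted in the text (8 kHz line scanner, 18 spot diameters per line, 50 Hz pulse repetition rate, 10-minute exposure); the function names are illustrative:

```python
def dwell_time(line_rate_hz, spots_per_line):
    """Approximate dwell time on one retinal location: one scanner
    line period shared among the resolvable spots along the line."""
    return 1.0 / (line_rate_hz * spots_per_line)

def c5_correction(pulse_rate_hz, exposure_s):
    """EN 60825-1 pulse-train correction factor C5 = N**(-1/4),
    with N the number of pulses expected in the exposure."""
    n_pulses = pulse_rate_hz * exposure_s
    return n_pulses ** -0.25

print(round(dwell_time(8000, 18) * 1e6, 1))  # prints 6.9, i.e. around 7 µs as quoted
```

For the 10-minute exposure at 50 Hz, N = 30 000 and C₅ ≈ 0.076, so condition c) reduces the single-pulse MPE by roughly a factor of 13.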
In addition to the power of the incident light, the galvanometer scanner was
set so that its rest position is such that it reflects light away from the optical
axis. Thus even when the laser is switched on, the light will only reach the
subject’s eye if the scanners are functioning. This ensures that a continuous
unscanned beam, for which the MPE is lower, is never incident on the eye.
APPENDIX B

Lateral and Axial Intensity Distribution of an Imaging System
Chapter 3 gives an outline of the theory of confocal microscopy, in which expressions for the lateral and axial resolution are presented (equations 3.12 and 3.18 respectively). These expressions can be obtained by first considering the expression for the point-spread function (PSF) h in terms of the Fourier transform of the pupil function P (equation 3.1):

h(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} P(\xi, \eta)\, e^{-\frac{j 2\pi}{\lambda d_2}(\xi x + \eta y)}\, d\xi\, d\eta.    (B.1)
Transforming into polar coordinates using the transformations

\xi = \rho \cos\theta, \qquad \eta = \rho \sin\theta,    (B.2)

where (ρ, θ) are polar coordinates in the lens aperture plane, and

x = r \cos\psi, \qquad y = r \sin\psi,    (B.3)

where (r, ψ) are polar coordinates in the image plane, gives

h(r, \psi) = \int_0^a \int_0^{2\pi} P(\rho, \theta)\, e^{-\frac{j 2\pi \rho r}{\lambda d_2} \cos(\theta - \psi)}\, \rho \, d\rho \, d\theta.    (B.4)
Considering radially symmetric pupil functions gives P(ρ, θ) = P(ρ). Using the integral representation of the Bessel function

J_n(x) = \frac{j^{-n}}{2\pi} \int_0^{2\pi} e^{jx\cos\alpha}\, e^{jn\alpha}\, d\alpha,    (B.5)

equation B.4 becomes

h(r) = 2\pi \int_0^a P(\rho)\, J_0\!\left(\frac{2\pi\rho r}{\lambda d_2}\right) \rho \, d\rho.    (B.6)

Substituting v = \frac{2\pi r a}{\lambda d_2}
as defined in equation 3.3 gives equation 3.13:
h(v) = 2\pi \int_0^a P(\rho)\, J_0\!\left(\frac{v\rho}{a}\right) \rho \, d\rho.    (B.7)
Normalising h and P and introducing the defocus term e^{\frac{1}{2} j u \rho^2} gives equation 3.15:

h(u, v) = 2 \int_0^1 P(\rho)\, e^{\frac{1}{2} j u \rho^2}\, J_0(v\rho)\, \rho \, d\rho.    (B.8)
A general solution to the above equations can be obtained in terms of the
Lommel functions as shown by Born and Wolf [14]. What follows is the solution of the equation in the two special cases when u = 0 and v = 0, which give
the lateral and axial response of the function h respectively.
B.1 LATERAL INTENSITY DISTRIBUTION
To derive the diffraction-limited PSF in the focal plane we take u = 0 and consider P to have a value of unity within the aperture and zero elsewhere, so that equation B.8 reduces to

h(0, v) = h(v) = 2 \int_0^1 J_0(v\rho)\, \rho \, d\rho.    (B.9)
Using the recurrence relation

\frac{d}{dx}\left[x^{n+1} J_{n+1}(x)\right] = x^{n+1} J_n(x)    (B.10)
APPENDIX
B:
Lateral and Axial Intensity Distribution of an Imaging System 155
for n = 0, so that

\frac{d}{dx}\left[x J_1(x)\right] = x J_0(x),    (B.11)
and integrating to give

x J_1(x) = \int_0^x x' J_0(x')\, dx'    (B.12)
we can write

h(v) = \frac{2}{v^2} \int_0^1 J_0(v\rho)\, (v\rho)\, d(v\rho) = \frac{2 J_1(v)}{v},    (B.13)

which is the amplitude PSF given by equation 3.2, and from which we can obtain the intensity distribution I(v) = |h(v)|^2.
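As a numerical sanity check on this derivation (a stand-alone Python sketch, not part of the thesis code), the closed form 2J_1(v)/v can be compared against direct midpoint-rule quadrature of equation B.9, with the Bessel functions themselves evaluated from their integral representations:

```python
import math

def j0(x, n=500):
    # J0 via its integral representation: (1/pi) * int_0^pi cos(x sin(t)) dt
    return sum(math.cos(x * math.sin(math.pi * (k + 0.5) / n)) for k in range(n)) / n

def j1(x, n=500):
    # J1 via the analogous representation: (1/pi) * int_0^pi cos(t - x sin(t)) dt
    return sum(math.cos(math.pi * (k + 0.5) / n - x * math.sin(math.pi * (k + 0.5) / n))
               for k in range(n)) / n

def h_direct(v, n=500):
    # Equation B.9: h(v) = 2 * int_0^1 J0(v rho) rho d(rho), by the midpoint rule
    return 2 * sum(j0(v * (k + 0.5) / n) * (k + 0.5) / n for k in range(n)) / n

# The direct integral and the closed form 2 J1(v)/v agree, and the first zero
# of the amplitude PSF sits at v ~ 3.83 (first zero of J1)
for v in (0.5, 2.0, 3.8317):
    print(v, h_direct(v), 2 * j1(v) / v)
```

The first zero of h(v) at v ≈ 3.83 corresponds to the first dark ring of the Airy pattern.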
B.2 AXIAL INTENSITY DISTRIBUTION
To obtain an expression for the axial distribution we can set v = 0 and consider again a pupil function having a value of unity within the aperture and zero elsewhere. Thus equation B.8 reduces to

h(u, 0) = h(u) = 2 \int_0^1 e^{\frac{1}{2} j u \rho^2}\, \rho \, d\rho.    (B.14)

The substitution \zeta = e^{\frac{1}{2} j u \rho^2} can be used to integrate this expression, giving

h(u) = -\frac{2\sin(u/2)}{u} + j\,\frac{2\left[1 - \cos(u/2)\right]}{u},    (B.15)
and hence:

|h(u)| = \frac{\sqrt{8\left[1 - \cos(u/2)\right]}}{u} = \frac{\sin(u/4)}{u/4},    (B.16)

since 1 - \cos(u/2) = 2\sin^2(u/4). This is the expression given by equation 3.16, from which we can obtain the intensity distribution I(u) = |h(u)|^2.
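The axial result can be checked in the same spirit (again a stand-alone Python sketch, not part of the thesis code): direct complex quadrature of equation B.14 should reproduce |h(u)| = sin(u/4)/(u/4), with the first axial null at u = 4π:

```python
import cmath
import math

def h_axial(u, n=2000):
    # Equation B.14: h(u) = 2 * int_0^1 exp(j u rho^2 / 2) rho d(rho),
    # evaluated by the midpoint rule as a complex quadrature
    total = 0j
    for k in range(n):
        rho = (k + 0.5) / n
        total += cmath.exp(0.5j * u * rho * rho) * rho
    return 2 * total / n

# |h(u)| from the integral matches the closed form sin(u/4)/(u/4)
for u in (0.1, 4.0, 8.0):
    print(u, abs(h_axial(u)), abs(math.sin(u / 4) / (u / 4)))

# First axial null where sin(u/4) = 0, i.e. u = 4*pi
print(abs(h_axial(4 * math.pi)))
```

This is the sinc-squared axial intensity fall-off (after squaring) that sets the depth discrimination of the confocal system.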
APPENDIX
C
MATLAB Code for Image Alignment
and Averaging
%AlignAve
%function image_ave
defFrames = 20;
defPath = '';
defName = 'default';
defExtension = '.tif';
thfactor = 1.5;
init = 1;
vars = who;
sz_vars = size(vars);
for i = 1:sz_vars(1)
    if strcmp('name_of_file', vars(i))
        init = 0;
    end
end
if (init == 1)  %initialises variables to their default if they don't already exist
    nu_of_frames = defFrames;
    path = defPath;
    name_of_file = defName;
    extension = defExtension;
end
clear init vars sz_vars
opt = '';
%displays current settings and outputs menu
while (1)
    disp(' ')
    disp('Current Settings are:')
    disp(' ')
    disp(sprintf(' number of frames:\t%d', nu_of_frames))
    disp(sprintf(' base file name:\t%s', name_of_file))
    disp(sprintf(' path:\t\t\t%s', path))
    disp(sprintf(' extension:\t\t%s', extension))
    disp(' ')
    disp(' c - proceed with Current values')
    disp(' f - change number of Frames')
    disp(' n - change base Name of files')
    disp(' p - change path of files')
    disp(' e - change Extension of files')
    disp(' d - restore Default settings')
    disp('(t - set factor for thresholding)')
    disp(' x - exit')
    disp(' ')
    opt = input(' -> ', 's');
    switch opt
        case 'c'
            break;
        case 'f'
            nu_of_frames = input('Enter number of frames: ')
        case 'n'
            name_of_file = input('Enter base name of files: ', 's')
        case 'p'
            path = input('Enter path of files: ', 's')
        case 'e'
            extension = input('Enter extension of files: ', 's')
        case 'd'
            nu_of_frames = defFrames;
            path = defPath;
            name_of_file = defName;
            extension = defExtension;
        case 't'
            thfactor = input('Enter threshold factor: ')
        case 'x'
            break;
        otherwise
            disp('>>>INVALID OPTION')
    end
end
%Main bit
if (opt == ’c’)
a = [];
disp(sprintf('\nPlease wait while the data is being read'))
tic
%reads data from disk
for i = 1:nu_of_frames
    a(:,:,i) = double(imread(strcat(path, name_of_file, num2str(i), extension)));
    disp(sprintf('\b.'))
end
fprintf('Image Alignment in progress\n')
%correlate images and find translation coordinates
transl(nu_of_frames, 2) = 0;
[subsizex, subsizey] = size(a(:,:,1));
%thfactor = 1.5;  %threshold factor
for i = 1:nu_of_frames
    tic
    %threshold images for correlation
    thhold = (max(max(a(:,:,i))) / thfactor);
    for k = 1:subsizex
        for l = 1:subsizey
            if a(k,l,i) >= thhold
                thimg(k,l,i) = 1;
            else
                thimg(k,l,i) = 0;
            end
        end
    end
    if i == 1
        imagesc(thimg(:,:,i))
        colormap(gray)
    end
    %correlate thresholded images
    xcor = conv2(thimg(:,:,i), flipud(fliplr(thimg(:,:,1))), 'same');
    [imax,jmax] = max(xcor);
    [imax,ycentre] = max(imax);
    xcentre = jmax(ycentre);
    %determine centre of correlation
    corval = xcor(xcentre,ycentre);
    %xcentre = xcentre - transp(1);
    %ycentre = ycentre - transp(2);
    timeconv = toc*(nu_of_frames - i);
    mins = floor(timeconv/60);
    secs = mod(timeconv,60);
    fprintf('Processing frame %d\nTime remaining in minutes: %d:%2.0f\n', i, mins, secs)
    transl(i,1) = -(xcentre - subsizex/2);
    transl(i,2) = -(ycentre - subsizey/2);
end
%align images
imgalig = a;
for i = 1:nu_of_frames
    if transl(i,1) >= 0
        imgalig(transl(i,1)+1:subsizex,:,i) = imgalig(1:subsizex-transl(i,1),:,i);
        imgalig(1:transl(i,1),:,i) = 0;
    elseif transl(i,1) < 0
        imgalig(1:subsizex-abs(transl(i,1)),:,i) = imgalig(abs(transl(i,1))+1:subsizex,:,i);
        imgalig(subsizex-abs(transl(i,1)):subsizex,:,i) = 0;
    end
    if transl(i,2) >= 0
        imgalig(:,transl(i,2)+1:subsizey,i) = imgalig(:,1:subsizey-transl(i,2),i);
        imgalig(:,1:transl(i,2),i) = 0;
    elseif transl(i,2) < 0
        imgalig(:,1:subsizey-abs(transl(i,2)),i) = imgalig(:,abs(transl(i,2))+1:subsizey,i);
        imgalig(:,subsizey-abs(transl(i,2)):subsizey,i) = 0;
    end
end
%find maximum translation
translmax(1,1) = max(transl(:,1));  %max positive shift for x
translmax(1,2) = min(transl(:,1));  %max negative shift for x
translmax(2,1) = max(transl(:,2));  %max positive shift for y
translmax(2,2) = min(transl(:,2));  %max negative shift for y
%average aligned and nonaligned images
A(subsizex,subsizey) = 0.0;
B(subsizex,subsizey) = 0.0;
for i = 1:nu_of_frames
    A = A + a(:,:,i);
    B = B + imgalig(:,:,i);
end
figure
colormap(gray(255));
raw = a(:,:,1);
imagesc(a(:,:,1));
title('raw image')
figure
colormap(gray(255));
imagesc(A)
unaligned = A;
title('unaligned')
figure
colormap(gray(255));
aligned = B(translmax(1,1)+1:subsizex+translmax(1,2), ...
    translmax(2,1)+1:subsizey+translmax(2,2));
imagesc(aligned)
title('aligned')
%%%%%%%%%%%%%%%%%%%%%%%%
%average = sum(a,3);
%figure;
%colormap(sat);
%imagesc(average(1:300,10:256));
%title('Averaged Image');
%figure;
%colormap(sat);
%imagesc(a(1:300,10:256,1));
%title('First Frame');
%first = a(:,:,1);
%disp(sprintf('\nDone - time elapsed %.2f s',toc))
end
for i = 1:5
    beep
    pause(.2)
end
clear a i defExtension defFrames defName defPath opt corval imax imgalig ...
    jmax k l thhold thimg thfactor
clear mins secs subsizex subsizey timeconv transl xcentre xcor ycentre A
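The essence of the procedure above — threshold each frame, cross-correlate it with the first frame, take the correlation peak as the translation, then shift and average — can be illustrated by a minimal one-dimensional sketch (hypothetical Python using only the standard library; the MATLAB code above does the same in two dimensions):

```python
def cross_correlate(a, b):
    """Full discrete cross-correlation of two equal-length sequences."""
    n = len(a)
    # lag ranges over -(n-1) .. (n-1); out-of-range samples are treated as zero
    return {lag: sum(a[i] * b[i - lag] for i in range(n) if 0 <= i - lag < n)
            for lag in range(-(n - 1), n)}

def estimate_shift(frame, reference, thfactor=1.5):
    """Threshold both signals, correlate them, and return the lag of the peak."""
    def threshold(x):
        th = max(x) / thfactor
        return [1 if v >= th else 0 for v in x]
    xcor = cross_correlate(threshold(frame), threshold(reference))
    return max(xcor, key=xcor.get)

reference = [0, 0, 5, 9, 5, 0, 0, 0]
frame =     [0, 0, 0, 0, 5, 9, 5, 0]   # reference shifted right by 2
print(estimate_shift(frame, reference))   # -> 2
```

Thresholding before correlating, as in the MATLAB code, makes the peak sharper and less sensitive to slow intensity variations across frames.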
Bibliography
[1] Heidelberg Engineering website http://heidelbergengineering.com.
[2] Royal National Institute of the Blind website http://rnib.org.uk.
[3] Safety of laser products. Part 1: Equipment classification, requirements
and user's guide. British and European Standard BS EN 60825-1:1994,
BSi.
[4] G Ames and R Proctor. Dioptrics of the eye. J. Opt. Soc. Am., 5:22–84,
1921.
[5] P Artal and A Guirao. Contributions of the cornea and the lens to the
aberrations of the human eye. Optics Letters, 23(21):1713–1715, 1998.
[6] P Artal, I Iglesias, N López-Gil, and D G Green. Double-pass measurements of the retinal-image quality with unequal entrance and exit pupil
sizes and the reversibility of the eye’s optical system. J. Opt. Soc. Am. A,
12(10):2358–2366, 1995.
[7] P Artal, S Marcos, R Navarro, and D R Williams. Odd aberrations and
double-pass measurements of retinal image quality. J. Opt. Soc. Am. A,
12(2):195–201, 1995.
[8] P Artal and R Navarro. High-resolution imaging of the living human
fovea: measurement of the intercenter cone distance by speckle interferometry. Optics Letters, 14(20):1098–1100, 1989.
[9] P Artal, J Santamarı́a, and J Bescos. Phase-transfer function of the human eye and its influence on point-spread function and wave aberration. J. Opt. Soc. Am. A, 5(10):1791–1795, 1988.
[10] P Artal, J Santamarı́a, and J Bescos. Retrieval of wave aberration of
human eyes from actual point-spread function data. J. Opt. Soc. Am. A,
5(8):1201–1206, 1988.
[11] H W Babcock. The possibility of compensating astronomical seeing.
Publ. Astron. Soc. Pac., 65:229–236, 1953.
[12] F Berny. Etude de la formation des images rétiniennes et détérmination
de l’aberration de sphericité de l’œil humain. Vision Research, 9:977–990,
1969.
[13] F Berny and S Slansky. Wavefront determination resulting from Foucault test as applied to the human eye and visual instruments. In J H
Dickenson, editor, Optical Instruments and Techniques, pages 375–386.
Oriel Press, 1969.
[14] M Born and E Wolf. Principles of Optics. Pergamon Press, sixth edition,
1980.
[15] R N Bracewell. The Fourier Transform and its Applications. McGraw-Hill,
third edition, 2000.
[16] G J Brakenhoff. Imaging modes in confocal scanning light microscopy
(CSLM). Journal of Microscopy, 117:233–242, 1979.
[17] G J Brakenhoff, J S Binnerts, and C L Woldringh. Developments in high
resolution confocal scanning light microscopy (CSLM). In E A Ash, editor, Scanned Image Microscopy, pages 183–200. Academic Press, 1980.
[18] G J Brakenhoff, P Blum, and P Barends. Confocal scanning light microscopy with high aperture lenses. Journal of Microscopy, 117:219–232,
1979.
[19] F W Campbell and D G Green. Optical and retinal factors affecting visual resolution. J. Physiol., 181:576–593, 1965.
[20] F W Campbell and R W Gubisch. Optical quality of the human eye. J.
Physiol., 186:558–578, 1966.
[21] D Catlin. High Resolution Imaging of the Human Retina. PhD thesis, Imperial College of Science Technology and Medicine, 2002.
[22] D Catlin and C Dainty. High-resolution imaging of the human retina
with a Fourier deconvolution technique. J. Opt. Soc. Am. A, 19(8):1515–
1523, 2002.
[23] W N Charman and G Heron. Fluctuations in accommodation: a review.
Ophth. Physiol. Opt., 8:153–163, 1988.
[24] T R Corle and G S Kino. Confocal Scanning Optical Microscopy and Related
Imaging Systems. Academic Press, 1996.
[25] P Davidovits and M D Egger. Scanning laser microscope. Nature,
223:831, 1969.
[26] P Davidovits and M D Egger. Scanning laser microscope for biological
investigations. Applied Optics, 10:1615–1619, 1971.
[27] H Davson, editor. Physiology of the Eye. Longman, third edition, 1972.
[28] F C Delori and K P Pflibsen. Spectral reflectance of the human ocular
fundus. Applied Optics, 28(6):1061–1077, 1989.
[29] L Diaz-Santana, C Torti, I Munro, P Gasson, and C Dainty. Benefit of
higher closed-loop bandwidths in ocular adaptive optics. Optics Express,
11(20):2597–2605, 2003.
[30] A W Dreher, J F Bille, and R N Weinreb. Active optical depth resolution
of the laser tomographic scanner. Applied Optics, 28(4):804–808, 1989.
[31] A W Dreher, K Reiter, and R N Weinreb. Spatially resolved birefringence of the retinal fiber layer assessed with a retinal laser ellipsometer.
Applied Optics, 31:3730–3735, 1992.
[32] A Dubra, J C Dainty, and C Paterson. Measuring the effect of the tear
film on the optical quality of the eye. Investigative Ophthalmology and
Visual Science, 43:2045, 2002.
[33] E M Ellis. Low-cost bimorph mirrors in adaptive optics. PhD thesis, Imperial
College of Science Technology and Medicine, 1999.
[34] E J Fernández, I Iglesias, and P Artal. Closed-loop adaptive optics in the
human eye. Optics Letters, 26(10):746–748, 2001.
[35] F Flamant. Etude de la repartition de la lumiere dans l’image rétinienne
d’une fente. Rev. Opt. Theor. Instrum., 34:433–459, 1955.
[36] D L Fried. Optical resolution through a randomly inhomogeneous
medium for very long and very short exposures. J. Opt. Soc. Am.,
56:1372–1379, 1966.
[37] J-F Le Gargasson, M Glanc, and P Léna. Retinal imaging with adaptive
optics. C. R. Acad. Sci. Paris, 2(4):1131–1138, 2001.
[38] M Glanc. Applications ophtalmologiques de l’optique adaptative. PhD thesis,
Université de Paris XI, 2002.
[39] J W Goodman. Introduction to Fourier Optics. McGraw-Hill, second edition, 1996.
[40] Y Le Grand. La formation des images rétinienne sur un mode de vision éliminant les défaults optique de l’œil. In 2e Réunion de l’Institut
d’Optique, 1937.
[41] Y Le Grand. Optique Physiologique. Masson, 1964.
[42] S G El Hage and F Berny. Contributions of the crystalline lens to the
spherical aberration of the eye. J. Opt. Soc. Am., 63(2):205–211, 1973.
[43] K Hampson. The higher-order aberrations of the human eye: relation to the
pulse and effect on vision. PhD thesis, Imperial College of Science Technology and Medicine, 2004.
[44] K Hampson, I Munro, C Paterson, and J C Dainty. Aberration dynamics
and the cardiopulmonary cycle. submitted to Optics Express, 2004.
[45] J W Hardy. Active optics: a new technology for the control of light. Proc.
IEEE, 66:651–697, 1978.
[46] J W Hardy. Adaptive optics for astronomical telescopes. Oxford University
Press, 1998.
[47] M R Hee, J A Izatt, E A Swanson, D Huang, J S Schuman, C P Lin,
C A Puliafito, and J G Fujimoto. Optical coherence tomography of the
human retina. Arch. Ophthalmol., 113:325–332, 1995.
[48] H von Helmholtz. Helmholtz’s treatise on physiological optics. Optical Society of America, 1909.
[49] H Hofer, P Artal, B Singer, J L Aragón, and D R Williams. Dynamics of
the eye’s wave aberration. J. Opt. Soc. Am. A, 18(3):497–505, 2001.
[50] H Hofer, L Chen, G Y Yoon, B Singer, Y Yamauchi, and D R Williams.
Improvement in retinal image quality with dynamic correction of the
eye’s aberrations. Optics Express, 8(11):631–643, 2001.
[51] X Hong, L Thibos, A Bradley, D Miller, X Cheng, and N Himebaugh.
Statistics of aberrations among healthy young eyes. In Vision Science and
Its Applications, OSA Technical Digest, pages 90–93, 2001.
[52] H C Howland and B Howland. A subjective method for the measurement
of monochromatic aberrations of the eye. J. Opt. Soc. Am.,
67(11):1508–1518, 1977.
[53] D Huang, E A Swanson, C P Lin, J S Schuman, W G Stinson, W Chang,
M R Hee, T Flotte, K Gregory, C A Puliafito, and J G Fujimoto. Optical
coherence tomography. Science, 254:1178–1181, 1991.
[54] I Iglesias, R Ragazzoni, Y Julien, and P Artal. Extended source pyramid wave-front sensor for the human eye. Optics Express, 10(9):419–428,
2002.
[55] A Ivanoff. Les aberrations de l’œil. In Editions de la revue d’optique, 1953.
[56] P L Kaufman and A Alm, editors. Adler’s Physiology of the Eye. Mosby,
tenth edition, 2002.
[57] G Kennedy. to be published.
[58] H B klein Brink and G J van Blokland. Birefringence of the human foveal
area assessed in-vivo with Mueller-matrix ellipsometry. J. Opt. Soc. Am.
A, 5:49–57, 1988.
[59] S A Kokorowski. Analysis of adaptive optical elements made from
piezoelectric bimorphs. J. Opt. Soc. Am., 69:181–187, 1979.
[60] M Koomen, R Tousey, and R Scolnik. The spherical aberration of the
eye. J. Opt. Soc. Am., 39(5):370–376, 1949.
[61] A Labeyrie. Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images. Astronomy
and Astrophysics, 6:85–87, 1970.
[62] D Lara-Saucedo and J C Dainty. Depth resolved polarization sensitive
imaging of the eye using a confocal Mueller matrix ellipsometer - proof
of principle. Investigative Ophthalmology and Visual Science, 44:3627, 2003.
[63] J Liang, B Grimm, S Goelz, and J Bille. Objective measurement of wave
aberrations of the human eye with the use of a Hartmann-Shack wavefront sensor. J. Opt. Soc. Am. A, 11(7):1949–1957, 1994.
[64] J Liang and D R Williams. Aberrations and retinal image quality of the
normal human eye. J. Opt. Soc. Am. A, 14(11):2873–2883, 1997.
[65] J Liang, D R Williams, and D T Miller. Supernormal vision and highresolution retinal imaging through adaptive optics. J. Opt. Soc. Am. A,
14(11):2884–2892, 1997.
[66] T J Licznerski, H T Kasprzak, and W Kowalik. Analysis of shearing interferograms of tear film by the use of fast Fourier transforms. J. Biomed.
Optics, 3(1):32–37, 1998.
[67] G D Love. Liquid-crystal phase modulator for unpolarized light. Applied
Optics, 32(13):2222–2223, 1993.
[68] G D Love. Liquid crystal adaptive optics. In R K Tyson, editor, Adaptive
Optics Engineering Handbook. Marcel Dekker, 1999.
[69] W Lukosz. Optical systems with resolving power exceeding the classical
limit. J. Opt. Soc. Am., 56(11):1463–1472, 1966.
[70] D McMullan. The prehistory of scanned image microscopy part 1:
scanned optical microscopes. Proceedings of the Royal Microscopical Society, 25(2):127–131, 1990.
[71] D T Miller, D R Williams, G M Morris, and J Liang. Images of cone photoreceptors in the living human eye. Vision Research, 30(8):1067–1079,
1996.
[72] M L Minsky. Microscopy apparatus. US Patent no. 3,013,467, November
1957.
[73] V Molebny. Principles of ray-tracing aberrometry. Journal of Refractive
Surgery, 16:572–575, 2000.
[74] P U Muller, D Cavegn, G d’Ydewalle, and R Groner. A comparison of a
new limbus tracker, corneal reflection technique, purkinje eye tracking
and electro-oculography. In G d’Ydewalle and J V Rensbergen, editors,
Perception and Cognition. Elsevier, 1993.
[75] R Navarro and M A Losada. Aberrations and relative efficiency of light
pencils in the living human eye. Optometry and Vision Science, 74(7):540–
547, 1997.
[76] R Navarro and E Moreno-Barriuso. Laser ray-tracing method for optical
testing. Optics Letters, 24(14):951–953, 1999.
[77] R J Noll. Zernike polynomials and atmospheric turbulence. J. Opt. Soc.
Am., 66(3):207–211, 1976.
[78] J M Otero and A Duran. Continuación del estudio de la miopı́a nocturna. Anales de Fı́sica y Quı́mica, 38:236, 1942.
[79] C Paterson and J C Dainty. Hybrid curvature and gradient wave-front
sensor. Optics Letters, 25(23):1687–1689, 2000.
[80] C Paterson, I Munro, and J C Dainty. A low cost adaptive optics system
using a membrane mirror. Optics Express, 6(9):175–185, 2000.
[81] B Platt and R V Shack. Lenticular Hartmann-screen. Opt. Sci. Center
Newsl., 5(1):15–16, 1971.
[82] M Pluta. Advanced Light Microscopy vol 2 : Specialized Methods, pages
353–379. Elsevier, 1989.
[83] A Gh Podoleanu, J A Rogers, D A Jackson, and S Dunne. Three dimensional OCT images from retina and skin. Optics Express, 7(9):292–298,
2000.
[84] C A Puliafito, M R Hee, C P Lin, E Reichel, J S Schuman, J S Duker, J A
Izatt, E A Swanson, and J G Fujimoto. Imaging of macular disease with
optical coherence tomography. Ophthalmology, 102:217–229, 1995.
[85] R Ragazzoni. Pupil plane wavefront sensing with an oscillating prism.
J. Mod. Optics, 43:189–193, 1996.
[86] F Rigaut, G Rousset, P Kern, J C Fontanella, J P Gaffard, and F Merkle.
Adaptive optics on a 3.6 m telescope: results and performance. Astron.
Astrophys., 250:280–290, 1991.
[87] F Roberts and J Z Young. The flying-spot microscope. Proceedings of the
IEE, 99(Pt 3a):747–757, 1952.
[88] F Roddier. Curvature sensing and compensation: A new concept in
adaptive optics. Applied Optics, 27:1223–1225, 1988.
[89] F Roddier, C Roddier, and N Roddier. Curvature sensing: A new wavefront sensing technique. Proc. SPIE, 976:203–209, 1988.
[90] J A Rogers, A Gh Podoleanu, G M Dobre, D A Jackson, and F W Fitzke.
Topography and volume measurements of the optic nerve using en-face
optical coherence tomography. Optics Express, 9(10):533–545, 2001.
[91] A Roorda, F Romero-Borja, W J Donnelly III, H Queener, T J Hebert, and
M C Campbell. Adaptive optics scanning laser ophthalmoscopy. Optics
Express, 10(9):405–412, 2002.
[92] A Roorda and D R Williams. The arrangement of the three cone classes
in the living human eye. Nature, 397(6719):520–522, 1999.
[93] M L Rubin. Spectacles: Past, present and future. Survey of Ophthalmology, 30:321–327, 1986.
[94] J Santamarı́a, P Artal, and J Bescos. Determination of the point-spread
function of human eyes using a hybrid optical-digital method. J. Opt.
Soc. Am. A, 4:1109–1114, 1987.
[95] C J R Sheppard. The scanning optical microscope. In Physics Teacher,
pages 648–651. American Association of Physics Teachers, 1978.
[96] C J R Sheppard. Scanning optical microscope. In Electronics and Power,
pages 166–172. IEE Publication, 1980.
[97] C J R Sheppard. 15 years of scanning optical microscopy at Oxford.
Proceedings of the Royal Microscopical Society, 25(5):319–321, 1990.
[98] C J R Sheppard and A Choudhury. Image formation in the scanning
microscope. Optica Acta, 24(10):1051–1073, 1977.
[99] C J R Sheppard and T Wilson. Depth of field in the scanning microscope.
Optics Letters, 3:115–117, 1978.
[100] E Steinhaus and S Lipson. Bimorph piezoelectric flexible mirror. J. Opt.
Soc. Am., 69:478–481, 1979.
[101] G H Stine. Variations in refraction of visual and extra-visual pupillary
zones. Am. J. Ophthalmol., 13:101, 1930.
[102] M Tscherning. Optique Physiologique. Carre et Naud, 1898.
[103] R K Tyson. Principles of Adaptive Optics. Academic Press, second edition,
1997.
[104] G J van Blokland and S C Verhelst. Corneal polarization in the living
human eye explained with a biaxial model. J. Opt. Soc. Am. A, 4:82–90,
1987.
[105] G Vdovin and P M Sarro. Flexible mirror micromachined in silicon.
Applied Optics, 34:2968–2972, 1995.
[106] A W Volkmann. Wagner’s Handworterbuch der Physiologie. Vieweg and
Son, 1846.
[107] G von Bahr. Investigation into the spherical and chromatic aberrations
of the eye and their influence on its refraction. Acta Ophthal., 23:1, 1945.
[108] A R Wade and F W Fitzke.
In-vivo imaging of the human cone-
photoreceptor mosaic using a confocal LSO. Lasers and Light in Ophthalmology, 8:129–136, 1998.
[109] A R Wade and F W Fitzke. A fast, robust pattern recognition system for
low light level image registration and its application to retinal imaging.
Optics Express, 3(5):190–197, 1998.
[110] E P Wallner. Optimal wave-front correction using slope measurements.
J. Opt. Soc. Am., 73:1771–1776, 1983.
[111] G Walsh and W N Charman. Objective technique for the determination
of monochromatic aberrations of the human eye. J. Opt. Soc. Am. A,
1(9):987–992, 1984.
[112] R H Webb. Optics for laser rasters. Applied Optics, 23(20):3680–3683,
1984.
[113] R H Webb and G W Hughes. Scanning laser ophthalmoscope. IEEE
Transactions on Biomedical Engineering, 28(7):488–492, 1981.
[114] R H Webb and G W Hughes. Scanning laser ophthalmoscope: Design
and applications. J. Opt. Soc. Am., 72(12):1808, 1982.
[115] R H Webb, G W Hughes, and F C Delori. Confocal laser scanning ophthalmoscope. Applied Optics, 26(8):1492–1499, 1987.
[116] R H Webb, G W Hughes, and O Pomerantzeff. Flying spot TV ophthalmoscope. Applied Optics, 19(17):2991–2997, 1980.
[117] R H Webb, C M Penney, and K P Thompson. Measurement of ocular
wavefront distortion with a spatially resolved refractometer. Applied
Optics, 31:3678–3686, 1992.
[118] T Wilson, editor. Confocal Microscopy. Academic Press, 1990.
[119] T Wilson and C Sheppard, editors. Theory and Practice of Scanning Optical
Microscopy. Academic Press, 1984.
[120] J Z Young and F Roberts. A flying-spot microscope. Nature, 167:231,
1951.
[121] T Young. On the mechanism of the eye. In Philosophical Transactions of
the Royal Society of London, pages 23–88, 1801.
[122] R C Youngquist and S Carr. Optical coherence-domain reflectometry: A
new optical evaluation technique. Optics Letters, 12:158–160, 1987.