Lund University
Bachelor Thesis
January 20, 2017
A study on the visual illusion effects
on the pupillary aperture
Mahyar Hemmati
Supervised by
Roger Johansson
Carl Erik Magnusson
Abstract
The main purpose of this thesis is to investigate whether our visual system responds
differently to actual changes in light intensity compared to equiluminant images that
give illusions of light and darkness, as well as to examine how the pupil is affected by
perceived afterimages that arise as a consequence of having observed colour illusions.
Eleven students and employees at Lund University (6 women and 5 men), ranging in
age from 21 to 38 years, participated in the experiment. The participants observed
images illustrating visual and colour illusions, respectively, whilst their pupils were
being recorded with the SensoMotoric Instruments eye tracking equipment iView X, which
operated with a sampling rate of 1250 Hz. The illusion stimuli were created with the
raster graphics editor GIMP. The data of the occurrence, extent and duration of the
pupillary responses were recorded with the built-in software programme iViewX 2.7,
and processed in MATLAB with the application of a linear interpolation and a low-pass
filter of 10 Hz. It was deduced that the pupil responded to perceptual brightness
more than to the actual intake of physical light from an equiluminant image, and
it was furthermore found that the pupillary aperture responds to photoreceptor exhaustion
and the projection of colour onto achromatic images to a much greater extent than when
exposed to the images that gave illusions of light and darkness.
Acknowledgements
First and foremost, I would like to express my sincere gratitude to Roger Johansson who
introduced me to the world behind the eye and the concept of eye tracking, and who with
his supervision has kept me on the right track. His guidance and constant collaboration
has been pivotal in the implementation of the project and is greatly appreciated.
Furthermore, I would like to thank Carl Erik Magnusson for his ceaseless support and
mentorship, which has embodied this project from its incipience to its conclusion and
enabled its continuation in every aspect of the journey.
Finally, I would like to thank all those who so generously took the time to participate in
the experiment. Your contribution is very much appreciated and the memory of it will for
always – at least metaphorically speaking – dilate the pupils of my eyes.
Contents

List of acronyms and abbreviations
1 Introduction
  1.1 Purpose and motivation
2 Human eye and noise handling
  2.1 Ocular anatomy and properties
  2.2 Light adaption
  2.3 Noise filtering
3 Method
  3.1 Experimental set-up
  3.2 Software and stimuli
  3.3 Procedure
4 Results
5 Discussion
6 Outlook
7 Bibliography
8 Appendices
  8.1 APPENDIX. MATLAB-CODES
List of acronyms and abbreviations
ANS     autonomic nervous system
CCD     charge-coupled device
cpd     cycles per degree
DFT     discrete Fourier transform
EEG     electroencephalogram
eV      electron volt
fMRI    functional magnetic resonance imaging
GIMP    GNU Image Manipulation Program
Hz      hertz
HSL     hue, saturation, and luminosity
IR      infrared radiation
LED     light-emitting diode
LUX     lumen per square meter
MEG     magnetoencephalography
MATLAB  matrix laboratory
MTF     modulation transfer function
PSNS    parasympathetic nervous system
PSF     point spread function
PLR     pupillary light reflex
RGB     red, green, and blue
SMI     SensoMotoric Instruments
SNS     sympathetic nervous system
1 Introduction

1.1 Purpose and motivation
Envisioning a more intriguing variant of a multifunctional detector than the eye would, to
say the least, be a fruitless task. Docked to a neural network and to the visual cortex at
the back of the head, the eyes both process and filter data in a way that by far is more
efficient than that of a modern CCD sensor, which comprises the electronic detector in a
digital camera (Nayar and Mitsunaga, 2000). From an optical perspective, the behaviour
that the eye exhibits is akin to that of the camera. Both do, for example, function as
diaphragms. The manner in which the ring around the aperture of the camera changes
the size of that aperture, controlling the amount of incoming light that traverses the
instrument, is analogous to the way the iris of the eye regulates the quantity of light that
enters through the cornea. And in the same way a camera lens generates an inverted image
in the plane of the CCD arrays and again subjects it to inversion by the camera’s internal
software to erect the image, the visual inputs that form an image on the photon-detecting
retina are inverted and via the optic nerve transferred to the visual cortex in the brain,
where they are transformed into a coherent, right-side-up image (Hardy and Perrin, 1932;
Pedrotti, Pedrotti and Pedrotti, 2007; Young, Freedman and Ford, 2011). Regardless of
the vast complexity of their underlying structure, however, our eyes and our perception of
both what we see directly and what we see through other media can easily be manipulated
and deceived by visual illusions; ergo, it is of great necessity to ascertain the extent to
which our vision can be trusted, not least for research-related or scientific purposes.
Previous studies have shown that it is possible to stimulate contractions and dilations of
the pupil by displaying visual illusions, i.e. images that affect our perception in such a way
that it will deviate from objective reality. The visual illusions used in an experiment by
Endestad and Laeng in 2012 gave illusions of light and darkness, respectively, despite the
fact that they had the same light intensity. The actual change in the pupillary signal during
exposure to these images, compared with non-illusion images of the same light intensity,
was, however, not ascertained in that experiment. Whether afterimages caused by colour
illusions are also registered by the pupil is another phenomenon that has not previously
been studied. By delving deeper into the examination of the pupillary response system of
the eyes, this thesis thus endeavours to determine how our visual system
responds to both actual changes in light intensity and to apparent ones, which, depending
on our past experiences and acquired knowledge, alter the way we perceive them. If the
actual physical differences decide how we react to visual stimuli, then the pupil signal
should change along with the changes in light intensity, whereas the pupil signal, in the
case of visual illusions, should look the same if our cognitive functions in the brain are
predominant.
Moreover, it can be stated that if there happens to be a clear correlation between the
magnitude of the change in pupillary size in regard to both visual illusions and actual
changes in light intensity, there is a slight possibility that our eyes also can convey something about our unconscious reactions to visual illusions occurring naturally in real life,
something that we now are unaware of but that underlies our perception of all potential,
naturally occurring visual images. These naturally occurring visual illusions could, for
instance, be related to astronomical images or scans of molecular structure.
Technical applications of research encompassing eye tracking technology are prevalent
in numerous branches of medicine, for instance relating to the functional neuroimaging
techniques fMRI, MEG, EEG, and to laser refractive surgery. When it comes to other
kinds of applications of this kind of research, such as psychological or societal ones, eye
tracking has further proven to be a rather useful tool in everything from automotive designs
to research related to commercialism – such as web usability and advertising.
2 Human eye and noise handling

2.1 Ocular anatomy and properties
Anatomically speaking, the eyeball is nearly fully spheroidal, most particularly characterised by the corneal tissue through which light first enters, the iris which regulates the
amount of incoming light, and the retina on which the final image is formed. The iris is
composed of a pupillary and a ciliary zone, the former being bounded by the pupil and
the latter extending to the outermost parts of the iris, see Figure 1a below. The ciliary
zone confines the dilator, a radial muscle that contracts so as to dilate the pupil, as well as
the pupillary zone which confines the sphincter, a concentric muscle which, by contracting,
decreases the pupillary size and the focal length of the crystalline lens, thereby yielding a
more convex shape (Andreassi, 2007; Pedrotti, Pedrotti and Pedrotti, 2007; Young, Freedman and Ford, 2011). Outgoing stimuli released from the ANS to these independently
regulated muscles of the iris are what gives rise to a change in the pupillary size, and
depending on how the ANS exerts its control on these muscles, the pupillary aperture can
either dilate or constrict (Pamplona, Oliveira and Baranoski, 2009; Tilmant et al., 2003).
At most, the pupil can constrict to 2 mm and dilate to 8 mm (Hardy and Perrin, 1932).
There is, however, some individual variation when it comes to these values.
The focus of the retina is one of the main components and preconditions of the human
eye’s visual acuity, which can be interpreted as the reciprocal of the minimum angle of
resolution. The retina is composed of delicate films of an elaborate network of nerve
fibres that, from the optic nerve, branch out into rods and cones, i.e. the diminutive
photoreceptors of dim and bright light, respectively, that transfer electrochemical signals
to the brain via the axons of the optic nerve fibres (Hardy and Perrin, 1932; Litke et al.,
2004; Pedrotti, Pedrotti and Pedrotti, 2007). The retina possesses approximately 7 million
cones and 75 to 150 million rods, with the cones being densely integrated near the macula,
and the rods chiefly being clustered at the periphery of the retina (Pedrotti, Pedrotti and
Pedrotti, 2007).
The cornea (see Figure 1b below) through which light enters, has a refractive index
of 1.376 and constitutes three-quarters of the total dioptric power D of the eye, which
furthermore is defined as the reciprocal of the focal length f.

Figure 1. Illustrations of the anatomy of the eye. (a) The upper half of the eyeball, seen
from a frontal perspective. The sphincter (unlabelled) can be seen encircling the pupil in
the iris, which in turn is encompassed by the dilator muscle (Gray, 1918). (b) A
cross-section of the eyeball (Wikimedia Commons, 2005).

Since the focal point of the eye lies on the retina, solely images that are formed there are
perceived as sharp and distinct.
This occurs for objects at infinity, for which the ciliary muscle is free from tension. When
viewing near objects, however, the ciliary muscle contracts and, consequently, the radii of
curvature R decrease as the lens of the eye bulges outwards, thereby also decreasing the
focal length. This is referred to as accommodation. But as opposed to how focusing
manifests itself in cameras, where the lens is moved relative to the sensor, the distance
between the lens and the retina remains the same even though the focal length decreases
(Young, Freedman and Ford, 2011).
Measurements in the branch of pupillometry – defined as the study of changes in the
diameter of the pupillary aperture – are performed by means of IR, whose long wavelengths
(spanning from 770 nm to 1 mm) evade being registered by the human eye. Also,
since IR photons have insufficient energy to excite the photoreceptors of the eye, infrared
photography enables recordings of pupillary behaviour irrespective of the eye colour within
the iris (Hess, 1972; Pedrotti, Pedrotti and Pedrotti, 2007).
The quality of one’s yielded image can be depicted by the point spread function and
the modulation transfer function, the latter being the Fourier transform of the former.
The PSF expresses the impulsive response that is the statistical distribution of light of a
point object which is obtained when light diffracts through an optical system (or in the
case of the eye, the pupil), and the MTF conveys the diminution of the contrast in an
image as a function of spatial frequency, which further is defined as the visual detail per
degree of angular aperture. These functions are, however, formed on the basis of statistical
data, and herein lies the uncertainty of their accuracy (Gross et al., 2007; Goodman, 1996;
Holmqvist et al., 2010).
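The relation between the two functions can be illustrated numerically. The following sketch is not part of the thesis; it assumes an arbitrary Gaussian stand-in for the eye's true point spread function and computes the MTF as the normalised modulus of the Fourier transform of the PSF:

```python
import numpy as np

# Sketch (not from the thesis): the MTF is the modulus of the Fourier
# transform of the PSF, normalised so that MTF(0) = 1.  A Gaussian PSF
# of assumed width sigma stands in for the eye's measured PSF.
x = np.linspace(-5, 5, 1024)           # spatial coordinate, arbitrary units
sigma = 0.5                            # assumed PSF width
psf = np.exp(-x**2 / (2 * sigma**2))   # Gaussian point spread function

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                          # normalise: full contrast at zero frequency

freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])  # spatial frequencies

# Contrast transfer falls off monotonically with spatial frequency,
# i.e. the optical system acts as a low-pass filter.
assert mtf[0] == 1.0
assert np.all(np.diff(mtf[:20]) < 0)
```

Since the Fourier transform of a Gaussian is again a Gaussian, the contrast transfer decays smoothly towards zero, consistent with the low-pass behaviour of optical systems.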
Ideally, the relation between the contrast and all values of spatial frequencies is 1:1,
but seeing as how all optical filters – including the eye – function as low-pass filters, they
do in reality yield a reduced contrast for higher values of spatial frequency. Figure 2 below
depicts the contrast sensitivity as a function of spatial frequency. The contrast sensitivity
function conveys the extent to which the ganglion cells, i.e. the neurons of the retina, are
sensitive to a harmonic stimulus (Gross et al., 2007).
Figure 2. The contrast sensitivity as a function of spatial frequency, given in cycles per degree. At low values of spatial
frequencies, signals from the centre and vicinity of the neuron
abate one another, thereby attenuating the sensitivity to the
stimulus. Intermediary values of the spatial frequency yield optimal responses as the distribution of the light and dark bars
is symmetric over the centre of the neuron. At high spatial frequencies, the contrast is, again, averaged by the central region
of the neuron (Wikimedia Commons, 2013).
2.2 Light adaption
The underlying pupillary motor system of the iris has an intricate way of responding to
differences in light intensity. In response to intense light, the subcortical-network-regulated
PLR rapidly brings forth a constriction of the pupil so as to protect the retinal
receptors, through an innervation of the circular fibres of the iris tissue by the neurons
of the PSNS (Andreassi, 2007; Endestad and Laeng, 2012). In scotopic
vision, i.e. low-intensity light conditions, signals from the rods cause the pupils to dilate
via the radial fibres that are governed by the SNS, thereby increasing the intake of light
from what is available from surrounding object surfaces. Similarly, in photopic vision, more
luminous conditions stimulate the cones, which furthermore are individually activated by
the concentric fibres of the SNS as high-intensity light and colour compositions are being
processed (Pedrotti, Pedrotti and Pedrotti, 2007).
Light adaption and perception are, however, slightly subjective, and the proportional
relation between the actual and perceived brightness was in 1860 illustrated by the
Fechner-law approximation (Hecht, 1924):

u(x) = A \ln(x/x_0)    (1)

where u denotes the subjective magnitude of the psychological sensation, A is a constant
that depends on the stimuli, x denotes the magnitude of the physical stimulus, and x_0 the
threshold value of that stimulus.
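As a numerical illustration of equation (1), the following sketch uses hypothetical values of A and x_0 (not taken from the thesis) to exhibit the logarithmic compression of perceived brightness:

```python
import math

# Illustrative only: A and x0 are arbitrary made-up values, not
# parameters reported in the thesis.
def fechner(x, x0=1.0, A=2.0):
    """Subjective sensation magnitude u(x) = A * ln(x / x0)."""
    return A * math.log(x / x0)

# At the sensation threshold x = x0 the perceived magnitude is zero...
assert fechner(1.0) == 0.0
# ...and each e-fold increase in physical intensity adds the same
# constant amount A to the sensation (logarithmic compression).
assert math.isclose(fechner(math.e) - fechner(1.0), 2.0)
assert math.isclose(fechner(math.e**2) - fechner(math.e), 2.0)
```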
It has furthermore been proven that the eye, after perceiving light for a certain time,
can give rise to afterimages, which, if being layered on a congruent achromatic image, can
give rise to quite distinct qualities (Daw, 1962). This discovery was epitomised in a visual
illusion (called the "Spanish castle illusion") by John Sadowski in which the chromatic
components of the illusion were desaturated and the hue was inverted. When viewing a
black and white version of the image immediately after being exposed to the desaturated
image, it was learnt that the image was believed to be seen in colour, something that was
interpreted with the parsimonious explanation that the afterimage from the previously
seen image had been displaced onto the achromatic image (Anstis, Vergeer and Van Lier,
2012).
To elaborate: the photoreceptors in the retina become overexposed to a certain stimulus
over a long time, and by focusing on a specific point, such as a dot, a temporary
desensitisation of the cone cells in the retina occurs. Upon the removal of the stimulus and
an immediate exposure to an achromatic version of the image, the cone cells that have not
detected any colours, and thus have not been exhausted, briefly project those colours onto
the black-and-white image, because the occipital lobe in the brain is still processing light,
thereby giving rise to an image that is perceived to be in colour (Anstis,
Vergeer and Van Lier, 2012; BBC Four, 2015).
2.3 Noise filtering
Due to the fact that both high and low frequency noise are generated from both eye
movements and the eye tracking device itself, noise reduction is a necessity for obtaining
manageable signals. The so-called flicker noise that is generated by the eye tracking
equipment and that requires reduction encompasses both pink and white noise, the former
being defined as noise in which every octave (an interval between two pitches, one of which
has twice the frequency of the other) carries the same amount of energy,
and the latter being defined as arbitrary fluctuations that have equal power throughout all
bandwidths (Coey, Wallot, Richardson and Van Orden, 2012; Stoyanov, Gunzburger and
Burkardt, 2011; Wang et al., 2016).
As opposed to white noise, for which the energy is independent of the frequency, the
power of pink noise – or fractal noise – is inversely proportional to its own frequency. The
power spectral density S(f), denoting the statistical average of the frequency components
constituting a signal, can for fractal patterns thus be expressed as (Wang et al., 2016):

S(f) = \frac{1}{f^{\alpha}}    (2)

where the scaling exponent α denotes a signal-dependent parameter which, in the case of
pink noise, assumes a value in the vicinity of 1 (Coey, Wallot, Richardson and Van Orden,
2012), which further indicates that all octaves composing it carry the same
amount of energy (Wang et al., 2016).
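The scaling behaviour in (2) can be verified numerically. In the following sketch (illustrative only; the sample count, random seed, and unit sampling rate are arbitrary choices), a pink-noise time series is synthesised by spectral shaping, and the exponent α is then recovered from a log-log fit to its periodogram:

```python
import numpy as np

# Synthesise a 1/f ("pink") signal by shaping a flat spectrum with
# random phases, then recover the scaling exponent alpha of equation (2)
# from a log-log fit to the periodogram.
rng = np.random.default_rng(42)
n = 2**14
f = np.fft.rfftfreq(n, d=1.0)                # frequency bins (fs = 1)

amp = np.zeros_like(f)
amp[1:] = f[1:] ** -0.5                      # |X(f)| ~ f^(-1/2)  =>  S(f) ~ 1/f
spectrum = amp * np.exp(2j * np.pi * rng.random(f.size))
signal = np.fft.irfft(spectrum, n)           # real-valued pink-noise time series

# Periodogram of the synthesised signal (drop the DC and Nyquist bins).
psd = np.abs(np.fft.rfft(signal)) ** 2
slope, _ = np.polyfit(np.log(f[1:-1]), np.log(psd[1:-1]), 1)
alpha = -slope                               # S(f) = 1/f^alpha  =>  slope = -alpha

assert abs(alpha - 1.0) < 0.05               # scaling exponent close to 1
```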
The genesis of (2) dates back to 1925, to an experiment executed by J.B. Johnson with
the aim of studying the Poisson noise (i.e. noise modelled by a random mathematical
object consisting of points scattered randomly in a mathematical space) in vacuum tubes.
Under the assumption that the discharge of electrons from the cathode inside the tube
in Johnson's experiment gave rise to a current pulse, Walter Schottky mathematically
described the results through the exponential relation (Johnson, 1925; Schottky, 1926):

N(t) = N_0 e^{-\lambda t}    (3)

where λ denotes the relaxation rate, i.e. the rate of change of the signal as a function
of time, for which t ≥ 0.
For a flow of current pulses, however, the power spectrum takes the form of a Lorentzian
probability distribution (Schottky, 1926):

S(f) = \frac{N_0^2\, n}{f^2 + \lambda^2}    (4)

where n denotes the average pulse rate. By taking a superposition of the above Lorentzian
process with a range of relaxation rates uniformly distributed between λ_1 and λ_2,
one yields formula (2) for frequencies lying in the range λ_1 ≪ f ≪ λ_2 (Bernamont,
1937).
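This superposition argument can be checked numerically. The sketch below (with N_0, n, and the rate interval chosen arbitrarily for illustration) averages the Lorentzian spectrum (4) over uniformly distributed relaxation rates and confirms the 1/f slope in the intermediate frequency range:

```python
import numpy as np

# Numerical check of the superposition argument: averaging the
# Lorentzian spectrum (4) over relaxation rates lambda drawn uniformly
# from [l1, l2] should give S(f) ~ 1/f for l1 << f << l2.
# N0^2 * n is set to 1; all numbers are purely illustrative.
l1, l2 = 1e-3, 1e3
lam = np.linspace(l1, l2, 2_000_000)         # uniform distribution of rates

def s_superposed(f):
    # mean over lambda of 1 / (f^2 + lambda^2)
    return np.mean(1.0 / (f**2 + lam**2))

# Sample frequencies well inside (l1, l2) and fit the log-log slope.
freqs = np.logspace(-1, 1, 9)                # 0.1 ... 10, all >> l1 and << l2
s = np.array([s_superposed(f) for f in freqs])
slope, _ = np.polyfit(np.log(freqs), np.log(s), 1)

assert abs(slope + 1.0) < 0.02               # S(f) ~ f^(-1), i.e. pink noise
```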
Apart from occurring as a product of human data in eye tracking, pink noise occurs
in diverse fields of nature, such as in semiconductors, fluctuations in heart rate, vacuum
tubes, and in emissions of the highly luminous quasars, all of which have power spectral
densities that roughly align with the theoretical model (Caloyannides, 1974; Dutta and
Horn, 1981; Johnson, 1925; Glass, 2001; Wang et al., 2016). White noise, on the other hand,
predominantly occurs due to random fluctuations and more specifically, in the context of
eye tracking, it is produced by the eye tracking equipment itself (Wang et al., 2016).
Since noise, up to a few degrees of visual angle, can cloak minor differences in gaze
location, data filtering becomes a prerequisite for obtaining clear results. Data filtering is
implemented through filtering systems such as the heuristic and the bilateral one, which
both are inherent in the system of the equipment as well as active during the recording
of the data. But whereas the heuristic filter reduces the impurities in the peak data by
replacing impulse noise of a sample with a latent sample and then replacing the ramp noise
of two samples with two latent samples, the bilateral filter is two dimensional and does not
insert any latencies, but rather preserves the sharp boundaries of large signal shifts, whilst
averaging the diminutive changes in the signal that originate from noise by collating and
replacing the intensity of each pixel with a weighted average of its adjacent data values
(Stampe, 1993; SMI, 2011).
Furthermore, low and high-pass filters can be applied retroactively, either with the
purpose of suppressing noise, or to enhance details. Low-pass filtering readily passes low
frequencies whilst attenuating high ones, by so doing reducing the noise but yielding an
image with a more indistinct appearance. High-pass filters, on the other hand, operate
in the opposite way by increasing the brightness of the central pixel relative to the ones
adjacent to it, thereby emphasising the finer details of the image (including the noise) and
reducing the blur (Makandar and Halalli, 2015).
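The retroactive processing applied to the pupil data in this thesis, a linear interpolation followed by a 10 Hz low-pass filter, was implemented in MATLAB. The following Python sketch reproduces the idea on synthetic data; note that the ideal (brick-wall) FFT filter below is a stand-in assumption, not necessarily the filter design actually used:

```python
import numpy as np

# Rough numpy-only analogue of the preprocessing: linearly interpolate
# blink gaps, then low-pass the pupil trace at 10 Hz.  All signal
# values here are synthetic.
fs = 1250.0                                    # sampling rate of the eye tracker, Hz
t = np.arange(0, 4.0, 1.0 / fs)                # 4 s of samples

# Synthetic pupil diameter: slow 0.5 Hz drift plus 50 Hz measurement noise.
pupil = 4.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 50 * t)
pupil[1000:1300] = np.nan                      # a blink: samples lost for 240 ms

# 1) Linear interpolation across the blink gap.
bad = np.isnan(pupil)
pupil[bad] = np.interp(t[bad], t[~bad], pupil[~bad])

# 2) Brick-wall low-pass at 10 Hz: zero all Fourier components above cutoff.
spec = np.fft.rfft(pupil)
freqs = np.fft.rfftfreq(pupil.size, d=1.0 / fs)
spec[freqs > 10.0] = 0.0
filtered = np.fft.irfft(spec, pupil.size)

assert not np.isnan(filtered).any()            # gaps are gone
# The 50 Hz noise is removed while the slow drift survives:
assert np.std(filtered - (4.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t))) < 0.05
```

The interpolation must precede the transform, since a single NaN sample would otherwise propagate into every Fourier coefficient.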
Both filters are based on the DFT, which transforms the image to the frequency domain
in accordance with (Gonzalez and Woods, 2008):

F(u, v) = \frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} f(x, y)\, e^{-2\pi i (ux/M + vy/N)}    (5)

where u and v denote the spatial frequency variables, the natural exponential
function denotes the basis function, and M and N denote the number of pixels in
the x- and y-direction, respectively. The image that is yielded by the inverse Fourier transform
can be expressed by (Gonzalez and Woods, 2008):
f(x, y) = \sum_{u=1}^{M} \sum_{v=1}^{N} F(u, v)\, e^{2\pi i (ux/M + vy/N)}    (6)

Fourier analyses can be regarded as a more sophisticated operation when it comes to
analysing pupillary data.
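A minimal two-dimensional illustration of (5) and (6), using synthetic data and an arbitrary cutoff frequency, shows the ideal low-pass filtering described above:

```python
import numpy as np

# Transform a synthetic image to the frequency domain, zero out the
# high spatial frequencies, and transform back.  The "image" is a
# low-frequency sinusoidal pattern plus a pixel-level checkerboard;
# the 0.2 cycles/pixel cutoff is an arbitrary illustrative choice.
M = N = 64
y, x = np.mgrid[0:M, 0:N]
smooth = np.sin(2 * np.pi * 2 * x / N)        # 2 cycles across the image
detail = 0.5 * ((x + y) % 2)                  # checkerboard at the Nyquist frequency
image = smooth + detail

F = np.fft.fft2(image)                        # forward DFT, cf. equation (5)

# Ideal (brick-wall) low-pass: keep frequencies below 0.2 cycles/pixel.
fy = np.fft.fftfreq(M)[:, None]
fx = np.fft.fftfreq(N)[None, :]
mask = np.sqrt(fx**2 + fy**2) < 0.2
lowpassed = np.real(np.fft.ifft2(F * mask))   # inverse DFT, cf. equation (6)

# The checkerboard is removed (only its mean level, 0.25, survives),
# while the low-frequency pattern passes through unchanged.
assert np.allclose(lowpassed, smooth + 0.25, atol=1e-9)
```

This is exactly the sense in which a low-pass filter blurs: the fine detail is discarded while the coarse structure is retained.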
3 Method

3.1 Experimental set-up
An eye tracker with an integrated iView X Hi-Speed system was used in the experiment,
incorporating both camera technology of high resolution and complex image processing
algorithms, with a data acquisition rate of 1250 Hz (i.e. 1 sample/0.8 ms).
(a) The adjustment panel above the
mirror allows for eye selection (monocular or binocular) and camera focus,
and its adjacent mirror holder for adjusting the fastening of the mirror,
which can be tilted to a maximum of
45◦ .
(b) Image showing the position of the
body when using the eye tracker.
Figure 3. The Hi-speed tracking column that was utilised in
the experiment. Source: Used by permission of SMI.
An independently operating tracking column in the eye tracker contained the infrared light source.
In its uppermost parts, the module furthermore contains an integral high-speed camera
whose main purpose is to process the infrared illuminated eye-image that is reflected to
it from an adjustable mirror situated above the view aperture, see Figure 3 above. The
Hi-Speed camera system uses invisible LED-radiation with a wavelength of 910 nm to
illuminate the eye, and registers the entire area of the pupil, by so doing acquiring a
great amount of data and reducing the amount of obtained noise. The LED-radiation is
furthermore of Class 1, meaning that it is safe to use under all normal circumstances and
conditions.
One side of the mirror functions like a dielectric mirror, as it spectrally divides the
incoming light by reflecting the infrared light back into the light source and transmitting
light in the visible region of the electromagnetic spectrum. The reason for this behaviour
is the fact that it has been coated with a sequence of thin films of dichroic filters, namely
the high-refractive-index semiconductor titanium dioxide, TiO2, which has a refractive
index of n = 2.30 and a band gap of approximately 3.0 eV, as well as the low-index material
silicon dioxide, SiO2, which has a refractive index of n = 1.46 and a band gap of 8.9 eV
(Nowotny, 2011; Sah, 2007; Smith, 1966). Due to its wide band gap, silicon dioxide possesses a
low conductivity and acts as an interlayer insulator in the film structure.
Furthermore, the thin films of the filters have an optical thickness of λf /4, which denotes
the physical thickness d of the coatings multiplied by its refractive index n. Whilst the
physical thickness of the coatings determines the amount of phase shift that is acquired by
an incident wave, the different and alternating high and low refractive indices n allow a
specific range of the incident wavelengths to constructively interfere with one another in
phased reflections as they are being reflected from each of the layers, at the same time
transmitting wavelengths lying outside that range (Mansuripur, 2002; Smith, 1966). Due
to the fact that the layers have been deposited in vacuum, the reflectivity increases as
the number of layers is increased, exceeding as much as 99%. The high reflectivity prevents
the rise of waste heat in the apparatus; thus, dichroics exhibiting this kind of behaviour
are often referred to as hot mirrors (Smith, 1966).
The fraction of light that is reflected by the glass is given by the Fresnel equation
(Smith, 1966):

R_{glass} = \frac{1}{2}\left[\frac{\sin^2(I - R)}{\sin^2(I + R)} + \frac{\tan^2(I - R)}{\tan^2(I + R)}\right]    (7)
in which I and R denote the angles of incidence and refraction, respectively. The first
term within the brackets denotes the reflection of the portion of light that is polarised
normal to the meridional plane, i.e. that is s-polarised, and the second term denotes the
p-polarised light, i.e. the portion of light that is reflected parallel to the meridional plane.
When the light falls perpendicularly onto the surface (i.e. when the angle of incidence is 0),
this reduces to:
R = \frac{(n' - n)^2}{(n' + n)^2}    (8)

where n and n' denote the refractive index of the medium through which light first
propagates and the refractive index of the second medium, respectively.
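For a concrete figure, equation (8) can be evaluated for an air-to-glass boundary; the values below (n = 1.00 and n' = 1.52, the substrate index quoted later in the text) are illustrative:

```python
# Normal-incidence reflectance from equation (8).
def normal_incidence_reflectance(n, n_prime):
    return (n_prime - n) ** 2 / (n_prime + n) ** 2

# Air (n = 1.00) to glass (n' = 1.52): roughly 4.3% of the light is
# reflected at the boundary.
R = normal_incidence_reflectance(1.00, 1.52)
assert abs(R - 0.0426) < 0.001
```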
Due to the fact that light propagates like transverse waves, with oscillations that are
orthogonal to the direction of its transmission, the motion of the wave can be regarded as
a vector sum of two distinct oscillations in planes that are perpendicular to one another.
Removing either of these two components from the beam of light (e.g. by allowing the
light to pass through a polarising prism) would give rise to plane-polarised light (Smith,
1966). When light is reflected from the dielectric mirror, it gives rise to a comparative
phase shift of the incident light, i.e. between the s-polarised and p-polarised components.
The polarisation of the states themselves is, however, preserved (Smith, 1966).
The coefficient of the maximum reflection of stacks of films of dichroic filters is
furthermore given by (Pedrotti, Pedrotti and Pedrotti, 2007):

R_{\max} = \left[\frac{(n_0/n_s)(n_L/n_H)^{2N} - 1}{(n_0/n_s)(n_L/n_H)^{2N} + 1}\right]^2    (9)

in which N denotes the number of alternating high-low double layers with an optical
thickness of λf/4, n_0 is 1, n_s denotes the refractive index of the substrate (which in the
case of glass is n_s = 1.52), and n_H and n_L denote the refractive indices of the high and low
layers, respectively (Pedrotti, Pedrotti and Pedrotti, 2007).
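Equation (9) can be evaluated directly for the TiO2/SiO2 indices quoted above; the layer counts below are arbitrary illustrative choices:

```python
# Peak reflectance of a quarter-wave stack from equation (9), with
# n0 = 1 (air), ns = 1.52 (glass substrate), nH = 2.30 (TiO2) and
# nL = 1.46 (SiO2), as quoted in the text.
def r_max(n0, ns, n_l, n_h, num_double_layers):
    ratio = (n0 / ns) * (n_l / n_h) ** (2 * num_double_layers)
    return ((ratio - 1) / (ratio + 1)) ** 2

# Reflectivity climbs towards 1 as double layers are added...
assert r_max(1.0, 1.52, 1.46, 2.30, 2) < r_max(1.0, 1.52, 1.46, 2.30, 8)
# ...and eight TiO2/SiO2 double layers already exceed the 99% quoted above.
assert r_max(1.0, 1.52, 1.46, 2.30, 8) > 0.99
```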
3.2 Software and stimuli
The raster graphics editor programme GIMP was used to create the equiluminant images
that gave illusions of light and darkness, respectively, see Figure 4 below. The images
presented in Figure 4a are reconstructions of the so-called Asahi-images in the study conducted by Endestad and Laeng in 2012. The average luminosity across these images and
the image depicting the real change in luminosity was 227 units in the HSL-system.
The two sets of images that were used for measuring the pupillary diameter in relation
to aftereffects were also created using the GIMP raster graphics editor and are presented
below in Figures 5a-c and 6a-c. To yield an aftereffect in the colour experiment, the
luminosity of the original image was dampened and the colours were desaturated.
The grey-inverted images, presented below in Figures 5a and 6a, were created so as
to have a control condition with which the effects of the colour illusions preceding the
black-and-white image could be compared. The luminance of both the grey-inverted and
the desaturated version of the crop field under a cloudy sky (Figure 5) was 192 units in the
HSL-system, whereas the luminance of the barley field image (Figure 6) was 164 units in
the HSL-system. The black-and-white screens following these stimuli, presented in Figures
5c and 6c below, were 118 and 129 units in the HSL-system, respectively.

Figure 4. Images that were created for observing the changes of the pupillary size in
relation to visual illusions and actual differences in light intensity of the same luminosity.
(a) Reconstructions of the visual illusions that were used in the experiment conducted in
2012, giving illusions of light and darkness, respectively. (b) The image created to depict
the actual physical difference in light intensity between the centre and the inner parts of
the edges.
Figure 5. One of the two sets of images that were used for observing the changes of the
pupillary size in relation to the potential occurrence of afterimages. (a) The grey-inverted
image of the crop field under a cloudy sky. (b) The desaturated version. (c) The greyscale
version. Source of the original image (not shown): Pexels.com.
Inherent software programmes of SMI were used to display, record and export the data,
respectively: Experiment Center 3.6 was used for running the slide show of the stimulus
images and for running the calibration programme, iViewX 2.7 was used for implementing
the recording of the pupil, and the obtained data were exported using the software
programme BeGaze 3.6 (SMI, 2011).
Figure 6. The second of the two sets of images that were used for observing the changes of
the pupillary size in relation to the potential occurrence of afterimages. (a) The
grey-inverted image of the barley field. (b) The desaturated version. (c) The greyscale
version. Source of the original image (not shown): Wikimedia Commons.
3.3 Procedure
Upon being informed about – and consenting to – the terms of the experiment (such
as anonymity and their right to withdraw themselves/their data from the study), each
participant placed their chin on the chin rest of the eye tracking equipment, so as to yield
a constant pixel size of the pupil on the computer screen connected to the set-up. A proper
image, deprived of reflections, was obtained by adjusting the tilt of the mirror.
Figure 7. Images obtained with the eye tracking equipment. (a) Video image obtained
with the software programme iViewX 2.7, with correct identifications of the pupil and the
corneal reflection, respectively. (b) Recorded image used for converting the diameter of
the pupil from pixels to mm.
The experimental set-up was placed in a room with a constant illuminance of 225 lux.
Prior to the execution of each set of measurements, the tower-mounted eye tracker, placed
at a fixed distance of 67 cm from a computer screen, was calibrated using a 5-point linear
calibration routine with validation, and the pupil and corneal reflection of the eye of the
participant were identified by cross-hair cursors in a video image, see Figure 7a. The calibration was implemented by having the participant fixate on a large black dot that moved to five different positions across the screen. The computer connected to the system traced the gaze of each participant and both calculated and corrected for errors in the tracing during the calibration procedure.
As the stimulus images were gradually displayed, the area of the pupil was measured by means of IR at intervals of 0.8 ms. The stimuli were displayed on a computer screen with a resolution of 1280x1024 pixels and a screen size of 380x300 mm. The experiment lasted approximately 30 minutes per participant.
The first part of the experiment consisted of three sets of stimulus images. Each participant was initially desensitised to a greyscale image showing a change in physical luminance, shown in Figure 4b. Afterwards, they were successively and repeatedly presented with three
sets of images. In each of the sets, three completely grey images were shown and displayed
for 1s each, before being followed by either one of the Asahi-images yielding illusions of
darkness and brightness, respectively, or the image depicting an actual difference in light
intensity, as shown in Figures 4 a-b. The participant was asked to focus on the centre of
these images whilst they were being displayed. In-between each of the three sets of images,
a white noise image was shown for 4s, during which the participants were able to both
rest their eyes and blink, with the latter being something they were asked to avoid doing
during the stimuli presentations. The main purpose of the noise image was, however, to
eliminate possible afterimages from previous stimuli.
The second part of the experiment consisted of four sets of stimulus images. As before, the participant was in each set shown two completely grey images for 1s each, after which they were exposed for 8s to desaturated versions of images depicting wheat fields, as well as to equiluminant grey-inverted versions of those images, which were not expected to give rise to afterimage effects. In each of these cases, this was followed for 8s by a black-and-white version of the image that had been shown. In-between each of the four sets of images, a white noise image was shown for 4s.
To minimise the noise in the obtained signals and increase the internal consistency
reliability of the results, each set of images was displayed five times for each participant
and the order of display of the different sets of images was in each case randomised.
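The rationale for the five repetitions can be sketched numerically: averaging n independent repetitions of a condition shrinks the standard error of the mean trace by a factor of the square root of n. The per-trial noise level below is a hypothetical figure for illustration, not a value measured in the experiment.

```python
import math

# Hypothetical per-trial noise level of the pupil signal [mm] (illustrative).
sigma = 0.05
n_trials = 5

# Standard error of the mean over n independent repetitions.
se = sigma / math.sqrt(n_trials)

# Five repetitions reduce the noise on the averaged trace by sqrt(5) ~ 2.24.
print(round(sigma / se, 2))  # → 2.24
```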
All pixel lengths were converted into millimetres by dividing the pupil diameter of one
recording – read in pixels by the camera – by a conversion factor that was obtained in
another recording in which a tape measure was placed in front of the eye of one of the
supervisors of the thesis, as shown in Figure 7b.
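The conversion step can be sketched as follows; the pixel count and the pixels-per-millimetre factor in the example are hypothetical stand-ins for the values read off the tape-measure recording.

```python
def pixels_to_mm(diameter_px, px_per_mm):
    """Convert a pupil diameter read in pixels by the camera to millimetres."""
    return diameter_px / px_per_mm

# Hypothetical example: suppose the tape-measure recording showed that 1 mm
# on the tape spans 7.0 camera pixels; a 28-pixel pupil then measures 4 mm.
print(pixels_to_mm(28.0, 7.0))  # → 4.0
```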
The sampled raw data were exported to MATLAB, where nictations (blinks, which appeared as zero-data) and unreasonable data points (such as pupillary diameters lying outside the range of what is physiologically possible) were removed, along with 50 data samples before and after those points. The removed data points were then corrected with a linear interpolation. This was performed by first calculating the difference between the last valid data point preceding and the first valid data point succeeding the artifacts (i.e. gaps or inaccurate measurements that occurred as a result of the investigative procedure or the eye tracker), then dividing it by the number of data points that the artifacts contained plus one, and
finally by successively and incrementally substituting the artifact-data with these values.
Jitter and high-frequency noise were thereafter smoothed out by the use of a low-pass filter
of 10 Hz.
The codes that were used for implementing the correction and low-pass filter were
provided by the supervisor of this thesis and adapted to the purpose of the thesis.
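A minimal Python sketch of the pipeline described above (the actual analysis was performed in MATLAB with code provided by the supervisor; the physiological plausibility bounds below are illustrative assumptions, while the 1250 Hz sampling rate, 50-sample margin, and 10 Hz cut-off follow the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1250.0   # sampling rate of the eye tracker [Hz]
MARGIN = 50   # samples removed before/after each artifact

def clean_pupil_trace(x, d_min=1.0, d_max=10.0):
    """Remove blinks/implausible samples, interpolate linearly, low-pass at 10 Hz."""
    x = np.array(x, dtype=float)
    # Blinks appear as zero-data; d_min/d_max are assumed plausibility bounds [mm].
    bad = (x == 0) | (x < d_min) | (x > d_max)
    # Widen each artifact by MARGIN samples on both sides.
    for i in np.flatnonzero(bad):
        bad[max(0, i - MARGIN):i + MARGIN + 1] = True
    good = ~bad
    # Linear interpolation: step evenly between the last valid sample before
    # and the first valid sample after each gap.
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    # Zero-phase 10 Hz low-pass to smooth jitter and high-frequency noise.
    b, a = butter(4, 10.0 / (FS / 2), btype="low")
    return filtfilt(b, a, x)
```

For example, a trace with a short run of zero-valued blink samples comes back the same length, gap filled and smoothed.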
4 Results
The conversion factor between pixels and millimetres was found to be 0.14381, i.e. one pixel corresponds to 0.14381 mm.
Using the values that were stated in section 3.1, and knowing that the number of layers N in a dichroic filter lies within the range 20 ≤ N ≤ 50 (High End Systems, 2015), the maximum reflection coefficient of the thin stacks of films of the dichroic filters was calculated with formula (9) to be Rmax ≈ 1.
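Formula (9) is not reproduced in this section; as a cross-check, the sketch below uses a standard textbook expression for the peak reflectance of a quarter-wave stack of N high/low-index layer pairs (see e.g. Pedrotti et al., 2007). The refractive indices are illustrative assumptions, not the thesis values.

```python
def quarter_wave_peak_reflectance(n_H, n_L, n_s, N, n_0=1.0):
    """Peak reflectance of an (HL)^N quarter-wave stack on a substrate n_s."""
    # Effective admittance ratio of the stack; grows geometrically with N.
    Y = (n_H / n_L) ** (2 * N) * n_H ** 2 / (n_0 * n_s)
    return ((Y - 1.0) / (Y + 1.0)) ** 2

# Illustrative indices: n_H = 2.3 (e.g. TiO2), n_L = 1.46 (e.g. SiO2),
# glass substrate n_s = 1.52. Already at N = 20 pairs, R is essentially 1.
R = quarter_wave_peak_reflectance(2.3, 1.46, 1.52, 20)
print(R > 0.999999)  # → True
```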
The data that were obtained in the experiment in which the participants were exposed
to the reconstructions of the Asahi-images are illustrated in Figure 8, and the results of
the pupillary changes during the inspections of the colour illusions are presented in Figure
9. In the figures, the change in the pupillary aperture is plotted against the number of
samples, with each sample corresponding to 0.8 ms.
Figure 8. Recorded pupillary responses to all stimuli in the visual illusion experiment; 5 000 samples correspond to 4 seconds of recording, and the dashed lines denote the standard error. Figure 8a depicts the average of the entire set of raw data, and Figure 8b depicts the final condition for each of the stimuli.
In the final conditions, improbable pupillary sizes (such as diameters exceeding those that
are physiologically possible) were removed, a linear interpolation was performed, a mean
value of the 5 trials in each of the conditions was calculated, and a low pass filter of 10 Hz
was applied.
Figure 9. Recorded pupillary responses to all stimuli in the colour illusion experiment; 10 000 samples correspond to 8 seconds of recording. Figure 9a depicts the average of the entire set of raw data, and Figures 9b and 9c depict the pupillary responses to Figures 6b and 5b, respectively.
Prominent critical values that were acquired from the results of both experiments, including
the pupillary contraction change, the maximum peak response, and the final value of the
responses, are presented below in Tables 1 and 2.
Table 1. Pupillary data of the brightness experiment, conveying the change in the diameter of the pupillary aperture with respect to the starting point.

Stimulus                 Contraction [mm]   Max. dilation [mm]   End response [mm]   Mean change [mm]
Illusion of darkness     -0.1153            0.1775               0.1775               0.0437
Illusion of brightness   -0.1465            0.0648               0.0350              -0.0112
Real change in light     -0.0917            0.1699               0.1543               0.0665
Table 2. Pupillary data of the colour experiment, conveying the change in the diameter of the pupillary aperture with respect to the starting point.

Stimulus                        Contraction [mm]   Max. dilation [mm]   End response [mm]   Mean change [mm]
Cloudy field illusion           -0.3391            0.1395                0.0644             -0.0250
Cloudy field without illusion   -0.3243            0.0282                0.0605             -0.1061
Barley field illusion           -0.2490            0.0897                0.0480             -0.0279
Barley field without illusion   -0.2967            0.0441               -0.0187             -0.0738

5 Discussion
The results of the experiment in which the effects of the visual illusions were studied and compared with an actual change in light intensity were in agreement with those acquired by Laeng and Endestad (2012). As can be seen in Figure 8b, depicting the mean of each set of samples, the pupillary aperture is indeed affected by visual illusions to a much greater extent than by actual differences in light intensity. Whereas the change in the pupillary aperture was a contraction of approximately 0.10 mm when the eyes were exposed to a real change in luminance (Figure 4b), the pupil constricted approximately 0.15 mm when exposed to the illusion of brightness shown in Figure 4a, thereby confirming the hypothesis. It can thus be inferred that there is no strictly linear relationship between the light intensity to which we are exposed and the pupillary size that occurs as an adaptation to it. This is because the perceptual input affects the pupil adaptation, which might further suggest that the cognitive input is greater when perceiving light. Table 1 also makes clear that the change became intensified with time: the end responses of the illusion of brightness and the real change in luminosity differ from each other to a much greater extent than the initial contractions and the maximum dilations do. Since the segment of the graphs in Figure 8 that depicts the eye recording data as the pupil redilated further shows that the pupil constantly remained more constricted when exposed to the visual illusion image
compared to its equiluminant counterpart, the effects of the images, deduced from the
tabular values, must have been constant. All this suggests that the pupillary aperture is
more affected by perceived brightness than it is by the amount of light that it actually is
being exposed to.
Interestingly, however, the illusion of darkness yielded a greater contraction than the
real change in luminosity, perhaps hinting that the design of the real change in luminosity
was not entirely efficient. The illusion of darkness did, however, also yield a greater change
when it comes to dilation, as seen in Table 1 above. This could suggest that the participants’
average contrast sensitivity was closer to an optimal value when they were viewing the
image depicting the illusion of darkness compared to when they were exposed to the real
change in luminosity, to which the pupillary light adaptation was time-dependent. Due to the
clear line, or contrast, between the grey and the white segment, the image depicting a real
change in luminosity possessed a spatial frequency that yielded a symmetric distribution
of light and dark bars across the neurons perceiving them.
The results of the second experiment, in which the cerebral colour perception and projection were explored by exposing the eyes of the participants to desaturated images and grey-inverted images, made it evident that the pupil responds to exhaustion of photocells.
In Figure 9b, it is clearly seen that the pupil contracted less subsequent to the exposure
of the colour illusion compared to the grey-inverted image. The fact that a smaller change
in light difference was perceived after the colour illusion was removed might suggest that
the colour illusion was stronger relative to the grey-inverted image, as the eyes yet had not
adapted to the achromatic image. Furthermore, among the participants who were aware
of the fact that they were projecting colour onto the achromatic images, there was a clear
consensus that the crop field under a cloudy sky yielded a stronger projection effect, which
is mirrored by the results in Figure 9c, where it can be seen that the pupil dilated to a much
greater extent than in the case of the other colour illusion, and where the difference of
the effects of the colour illusion image and grey-inverted image remains much greater over
time.
Slightly after 4s, i.e. around 3500 samples, it can be seen that the redilation of the
pupil as a response to the colour illusion greatly overtakes that of the grey-inverted image,
suggesting that the afterimage-effect ceased after 3s and that the pupil thereafter dilated
as a more direct adjustment to the black-and-white image on the display.
Similarly, in the case of the other colour illusion, the pupil contracted more subsequent
to the exposure of the colour illusions compared to the grey-inverted image, as seen in
the greatest minima points in Figure 9c. After approximately 3s, i.e. 2400 samples, the
redilation occurring as a response to the colour illusion overtook that of the grey-inverted
image, as seen in Figure 9c.
Interestingly, some did not perceive an afterimage-effect at all, but their pupils responded in the same way as those who did. In the conversations that we had with the
participants after each of the experiments, it was further noted that the men who participated in the experiment experienced more of an afterimage-illusion compared to the
women, most of whom did not perceive any colours at all when the black and white image
was displayed. This could, perhaps, suggest that some kind of gender-related component
in the occipital lobe plays a role in the way we perceive colour.
A comparison between the differences in the change of the pupillary diameter in Figure 8 and in Figure 9 further shows that the pupillary aperture was
affected by the colour illusions created for this experiment to a much greater extent than
it was affected by the visual illusions. This is, however, most likely due to the fact that the
illusion took place across the entire screen and not just a certain area of it, such as in the
case of the visual illusion, and to some extent because of the fact that the difference between
the colour illusion and the black-and-white screen following it was somewhat greater than
the difference between the Asahi-images.
The fact that the Asahi-images had a higher number of units in the HSL-system and conveyed a higher intensity of brightness compared to the images used for the colour experiment, yet did not yield more significant results, can furthermore be explained by the Fechner-law approximation (formula (1)), which here indicates that the threshold of the stimulus increases proportionally to the luminance of the display. Arguably, since the colour illusion had a lower level of brightness, the threshold level for detecting it became lower as well, hence the greater pupillary contraction.
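The argument can be made concrete with the Weber–Fechner relation: the just-noticeable luminance difference ΔL grows in proportion to the background luminance L, i.e. ΔL = kL. The Weber fraction and the display luminances below are illustrative assumptions, not values from the experiment.

```python
def detection_threshold(L, k=0.02):
    """Just-noticeable luminance difference at background luminance L (Weber's law)."""
    return k * L

# Hypothetical display luminances in arbitrary units: a brighter stimulus
# (like the Asahi-images) versus a dimmer one (like the colour illusions).
bright = detection_threshold(200.0)
dim = detection_threshold(120.0)

# The dimmer display carries the lower absolute threshold, consistent with
# the greater pupillary response to the colour stimuli discussed above.
print(dim < bright)  # → True
```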
But even though the experiment yielded relatively clear and interesting results, there were several sources of error that most likely prevented it from yielding even more accurate results. First of all, the eye tracking equipment itself has some intrinsic features that give rise to noise and affect the yielded data. Also, saccades, i.e. rapid and
ballistic eye movements that occurred as the participants scanned the screen in front of them, might occasionally have prevented the eye tracker from tracing the pupil during the changes in position. Nevertheless, it can be assumed that these effects were
fairly minimal, since the analysis software also filters some of the noise originating from
the aforementioned factors. Tremors and microsaccades, which are likely to occur during
fixations, might not have been identified by the filtering algorithms. But since these factors
are minor, they did most likely not affect the results.
Second of all, the calibration and identification of the pupil and corneal reflection,
respectively, might not have been entirely precise; and if there happened to be a slight deviation from an ideal, foolproof identification, then it is also quite possible that the
diameter of the area that was being recorded did not quite correspond to what was of
interest. Also, since the positions of the gaze direction in the area between the calibration
points are assessed by inherent algorithms in the computer, these estimations might carry
with them a higher inaccuracy.
A 9-point linear calibration would have added an algorithm for corner correction, and
a 13-point linear calibration would have added additional fixation points to enhance the
accuracy. However, since gaze accuracy was not critical in this experiment, a 5-point linear
calibration did most likely suffice.
Furthermore, it was assumed that the right eye of the participants was dominant (since eye dominance to a great extent is linked to right-handedness), which might not have been the case and might therefore have yielded a certain level of inaccuracy in the gaze precision of some of the participants, which furthermore might explain why the calibration procedure was in some instances difficult to implement.
This fundamental physiological difference, along with the fact that the gaze positions of the participants might have differed by a few degrees of angle from one another, can possibly have affected the results in an unfavourable way.
Future experiments should then, perhaps, consider identifying the dominant eye of the
experiment participant before performing the experiment. Since it probably would give
rise to another error if the tracking setting would have to alternate between monocular
and binocular tracking, it would presumably then also be wise to sort out either those who
have a left dominant eye or those who have a right dominant eye. Performing binocular
tracking and then calculating an average value of the data obtained from both eyes, is
another suggestion that ought to yield more accurate – or at least less inaccurate – results,
even if this mostly affects the absolute values of the pupil signals, and to a lesser extent,
the changes within them.
Since the sampling rate of the eye tracking tower was as high as 1250 Hz, the data acquired from the system were less likely to be influenced by eye movements than would have been the case with a slower sampling rate, which would have yielded poorer precision since a larger translation of the eye then occurs between consecutive samples.
Also, it should be stated that various kinds of cognitive processes during the observations of the visual stimuli slightly affected the pupillary response as well, which furthermore
makes it incorrect to assume that there is a direct connection between a stimulus and the
diametrical change of the pupil of the person being exposed to it. This change in the
pupillary diameter did, however, most probably not exceed 0.5 mm.
Whereas some experimental conditions remained constant for all participants (such as
the fact that all procedures were performed under constant illumination), a few others
probably differed, such as the position of their eyes in the tracking box, which might have
varied both for each of the individuals and among them. Also, since the participants had
different levels of e.g. arousal and alertness in the beginning of the experiment, the starting
point of their respective pupillary diameter likely varied, which further may have influenced
the data if the initial value was close to the endpoints of the range of physiologically possible diameter values.
6 Outlook
In conclusion, the project was fairly successful both in terms of implementation and in terms of the outcome of the experiment. From the pupillary responses to the visual stimuli it was more or less evident that the visual cortex of the brain plays a great role in the perception of ambient light, and that the pupillary response does not simply depend on the amount of physical light that the eye is exposed to. Furthermore, in
the colour experiment (as shown in Figure 9) it was found that it took a certain amount
of time for the participants to respond to the black and white-images that were displayed
before them, thus supporting the theory that the occipital lobe still is active for a certain
amount of time due to the exhaustion of the photoreceptors.
The maximum reflectivity of the dichroic stack of films was calculated to be 1. This
experiment did, however, not determine whether or not the rotation of the mirror of the
apparatus affected the reflectivity, and it could thus be beneficial for future researchers
to see how an angular dependence can have an effect on the results. Fabricating the thin
films of dichroic filters in such a way that an ideal maximum reflectivity, i.e. 100%, would
be acquired, regardless of how much the mirror has been rotated, could, perhaps, ensure
that the reflectivity always remains the same for all participants.
Future work could delve deeper into the matters that were touched upon in this project
by exploring to what extent our rods and cones are sensitive to excitations by light and
how a probable over-exhaustion of (primarily) the rods translates onto black-and-white
images that follow the presentation of the light source. The short-range purpose of such an
experiment would be to understand whether or not the combined effects of overexposure
of our rods and cones (by light and colour) can alter the way we see phenomena in both
daily life and research-related scenarios, and the long-range goal being to manufacture
lenses, glasses, or even software programmes that possess the ability to counteract the
perceptual input when viewing visual illusions related to light or colour. This could result
in more objective observations and analyses, regardless of whether the phenomena are being observed directly or are being analysed on a computer screen.
Another factor that could be of great interest for future researchers is to discern whether
or not light that exists in our blind spot (which does not possess any photoreceptors) affects
the pupillary aperture and if that effect depends on the light intensity, wavelength, or
shape of the light source. And an even more physiology-oriented possibility when it comes
to fields of research could be to investigate whether or not there is a relation between
eye pathologies and the susceptibility to perceive/be affected by various kinds of visual
illusions, since all those who participated in this experiment reported to have healthy eyes.
This could further be applied to retinal transplantations, after which one could diagnose
if the function of the eye has returned to the same state as the other, healthy eye, or if it
has remained the same. Such a diagnosis could also be performed prior to the transplant
for a later comparison with a diagnosis of the same eye.
The results of this experiment could also contribute to research on automotive design.
By equipping cars with eye tracking systems that collect the driver's eye data and are thereby able to detect signs of sleepiness, lack of attention and distraction, this kind of research aims to improve driver safety by warning the driver when such detections have been made. And if I were to continue with research on how the pupillary aperture is
affected by visual illusions, for e.g. my Master’s thesis, I would have liked to examine the
gaze behaviour and the behaviour of the pupil under e.g. foggy weather conditions and
when the eye is perceiving mirage-like effects (such as Fata Morgana and heat haze) so as
to contribute to the research on automotive design and ensure driver safety even further.
One could, however, also argue here that such an abdication of responsibility and reliance on technology is dangerous and should be avoided at all costs, since it can give an illusion of safety, causing the very disasters it endeavours to prevent.
In this study, the purpose was to provide some insight into how psychological factors
can affect observations of our surroundings. The concept that was practised can, however,
also be used to reveal things that have been registered by our pupils, but that we are
20
7 BIBLIOGRAPHY
not aware of. For researchers more keen on exploring uncharted psychological territory in
relation to changes in the pupillary aperture, it could thus be of great interest to examine
whether or not there is a pattern in how different kinds of visual or auditory subliminal
messages that are received (or passively absorbed) by the brain, affect the pupil, seeing as
how this to some extent also may determine the likelihood of the subliminal message in
question being perceived by the brain.
On a final note, it can be stated that crossing the threshold between psychology and physics can in several ways prove beneficial for both disciplines. Seeing as how the
study would not have been possible to implement if it had not been for the optical properties
of the eye and the physics behind the set-up, the two seemingly unconnected subjects are
strongly connected and the door between them should therefore always remain open, both
to encourage further advancement and progression, and to give nuances to already acquired
knowledge.
7 Bibliography
Andreassi, J. (2007). Psychophysiology: Human Behavior and Physiological Response. 5th
ed. New Jersey: Lawrence Erlbaum Associates, pp. 289-291.
Anstis, S., Vergeer, M. and Van Lier, R. (2012). Luminance contours can gate afterimage
colors and ”real” colors. Journal of Vision, 12(10), p. 1.
Bernamont, J. (1937). Fluctuations de potentiel aux bornes d'un conducteur métallique de faible volume parcouru par un courant. Annalen der Physik (Leipzig), 7, pp. 71-140.
Coey, C. A., Wallot, S., Richardson M. J. and Van Orden, G. (2012). On the Structure
of Measurement Noise in Eye-Tracking. Journal of Eye Movement Research, 5(4), pp.
2,4.
Caloyannides, M.A. (1974). Microcycle spectral estimates of 1/f noise in semiconductors.
J. Appl. Phys., 45, pp. 307-316.
Daw, N. W. (1962). Why afterimages are not seen in normal circumstances. Nature, 196
(4860), pp. 1143–1145.
Dutta, P., Horn, P. (1981). Low-frequency fluctuation in solids: 1/f noise. Rev. Mod.
Phys, 5(3), p. 497.
Glass, L. (2001). Synchronization and rhythmic processes in physiology. Nature, 410(6825),
pp. 277–284.
Gonzalez, R., Woods, R. (2008). Digital Image Processing. 3rd ed. Upper Saddle River,
New Jersey: Pearson Prentice Hall, p. 95.
Goodman, J. (1996). Introduction to Fourier Optics. 2nd ed. San Francisco: McGraw-Hill, pp. 20-21, 182-183.
Gray, H. (1918). Anatomy of the Human Body. 20th ed. Philadelphia and New York: Lea
and Febiger, p. 1013.
Gross, H., Zügge, H., Peschka, M., Blechinger, F. (2007). Handbook of Optical Systems,
Volume 3, Aberration Theory and Correction of Optical Systems. Weinheim: Wiley-VCH, pp. 147, 151-152.
Hardy, A. and Perrin, F. (1932). The principles of optics. New York: McGraw-Hill Book
Company, Inc., pp. 186-188.
Hecht, S. (1924). The Visual Discrimination of Intensity and the Weber-Fechner Law.
The Journal of General Physiology, 7(2), p. 238.
Hess, E. H. (1972). Pupillometrics. In N. S. Greenfield & R. A. Sternbach (Eds.). Handbook
of psychophysiology, New York: Holt, Rinehart & Winston, pp. 491-531.
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., van de Weijer, J.
(2010). Eye tracking. Oxford: Oxford University Press, p. 174.
Johnson, J.B. (1925). The Schottky effect in low frequency circuits. Physical Review, 26,
pp. 71–85.
Laeng, B. and Endestad, T. (2012). Bright illusions reduce the eye’s pupil. Proceedings of
the National Academy of Sciences, 109(6), pp. 2162-2167.
Litke, A., Bezayiff, N., Chichilnisky, E., Cunningham, W., Dabrowski, W., Grillo, A.,
Grivich, M., Grybos, P., Hottowy, P., Kachiguine, S., Kalmar, R., Mathieson, K., Petrusca, D., Rahman, M. and Sher, A. (2004). What does the eye tell the brain?: Development of a system for the large-scale recording of retinal output activity. IEEE Transactions on Nuclear Science, 51(4), p. 1434.
Makandar, A. and Halalli, B. (2015). Image Enhancement Techniques using Highpass and
Lowpass Filters. International Journal of Computer Applications, 109(14), pp. 12-24.
Mansuripur, M. (2002). Classical optics and its applications. 2nd ed. Cambridge, UK:
Cambridge University Press, p. 324.
Nayar, S. and Mitsunaga, T. (2000). High Dynamic Range Imaging: Spatially Varying
Pixel Exposures. IEEE Conference on Computer Vision and Pattern Recognition,
2000. Proceedings., vol.1, pp. 472, 474.
Nowotny, J. (2011). Oxide Semiconductors for Solar Energy Conversion: Titanium Dioxide. CRC Press. p. 156.
Pamplona, V., Oliveira, M., and Baranoski, G. (2009). Photorealistic models for pupil
light reflex and iridal pattern deformation. TOG, 28(4), p. 3.
Pedrotti, F., Pedrotti, L. and Pedrotti, L. (2007). Introduction to optics. 3rd ed. Upper
Saddle River , N.J.: Pearson Prentice Hall., pp. 9, 40-41, 420-421, 426-427, 487-488.
Sah, R. (2007). Silicon Nitride, Silicon Dioxide, and Emerging Dielectrics 9. Pennington, N.J.: Electrochemical Society, p. 793.
Schottky, W. (1926). Small-shot effect and flicker effect. Physical Review 28, pp. 74–103.
SensoMotoric Instruments. (2011). iView X™, Version 2.7, pp. 24, 29, 50, 54, 168, 193,
236, 339.
Smith, W. (1966). Modern Optical Engineering. 4th ed. New York: McGraw-Hill, pp. 230, 237-240, 249.
Stampe, D. (1993). Heuristic filtering and reliable calibration methods for video-based
pupil-tracking systems. Behavior Research Methods, Instruments, & Computers, 25(2),
pp. 137-142.
Stoyanov, M., Gunzburger, M. and Burkardt, J. (2011). Pink noise, 1/f^α noise, and their effect on solutions of differential equations. International Journal for Uncertainty Quantification, 1(3), pp. 257-258.
Tilmant, C., Charavel, M., Ponrouch, M., Gindre, G., Sarry, L., and Boire, J.-Y. (2003).
Monitoring and modeling of pupillary dynamics. Proceedings of 25th Annual International Conference of the IEEE, vol.1, pp. 678–681.
Wang, D., Mulvey, F., Pelz, J. and Holmqvist, K. (2016). A study of artificial eyes for the
measurement of precision in eye-trackers. Behav Res, pp. 3, 6.
Young, H., Freedman, R. and Ford, L. (2011). Sears & Zemansky’s university physics
with modern physics. San Francisco, CA: Pearson Education, pp. 1139, 1142-1143.
Beyond the Rainbow, 2015. Colour: The Spectrum of Science (3). [TV programme]. BBC,
BBC 4, 19 November 2015.
Barley under a blue sky. (2008). [online image]. Available at: https://commons.wikimedia.org/wiki/File:Barley_crop_-blue_sky-4May2008.jpg [Accessed 11 Nov. 2016].
Brown Field and Blue Sky. (2016). [online image]. Available at: http://www.pexels.com/photo/sky-clouds-cloudy-earth-46160 [Accessed 11 Nov. 2016].
Contrast Sensitivity vs. Spatial Frequency. (2013). [online image]. Available at: https://commons.wikimedia.org/wiki/File:Contrast_Sensitivity_vs._Spacial_Frequency.png [Accessed 18 Nov. 2016].
High End Systems, a Barco Company, 2015. About Dichroic Filters [online] Available at:
http://www2.highend.com/support/training/dichroic.asp [Accessed 11 December 2016].
Human eye cross-sectional view grayscale. (2005). [online image]. Available at: https://upload.wikimedia.org/wikipedia/commons/e/ed/Human_eye_cross-sectional_view_grayscale.png [Accessed 14 Oct. 2016].
8 Appendices

8.1 Appendix. MATLAB-codes
%Fig. 8a, the average of the set of raw data of the visual illusion
%experiment. The code for Fig. 9a is analogous to the following code.

clear all
clc

x = importdata('PupildataBrightness.mat');
n = length(x);               %number of data points
M = mean(x);
dt = 1;                      %sampling interval
t = 0:dt:(n-1)*dt;

plot(t, M, 'Linewidth', 4)
hold on

S = std(x);                  %the standard deviation, i.e. the square root of the variance
SE = S/sqrt(length(x(:,1)));

plot(t, (M+SE), ':r', 'LineWidth', 1)
plot(t, (M-SE), ':r', 'LineWidth', 1)

xlabel('Samples');
ylabel('Pupillary diameter [mm]');

legend('Average of the raw data')
%Fig. 8b, the results of the visual illusion experiment. The
%codes for figures 9b-c are analogous to the following code.

clear all
clc

load('PupildataBrightnessFiltered.mat')
load('ExpDataBrightness.mat')
load('PupildataBrightnessBaselineFiltered.mat');

Baseline = nanmean(PupildataBrightnessBaselineFiltered(:,1200:1250)')';
Baseline1 = bsxfun(@minus, PupildataBrightnessFiltered, Baseline);

AsahiDark = Baseline1(find(ExpDataBright(:,2)==1),:);

M = mean(AsahiDark);
A = sum(M, 'omitnan');
disp(A/5000)
h1 = plot(M, 'g', 'Linewidth', 4);
hold on

n = length(AsahiDark);
dt = 1;
t = 0:dt:(n-1)*dt;
S = std(AsahiDark);
SE = S/sqrt(length(AsahiDark(:,1)));

h2 = plot(t, (M+SE), ':g', 'LineWidth', 1);
h3 = plot(t, (M-SE), ':g', 'LineWidth', 1);

AsahiLight = Baseline1(find(ExpDataBright(:,2)==2),:);
M2 = mean(AsahiLight);
B = sum(M2, 'omitnan');
disp(B/5000)
h4 = plot(M2, 'r', 'Linewidth', 4);
hold on

n = length(AsahiLight);
dt = 1;
t = 0:dt:(n-1)*dt;
S2 = std(AsahiLight);
SE2 = S2/sqrt(length(AsahiLight(:,1)));

h5 = plot(t, (M2+SE2), ':r', 'LineWidth', 1);
h6 = plot(t, (M2-SE2), ':r', 'LineWidth', 1);

Real = Baseline1(find(ExpDataBright(:,2)==3),:);

M3 = mean(Real);
C = sum(M3, 'omitnan');
disp(C/5000);
h7 = plot(M3, 'b', 'Linewidth', 4);
hold on

n = length(Real);
dt = 1;
t = 0:dt:(n-1)*dt;
S3 = std(Real);
SE3 = S3/sqrt(length(Real(:,1)));

h8 = plot(t, (M3+SE3), ':b', 'LineWidth', 1);
h9 = plot(t, (M3-SE3), ':b', 'LineWidth', 1);

set(gca, 'fontsize', 18)
hold on

legend([h1, h4, h7], {'Illusion of darkness', 'Illusion of light', 'Real change in light'});

xlabel('Samples');
ylabel('Change in pupillary diameter [mm]');
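The baseline correction at the start of the script above (a nanmean over samples 1200:1250 of the baseline recording, followed by bsxfun(@minus, ...)) amounts to subtracting each trial's own baseline from all of its samples. A minimal NumPy sketch of that step, using small hypothetical stand-in data rather than the thesis recordings:

```python
import numpy as np

# Hypothetical filtered pupil data: 2 trials (rows) x 3 samples (columns)
pupil = np.array([[2.0, 3.0, 4.0],
                  [3.0, 4.0, 5.0]])

# Per-trial baseline: mean over a fixed sample window (the thesis code
# averages samples 1200:1250; here the window is just the last sample)
baseline = np.nanmean(pupil[:, -1:], axis=1, keepdims=True)

# Subtract each trial's baseline from all of its samples; NumPy
# broadcasting plays the role of MATLAB's bsxfun(@minus, ...)
corrected = pupil - baseline

print(corrected)  # each row becomes [-2., -1., 0.]
```

Expressing the data relative to a pre-stimulus baseline in this way is what lets the figures report a change in pupillary diameter rather than its absolute value.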