FACULTY OF ELECTRICAL ENGINEERING AND COMMUNICATION
BRNO UNIVERSITY OF TECHNOLOGY
INTEGRATED OPTOELECTRONICS
Author:
Ing. Soňa Šedivá, Ph.D.
Brno
2007
Contents
1 HISTORICAL OVERVIEW ......... 8
2 PHYSICAL PRINCIPLES ......... 9
2.1 LIGHT ......... 9
2.1.1 Fermat's principle ......... 10
2.2 PHOTON ......... 10
2.3 ELECTROMAGNETIC RADIATION [13] ......... 11
2.3.1 Maxwell's equations [1] ......... 13
2.4 LIGHT PHENOMENA ......... 13
2.4.1 Reflection and refraction of light ......... 13
2.4.2 Interference ......... 16
2.4.3 Diffraction of light ......... 20
2.4.4 Dispersion of light ......... 22
2.4.5 Absorption of light ......... 23
2.4.6 Fresnel's laws ......... 24
2.4.7 Mechanisms of attenuation ......... 26
2.4.8 Wave guide propagation for fiber ......... 30
3 OPTICAL COMPONENTS ......... 32
3.1 OPTICAL WINDOWS ......... 32
3.2 OPTICAL FILTERS ......... 32
3.2.1 Interference filters ......... 33
3.3 MIRRORS ......... 34
3.3.1 Plane mirrors ......... 34
3.3.2 Convex and concave mirror ......... 35
3.4 POLARIZER ......... 37
3.4.1 Birefringent polarizer ......... 38
3.4.2 Malus's law and other properties ......... 39
3.4.3 Beam-splitting polarizers ......... 39
3.4.4 Polarization by reflection ......... 39
3.5 BEAM SPLITTERS ......... 40
3.6 OPTICAL REFLECTORS ......... 40
3.7 LENSES ......... 41
3.8 OPTICAL FIBERS ......... 44
3.8.1 Glass optical fiber ......... 46
3.8.2 Plastic optical fiber ......... 46
3.8.3 Plastic Clad Silica optical fiber ......... 46
3.8.4 Single-mode and multi-mode fiber optic cable ......... 46
3.8.5 Multimode fiber optic cable ......... 48
3.8.6 Loss mechanisms in fibers [8] ......... 48
3.8.7 Fiber connectors ......... 51
4 LIGHT SOURCES AND DETECTORS ......... 53
4.1 INTRODUCTION ......... 53
4.2 LIGHT SOURCES ......... 53
4.2.1 Light emitting diode structure ......... 54
4.2.2 Laser diodes ......... 57
4.3 LIGHT DETECTORS ......... 63
4.3.1 Photoresistors ......... 63
4.3.2 Photodiodes ......... 64
4.3.3 Phototransistor ......... 67
4.3.4 Position sensitive photo-detectors (PSD) [7] ......... 68
4.3.5 Charged coupled image sensors (CCD) ......... 70
5 FIBRE OPTIC SENSORS ......... 72
5.1 INTENSITY-MODULATED SENSORS ......... 73
5.1.1 Transmissive concept ......... 74
5.1.2 Reflective concept ......... 75
5.1.3 Microbending concept ......... 76
5.1.4 Intrinsic concept ......... 76
5.1.5 Transmission and reflection with other optic effects ......... 77
5.2 PHASE-MODULATED SENSORS ......... 78
5.3 WAVELENGTH-MODULATED SENSORS ......... 78
5.3.1 Bragg grating concept ......... 78
5.4 APPLICATION OF FIBER OPTICS SENSORS ......... 79
5.4.1 Displacement sensors ......... 79
5.4.2 Temperature sensors ......... 80
5.4.3 Pressure sensors ......... 81
5.4.4 Flow sensors ......... 82
5.4.5 Level sensors ......... 83
5.4.6 Magnetic and electric field sensors ......... 84
5.4.7 Chemical analysis ......... 86
5.4.8 Rotation rate sensors (gyroscopes) ......... 87
6 OPTICAL SENSORS OF POSITION AND MOVEMENT ......... 88
6.1 INTRODUCTION ......... 88
6.2 SENSOR OF POSITION USING PRINCIPLE OF TRIANGULATION ......... 88
6.3 INCREMENTAL SENSORS OF POSITION OR DISPLACEMENT ......... 90
7 LIST OF SYMBOLS ......... 94
8 BIBLIOGRAPHY ......... 96
Figure list
FIGURE 1: HISTORIC ATTEMPT OF D. COLLADON TO GUIDE LIGHT IN A STREAM OF WATER ......... 8
FIGURE 2: ELECTROMAGNETIC RADIATION SPECTRUM ......... 9
FIGURE 3: ELECTROMAGNETIC RADIATION ......... 11
FIGURE 4: REFLECTION OF LIGHT ......... 14
FIGURE 5: INTERNAL REFLECTION AND CRITICAL ANGLE ......... 14
FIGURE 6: REFRACTION – DIFFERENT REFRACTIVE MEDIUM ......... 14
FIGURE 7: SNELL'S LAW ......... 15
FIGURE 8: TOTAL INTERNAL REFLECTION ......... 15
FIGURE 9: TOTAL INTERNAL REFLECTION: A) AT A PLANE INTERFACE BETWEEN A LOW AND HIGH INDEX MEDIUM, B) AT A PRISM, C) IN AN OPTICAL FIBER ......... 16
FIGURE 10: CONSTRUCTIVE AND DESTRUCTIVE INTERFERENCE ........................................... 17
FIGURE 11: MICHELSON INTERFEROMETER .......................................................................... 18
FIGURE 12: DIAGRAM OF MICHELSON INTERFEROMETER .................................................... 18
FIGURE 13: MACH-ZEHNDER INTERFEROMETER .................................................................. 19
FIGURE 14: SAGNAC INTERFEROMETER................................................................................ 19
FIGURE 15: FABRY-PEROT INTERFEROMETER ...................................................................... 20
FIGURE 16: DOUBLE-SLIT DIFFRACTION ......... 20
FIGURE 17: YOUNG'S TWO-SLIT DIFFRACTION ......... 21
FIGURE 18: GRAPH AND IMAGE OF SINGLE-SLIT DIFFRACTION ......... 22
FIGURE 19: DISPERSION OF LIGHT ......... 22
FIGURE 20: THE VARIATION OF REFRACTIVE INDEX VS. WAVELENGTH FOR VARIOUS GLASSES ......... 23
FIGURE 21: ABSORPTION COEFFICIENTS OF MAJOR SEMICONDUCTORS................................. 24
FIGURE 22: REFLECTION AND REFRACTION .......................................................................... 25
FIGURE 23: REFRACTION DIAGRAM ...................................................................................... 26
FIGURE 24: POWER MEASUREMENT...................................................................................... 28
FIGURE 25: CUTBACK METHOD ............................................................................................ 29
FIGURE 26: CABLE SUBSTITUTION METHOD ......................................................................... 29
FIGURE 27: WAVE GUIDE PROPAGATION .............................................................................. 30
FIGURE 28: MODES .............................................................................................................. 31
FIGURE 29: COLORED AND NEUTRAL DENSITY FILTERS ........................................................ 32
FIGURE 30: INTERFERENCE FILTERS ..................................................................................... 33
FIGURE 31: PLANE MIRROR .................................................................................................. 35
FIGURE 32: CONCAVE MIRROR ............................................................................................. 35
FIGURE 33: CONVEX MIRROR ............................................................................................... 36
FIGURE 34: CONVEX AND CONCAVE MIRRORS ..................................................................... 36
FIGURE 35: WIRE-GRID POLARIZER ...................................................................................... 37
FIGURE 36: NICOL PRISM ..................................................................................................... 38
FIGURE 37: WOLLASTON PRISM ........................................................................................... 38
FIGURE 38: POLARIZATION – MALUS´S LAW ........................................................................ 39
FIGURE 39: FULL POLARIZATION AT BREWSTER´S ANGLE .................................................... 40
FIGURE 40: TYPES OF LENS .................................................................................................. 41
FIGURE 41: CONVERGING (CONVEX) AND DIVERGING (CONCAVE) LENS ......... 42
FIGURE 42: IMAGE FORMATION BY A CONVERGING LENS ..................................................... 42
FIGURE 43: IMAGE FORMATION BY A DIVERGING LENS ........................................................ 43
FIGURE 44: REPRODUCTION OF THE IMAGE .......................................................................... 43
FIGURE 45: LENS EQUATION ................................................................................................ 44
FIGURE 46: OPTICAL FIBER .................................................................................................. 44
FIGURE 47: FIBER OPTIC ......... 45
FIGURE 48: FIBER OPTIC CABLE CONSTRUCTION ......... 45
FIGURE 49: TYPES OF MODE PROPAGATION IN FIBER OPTIC CABLE ......... 47
FIGURE 50: SINGLE-MODE FIBER ......... 47
FIGURE 51: MACROBENDING LOSSES ......... 50
FIGURE 52: FIBER COUPLING LOSSES ......... 51
FIGURE 53: FIBER CONNECTORS ......... 52
FIGURE 54: PLASTIC FIBER OPTIC CABLE CONNECTOR ......... 52
FIGURE 55: LED AND LASER SPECTRAL WIDTHS ......... 54
FIGURE 56: LIGHT EMITTING DIODE STRUCTURE ......... 54
FIGURE 57: BLUE, GREEN AND RED LEDS ......... 55
FIGURE 58: LEDS ......... 55
FIGURE 59: LED POLARITY ......... 56
FIGURE 60: ELECTROLUMINESCENCE IN LEDS ......... 56
FIGURE 61: LED CHARACTERISTIC ......... 57
FIGURE 62: LED RADIATION PATTERNS ......... 57
FIGURE 63: LASER DIODE ......... 58
FIGURE 64: LASER EMISSION PATTERN ......... 59
FIGURE 65: TEMPERATURE EFFECTS ON LASER OPTICAL OUTPUT POWER ......... 60
FIGURE 66: DIAGRAM OF FRONT VIEW OF A DOUBLE HETEROSTRUCTURE LASER DIODE ......... 60
FIGURE 67: DIAGRAM OF FRONT VIEW OF SIMPLE QUANTUM WELL LASER DIODE ......... 61
FIGURE 68: DIAGRAM OF FRONT VIEW OF SEPARATE CONFINEMENT HETEROSTRUCTURE QUANTUM WELL LASER DIODE ......... 62
FIGURE 69: DIAGRAM OF A SIMPLE VCSELS STRUCTURE .................................................... 63
FIGURE 70: PHOTORESISTOR................................................................................................. 63
FIGURE 71: PLANAR DIFFUSED SILICON PHOTODIODES ......................................................... 65
FIGURE 72: TYPICAL SPECTRAL RESPONSIVITY OF SEVERAL DIFFERENT TYPES OF PLANAR
DIFFUSED PHOTODIODES .................................................................................................... 66
FIGURE 73: CURRENT – VOLTAGE CHARACTERISTIC OF PHOTODIODE ................................... 67
FIGURE 74: PHOTOTRANSISTOR ............................................................................................ 68
FIGURE 75: PSD: A) STRUCTURE, B) SUBSTITUTE DIAGRAM, C) 2D PSD, D) EQUIVALENT
ELECTRICAL CIRCUIT ......................................................................................................... 68
FIGURE 76: SPECIALLY DEVELOPED CCD USED FOR ULTRAVIOLET IMAGING ....................... 70
FIGURE 77: COMPONENTS COMMON TO ALL FIBER OPTIC SENSORS ....................................... 73
FIGURE 78: INTENSITY SENSOR ............................................................................................. 73
FIGURE 79: TRANSMISSIVE FIBER OPTIC SENSORS................................................................. 74
FIGURE 80: FRUSTRATED TOTAL INTERNAL REFLECTION CONFIGURATION .......................... 74
FIGURE 81: REFLECTIVE FIBER OPTIC SENSOR ...................................................................... 75
FIGURE 82: REFLECTIVE PROBE .............................................................. 75
FIGURE 83: PROBE CONFIGURATION .................................................................................... 75
FIGURE 84: REFLECTIVE FIBER OPTIC SENSORS – OUTPUT VERSUS DISTANCE ....................... 76
FIGURE 85: MICROBENDING SENSOR .................................................................................... 76
FIGURE 86: RADIAL DISPLACEMENT SENSOR WITH ABSORPTION GRATINGS .......................... 77
FIGURE 87: SCHEMATIC STRUCTURE OF A FIBER BRAGG GRATING ....................................... 79
FIGURE 88: TYPICAL APPLICATION – DUAL PROBE ................................................................ 79
FIGURE 89: TYPICAL APPLICATIONS – REFLECTIVE FIBER OPTIC SENSOR .............................. 80
FIGURE 90: REFLECTIVE FIBER OPTIC TEMPERATURE SENSOR USING A BIMETALLIC
TRANSDUCER ..................................................................................................................... 80
FIGURE 91: TEMPERATURE SENSOR WITH SEMICONDUCTOR LAYER...................................... 81
FIGURE 92: TEMPERATURE SENSOR – TEMPERATURE CHANGES OF MODIFIED CLADDING ......... 81
FIGURE 93: TRANSMISSIVE FIBER OPTIC PRESSURE SENSOR USING A SHUTTER TO MODULATE THE INTENSITY ......... 81
FIGURE 94: TRANSMISSIVE FIBER OPTIC PRESSURE SENSOR USING A MOVING GRATING TO
MODULATE THE INTENSITY................................................................................................ 82
FIGURE 95: REFLECTIVE FIBER OPTIC PRESSURE SENSOR USING A DIAPHRAGM FOR
MODULATION .................................................................................................................... 82
FIGURE 96: RESONANT FIBER OPTIC PRESSURE SENSOR ....................................................... 82
FIGURE 97: REFRACTIVE INDEX CHANGE LIQUID LEVEL SENSOR .......................................... 83
FIGURE 98: REFRACTIVE INDEX CHANGE LIQUID LEVEL SENSOR – DETAIL ........................... 84
FIGURE 99: MAGNETIC FIBER SENSOR .................................................................................. 84
FIGURE 100: BASIC CONFIGURATION OF MAGNETIC FIBER SENSORS ...................................... 85
FIGURE 101: INTENSITY OF MAGNETIC FIELD SENSOR WITH MAGNETORESISTIVE BAND ......... 85
FIGURE 102: INTENSITY OF MAGNETIC FIELD SENSOR – USING THE RADIAL DEFORMATION
OF MAGNETOSTRICTIVE CYLINDER .................................................................................... 86
FIGURE 103: FIBER OPTIC TRANSMISSIVE ELECTRIC FIELD SENSOR USING A POCKELS CELL... 86
FIGURE 104: SAGNAC INTERFEROMETER................................................................................ 87
FIGURE 105: ANALOG FIBER OPTIC GYROSCOPE CONFIGURATION .......................................... 88
FIGURE 106: TRIANGULATION SENSOR FOR THE DISTANCE MEASUREMENT ........................... 89
FIGURE 107: THE TRIANGULAR SENSOR WITH SWEPT LASER BEAM ........................................ 90
FIGURE 108: PRINCIPLE OF INCREMENTAL SENSOR DEVICES .................................................. 90
FIGURE 109: TYPICAL ARCHITECTURE OF INCREMENTAL ENCODER [5] .................................. 91
FIGURE 110: THE CODED DISC ................................................................ 91
FIGURE 111: DIGITAL CIRCUIT MAP WITH APPROPRIATE SIGNALS .......................................... 92
FIGURE 112: INCREMENTAL SENSORS .................................................................................... 93
1 Historical overview
The first attempt at guiding light on the basis of total internal reflection in a medium
dates to 1841, when Daniel Colladon coupled light from an arc lamp into a stream of water
(Figure 1).
Figure 1: Historic attempt of D. Colladon to guide light in a stream of water
Several decades later, the physicians Roth and Reuss used glass rods to illuminate body
cavities (1888). At the beginning of the 20th century, light was successfully transmitted through
thin glass fibers.
In 1926 J. L. Baird received a patent for transmitting an image in glass rods, and
C. W. Hansell first began contemplating the idea of configuring an imaging bundle. In 1930 the
medical student Heinrich Lamm of Munich produced the first image-transmitting fiber
bundle. In 1931 the first mass production of glass fibers was achieved by Owens-Illinois for
Fiberglas. An attempt at patenting the idea of glass fibers with an enveloping cladding glass
was initiated by H. M. Moller; it was, however, refused because of Hansell's earlier patent.
Subsequently, the well-known scientists A. C. S. van Heel, Kapany and H. H. Hopkins produced
the first fiber optic endoscope on the basis of fiber cladding in 1954. Curtiss developed an
important requisite for the production of clad glass fibers in 1956: he suggested that a glass
rod be used as the core material, with a glass tube of lower index of refraction melted to it on the outside.
In 1961, E. Snitzer described the theoretical basis for very thin (several micron) fibers,
which are the foundation of our current fiber optic communication network. Kawakami
proposed the concept of a fiber whose index of refraction varies in a continuous,
parabolic manner from the center to the edge (gradient index fiber). The main thrust of further
activities in the development of fiber optics was improving the material quality of glass. High
levels of purity were required of the preform to realize the enormous economic and technological
potential of a worldwide communications network.
2 Physical principles
2.1 Light
The word “light” was given to electromagnetic radiation that occupies wavelengths
from approximately 0.1 to 100 µm (Figure 2). Radiation with a wavelength shorter than we
can see (violet) is called ultraviolet, and radiation with a wavelength longer than we can
see (red) is called infrared.
Figure 2: Electromagnetic radiation spectrum
Light may be considered as a propagation of either electromagnetic waves or quanta of
energy. The velocity of light propagation in a vacuum is expressed as:
c = 1 / √(µ0 · ε0) = 299 792 458 m·s⁻¹

where
µ0 = 4π·10⁻⁷ H·m⁻¹ ... permeability of vacuum,
ε0 = 8.854·10⁻¹² F·m⁻¹ ... permittivity of vacuum.

The velocity of light in a vacuum is independent of the wavelength. The frequency of
light waves in a vacuum or in any particular medium is determined as

f = c / λ

where
c ... velocity of light in the medium,
λ ... wavelength of the electromagnetic wave.
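A quick numerical check of the two relations above (a minimal Python sketch; the 1550 nm wavelength is merely an illustrative choice):

import math

mu_0 = 4 * math.pi * 1e-7          # permeability of vacuum, H/m
eps_0 = 8.8541878e-12              # permittivity of vacuum, F/m (a few more digits than quoted above)

c = 1 / math.sqrt(mu_0 * eps_0)    # velocity of light in vacuum, m/s
wavelength = 1550e-9               # an illustrative wavelength, m
f = c / wavelength                 # frequency of the wave, Hz

print(f"c = {c:.0f} m/s")          # ~299 792 458 m/s
print(f"f = {f:.3e} Hz")           # ~1.93e14 Hz for 1550 nm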
2.1.1 Fermat's principle
Fermat's principle of optics, in its historical form states:
The actual path between two points taken by a beam of light is the one which is
traversed in the least time.
The historical form is incomplete. The modern, full version of Fermat's principle states
that the optical path length must be extremal, which means that it can be either minimal,
maximal or a point of inflection (a saddle point). Minima occur most often, for instance in the
angle of refraction a wave takes when passing into a different medium, or in the path light
follows when reflected off a planar mirror.
2.2 Photon
Definition: The minimum "bundle" or "capsule" of energy needed to sustain the
electromagnetic phenomenon at a particular frequency.
The above definition specifies that a photon is a self-sustaining "capsule" of energy, but
doesn't tell us how much energy is involved. We also know that it is involved with both a
magnetic field (commonly identified as the B field) and an electric field (normally called the
E field). The figure to the right shows the B field only; the E field would be sticking straight
up out of the screen at you, and alternately retreating back into the screen. Although we don't
see it here, perhaps we can make some statements about what it must be.
The basic equation that specifies the energy of a photon is given generally as:

E = h·f = h·c / (n·λ)

In this expression:
E ... energy of the photon,
h ... Planck's constant,
f ... frequency of the light,
c ... velocity of light in vacuum,
n ... index of refraction of the medium,
λ ... wavelength of the wave in the medium.
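A small numeric illustration of this relation (a sketch; the wavelength and the choice n = 1 are arbitrary example values, not taken from the text):

h = 6.626e-34          # Planck's constant, J*s
c0 = 2.998e8           # velocity of light in vacuum, m/s
n = 1.0                # index of refraction (vacuum/air as an example)
wavelength = 1550e-9   # wavelength of the wave, m

E = h * c0 / (n * wavelength)        # photon energy, J
E_eV = E / 1.602e-19                 # the same energy expressed in electronvolts

print(f"E = {E:.3e} J = {E_eV:.2f} eV")   # ~1.28e-19 J, i.e. ~0.80 eV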
When the photon impacts with the electron, it imparts its energy to the electron. There
are several possible results, depending on the energy in the photon:
1. If the photon has insufficient energy to boost the electron to its next higher possible
orbit, the electron cannot hold the energy, and releases it again at once, as a photon
that matches the incoming photon. The direction of the released photon depends on
the nature of the material substance and energy of the photon itself, so we get
phenomena such as reflection and refraction.
2. If the photon has exactly the energy needed to boost the electron beyond the next
orbital energy level, and possibly to a yet higher orbit around its nucleus, it will do
so, and the electron will emit a lower-energy photon if necessary, as it initially drops
to the highest-energy orbit it can reach. In the meantime, however, another orbiting
electron will lose energy by dropping into the vacated orbit, and will emit a photon
of its own as it does so.
3. We see this phenomenon in fluorescent lights. Here, the actual source of light
energy is UV light produced by a mercury vapor arc through the glass tube. This
would normally be very damaging to the eyes, were it not for the phosphors coating
the inside of the glass. That coating absorbs the UV light and emits visible light in
return.
4. The photon doesn't always give up all of its energy to the electron it strikes. Under
some circumstances, it only gives up part of its energy to the electron, and both a
higher-energy electron and a lower-energy photon leave the point of impact. This is
known as the Compton Effect. A practical example of this is found in greenhouses,
where some wavelengths of incoming sunlight are converted to longer-wavelength
infrared (heat) photons, which are then primarily reflected by the glass panes and are
therefore trapped inside the greenhouse.
5. Some substances absorb the energy of most incident photons and either transmit (a
colored filter) or reflect (a painted surface) photons of a specific amount of energy
only. The chlorophyll in green plants gets its energy by reflecting only green light,
and absorbing the energy of photons of other colors.
2.3 Electromagnetic radiation [13]
Electromagnetic radiation is generally described as a self-propagating wave in space
with electric and magnetic components. These components oscillate at right angles to each
other and to the direction of propagation, and are in phase with each other. Electromagnetic
radiation is classified into types according to the frequency of the wave: these types include,
in order of increasing frequency, radio waves, microwaves, infrared radiation, visible light,
ultraviolet radiation, X-rays and gamma rays. In some technical contexts the entire range is
referred to as just 'light'.
Figure 3: Electromagnetic radiation
Electromagnetic radiation can be imagined as a self-propagating transverse oscillating
wave of electric and magnetic fields (Figure 3). This diagram shows a plane linearly
polarized wave propagating from left to right.
Electromagnetic waves of much lower frequency than visible light were first predicted
by Maxwell's equations and subsequently discovered by Heinrich Hertz. Maxwell derived a
wave form of the electric and magnetic equations, revealing the wavelike nature of electric
and magnetic fields and their symmetry. According to these equations, a time-varying electric
field generates a magnetic field and vice versa. Therefore, as an oscillating electric field
generates an oscillating magnetic field, the magnetic field in turn generates an oscillating
electric field, and so on. These oscillating fields together form an electromagnetic wave.
Electric and magnetic fields obey the properties of superposition, so fields due to
particular particles or time-varying electric or magnetic fields contribute to the fields due to
other causes. (As these fields are vector fields, all magnetic and electric field vectors add
together according to vector addition.) These properties cause various phenomena including
refraction and diffraction. For instance, a traveling EM wave incident on an atomic structure
induces oscillation in the atoms, thereby causing them to emit their own EM waves. These
emissions then alter the impinging wave through interference.
Since light is an oscillation, it is not affected by traveling through static electric or
magnetic fields in a linear medium such as a vacuum. In nonlinear media such as some
crystals, however, interactions can occur between light and static electric and magnetic fields.
Generally, EM radiation is classified by wavelength into electrical energy, radio,
microwave, infrared, the visible region we perceive as light, ultraviolet, X-rays and gamma
rays (Figure 2, Table 1).
The behavior of EM radiation depends on its wavelength. Higher frequencies have
shorter wavelengths, and lower frequencies have longer wavelengths. When EM radiation
interacts with single atoms and molecules, its behavior depends on the amount of energy per
quantum it carries.
Table 1: Frequency, wavelength and energy of EM radiation
Legend:
γ   = Gamma rays
HX  = Hard X-rays
SX  = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light
NIR = Near infrared
MIR = Moderate infrared
FIR = Far infrared

Radio waves:
EHF = Extremely high frequency
SHF = Super high frequency
UHF = Ultrahigh frequency
VHF = Very high frequency
HF  = High frequency
MF  = Medium frequency
LF  = Low frequency
VLF = Very low frequency
VF  = Voice frequency
ELF = Extremely low frequency
2.3.1 Maxwell's equations [1]
Electromagnetic waves as a general phenomenon were predicted by the classical laws of
electricity and magnetism, known as Maxwell's equations. If you inspect Maxwell's equations
without sources (charges or currents) then you will find that, along with the possibility of
nothing happening, the theory will also admit nontrivial solutions of changing electric and
magnetic fields. So, beginning with Maxwell's equations for a vacuum:
∇ · E = 0
∇ × E = −∂B/∂t
∇ · B = 0
∇ × B = µ0ε0 · ∂E/∂t

where
∇ ... a vector differential operator.
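As a brief symbolic check (a sketch using the SymPy library, which is an assumption of this example), a plane-wave ansatz solves the one-dimensional wave equation that follows from these source-free equations whenever ω = c·k:

import sympy as sp

x, t, c, k, omega, E0 = sp.symbols('x t c k omega E0', positive=True)

# Plane-wave ansatz for one transverse field component, e.g. E_y(x, t)
E = E0 * sp.sin(k * x - omega * t)

# 1D wave equation implied by the source-free Maxwell equations:
#   d^2E/dx^2 - (1/c^2) d^2E/dt^2 = 0, with c = 1/sqrt(mu_0 * eps_0)
residual = sp.diff(E, x, 2) - sp.diff(E, t, 2) / c**2

# The ansatz satisfies the equation exactly when omega = c*k (vacuum dispersion relation)
print(sp.simplify(residual.subs(omega, c * k)))   # -> 0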
2.4 Light phenomena
2.4.1 Reflection and refraction of light
A light wave, like any wave, is an energy-transport phenomenon. A light wave
transports energy from one location to another. When a light wave strikes a boundary between
two distinct media, a portion of the energy will be transmitted into the new medium and a
portion of the energy will be reflected off the boundary and stay within the original medium.
Reflection of a light wave involves the bouncing of a light wave off the boundary, while
refraction of a light wave involves the bending of the path of a light wave upon crossing a
boundary and entering a new medium. Both reflection and refraction involve a change in
direction of a wave, but only refraction involves a change in medium.
Figure 4 shows several wave fronts approaching a boundary between two media. The
angle between the incident ray and the normal is the angle of incidence. The angle between
the reflected ray and the normal is the angle of reflection. And the angle between the
refracted ray and the normal is the angle of refraction.
Figure 4: Reflection of light
Figure 5: Internal reflection and critical angle
Refraction is the bending of the path of a light wave as it passes across the boundary
separating two media. Refraction is caused by the change in speed experienced by a
wave when it changes medium. If a light wave passes from a medium in which it travels slowly
(relatively speaking) into a medium in which it travels faster, then the light wave will refract
away from the normal. In such a case, the refracted ray will be farther from the normal line
than the incident ray. On the other hand, if a light wave passes from a medium in which it
travels fast (relatively speaking) into a medium in which it travels slowly, then the light wave
will refract towards the normal. In such a case, the refracted ray will be closer to the normal
line than the incident ray is.
The diagram below (Figure 6) depicts a ray of light approaching three different
boundaries at an angle of incidence of 45-degrees. The refractive medium is different in each
case, causing different amounts of refraction. The angles of refraction are shown on the
diagram.
Figure 6: Refraction – different refractive medium
The fundamental law which governs the reflection of light is called the law of reflection
[1]:
When a light ray reflects off a surface, the angle of incidence is equal to the angle of
reflection.
The fundamental law which governs the refraction of light is Snell´s Law (Figure 7) [1]:
When a light ray is transmitted into a new medium, the relationship between the angle
of incidence and the angle of refraction is given by the following equation
ni · sin Θi = nr · sin Θr
where the ni and nr values represent the indices of refraction of the incident and the
refractive medium respectively.
Figure 7: Snell´s law
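A minimal sketch of applying Snell's law numerically; the indices 1.00 (air) and 1.50 (glass) are just illustrative values:

import math

def refraction_angle(theta_i_deg, n_i, n_r):
    """Return the refraction angle in degrees from Snell's law,
    or None if total internal reflection occurs."""
    s = n_i / n_r * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:            # no real solution -> total internal reflection
        return None
    return math.degrees(math.asin(s))

# Example: light passing from air (n = 1.00) into glass (n = 1.50) at 45 degrees
print(refraction_angle(45.0, 1.00, 1.50))   # ~28.1 degrees, bent towards the normal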
2.4.1.1 Total internal reflection
Total internal reflection (TIR) is the phenomenon which involves the reflection of all
the incident light off the boundary. TIR only takes place when both of the following two
conditions are met:
• a light ray is in the denser medium and approaching the less dense medium.
• the angle of incidence for the light ray is greater than the so-called critical angle.
When the angle of incidence in water reaches a certain critical value, the refracted ray
lies along the boundary, having an angle of refraction of 90-degrees. This angle of incidence is
known as the critical angle (Figure 8); it is the largest angle of incidence for which refraction
can still occur. For any angle of incidence greater than the critical angle, light will undergo
total internal reflection.
Figure 8: Total internal reflection
So the critical angle is defined as the angle of incidence which provides an angle of
refraction of 90-degrees. Make particular note that the critical angle is an angle of incidence
value. For the water-air boundary, the critical angle is 48.6-degrees. For the crown glass-water
boundary, the critical angle is 61.0-degrees. The actual value of the critical angle is dependent
upon the combination of materials present on each side of the boundary.
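The critical angle follows directly from Snell's law with the refraction angle set to 90 degrees. A short sketch (the refractive indices used are common textbook values, chosen to reproduce roughly the angles quoted above):

import math

def critical_angle(n_dense, n_rare):
    """Critical angle of incidence (degrees) for light going from the
    denser medium (n_dense) towards the less dense medium (n_rare)."""
    return math.degrees(math.asin(n_rare / n_dense))

print(critical_angle(1.333, 1.000))   # water -> air, ~48.6 degrees
print(critical_angle(1.52, 1.333))    # crown glass -> water, ~61 degrees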
Figure 9: Total internal reflection: a) at a plane interface between a low and high index
medium, b) at a prism, c) in an optical fiber
2.4.2 Interference
Interference is the superposition of two or more waves resulting in a new wave pattern.
As most commonly used, the term usually refers to the interference of waves which
are correlated or coherent with each other, either because they come from the same source or
because they have the same or nearly the same frequency. Two non-monochromatic waves
are only fully coherent with each other if they both have exactly the same range of
wavelengths and the same phase differences at each of the constituent wavelengths.
The principle of superposition of waves states that the resultant displacement at a point
is equal to the sum of the displacements of different waves at that point. If a crest of a wave
meets a crest of another wave at the same point then the crests interfere constructively and the
resultant wave amplitude is greater. If a crest of a wave meets a trough of another wave then
they interfere destructively, and the overall amplitude is decreased.
Interference is involved in Thomas Young's double-slit experiment where two beams of
light which are coherent with each other interfere to produce an interference pattern (the
beams of light both have the same wavelength range and at the center of the interference
pattern they have the same phases at each wavelength, as they both come from the same
source). More generally, this form of interference can occur whenever a wave can propagate
from a source to a destination by two or more paths of different length. Two or more sources
can only be used to produce interference when there is a fixed phase relation between them,
but in this case the interference generated is the same as with a single source; see Huygens'
principle.
When two sinusoidal waves superimpose, the resulting waveform depends on the
frequency (or wavelength), amplitude and relative phase of the two waves. If the two waves
have the same amplitude A and wavelength, the resultant waveform will have an amplitude
between 0 and 2A depending on whether the two waves are in phase or out of phase.
Figure 10: Constructive and destructive interference
Consider two waves that are in phase, with amplitudes A1 and A2. Their troughs and
peaks line up and the resultant wave will have amplitude A = A1 + A2. This is known as
constructive interference (Figure 10).
If the two waves are pi radians, or 180°, out of phase, then one wave's crests will
coincide with another wave's troughs and so will tend to cancel out. The resultant amplitude is
A = | A1 − A2 | . If A1 = A2, the resultant amplitude will be zero. This is known as destructive
interference (Figure 10).
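Both limiting cases are contained in the general superposition formula for two equal-frequency waves with relative phase φ, A = √(A1² + A2² + 2·A1·A2·cos φ); this standard formula is not stated explicitly in the text, so the sketch below is only an illustration:

import math

def resultant_amplitude(a1, a2, phase_deg):
    """Amplitude of the superposition of two equal-frequency sinusoids
    with amplitudes a1, a2 and relative phase given in degrees."""
    phi = math.radians(phase_deg)
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(phi))

print(resultant_amplitude(1.0, 1.0, 0))     # in phase: 2.0 (constructive)
print(resultant_amplitude(1.0, 1.0, 180))   # out of phase: ~0.0 (destructive)
print(resultant_amplitude(1.0, 0.5, 180))   # unequal amplitudes: |A1 - A2| = 0.5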
2.4.2.1 Interferometers
Interferometry is the science of combining two or more waves, which are said to
interfere with each other. In wave terms the interference pattern is a state, which depends on
the amplitude and phase of all the contributing waves. Although the wave phenomenon of
interference is very general, interferometry is applied in a wide variety of
fields, including astronomy, fiber optics, optical metrology, studies of quantum mechanics
such as neutron interferometry, neutrino interferometry, and string theory interferometry.
There are many other types of interferometers. They all work on the same basic principles, but
the geometry is different for the different types. Basic types of interferometers are:
• Michelson interferometer
• Mach-Zehnder interferometer
• Sagnac interferometer
• Fabry-Perot interferometer
2.4.2.2 Michelson interferometer
Figure 11: Michelson interferometer
A very common example of an interferometer is the Michelson (or Michelson-Morley)
type (Figure 11). Here the basic building blocks are a monochromatic source (emitting light or
matter waves), a detector, two mirrors and one semitransparent mirror (often called beam
splitter). These are put together as shown in the figure.
There are two paths from the (light) source to the detector. One reflects off the
semitransparent mirror, goes to the top mirror and then reflects back, goes through the
semitransparent mirror, to the detector. The other first goes through the semi-transparent
mirror, to the mirror on the right, reflects back to the semi-transparent mirror, then reflects
from the semi-transparent mirror into the detector (Figure 12).
If these two paths differ by a whole number (including 0) of wavelengths, there is
constructive interference and a strong signal at the detector. If they differ by a half-integer
number of wavelengths (e.g. 0.5, 1.5, 2.5, ...), there is destructive interference and a weak signal.
This might appear at first sight to violate conservation of energy. However energy is
conserved, because there is a re-distribution of energy at the detector in which the energy at
the destructive sites is re-distributed to the constructive sites. The effect of the interference is
to alter the share of the reflected light which heads for the detector and the remainder which
heads back in the direction of the source.
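A minimal sketch of the detector intensity as a function of the optical path difference, assuming an ideal two-beam interferometer with equal beam amplitudes (an assumption not stated in the text):

import math

def detector_intensity(path_difference, wavelength, i0=1.0):
    """Normalized detector intensity of an ideal two-beam interferometer:
    maximum for a whole number of wavelengths, zero for half-integer numbers."""
    return i0 * math.cos(math.pi * path_difference / wavelength) ** 2

wl = 633e-9                              # He-Ne laser wavelength as an example
print(detector_intensity(0.0, wl))       # 1.0  -> constructive
print(detector_intensity(wl / 2, wl))    # ~0.0 -> destructive
print(detector_intensity(10 * wl, wl))   # 1.0  -> constructive again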
Figure 12: Diagram of Michelson interferometer
Interferometers are perhaps even more widely used in integrated optical circuits, in the
form of a Mach-Zehnder interferometer (Figure 13), in which light interferes between two
branches of a waveguide that are (typically) externally modulated to vary their relative phase.
This interferometer's configuration consists of two beam splitters and two completely
reflective mirrors. The source beam is split and the two resulting waves travel down separate
paths. A slight tilt of one of the beam splitters will result in a path difference and a change in
the interference pattern. The Mach-Zehnder interferometer can be very difficult to align,
however this sensitivity adds to its diverse number of applications. The Mach-Zehnder
interferometer can be the basis of a wide variety of devices, from RF modulators to sensors to
optical switches.
2.4.2.3 Mach-Zehnder interferometer
Figure 13: Mach-Zehnder interferometer
2.4.2.4 Sagnac interferometer
Figure 14: Sagnac interferometer
A Sagnac interferometer (Figure 14) is an interferometry configuration in which a beam
of light is split and the two beams are made to follow a trajectory in opposite directions. To
act as a ring the trajectory must enclose an area. On return to the point of entry the light is
allowed to exit the apparatus in such a way that an interference pattern is obtained. In the
Sagnac configuration, the position of the interference fringes is dependent on angular velocity
of the setup. This dependence is caused by the rotation effectively shortening the path
distance of one beam, while lengthening the other. A Sagnac interferometer has been used by
Albert Michelson and Henry Gale to determine the angular velocity of the Earth. It can be
used in navigation as a ring laser gyroscope, which is commonly found on fighter planes.
2.4.2.5 Fabry-Perot interferometer
This interferometer makes use of multiple reflections between two closely spaced
partially silvered surfaces (Figure 15). Part of the light is transmitted each time the light
reaches the second surface, resulting in multiple offset beams which can interfere with each
other. The large number of interfering rays produces an interferometer with extremely high
resolution, somewhat like the multiple slits of a diffraction grating increase its resolution.
Figure 15: Fabry-Perot interferometer
2.4.3 Diffraction of light
Diffraction refers to the various phenomena associated with wave propagation, such as
the bending, spreading and interference of waves emerging from an aperture. It occurs with
any type of wave, including sound waves, water waves, electromagnetic waves such as light
and radio waves, and matter displaying wave-like properties according to the wave–particle
duality. While diffraction always occurs, its effects are generally only noticeable for waves
where the wavelength is on the order of the feature size of the diffracting objects or apertures
(Figure 16).
Figure 16: Double-split diffraction
Diffraction effects were first carefully observed and characterized in 1665 by Francesco
Maria Grimaldi, who also coined the term diffraction. Isaac Newton studied these effects and
attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction
patterns caused by a bird feather, effectively the first diffraction grating. Thomas Young
observed two-slit diffraction in 1803 and deduced that light must propagate as waves (Figure
17). Fresnel did more definitive studies and calculations of diffraction, published in 1815 and
1818, and thereby gave great support to the wave theory of light that had been advanced by
Christian Huygens and reinvigorated by Thomas Young, against Newton's theories.
Figure 17: Young´s two-split diffraction
Several qualitative observations can be made:
• The angular spacing of the features in the diffraction pattern is inversely
proportional to the dimensions of the object causing the diffraction, in other words:
the smaller the diffracting objects the 'wider' the resulting diffraction pattern and
vice versa. (More precisely, this is true of the sines of the angles.)
• The diffraction angles are invariant under scaling; that is, they depend only on the
ratio of the wavelength to a dimension, a, of the diffracting object.
• When the diffracting object is repeated, for example in a diffraction grating, the
effect is to create narrower maxima in the interference fringes, concentrating the
energy within a narrower range of angles. The third figure, for example, shows a
comparison of a double-slit pattern with a pattern formed by five slits, both sets of
slits having the same spacing, a, between the center of one slit and the next.
It is mathematically easier to consider the case of far-field or Fraunhofer diffraction,
where the diffracting obstruction is far from the point at which the wave is measured. The
more general case is known as near-field or Fresnel diffraction, and involves more complex
mathematics. As the observation distance is increased the results predicted by the Fresnel
theory converge towards those predicted by the simpler Fraunhofer theory. This article
considers far-field diffraction, which is commonly observed in nature. Quantitatively, the
angular positions of the minima in multiple-slit diffraction are given by the equation
sin θ = m · λ / a

where
m ... an integer that labels the order of each minimum,
λ ... the wavelength,
a ... the distance between the slits,
θ ... the angle for destructive interference.
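A short numeric sketch of this condition (the slit spacing and wavelength are arbitrary illustrative values):

import math

def minima_angles(wavelength, slit_distance, max_order=3):
    """Angles (degrees) of the destructive-interference minima sin(theta) = m*lambda/a."""
    angles = []
    for m in range(1, max_order + 1):
        s = m * wavelength / slit_distance
        if s > 1.0:                      # no further minima exist
            break
        angles.append(math.degrees(math.asin(s)))
    return angles

# 633 nm light on slits spaced 5 micrometres apart
print(minima_angles(633e-9, 5e-6))       # ~[7.3, 14.7, 22.3] degrees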
Figure 18: Graph and image of single-slit diffraction
2.4.4 Dispersion of light
Dispersion is the difference between the amounts of refraction of different colors of
light. White light is actually composed of light of all different colors (Figure 19). A highly
dispersive material will split light strongly into its component colors to give a "prism" effect
showing a "rainbow" or spectrum.
Figure 19: Dispersion of light
The term also denotes the divergence or spreading of the different colored rays of a beam of
composite light when refracted by a prism or lens, or when diffracted, so as to produce a
spectrum, especially in reference to the amount of this spreading.
In optics, the phase velocity of a wave v in a given uniform medium is given by:

v = c / n

where
c ... the speed of light in a vacuum,
n ... the refractive index of the medium.
Figure 20: The variation of refractive index vs. wavelength for various glasses
The velocity of light in a material, and thus its index of refraction, depends on the
wavelength of the light (Figure 20, Table 2). In general, the index of refraction is greater for
shorter wavelengths. This causes light inside materials to be refracted by different amounts
according to the wavelength or color.
Color    Wavelength   Index of refraction
Blue     434 nm       1.528
Yellow   550 nm       1.517
Red      700 nm       1.510

Table 2: Comparison of wavelength and index of refraction
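Using the indices from Table 2, the following sketch shows how much more strongly blue light is refracted than red light when entering such a glass from air; the 45-degree angle of incidence is an arbitrary example:

import math

def refraction_angle(theta_i_deg, n_i, n_r):
    """Snell's law: refraction angle in degrees."""
    return math.degrees(math.asin(n_i / n_r * math.sin(math.radians(theta_i_deg))))

indices = {"blue (434 nm)": 1.528, "yellow (550 nm)": 1.517, "red (700 nm)": 1.510}
for color, n in indices.items():
    print(color, round(refraction_angle(45.0, 1.000, n), 2))
# blue ~27.57, yellow ~27.78, red ~27.92 degrees: blue is bent the most (towards the normal)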
2.4.5 Absorption of light
In absorption, the frequency of the incoming light wave is at or near the energy levels of
the electrons in the matter. The electrons will absorb the energy of the light wave and change
their energy state. There are several options as to what can happen next: either the electron
returns to the ground state, emitting a photon of light, or the energy is retained by the matter
and the light is absorbed. If the photon is immediately re-emitted, the photon is effectively
reflected or scattered. If the photon energy is absorbed the energy from the photon typically
manifests itself as heating the matter up.
The absorption of light makes an object dark or opaque to the wavelengths or colors of
the incoming wave. Wood is opaque to visible light. Some materials are opaque to some
wavelengths of light, but transparent to others. Glass and water are opaque to ultraviolet light,
but transparent to visible light. By observing which wavelengths of light a material absorbs, its
composition and properties can be understood.
Another way in which the absorption of light becomes apparent is through color. If a material or
matter absorbs light of certain wavelengths or colors of the spectrum, an observer will not see
these colors in the reflected light. On the other hand if certain wavelengths of colors are
reflected from the material, an observer will see them and see the material in those colors. For
example, the leaves of green plants contain a pigment called chlorophyll, which absorbs the
blue and red colors of the spectrum and reflects the green. Leaves therefore appear green.
Figure 21: Absorption coefficients of major semiconductors
2.4.6 Fresnel's laws
The Fresnel equations, deduced by Augustin-Jean Fresnel, describe the behavior of light
when moving between media of differing refractive indices. The reflection of light that the
equations predict is known as Fresnel reflection.
When light moves from a medium of a given refractive index n1 into a second medium
with refractive index n2, both reflection and refraction of the light may occur.
In the diagram on the right (Figure 22), an incident light ray PO strikes at point O the
interface between two media of refractive indexes n1 and n2. Part of the ray is reflected as ray
OQ and part refracted as ray OS. The angles that the incident, reflected and refracted rays
make to the normal of the interface are given as θi, θr and θt, respectively. The relationship
between these angles is given by the law of reflection and Snell's law.
The fraction of the intensity of incident light that is reflected from the interface is given
by the reflection coefficient R, and the fraction refracted by the transmission coefficient T. The
Fresnel equations, which are based on the assumption that the two materials are both
nonmagnetic, may be used to calculate R and T in a given situation.
Figure 22: Reflection and refraction
The calculations of R and T depend on polarization of the incident ray. If the light is
polarized with the electric field of the light perpendicular to the plane of the diagram above
(s-polarized), the reflection coefficient is given by:
Rs = [sin(θt − θi) / sin(θt + θi)]² = [(n1·cos θi − n2·cos θt) / (n1·cos θi + n2·cos θt)]²
where θt can be derived from θi by Snell´s law.
If the incident light is polarized in the plane of the diagram (p-polarized), the R is given
by:
Rp = [tan(θt − θi) / tan(θt + θi)]² = [(n1·cos θt − n2·cos θi) / (n1·cos θt + n2·cos θi)]²
The transmission coefficient in each case is given by Ts = 1 − Rs and Tp = 1 − Rp. If the
incident light is unpolarized (containing an equal mix of s- and p-polarizations), the reflection
coefficient is R = (Rs + Rp)/2.
The reflection and transmission coefficients correspond to the ratio of the intensity of
the incident ray to that of the reflected and transmitted rays. Equations for coefficients
corresponding to ratios of the electric field amplitudes of the waves can also be derived, and
these are also called "Fresnel equations".
At one particular angle for a given n1 and n2, the value of Rp goes to zero and a
p-polarized incident ray is purely refracted. This angle is known as Brewster's angle (Figure
23), and is around 56° for a glass medium in air or vacuum.
When moving from a more dense medium into a less dense one (i.e. n1 > n2), above an
incidence angle known as the critical angle (Figure 23), all light is reflected and Rs = Rp = 1.
This phenomenon is known as total internal reflection. The critical angle is approximately 41°
for glass in air.
Figure 23: Refraction diagram
When the light is at near-normal incidence to the interface (θi ≈ θt ≈ 0), the reflection
and transmission coefficients are given by:

R = Rs = Rp = [(n1 − n2) / (n1 + n2)]²

T = Ts = Tp = 1 − R = 4·n1·n2 / (n1 + n2)²
For common glass, the reflection coefficient is about 4%. Note that reflection by a
window is from the front side as well as the back side, and that some of the light bounces
back and forth a number of times between the two sides. The combined reflection coefficient
for this case is 2R/(1 + R).
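A brief numeric sketch of these formulas for an air-glass interface (n1 = 1.0 and n2 = 1.5 are typical example values):

import math

def fresnel_R(theta_i_deg, n1, n2):
    """Reflection coefficients (Rs, Rp) for light going from n1 into n2."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 / n2 * math.sin(ti))     # Snell's law (no TIR handling here)
    rs = (n1*math.cos(ti) - n2*math.cos(tt)) / (n1*math.cos(ti) + n2*math.cos(tt))
    rp = (n1*math.cos(tt) - n2*math.cos(ti)) / (n1*math.cos(tt) + n2*math.cos(ti))
    return rs**2, rp**2

print(fresnel_R(0.0, 1.0, 1.5))              # ~(0.04, 0.04): about 4 % at normal incidence
print(fresnel_R(56.3, 1.0, 1.5))             # Rp ~ 0 near Brewster's angle
print(math.degrees(math.atan(1.5 / 1.0)))    # Brewster's angle itself, ~56.3 degrees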
Repeated reflection and refraction in thin, parallel layers is also known as Fabry-Perot
interference; this effect is responsible for the colors seen in oil films on water and is used in
optics to make reflection-free lenses, perfect mirrors, etc.
It should be noted that the discussion given here is only valid when the permeability µ is
equal to the vacuum permeability µ0 in both media. This is true for most dielectric materials,
but the completely general Fresnel equations are more complex.
2.4.7 Mechanisms of attenuation
Attenuation or loss for fiber optics is defined by the following equation:

A = −10 · log10(P0 / Pi)   [dB]

where
A ... attenuation,
Pi ... input power,
P0 ... output power.

The negative sign makes the attenuation a positive number, since the output power is smaller than the input power. Attenuation is
measured in decibels (dB) per unit length, typically dB.km-1.
Loss can vary from 1 to 1000 dB.km-1 in useful fibers with the various causes of loss
often being wavelength dependent. The causes for loss are absorption, scattering,
microbending and end loss due to reflection.
Losses associated with microbending in the fiber will be discussed in depth in a later
chapter, since the microbending mechanism is quite useful in sensor design.
The following definition of fiber loss is useful [3]:
• Low loss – less than 10 dB.km-1
• Medium loss – 10 to 100 dB.km-1
• High loss – greater than 100 dB.km-1
In addition to losses in the fiber itself, there are losses at the ends of the fiber due to
reflection. The refractive index difference between the fiber and usually and interface leads to
Fresnel reflection losses. As a result, small amounts of energy are reflected back into the
fiber. These losses show up in connecting the fiber to optical devices or other fibers and must
be considered in overall system losses. The Fresnel reflection loss R is defined for a glass-air
interface by:
R = [(n0 − 1)/(n0 + 1)]²
where
n0 ... the index of refraction of the core material.
Using the classical definition of absorption:
P0 = Pi·e^(−α·L)
where
P0 ... output power
Pi ... input power
α ... attenuation coefficient
L ... length
Transmission T is given by:
T = (1 − R)²·e^(−α·L)
The factor (1 − R) accounts for the Fresnel reflection loss at a fiber end face. Because there
are two surfaces (entrance and exit), the effect of the reflections is multiplicative, which
accounts for the squared term.
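A minimal Python sketch of the combined effect, assuming a core index of 1.5 and an attenuation of 3 dB/km over 1 km (the conversion from dB/km to the exponential coefficient uses α = A·ln10/10):

import math

n0 = 1.5                                   # assumed refractive index of the core
R = ((n0 - 1) / (n0 + 1))**2               # Fresnel loss at each glass-air end face
alpha_dB_per_km = 3.0                      # assumed fiber attenuation
L_km = 1.0
alpha = alpha_dB_per_km * math.log(10) / 10.0   # dB/km -> exponential coefficient per km
T = (1 - R)**2 * math.exp(-alpha * L_km)
print(f"End-face reflectance R = {R:.3f}, overall transmission T = {T:.3f}")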
2.4.7.1 Power measurements
Watts are the basic units of optical power measurement. In fiberoptics, decibel units are
the logarithmic transformations of watts and submultiples of watts. Decibel units are used in
fiberoptics because they provide a convenient means of condensing power measurement
information that has a wide dynamic range (Table 3).
Table 3: Conversion dBm to mW
Since fiberoptic power levels cover many orders of magnitude, the logarithmically
compressed decibel scale is commonly used. Decibel power is defined as:
dB = 10·log10(Psignal/Preference)
Since the decibel is a ratio, it must either have a mutually agreed reference power (such
as 1 milliwatt or 1 microwatt), or be understood to represent the power difference between
two signals. For example, to express the loss of an optical component where the input power
is PIN and the output power is POUT:
Loss (dB) = dBm (PIN) - dBm (POUT)
or
Loss (dB) = dBµ (PIN) - dBµ (POUT)
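For example, a short Python sketch (with assumed power values) that converts milliwatts to dBm and takes the difference:

import math

def to_dBm(power_mW):
    """Decibels referenced to 1 mW."""
    return 10.0 * math.log10(power_mW / 1.0)

p_in_mW, p_out_mW = 0.5, 0.1              # assumed powers at the component input/output
loss_dB = to_dBm(p_in_mW) - to_dBm(p_out_mW)
print(f"P_in = {to_dBm(p_in_mW):.2f} dBm, P_out = {to_dBm(p_out_mW):.2f} dBm, "
      f"loss = {loss_dB:.2f} dB")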
Power measurements are made by converting light from a laser diode or LED, for
example, into an electrical signal through an optical-electrical converter or detector (Figure
24).
Figure 24: Power measurement
Fiberoptics communications wavelengths range from 650 nm to 1550 nm.
You should select your detector according to the wavelength you wish to measure. Fiber
measurements in the wavelength range of 360 to 1100 nm require silicon detector heads.
Measurements up to 1800 nm require germanium or indium gallium arsenide sensor heads.
There are two methods for making measurements of a fiberoptic laser diode or LED.
One way, suitable for low power light sources, is to connect the fiber and its attached laser
diode to the power meter. The alternate method, which is particularly useful for high power
light sources, is to use a miniature integrating sphere. The sphere attenuates the light intensity
by several orders of magnitude, and thus permits direct measurement of power output.
Fiber Attenuation Measurements
Figure 25 and Figure 26 illustrate two methods for measuring cable loss. The cutback
technique uses just one fiber for measurement but requires that you cut off an end piece of the
fiber during the measurement process. The dB/km loss factor is the difference in dB power
measurements divided by the length of the cut-off piece of fiber.
In the cable substitution method, the dB/kilometer loss factor can be established by
comparing the transmission through a short test cable with the transmission through a
longer cable of known length.
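A sketch of the cutback calculation in Python (the power readings and cut length are assumed example values):

import math

def cutback_loss_dB_per_km(p_long_mW, p_short_mW, cut_length_km):
    """dB/km from the cutback technique: power measured through the full fiber and
    again after cutting off a known length near the source (same coupling assumed)."""
    delta_dB = 10.0 * math.log10(p_short_mW / p_long_mW)
    return delta_dB / cut_length_km

# Assumed readings: 0.40 mW through the full fiber, 0.55 mW after cutting off 0.5 km
print(f"{cutback_loss_dB_per_km(0.40, 0.55, 0.5):.2f} dB/km")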
Figure 25: Cutback method
Figure 26: Cable substitution method
2.4.8
Wave guide propagation for fiber
Depending upon the core size and numerical aperture, the fiber will transmit many
modes (rays) of the light and be referred to as multimode fiber. It may also be limited to a
single mode.
Modal performance is mathematically defined by Maxwell’s equation for cylindrical
boundary conditions as follows [3]:
∂²ψ/∂ρ² + (1/ρ)·∂ψ/∂ρ + (1/ρ²)·∂²ψ/∂φ² + (k² − β²)·ψ = 0
where
ρ ... the radial parameter
ψ ... the wave function of the guided light
k ... the bulk medium wave vector
β ... the wave vector along the fiber axis
φ ... the azimuthal angle
If the wave function is assumed to be of the form:
ψ = A(ρ)·e^(iυφ)
then the Maxwell equation becomes a Bessel Equation. The boundary conditions
require that on the axis (ρ = 0) the field has a finite value, while the field becomes zero
at infinity (ρ = ∞).
Figure 27: Wave guide propagation
The resulting longitudinal components of the field functions are as follows:
ψ = A·Jυ(ur/a)·e^(iυφ) for ρ ≤ a (core)
ψ = C·Kυ(wr/a)·e^(iυφ) for ρ ≥ a (cladding)
where Jυ(ur/a) and Kυ(wr/a) are the Bessel function of the first kind and the modified Bessel
function of the second kind, respectively:
u = a·(k1² − β²)^(1/2),   k1 = 2πn1/λ
w = a·(β² − k2²)^(1/2),   k2 = 2πn2/λ
The subscripts 1 and 2 denote the core and the cladding, respectively, while a is the
radius of the core.
V² = w² + u² = (2πa/λ)²·(n1² − n2²)
The term w² + u² is a constant for all modes and is characteristic of the optical fiber. The
parameter V determines the number of modes in the fiber and is related to the numerical
aperture as follows:
V = (2πa/λ)·NA
The relationship clearly follows the previously developed concept for numerical
aperture (i.e., as the NA increases, so does the number of rays (modes) that can be accepted).
It is important to note that the mathematical solutions to Maxwell’s equations exist only for
certain allowed values. Therefore, there are allowed modes; modes that do not fit the
mathematical solutions are not allowed. As a result, modes can be considered to be quantized.
For the simplest case, υ = 0, only the TE and TM modes are present (Figure 28). Two modes
exist in a single-mode fiber because the mode can degenerate due to polarization. For
higher-order modes the notation is HEmn (m = υ) or EHmn, depending upon the dominance of
magnetic or electric characteristics. The subscript n labels further mathematical solutions
arising from the behavior of the Bessel functions. The field varies in a periodic fashion with
φ, and skew rays that have a φ component result in a power concentration away from the
fiber axis, near the cladding.
Figure 28: Modes
At V < 2,405, the fiber can support only a single mode, designated as HE11. At
V > 2,405, other modes can exist, with the number increasing as V increases. Each of the
modes is doubly degenerate due to polarization, since in circular waveguides, two orthogonal
polarization states (modes) exist for the same wave number. Elliptical core geometric
considerations can eliminate the degeneracy.
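These relations can be evaluated directly. The Python sketch below (with assumed fiber parameters: 4 µm core radius, n1 = 1.450, n2 = 1.447, λ = 1310 nm) computes V and checks the single-mode condition V < 2,405:

import math

def v_number(core_radius_um, wavelength_um, n1, n2):
    """Normalized frequency V = (2*pi*a/lambda) * NA."""
    NA = math.sqrt(n1**2 - n2**2)
    return 2 * math.pi * core_radius_um / wavelength_um * NA

# Assumed fiber: 4 um core radius, n1 = 1.450, n2 = 1.447, operated at 1.31 um
V = v_number(4.0, 1.31, 1.450, 1.447)
print(f"V = {V:.2f} -> {'single-mode (HE11 only)' if V < 2.405 else 'multimode'}")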
3 Optical components
Optical components help to manipulate light in many ways. In this section these
components will be discussed from the standpoint of geometrical optics. Geometrical optics
techniques are only used for components whose sizes are much larger than the wavelength of
interest.
3.1 Optical windows
The main purpose of windows is to protect sensor interiors from the environment. A window
should transmit light in a specific wavelength range without distortion. Therefore windows
should have appropriate properties depending on the particular application.
3.2 Optical filters
An optical filter (Figure 29) is a device which selectively transmits light having certain
properties (often, a particular range of wavelengths, that is, range of colors of light), while
blocking the remainder [12]. They are commonly used in photography, in many optical
instruments, and to color stage lighting.
Figure 29: Colored and neutral density filters
The types of the optical filters:
• Absorptive filters are usually made from glass to which various inorganic or
organic compounds have been added. These compounds absorb some wavelengths
of light while transmitting others. The compounds can also be added to plastic
(often polycarbonate or acrylic) to produce gel filters, which are lighter and cheaper
than glass-based filters.
• Reflective filters can be made by coating a glass substrate with an optical coating.
These filters reflect the unwanted portion of the light and transmit the remainder.
Reflective filters are particularly suited for high-precision scientific work, since
their exact filter band can be selected by precise control of the coating. They are,
however, usually much more expensive and delicate than absorption filters.
Coating-based filters are often referred to as dichroic, and can be used in devices
such as a dichroic prism to separate a beam of light into different colored
components.
• Monochromatic filters only allow a narrow range of wavelengths (that is, a single
colour) to pass.
• Infrared (IR) or heat-absorbing filters are designed to block mid-infrared
wavelengths but pass visible light. They are often used in devices with bright
incandescent light bulbs (such as slide and overhead projectors) to prevent unwanted
heating. There are also near-infrared filters which are used in solid state video
cameras to compensate for the high sensitivity of many camera sensors to near-infrared light.
• Ultraviolet (UV) filters block ultraviolet radiation, but let visible light through.
Because photographic film and digital sensors are sensitive to ultraviolet (which is
abundant in skylight) but the human eye is not, such light would, if not filtered out,
make photographs look different from the scene that the photographer saw. This
causes images of distant mountains to appear hazy. By attaching a filter to remove
ultraviolet, photographers can produce pictures that more closely resemble the scene
as seen by a human eye.
3.2.1
Interference filters
An interference filter is an optical filter that reflects one or more spectral bands or lines
and transmits others, while maintaining a nearly zero coefficient of absorption for all
wavelengths of interest. An interference filter may be high-pass, low-pass, bandpass, or
band-rejection.
Figure 30: Interference filters
An interference filter (Figure 30) consists of multiple thin layers of dielectric material
having different refractive indices. There also may be metallic layers. In its broadest meaning,
interference filters comprise also etalons that could be implemented as tunable interference
filters. Interference filters are wavelength-selective by virtue of the interference effects that
take place between the incident and reflected waves at the thin-film boundaries.
Bandpass filters are normally designed for normal incidence. However, when the angle
of incidence of the incoming light is increased from zero, the central wavelength of the filter
decreases, resulting in partial tunability. If λc is the central wavelength under an angle of
incidence θ < 20°, λ0 is the central wavelength at normal incidence, and n* is the filter
effective index of refraction, then:
λc = λ0·[1 − (sin²θ)/n*²]^(1/2)
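Assuming the tuning relation above, the shift can be estimated numerically (an illustrative Python sketch; the effective index and tilt angle are assumed values):

import math

def tuned_center_wavelength(lam0_nm, theta_deg, n_eff):
    """Central wavelength of an interference filter tilted by theta (theta < ~20 deg)."""
    return lam0_nm * math.sqrt(1.0 - (math.sin(math.radians(theta_deg)) / n_eff)**2)

# Example: 850 nm filter, assumed effective index 2.0, tilted by 15 degrees
print(f"{tuned_center_wavelength(850.0, 15.0, 2.0):.1f} nm")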
3.3 Mirrors
Mirrors are the oldest optical instruments. They are based on reflectivity and are widely used
for gathering light and forming images, since they work over a wide wavelength range and do
not suffer from the dispersion problems associated with lenses and other refracting elements.
They avoid the chromatic aberration arising from dispersion in lenses, but are subject to other
aberrations [5]. Reflective coatings suitable for operation in the visible and
near infrared range include silver, aluminium, chromium and rhodium.
3.3.1
Plane mirrors
A plane mirror is simply a mirror with a flat surface; all of us use plane mirrors every
day, so we've got plenty of experience with them. Images produced by plane mirrors have a
number of properties, including:
• the image produced is upright
• the image is the same size as the object (i.e., the magnification is m = 1)
• the image is the same distance from the mirror as the object appears to be (i.e.,
the image distance = the object distance)
• the image is a virtual image, as opposed to a real image, because the light rays
do not actually pass through the image. This also implies that an image could not
be focused on a screen placed at the location where the image is.
Consider an object placed a certain distance in front of a mirror, as shown in the
diagram (Figure 31). To figure out where the image of this object is located, a ray diagram
can be used. In a ray diagram, rays of light are drawn from the object to the mirror, along with
the rays that reflect off the mirror. The image will be found where the reflected rays intersect.
Note that the reflected rays obey the law of reflection. What you notice is that the reflected
rays diverge from the mirror; they must be extended back to find the place where they
intersect, and that's where the image is.
Figure 31: Plane mirror
Analyzing this a little further, it's easy to see that the height of the image is the same as
the height of the object. Using the similar triangles ABC and EDC, it can also be seen that the
distance from the object to the mirror is the same as the distance from the image to the mirror.
3.3.2
Convex and concave mirror
We have also seen how images are created by the reflection of light off curved mirrors.
Suppose that a light bulb is placed in front of a concave mirror; the light bulb will emit light
in a variety of directions, some of which will strike the mirror. Each individual ray of light
will reflect according to the law of reflection. Upon reflecting, the light will converge at a
point. At the point where the light from the object converges, a replica or reproduction of the
actual object is created; this replica is known as the image. Once the reflected light rays
reach the image location, they begin to diverge. The point where all the reflected light rays
converge is known as the image point. Not only is it the point where light rays converge, it is
also the point where reflected light rays appear to an observer to be diverging from.
Regardless of the observer's location, the observer will see a ray of light passing through the
real image location. To view the image, the observer must line her sight up with the image
location in order to see the image via the reflected light ray. The diagram (Figure 32) depicts
several rays from the object reflecting from the mirror and converging at the image location.
The reflected light rays then begin to diverge, with each one being capable of assisting an
individual in viewing the image of the object.
Figure 32: Concave mirror
For plane mirrors, virtual images are formed. Light does not actually pass through the
virtual image location; it only appears to an observer as though the light was emanating from
the virtual image location. The image formed by this concave mirror is a real image. When a
real image is formed, it still appears to an observer as though light is diverging from the real
image location. Only in the case of a real image, light is actually passing through the image
location.
Another characteristic of the images of objects formed by convex mirrors pertains to
how a variation in object distance affects the image distance and size. The diagram (Figure
33) shows seven different object locations (drawn and labeled in red) and their corresponding
image locations (drawn and labeled in blue).
Figure 33: Convex mirror
The diagram shows that as the object distance is decreased, the image distance is
decreased and the image size is increased. So as an object approaches the mirror, its virtual
image on the opposite side of the mirror approaches the mirror as well; and at the same time,
the image is becoming larger.
Figure 34: Convex and concave mirrors
3.4 Polarizer
A polarizer is a device that converts an unpolarized or mixed-polarization beam of
electromagnetic waves (e.g., light) into a beam with a single polarization state (usually, a
single linear polarization). Polarizers are used in many optical techniques and instruments,
and polarizing filters find applications in photography.
Polarizers can be divided into two general categories: absorptive polarizers, where the
unwanted polarization states are absorbed by the device; and beam-splitting polarizers,
where the unpolarized beam is split into two beams with opposite polarization states.
Figure 35: Wire-grid polarizer
The simplest polarizer in concept is the wire-grid polarizer, which consists of a regular
array of fine parallel metallic wires, placed in a plane perpendicular to the incident beam.
Electromagnetic waves which have a component of their electric fields aligned parallel to the
wires induce the movement of electrons along the length of the wires. Since the electrons are
free to move, the polarizer behaves in a similar manner as the surface of a metal when
reflecting light; some energy is lost due to Joule heating in the wires, and the rest of the wave
is reflected backwards along the incident beam.
For waves with electric fields perpendicular to the wires, the electrons cannot move
very far across the width of each wire; therefore, little energy is lost or reflected, and the
incident wave is able to travel through the grid. Since electric field components parallel to the
wires are absorbed or reflected, the transmitted wave has an electric field purely in the
direction perpendicular to the wires, and is thus linearly polarized. Simply stated, only light
polarized in a certain direction passes through the polarizer, and the rest of the light is
absorbed or reflected. Note that the polarization direction is perpendicular to the wires; the
naive concept of a wave "slipping through" the gaps between the wires is incorrect.
For practical use, the separation distance between the wires must be less than the
wavelength of the radiation, and the wire width should be a small fraction of this distance.
This means that wire-grid polarizers (Figure 35) are generally only used for microwaves and
for far- and mid-infrared light. Using advanced lithographic techniques, very tight pitch
metallic grids can be made which polarize visible light, but they are generally impractical
compared to other polarizer types.
Certain crystals, due to the effects described by crystal optics, show dichroism, a
preferential absorption of light which is polarized in a particular direction. They can therefore
be used as polarizers. The best known crystal of this type is tourmaline. However, this crystal
is seldom used as a polarizer, since the dichroic effect is strongly wavelength dependent and
the crystal appears coloured. Herapathite is also dichroic, and is not strongly coloured, but is
difficult to grow in large crystals.
3.4.1
Birefringent polarizer
Other polarizers exploit the birefringent properties of crystals such as quartz and calcite.
In these crystals, a beam of unpolarized light incident on their surface is split by refraction
into two rays. Snell's law holds for one of these rays, the ordinary or o-ray, but not for the
other, the extraordinary or e-ray. In general the two rays will be in different polarization
states, though not in linear polarization states except for certain propagation directions relative
to the crystal axis. The two rays also experience differing refractive indices in the crystal.
Figure 36: Nicol prism
A Nicol prism was an early type of birefringent polarizer that consists of a crystal of
calcite which has been split and rejoined with Canada balsam. The crystal is cut such that the
o- and e-rays are in orthogonal linear polarization states (Figure 36). Total internal reflection
of the o-ray occurs at the balsam interface, since it experiences a larger refractive index in
calcite than in the balsam, and the ray is deflected to the side of the crystal. The e-ray, which
sees a smaller refractive index in the calcite, is transmitted through the interface without
deflection. Nicol prisms produce a very high purity of polarized light, and were extensively
used in microscopy, though in modern use they have been mostly replaced with alternatives
such as the Glan-Thompson prism, Glan-Foucault prism, and Glan-Taylor prism. These
prisms are not true polarizing beamsplitters since only the transmitted beam is fully polarized.
Figure 37: Wollaston prism
A Wollaston prism is another birefringent polarizer consisting of two triangular calcite
prisms with orthogonal crystal axes that are cemented together. At the internal interface, an
unpolarized beam splits into two linearly polarized rays which leave the prism at a divergence
angle of 15° - 45°.
3.4.2
Malus's law and other properties
Figure 38: Polarization – Malus´s law
Malus's law, which is named after Etienne-Louis Malus, says that when a perfect
polarizer is placed in a polarized beam of light, the intensity, I, of the light that passes through
is given by
I = I0·cos²(θi)
where
I0 ... the initial intensity,
θi ... the angle between the light's initial plane of polarization and the axis of
the polarizer.
A beam of unpolarized light can be thought of as containing a uniform mixture of linear
polarizations at all possible angles. Integrating Malus's law with respect to θ, it can be shown
that the transmission coefficient for a polarizer is
I/I0 = 1/2
In practice, some light is lost in the polarizer and the actual transmission of unpolarized
light will be somewhat lower than this, around 38% for Polaroid-type polarizers but
considerably higher (>49.9%) for some birefringent prism types.
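A short numerical illustration of Malus's law (a Python sketch; intensities are normalized to I0 = 1):

import math

def malus(I0, theta_deg):
    """Transmitted intensity behind an ideal polarizer, I = I0 * cos^2(theta)."""
    return I0 * math.cos(math.radians(theta_deg))**2

for theta in (0, 30, 45, 60, 90):
    print(f"theta = {theta:2d} deg -> I/I0 = {malus(1.0, theta):.3f}")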
3.4.3
Beam-splitting polarizers
Beam-splitting polarizers split the incident beam into two beams of differing
polarization. For an ideal polarizing beamsplitter these would be fully polarized, with
orthogonal polarizations. For many common beam-splitting polarizers, however, only one of
the two output beams is fully polarized. The other contains a mixture of polarization states.
Unlike absorptive polarizers, beam splitting polarizers do not need to absorb and
dissipate the energy of the rejected polarization state, and so they are more suitable for use
with high intensity beams such as laser light. True polarizing beamsplitters are also useful
where the two polarization components are to be analyzed or used simultaneously.
3.4.4
Polarization by reflection
When light reflects at an angle from an interface between two transparent materials, the
reflectivity is different for light polarized in the plane of incidence and light polarized
perpendicular to it. Light polarized in the plane is said to be p-polarized, while that polarized
perpendicular to it is s-polarized. At a special angle known as Brewster's angle, no p-polarized
light is reflected from the surface, thus all reflected light must be s-polarized, with an electric
field perpendicular to the plane of incidence.
A simple polarizer can be made by tilting a stack of glass plates at Brewster's angle to
the beam (Figure 39). Some of the s-polarized light is reflected from each surface of each
plate. For a stack of plates, each reflection depletes the incident beam of s-polarized light,
leaving a greater fraction of p-polarized light in the transmitted beam at each stage. For
visible light in air and typical glass, Brewster's angle is about 57°, and about 16% of the
s-polarized light present in the beam is reflected for each air-to-glass or glass-to-air transition. It
takes many plates to achieve even mediocre polarization of the transmitted beam with this
approach.
Figure 39: Full polarization at Brewster's angle
3.5 Beam splitters
A beam splitter is an optical device that splits a beam of light in two. It is the crucial
part of most interferometers.
In its most common form, it is a cube, made from two triangular glass prisms which are
glued together at their base using Canada balsam. The thickness of the resin layer is adjusted
such that (for a certain wavelength) half of the light incident through one "port" (i.e. face of
the cube) is reflected and the other half is transmitted. Polarizing beam splitters, such as the
Wollaston prism, use birefringent materials, splitting light into beams of differing
polarization.
Another design is the use of a half-silvered mirror. This is a plate of glass with a thin
coating of aluminum (usually deposited from aluminum vapour) with the thickness of the
aluminum coating such that, of light incident at a 45 degree angle, one half is transmitted and
one half is reflected. Instead of a metallic coating, a dielectric optical coating may be used.
Similarly, a very thin pellicle film may also be used as a beam splitter.
A third version of the beam splitter is a dichroic mirrored prism assembly which uses
dichroic optical coatings to split the incoming light into three beams, one each of red, green,
and blue.
3.6 Optical reflectors
Optical reflectors are used like mirrors for the reflection of light in a required direction.
However, optical reflectors differ from mirrors in that they should reflect not only light beams
perpendicular to the surface but also beams tilted by a certain small angle. They
most often have a modular structure composed of a set of basic prismatic cells. The
incident light beam is reflected three times by the internal walls of a prism, and for this
reason reflectors of this type are designated as triple reflectors [5].
3.7 Lenses
There are many similarities between lenses and mirrors. The mirror equation, relating
focal length and the image and object distances for mirrors, is the same as the lens equation
used for lenses. There are also some differences, however; the most important being that with
a mirror, light is reflected, while with a lens an image is formed by light that is refracted by,
and transmitted through, the lens. Also, lenses have two focal points, one on each side of the
lens.
The surfaces of lenses, like spherical mirrors, can be treated as pieces cut from spheres.
A lens is double sided, however, and the two sides may or may not have the same curvature.
A general rule of thumb is that when the lens is thickest in the center, it is a converging lens,
and can form real or virtual images. When the lens is thickest at the outside, it is a diverging
lens, and it can only form virtual images.
Ideally, the curve of the lens cross section is parabolic. However, it is difficult and
expensive to grind lenses to that precise shape; so many lenses are ground with a circular
cross section instead. This works acceptably for many applications, since the "nose" of a
parabola almost exactly coincides with a portion of a circle. However, such lenses do not
operate perfectly, and do introduce some distortion. In our discussion of lens optics, we will
assume ideal lenses for our calculations and descriptions, unless otherwise noted.
Lenses are classified by the curvature of the two optical surfaces (Figure 40). A lens is
biconvex (or just convex) if both surfaces are convex; likewise, a lens with two concave
surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex
or plano-concave depending on the curvature of the other surface. A lens with one convex and
one concave side is convex-concave, and in this case if both curvatures are equal it is a
meniscus lens. (Sometimes, meniscus lens can refer to any lens of the convex-concave type.)
It's not necessary for a lens to be thicker in the middle than at the edges. Just the
opposite is shown in the cross section to the right. This type of lens also has its uses, and has a
number of practical applications.
Figure 40: Types of lens
Figure 41: Converging (convex) and diverging (concave) lens
Converging lenses (Figure 42) can produce both real and virtual images while diverging
lenses can only produce virtual images. The process by which images are formed for lenses
is the same as the process by which images are formed for plane and curved mirrors. Images
are formed at locations where any observer is sighting as they view the image of the object
through the lens. So if the path of several light rays through a lens is traced, each of these light
rays will intersect at a point upon refraction through the lens. Each observer must sight in the
direction of this point in order to view the image of the object. While different observers will
sight along different lines of sight, each line of sight intersects at the image location. The
diagram below shows several incident rays emanating from an object - a light bulb. Three of
these incident rays correspond to our three strategic and predictable light rays. Each incident
ray will refract through the lens and be detected by a different observer (represented by the
eyes). The location where the refracted rays are intersecting is the image location.
In this case, the image is a real image since the light rays are actually passing through
the image location. To each observer, it appears as though light is diverging from this
location.
Figure 42: Image formation by a converging lens
Diverging lens (Figure 43) create virtual images since the refracted rays do not actually
converge to a point. In the case of a diverging lens, the image location is located on the
object's side of the lens where the refracted rays would intersect if extended backwards. Every
observer would be sighting along a line in the direction of this image location in order to see
the image of the object. As the observer sights along this line of sight, a refracted ray would
come to the observer's eye. This refracted ray originates at the object, and refracts through the
lens. The diagram below shows several incident rays emanating from an object - a light bulb.
Three of these incident rays correspond to our three strategic and predictable light rays. Each
incident ray will refract through the lens and be detected by a different observer (represented
by the eyes). The location where the refracted rays are intersecting is the image location.
Since light rays do not actually exist at the image location, the image is a virtual image. It
would only appear to an observer as though light were diverging from this location to the
observer's eye.
Figure 43: Image formation by a diverging lens
The above discussion relates to the formation of an image by a "point object" - in this
case, a small light bulb. The same principles apply to objects which occupy more than one
point in space. For example, a person occupies a multitude of points in space. As you sight at
a person through a lens, light emanates from each individual point on that person in all
directions. Some of this light reaches the lens and refracts. All the light which originates from
one single point on the object will refract and intersect at one single point on the image. This
is true for all points on the object; light from each point intersects to create an image of this
point. The result is that a replica or reproduction of the object is created as we sight at the
object through the lens. This replica or reproduction is the image of that object. This is
depicted in the diagram below (Figure 44).
Figure 44: Reproduction of the image
Ray diagrams (Figure 42, Figure 43) can be used to determine the image location, size,
orientation and type of image formed of objects when placed at a given location in front of a
lens.
Figure 45: Lens equation
Ray diagrams provide useful information about object-image relationships, yet fail to
provide the information in a quantitative form. While a ray diagram may help one determine
the approximate location and size of the image, it will not provide numerical information
about image distance and object size. To obtain this type of numerical information, it is
necessary to use the lens equation and the Magnification equation. The lens equation
expresses the quantitative relationship between the object distance (do), the image distance
(di), and the focal length (f) (Figure 45). The equation is stated as follows:
1/f = 1/do + 1/di
The Magnification equation relates the ratio of the image distance and object distance to
the ratio of the image height (hi) and object height (ho). The magnification equation is stated
as follows:
m = hi/ho = −di/do
These two equations can be combined to yield information about the image distance and
image height if the object distance, object height, and focal length are known.
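A minimal Python sketch of these two equations (the focal length and object distance are assumed example values):

def thin_lens_image(f_cm, d_o_cm):
    """Image distance and magnification from 1/f = 1/d_o + 1/d_i and m = -d_i/d_o."""
    d_i = 1.0 / (1.0/f_cm - 1.0/d_o_cm)
    m = -d_i / d_o_cm
    return d_i, m

# Example: converging lens, f = 10 cm, object at 30 cm
d_i, m = thin_lens_image(10.0, 30.0)
print(f"d_i = {d_i:.1f} cm, magnification m = {m:.2f}")   # 15 cm, -0.5 (real, inverted image)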
3.8 Optical fibers
An optical fiber (American spelling) or fibre (British spelling) is a cylindrical dielectric
waveguide that transmits light along its axis, by the process of total internal reflection.
Total internal reflection confines light within optical fibers (Figure 46). Because the
cladding has a lower refractive index, light rays reflect back into the core if they encounter the
cladding at a shallow angle (red lines). A ray that exceeds a certain "critical" angle escapes
from the fiber (yellow line).
Figure 46: Optical fiber
In recent years it has become apparent that fiber-optics are steadily replacing copper
wire as an appropriate means of communication signal transmission. A fiber-optic system is
similar to the copper wire system that fiber-optics is replacing. The difference is that fiber-
optics use light pulses to transmit information down fiber lines instead of using electronic
pulses to transmit information down copper lines. Looking at the components in a fiber-optic
chain will give a better understanding of how the system works in conjunction with wire
based systems.
Basically, a fiber optic cable is composed of two concentric layers termed the core and
the cladding. These are shown on the right side of Figure 47. The core and cladding have
different indices of refraction with the core having n1 and the cladding n2. Light is piped
through the core. A fiber optic cable has an additional coating around the cladding called the
jacket. Core, cladding and jacket are all shown in the three dimensional view on the left side
of Figure 47. The jacket usually consists of one or more layers of polymer. Its role is to
protect the core and cladding from shocks that might affect their optical or physical
properties. It acts as a shock absorber. The jacket also provides protection from abrasions,
solvents and other contaminants. The jacket does not have any optical properties that might
affect the propagation of light within the fiber optic cable.
At one end of the system is a transmitter. This is the place of origin for information
coming on to fiber-optic lines. The transmitter accepts coded electronic pulse information
coming from copper wire. It then processes and translates that information into equivalently
coded light pulses. A light-emitting diode (LED) or an injection-laser diode (ILD) can be used
for generating the light pulses. Using a lens, the light pulses are funneled into the fiber-optic
medium where they transmit themselves down the line.
Figure 47: Fiber optic
Figure 48: Fiber optic cable construction
Light pulses move easily down the fiber-optic line because of a principle known as total
internal reflection. This principle of total internal reflection states that when the angle of
incidence exceeds a critical value, light cannot get out of the glass; instead, the light bounces
back in. When this principle is applied to the construction of the fiber-optic strand, it is
possible to transmit information down fiber lines in the form of light pulses.
There are three types of fiber optic cable commonly used: glass, plastic optical fiber
(POF) and plastic clad silica (PCS).
3.8.1
Glass optical fiber
Glass fiber optic cable has the lowest attenuation and comes at the highest cost. A pure
glass fiber optic cable has a glass core and a glass cladding. This candidate has, by far, the
most wide spread use. It has been the most popular with link installers and it is the candidate
with which installers have the most experience. The glass employed in a fiber optic cable is
ultra pure, ultra transparent, silicon dioxide or fused quartz. One reference put this in
perspective by noting that "if seawater were as clear as this type of fiber optic cable then you
would be able to see to the bottom of the deepest trench in the Pacific Ocean." During the
glass fiber optic cable fabrication process impurities are purposely added to the pure glass so
as to obtain the desired indices of refraction needed to guide light. Germanium or
phosphorus is added to increase the index of refraction. Boron or fluorine is added to
decrease the index of refraction. Other impurities may somehow remain in the glass cable
after fabrication. These residual impurities may increase the attenuation by either scattering or
absorbing light.
3.8.2
Plastic optical fiber
Plastic fiber optic cable has the highest attenuation, but comes at the lowest cost. Plastic
fiber optic cable has a plastic core and plastic cladding. This fiber optic cable is quite thick.
Typical dimensions are 480/500, 735/750 and 980/1000. The core generally consists of
PMMA (polymethylmethacrylate) coated with a fluoropolymer. Plastic fiber optic cable was
pioneered in Japan principally for use in the automotive industry. It is just beginning to gain
attention in the premises data communications market in the United States. The increased
interest is due to two reasons. First, the higher attenuation relative to glass may not be a
serious obstacle with the short cable runs often required in premise networks. Secondly, the
cost advantage sparks interest when network architects are faced with budget decisions.
Plastic fiber optic cable does have a problem with flammability. Because of this, it may not be
appropriate for certain environments and care has to be given when it is run through a plenum.
Otherwise, plastic fiber is considered extremely rugged with a tight bend radius and the ability
to withstand abuse.
3.8.3
Plastic Clad Silica optical fiber
Plastic Clad Silica (PCS) fiber optic cable has an attenuation that lies between glass and
plastic and a cost that lies between their costs as well. Plastic Clad Silica (PCS) fiber optic
cable has a glass core which is often vitreous silica while the cladding is plastic - usually a
silicone elastomer with a lower refractive index. In 1984 the IEC standardized PCS fiber optic
cable to have the following dimensions: core 200 microns, silicone elastomer cladding 380
microns, jacket 600 microns. PCS fabricated with a silicone elastomer cladding suffers from
three major defects. It has considerable plasticity. This makes connector application difficult.
Adhesive bonding is not possible and it is practically insoluble in organic solvents. All of this
makes this type of fiber optic cable not particularly popular with link installers. However,
there have been some improvements in it in recent years.
3.8.4
Single-mode and multi-mode fiber optic cable
When it comes to mode of propagation fiber optic cable can be one of two types,
multimode or single-mode. These provide different performance with respect to both
attenuation and time dispersion. The single-mode fiber optic cable provides the better
performance at, of course, a higher cost.
In order to understand the difference in these types an explanation must be given of
what is meant by mode of propagation.
Light has a dual nature and can be viewed as either a wave phenomenon or a particle
phenomenon (photons). For the present purposes consider it as a wave. When this wave is
guided down a fiber optic cable it exhibits certain modes. These are variations in the intensity
of the light, both over the cable cross section and down the cable length. These modes are
actually numbered from lowest to highest. In a very simple sense each of these modes can be
thought of as a ray of light. Although, it should be noted that the term ray of light is a hold
over from classical physics and does not really describe the true nature of light.
In any case, view the modes as rays of light. For a given fiber optic cable the number
of modes that exist depends upon the dimensions of the cable and the variation of the indices
of refraction of both core and cladding across the cross section. There are three principal
possibilities. These are illustrated in Figure 49.
Figure 49: Types of mode propagation in fiber optic cable
3.8.4.1 Single-mode fiber optic cable
SINGLE-MODE FIBER (Figure 50) has a narrow core (eight microns or less), and the
index of refraction between the core and the cladding changes less than it does for multimode
fibers. Light thus travels parallel to the axis, creating little pulse dispersion. Telephone and
cable television networks install millions of kilometers of this fiber every year.
Figure 50: Single-mode fiber
Single Mode cable is a single strand of glass fiber with a diameter of 8.3 to 10 microns
that has one mode of transmission. Single-mode fiber has a relatively narrow core,
through which only one mode will propagate, typically at 1310 or 1550 nm. Single-mode fiber
gives you a higher transmission rate and up to 50 times more distance than multimode, but it
also costs more. Single-mode fiber has a much smaller core than multimode. The small core
and single light-wave virtually eliminate any distortion that could result from overlapping
light pulses, providing the least signal attenuation and the highest transmission speeds of any
fiber cable type.
3.8.5
Multimode fiber optic cable
STEP-INDEX MULTIMODE FIBER (Figure 49) has a large core, up to 100 microns in
diameter. As a result, some of the light rays that make up the digital pulse may travel a direct
route, whereas others zigzag as they bounce off the cladding. These alternative pathways
cause the different groupings of light rays, referred to as modes, to arrive separately at a
receiving point. The pulse, an aggregate of different modes, begins to spread out, losing its
well-defined shape. The need to leave spacing between pulses to prevent overlapping limits
bandwidth, that is, the amount of information that can be sent. Consequently, this type of fiber
is best suited for transmission over short distances, in an endoscope, for instance.
GRADED-INDEX MULTIMODE FIBER (Figure 49) contains a core in which the
refractive index diminishes gradually from the center axis out toward the cladding. The higher
refractive index at the center makes the light rays moving down the axis advance more slowly
than those near the cladding. Also, rather than zigzagging off the cladding, light in the core
curves helically because of the graded index, reducing its travel distance. The shortened path
and the higher speed allow light at the periphery to arrive at a receiver at about the same time
as the slow but straight rays in the core axis. The result: a digital pulse suffers less dispersion.
Multimode cable is made of glass fibers, with a common diameter in the 50-to-100
micron range for the light-carrying component (the most common size is 62,5 µm). Multimode fiber
gives you high bandwidth at high speeds over medium distances. Light waves are dispersed
into numerous paths, or modes, as they travel through the cable's core, typically at 850 or 1300
nm. Typical multimode fiber core diameters are 50, 62,5, and 100 micrometers. However, in
long cable runs (greater than 3000 feet - 914.4 meters), multiple paths of light can cause
signal distortion at the receiving end, resulting in an unclear and incomplete data
transmission.
3.8.6
Loss Mechanisms in Fibers [8]
The following effects can lead to losses in electromagnetic energy propagating in
fibers:
• material absorption,
• material scattering,
• waveguide scattering due to form-inhomogeneities,
• mode losses due to fiber bending and cladding losses.
3.8.6.1 Material-Absorption
Absorption losses are largely due to impurities in glass material from residual foreign
atomic substances and hydrogen/oxygen molecules. As a result, there are attenuation maxima in
narrow wavelength bands. The most strongly attenuated wavelength (highest absorption)
is due to (OH)- ions. In quartz this is at λ = 2.7 μm. In the spectral region below this
wavelength, there are other absorption bands at 1.38μm, 1.24μm, 950 nm and 720 nm.
Between these wavelength bands there are “windows” of minimal attenuation. These
spectral regions are at 850nm (1st windows), at 1300 nm (2nd window) and at 1550 nm (3rd
window). These spectral regions are used for data transmission (communication technology).
Foreign substances include metal ions such as Cr3+, Fe2+ and Cu2+. The associated
absorption bands are between 500nm and 1000 nm. The bandwidth can be very different
depending on the specific glass and metal ion being discussed.
Attempting to transmit short wavelength light in quartz fibers (i.e. UV light, λ = 210
nm) can lead to a damage mechanism referred to as solarization. In the quartz structure, there
are absorption centers where anions (negatively charged ions) are replaced by an electron.
These electrons can be excited, potentially at resonance. These regions in the crystal are also
called color centers, because the normally color neutral crystals (i.e. NaCl) become
characteristically discoloured.
3.8.6.2 Material Scattering
One crucial scattering mechanism is Rayleigh scattering. Spatially there are density
fluctuations (on a scale short compared to the wavelength) which alter the index of refraction and
cause scattering. The intensity of the scattered light is proportional to 1/λ4. The effect
evidences itself in, among other things, strong backscattering.
Another scattering mechanism is Mie scattering, which mainly results in forward
scattering. This mechanism comes from material inhomogeneities with dimensions comparable
to or larger than the wavelength.
Stimulated Raman scattering and stimulated Brillouin scattering are non-linear
radiation-induced effects which occur once certain intensity thresholds are exceeded.
Transmitting laser light alone can exceed these threshold values.
3.8.6.3 Light Guide Specific Scattering Mechanisms
So called intrinsic fiber characteristics can cause loss of energy. Some of these effects
are: changes in core diameter, difference in refractive indices, index profile effects, mode
coupling (double mechanisms) and scattered radiation in the cladding glass. Radiation losses
can exist due to the conversion of core modes to non-propagating modes (cladding modes).
This results in a reduction in the carrying modes.
Extrinsic causes for loss mechanisms come from such things as mechanical influences,
such as micro and macrobending.
3.8.6.4 Radiation Losses due to Macrobending
Fiber bending with a constant bend radius is referred to as macrobending. This produces
at least 2 loss mechanisms:
a) In multimode fibers, the number of propagating modes is reduced as a function of
bend radius according to the following description [8]:
M(R) ≈ M0·[1 − (DF·n2²)/(R·NA²)]
where
M0 ... number of propagating modes without bending
M(R) ... number of propagating modes with bending
n2 ... cladding refractive index
R ... bend radius
DF ... fiber diameter
NA ... numerical aperture
b) An additional problem worth mentioning in bent fibers is electromagnetic radiation
loss by differences in propagation (wave front) velocity. The main portion of
electromagnetic energy is concentrated in the fiber core, while other portions are
transmitted in the cladding and a slight amount outside the cladding.
When the fiber is bent with bend radius R, the light moves with the propagation velocity
allowed by the medium. In the fiber cross section, the part of the wave front radially further
from the center of curvature would have to move with a greater velocity than the part in the
fiber core in order to maintain the signal transport speed. At a critical radial distance this
required velocity reaches the limit the medium allows; beyond this point the required
transport velocity can no longer be attained, light can no longer be guided in this
configuration, and the relevant portion of the energy radiates into the surroundings.
Figure 51: Macrobending losses
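The mode-reduction expression in (a) can be evaluated numerically; the Python sketch below (with assumed fiber parameters: 62,5 µm core, cladding index 1,46, NA = 0,275) shows how the fraction of surviving modes shrinks as the bend radius decreases:

def surviving_mode_fraction(core_diameter_um, n2, NA, bend_radius_mm):
    """M(R)/M0 = 1 - D_F * n2^2 / (R * NA^2) for a bent step-index multimode fiber."""
    D = core_diameter_um * 1e-6
    R = bend_radius_mm * 1e-3
    return max(0.0, 1.0 - D * n2**2 / (R * NA**2))

# Assumed 62.5 um core, cladding index 1.46, NA = 0.275
for R_mm in (50, 10, 5, 2):
    frac = surviving_mode_fraction(62.5, 1.46, 0.275, R_mm)
    print(f"R = {R_mm:3d} mm -> M(R)/M0 = {frac:.3f}")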
3.8.6.5 Losses due to microbending
Along the length of the fiber, periodic or statistically distributed locations of curvature
occur, whose magnitude varies continuously. The associated loss mechanism manifests itself
mainly as a continual transformation (coupling) of the transmitted modes.
Figure 52: Fiber coupling losses
3.8.6.6 Fiber Coupling Losses
Cleaved single fibers may be spliced. The splicing region can exhibit intrinsic (purely
optical) and extrinsic (mechanical alignment) losses. Figure 52 shows various
configurations and transmission values for multimode fibers with cleaved terminations.
3.8.7
Fiber connectors
The connector is a mechanical device, mounted on the end of a fiber optic cable, light
source, Receiver or housing, that allows it to be mated to a similar device. The Transmitter
provides the Information bearing light to the fiber optic cable through a connector. The
Receiver gets the Information bearing light from the fiber optic cable through a connector.
The connector must direct light and collect light. It must also be easily attached and detached
from equipment. This is a key point. The connector is disconnectable. With this feature it is
different from a splice, which will be discussed in the next sub-chapter.
A connector marks a place in the premises fiber optic data link where signal power can
be lost and the BER can be affected. It marks a place in the premises fiber optic data link
where reliability can be affected by a mechanical connection.
There are many different connector types. The ones for glass fiber optic cable are
briefly described below and put in perspective. This is followed by discussion of connectors
for plastic fiber optic cable. However, it must be noted that the ST connector is the most
widely used connector for premise data communications.
Figure 53: Fiber connectors
Plastic Fiber Optic Cable Connectors - Connectors that are exclusively used for plastic
fiber optic cable stress very low cost and easy application. They are often used in applications
with no polishing or epoxy; Figure 54 illustrates such a connector. Connectors for plastic fiber
optic cable include both proprietary designs and standard designs. Connectors used for glass
fiber optic cable, such as ST or SMA, are also available for use with plastic fiber optic cable.
As plastic fiber optic cable gains in popularity in the data communications world, there will
undoubtedly be greater standardization.
Figure 54: Plastic fiber optic cable connector
4 Light sources and detectors
4.1 Introduction
Many effects can be used to convert electromagnetic radiation (light) to electric power.
The process of optical detection involves the conversion of optical energy into an electronic
signal.
Absorption of photons by a sensing material may result in either a quantum or thermal
response. Therefore all light detectors are divided into two major groups, namely quantum and
thermal. The quantum detectors operate from the ultraviolet to the mid-infrared spectral
regions, while thermal detectors are suitable for the far-infrared range.
Quantum detectors (photovoltaic and photoconductive devices) rely on the interaction
of individual photons with a crystalline lattice of semiconductor materials. Their operations
are based on the internal photoelectric effect.
Thermal detectors are optimized for maximum absorption of radiation in a sensitive
element and minimum transfer of heat from the sensor.
Light can be produced and/or controlled electronically in a number of ways. In light
emitting diodes (LEDs), a solid state process called electroluminescence produces light. Under
specific conditions, solid state light sources can produce coherent light, as in laser diodes.
Other devices such as liquid crystal devices (LCDs) control externally supplied light from
display units.
4.2 Light sources
There are two types of light emitting junction diodes that can be used as the optical
source of the Transmitter. These are the light emitting diode (LED) and the laser diode (LD).
This is not the place to discuss the physics of their operation. LED's are simpler and generate
incoherent, lower power, light. LD's are more complex and generate coherent, higher power
light. The optical power output, P, from each of these devices is a function of the electrical
current input, I, from the modulation circuitry. The LED has a relatively linear P-I
characteristic, while the LD has a strong nonlinearity or threshold effect. The LD may also be
prone to kinks, where the power actually decreases with increasing current.
With minor exceptions, LDs have advantages over LED's in the following ways.
• They can be modulated at very high speeds.
• They produce greater optical power.
• They have higher coupling efficiency to the fiber optic cable.
LED's have advantages over LD's because they have
• higher reliability
• better linearity
• lower cost
Both the LED and LD generate an optical beam with such dimensions that it can be
coupled into a fiber optic cable. However, the LD produces an output beam with much less
spatial width than an LED. This gives it greater coupling efficiency. Each can be modulated
with a digital electrical signal. For very high-speed data rates the link architect is generally
driven to a Transmitter having a LD. When cost is a major issue the link architect is generally
driven to a Transmitter having an LED.
A key difference in the optical output of an LED and a LD is the wavelength spread
over which the optical power is distributed. The spectral width, σλ, is the 3 dB optical power
width (measured in nm or microns). The spectral width impacts the effective transmitted
signal bandwidth. A larger spectral width takes up a larger portion of the fiber optic cable link
bandwidth. Figure 55 illustrates the spectral width of the two devices. The optical power
generated by each device is the area under the curve. The spectral width is the half-power
spread. A LD will always have a smaller spectral width than a LED. The specific value of the
spectral width depends on the details of the diode structure and the semiconductor material.
However, typical values for a LED are around 40 nm for operation at 850 nm and 80 nm at
1310 nm. Typical values for a LD are 1 nm for operation at 850 nm and 3 nm at 1310 nm.
Figure 55: LED and laser spectral widths
4.2.1
Light emitting diode structure
LEDs are p-n junction devices (Figure 56) constructed of gallium arsenide (GaAs),
gallium arsenide phosphide (GaAsP), or gallium phosphide (GaP). Silicon and germanium are
not suitable because those junctions produce heat and no appreciable IR or visible light. The
junction in an LED is forward biased and when electrons cross the junction from the n- to the
p-type material, the electron-hole recombination process produces some photons in the IR or
visible in a process called electroluminescence. An exposed semiconductor surface can then
emit light (Figure 60).
Figure 56: Light emitting diode structure
Figure 57: Blue, green and red LEDs
Figure 58: LEDs
When the applied forward voltage on the diode of the LED drives the electrons and
holes into the active region between the n-type and p-type material, the energy can be
converted into infrared or visible photons. This implies that the electron-hole pair drops into a
more stable bound state, releasing energy on the order of electron volts by emission of a
photon. The red extreme of the visible spectrum, 700 nm, requires an energy release of 1.77
eV to provide the quantum energy of the photon. At the other extreme, 400 nm in the violet,
3.1 eV is required.
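These photon energies follow from E = h·c/λ; a small Python sketch using h·c ≈ 1239,84 eV·nm reproduces the values quoted above and also gives the energies at the common fiber wavelengths:

def photon_energy_eV(wavelength_nm):
    """E = h*c / lambda, using h*c ~= 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

for lam in (400, 555, 700, 850, 1310, 1550):
    print(f"{lam:4d} nm -> {photon_energy_eV(lam):.2f} eV")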
Figure 59: LED polarity
Figure 60: Electroluminiscence in LEDs
Conventional LEDs are made from a variety of inorganic semiconductor materials,
producing the following colors:
• aluminum gallium arsenide (AlGaAs) - red and infrared
• aluminum gallium phosphide (AlGaP) – green
• aluminum gallium indium phosphide (AlGaInP) - high-brightness orange-red,
orange, yellow, and green
• gallium arsenide phosphide (GaAsP) - red, orange-red, orange, and yellow
• gallium phosphide (GaP) - red, yellow and green
• gallium nitride (GaN) - green, pure green (or emerald green), and blue also
white (if it has an AlGaN Quantum Barrier)
• indium gallium nitride (InGaN) - near ultraviolet, bluish-green and blue
• silicon carbide (SiC) as substrate – blue
• silicon (Si) as substrate - blue (under development)
• sapphire (Al2O3) as substrate – blue
• zinc selenide (ZnSe) – blue
• diamond (C) – ultraviolet
• aluminum nitride (AlN), aluminum gallium nitride (AlGaN) - near to far
ultraviolet
When an LED is forward biased to the threshold of conduction, its current increases
rapidly and must be controlled to prevent destruction of the device (Figure 61). The light
output is quite linearly proportional to the current within its active region, so the light output
can be precisely modulated to send an undistorted signal through a fiber optic cable.
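As an illustration of the current control mentioned above, a minimal sketch follows; the supply voltage, forward voltage and target current are assumed example values, not data from the text.

```python
# Minimal sketch (values are illustrative assumptions, not from the text):
# an LED driven from a voltage supply needs a series resistor to limit the
# rapidly rising forward current described above.
def series_resistor(v_supply, v_forward, i_target):
    """Ohm's law across the resistor: R = (Vs - Vf) / I."""
    return (v_supply - v_forward) / i_target

R = series_resistor(v_supply=5.0, v_forward=1.8, i_target=0.02)  # 5 V rail, red LED, 20 mA
print(f"series resistor ~ {R:.0f} ohm")
```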
Figure 61: LED characteristic
An LED is a directional light source, with the maximum emitted power in the direction
perpendicular to the emitting surface. The typical radiation pattern shows that most of the
energy is emitted within 20° of the direction of maximum light (Figure 62). Some packages
for LEDs include plastic lenses to spread the light for a greater angle of visibility.
Figure 62: LED radiation patterns
4.2.2
Laser diodes
Laser diodes (Figure 63) are complex semiconductors that convert an electrical current
into light. The conversion process is fairly efficient in that it generates little heat compared to
incandescent lights.
Five inherent properties make lasers attractive for use in fiber optics:
1. They are small.
2. They possess high radiance (i.e., they emit lots of light in a small area).
3. The emitting area is small, comparable to the dimensions of optical fibers.
4. They have a very long life, offering high reliability.
5. They can be modulated (turned off and on) at high speeds.
Figure 63: Laser diode
Table 4 offers a quick comparison of some of the characteristics of lasers and LEDs.
These characteristics are discussed in greater detail in the remainder of this chapter.
Characteristic | LEDs | Lasers
Output Power | Linearly proportional to drive current | Proportional to current above the threshold
Current | Drive current: 50 to 100 mA | Threshold current: 5 to 40 mA
Coupled Power | Moderate | High
Speed | Slower | Faster
Output Pattern | Higher | Lower
Bandwidth | Moderate | High
Wavelengths | 0,66 to 1,65 µm | 0,78 to 1,65 µm
Spectral Width | Wider (40-190 nm FWHM) | Narrower (0,00001 nm to 10 nm FWHM)
Fiber Type | Multimode only | SM, MM
Ease of Use | Easier | Harder
Lifetime | Longer | Long
Cost | Low ($5-$300) | High ($100-$10.000)
Table 4: Comparison of LEDs and Laser diodes
Laser diodes are typically constructed of GaAlAs (gallium aluminum arsenide) for
short-wavelength devices. Long-wavelength devices generally incorporate InGaAsP (indium
gallium arsenide phosphide).
Several key characteristics of lasers determine their usefulness in a given application.
These are:
• Peak wavelength: This is the wavelength at which the source emits the most power.
It should be matched to the wavelengths that are transmitted with the least
attenuation through optical fiber. The most common peak wavelengths are 1310,
1550, and 1625 nm.
• Spectral width: Ideally, all the light emitted from a laser would be at the peak
wavelength, but in practice the light is emitted in a range of wavelengths centered at
the peak wavelength. This range is called the spectral width of the source.
• Emission pattern: The pattern of emitted light affects the amount of light that can
be coupled into the optical fiber. The size of the emitting region should be similar to
the diameter of the fiber core. Figure 64 illustrates the emission pattern of a laser.
• Power: The best results are usually achieved by coupling as much of a source's
power into the fiber as possible. The key requirement is that the output power of the
source be strong enough to provide sufficient power to the detector at the receiving
end, considering fiber attenuation, coupling losses and other system constraints. In
general, lasers are more powerful than LEDs.
• Speed: A source should turn on and off fast enough to meet the bandwidth limits of
the system. The speed is given according to a source's rise or fall time, the time
required to go from 10% to 90% of peak power. Lasers have faster rise and fall
times than LEDs.
Figure 64: Laser emission pattern
Linearity is another important characteristic of light sources for some applications.
Linearity represents the degree to which the optical output is directly proportional to the
electrical current input. Most light source designs pay little or no attention to linearity, making
them usable only for digital applications. Analog applications require close attention to
linearity. Nonlinearity in lasers causes harmonic distortion in the analog signal that is
transmitted over an analog fiber optic link.
Lasers are temperature sensitive; the lasing threshold will change with the temperature.
Figure 65 shows the typical behavior of a laser diode. As operating temperature changes,
several effects can occur. First, the threshold current changes. The threshold current is always
lower at lower temperatures and vice versa. The second change that can be important is the
slope efficiency. The slope efficiency is the number of milliwatts or microwatts of light output
per milliampere of increased drive current above threshold. Most lasers show a drop in slope
efficiency as temperature increases. Thus, lasers require a method of stabilizing the threshold
to achieve maximum performance. Often, a photodiode is used to monitor the light output on
the rear facet of the laser. The current from the photodiode changes with variations in light
output and provides feedback to adjust the laser drive current.
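The temperature behaviour described above can be illustrated with a small numerical sketch. It uses the common characteristic-temperature model for the threshold current; the threshold, slope efficiency and T0 values are illustrative assumptions, not measured data.

```python
# Minimal sketch of the behaviour described above, using the common
# characteristic-temperature model I_th(T) = I_th(T_ref)*exp((T - T_ref)/T0).
# Threshold, slope efficiency, T0 and the slope droop are assumed values.
import math

def laser_power_mw(i_ma, temp_c, i_th_ref_ma=20.0, t_ref_c=25.0,
                   t0_k=60.0, slope_mw_per_ma=0.25):
    """Optical output power of a laser diode above threshold."""
    i_th = i_th_ref_ma * math.exp((temp_c - t_ref_c) / t0_k)      # threshold rises with T
    slope = slope_mw_per_ma * (1.0 - 0.005 * (temp_c - t_ref_c))  # assumed mild slope droop
    return max(0.0, slope * (i_ma - i_th))

for t in (0, 25, 50, 70):
    print(f"{t:3d} degC: P(60 mA) = {laser_power_mw(60, t):.2f} mW")
```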
Figure 65: Temperature effects on laser optical output power
4.2.2.1 Laser diode types
The simple laser diode structure, described above, is extremely inefficient. Such devices
require so much power that they can only achieve pulsed operation without damage. Although
historically important and easy to explain, such devices are not practical.
a) Double heterostructure lasers
Figure 66: Diagram of front view of a double heterostructure laser diode
The first laser diode to achieve continuous wave operation was a double heterostructure
(Figure 66). In these devices, a layer of low bandgap material is sandwiched between two
high bandgap layers. One commonly-used pair of materials is gallium arsenide (GaAs) with
aluminium gallium arsenide (AlxGa(1-x)As). Each of the junctions between different bandgap
materials is called a heterostructure, hence the name "double heterostructure laser" or DH
laser. The kind of laser diode described in the first part of the article may be referred to as a
homojunction laser, for contrast with these more popular devices.
The advantage of a DH laser is that the region where free electrons and holes exist
simultaneously - the "active" region - is confined to the thin middle layer. This means that
many more of the electron-hole pairs can contribute to amplification - not so many are left out
in the poorly amplifying periphery. In addition, light is reflected from the heterojunction;
hence, the light is confined to the region where the amplification takes place.
b) Quantum well lasers
Figure 67: Diagram of front view of simple quantum well laser diode
If the middle layer is made thin enough, it acts as a quantum well (Figure 67). This
means that the vertical variation of the electron's wavefunction, and thus a component of its
energy, is quantised. The efficiency of a quantum well laser is greater than that of a bulk laser
because the density of states function of electrons in the quantum well system has an abrupt
edge that concentrates electrons in energy states that contribute to laser action.
Lasers containing more than one quantum well layer are known as multiple quantum
well lasers. Multiple quantum wells improve the overlap of the gain region with the optical
waveguide mode.
In a quantum cascade laser, the difference between quantum well energy levels is used
for the laser transition instead of the bandgap. This enables laser action at relatively long
wavelengths, which can be tuned simply by altering the thickness of the layer.
c) Separate confinement heterostructure lasers
The problem with the simple quantum well diode described above is that the thin layer
is simply too small to effectively confine the light. To compensate, another two layers are
added on, outside the first three. These layers have a lower refractive index than the centre
layers, and hence confine the light effectively. Such a design is called a separate confinement
heterostructure (SCH) laser diode (Figure 68).
62
FEKT Vysokého učení technického v Brně
Figure 68: Diagram of front view of separate confinement heterostructure quantum well laser
diode
d) Distributed feedback lasers
Distributed feedback lasers (DFB) are the most common transmitter type in
DWDM systems. To stabilize the lasing wavelength, a diffraction grating is etched close to the
p-n junction of the diode. This grating acts like an optical filter, causing only a single
wavelength to be fed back to the gain region and lase. Thus at least one facet of a DFB is
anti-reflection coated. The DFB laser has a stable wavelength that is set during manufacturing
by the pitch of the grating, and can only be tuned slightly with temperature. Such lasers are the
workhorse of demanding optical communication.
e) VCSELs
Vertical cavity surface emitting lasers (VCSELs) have the optical cavity axis along the
direction of current flow rather than perpendicular to the current flow as in conventional laser
diodes (Figure 69). The active region length is very short compared with the lateral
dimensions so that the radiation emerges from the ‘‘surface’’ of the cavity rather than from its
edge, as shown in Figure 69. The reflectors at the ends of the cavity are dielectric mirrors made
from alternating quarter-wave-thick layers of high and low refractive index material.
Such dielectric mirrors provide a high degree of wavelength-selective reflectance at the
required free-space wavelength λ if the thicknesses of the alternating layers d1 and d2 with
refractive indices n1 and n2 are such that n1d1 + n2d2 = (1/2)λ, which then leads to
constructive interference of all partially reflected waves at the interfaces. Because of the high
mirror reflectivities, VCSELs have lower output powers when compared to edge emitting
lasers.
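A short sketch of the quarter-wave condition quoted above follows; the design wavelength and refractive indices are assumed example values.

```python
# Minimal sketch: quarter-wave layer thicknesses for the dielectric mirrors
# described above, d = lambda/(4*n), which satisfy n1*d1 + n2*d2 = lambda/2.
# The wavelength and refractive indices are illustrative assumptions.
def quarter_wave_thickness_nm(wavelength_nm, n):
    return wavelength_nm / (4.0 * n)

wavelength = 850.0          # assumed VCSEL design wavelength, nm
n1, n2 = 3.5, 2.9           # assumed high / low refractive indices
d1 = quarter_wave_thickness_nm(wavelength, n1)
d2 = quarter_wave_thickness_nm(wavelength, n2)
print(f"d1 = {d1:.1f} nm, d2 = {d2:.1f} nm")
print(f"n1*d1 + n2*d2 = {n1*d1 + n2*d2:.1f} nm (= lambda/2 = {wavelength/2:.1f} nm)")
```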
Figure 69: Diagram of a simple VCSEL structure
4.3 Light detectors
4.3.1
Photoresistors
A photoresistor is an electronic component whose resistance decreases with increasing
incident light intensity. It can also be referred to as a light-dependent resistor (LDR), or
photoconductor (Figure 70).
Figure 70: Photoresistor
A photoresistor is made of a high-resistance semiconductor. If light falling on the
device is of high enough frequency, photons absorbed by the semiconductor give bound
electrons enough energy to jump into the conduction band. The resulting free electron (and its
hole partner) conduct electricity, thereby lowering resistance.
Cadmium sulphide (cadmium sulfide, CdS) cells rely on the material's ability to vary
its resistance according to the amount of light striking the cell. The more light that strikes the
cell, the lower the resistance. Although not very accurate, even a simple CdS cell can have a
wide range of resistance, from about 600 ohms in bright light to one or two megohms in
darkness. The cells are also capable of reacting to a broad range of frequencies, including
infrared (IR), visible light, and ultraviolet (UV). They are often found on street lights as
automatic on/off switches. They were once even used in heat-seeking missiles to sense targets.
4.3.2
Photodiodes
A photodiode is a semiconductor diode that functions as a photodetector. Photodiodes
are packaged with either a window or optical fibre connection, in order to let in the light to the
sensitive part of the device. They may also be used without a window to detect vacuum UV or
X-rays.
The photodiode is a p-n junction or p-i-n structure. When light with sufficient photon
energy strikes a semiconductor, photons can be absorbed, resulting in generation of a mobile
electron and electron hole. If the absorption occurs in the junction's depletion region, these
carriers are swept from the junction by the built-in field of the depletion region, producing a
photocurrent.
Photodiodes can be used under zero bias (photovoltaic mode), in which the absorbed light
produces a current in the forward bias direction. This is called the photovoltaic effect, and is
the basis for solar cells - in fact, a solar cell is just a large number of big, cheap photodiodes.
Diodes usually have extremely high resistance when reverse-biased. This resistance is
reduced when light of an appropriate frequency shines on the junction. Hence, a reverse-biased
diode can be used as a detector by monitoring the current running through it. Circuits based
on this effect are more sensitive to light than those based on the photovoltaic effect.
Avalanche photodiodes have a similar structure, but they are operated with much higher
reverse bias. This allows each photo-generated carrier to be multiplied by avalanche
breakdown, resulting in internal gain within the photodiode, which increases the effective
responsivity of the device.
The material used to make a photodiode (Table 5) is critical to defining its properties,
because only photons with sufficient energy to excite an electron across the material's
bandgap will produce significant photocurrents [14].
Material | Wavelength range (nm)
Silicon | 190 – 1100
Germanium | 800 – 1700
Indium gallium arsenide | 800 – 2600
Lead sulfide | <1000 – 3500
Table 5: Materials commonly used to produce photodiodes
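The wavelength ranges in Table 5 follow from the bandgaps of the materials; the sketch below evaluates the long-wavelength cutoff λc = h·c/Eg, using commonly quoted room-temperature bandgap values (for InGaAs, the standard lattice-matched composition) as assumptions.

```python
# Minimal sketch: the long-wavelength cutoff of a photodiode material follows
# from its bandgap, lambda_c = h*c/Eg. Bandgap values are approximate
# room-temperature figures, assumed here for illustration.
H, C, Q = 6.626e-34, 2.998e8, 1.602e-19

def cutoff_nm(bandgap_ev):
    return H * C / (bandgap_ev * Q) * 1e9

for name, eg in [("Si", 1.12), ("Ge", 0.66), ("InGaAs", 0.75)]:
    print(f"{name:7s} Eg = {eg:.2f} eV -> cutoff ~ {cutoff_nm(eg):.0f} nm")
```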
Critical performance parameters of a photodiode include:
• Responsivity - The ratio of generated photocurrent to incident light power, typically
expressed in A/W when used in photoconductive mode. The responsivity may also
be expressed as quantum efficiency, the ratio of the number of photogenerated
carriers to incident photons, which is a unitless quantity (a numerical sketch follows
this list).
• Dark current - The current through the photodiode in the absence of any input
optical signal, when it is operated in photoconductive mode. The dark current
includes photocurrent generated by background radiation and the saturation current
of the semiconductor junction. Dark current must be accounted for by calibration if a
photodiode is used to make an accurate optical power measurement, and it is also a
source of noise when a photodiode is used in an optical communication system.
• Noise-equivalent power (NEP) - The minimum input optical power needed to generate
a photocurrent equal to the rms noise current in a 1 hertz bandwidth. The related
characteristic detectivity (D) is the inverse of NEP, D = 1/NEP, and the specific
detectivity (D*) is the detectivity normalized to the area A of the photodetector,
D* = D·√A. The NEP is roughly the minimum detectable input power of a
photodiode.
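The relation between the two forms of responsivity mentioned above can be illustrated with a minimal sketch; the quantum efficiency value is an assumption for illustration.

```python
# Minimal sketch: responsivity in A/W from quantum efficiency,
# R = eta * q * lambda / (h * c). The efficiency value is an assumption.
H, C, Q = 6.626e-34, 2.998e8, 1.602e-19

def responsivity_a_per_w(wavelength_nm, quantum_efficiency):
    return quantum_efficiency * Q * (wavelength_nm * 1e-9) / (H * C)

print(f"R(850 nm, eta=0.8)  = {responsivity_a_per_w(850, 0.8):.2f} A/W")
print(f"R(1550 nm, eta=0.8) = {responsivity_a_per_w(1550, 0.8):.2f} A/W")
```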
4.3.2.1 Types of photodiodes
Planar diffused silicon photodiodes are simply P-N junction diodes. A P-N junction
can be formed by diffusing either a P-type impurity (anode), such as boron, into an N-type
bulk silicon wafer, or an N-type impurity, such as phosphorus, into a P-type bulk silicon
wafer. The diffused area defines the photodiode active area. To form an ohmic contact,
another impurity diffusion into the backside of the wafer is necessary; the impurity is N-type
for a P-type active area and P-type for an N-type active area. The contact pads are deposited
on defined areas of the front active area, and on the backside, completely covering the device.
The active area is then passivated with an antireflection coating to reduce the reflection of
light at a specific predefined wavelength. The non-active area on the top is covered with a
thick layer of silicon oxide. By controlling the thickness of the bulk substrate, the speed and
responsivity of the photodiode can be controlled. Note that photodiodes, when biased,
must be operated in the reverse bias mode, i.e. with a negative voltage applied to the anode
and a positive voltage to the cathode.
Figure 71: Planar diffused silicon photodiodes
The responsivity of a silicon photodiode is a measure of the sensitivity to light, and it is
defined as the ratio of the photocurrent IP to the incident light power P at a given wavelength:
Rλ = IP / P   [A/W]    (32)
In other words, it is a measure of the effectiveness of the conversion of the light power
into electrical current. It varies with the wavelength of the incident light (Figure 72) as well as
applied reverse bias and temperature.
Figure 72: Typical spectral responsivity of several different types of planar diffused
photodiodes
The current-voltage characteristic of a photodiode (Figure 73) with no incident light is
similar to a rectifying diode. When the photodiode is forward biased, there is an exponential
increase in the current. When a reverse bias is applied, a small reverse saturation current
appears. It is related to the dark current as:

ID = ISAT · [exp(q·VA / (kB·T)) - 1]    (33)

where
ID ... the photodiode dark current,
ISAT ... the reverse saturation current,
q ... the electron charge,
VA ... the applied bias voltage,
kB ... the Boltzmann constant, 1,38.10-23 J.K-1,
T ... the absolute temperature (273 K = 0 °C).
Figure 73: Current – voltage characteristic of photodiode
From equation (33), three different states can be defined:
1. V = 0: the current becomes the reverse saturation current.
2. V = +V: the current increases exponentially. This state is also known as forward bias mode.
3. V = -V: when a reverse bias is applied to the photodiode, the current behaves as shown in Figure 73.
Illuminating the photodiode with optical radiation shifts the I-V curve by the amount of
photocurrent (IP). Thus:

ITOTAL = ISAT · [exp(q·VA / (kB·T)) - 1] - IP    (34)
where IP is defined as the photocurrent in equation (32).
As the applied reverse bias increases, there is a sharp increase in the photodiode current.
The applied reverse bias at this point is referred to as the breakdown voltage. This is the
maximum applied reverse bias below which the photodiode should be operated (also known
as the maximum reverse voltage). The breakdown voltage varies from one photodiode to another
and is usually measured, for small active areas, at a photodiode current of 10 µA.
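Equations (33) and (34) can be evaluated directly; in the following sketch the saturation current and photocurrent are illustrative assumptions.

```python
# Minimal sketch of equations (33) and (34): dark and illuminated photodiode
# current. The saturation current and photocurrent values are assumed.
import math

Q = 1.602e-19        # electron charge, C
KB = 1.38e-23        # Boltzmann constant, J/K

def diode_current(v_a, i_sat=1e-9, i_p=0.0, temp_k=300.0):
    """I = I_SAT*(exp(q*V/(kB*T)) - 1) - I_P  (I_P = 0 gives the dark curve)."""
    return i_sat * (math.exp(Q * v_a / (KB * temp_k)) - 1.0) - i_p

for v in (-2.0, 0.0, 0.3):
    print(f"V = {v:+.1f} V: dark {diode_current(v):.3e} A, "
          f"illuminated {diode_current(v, i_p=1e-6):.3e} A")
```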
4.3.3
Phototransistor
A phototransistor is in essence nothing more than a bipolar transistor that is encased in
a transparent case so that light can reach the base-collector junction. The phototransistor
works like a photodiode, but with a much higher sensitivity for light, because the electrons
that are generated by photons in the base-collector junction are injected into the base, and this
current is then amplified by the transistor operation. However, a phototransistor has a slower
response time than a photodiode. Note that photodiodes also can provide a similar function,
although with much lower gain (i.e., photodiodes allow much less current to flow than do
phototransistors).
Figure 74: Phototransistor
4.3.4
Position sensitive photo-detectors (PSD) [7]
Silicon photodetectors are commonly used for light power measurements in a wide
range of applications such as bar-code readers, laser printers, medical imaging, spectroscopy
and more. There is another function, however, which utilizes photodetectors as optical
position sensors. These are widely referred to as position sensing detectors, or simply PSD’s.
The applications vary from human eye movement monitoring and 3-D modeling of human
motion to laser, light source and mirror alignment. They are also widely used in ultrafast,
accurate auto-focusing schemes for a variety of optical systems, such as microscopes, machine
tool alignment, vibration analysis and more. The position of a beam can be obtained with
PSD’s to within fractions of a micron. They are divided into two families: segmented PSD’s
and lateral effect PSD’s [14].
Figure 75: PSD: a) structure, b) substitute diagram, c) 2D PSD, d) equivalent electrical
circuit
A photocurrent arising in this way is divided into two parts by the resistive layer made
from the P-type semiconductor (Figure 75). The middle part of the sensor (intrinsic layer) is
manufactured from silicon with a large resistivity. The electric field corresponding to the
depletion region at the junctions PI and NI drives the holes towards the P layer and the
electrons towards the N+ layer. The electron-hole pairs generated at the point of light
incidence act as a current source of intensity Io; the currents for the left electrode IA and the
right electrode IB are then given as

IA = Io · (xL - x) / xL ,    IB = Io · x / xL

where x is the position of the light spot measured from the left electrode and xL is the length
of the resistive layer. These relations are valid for a homogeneous distribution of the
resistivity of the layer P. The resistors RX and RL - RX in the equivalent diagram then depend
linearly on the position of the light beam trace [5], so that

IB / IA = RX / (RL - RX)

where RX is the resistance between the left electrode and the point of incidence.
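A minimal sketch of how the light-spot position is recovered from the two electrode currents, following the current-division relations above.

```python
# Minimal sketch: recovering the light-spot position of a 1-D lateral-effect
# PSD from the two electrode currents. x is measured from the left electrode,
# length_mm is the active length of the resistive layer.
def psd_position(i_a, i_b, length_mm):
    """Position of the light spot from the two photocurrents."""
    return length_mm * i_b / (i_a + i_b)

def psd_position_normalized(i_a, i_b):
    """Intensity-independent normalized position in the range -1 ... +1."""
    return (i_b - i_a) / (i_a + i_b)

print(psd_position(i_a=3e-6, i_b=1e-6, length_mm=10.0))   # 2.5 mm from the left electrode
print(psd_position_normalized(3e-6, 1e-6))                # -0.5 (spot towards the left electrode)
```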
4.3.4.1 Segmented PSD´s
Segmented PSD’s are common-substrate photodiodes divided into either two or four
segments (for one- or two-dimensional measurements, respectively), separated by a gap or
dead region. A symmetrical optical beam positioned at the center generates equal photocurrents
in all segments. The relative position is obtained by simply measuring the output
current of each segment. They offer position resolution better than 0,1 µm and higher accuracy
than lateral effect PSD’s due to the superior responsivity match between the elements.
Since the position resolution is not dependent on the S/N of the system, as it is in lateral effect
PSD’s, very low light level detection is possible. They exhibit excellent stability over time
and temperature and the fast response times necessary for pulsed applications. They are,
however, subject to certain limitations: the light spot has to overlap all segments at all times
and it cannot be smaller than the gap between the segments. It is important to have a uniform
intensity distribution of the light spot for correct measurements. They are excellent devices
for applications like nulling and beam centering.
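For a four-segment (quadrant) device, the beam-centering signals are formed from sums and differences of the segment currents; the quadrant labelling in the sketch below is an assumed convention, not specified in the text.

```python
# Minimal sketch: beam-centering error signals from a four-quadrant segmented
# PSD. Quadrant labelling (A upper-left, B upper-right, C lower-left,
# D lower-right) is an assumed convention.
def quadrant_errors(i_a, i_b, i_c, i_d):
    total = i_a + i_b + i_c + i_d
    x_err = ((i_b + i_d) - (i_a + i_c)) / total   # + means spot displaced to the right
    y_err = ((i_a + i_b) - (i_c + i_d)) / total   # + means spot displaced upwards
    return x_err, y_err

print(quadrant_errors(1.0, 1.0, 1.0, 1.0))   # centred beam -> (0.0, 0.0)
print(quadrant_errors(0.8, 1.2, 0.8, 1.2))   # spot shifted to the right
```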
4.3.4.2 Lateral effect PSD’s
Lateral effect PSD’s, are continuous single element planar diffused photodiodes with no
gaps or dead areas. These types of PSD’s provide direct readout of a light spot displacement
across the entire active area. This is achieved by providing an analog output directly
proportional to both the position and intensity of a light spot present on the detector active
area. A light spot present on the active area will generate a photocurrent, which flows from
the point of incidence through the resistive layer to the contacts. This photocurrent is
inversely proportional to the resistance between the incident light spot and the contact. When
the input light spot is exactly at the device center, equal current signals are generated. By
moving the light spot over the active area, the amount of current generated at the contacts will
determine the exact light spot position at each instant of time. These electrical signals are
proportionately related to the light spot position from the center.
The main advantage of lateral-effect diodes is their wide dynamic range. They can
measure the light spot position all the way to the edge of the sensor. They are also
independent of the light spot profile and intensity distribution that affects the position reading
in segmented diodes. The input light beam may be any size and shape, since the position
of the centroid of the light spot is indicated and provides electrical output signals proportional
to the displacement from the center. The devices can resolve positions better than 0.5 µm;
the resolution depends on the signal-to-noise ratio of the detector and circuit. UDT Sensors
manufactures two types of lateral effect PSD’s: duo-lateral and tetra-lateral structures. Both
structures are available in one- and two-dimensional configurations.
In duo-lateral PSD’s there are two resistive layers, one at the top and the other at the
bottom of the photodiode. The photocurrent is divided into two parts in each layer. This
structure type can resolve light spot movements of less than 0.5 µm and has a very small
position detection error almost all the way to the edge of the active area. It also exhibits
excellent position linearity over the entire active area.
Tetra-lateral PSD’s have a single resistive layer, in which the photocurrent is
divided into two or four parts for one- or two-dimensional sensing, respectively. These devices
exhibit more position non-linearity at distances far away from the center, as well as larger
position detection errors, compared to duo-lateral types.
4.3.5
Charged coupled image sensors (CCD)
A charge-coupled device (CCD) is an image sensor, consisting of an integrated circuit
containing an array of linked, or coupled, capacitors sensitive to light. The capacitor
perspective is reflective of the history of the development of the CCD and also is indicative of
its general mode of operation, with respect to readout, but attempts aimed at optimization of
present CCD designs and structures tend towards consideration of the photodiode as the
fundamental collecting unit of the CCD. Under the control of an external circuit, each
capacitor can transfer its electric charge to one or other of its neighbours.
The CCD was invented in 1969 by Willard Boyle and George Smith at AT&T Bell
Labs. The lab was working on the Picture-phone and on the development of semiconductor
bubble memory.
Figure 76: Specially developed CCD used for ultraviolet imaging
The photoactive region of the CCD is, generally, an epitaxial layer of silicon, doped p+
(Boron) and grown upon the substrate material. In buried channel devices, the type of design
utilized in most modern CCDs, the surface of the silicon is ion implanted with phosphorus,
giving it an n-doped region. This region defines the channel in which the photogenerated
charge packets will travel. The gate oxide, i.e. the capacitor dielectric, is grown on top of this
substrate and the polysilicon gates are deposited by CVD, patterned with photolithography,
and etched in such a way that the separately phased gates lie perpendicular to the channels.
The channels are further defined by utilization of the LOCOS process to produce the channel
stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets
in one column from those in another. These channel stops are produced before the polysilicon
gates are, as the LOCOS process utilizes a high temperature step that would destroy the gate
material. Channel stops often have a p+ doped region underlying them, providing a further
barrier to the electrons in the charge packets (this discussion of the physics of CCD devices
assumes an electron transfer device, though hole transfer is possible).
One should note that the clocking of the gates, alternately high and low, will forward
and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial
layer (p-doped). This will cause the CCD to deplete, near the p-n junction and the gate oxide,
and will collect and move the charge packets beneath the channels of the device.
4.3.5.1 Architectures of CCD image sensors
The CCD image sensors can be implemented in several different architectures. The
most common are full-frame, frame-transfer and interline. The distinguishing characteristic of
each of these architectures is their approach to the problem of shuttering.
In a full-frame device, all of the image area is active and there is no electronic shutter. A
mechanical shutter must be added to this type of sensor or the image will smear as the device
is clocked or read out.
With a frame transfer CCD, half of the silicon area is covered by an opaque mask
(typically aluminum). The image can be quickly transferred from the image area to the opaque
area or storage region with acceptable smear of a few percent. That image can then be read
out slowly from the storage region while a new image is integrating or exposing in the active
area. Frame-transfer devices typically do not require a mechanical shutter and were a common
architecture for early solid-state broadcast cameras. The downside to the frame-transfer
architecture is that it requires twice the silicon real estate of an equivalent full-frame device;
hence, it costs roughly twice as much.
The interline architecture extends this concept one step further and masks every
other column of the image sensor for storage. In this device, only one pixel shift has to occur
to transfer from the image area to the storage area; thus, shutter times can be less than a
microsecond and smear is essentially eliminated. The advantage is not free, however, as the
imaging area is now covered by opaque strips, dropping the fill factor to approximately 50% and the
effective quantum efficiency by an equivalent amount. Modern designs have addressed this
deleterious characteristic by adding microlenses on the surface of the device to direct light
away from the opaque regions and onto the active area. Microlenses can bring the fill factor
back up to 90% or more, depending on pixel size and the overall system's optical design.
The choice of architecture comes down to one of utility. If the application cannot
tolerate an expensive, failure prone, power hungry mechanical shutter, then an interline device
is the right choice. Consumer snap-shot cameras have used interline devices. On the other
hand, for those applications that require the best possible light collection and issues of money,
power and time are less important, the full-frame device will be the right choice. Astronomers
tend to prefer full-frame devices. The frame-transfer falls in between and was a common
choice before the fill-factor issue of interline devices was addressed. Today, the choice of
frame-transfer is usually made when an interline architecture is not available, such as in a
back-illuminated device.
5 Fibre optic sensors
Fibre optic sensors [2], [3], [5] represent a technology base that can be applied to a
multitude of sensing applications. The following are some characteristic advantages of fiber
optics that make their use especially attractive for sensors:
• Nonelectrical
• Explosion-proof
• Often do not require contact
• Removable
• Small size and weight
• Immune to radio frequency interference and electro-magnetic interference
• High accuracy
• Can be interfaced with data communication systems
• Potentially resistant to ionizing radiation
Most physical properties can be sensed optically with fibers. The following are just some of the
phenomena that can be sensed:
• Light intensity
• Position (displacement)
• Temperature
• Pressure
• Rotation
• Sound
• Strain
• Magnetic field
• Electric field
• Radiation
• Flow
• Liquid level
• Chemical analysis
Fiber optic sensors can be divided into three basic categories: phase-modulated,
intensity-modulated, and wavelength-modulated sensors. Intensity-modulated sensors
generally are associated with displacement or some other physical perturbation that interacts
with the fiber or a mechanical transducer attached to the fiber. The perturbation causes a change
in received light intensity, which is a function of the phenomenon being measured.
Phase-modulated sensors compare the phase of light in a sensing fiber to that in a reference fiber
in a device known as an interferometer. The phase difference can be measured with extreme
sensitivity. Phase-modulated sensors are much more accurate than intensity-modulated
sensors and can be used over a much larger dynamic range. However, they are often much
more expensive. In the third category, wavelength-modulated sensors experience a wavelength
change associated with displacement, temperature, or the presence of chemical species that
cause fluorescence.
Figure 77 summarizes the common elements of a sensing system. Generally, every system
has a light source, an interface to an optical fiber, a modulator that alters the light in a
manner proportional to the perturbing environment, and a photodetector.
Figure 77: Components common to all fiber optic sensors
5.1 Intensity-modulated sensors
Intensity-modulated sensors detect the amount of light, which is a function of the
perturbing environment (Figure 78). The light loss can be associated with transmission,
reflection, microbending or other phenomena such as absorption, scattering or fluorescence,
which can be incorporated in the fiber or in a reflective or transmissive target.
Intensity-modulated sensors normally require more light to function than do phase-modulated
sensors. Transmission, reflection and microbending sensor concepts are the most widely used.
Figure 78: Intensity sensor
The general concepts associated with intensity modulation include transmission,
reflection and microbending. However, several other mechanisms that can be used
independently (intrinsically) or in conjunction with the three primary concepts include
absorption, scattering, fluorescence, polarization and optical gratings. While
intensity-modulated sensors are analog in nature, they have significant usage in digital (on/off)
applications for switches and counters.
5.1.1
Transmissive concept
The transmissive sensor concept is normally associated with the interruption of a light
beam in a switch configuration. However, this approach can also provide a good analog sensor. A
more sensitive transmissive approach employs radial displacement, as shown in Figure 79. The
sensor shows no transmission if the probes are displaced by a distance equal to one probe
diameter. Approximately the first 20% of the displacement gives a linear output.
Figure 79: Transmissive fiber optic sensors
A modification of the transmissive concept is referred to as frustrated total internal
reflection (Figure 80). The two opposing probes have their fibers polished at an angle to the
fiber axis, which produces total internal reflection for all propagating modes. As the fiber
ends come into close proximity to one another, energy is coupled across the gap, and the
intensity of the light coupled into the receiving fiber depends on the separation of the fiber
ends. This approach provides the highest sensitivity for a transmissive sensor.
Figure 80: Frustrated Total internal reflection configuration
5.1.2
Reflective concept
The reflective concept is especially attractive for broad sensor use due to its accuracy,
simplicity and potential low cost. The concept is shown in Figure 81. The sensor is comprised
of two bundles of fibers, or a pair of single fibers. One bundle of fibers transmits light to a
reflecting target and the other bundle traps reflected light and transmits it to a detector. The
intensity of the detected light depends on how far the reflecting target is from the fiber optic
probe.
Figure 81: Reflective fiber optic sensor
Figure 82: Reflective probe
Figure 84 shows the detected light intensity versus distance from the target. The linear
front slope allows a displacement to be measured with a potential accuracy of one millionth of an
inch. The accuracy depends on the probe configuration (Figure 83).
For applications that require a greater dynamic range than is possible with any of the fiber
configurations, a lens system can be added. Using a lens system in conjunction with a fiber
optic probe, the dynamic range can be expanded from 0,5 cm to 12,5 cm or more.
Figure 83: Probe Configuration
Figure 84: Reflective fiber optic sensors – output versus distance
5.1.3
Microbending concept
Another attractive sensor concept is that of microbending. If a fiber is bent, small
amounts of light are lost through the wall of the fiber. If a transducer bends the fiber due to a
change in some physical property, as shown in Figure 85, then the amount of received light is
related to the value of this physical property.
Like reflective sensors, they are potentially low cost and accurate. It is also important to
note that microbending sensors have a closed optical path and therefore are immune to dirty
environments.
Figure 85: Microbending sensor
5.1.4
Intrinsic concept
Intrinsic sensors change the intensity of the returning light from the sensor, but unlike
the transmissive, reflective and microbending concepts, no movement is required. Intrinsic
sensors use the chemistry of the core glass (or of the cladding glass or the plastic coatings) to achieve
the sensing activity. The prime mechanisms are absorption, scattering, fluorescence, and changes
in refractive index or polarization.
For absorption, doping the core glass results in absorption spectra. Generally, some
peaks are temperature sensitive, while others are not.
Fluorescence can be achieved by doping the glass with various additives. The sensor
can function in two modes. A light source can be used to stimulate fluorescence, which is
affected by temperature, or the fiber can be stimulated by outside radiation and the
fluorescence detected, which is a measure of the level of incident radiation.
Refractive index changes can vary the amount of received light by effectively changing
the numerical aperture of the fiber. Many polymeric coating materials can be made to have
index changes with temperature, thus providing a temperature sensor.
Lastly, doping the glass with various rare earth oxides can make the fiber sensitive to
magnetic fields. Such fibers in the presence of magnetic fields rotate the polarized light beam
in the fiber, causing a partial extinction and a correlation of light intensity with magnetic field.
This concept is referred to as a Faraday rotation.
5.1.5
Transmission and reflection with other optic effects
Transmissive sensors can enhance their sensitivity further by adding an absorption grating
to the fiber face, as shown in Figure 86. To go from maximum to minimum intensity now requires
movement of only one grating spacing instead of one probe diameter. This can increase
sensitivity by a factor of five or more.
Figure 86: Radial displacement sensor with absorption gratings
The grating on the fiber is designed to increase sensitivity for radial displacement. The
transmissive sensor can also be used for rotational sensing. Consider two cases. In case 1, we have
two disks, one fixed and the other able to rotate. Each disk has a grating such that the gratings
of the two disks can lie in optical line and allow maximum light, or the disks can be
aligned so that there is no light, or any position in between. Depending upon the width of the
gratings, the sensor intensity changes from a maximum to zero over a rotational distance of one
grating spacing. The analog signal is linear with the degree of rotation.
In case 2, the gratings are replaced with a fixed and a rotating polarizing lens. The
transmitted intensity through the polarizing lenses is approximately proportional to cos2Θ,
where Θ is the relative rotation. The linear range for this sensor type is limited.
5.2 Phase-modulated sensors
Generally, the phase-modulated sensor employs a coherent laser light source and two
single-mode fibers. The light is split and injected into each fiber. If the environment perturbs
one fiber relative to the other, a phase shift occurs that can be detected very precisely. The
phase shift is detected by an interferometer. There are four basic interferometric
configurations (chapter 3.4.2):
1. the Mach-Zehnder,
2. the Michelson,
3. the Fabry-Perot,
4. the Sagnac.
Phase-modulated sensors use interferometric techniques to detect pressure, rotation and
magnetic field, with the first two applications being the most widely developed.
5.3 Wavelength-modulated sensors
Wavelength-modulated sensors use changes in wavelength to detect the sensing function.
Fluorescence and phosphorescence emit a characteristic wavelength of light if perturbed in
the proper way. For instance, a dye in the presence of an analyte can give off a characteristic
excitation spectrum.
Wavelength-modulated sensors can be based on fluorescence (for chemical sensing).
However, a broad-based concept using wavelength modulation is accomplished with Bragg
gratings. The Bragg grating has a resonance condition at a specific wavelength; light at that
wavelength is reflected. However, a change in strain or temperature perturbs the grating and
causes the reflected light to have a wavelength shift, which is a direct measure of the change
in strain and/or temperature.
5.3.1
Bragg grating concept
A fiber Bragg grating is a periodic or aperiodic perturbation of the effective refractive
index in the core of an optical fiber. Typically, the perturbation is approximately periodic over
a certain length of e.g. a few millimeters or centimeters, and the period is of the order of
hundreds of nanometers. This leads to the reflection of light (propagating along the fiber) in a
narrow range of wavelengths, for which a Bragg condition is satisfied. This basically means
that the wavenumber of the grating matches the difference of the wavenumbers of the incident
and reflected waves. (In other words, the complex amplitudes corresponding to reflected field
contributions from different parts of the grating are all in phase, so that they add up
constructively; this is a kind of phase matching.) Other wavelengths are hardly affected by
the Bragg grating, except for some side lobes which frequently occur in the reflection
spectrum (but can be suppressed by apodization). Around the Bragg wavelength, even a weak
index modulation (with an amplitude of e.g. 10-4) is sufficient to achieve nearly total reflection
if the grating is sufficiently long (e.g. a few millimeters).
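A minimal numerical sketch of the Bragg condition λB = 2·neff·Λ and the resulting wavelength shift follows; the strain and temperature coefficients are typical values for silica fibre around 1550 nm and should be read as assumptions.

```python
# Minimal sketch: Bragg wavelength lambda_B = 2 * n_eff * Lambda and the
# first-order shift with strain and temperature. The sensitivity coefficients
# (~1.2 pm per microstrain, ~10 pm per K near 1550 nm) are assumed typical values.
def bragg_wavelength_nm(n_eff, period_nm):
    return 2.0 * n_eff * period_nm

def bragg_shift_pm(strain_microstrain=0.0, delta_t_k=0.0,
                   k_eps_pm_per_ue=1.2, k_t_pm_per_k=10.0):
    """Wavelength shift in picometres for a given strain and temperature change."""
    return k_eps_pm_per_ue * strain_microstrain + k_t_pm_per_k * delta_t_k

print(f"lambda_B ~ {bragg_wavelength_nm(1.447, 535.6):.1f} nm")
print(f"shift for 100 ue strain and +5 K: {bragg_shift_pm(100.0, 5.0):.0f} pm")
```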
Figure 87: Schematic structure of a fiber Bragg grating
The reflection bandwidth of a fiber grating, which is typically well below 1 nm,
depends on both the length and the strength of the refractive index modulation. The narrowest
bandwidth values, as are desirable e.g. for the construction of single-frequency fiber lasers or
for certain optical filters, are obtained for long gratings with weak index modulation. Large
bandwidths may be achieved with short and strong gratings, but also with aperiodic designs
(see below).
As the wavelength of maximum reflectivity depends not only on the Bragg grating
period but also on temperature and mechanical strain, Bragg gratings can be used in
temperature and strain sensors. Transverse stress, as generated e.g. by squeezing a fiber
grating between two flat plates, induces birefringence and thus polarization-dependent Bragg
wavelengths.
5.4 Application of fiber optics sensors
5.4.1
Displacement sensors
Fiber optic displacement sensors will play an increasingly larger role in a broad range of
industrial, military and medical applications. Two particular advantages are the potential for
extremely accurate non-contact sensing and the possibility of incorporating the optical sensors
permanently in composite structures.
The basic fiber optic, intensity-modulated sensing concepts as well as interferometric
sensors and Bragg grating, can measure displacement. These concepts, however, have
provided the primary approach to displacement sensors: reflective and microbending sensors.
Figure 88: Typical application – dual probe
80
FEKT Vysokého učení technického v Brně
Figure 89: Typical applications – reflective fiber optic sensor
5.4.2
Temperature sensors
Several considerations drive the need for fiber optic temperature sensors [6]. Sensors
are needed to operate in strong electromagnetic fields. Sensors with metallic leads will
experience eddy currents in such environments, which in turn causes inaccuracy in the
temperature measurement. Fiber optic temperature sensors that do not use metallic
transducers allow for minimized heat dissipation by conduction and provide quick response.
Several fiber optic sensing concepts have been applied to temperature measurement:
reflective, microbending and intrinsic. Phase-modulated concepts and wavelength-modulated
concepts have been applied to temperature sensing.
Figure 90: Reflective fiber optic temperature sensor using a bimetallic transducer
Figure 91: Temperature sensor with semiconductor layer
Figure 92: Temperature sensor – temperature changes of the modified cladding
5.4.3
Pressure sensors
In several concepts for measuring pressure, the sensor actually measures displacement,
such as in devices that incorporate diaphragms or Bourdon tubes. Therefore, the fiber-optic,
intensity-modulated sensing approaches are applicable to pressure measurement. These
approaches include transmission, reflection and microbending.
Transmissive fiber optic pressure sensors can be divided into two basic categories.
In the first category, the transmitting and receiving fiber legs remain fixed, and the
modulation occurs by an object partially obscuring the light path. In the second category, the
fibers can move relative to each other to provide the modulation.
Figure 93: Transmissive fiber optic pressure sensor using a shutter to modulate the intensity
Figure 94: Transmissive fiber optic pressure sensor using a moving grating to modulate the
intensity
Figure 95: Reflective fiber optic pressure sensor using a diaphragm for modulation
Figure 96: Resonant fiber optic pressure sensor
5.4.4
Flow sensors
Flow measurement is a critical process control parameter in a wide range of applications
such as engine control, power generation and industrial processes. Often the environment is
difficult. The sensor can be subjected to high electrical noise, explosive environments,
relatively high temperature and areas of difficult access. Fiber optic sensors have the ability to
perform under these environmental conditions and to be the basis for several sensing
approaches.
Four basic sensing concepts have been used for flow detection using fiber optics [3]:
1. Rotational frequency monitoring of a paddle wheel or turbine in the flow.
2. Differential pressure measurement across an orifice.
3. Frequency monitoring of a vortex-shedding device.
4. Laser Doppler velocimetry.
5.4.5
Level sensors
Level sensors are categorized as switches for high/low level and leak detection or as
magnitude sensors for actual liquid level. Fiber optic devices have found more applications in
the former category.
Liquid level is one of the prime process-control parameters, especially in the
petrochemical and chemical industries. The explosive nature of many of the processes makes
fiber optics especially desirable.
Fiber optic sensors that use light interaction with the media for which the level is being
measured work best in relatively clean and clear liquids. Dirty or somewhat opaque liquids
such as crude oil and paints tend to foul the optics and blind the sensors, as do solids in
powder form. Level measurement for these types of materials is best achieved by sensors that
are used in pressure-related level measurements.
The concepts that can be used in conjunction with fiber optic level sensors include [3]:
• Sight glasses
• Force
• Pressure
• Reflective surface
• Refractive index change
Figure 97: Refractive index change liquid level sensor
Figure 98: Refractive index change liquid level sensor – detail
5.4.6
Magnetic and electric field sensors
The monitoring of current (magnetic field) and voltage (electric field) are critical for
power utilities and in other applications where high electrical power is used. Fiber optic
sensor technology is especially attractive since it is immune to electromagnetic interference
and potentially represents a lower cost alternative.
Several sensing approaches have been conceived using fiber optics for both magnetic
and electric field sensing. Various optical and optomechanical effects such as Faraday
rotation (Figure 99 and Figure 100), the Kerr effect, the Pockels effect (Figure 103) and
magnetostriction (Figure 101 and Figure 102) may be used for detection.
Figure 99: Magnetic fiber sensor
Figure 100: Basic configuration of magnetic fiber sensors
Figure 101: Intensity of magnetic field sensor with magnetoresistive band
Figure 102: Intensity of magnetic field sensor using the radial deformation of a magnetostrictive cylinder
Figure 103: Fiber optic transmissive electric field sensor using a Pockels cell
5.4.7
Chemical analysis
Fiber optic techniques for chemical analysis have several distinct advantages. Analysis
can often be done in situ in real time. The sensing techniques generally do not disturb the
process. The sample size can be extremely small and the sensing locations can be in remote
areas that are normally difficult to access. Potential disadvantages include sensitivity to
ambient light, relatively slow response time due to the required reaction with various reagents,
and shortened lifetime if high incident radiation is used to enhance sensitivity.
Four approaches can be used for qualitative and quantitative chemical analysis. These
techniques are [3]:
1. Fluorescence
2. Scattering
3. Absorption
4. Refractive index change
5.4.8
Rotation rate sensors (gyroscopes)
The major advantages of a fiber optic gyroscope over mechanical devices include:
• No moving parts
• No warm-up time
• Unlimited shelf life
• Minimal maintenance
• Large dynamic range
• Small size
All optical rotation sensors are based on the Sagnac effect, which is described in chapter
3.4.2.4.
The Sagnac effect uses an interferometric technique for rotation rate detection (Figure 104).
Figure 104:
Sagnac interferometer
The initial light beam is split into two beams that travel along a single fiber in a coiled
configuration. One path is clockwise and the other is counterclockwise. When the fiber ring
rotates in a clockwise direction, the light propagation path in the clockwise direction is longer.
This is because the starting point has moved due to the rotation and the light beam must
travel a greater distance to reach it. Conversely, the counterclockwise light beam travels a
shorter distance. The path length difference results in a phase difference that affects the output
of the Sagnac interferometer in a manner related to the rotation rate.
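The rotation-induced phase difference of a fibre coil is commonly written as Δφ = 2πLDΩ/(λc); the sketch below evaluates it for an assumed coil geometry.

```python
# Minimal sketch: Sagnac phase shift of a fibre-optic gyroscope,
# delta_phi = 2*pi*L*D*Omega / (lambda*c), for a coil of fibre length L and
# diameter D. The coil dimensions and wavelength are illustrative assumptions.
import math

C = 2.998e8   # speed of light in vacuum, m/s

def sagnac_phase_rad(omega_rad_s, fibre_length_m, coil_diameter_m, wavelength_m):
    return 2.0 * math.pi * fibre_length_m * coil_diameter_m * omega_rad_s / (wavelength_m * C)

earth_rate = 7.292e-5                     # rad/s, Earth's rotation as a test input
phi = sagnac_phase_rad(earth_rate, fibre_length_m=1000.0,
                       coil_diameter_m=0.1, wavelength_m=1.55e-6)
print(f"phase shift for Earth rotation: {phi*1e6:.1f} microrad")
```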
The fiber optic gyroscope using a modulator is shown in Figure 105.
Figure 105:
Analog fiber optic gyroscope configuration
6 Optical sensors of position and movement
6.1 Introduction
Optical sensors are one of the few possibilities for contact-less measurement of the
position or movement of an object. Their main features are the elimination of mutual
interference between the object and the sensor, the short response time due to the properties of
modern photoelectronic light detectors, and the flexibility of implementation, suitable for a broad
range of measuring tasks both in the laboratory and in industry. Their application requires
protection of the active optical path from obstacles, extraneous light interference and
vibrations of the sensor body.
6.2 Sensor of position using principle of triangulation
One method for accurately measuring the distance to targets is through the use of laser
triangulation sensors. They are so named because the sensor enclosure, the emitted laser and
the reflected laser light form a triangle.
The laser beam is projected from the instrument and is reflected from a target surface to
a collection lens. This lens is typically located adjacent to the laser emitter. The lens focuses
an image of the spot on a linear array camera (CCD array). The camera views the
measurement range from an angle that varies from 45 to 65 degrees at the center of the
measurement range, depending on the particular model. The position of the spot image on the
pixels of the camera is then processed to determine the distance to the target. The camera
integrates the light falling on it, so longer exposure times allow greater sensitivity to weak
reflections. The beam is viewed from one side so that the apparent location of the spot
changes with the distance to the target [15].
Triangulation devices are ideal for measuring distances of a few inches with high
accuracy. Triangulation devices may be built on any scale, but the accuracy falls off rapidly
with increasing range. The depth of field (minimum to maximum measurable distance) is
typically limited, as triangulation sensors cannot measure ranges that are large relative to their
baseline, the distance between the emitter and the detector.
The exposure and laser power level are typically controlled to optimize the accuracy of
the measurements for the signal strength and environmental light level measured. The range
data may be internally averaged over multiple exposures prior to transmitting if the sample
rate is set appropriately.
The configuration of the sensor depicted in Figure 106 is designed for measurement
of the range of an object (target) from the light source rather than the coordinates of its
position. A light source with a narrow beam strikes the target and the beam is reflected back
toward the detector (PSD sensor). The received low intensity light is focused onto the sensitive
surface of the PSD. The intensity of the received beam depends greatly on the reflective
properties of the target. Diffusive reflectivity in the near infrared spectral range is close to that
in the visible range; hence, the intensity of the light incident on the PSD varies greatly.
Nevertheless, the accuracy of the measurement does not depend on the intensity of the received
light [5].
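A minimal sketch of the triangulation geometry follows, assuming the simple arrangement of Figure 106 with a baseline b between laser and lens and a lens-to-PSD distance f; by similar triangles the range is approximately d = f·b/x. All numbers are illustrative assumptions.

```python
# Minimal sketch (assumed simple geometry): baseline b between laser and
# receiving lens, lens-to-PSD distance f, spot position x on the PSD.
# By similar triangles the target range is approximately d = f*b/x.
def triangulation_range_mm(spot_position_mm, baseline_mm=30.0, lens_to_psd_mm=20.0):
    return lens_to_psd_mm * baseline_mm / spot_position_mm

for x in (2.0, 4.0, 6.0):   # spot positions read from the PSD
    print(f"x = {x:.1f} mm -> range ~ {triangulation_range_mm(x):.0f} mm")
```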
Figure 106:
Triangulation sensor for the distance measurement
The triangulation principle can be used even without PSD elements. If a swept-beam
light source is available, then the configuration in Figure 107 can be implemented. Here the
laser beam is swept periodically by a moving mirror. The signal reflected from the platform
and from the top surface of the measured object is received by the photodiode [5].
Figure 107:
The triangular sensor with swept laser beam
6.3 Incremental sensors of position or displacement
Incremental sensors deliver a defined number of pulses on each rotation (Figure 108).
This pulse number depends on the division of the code disc. If the disc contains 360 segments,
for example, the device delivers 360 pulses per rotation, i.e. one pulse per degree. Small
modern devices provide up to 10000 pulses per rotation; more capable incremental sensors
generate up to 720000 pulses per rotation.
Figure 108:
Principle of incremental sensor devices
Figure 109:
Typical architecture of incremental encoder [5]
The typical set-up for the measurement of linear displacement is depicted in Figure
109. A set of transparent marks is created by photolithography on a glass rule (or, in the
case of angular displacement, on a glass disk). A pattern of marks with the same shape
is located on a grid placed close to the rule. The optical system of the sensor has three
channels, each composed of a light source (usually an LED), lenses, grids and two
photodiodes connected in anti-parallel. Two channels deliver the input signals for the
logic circuits detecting the displacement, and the third channel is used to process the signal
corresponding to the reference mark. In many sensors of this type a linear light source
common to all channels is used instead of three separate LEDs [5].
Figure 110:
The coded disc
In technical applications, three types of incremental sensors are used:
• One channel incremental sensors - These incremental sensors have one impulse
output channel. The pulse generation is independent from the direction of rotation.
Simple forward counters analyze the pulses as digital values. These sensors are not
able to indicate the direction of rotation.
• Two channel incremental sensors - This type of incremental sensor device has two
separate optical capture systems. Both systems are configured in a way that the
pulse signals are electrically shifted at an angle of 90 degrees. Therefore it is
possible to indicate the direction of rotation. Two channel incremental sensor
devices deliver the value and the direction of movement as described later.
• Three channel incremental sensors - Three channel incremental sensors additionally
indicate the zero (start) position by means of an additional optical capture
system that detects a reference marker fixed on the rotating code disc. The
electronic analysis of the sensor signals is therefore referred to a reference signal that defines
the start position. The reference signal produces one pulse per rotation.
It is also possible to detect the edges of both channel signals. If all edges are captured, the count per rotation is multiplied by four, so the resolution of the distance measurement improves accordingly. A digital circuit map with the corresponding signals is shown in Figure 111.
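As an illustration of the two-channel evaluation and of the fourfold edge counting described above, the following sketch decodes a pair of 90°-shifted (quadrature) signals with a conventional state-transition table; the signal values and function names are example choices, not taken from the text.

# Quadrature (x4) decoding of two 90-degree-shifted channel signals A and B.
# Every valid transition of the state (A, B) adds or subtracts one count,
# so all edges of both channels are evaluated (fourfold resolution).
TRANSITIONS = {
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 0), (1, 0)): -1, ((1, 0), (1, 1)): -1,
    ((1, 1), (0, 1)): -1, ((0, 1), (0, 0)): -1,
}

def decode_quadrature(samples):
    # Signed count (position) from a sequence of sampled (A, B) logic levels.
    count, prev = 0, samples[0]
    for state in samples[1:]:
        if state != prev:
            count += TRANSITIONS.get((prev, state), 0)  # ignore invalid jumps
            prev = state
    return count

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]  # one electrical period
print(decode_quadrature(forward))        # +4 counts, forward direction
print(decode_quadrature(forward[::-1]))  # -4 counts, reverse direction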
Incremental sensor devices (Figure 112) are used in position and speed measurement applications. The accuracy depends on the number of segments on the code disc. For speed measurement, incremental sensors are not ideally suited: they deliver the mean value of the speed, not the instantaneous value, and in addition a measurement error is always present, as the following analysis shows.
Figure 111: Digital circuit map with appropriate signals
Figure 112: Incremental sensors
The speed is given as a mean value because the pulses are counted over a time interval Δt:

$\bar{n} = \frac{1}{m} \cdot \frac{\Delta Z}{\Delta t}$

where
n̄ ... mean speed
m ... number of segments around the disc
ΔZ ... counted pulses
Δt ... time unit
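A short numerical illustration of this relation and of the quantization error it implies (the disc size, counting interval and pulse count below are example values, not from the text):

def mean_speed_rps(pulse_count, segments, delta_t):
    # Mean rotational speed in revolutions per second: n = (1/m) * (dZ/dt).
    return pulse_count / (segments * delta_t)

m, dt = 360, 0.1                      # segments on the code disc, counting interval in s
print(mean_speed_rps(905, m, dt))     # about 25.14 rev/s, averaged over dt
# Because the count is an integer, the speed is resolved only in steps of
# 1 / (m * dt) rev/s, here about 0.028 rev/s; the instantaneous value is not seen.
print(1.0 / (m * dt))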
7 List of symbols
c0 ... Velocity of light propagation in vacuum ... m.s-1
µ0 ... Permeability of vacuum ... H.m-1
ε0 ... Permittivity of vacuum ... F.m-1
c ... Velocity of light in a medium ... m.s-1
λ ... Wavelength ... m
F ... Frequency of light ... Hz
E ... Intensity of electrical field ... V.m-1
B ... Magnetic induction ... T
t ... Time ... s
n ... Refractive index of the medium ... -
Θr ... Angle of reflection ... °
Θi ... Angle of incidence ... °
v ... Phase velocity ... m.s-1
R ... Reflection coefficient ... -
T ... Transmission coefficient ... -
A ... Attenuation ... dB.km-1, dB.m-1
Pi ... Input power ... W
Po ... Output power ... W
R ... Fresnel reflection loss ... -
α ... Attenuation coefficient ... -
L ... Length ... m
NA ... Numerical aperture ... -
I ... Intensity ... W.m-2
Io ... Initial intensity ... W.m-2
f ... Focal length ... m
do ... Object distance ... m
di ... Image distance ... m
hi ... Image height ... m
ho ... Object height ... m
D ... Detectivity ... W-1
D* ... Specific detectivity ... m.W-1
A ... Area ... m2
Ip ... Photocurrent ... A
Rλ ... Responsivity ... A.W-1
R ... Resistance ... Ω
E ... Energy of the photon ... J
h ... Planck's constant ... J.s
a ... Distance ... m
θ ... Angle for destructive interference ... °
ρ ... Radial parameter
ψ ... Wave function of the guided light
k ... Bulk medium wave vector
β ... Wave vector along the fiber axis
a ... Azimuthal angle ... °
M0 ... Number of propagating modes without bending ... -
M(R) ... Number of propagating modes with bending ... -
DF ... Fiber diameter ... m
ID ... Photodiode dark current ... A
ISAT ... Reverse saturation current ... A
q ... Electron charge ... C
VA ... Applied bias voltage ... V
T ... Absolute temperature ... K
kB ... Boltzmann constant ... J.K-1
n̄ ... Mean speed ... m.s-1
m ... Number of segments around the disc ... -
ΔZ ... Counted pulses ... -
Δt ... Time unit ... s
8 Bibliography
[1] SALEH, E. A., TEICH, M. C.: Fundamentals of Photonics (in Czech). Matfyzpress, Praha, 1990-1996, ISBN 80-85863-00-6.
[2] TURÁN, J., PETRÍK, S.: Fiber Optics Sensors (in Slovak). Alfa, Bratislava, 1990, ISBN 80-05-00655-1.
[3] KROHN, D. A.: Fiber Optic Sensors – Fundamentals and Applications. Third edition, ISA, 2000, ISBN 1-55617-714-3.
[4] ĎAĎO, S., KREIDL, M.: Sensors and Measuring Circuits (in Czech). ČVUT, Praha, 1996, ISBN 80-01-01500-9.
[5] ĎAĎO, S., FISCHER, J.: Master Book of Sensors – Master Module 2: Optical Sensors. ČVUT, Praha, 2003, ISBN 80-7300-129-2.
[6] INCI, M. N., YOSHINO, T. T.: Fiber optic wavelength modulation sensor for absolute temperature measurements. Fiber Optic Sensor Technology and Applications, vol. 3860, pp. 368-374, SPIE, 1999.
[7] MAKYNEN, A.: Position-Sensitive Devices and Sensor Systems for Optical Tracking and Displacement Sensing. University of Oulu, Oulu, 2000, ISBN 951-42-5780-4.
[8] JENNY, R.: Fundamentals of Fiber Optics – An Introduction for Beginners. Volpi, New York, 2000.
[9] Intelligent Opto-Sensors. Data book. Texas Instruments Inc., 1995. www.ti.com/dlp.
Websites
[10] http://hyperphysics.phy-astr.gsu.edu
[11] http://en.wikipedia.org
[12] http://hyperphysics.phy-astr.gsu.edu
[13] http://www.rfzone.org/free-rf-ebooks/
[14] http://www.udt.com
[15] http://www.sensorsmag.com