Celestial Object Imaging Model and Parameter
Optimization for an Optical Navigation Sensor
Based on the Well Capacity Adjusting Scheme
Hao Wang, Jie Jiang * and Guangjun Zhang
Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education,
School of Instrumentation Science and Opto-electronics Engineering, Beihang University,
No. 37 Xueyuan Road, Haidian District, Beijing 100191, China; [email protected] (H.W.);
[email protected] (G.Z.)
* Correspondence: [email protected]; Tel.: +86-10-8233-8497
Academic Editor: Francesco De Leonardis
Received: 3 January 2017; Accepted: 19 April 2017; Published: 21 April 2017
Abstract: The simultaneous extraction of optical navigation measurements from a target celestial body
and star images is essential for autonomous optical navigation. Generally, a single optical navigation
sensor cannot simultaneously image the target celestial body and stars well-exposed because their
irradiance difference is generally large. Multi-sensor integration or complex image processing
algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates
the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a
single exposure through a single field of view (FOV) optical navigation sensor using the well capacity
adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then,
the celestial body edge model and star spot imaging model are established when the WCA scheme is
applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge
extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by
conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally,
laboratory and night sky experiments are performed to validate the correctness of the proposed
model and optimal exposure parameters.
Keywords: optical navigation sensor; well capacity adjusting; star centroid estimation; edge
extraction; exposure parameter optimization
1. Introduction
Optical autonomous navigation is a key technology in deep space exploration. This process
is usually accomplished by multi-sensor integration, such as star sensors, navigation cameras,
inertial measurement devices, and other equipment. The attitude information of a spacecraft is
obtained by a star sensor and inertial measurement element. The navigation camera captures the
target celestial image with background stars and extracts the target celestial line-of-sight (LOS) vector
according to the current spacecraft attitude. Then, the spacecraft position can be calculated by
integrating these optical navigation measurements according to the geometric relationship. An optical
navigation system based on multi-sensor integration is not only complicated in structure, costly, and power-consuming, but also subject to installation errors between sensors, which further restrict the improvement of navigation accuracy. The best solution for deep space exploration missions is one in which
a single navigation sensor can simultaneously obtain the attitude and LOS vector from the sensor
to the centroid of the target celestial body. This approach requires the sensor to image the stars and
target celestial body and to extract their navigation measurements simultaneously. However, a large
gap exists between the irradiance of stars and the target celestial body. For reference, standard image
sensors have a dynamic range of 40 dB to 70 dB [1], which is insufficient to ensure that the target
celestial body and stars are well-exposed simultaneously. The problem of insufficient dynamic range
for image sensors can be generally solved in three ways.
The design of optical systems to lower the incident flux of a celestial body is first considered.
In Reference [2], the combined Earth-/star sensor for attitude and orbit determination of geostationary
satellites is investigated. The combined Earth-/star sensor has two fields of view (FOV) for the
observation of the Earth and stars. The two FOVs are combined on the detector through a beamsplitter.
The partially transmissive mirror reflects 91% of starlight onto the detector, while transmitting only 9%
of the Earth’s brightness. The Earth’s incident flux is further reduced by a filter. This method directly
lowers the incident flux of the high irradiance target from the source, which is convenient for the
subsequent image processing. The disadvantages of this approach are a more complex optical system design, higher weight, and higher cost.
The second method enhances the dynamic range by image processing algorithms. In [3–6],
multi-exposure fusion techniques are adopted to enhance the dynamic range. A set of different
exposure images is obtained. Then, these images are fused into an image where all scenes or areas of
interest appear well exposed. The advantage of the multi-exposure technique is that it can enhance the
dynamic range without degrading the signal-to-noise ratio (SNR) [7]. The main drawback of exposure
fusion is its limitation to static scenes, and any object movement incurs severe ghosting artifacts in
the fused result. Given that a spacecraft is always in motion, this method is inapplicable in such conditions.
New image sensor designs are also proposed to attain an extended dynamic range.
The photocurrent in logarithmic response image sensors is fed to a resistor with a logarithmic
current-voltage characteristic [8–10]. Logarithmic-response image sensors can obtain a wide dynamic
range, but they have several disadvantages (i.e., image lag, low SNR, large fixed pattern noise, and poor
image quality). This undesired lag effect is most pronounced at low light conditions, and it is caused by
a long settling time constant that can exceed the frame time. Another wide dynamic image sensor based
on time-to-saturation information was reported in [11–14]. The pixel attains its saturation level and
extrapolates the incident light by measuring and recording the time required to attain the saturation
state. The light intensity is derived by the information on the time stored in the memory of each pixel
and then the final image can be reconstructed. However, each pixel of the detector requires a signal
detection circuit, comparator, digital memory, and other components to detect the saturation state and
storage time information. This condition results in large pixel sizes and low fill factor, which limit
the sensor resolution. The well capacity adjusting (WCA) scheme described by Knight [15] and
Sayag [16] and implemented by Decker [17] compresses the sensor’s current versus charge response
curve using a lateral overflow gate. This technology is currently widely employed by integrating a
lateral overflow integration capacitor in a pixel in complementary metal-oxide-semiconductor (CMOS)
detectors [18–20]. The well capacity is monotonically increased once or multiple times to its maximum
value during integration. The accumulated photoelectrons of the high irradiance signal are suppressed,
but the low irradiance signal is unaffected. The WCA scheme enhances the dynamic range, but at the
expense of substantial degradation in SNR.
In [21], the navigation camera used a sequence of long and short exposures for optical navigation.
A short exposure in which the celestial body is not saturated will permit determination of the celestial
body’s location within the image. A long exposure will permit determination of the stars’ location
within the image. Setting a long exposure time to ensure that the dim stars satisfy the detection SNR
limit is necessary to ensure that the navigation sensor has a reliable attitude determination function.
However, a long exposure time can lead to overexposure of the target celestial body, which results in an expansion of the apparent image diameter and in a strong stray light effect that overwhelms the response to the background stars. Given that the WCA scheme is widely
utilized in CMOS image sensors, high dynamic range images can be obtained by a single exposure
with this technique.
This study first analyzes the irradiance characteristics of a celestial body. Then, the celestial
body edge model and star spot imaging model are established when the WCA scheme is applied.
The effect of exposure parameters on the accuracy of star centroid estimation and edge detection is
analyzed based on the proposed model. The exposure parameters are optimized to ensure that the
optical navigation measurements satisfy the requirement of navigation accuracy. Compared with conventional navigation sensors, this study provides a feasible approach to a miniaturized single-FOV optical navigation sensor, which has lower cost and weight, is simple to design, and can simultaneously obtain attitude information and the LOS vector from the sensor to the centroid of the target celestial body. This navigation sensor has strong applicability and can be utilized for a variety of
navigation tasks.
2. Irradiance Characteristics of a Celestial Body
The study of the irradiance characteristics of a celestial body is the prerequisite for the optimization
of the exposure parameters of navigation sensors. The irradiance of a celestial body received by the
detector is analyzed in this section.
Figure 1 shows the spatial position relationship between the navigation sensor and the observed
target. A planet typically emits energy mainly by reflecting solar radiation. The radiation of the sun is assumed to be isotropic, and the irradiance is inversely proportional to the square of the distance [22]. Thus, the irradiance received at the target celestial surface is expressed as:

$$I_P = I_{sun}\left(\frac{r_S}{R_{SP}}\right)^2 \quad (1)$$
where Isun is the irradiance of the sun, rS is the radius of the sun, and RSP is the distance between
the sun and the target celestial body. The incident energy on the surface of a planet is only partially
reflected back into cosmic space, whereas the rest is absorbed by the surface. r P is defined as the
radius of the celestial body, A is the Bond albedo that represents the fraction of energy incident on
an astronomical body scattered back into space at all wavelengths and phase angles. Thus, the total
radiant flux reflected by the surface of the celestial body is expressed as:
$$L_R = \frac{\pi A\, r_P^2\, r_S^2}{R_{SP}^2} \cdot I_{sun} \quad (2)$$
We consider that the navigation sensor observes the planet at a distance of RPC. Only a part of the illuminated area of the celestial body can be observed in most cases. If the surface of a planet is assumed to be homogeneous, then the distribution of the reflected radiant energy only depends on two factors: the distance RPC and the phase angle ξ, which is the angle at the celestial body between the Sun and the observer. The phase function P(ξ) is the ratio of the reflected radiant flux at phase angle ξ to the radiant flux at zero phase angle. When ξ = 0°, P(ξ) = 1. Thus, the irradiance received by the navigation sensor is expressed in the following form:
$$I_C = C\,P(\xi) \cdot \frac{L_R}{4\pi R_{PC}^2} = \frac{C A\, r_P^2\, r_S^2}{4 R_{PC}^2 R_{SP}^2} \cdot P(\xi) \cdot I_{sun} \quad (3)$$
where C is a constant. The total reflected energy of the celestial body is distributed on a spherical
surface. As the distance increases, the radiant flux density decreases, but the total radiant flux remains
the same. Therefore, the total radiant flux over the sphere of radius RPC centered at the celestial body is expressed as:
$$L_R = \int_S I_C\, dS = \int_S \frac{C\,P(\xi)\, L_R}{4\pi R_{PC}^2}\, dS \quad (4)$$
Figure 1. Spatial position relationship between the sensor and target celestial body.
Figure 2 shows that the area dS can be expressed in terms of its coordinates as dS = RPC² sin ξ dφ dξ. Therefore, C is derived as:

$$\frac{4\pi R_{PC}^2}{C} = \int_S P(\xi)\, dS = R_{PC}^2 \int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\pi} P(\xi)\sin\xi\, d\xi\, d\varphi \quad (5)$$
In astronomy, the Bond albedo (A) is related to the geometric albedo (ρ) by the expression A = ρq [23], where:
$$q = 2\int_0^{\pi} \frac{I(\xi)}{I(0)}\sin\xi\, d\xi = 2\int_0^{\pi} P(\xi)\sin\xi\, d\xi \quad (6)$$
Thus, the constant C must obey the following relationship:

$$C = \frac{4\rho}{A} \quad (7)$$

The preceding analysis shows that the irradiance received by the sensor can be expressed as:
$$I_C = \frac{\rho\, r_P^2\, r_S^2}{R_{PC}^2 R_{SP}^2} \cdot P(\xi) \cdot I_{sun} \quad (8)$$
Equation (8) shows that IC is a function of RPC, RSP, and the phase angle ξ. Visual magnitude is the relative quantity generally adopted to measure the irradiance of a celestial object. For example, when the Moon is observed in a geosynchronous orbit, the visual magnitude varies from −2.5 to −12.74. The irradiance of a celestial body is commonly higher than that of stars.
Figure 2. Reflected radiation flux over a sphere.
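As a quick numerical illustration of Equation (8), the minimal Python sketch below evaluates the irradiance received by the sensor. The albedo, distances, and solar surface irradiance used here are assumed illustrative values (roughly matching the Moon seen from a geosynchronous orbit) and are not taken from the paper.

```python
def celestial_irradiance(rho, r_P, r_S, R_PC, R_SP, P_xi, I_sun):
    """Irradiance received by the sensor, Equation (8):
    I_C = rho * r_P^2 * r_S^2 / (R_PC^2 * R_SP^2) * P(xi) * I_sun."""
    return rho * r_P**2 * r_S**2 / (R_PC**2 * R_SP**2) * P_xi * I_sun

# Illustrative values (assumed, not from the paper): the Moon from geosynchronous orbit.
I_C = celestial_irradiance(
    rho=0.12,        # geometric albedo of the Moon (assumed)
    r_P=1.737e6,     # lunar radius [m]
    r_S=6.963e8,     # solar radius [m]
    R_PC=3.6e8,      # sensor-to-Moon distance [m] (assumed)
    R_SP=1.496e11,   # Sun-to-Moon distance [m]
    P_xi=1.0,        # phase function at zero phase angle (full phase)
    I_sun=6.3e7,     # irradiance at the solar surface [W/m^2]
)
print(f"I_C = {I_C:.2e} W/m^2")   # on the order of 1e-3 W/m^2
```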
3. Celestial Object Imaging Model Based on the Well Capacity Adjusting Scheme
The WCA scheme is widely employed in CMOS image sensors. During the integration period of the WCA scheme, the well capacity of a certain pixel is increased several times to extend the range of the incident signal. The dynamic range, which is defined as the ratio of the largest nonsaturating signal to the standard deviation of the noise under dark conditions, is enhanced in the WCA scheme. Figure 3a plots the accumulated photoelectrons versus the integration time for three different irradiance signals in normal integration mode. The accumulated photoelectrons increase linearly with the increase in integration time until they reach the full well capacity. The accumulated photoelectrons of a pixel can be expressed as:

$$Q = \begin{cases} I \cdot T & \text{if } I \le Q_{MAX}/T \\ Q_{MAX} & \text{otherwise} \end{cases} \quad (9)$$

where I is the photocurrent of the incident signal. The largest photocurrent of a nonsaturating incident signal is given by IMAX = QMAX/T.
Figure 3. (a) Accumulated photoelectrons versus time in normal integration mode; and (b) accumulated photoelectrons versus time using the WCA scheme.
The total integration time is divided into several time segments when the WCA scheme is utilized. The well capacity is adjusted to a higher value at the beginning of each time segment until it increases to the full well capacity. The accumulated photoelectrons are a piecewise linear function with respect to the integration time. Figure 3b shows that the integration time is divided into two segments, namely, TS and T − TS. TS is designated as the adjusting integration time (AIT) at this point. The well capacity is adjusted from QS to the full well capacity QMAX at time TS. Notably, when the accumulated photoelectrons reach QS (e.g., the high irradiance signal (IH) case in the figure), the output photoelectrons are clipped until time TS. The excess photoelectrons will spill over from the lateral overflow gate. However, the accumulated photoelectrons of the low irradiance signal (IL) are unaffected. Therefore, the accumulated photoelectrons of a pixel when using the WCA scheme can be expressed as:

$$Q = \begin{cases} I \cdot T & \text{if } I \le Q_S/T_S \\ Q_S + I \cdot (T - T_S) & \text{if } Q_S/T_S < I \le (Q_{MAX} - Q_S)/(T - T_S) \\ Q_{MAX} & \text{otherwise} \end{cases} \quad (10)$$

The largest photocurrent of an incident signal when using the WCA scheme is given by I′MAX = (QMAX − QS)/(T − TS). The standard deviation of the noise under dark conditions is the same. Thus, the dynamic range is enhanced by a factor:

$$\lambda = \frac{I'_{MAX}}{I_{MAX}} = \frac{(Q_{MAX} - Q_S)\, T}{Q_{MAX}\,(T - T_S)} \quad (11)$$

For a signal that does not saturate after integration, Equation (10) can also be expressed as:

$$Q = IT - \varepsilon\!\left(I - \frac{Q_S}{T_S}\right)(IT_S - Q_S) \quad (12)$$

where ε(t) is the unit step function (ε(t) = 1 for t ≥ 0 and 0 for t < 0). As shown in Equation (12), the exposure parameters QS and TS explicitly determine their effect on the output photoelectrons. Therefore, the celestial body edge model and star spot imaging model are first established in this study. Then, the influence of exposure parameters on the accuracy of the extraction of optical navigation measurements is analyzed.
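The piecewise pixel response of Equation (10) and the dynamic-range gain of Equation (11) can be illustrated with a short sketch. The exposure values below are assumed for illustration only and are not the optimized parameters derived later in the paper.

```python
def wca_pixel_response(I, T, T_S, Q_S, Q_MAX):
    """Accumulated photoelectrons of a pixel under the WCA scheme, Equation (10).
    I is the photocurrent in electrons per unit time; T, T_S are times; Q_S, Q_MAX are well capacities."""
    if I <= Q_S / T_S:
        return I * T                       # low irradiance: unaffected by the adjustment
    elif I <= (Q_MAX - Q_S) / (T - T_S):
        return Q_S + I * (T - T_S)         # clipped at Q_S during the first segment
    else:
        return Q_MAX                       # saturated at the full well capacity

def dynamic_range_gain(T, T_S, Q_S, Q_MAX):
    """Dynamic range enhancement factor of Equation (11)."""
    return (Q_MAX - Q_S) * T / (Q_MAX * (T - T_S))

# Assumed illustrative values (electrons, milliseconds).
T, T_S, Q_S, Q_MAX = 30.0, 29.6, 5000.0, 15000.0
for I in (100.0, 500.0, 5000.0):           # photocurrents in e-/ms
    print(I, wca_pixel_response(I, T, T_S, Q_S, Q_MAX))
print("lambda =", dynamic_range_gain(T, T_S, Q_S, Q_MAX))   # 50x with these values
```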
3.1. Celestial Body Edge Model
The edge of a celestial body can ideally be modeled as a step function on a 1D section. Given the effects of the point spread function (PSF) of the optic system, the real edge adjusts to the blurring effect and is called the blurred edge model. The parameter Gaussian PSF radius σPSF indicates the extent of the blurring effect. The celestial body image is assumed to be an ideal disk, such that the radial energy distribution along the direction normal to the edge is isotropic. Therefore, the 2D imaging model can be regarded as having been formed by a 1D edge model that rotates 360° around the center of the disk. A 1D celestial body edge model is established, and the subsequent analyses are based on the said model to simplify the theoretical analysis and calculation. The blurred edge (Figure 4) can be modeled by convolving the ideal step edge with the PSF, which is expressed as [24]:

$$f(x) = \frac{k}{2}\left[\operatorname{erf}\!\left(\frac{x - l}{\sqrt{2}\,\sigma_{PSF}}\right) + 1\right] + h \quad (13)$$

Figure 4. Blurred edge model.

This model has four parameters, namely, background intensity h, edge contrast k, edge location l, and Gaussian PSF radius σPSF. The background intensity is zero regardless of the random noise. Then, the 1D celestial body edge model is expressed as:

$$f(x) = \frac{\phi_P \eta_{QE} T}{2}\left[\operatorname{erf}\!\left(\frac{x - l}{\sqrt{2}\,\sigma_{PSF}}\right) + 1\right] \quad (14)$$

Thus, the 1D edge gradient model is derived as:

$$f'(x) = \frac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[-\frac{(x - l)^2}{2\sigma_{PSF}^2}\right] \quad (15)$$

where T is the integration time, φP is the incident flux density of the celestial body on the image plane, and ηQE is the quantum efficiency of the image sensor. Equation (15) implies that the gradient is maximized at the edge location x = l.

In Figure 5, the small-amplitude blue solid line denotes the intensity distribution with a short integration time, and the central area is under-saturated. The edge location can be obtained at the half-amplitude intersection point. The large-amplitude light blue dashed line denotes the intensity distribution that a long integration time would ideally produce. However, the actual intensity distribution is denoted by the blue solid line because the central area is oversaturated. This scenario will result in an extension of the apparent diameter of the celestial body. Therefore, the real edge location cannot be extracted.

Figure 5. 1D profile of the celestial body image in normal integration mode.
The WCA scheme is used to avoid oversaturation of the celestial body. Let the total integration time be T and the AIT be TS. The well capacity is adjusted at time TS from QS to QMAX. In Figure 6, the red solid line denotes the intensity distribution at time TS, and the accumulated photoelectrons in the central area are clipped at QS. The dark blue solid line denotes the intensity distribution at the end of the integration. Compared with Figure 5, the central area of the celestial body image is under-saturated, which avoids the extension of the apparent diameter of the celestial body. However, Figure 6 shows that a shallow energy ring forms around the central area. Therefore, the 1D celestial body edge model becomes a piecewise function when the WCA scheme is applied and can be expressed as:

$$f(x) = \begin{cases} \dfrac{\phi_P \eta_{QE} T}{2}\left[\operatorname{erf}\!\left(\dfrac{x - l}{\sqrt{2}\,\sigma_{PSF}}\right) + 1\right] & x \le x_c \\[2ex] Q_S + \dfrac{\phi_P \eta_{QE} (T - T_S)}{2}\left[\operatorname{erf}\!\left(\dfrac{x - l}{\sqrt{2}\,\sigma_{PSF}}\right) + 1\right] & x > x_c \end{cases} \quad (16)$$

where xC is the solution of erf((x − l)/(√2 σPSF)) = 2QS/(φP ηQE TS) − 1, which denotes the inflection point of the intensity distribution. The irradiance of a celestial body is commonly high, such that φP ηQE TS ≫ QS. Therefore, xC < l is derived.

The 1D edge gradient model when the WCA scheme is utilized can be derived as:

$$f'(x) = \begin{cases} \dfrac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[-\dfrac{(x - l)^2}{2\sigma_{PSF}^2}\right] & x \le x_C \\[2ex] \dfrac{\phi_P \eta_{QE} (T - T_S)}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[-\dfrac{(x - l)^2}{2\sigma_{PSF}^2}\right] & x > x_C \end{cases} \quad (17)$$

When x = l, the second term of Equation (17) obtains the maximum value. Therefore, the true edge location can be extracted when the WCA scheme is adopted.

Figure 6. 1D profile of the celestial body image when using the WCA scheme.
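A minimal numerical sketch of the behavior described by Equations (14)–(17) is given below. It uses a clipping form equivalent to Equation (16) and assumed parameter values (not from the paper), and simply confirms that the gradient at the true edge location vanishes under normal (saturated) integration but remains nonzero under the WCA scheme.

```python
import numpy as np
from scipy.special import erf

def edge_profile(x, T, l=0.0, sigma_psf=0.67, phi_eta=2000.0):
    """Blurred 1D edge model of Equation (14): (phi_P*eta_QE*T/2)*[erf((x-l)/(sqrt(2)*sigma_PSF)) + 1]."""
    return 0.5 * phi_eta * T * (erf((x - l) / (np.sqrt(2.0) * sigma_psf)) + 1.0)

# Assumed illustrative parameters (electrons, milliseconds).
Q_MAX, Q_S, T, T_S = 15000.0, 5000.0, 30.0, 29.6
x = np.linspace(-5.0, 5.0, 4001)

# Normal integration: the bright side saturates at Q_MAX, so the edge information is lost.
f_norm = np.minimum(edge_profile(x, T), Q_MAX)

# WCA integration, equivalent to Equation (16): charge accumulated during [0, T_S]
# is clipped at Q_S by the overflow gate; the remaining (T - T_S) integrates normally.
f_wca = np.minimum(edge_profile(x, T_S), Q_S) + edge_profile(x, T - T_S)

i0 = np.argmin(np.abs(x - 0.0))                                     # true edge location l = 0
print("gradient at l, normal mode:", np.gradient(f_norm, x)[i0])    # ~0 (oversaturated)
print("gradient at l, WCA mode:   ", np.gradient(f_wca, x)[i0])     # nonzero, edge recoverable
```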
3.2. Star Spot Imaging Model

Establishing an accurate star spot imaging model is the first step to achieving high star centroiding accuracy. Stars can be considered point sources at infinity. Stellar rays can be approximated as parallel light rays. Incident stellar light passes through the optical lens and is focused at a point in the focal plane. However, the lens is generally slightly defocused to improve the centroiding accuracy of a star spot, which spreads to several pixels in the image plane. The profile of a star spot can be described by the Gaussian PSF, whereas the parameter Gaussian PSF radius σPSF indicates the extent of dispersion. When σPSF is high, the region where a star spot spreads out is large. The star spot imaging model in normal integration mode commonly assumes a 2D Gaussian function (Figure 7a) and can be expressed as:

$$E_{nor}(x, y) = \frac{\phi_S T \eta_{QE}}{2\pi\sigma_{PSF}^2} \exp\!\left[-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma_{PSF}^2}\right] = \Phi_S(x, y)\, T \quad (18)$$

where (x0, y0) is the true centroid of the star and φS is the incident flux of the star on the image plane. ΦS(x, y) = (φS ηQE)/(2πσPSF²) · exp[−((x − x0)² + (y − y0)²)/(2σPSF²)] is defined as the energy distribution function at this point. The accumulated photoelectrons of a bright star are suppressed when utilizing the WCA scheme, and the excess photoelectrons are drained via the overflow gate. The star spot imaging model when the WCA scheme is applied (Figure 7b) can be expressed as:

$$E_{WCA}(x, y) = \Phi_S(x, y)\, T - \varepsilon\!\left(\Phi_S(x, y) - \frac{Q_S}{T_S}\right)\left(\Phi_S(x, y)\, T_S - Q_S\right) \quad (19)$$

The celestial body edge model and star spot imaging model when utilizing the WCA scheme have been established so far. In the subsequent section, the effect of exposure parameters on the accuracy of star centroiding and edge detection is analyzed using the proposed models. Then, the exposure parameters are optimized to obtain the best performance of the navigation sensor utilizing the WCA scheme.

Figure 7. (a) Star signal intensity distribution utilizing normal integration mode; and (b) star signal intensity distribution using the WCA scheme.

4. Celestial Object Image Feature Extraction Accuracy Performance Utilizing the WCA Scheme

The navigation measurements of deep space optical navigation systems are generally combined to calculate the LOS direction and spacecraft location [25]. The accuracy of the apparent diameter and celestial body centroiding is determined by the precision of edge detection. The accuracy of the LOS direction of the navigation sensor is determined by the attitude measurement precision, that is, the centroiding accuracy of the background stars. In this section, the effect of exposure parameters on the accuracy of star centroiding and edge detection is analyzed using the proposed image model, which provides theoretical support for parameter optimization.
4.1. Edge Detection Accuracy Performance Utilizing the WCA Scheme

The edge is the part of the image where brightness changes sharply. The edge points based on the blurred edge model are given by the maxima of the first image derivative or the zero crossing points of the second image derivative. Steger proposed a subpixel edge extraction algorithm in his doctoral thesis [26]. The basic principle of the algorithm is to perform the second-order Taylor expansion about the pixel where the local gradient is maximized in the direction of the edge normal and to determine the subpixel location of the zero crossing point of the second derivative. In this study, the edge of the celestial body is extracted using this algorithm, which is essentially a fitting interpolation algorithm.
Given that the edge detection algorithm is based on image derivative information, it is highly
sensitive to noise. Therefore, the image derivatives must be estimated by convolving the image with
the derivatives of the Gaussian smoothing kernel. Edges appear as bright lines in an image that
contains the absolute value of the gradient. The second-order Taylor expansion about the maximized
gradient pixel in the direction of the edge normal is expressed as:
$$r'(x) = r'(x_0) + r''(x_0)\,x + \frac{1}{2}r'''(x_0)\,x^2 \quad (20)$$

where x0 is the pixel center. The subpixel location of the edge, where r''(x) = 0, is expressed as:

$$l_0 = x_0 - \frac{r''(x_0)}{r'''(x_0)} \quad (21)$$
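The subpixel localization of Equations (20) and (21) can be sketched as follows. The discrete derivative estimates and the toy Gaussian gradient profile are assumptions made for illustration; this is not the authors' implementation.

```python
import numpy as np

def subpixel_edge(r, x0):
    """Subpixel edge location from Equations (20)-(21): expand the smoothed gradient r'(x)
    around the maximal-gradient pixel x0 and solve r''(x) = 0, giving l0 = x0 - r''(x0)/r'''(x0).
    r holds discrete samples of the gradient; derivatives come from finite differences."""
    r2 = 0.5 * (r[x0 + 1] - r[x0 - 1])          # estimate of r''(x0)
    r3 = r[x0 + 1] - 2.0 * r[x0] + r[x0 - 1]    # estimate of r'''(x0)
    return x0 - r2 / r3

# Toy example: a Gaussian-shaped gradient centered at 10.3 pixels (assumed values).
x = np.arange(30, dtype=float)
edge, sigma = 10.3, 1.0
gradient = np.exp(-((x - edge) ** 2) / (2.0 * sigma ** 2))   # r'(x), peaks at the edge
x0 = int(np.argmax(gradient))
print(subpixel_edge(gradient, x0))      # close to 10.3
```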
The 1D celestial body edge model is a piecewise function when the WCA scheme is applied. The
gradient of the edge model is derived by convolving the edge model with the first derivative of the
Gaussian smoothing kernel and expressed as:
$$\begin{aligned} r'(x) &= f(x) * g'(x) = \int_{-\infty}^{\infty} f'(\tau)\, g(x - \tau)\, d\tau \\ &= \int_{-\infty}^{x_c} \frac{\phi_P \eta_{QE} T}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[-\left(\frac{\tau - l}{\sqrt{2}\,\sigma_{PSF}}\right)^2\right] \cdot \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[-\left(\frac{x - \tau}{\sqrt{2}\,\sigma}\right)^2\right] d\tau \\ &\quad + \int_{x_c}^{\infty} \frac{\phi_P \eta_{QE} (T - T_S)}{\sqrt{2\pi}\,\sigma_{PSF}} \exp\!\left[-\left(\frac{\tau - l}{\sqrt{2}\,\sigma_{PSF}}\right)^2\right] \cdot \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[-\left(\frac{x - \tau}{\sqrt{2}\,\sigma}\right)^2\right] d\tau \\ &= \frac{\phi_P \eta_{QE}}{\sqrt{2\pi(\sigma_{PSF}^2 + \sigma^2)}} \exp\!\left[-\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)}\right]\left\{ T - \frac{T_S}{2} + \frac{T_S}{2}\operatorname{erf}[\Theta(x)] \right\} \end{aligned} \quad (22)$$
Thus, the second derivative of the edge model is derived as:

$$r''(x) = -\frac{\phi_P \eta_{QE}\,(x - l)}{\sqrt{2\pi(\sigma_{PSF}^2 + \sigma^2)}\,(\sigma_{PSF}^2 + \sigma^2)} \exp\!\left[-\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)}\right]\left\{ T - \frac{T_S}{2} + \frac{T_S}{2}\operatorname{erf}[\Theta(x)] \right\} - \frac{\phi_P \eta_{QE} T_S\,\sigma_{PSF}}{2\pi\sigma(\sigma_{PSF}^2 + \sigma^2)} \exp\!\left[-\frac{(x - l)^2}{2(\sigma_{PSF}^2 + \sigma^2)}\right] \exp\!\left\{-\Theta^2(x)\right\} \quad (23)$$

where Θ(x) = [σPSF²(x − xc) + σ²(xc − l)] / [√(2(σPSF² + σ²)) σPSF σ]. The edge location is the zero crossing point of the second
derivative, which indicates that the solution of Equation (23) and r'''(x) r'(x) < 0 are required.
Given that the analytical solution cannot be calculated, the effect of exposure parameters on the edge
detection accuracy is analyzed by numerical simulations. The edge localization error is the function
of the Gaussian radius σPSF , well capacity QS , total integration time T, AIT TS , Gaussian smoothing
kernel radius σ, and edge location l. For a given optical system, σPSF is constant, σ is a parameter of
the edge detection algorithm, and T is determined by the limiting detectable star visual magnitude for
the navigation sensor. This study focuses on analyzing the edge detection error caused by QS and TS .
Systematic error in edge detection is also introduced by pixelization. However, l is a random variable
in practice that is uniformly distributed over a pixel. The root mean square error is defined as the error
of edge detection, which is expressed as:
$$\delta_E^2(Q_S, T_S) = \int_{-0.5}^{0.5} \delta^2(Q_S, T_S, l)\, dl \quad (24)$$
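The root mean square error of Equation (24) can be approximated by a Monte Carlo average over the uniformly distributed sub-pixel edge position. The sketch below uses a simplified noisy blurred edge (without the WCA clipping) and assumed noise levels, purely to illustrate the definition.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def blurred_edge(x, l, sigma_psf=0.67, amplitude=1.0):
    """Pixel-sampled blurred edge profile with the shape of Equation (14), unit amplitude."""
    return 0.5 * amplitude * (erf((x - l) / (np.sqrt(2.0) * sigma_psf)) + 1.0)

def locate_edge(profile):
    """Subpixel edge from the discrete gradient, following Equations (20)-(21)."""
    g = np.gradient(profile)
    x0 = int(np.argmax(g))
    r2 = 0.5 * (g[x0 + 1] - g[x0 - 1])
    r3 = g[x0 + 1] - 2.0 * g[x0] + g[x0 - 1]
    return x0 - r2 / r3

# Monte Carlo estimate of Equation (24): the true edge l is uniform over one pixel,
# with additive Gaussian noise (assumed level) on the sampled profile.
x = np.arange(40, dtype=float)
errors = []
for _ in range(2000):
    l = 20.0 + rng.uniform(-0.5, 0.5)
    profile = blurred_edge(x, l) + rng.normal(0.0, 0.005, x.size)
    errors.append(locate_edge(profile) - l)
print("delta_E ~", np.sqrt(np.mean(np.square(errors))), "pixels")
```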
σPSF = 0.67 pixels, σ = 0.55 pixels, T = 30 ms are set at this point. Then, the simulated
celestial body images with temporal noise and fixed pattern noise are generated. The noise sources taken into consideration include photon shot noise, dark current noise, readout noise, quantization noise, dark signal non-uniformity, and photon response non-uniformity. The full well capacity is set to QMAX = 15,000 e−, consistent with the image sensor we utilized. The edge detection error simulation results are shown in Figure 8.

First, the relationship between the edge detection error δE and the AIT TS is discussed. The black line indicates that δE is evidently affected by TS. An interval exists wherein δE is significantly small. The second segment of the integration time T − TS is relatively long when TS is short, which leads to the oversaturation of the central region pixels and an extension of the apparent diameter of the celestial body. However, T − TS is relatively short when TS is excessively long, which leads to a small intensity contrast between the central region and the energy ring. The algorithm will then extract the edge of the “rings” instead of the actual edge location. Therefore, δE initially reaches the minimum and then increases with the increase in TS.
Edgedetection
detectionerror
error δEE (left)
AIT
TS T
(right).
Figure
(left) versus
versus well
well capacity
capacity QQ
and
AIT
S Sand
S (right).
QSS isisdiscussed.
Second, the
the relationship
relationship between edge
discussed.
and well
well capacity Q
Second,
edge detection
detectionerror
error δEE and
Setting
relatively small
small δEE. .TSTS=29.6
29.6ms
msisisset
setatatthis
thispoint.
point.
Settingthe
theAIT
AITto
toan
an appropriate
appropriate value
value leads to a relatively
The
red
line
indicates
that
δ
varies
slightly
and
remains
nearly
constant
at
the
beginning
with
the
The red line indicates that E E varies slightly and remains nearly constant at the beginning withthe
increase in QS . As QS continues to increase, δE increases sharply. The reason for this relationship
increase in QS . As QS continues to increase,  E increases sharply. The reason for this relationship
is provided in Figure 9, which shows the simulated images of the celestial object and the second
is provided in Figure 9, which shows the simulated images of the celestial object and the second
derivative of the edge model expressed in Equation (23) when the well capacity values are 1000e−,
derivative of the edge model expressed in Equation (23) when the well capacity values are 1000 e ,
5000e−and 14,000e− .
5000 e and 14,000 e .
In Figure 9, the purple color represents the saturated pixels, whereas the cyan color represents the
In Figure 9, the purple color represents the saturated pixels, whereas the cyan color represents the pixels with zero intensity. The yellow arc represents the true edge of the celestial body. The red cross symbol indicates the zero-crossing point of the second derivative, which is located at the true edge location, whereas the blue circle symbol indicates the zero-crossing point that deviates from the true edge location. Figure 9a shows that the intensity of the energy ring is small when the well capacity QS is small and that the zero-crossing points can be extracted at the actual edge location. Figure 9b shows that the intensity of the energy ring increases with the increase in QS and that zero crossing points exist at the location of the energy ring. The algorithm extracts double edges, and the false edge can be rejected. Figure 9c shows that the intensity of the energy ring is higher with a larger QS and that no zero-crossing points are obtained at the actual edge location. The extracted edge deviates from the true location. Thus, a large QS value results in false edge extraction.
2017,
17,17,
915
Sensors
2017,
915
1212
ofof
2323
Thus, the well capacity and AIT are the main factors that affect the edge detection error. TS
Thus, the well capacity and AIT are the main factors that affect the edge detection error.
evidently influences on the accuracy of edge detection. The edge detection error initially reaches the
TS evidently influences on the accuracy of edge detection. The edge detection error initially reaches
minimum number and then increases with the increase in T ; Q must not be excessively large.
the minimum number and then increases with the increase in STS ; QSS must not be excessively large.
(a)
QS=1000e-
(b)
(c)
QS=5000e-
QS=14000e-
gray value
255
128
The second derivatives of edge data
0
-10
0
x[pixels]
10
-10
0
x[pixels]
10
-10
0
x[pixels]
10
−;
Figure
of different
well capacity
on the
edge on
detection
resultsdetection
when: (a) Q
Figure9. Effect
9. Effect
of different
well values
capacity
values
the edge
results
when:
S = 1000e



−
−
(b)
= 14,
. QS  14,000e .
(a)QSQ=
1000e ; ;and
(b) (c)
QS QS5000
e 000e
; and (c)
S 5000e
4.2.
4.2. Star Centroiding Accuracy Performance Utilizing the WCA Scheme
Star centroiding accuracy is the basis for attitude accuracy. The star centroiding accuracy performance is analyzed when the WCA scheme is employed to ensure attitude accuracy. The total centroiding error is decomposed into the x- and y-component errors. The errors in each case can be proven to be the same. Thus, this study focuses on analyzing the x-component errors as an example. The centroiding error of the x-component δx when the WCA scheme is applied is expressed as:

$$\delta_x = \frac{\sum_i \sum_j x_i I_{ij}}{\sum_i \sum_j I_{ij}} - x_0 = \frac{\sum_i \sum_j x_i \left[\Phi_S(x_i, y_j)\,T - \varepsilon\!\left(\Phi_S(x_i, y_j) - \frac{Q_S}{T_S}\right)\left(\Phi_S(x_i, y_j)\,T_S - Q_S\right)\right]}{\phi_S \eta_{QE} T - \sum_i \sum_j \varepsilon\!\left(\Phi_S(x_i, y_j) - \frac{Q_S}{T_S}\right)\left(\Phi_S(x_i, y_j)\,T_S - Q_S\right)} - x_0 \quad (25)$$

In Equation (25), δx is a function of the Gaussian radius σPSF, well capacity QS, total integration time T, AIT TS, incident flux of the star on the image plane φS, and actual star location x0. T is determined by the limiting detectable star visual magnitude for the navigation sensor. If x0 moves within a pixel, then δx changes periodically. However, x0 is a random variable that is uniformly distributed over a pixel within the range [−0.5, 0.5) in practice. The root mean square error is defined as the error of x, as:

$$\delta_{x,S}^2(Q_S, T_S) = \int_{-0.5}^{0.5} \delta_x^2(Q_S, T_S, x_0)\, dx_0 \quad (26)$$

After adding temporal noise and fixed pattern noise to the simulated star image, the relationship among δx,S, QS, and TS is analyzed with different star magnitudes. Figure 10 shows the simulated results when the star magnitude is 2, 4, 5, and 6. First, the relationship between the star centroiding error δx,S and the AIT TS is discussed. Figure 10a shows that the centroiding error of the star of magnitude 2 slowly increases with the increase in TS. Figure 10b–d shows that the centroiding error variation caused by TS can be neglected. Thus, δx,S is less affected by TS. Second, the relationship between the star centroiding error δx,S and the well capacity QS is discussed. Figure 10a–c shows that δx,S decreases with the increase in QS. However, the centroiding error variation of a dim star (Figure 10d) caused by the well capacity can be neglected.

The total star centroiding error is expressed as:

$$\delta_{S,cen} = \sqrt{\delta_{x,S}^2 + \delta_{y,S}^2} \quad (27)$$
The relationship
between
centroiding
parameters
consistent
with
the x-component
errors.
Thus,total
the star
centroiding
errorerror
of a and
dimexposure
star is unaffected
by is
the
WAC scheme.
the
errors.ofThus,
thestar
centroiding
a dim
star is
by the
scheme.
Thex-component
centroiding error
a bright
decreaseserror
withofthe
increase
inunaffected
well capacity,
andWAC
the AIT
effect
The
centroiding
error
of
a
bright
star
decreases
with
the
increase
in
well
capacity,
and
the
AIT
effect
can be ignored.
can be ignored.
(b)
0.1
0.05
(c)
4000
Q [e -]6000
S
(d)
Star Magnitude=5
8000
29
Star Magnitude=6
30
8000
29
[m
s]
4000 6000
Q [e -]
S
29.5
s
[m
s]
29
s
8000
T
4000 6000
Q [e -]
S
0
2000
T
30
29.5
x,S[pixels]
0.05
x,S[pixels]
0.05
0
2000
30
29.5
[m
s]
29
[m
s]
8000
s
4000 6000
Q [e -]
S
0
2000
s
30
29.5
T
0.05
0
2000
Star Magnitude=4
x,S[pixels]
Star Magnitude=2
T
x,S[pixels]
(a)
Figure 10. δx,S versus well capacity QS and AIT TS for different star magnitudes: (a) star magnitude =
Figure 10.  x, S versus well capacity QS and AIT TS for different star magnitudes: (a) star
2; (b) star magnitude
= 4; (c) star magnitude = 5; and (d) star magnitude = 6.
magnitude = 2; (b) star magnitude = 4; (c) star magnitude = 5; and (d) star magnitude = 6.
In the preceding sections, the centroiding accuracy performance of a single star is analyzed
In the preceding sections, the centroiding accuracy performance of a single star is analyzed when
when the WCA scheme is adopted. Many stars in the FOV are required to increase the attitude
the WCA scheme is adopted. Many stars in the FOV are required to increase the attitude
determination accuracy in practice. More dim stars have been generally recorded than bright stars
determination accuracy in practice. More dim stars have been generally recorded than bright stars in
in the FOV. The overall star centroiding error is defined as the weighted average of centroiding
the FOV. The overall star centroiding error is defined as the weighted average of centroiding errors
errors for different star magnitudes at this point. The overall centroiding error can directly reflect the
for different star magnitudes at this point. The overall centroiding error can directly reflect the
attitude determination accuracy of the optical navigation sensor. The star magnitudes range from 0
attitude determination accuracy of the optical navigation sensor. The star magnitudes range from 0
to 7 at 0.5 intervals. The star magnitudes that range from mV − 0.25 to mV + 0.25 are considered
to 7 at 0.5 intervals. The star magnitudes that range from mV  0.25 to mV + 0.25 are considered the
the magnitude mV to simplify the analysis process. Therefore,
the overall star centroiding error is
magnitude
expressed
as: mV to simplify the analysis process. Therefore, the overall star centroiding error is
7
expressed as:
∑ δS,MVi ( NMVi ,FOV − NMVi−1 ,FOV )
i =0
δS,All =
(28)
NMV7 ,FOV

i 0
 S , All 
where
N17,
M Vi ,915
FOV
Sensors
2017,
S , M Vi
( N MVi , FOV  N MVi1 , FOV )
(28)
N MV 7 , FOV
is the average number of stars brighter than magnitude MVi in the FOV, 14and
of 23is
expressed as [27]:


where NMVi ,FOV is the average number of stars brighter1than
 cosmagnitude
A / 2 MVi in the FOV, and is
(29)
N MVi , FOV  6.57  e1.08MVi 
expressed as [27]:
2
1 − cos( A/2)
NMVi ,FOV = 6.57 · e1.08MVi ·
(29)
2
where A is the FOV size. The number of stars in the FOV increases exponentially with the increase in star magnitude M_{Vi}; there are many more dim stars than bright ones in the FOV. As a result, the centroiding error of dim stars contributes more to the overall centroiding error. Figure 11 shows the relationship between the overall centroiding error and the well capacity QS when TS = 29.6 ms. The overall centroiding error remains relatively stable with the increase in QS because the centroiding error of dim stars is much less affected by QS than that of bright stars. Therefore, the variation of the overall centroiding error caused by the well capacity can be neglected.
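As a quick numerical check on Equations (28) and (29), the following Python sketch (not part of the original work) evaluates the cumulative star count per magnitude bin for an assumed FOV size and forms the weighted overall centroiding error. The per-magnitude errors and the 20-degree FOV used below are placeholders standing in for the values produced by the star spot imaging model and by the actual optical design.

```python
import math

def stars_in_fov(m_v, fov_deg):
    """Average number of stars brighter than magnitude m_v in a circular FOV
    of full cone angle fov_deg, Equation (29)."""
    a = math.radians(fov_deg)
    return 6.57 * math.exp(1.08 * m_v) * (1.0 - math.cos(a / 2.0)) / 2.0

def overall_centroiding_error(delta_per_mag, fov_deg):
    """Weighted average centroiding error, Equation (28). delta_per_mag maps
    each magnitude bin (0, 0.5, ..., 7) to its centroiding error in pixels."""
    mags = sorted(delta_per_mag)
    weighted, prev_count = 0.0, 0.0
    for m in mags:
        count = stars_in_fov(m, fov_deg)
        weighted += delta_per_mag[m] * (count - prev_count)  # stars added by this bin
        prev_count = count
    return weighted / stars_in_fov(mags[-1], fov_deg)

# Placeholder per-magnitude errors; real values come from the star spot model.
deltas = {m / 2.0: 0.02 + 0.005 * (m / 2.0) for m in range(0, 15)}
print(overall_centroiding_error(deltas, fov_deg=20.0))   # hypothetical 20 deg FOV
```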
Figure 11. Overall star centroiding error versus well capacity.
In summary, the well capacity QS is mainly responsible for the centroiding error of a single star. However, the effect of the well capacity on the attitude determination accuracy can be neglected.
5. Exposure Parameter Optimization
In the preceding sections, the celestial body edge model and the star spot imaging model are established. Both models are piecewise functions, such that an analytical solution for the optimal exposure parameters is difficult to obtain. Thus, the optimal exposure parameters are obtained by conducting Monte Carlo simulation. The exposure parameters include the total integration time T, the AIT TS, and the well capacity QS. An appropriate value of T ensures that sufficient stars are covered in the FOV for star pattern recognition, which is a prerequisite for a reliable attitude measurement function. The total integration time is generally set to a relatively long value to ensure that dim stars can be identified reliably, which can cause the target celestial body to become overexposed. Therefore, the WCA scheme is adopted, and the AIT and well capacity are optimized to obtain the best navigation performance.

The optical system employed in this study has the following parameters: aperture D = 40 mm, focal length f = 100 mm, and optical transmission τ = 80%. The image sensor utilized is CMV20000, the parameters of which are listed in Table 1.
Table 1. Parameters of the CMV20000 image sensor.

Parameter                        Value
Active pixels                    5120 × 3840
Pixel pitch                      6.4 µm × 6.4 µm
Full well capacity, QMAX         15,000 e−
Conversion gain                  0.25 DN/e−
Dark current                     125 e−/s
PRNU                             1%
DSNU                             10 e−/s
Read noise                       8 e−
Quantization bits                12
Quantum efficiency, ηQE          0.45 e−/photon
5.1. Total Integration Time T
The total integration time is determined by the limiting detectable star visual magnitude for
the navigation sensor. Navigation sensors must conduct star pattern recognition to obtain attitude
information. A sufficient number of navigation stars must be present in the FOV to ensure the
effectiveness of the star pattern recognition algorithm. A star can generally be identified reliably if at least five of its pixels have an SNR greater than 5. Thus, T must be chosen such that the SNR of the darkest of these pixels of the limiting detectable star exceeds 5, which can be expressed as:
SNR = \frac{K \phi_S \eta_{QE} T}{N_{noise}} = \frac{K \phi_{S0} \eta_{QE} T \cdot 2.512^{-M_V}}{\sqrt{n_{Shot}^2 + n_{Dark}^2 + n_{PRNU}^2 + n_{DSNU}^2 + n_{read}^2 + n_{ADC}^2}} > 5    (30)
Thus, the following expression can be derived:
T > \frac{5 \sqrt{n_{Shot}^2 + n_{Dark}^2 + n_{PRNU}^2 + n_{DSNU}^2 + n_{read}^2 + n_{ADC}^2}}{K \phi_{S0} \eta_{QE} \cdot 2.512^{-M_V}}    (31)
where nShot, nDark, nPRNU, nDSNU, nread, and nADC denote the standard deviations of the photon shot noise, dark current noise, photo response non-uniformity noise (PRNU), dark signal non-uniformity noise (DSNU), readout noise, and quantization noise, respectively. Photon shot noise follows a Poisson distribution and depends on the incident flux on the pixel; its variance is equal to the number of photoelectrons collected in the imaging process. Dark current noise also follows a Poisson distribution, and its variance is equal to the product of the dark current and the exposure time. The standard deviation of the quantization noise is nADC = 1/(√12·G), where G denotes the conversion gain. These noise terms can be derived from the parameters in Table 1. K is the ratio of the energy of the darkest pixel to the total energy of the star signal; K = 0.0287 is set at this point. φS0 = 5.4 × 10^4 photons/ms is the incident flux of a magnitude-0 star. MV is the limiting detectable star visual magnitude of the navigation sensor; MV = 6 is set at this point. Equation (31) shows that the total integration time must satisfy T > 25.4 ms. T = 30 ms is set to provide the system with a certain degree of redundancy.
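For illustration, the minimum total integration time required by Equation (30) can be found with a simple numerical scan. The sketch below is an approximation under assumptions of ours, not the paper's exact noise budget: it uses the Table 1 parameters together with K = 0.0287, φS0 = 5.4 × 10^4 photons/ms and MV = 6, and assumes simple scalings for how each noise term grows with integration time, so its result only approximates the T > 25.4 ms bound reported above.

```python
import math

# Sensor parameters (Table 1) and Section 5.1 constants; the way each noise
# term scales with integration time is an illustrative assumption.
ETA_QE = 0.45          # quantum efficiency [e-/photon]
GAIN = 0.25            # conversion gain [DN/e-]
DARK = 0.125           # dark current [e-/ms]
PRNU = 0.01            # photo response non-uniformity (fraction of signal)
DSNU = 0.01            # dark signal non-uniformity [e-/ms] (assumed scaling)
READ = 8.0             # read noise [e-]
K_DARKEST = 0.0287     # darkest-pixel share of the star energy
PHI_S0 = 5.4e4         # magnitude-0 star flux [photons/ms]
M_V = 6                # limiting detectable star magnitude

def snr_darkest_pixel(t_ms):
    """SNR of the darkest of the five brightest pixels of an M_V = 6 star,
    in the spirit of Equation (30)."""
    signal = K_DARKEST * PHI_S0 * ETA_QE * t_ms * 2.512 ** (-M_V)   # [e-]
    variance = (signal                         # photon shot noise
                + DARK * t_ms                  # dark current shot noise
                + (PRNU * signal) ** 2         # PRNU
                + (DSNU * t_ms) ** 2           # DSNU
                + READ ** 2                    # read noise
                + (1.0 / (math.sqrt(12.0) * GAIN)) ** 2)   # quantization noise
    return signal / math.sqrt(variance)

t_ms = 1.0
while snr_darkest_pixel(t_ms) <= 5.0:          # smallest T satisfying Eq. (30)
    t_ms += 0.1
print(f"minimum total integration time under these assumptions: {t_ms:.1f} ms")
```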
5.2. Adjusting Integration Time TS
The AIT TS is one of the main factors that influences the edge detection accuracy performance of
the celestial body. Based on the previous conclusion, the edge detection error first decreases to a minimum and then increases with the increase in TS. However, the variation of the star centroiding
error caused by TS can be neglected. Therefore, this study focuses on optimizing TS to minimize the
edge detection error.
Monte Carlo simulation is performed through the following procedure: First, a total of 8000 groups
of celestial body images are generated using the proposed model. The well capacity values range from
3000e− to 5000e− at 25e− intervals. The AIT values range from 29.6 ms to 29.8 ms at 0.002 ms intervals.
Each group contains 100 celestial body images, whose individual center coordinates are fixed; then,
random noise is added. However, the radius of the celestial body obeys a uniform distribution over one pixel, which is equivalent to the edge location varying uniformly over a pixel. Second, the edge data
of the 100 celestial bodies are extracted using the edge detection algorithm. The absolute fitting
radius error is obtained by fitting these edge points utilizing the least square circle fitting algorithm.
The absolute radius error is a direct expression of the edge detection error. Then, the standard deviation
of these absolute radius errors in one group is considered the average edge detection error at certain
well capacity and AIT values. Finally, this procedure is repeated until all groups of celestial body
images are processed.
The simulation conditions are set as follows: the incident flux of the celestial body on the
image plane average in one pixel φP = 7.5 × 104 photons/ms, the total integration time T = 30 ms,
the Gaussian radius σPSF = 0.67 pixels, and the Gaussian smoothing kernel radius σ = 0.55 pixels.
The AIT TS that corresponds to the minimum standard deviation of the edge detection error is selected as the optimal TS at a certain well capacity, which is shown as a red scattered point in Figure 12.
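The Monte Carlo procedure can be summarized by the following Python skeleton. It is only a sketch: the noisy edge points are generated directly as a stand-in for the celestial body edge model and the sub-pixel edge detection step, whereas in the paper the edge scatter at each (QS, TS) pair comes from the imaging model itself. The circle fit is a standard algebraic (Kåsa) least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns (xc, yc, r)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    xc, yc = c / 2.0, d / 2.0
    return xc, yc, np.sqrt(e + xc ** 2 + yc ** 2)

def edge_points(radius, edge_noise_px, n_points=360):
    """Stand-in for the celestial body edge model plus sub-pixel edge
    detection: noisy points on a circle. In the paper, the scatter of these
    points at each (QS, TS) pair is produced by the imaging model itself."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = radius + rng.normal(0.0, edge_noise_px, n_points)
    return r * np.cos(theta), r * np.sin(theta)

def edge_error_std(edge_noise_px, n_images=100, r_true=200.0):
    """Average edge detection error for one (QS, TS) group: the standard
    deviation of the absolute fitted-radius errors over n_images images."""
    errors = []
    for _ in range(n_images):
        radius = r_true + rng.uniform(-0.5, 0.5)  # edge location varies over one pixel
        x, y = edge_points(radius, edge_noise_px)
        _, _, r_fit = fit_circle(x, y)
        errors.append(abs(r_fit - radius))
    return np.std(errors)

# In the full procedure this statistic is evaluated on the (QS, TS) grid
# described above, and the TS with the smallest value at each QS is kept.
print(edge_error_std(edge_noise_px=0.2))
```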
The simulation results are shown in Figure 12. The blue solid line indicates the linear function between the well capacity QS and the optimal TS fitted by MATLAB. The optimal TS increases with the increase in well capacity. Therefore, the optimal AIT is derived as:

T_S = T - \frac{Q_{MAX} - Q_S}{\phi_P \eta_{QE}}    (32)
The same conclusion can also be obtained under other simulation conditions. Thus, the optimal AIT TS when utilizing the WCA scheme is given by Equation (32).
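Equation (32) is straightforward to evaluate. Using QMAX = 15,000 e−, φP = 7.5 × 10^4 photons/ms, ηQE = 0.45 and T = 30 ms, the short sketch below (an illustration, not code from the paper) reproduces AIT values close to those annotated on the abscissa of Figure 13, for example about 29.67 ms at QS = 3750 e−.

```python
Q_MAX = 15000.0   # full well capacity [e-]
PHI_P = 7.5e4     # celestial body flux per pixel [photons/ms]
ETA_QE = 0.45     # quantum efficiency [e-/photon]
T_TOTAL = 30.0    # total integration time [ms]

def optimal_ait(q_s):
    """Optimal adjusting integration time TS from Equation (32), in ms."""
    return T_TOTAL - (Q_MAX - q_s) / (PHI_P * ETA_QE)

for q_s in (2000, 3000, 3750, 4000, 5000):
    print(q_s, round(optimal_ait(q_s), 3))   # e.g. 3750 e- -> ~29.667 ms
```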
Figure 12. Simulation results of the optimal TS.
5.3. Well Capacity QS
The optimization of the well capacity QS is determined by two conditions. First, if QS is set excessively low, then the SNR of a dim star decreases, which can cause star identification failure for the limiting detectable star. Therefore, a lower limit on the well capacity is necessary. Second, QS is the main factor that affects the accuracy of edge detection when the WCA scheme is applied, and the optimal QS must minimize the edge detection error. Therefore, the optimal well capacity QS is comprehensively determined by the aforementioned conditions.
The intensity distribution of the limiting detectable star cannot be degraded to ensure that it is identified reliably. Thus, the lower limit of the well capacity is expressed as:

Q_S \geq T_S K_B \phi_{S0} \eta_{QE} \cdot 2.512^{-M_{Vt}}    (33)
By substituting Equation (32) into Equation (33), the following expression is obtained:

Q_S \geq \frac{(\phi_P T \eta_{QE} - Q_{MAX}) K_B \phi_{S0} \cdot 2.512^{-M_{Vt}}}{\phi_P - K_B \phi_{S0} \cdot 2.512^{-M_{Vt}}}    (34)
where KB is the ratio of the energy of the brightest pixel to the total energy of the star signal; KB = 0.2965 is set at this point. MVt is the star magnitude limit threshold, and the SNR of a star that is dimmer than magnitude MVt does not decrease when the WCA scheme is employed. MVt = 5.5 is set at this point, and QS ≥ 1400e− is obtained.
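Evaluating Equation (34) with the constants quoted above (KB = 0.2965, φS0 = 5.4 × 10^4 photons/ms, MVt = 5.5, φP = 7.5 × 10^4 photons/ms, ηQE = 0.45, T = 30 ms and QMAX = 15,000 e−) gives a lower bound on the order of 1.4 × 10^3 e−, of the same order as the QS ≥ 1400e− figure quoted above; the exact value depends on how the constants are rounded. A minimal sketch:

```python
K_B = 0.2965      # brightest-pixel share of the star energy
PHI_S0 = 5.4e4    # magnitude-0 star flux [photons/ms]
PHI_P = 7.5e4     # celestial body flux per pixel [photons/ms]
ETA_QE = 0.45     # quantum efficiency [e-/photon]
Q_MAX = 15000.0   # full well capacity [e-]
T_TOTAL = 30.0    # total integration time [ms]
M_VT = 5.5        # star magnitude limit threshold

def q_s_lower_bound():
    """Lower limit on the well capacity from Equation (34), in electrons."""
    star = K_B * PHI_S0 * 2.512 ** (-M_VT)
    return (PHI_P * T_TOTAL * ETA_QE - Q_MAX) * star / (PHI_P - star)

print(round(q_s_lower_bound()))   # roughly 1.3e3-1.4e3 e- with these constants
```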
The well capacity QS is another factor that influences the edge detection performance of the celestial body. Monte Carlo simulation is performed to obtain the optimal well capacity value. Given that the relationship between the optimal AIT and the well capacity is already derived in the previous section, the optimal exposure parameters under the current imaging conditions are determined. A total of 300 groups of celestial body images were generated based on the celestial body edge model. Then, random noise is added. The well capacity values range from 2000e− to 5000e− at 10e− intervals. Each TS is derived using Equation (32) at a certain QS. Then, the edge data of the 100 celestial bodies are extracted using the edge detection algorithm. The absolute fitting radius error is obtained by fitting these edge points utilizing the least squares circle fitting algorithm. The standard deviation of these absolute radius errors in one group is considered the average edge detection error at a certain well capacity. The well capacity QS that corresponds to the minimum standard deviation of the edge detection error is selected as the optimal QS if it also satisfies the condition expressed in Equation (34).
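The selection of the optimal well capacity can then be sketched as a constrained sweep: TS is tied to QS through Equation (32), and the candidate that minimizes the edge detection error is kept only if it also satisfies the Equation (34) bound. The error model below is a purely illustrative placeholder for the Monte Carlo result; the real curve comes from the celestial body edge model.

```python
import numpy as np

def optimal_ait(q_s, t_total=30.0, q_max=15000.0, phi_p=7.5e4, eta_qe=0.45):
    """Equation (32): the AIT tied to a given well capacity, in ms."""
    return t_total - (q_max - q_s) / (phi_p * eta_qe)

def edge_error_std(q_s, t_s):
    """Placeholder for the Monte Carlo edge detection error at (QS, TS);
    the convex shape below is purely illustrative."""
    return 0.02 + 1.0e-9 * (q_s - 3750.0) ** 2

Q_S_MIN = 1400.0                                  # Equation (34) bound
candidates = np.arange(2000.0, 5001.0, 10.0)      # sweep described in the text
feasible = [q for q in candidates if q >= Q_S_MIN]
best_q = min(feasible, key=lambda q: edge_error_std(q, optimal_ait(q)))
print(best_q, round(optimal_ait(best_q), 3))
```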
The simulation results of the optical system in this study are shown in Figure 13. The abscissa denotes the well capacity QS, whereas the values in parentheses denote the optimal TS that corresponds to the current QS. The edge detection error first decreases to a minimum and then increases with the increase in QS. The symbol “*” indicates the optimal QS. Thus, the optimal solutions for the exposure parameters are identified. The optimal well capacity of the navigation sensor of the optical system employed in this study is 3750e−, whereas the optimal AIT is 29.67 ms.
Figure 13. Simulation results of the optimal QS.
6. Experimental Results and Analysis
A laboratorial single-star imaging and accuracy analysis experiment and a night sky experiment are conducted to validate the correctness of the proposed models, the accuracy performance analysis, and the optimal exposure parameters. The image sensor of the navigation sensor adopted in these experiments is CMV20000, the parameters of which are listed in Table 1.
6.1. Laboratorial Single-Star Imaging and Accuracy Analysis Experiment
A laboratorial single-star imaging and accuracy analysis experiment is performed to validate the
star spot imaging model and centroiding accuracy performance when the WCA scheme is employed.
The autocollimator in the laboratory is used to generate infinite distance star signals with different star
magnitudes. The navigation sensor is mounted on a turntable, as shown in Figure 14.
Figure 14. Setup for the laboratorial experiment.
The exposure parameters of the navigation sensor are set as follows: total integration time T = 30 ms, AIT TS = 29.67 ms, and well capacity QS = 3750e−. Figure 15 shows the star images of different magnitudes using the normal integration mode and the WCA scheme. The brightness of the bright stars is clearly degraded when the WCA scheme is applied. However, the energy distribution of the dim stars is unaffected. Therefore, the star spot imaging model is validated by the experiment.
Figure 15. (a–d) Star images of magnitudes 2, 4, 5 and 6 when using the normal integration mode; and (e–h) star images of magnitudes 2, 4, 5 and 6 when using the WCA scheme.
The centroiding accuracy performance when utilizing the WCA scheme is validated. The exposure parameters of the navigation sensor are set as follows: total integration time T = 30 ms, AIT TS = 29.67 ms, and well capacity QS ranging from 2000e− to 8000e− at 500e− intervals. Then, star centroiding is performed and the experimental data are recorded. Although the true position of the star is unknown, the average centroid position (xcen, ycen) of a bright unsaturated star image can be considered the estimate of the true position. Then, the standard deviations of the centroiding errors of each magnitude with respect to the position (xcen, ycen) are calculated at each QS. The experimental results are shown in Figure 16.
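The error statistic used here is simple to compute: for each magnitude and each QS setting, the centroids measured over many frames are compared with their own mean, which serves as the estimate of the true position. A minimal numpy sketch, with synthetic measurements in place of the recorded data, is:

```python
import numpy as np

def centroiding_error_std(centroids):
    """Per-axis standard deviation of the measured centroids about their mean;
    the mean position serves as the estimate of the true star position."""
    centroids = np.asarray(centroids, dtype=float)
    return centroids.std(axis=0, ddof=1)          # (sigma_x, sigma_y) in pixels

# Synthetic stand-in for the recorded centroids of one star at one QS setting.
rng = np.random.default_rng(1)
measured = np.array([512.3, 384.7]) + rng.normal(0.0, 0.05, size=(200, 2))
print(centroiding_error_std(measured))            # ~0.05 px per axis
```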
Figure 16. Star centroiding error versus well capacity: (a) star magnitude = 2; (b) star magnitude = 4; (c) star magnitude = 5; and (d) star magnitude = 6.

In Figure 16, the experimental results are denoted with red solid lines, whereas the simulation results are denoted with blue solid lines. The centroiding accuracy performance from the experiment is consistent with the simulation results. Thus, the laboratorial single-star imaging and accuracy analysis experiment validates the conclusions of this study.

6.2. Night Sky Observation and Accuracy Analysis Experiment

A night sky observation and accuracy analysis experiment is performed to validate the correctness of the celestial body edge model and the optimal exposure parameters when the WCA scheme is applied. Moreover, it is validated whether the optical navigation measurements of the stars and the target celestial body, extracted from an image taken in a single exposure with the optimal exposure parameters, satisfy the navigation accuracy requirements. In Figure 17, the navigation sensor is installed on a tripod. The hardware configuration of the navigation sensor is the same as that previously mentioned.

Figure 17. Setup of the night sky experiment.
6.2.1. Observations of the Moon and Accuracy Analysis Experiment
The images of the Moon are obtained utilizing the normal integration and WCA schemes, as shown in Figure 18. Figure 18b shows that the image is largely overexposed and that the apparent diameter of the Moon is extended significantly. Figure 18c shows the Moon imaged with the optimal exposure parameters utilizing the WCA scheme. Compared with the image shown in Figure 18b, although the total integration time is the same, the image of the Moon is well exposed and is suitable for navigation measurement extraction when the WCA scheme is applied. Figure 18d shows the image of the Moon when the well capacity is set to 6093e−. The energy ring around the Moon is clearly visible. The observation results validate the correctness of the celestial body edge model when the WCA scheme is adopted.

Figure 18. Lunar images with different exposure parameters: (a) T = 0.4 ms utilizing the normal integration mode; (b) T = 30 ms utilizing the normal integration mode; (c) T = 30 ms, TS = 29.67 ms, QS = 3750e− utilizing the WCA scheme; and (d) T = 30 ms, TS = 29.67 ms, QS = 6093e− utilizing the WCA scheme.
The theoretical value of the apparent radius of the Moon is estimated to be 71.02 pixels on the image plane by applying the STK software to simulate the distance from the observation location to the Moon. The edge detection algorithm is employed to extract the edge of the Moon, and least squares circle fitting is utilized to obtain the apparent radius and centroid of the Moon image. The results are shown in Figure 19.

Figure 19. Edge detection and circle fitting results of the Moon image.
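The theoretical apparent radius can also be cross-checked with a simple pinhole-camera relation, r_px = f·R_Moon/(d·p), where p is the pixel pitch and d the Earth–Moon distance at the observation epoch (obtained from STK in this study; the distance used in the sketch below is only an assumed example).

```python
F = 0.100             # focal length [m]
PIXEL_PITCH = 6.4e-6  # pixel pitch [m]
R_MOON = 1.7374e6     # mean lunar radius [m]
D_MOON = 3.82e8       # assumed Earth-Moon distance at the epoch [m]

r_px = F * R_MOON / (D_MOON * PIXEL_PITCH)   # small-angle pinhole projection
print(round(r_px, 2))                        # ~71 px for this assumed distance
```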
In Figure 19, the red scatter points are the extracted edge points, the yellow circle is the fitted circle, and the “+” symbol is the centroid position. The minimum relative deviation of the apparent radius with respect to the theoretical value is obtained when the optimal exposure parameters are adopted. Table 2 shows the average extraction results of the apparent radius under different exposure conditions. However, the apparent radius errors are larger than the simulation results. This phenomenon may be attributed to atmospheric turbulence and lens calibration errors, which are beyond the scope of this study; thus, these factors are not considered in the model. In summary, the Moon observations validate the reliability of the error analysis and parameter optimization.
Table 2. Apparent radius of the Moon under different exposure conditions.

Exposure Conditions                               Apparent Radius/Pixels   Error/Pixels
Normal integration, T = 0.4 ms                    71.44                    0.42
Normal integration, T = 30 ms                     75.36                    4.34
WCA, T = 30 ms, TS = 29.65 ms, QS = 3046e−        71.46                    0.44
WCA, T = 30 ms, TS = 29.67 ms, QS = 3750e−        71.36                    0.34
WCA, T = 30 ms, TS = 29.69 ms, QS = 4453e−        71.60                    0.58
WCA, T = 30 ms, TS = 29.74 ms, QS = 6093e−        72.12                    1.10
6.2.2. Observations of the Moon and Stars in the Same FOV
Images of stars and the Moon in the same FOV are taken utilizing the WCA scheme and the optimal exposure parameters, as shown in Figure 20. The star pattern identification algorithm is applied, and the identified stars are denoted with the yellow “+” symbol. A total of 12 stars are identified. The brightest star magnitude is 3.0, whereas the dimmest star magnitude is 6.2. The centroid positions and magnitudes of the identified stars in the image are listed in Table 3.

Figure 20. Observations of the Moon and stars in the same FOV.
Table 3. Centroid positions and magnitude of the identified stars.

Number   Centroid/Pixels          Magnitude
1        (3439.886, 3084.843)     3.8
2        (2985.837, 2934.429)     3.0
3        (1707.333, 2326.095)     4.4
4        (2575.164, 2232.000)     5.2
5        (1012.179, 256.769)      5.5
6        (3025.250, 1638.643)     5.4
7        (57.533, 1493.233)       4.9
8        (3045.821, 1069.692)     6.2
9        (3131.244, 1017.171)     5.3
10       (3624.311, 684.475)      4.8
11       (977.373, 675.311)       4.3
12       (4056.727, 59.644)       3.1
Under the optimal exposure parameters, the navigation sensor can simultaneously identify the stars that are dimmer than the limiting detectable star magnitude and extract the high-accuracy edge
location of the celestial body (results listed in Table 2). In summary, by utilizing the WCA scheme,
the navigation sensor can image the stars and target celestial body well-exposed simultaneously within
a single exposure and can reliably extract high-accuracy optical navigation measurements that satisfy
the navigation demand. The night sky observation and accuracy analysis experiment validates our
study conclusions.
7. Conclusions
In this paper, we first analyze the irradiance characteristics of a celestial body. This study aims at solving the problem that an optical navigation sensor cannot image the target celestial body and stars well-exposed simultaneously. Given that their irradiance difference is generally large, a solution that utilizes the WCA scheme is proposed. Then, the celestial body edge model and the star spot imaging model are established when the WCA scheme is adopted. The effect of the exposure parameters on the accuracy of star centroid estimation and edge extraction is analyzed based on the models. The AIT TS and the well capacity QS are the main factors that influence the edge detection accuracy performance of the celestial body. The edge detection error first decreases to a minimum and then increases with the increase in TS, and an interval of TS exists over which the edge detection error is significantly small. When TS is set to an appropriate value, the edge detection error likewise first decreases to a minimum and then increases with the increase in QS. The well capacity is the
main factor that influences the centroiding accuracy performance of a single star. The star centroiding
error of a bright star decreases with the increase in QS . However, the centroiding error of a dim star
is mainly caused by random noise, and more dim stars are recorded than bright stars in the FOV.
Therefore, the overall centroiding error variation caused by the exposure parameters can be neglected.
The exposure parameters are optimized to ensure that the optical navigation measurements satisfy
the requirement of navigation accuracy. The optimal QS and analytical solution of the optimal TS are
obtained by conducting Monte Carlo simulation. The laboratorial and night sky experiments validate
the correctness of the models, the proposed optimal exposure parameters, and other study conclusions.
This study validates the feasibility of extracting attitude information and LOS vector from the sensor to
the centroid of the target celestial body simultaneously by utilizing a miniaturized single FOV optical
navigation sensor.
Acknowledgments: This work was supported by National Natural Science Fund of China (NSFC) (No. 61222304)
and the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20121102110032).
We gratefully acknowledge the support.
Author Contributions: Hao Wang is responsible for the overall work, performance of the simulations, analysis and
experiments, and writing of this paper. Jie Jiang provided partial research ideas and modified the paper.
Guangjun Zhang is the research group leader who provided general guidance during the research and approved
this paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. El Gamal, A.; Eltoukhy, H. CMOS image sensors. IEEE Circuits Devices Mag. 2005, 21, 6–20. [CrossRef]
2. Kühl, C.T. Combined Earth-/Star Sensor for Attitude and Orbit Determination of Geostationary Satellites. Ph.D. Dissertation, Universität Stuttgart, Stuttgart, Baden-Württemberg, Germany, March 2005.
3. Jinno, T.; Okuda, M. Multiple exposure fusion for high dynamic range image acquisition. IEEE Trans. Image Process. 2012, 21, 358–365. [CrossRef] [PubMed]
4. Zhang, W.; Cham, W.-K. Reference-guided exposure fusion in dynamic scenes. J. Vis. Commun. Image Represent. 2012, 23, 467–475. [CrossRef]
5. Goshtasby, A.A. Fusion of multi-exposure images. Image Vis. Comput. 2005, 23, 611–618. [CrossRef]
6. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion: A simple and practical alternative to high dynamic range photography. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2009; Volume 28, pp. 161–171.
7. Yang, D.X.; El Gamal, A. Comparative analysis of SNR for image sensors with enhanced dynamic range. In Sensors, Cameras, and Systems for Scientific/Industrial Applications; Blouke, M.M., Williams, G.M., Jr., Eds.; SPIE: San Jose, CA, USA, 1999; Volume 3649, pp. 197–211.
8. Cheng, H.-Y.; Choubey, B.; Collins, S. An integrating wide dynamic-range image sensor with a logarithmic response. IEEE Trans. Electron Devices 2009, 56, 2423–2428. [CrossRef]
9. Kavadias, S.; Dierickx, B.; Scheffer, D.; Alaerts, A.; Uwaerts, D.; Bogaerts, J. A logarithmic response CMOS image sensor with on-chip calibration. IEEE J. Solid State Circuits 2000, 35, 1146–1152. [CrossRef]
10. Kim, D.; Song, M. An enhanced dynamic-range CMOS image sensor using a digital logarithmic single-slope ADC. IEEE Trans. Circuits Syst. II Express Briefs 2012, 59, 653–657. [CrossRef]
11. Brajovic, V.; Kanade, T. A sorting image sensor: An example of massively parallel intensity-to-time processing for low-latency computational sensors. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; pp. 1638–1643.
12. Luo, Q.; Harris, J.G.; Chen, Z.J. A time-to-first spike CMOS image sensor with coarse temporal sampling. Analog Integr. Circuits Signal Process. 2006, 47, 303–313. [CrossRef]
13. Stoppa, D.; Simoni, A.; Gonzo, L.; Gottardi, M.; Dalla Betta, G.-F. Novel CMOS image sensor with a 132-dB dynamic range. IEEE J. Solid State Circuits 2002, 37, 1846–1852. [CrossRef]
14. Stoppa, D.; Vatteroni, M.; Covi, D.; Baschirotto, A.; Sartori, A.; Simoni, A. A 120-dB dynamic range CMOS image sensor with programmable power responsivity. IEEE J. Solid State Circuits 2007, 42, 1555–1563. [CrossRef]
15. Knight, T.F. Design of an Integrated Optical Sensor with on-Chip Preprocessing. Ph.D. Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1983.
16. Sayag, M. Non-Linear Photosite Response in CCD Imagers. U.S. Patent No. 5,055,667, 8 October 1991.
17. Decker, S.; McGrath, D.; Brehmer, K.; Sodini, C.G. A 256 × 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output. IEEE J. Solid State Circuits 1998, 33, 2081–2091. [CrossRef]
18. Fossum, E.R. High Dynamic Range Cascaded Integration Pixel Cell and Method of Operation. U.S. Patent No. 6,888,122, 3 May 2005.
19. Akahane, N.; Sugawa, S.; Adachi, S.; Mori, K.; Ishiuchi, T.; Mizobuchi, K. A sensitivity and linearity improvement of a 100-dB dynamic range CMOS image sensor using a lateral overflow integration capacitor. IEEE J. Solid State Circuits 2006, 41, 851–858. [CrossRef]
20. Lee, W.; Akahane, N.; Adachi, S.; Mizobuchi, K.; Sugawa, S. A high S/N ratio and high full well capacity CMOS image sensor with active pixel readout feedback operation. In Proceedings of the Solid-State Circuits Conference, ASSCC'07, IEEE Asian, Jeju City, Korea, 12–14 November 2007; pp. 260–263.
21. Hawkins, S.E., III; Boldt, J.D.; Darlington, E.H.; Espiritu, R.; Gold, R.E.; Gotwols, B.; Grey, M.P.; Hash, C.D.; Hayes, J.R.; Jaskulek, S.E. The Mercury dual imaging system on the MESSENGER spacecraft. Space Sci. Rev. 2007, 131, 247–338. [CrossRef]
22. Iqbal, M. An Introduction to Solar Radiation, 1st ed.; Academic Press: Don Mills, ON, Canada, 2012; pp. 35–36.
23. The Information of Bond Albedo. Available online: https://en.wikipedia.org/wiki/Bond_albedo (accessed on 9 November 2016).
24. Hagara, M.; Kulla, P. Edge detection with sub-pixel accuracy based on approximation of edge with Erf function. Radioengineering 2011, 20, 516–524.
25. Lightsey, G.E.; Christian, J.A. Onboard image-processing algorithm for a spacecraft optical navigation sensor system. J. Spacecr. Rocket. 2012, 49, 337–352. [CrossRef]
26. Steger, C. Unbiased Extraction of Curvilinear Structures from 2D and 3D Images. Ph.D. Dissertation, Technische Universität München, Arcisstraße 21, Munich, Bavaria, Germany, 1998.
27. Liebe, C.C. Accuracy performance of star trackers - a tutorial. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 587–599. [CrossRef]
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).