Flat-Panel Space-Based Space Surveillance Sensor
Richard L. Kendrick, Alan Duncan, Chad Ogden, Joe Wilm, David M. Stubbs
Lockheed Martin ATC, 3251 Hanover Street, Palo Alto, CA, 94304, USA
Samuel T. Thurman
Lockheed Martin Coherent Technologies, 135 S. Taylor Ave., Louisville, CO 80027, USA
Tiehu Su, Ryan P. Scott, S. J. B. Yoo
Department of Electrical and Computer Engineering, University of California,
Davis, California 95616, USA
ABSTRACT
We present a design for an extremely thin form factor computational imaging system based on long baseline
interferometry and photonic integrated circuits.
1. INTRODUCTION
Traditional electro-optical (EO) imaging payloads consist of an optical telescope to collect light from a scene and
map the photons to an image plane to be digitized by a focal plane detector array. The size, weight, and power
(SWaP) for the traditional EO imager is dominated by the optical telescope, driven primarily by large optics, bulky
stiff structures, and thermal controls needed to maintain precise alignment. We propose a non-traditional Segmented
Planar Imaging Detector for EO Reconnaissance (SPIDER) imager concept that is intended to substantially reduce
SWaP by at least an order of magnitude. SPIDER provides large effective apertures for fine-resolution imaging,
while minimizing system volume, mass, and cost.
SPIDER replaces the traditional optical telescope and digital focal plane detector array with a compact, dense
interferometer imaging array [1] that utilizes photonic integrated circuit (PIC) technologies [2]. Light from pairs of
separated lenslets is coupled into waveguides on a PIC chip and combined to form interferometer baselines. A small
system form factor is realized by using on-chip components to perform the interferometry: match optical path
lengths, combine light from the waveguides, impart phase shifts, and detect the light. In this way, light from many
baselines (lenslet pairs) can be processed with a single chip, forming a 1D interferometer array. Several of these 1D
arrays are used together in different orientations to form 2D arrays. System performance is enhanced using
additional on-chip components: waveguide arrays are used behind each lenslet to increase field-of-view (FOV) and
arrayed waveguide gratings (AWGs) are used to disperse broadband light into different spectral channels, improving
spatial frequency coverage.
Our approach replaces the large optics and structures required by a conventional telescope with PICs made using
standard lithographic complementary metal-oxide semiconductor (CMOS) fabrication techniques. The standard EO
payload integration and test process, which involves precision alignment and testing of optical components, can be
replaced by in-process integration and test during PIC fabrication to substantially reduce schedule and cost. The low
profile and low SWaP of a SPIDER system enable high-resolution imaging with a payload that is similar in size and
aspect ratio to a solar panel. This allows fine-resolution, low-cost options for space-based space surveillance
missions based on hosted payloads and CubeSat designs. We present a description of the concept and preliminary
simulation and experimental data that demonstrate the imaging capabilities of the SPIDER technique.
2. SPIDER CONCEPT DESIGN
2.1 SPIDER Photonic Integrated Circuit Architecture
SPIDER is based on standard imaging interferometer techniques [1]. Fig. 1 shows a 1D array of Michelson
interferometers based on the PIC architecture. Light from pairs of lenslets is coupled into waveguides and combined
to form interference fringes. The PIC contains path-matching delays and balanced four-quadrature detectors for
measuring the complex (amplitude and phase) visibility of the object for each interferometer baseline. By the van
Cittert–Zernike theorem [3], each of these measurements corresponds to a unique Fourier (or u-v spatial frequency)
component of the incoherent object. 2D Fourier fill is achieved by collecting data from several 1D interferometer
arrays with different orientations. Reconstruction algorithms are used to invert this data and form an image [4].
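To make the van Cittert–Zernike sampling concrete, the following sketch (our illustration, not the SPIDER processing code; all names are ours) computes the ideal complex visibility that a baseline with separation (Bx, By) would measure from an incoherent scene:

import numpy as np

def visibility_samples(scene, pixel_size, z, baselines, wavelength):
    # Per the van Cittert-Zernike theorem, each baseline (Bx, By) samples
    # the normalized Fourier transform of the incoherent scene intensity
    # at spatial frequency (u, v) = (Bx, By) / (wavelength * z).
    ny, nx = scene.shape
    yy, xx = np.mgrid[:ny, :nx] * pixel_size   # scene-plane coordinates [m]
    total = scene.sum()
    vis = []
    for bx, by in baselines:
        u, v = bx / (wavelength * z), by / (wavelength * z)  # [cycles/m]
        vis.append((scene * np.exp(-2j * np.pi * (u * xx + v * yy))).sum() / total)
    return np.array(vis)   # one complex visibility per baseline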
Fig. 1. Diagram of a 1D SPIDER array that consists of six lenslets that are paired to form three interferometer
baselines. On-chip path-matching delays and balanced four-quadrature detectors are used to measure complex fringe
visibility for each baseline.
In astronomy, interferometry is normally only used to image objects of finite extent. SPIDER, however, can image
extended scenes, because each waveguide acts like a field stop to effectively limit the object extent [5]. As described
so far, the SPIDER image of an extended scene would look like the object apodized by the lenslet/waveguide input-coupling efficiency. This is a narrow-FOV image that we call a SPIDER “tile” image. For wide-FOV imaging, a 2D
array of waveguides is used behind each lenslet (along with the associated extra path matching and detection
components). The additional components are not shown in Fig. 1. The different waveguides are used to collect data
for separate tile images that span a large area. A wide-FOV SPIDER image is then formed by stitching together
these tile images into a mosaic.
Finally, AWGs are incorporated on the SPIDER chip (see Fig. 2) to allow broadband operation, which improves
photon efficiency and provides increased u-v spatial frequency coverage. Light coupled into one of the input
waveguides is dispersed by an AWG into narrowband spectral channels, routed to individual waveguides, and
interfered with light of the same spectral channel from the corresponding baseline-pair lenslet. For a given physical
baseline, the spatial frequency coordinate scales with the optical wavenumber, such that the spectral channels yield
different u-v samples of the object Fourier transform. Fig. 2(c) is a photograph of a SPIDER PIC test piece that was
designed and fabricated by UC Davis.
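As a rough illustration of this wavelength scaling (a sketch using the simulation parameters of Table 1; the variable names are ours):

import numpy as np

z = 250e3                                    # object distance [m]
baseline = np.array([0.5, 0.0])              # one 0.5 m baseline [m]
channels = np.linspace(500e-9, 900e-9, 10)   # notional AWG channel centers

# Spatial frequency scales with optical wavenumber 1/lambda, so the ten
# spectral channels of this one physical baseline sample ten different
# radial points in the u-v plane.
uv = np.array([baseline / (lam * z) for lam in channels])   # [cycles/m]
print(uv[:, 0])   # u coordinate for each spectral channel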
Fig. 2. (a) Diagram of PIC components for a single baseline, showing how AWGs are used to divide broadband
light into various spectral channels for increased u-v coverage; (b) notional AWG spectral channels; and (c)
photograph of an example PIC with 16 input waveguides.
2.2 SPIDER Payload Design Concept
Fig. 3 shows an example SPIDER payload design. The system is composed of 37 1D interferometer arrays arranged
in a radial pattern. Various parts shown in the exploded view are:
• Many Interferometer Tube Assemblies that contain the lenslets for coupling light into the PIC waveguides;
• An Array Plate to hold the Interferometer Tube Assemblies in place;
• Inner and Outer Align Cylinders for maintaining alignment of the PIC chips;
• 37 Silicon PIC Cards that contain the various interferometer components mentioned above; and
• A stiff Back Plate.
The overall system form factor is comparable to a flat screen television or solar panel. The spatial resolution of the
imager is determined by the maximum interferometer baseline and is comparable to a conventional aperture having
a diameter approximately equal to the length of an individual PIC chip. Fig. 4 shows a model of the SPIDER
payload integrated with a spacecraft bus. An actuated hexapod provides global field-of-regard steering. Multi-layer
insulation blankets and fiber/electrical harnessing are not shown.
[Fig. 3 part labels: Interferometer Tube Assy Array (14 per row, 37 rows); Lenslet Array Plate; Outer Align Cylinder; Inner Align Cylinder; Silicon Cards (37); Back Plate]
Fig. 3. Solid model of a SPIDER payload concept design: a fully assembled imager is shown on the left, while an
exploded view is shown on the right.
Fig. 4. Model of the SPIDER payload integrated with a Surrey spacecraft bus. The spacecraft bus model was
provided courtesy of Surrey Satellite Technology US LLC and the hexapod assembly model was provided courtesy
of PI (Physik Instrumente).
3. SIMULATION RESULTS
Image simulations were conducted to illustrate SPIDER imaging capabilities and applications for space situational
awareness. System parameters used for the simulations are given in Table 1. Note that these values differ from the
concept design of Sec. 2.2. The field-of-view of an individual SPIDER image tile is roughly twice the resolvable
spot size of an individual lenslet, i.e.,

FOVtile = 2λz/D = 20 m ,    (1)
where λ is the mean wavelength, z is the object distance, and D is the lenslet diameter. The simulation system
parameters were chosen to obtain FOVtile = 20 m with the intention of imaging ~10 m wide satellites. Fig. 5 shows
the 2D arrangement of lenslets in the pupil plane and the resulting u-v spatial frequency coverage. The system
resolution is approximately given by

R = λz/Bmax = 0.35 m ,    (2)

where Bmax is the longest interferometer baseline.
Table 1. SPIDER system parameters used for simulations.

Parameter                          Value
Waveband                           500-900 nm
Object Distance                    250 km
Lenslet Diameter                   8.75 mm
Longest Baseline                   0.5 m
Number of Lenslets per PIC card    24
Number of PIC cards                37
Number of Spectral Bins            10
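As a quick numeric check of Eq. (2) against the Table 1 parameters (the 700 nm mean wavelength is our assumption, taken as the center of the 500-900 nm band):

wavelength = 700e-9   # assumed mean wavelength [m]
z = 250e3             # object distance [m]
b_max = 0.5           # longest baseline [m]
print(wavelength * z / b_max)   # -> 0.35 m, matching Eq. (2)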
Fig. 5. (a) 2D arrangement of SPIDER lenslets and (b) corresponding u-v spatial frequency coverage. The increased
u-v coverage provided by broadband light is indicated by the different color data points.
For the simulations, SPIDER data was generated by computing complex-valued Fourier-transform samples of an
input object scene for the u-v points shown in Fig. 5(b). Images were reconstructed from this noiseless “data”
subject to physical constraints using a nonlinear optimization approach [4]. The image pixel values were iteratively
adjusted to minimize a merit function that contained terms based on data consistency, image compactness, and
object nonnegativity.
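A minimal sketch of this kind of constrained reconstruction is shown below. It is our own simplified stand-in for the algorithm of [4]: projected gradient descent on a merit function with a data-consistency term, a quadratic compactness penalty that grows with distance from the image center, and a nonnegativity projection.

import numpy as np

def reconstruct(vis, uv, shape, pixel_size, n_iter=500, step=0.5, alpha=1e-3):
    # Toy reconstruction: minimize ||F x - vis||^2 + alpha * sum(w * x^2)
    # subject to x >= 0, where F is the Fourier operator sampled at the
    # measured u-v points and w weights pixels by squared distance from
    # the image center (a simple compactness term).
    ny, nx = shape
    yy, xx = (np.mgrid[:ny, :nx] - np.array([ny, nx]).reshape(2, 1, 1) / 2) * pixel_size
    F = np.exp(-2j * np.pi * (np.outer(uv[:, 0], xx.ravel())
                              + np.outer(uv[:, 1], yy.ravel())))
    w = (xx**2 + yy**2).ravel()
    img = np.zeros(ny * nx)
    for _ in range(n_iter):
        resid = F @ img - vis                 # data-consistency residual
        grad = np.real(F.conj().T @ resid) + alpha * w * img
        img = np.maximum(img - step * grad / len(vis), 0.0)   # nonnegativity
    return img.reshape(shape)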
Fig. 6. Simulation results for imaging an individual satellite: (a) pristine target scene used as input for the
simulation; (b) unfiltered reference image from a conventional filled aperture system (diameter = 0.5 m); and (c)
SPIDER image reconstruction. The images shown cover a 12 m x 10 m area at the target.
Fig. 6 shows simulation results for imaging an individual satellite. The input scene, an image from a conventional
0.5 m aperture system, and the SPIDER image reconstruction are shown for comparison. The SPIDER result
exhibits a few artifacts, but is comparable to the conventional image. Fig. 7 shows results for wide-FOV SPIDER
imaging, using the waveguide arrays behind each lenslet. The two-satellite input scene (12 m x 30 m) is wider than
FOVtile = 20 m (the field-of-view of an individual SPIDER tile image). The individual image tiles shown in Fig.
7(b)-(d) result from using data collected with neighboring waveguides behind each lenslet. Each tile image covers a
different portion of the scene. These tile images were stitched together to form the wide-FOV SPIDER mosaic
image shown in Fig. 7(e). We are working to improve the fidelity of the simulations by including a detailed noise
model and to reduce artifacts in the reconstructed imagery.
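The stitching step can be pictured with the following sketch (our illustration of a generic weighted mosaic, not the authors' pipeline): overlapping tiles are blended with weights given by the tile apodization.

import numpy as np

def stitch(tiles, offsets, tile_weight, mosaic_shape):
    # Blend narrow-FOV tiles into a wide-FOV mosaic. Each tile is
    # apodized by the lenslet/waveguide coupling efficiency, so the same
    # profile is used as the blending weight in overlap regions.
    acc = np.zeros(mosaic_shape)
    wsum = np.zeros(mosaic_shape)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        acc[r:r + h, c:c + w] += tile_weight * tile
        wsum[r:r + h, c:c + w] += tile_weight
    return acc / np.maximum(wsum, 1e-12)   # weighted average where tiles overlap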
Fig. 7. Simulation results for wide-FOV imaging of multiple satellites: (a) pristine target scene covering a 12 m
x 30 m area; (b)-(d) individual narrow-FOV (20 m x 20 m) SPIDER tile images covering different parts of (a); and (e)
wide-FOV SPIDER mosaic image [only a 12 m x 30 m portion of the FOV is shown for comparison with (a)]. Data
for image tiles (b)-(d) is collected simultaneously from light coupled into neighboring waveguides behind individual
lenslets.
4. EXPERIMENTAL RESULTS
A testbed was built using 1550 nm fiber components to demonstrate long-baseline interferometry with an extended
object. Fig. 8 shows the experiment hardware. Fig. 8(a) is an overhead photograph of the 1 m aperture, f/100 scene
projector telescope. A 1 in. square, chrome-on-glass, 1 cycle/mm, 90% modulation sinusoidal amplitude grating was
used as an extended scene and placed in the back focal plane of the scene projector. The object was back illuminated
with light from a 150 W Quartz-Tungsten-Halogen fiber bundle illuminator (Thorlabs OSL1). The light was
delivered to the scene through a long (~2-3 m) glass light pipe to reduce partial coherence effects, and the
illumination beam was approximately 19 mm wide. Fig. 8(b) is a photograph of three 45 mm diameter fiber
collimators (Microlaser FC45, viewed from behind) used as collection apertures for the interferometer. For this
experiment, only two of these apertures were used: one was held in a fixed position and the other was mounted on a
translation stage (Newport ILS250PP) to allow the baseline between the apertures to be varied. Using Eq. (1) with λ
= 1550 nm, z = 100 m, and D = 45 mm, the interferometer field-of-view at the target is FOV = 7 mm. Note that the
object is considered to be an “extended scene” since the illuminated region of the target is wider than this FOV. The
illustration in Fig. 9 shows the relative sizes of the object transparency, the illumination beam, and the
interferometer FOV.
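Plugging the testbed values into Eq. (1) reproduces the quoted field-of-view:

wavelength = 1550e-9   # [m]
z = 100.0              # object distance [m]
D = 45e-3              # collimator aperture diameter [m]
print(2 * wavelength * z / D)   # -> 6.9e-3 m, i.e. the quoted ~7 mm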
Fig. 8. Photographs of the experiment testbed: (a) 1 meter scene projector; (b) 45 mm fiber collimators used to
collect light for the interferometer; and (c) fiber-based beam combiner that performs the functions of the PIC.
Fig. 9. Illustration showing the relative sizes of the object transparency slide, the illumination beam, and the fiber
interferometer FOV. Since the illuminated area is wider than the system FOV, this is an extended scene.
Fig. 8(c) is a photograph of the fiber beam combination hardware used to make fringe visibility measurements. There
are several 50/50, 2x2 PM fiber splitters (Newport F-PMC-1550-50), a piezoelectric PM fiber delay line (General
Photonics FST-002-FC/APC), and four detectors (Newport 2153 IR Femtowatt Photoreceiver). Two of the detectors
are used to monitor the photometric coupling efficiency of each collection aperture. The other detectors are used to
record the fringes at the interferometer output ports (180 deg out of phase from each other) as the optical delay line
is scanned through a range of ~4 mm. This 4-channel configuration allows raw measurements to be converted to
normalized fringe data through pre-calibration of the fiber split ratios and compensation for temporal fluctuations in
the fiber coupling efficiency of each collecting telescope [6].
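A sketch of this normalization, after the method of [6] (our simplified rendering; k1 and k2 are the pre-calibrated split ratios):

import numpy as np

def normalize_fringes(i_plus, i_minus, p1, p2, k1, k2):
    # i_plus, i_minus: interferometric outputs (180 deg out of phase);
    # p1, p2: photometric monitor signals for the two apertures.
    # Differencing the complementary outputs cancels the common-mode
    # photometric terms and doubles the fringe signal; dividing by the
    # coherent flux compensates fluctuations in fiber coupling.
    return (i_plus - i_minus) / (4.0 * np.sqrt(k1 * p1 * k2 * p2))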
Fig. 10 is a plot of the squared-visibility estimated from normalized fringe data versus the telescope baseline
separation. For this simple object, we expect a peak in the visibility at a baseline Bpeak that corresponds to a spatial
frequency that matches the spatial frequency of the grating pattern f0, i.e.,

Bpeak = λ z f0 = 155 mm ,    (3)
where f0 = 1 cycle/mm. The plot shows that the peak visibility is indeed at a telescope separation of 155 mm. The
plot also includes a theoretical visibility curve that includes the field stop/apodization effect of the fiber coupling
efficiency. Note that the width of the peak observed in the data agrees well with the theoretical peak width. We
believe the measured visibility is not quite as high as predicted due in part to residual partial coherence in the
illumination beam.
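A quick check of Eq. (3) with the testbed values:

wavelength = 1550e-9   # [m]
z = 100.0              # object distance [m]
f0 = 1e3               # grating frequency: 1 cycle/mm = 1000 cycles/m
print(wavelength * z * f0)   # -> 0.155 m, the 155 mm peak seen in Fig. 10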
Fig. 10. Plot of squared-visibility for the extended sinusoidal target vs. telescope baseline separation: (solid line with
points) visibility estimates from testbed data and (dashed line) theoretical model that includes the fiber-coupling
field stop/apodization effect.
5. SUMMARY
The application of rapidly emerging nanophotonics and microelectromechanical systems (MEMS) technologies to
imaging is an exciting and revolutionary new field with substantial implications for future space-based sensors. This
approach involves collecting photons from external sources and processing them on a PIC to form computational
images rather than the traditional approach of directly forming an image on a digital focal plane with a large, bulky
optical telescope. This paper is focused on innovative, low technology readiness level (TRL) concepts that provide
unique imaging solutions for space surveillance sensors in a cost and schedule constrained environment. To
accomplish this goal, our revolutionary sensor concept provides a compact alternative to traditional imaging sensors
that dramatically reduces SWaP, schedule, and integration and test complexity by eliminating the manufacture,
polishing, and alignment of large optics.
We have developed a SPIDER conceptual design for space imaging applications, performed a full image simulation
to quantify its expected performance, and verified in a proof of concept experiment the ability to make the required
complex visibility measurements with extended scene sources. Our future plans include the development of a
PIC-based interferometer testbed and imaging demonstrations with more complex extended scenes. We are also
developing conceptual designs for other nanophotonics EO sensors including compact high spatial resolution
spectrometers, active imaging sensors, and lasercom transmitter/receivers.
6. ACKNOWLEDGEMENTS
This work is funded by Lockheed Martin Advanced Technology Center Internal Research and Development
funding.
7. REFERENCES
1. For a review of interferometric imaging techniques see: T. J. Pearson and A. C. S. Readhead, “Image formation
   by self-calibration in radio astronomy,” Ann. Rev. Astron. Astrophys. 22, 97-130 (1984).
2. N. K. Fontaine, R. P. Scott, L. Zhou, F. M. Soares, J. P. Heritage, and S. J. B. Yoo, “Real-time full-field
   arbitrary optical waveform measurement,” Nature Photonics 4, 248-254 (2010).
3. J. W. Goodman, Statistical Optics (John Wiley and Sons, Inc., New York, 2000).
4. E. Thiébaut and J.-F. Giovannelli, “Image reconstruction in optical interferometry,” IEEE Signal Processing
   Magazine 27, 97-109 (2010).
5. O. Guyon, “Wide field interferometric imaging with single-mode fibers,” Astron. Astrophys. 387, 366-378
   (2002).
6. V. Coudé du Foresto, S. Ridgway, and J.-M. Mariotti, “Deriving object visibilities from interferograms
   obtained with a fiber stellar interferometer,” Astron. Astrophys. Suppl. Ser. 121, 379-392 (1997).