The Fine Scale Optical Range
Peter Mack Grubb
Texas A&M University – Computer Engineering Student
Dr. Tom Pollock
Texas A&M University – Aerospace Engineering Professor
Dr. Chip Hill
Space Engineering Research Center - Director
Abstract
The Fine Scale Optical Range (FiScOR) has been designed and assembled to study the efficacy of on-orbit debris
characterization using small space-based cameras. Physically, this facility permits imaging of small, one to two
centimeter models of simple or complex shapes from a distance sufficiently great to produce image sizes of about
one pixel. The objects are designed in 3D CAD and produced in plastic by 3D printing and CNC milling. They are
then surfaced in such a way as to approximate a white, completely diffuse surface. Details, such as imperfections in
the surfacing of the objects, are achieved to dimensions as fine as 200 micrometers. Mechanisms are provided to
rotate and translate the object. Illumination sources approximating the solar spectrum are used. Light curves are
recorded using CCD or CMOS cameras which may be cooled or at ambient temperature.
For the study reported herein, a few simple objects (cubes, cylinders, etc.) were imaged using a high frame rate color
(Bayer mask) camera. Data obtained were compared with Phong models, and to a limited extent, with night sky
images obtained using the 0.8m telescope near Stephenville, TX and smaller instruments located near College
Station, TX.
Introduction
Currently there is great concern in the Aerospace and Defense communities about the proliferation of
orbital debris[1]. A very small piece of debris in orbit can completely destroy a multi-million dollar satellite just by
being in the wrong place at the wrong time[2]. When this was confined to large objects that were known to be in
orbit, it was of limited concern, as it was very simple to map and calculate the orbits for these objects. However,
with the growth of anti-satellite weaponry[3][4] and the general disrepair of satellites that have been in orbit for
many years, a phenomenon known as “debris fields” has begun to be a problem. While the general orbit of the field
as a whole can be modeled, there are two challenges that make these fields of great concern[5]. Firstly, over time
the field expands outward from the event that originally created it, pushing individual objects into higher or
lower orbits whose positions diverge very quickly. Secondly, there is no established method for
tracking these objects. To establish an accurate orbit for an object, observation across multiple orbits is necessary.
However, accurately identifying a single dot in a changing field of dots is highly problematic. To this end, a method
for consistently identifying individual objects is desired.
One method for identifying objects is by determining their physical properties. Physical properties such as
material composition and object shape rarely change from one orbit to the next. Thus, if an object can be quickly
identified by its physical properties from one orbit to the next, it becomes distinguishable. However, before methods
of quick physical property determination can be developed, accurate light curves for known objects must first be
collected.
Light curves of known objects have multiple sources. One such source is ground based astronomy of
known objects. This provides very accurate data, but is limited to whatever objects happen to be in orbit.
Additionally, it is somewhat impractical, as quickly imaging the wide variety of objects needed for algorithm
development is highly expensive. Thus, a method of accurately imaging known objects in an orbit-like scenario is
needed in order to create accurate light curves for supporting modeling efforts.
To this end, the research team has created The Fine Scale Optical Range. Using small models at the end of
a mirror pathway 46 meters in length, we have created a method of accurately imaging these objects at sizes
analogous to what would be seen in an orbital scenario. This allows imaging of many models in a short amount of
time. These models can be fairly simple or highly complex shapes, with dimensional resolution down to 200
micrometers. These shapes are then translated and/or rotated with continuous imaging by high speed digital cameras
to generate a series of images which can be reduced to a light curve.
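The reduction from an image series to a light curve can be sketched as follows. This is an illustrative Python version, not the actual FiScOR pipeline; the background threshold is an assumed parameter.

```python
import numpy as np

def light_curve(frames, threshold=0.0):
    """Reduce a stack of frames (N x H x W) to a light curve by summing
    the pixel values in each frame that exceed a background threshold."""
    frames = np.asarray(frames, dtype=float)
    masked = np.where(frames > threshold, frames, 0.0)
    return masked.reshape(len(frames), -1).sum(axis=1)

# A toy stack of three 4x4 "frames" whose total brightness rises frame to frame.
frames = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
curve = light_curve(frames)
```

Each model rotation then yields one period of the resulting curve.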
These light curves can be favorably compared to Phong models [6] using various methods. The one used in
this paper is to take the raw data of a Phong model and its corresponding data from FiScOR and calculate the
normalized cross correlation coefficient. These methods can provide a baseline to compare the efficacy of FiScOR
as a method for gathering real data.
This project was inspired by and is a continuation of star tracking activities currently taking place at the
Space Engineering Research Center (SERC). The images of real objects can be scaled and inserted to scenario
imagery generated by the Virtual Sky Imager (VSI) software developed by Dr. Mortari[7]. Because of its origins in
star tracking, FiScOR focuses on small cameras, such as those used in star trackers.
By comparing the results against Phong, we intend to show both that the FiScOR model is accurate in
general and that it picks up key features not found in the Phong model, making it or some analogous facility a
necessity for studying small debris and space objects. Additionally, we intend to show that collecting color data
allows us to study subtleties in the data which are otherwise invisible.
Methods
The fundamental assumption that makes FiScOR possible is that, compared to a wavelength of light, both
the small 1-2 cm models and a full-size object are extremely large. Thus, from the perspective of the light gathered,
the size of the actual object is somewhat unimportant; all that matters is the scaling and composition of the light.
Based on this assumption, FiScOR minimizes the amount of space required to get an accurate light curve.
The system consists of three parts: the model and illumination, the mirror pathway, and the camera. The overall
system is pictured in Fig. 1. Note that the light source is not explicitly pictured; it may be placed in any location on
the table where it does not interfere with one of the image pathways. However, it is recommended that there be
some sort of baffle for the light source to prevent unintended reflections.
Fig. 1. The overall layout of FiScOR is quite compact. The entire room needed for it is only 5 meters by 4 meters,
and the distance from mirror 1 to mirror 2 is about 5 meters.
The model is illuminated by a quartz halogen light source with a fiber optic feed designed to mimic solar
lighting. As a part of setup, the light source was measured using a USB Stellar-Net® spectrophotometer. The
spectrum of the light source is shown in Fig. 2.
[Plot: Intensity (μW/cm²/Δλ), 0 to 160, versus λ (nm), 200 to 1000]
Fig. 2. Spectrum of Light Source. This source emits more light in the near infrared than is found in sunlight,
even with an IR blocking filter in place.
This light source has a baffle and IR filter to limit unintentional reflections off other lab apparatus and
excess photon count, respectively. The light source is pointed at the model as pictured in Fig. 3, which rests on a
stalk mounted on an electric motor, the combined height of which is 10 cm. The model itself can be mounted at any
axis of rotation, and should be approximately 1-2 cm in diameter. Opposite the light source on the far side of the
model is a second light baffle, once again for the purpose of preventing unintended reflections.

Fig. 3. Note the back half of the object is not illuminated.

The models are built using both 3D printers [8][9] and CNC machines. They can be as simple as a cube or
flat sheet like the models in Fig. 4, and as complex as a scale model of a known orbital object such as the ISS or the
Hubble Telescope. These models can be covered in various real materials, so that the table can be used to measure
and compare light curve data from not just shape, but also physical properties. Additionally, the models can be
spray painted for various effects. Wire models for the objects used in this paper, along with their designations, can
be found in Fig. 5.

Fig. 4. The models for this paper have been painted white, with a black stalk supporting them.
Cube 1 | Cube 2 | Cube 3 | Cube 4 | Octagon

Fig. 5. The cubes each rotate about a different axis, providing a variety of light curves for study. The Octagon
provides a contrasting shape type to the similarity of the cubes.
The second part of the system is the mirror path. The motivation behind the folded mirror path is
minimization of floor space. For the models to appear at the appropriate size in the camera field of view (FOV), a
light path approximately 46 meters long is needed. Finding such a space in a lab building is often impractical. Thus,
via a mirror assembly, the entire path is compressed into a room 5 by 4 meters. This reduces costs and makes
running FiScOR more feasible.
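The required path length follows from simple pinhole-camera geometry. The sketch below uses the paper's model size, path length, and 85 mm lens, but the pixel pitch is an assumed value (the true pitch depends on the sensor used):

```python
def image_size_pixels(model_size_m, distance_m, focal_length_m, pixel_pitch_m):
    """Pinhole-camera estimate of how many pixels a model spans:
    image size = object size * focal length / distance."""
    image_size_m = model_size_m * focal_length_m / distance_m
    return image_size_m / pixel_pitch_m

# Illustrative numbers: a 1 cm model, the 46 m folded path, the 85 mm lens,
# and an assumed ~10 micron pixel pitch. The result is on the order of 1-2
# pixels, consistent with the on-orbit image scale targeted by FiScOR.
px = image_size_pixels(0.01, 46.0, 0.085, 10e-6)
```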
This portion of the system is laid out in Fig. 1. It consists of 13 mirrors (numbers 2-14 in Fig. 1), one of
which is pictured in Fig. 6, each 35 by 26 mm; a 30.5 by 77 mm mirror (number 1 in Fig. 1) at the beginning; and an
84 by 121 mm mirror (number 15 in Fig. 1) at the end, attached to a motorized apparatus. Each of the mirrors has an
apodizing mask to eliminate diffraction edges[10][11][12]. The end result is a pathway 46 meters long. For this
implementation, 1/10-lambda first surface mirrors with enhanced coatings were used, giving a very flat surface with
minimal light loss.

Fig. 6. The back of the mirror has the screws that allow for fine adjustment both vertically and horizontally.
The motorized apparatus on the final mirror serves to simulate translational motion on the part of the
model. This apparatus is essentially an eccentric cam system driven by an electric motor. By running the motor, the
final mirror is tilted in a periodic fashion. This allows for simulating translational velocity of the model.
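The reason a small tilt suffices is that a mirror tilted by an angle theta deflects the reflected beam by twice that angle, so the apparent position of the model shifts accordingly. A minimal sketch, with illustrative numbers:

```python
def apparent_displacement(tilt_rad, remaining_path_m):
    """A mirror tilted by theta deflects the reflected beam by 2*theta,
    so the object appears to translate by ~2*theta*d over the path length
    d remaining after the mirror (small-angle approximation)."""
    return 2.0 * tilt_rad * remaining_path_m

# Illustrative: a 0.1 mrad periodic tilt with 1 m of remaining path shifts
# the apparent position by 0.2 mm.
shift = apparent_displacement(1e-4, 1.0)
```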
The final component of FiScOR is the camera itself. Virtually any camera may be used, but accurate light
curve determination requires a large number of images per revolution. If a high speed camera is unavailable, this
can be achieved via electric motors capable of slow turn rates. In the case of FiScOR, the Silicon Imaging®
SI640-HFRGB pictured in Fig. 7 was used. This camera was selected because it is a high frame rate color camera
which uses a Bayer mask[13] and a high speed Camera Link interface. An EPIX PIXCI® EB1 interface card and the
EPIX XCAP® software were used for frame grabbing. This allowed for captures on the order of 400 frames per
revolution of the models in full color. The system is flexible as to lens type, but it was found that large aperture
lenses were needed for the object to be visible. The lens used in this implementation of FiScOR was a Canon®
85mm F1.2 lens.

Fig. 7. The camera is mounted on an articulated head with 3 degrees of freedom.
The reason for the color camera is that a large part of the motivation behind FiScOR is to move beyond
grayscale imagery for object identification. Some materials, such as solar cells or the gold Multi-Layer Insulation
(MLI) used in satellites, have very distinctive color signatures. A monochrome camera will often lose details such as
these, rendering objects indistinguishable. By using a color camera, objects of the same shape with different real
material coverings can be imaged, and their R, G and B data compared separately. This provides a potential avenue
for material determination that has not been fully explored.
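Separating the channels of a Bayer-masked sensor amounts to summing each color's pixel sites independently. The sketch below assumes an RGGB mosaic layout; the actual layout of the SI640-HFRGB may differ, and this is not the authors' code:

```python
import numpy as np

def channel_curves(frames):
    """Per-channel light curves from raw Bayer frames (N x H x W),
    assuming an RGGB mosaic: sum each color's pixel sites separately."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    r = frames[:, 0::2, 0::2].reshape(n, -1).sum(axis=1)
    g = (frames[:, 0::2, 1::2].reshape(n, -1).sum(axis=1)
         + frames[:, 1::2, 0::2].reshape(n, -1).sum(axis=1))  # two green sites
    b = frames[:, 1::2, 1::2].reshape(n, -1).sum(axis=1)
    return r, g, b

# One 2x2 raw frame: under the assumed RGGB layout, R=1, G=2+3, B=4.
r, g, b = channel_curves(np.array([[[1.0, 2.0], [3.0, 4.0]]]))
```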
In addition to the data gathered using FiScOR, Earth based observations were made using the 0.8m
telescope at Tarleton State University. These images were used as references to provide a baseline for the accuracy
of FiScOR.
The other type of data gathered for comparison against FiScOR was Phong models. These models were
built in DesignCAD 3D Max versions 18 and 19, and then illuminated using the Phong model built into the
software, as shown in Fig. 8. The models were then rotated in 2 degree increments, with the screen captured at each
step. The light source can then be moved to a new location and the image gathering repeated. This process was
scripted using the open source tools AutoHotkey and Greenshot.
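For reference, the Phong model [6] that DesignCAD's renderer implements combines ambient, diffuse (Lambertian), and specular terms. A minimal single-point sketch, with illustrative coefficients (not the values used by DesignCAD):

```python
import numpy as np

def phong_intensity(normal, to_light, to_viewer,
                    ka=0.1, kd=0.7, ks=0.2, shininess=10.0):
    """Phong illumination [6] at one surface point: ambient + diffuse
    (Lambert's cosine law) + a specular lobe about the mirror reflection
    of the light direction."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    l = np.asarray(to_light, float) / np.linalg.norm(to_light)
    v = np.asarray(to_viewer, float) / np.linalg.norm(to_viewer)
    n_dot_l = float(np.dot(n, l))
    if n_dot_l <= 0.0:                 # facet turned away from the light
        return ka
    r = 2.0 * n_dot_l * n - l          # mirror reflection of the light
    specular = max(float(np.dot(r, v)), 0.0) ** shininess
    return ka + kd * n_dot_l + ks * specular
```

Note the model is purely local: light reflected specularly from one facet is never re-scattered diffusely, which matters in the Discussion below.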
Fig. 8. DesignCAD 3D Max allows for accurate 3D model generation and lighting.

Results
Once the data was gathered from both FiScOR and the Phong models, it was analyzed using Matlab. This
analysis took two forms: normalized cross correlation coefficients, and matched graphs. For the cross correlation
coefficients, the Matlab function xcorr() was used. However, for cross correlation to have any meaning, the time
step in the data must be equal. Thus, prior to running xcorr(), the data was up-sampled using the Matlab function
interpolate() until both the Phong model data and the FiScOR data had the same number of samples. Additionally,
the data sets were normalized by dividing each element by the integral of the data set, giving an area under the
curve of 1 in the resulting set. The results are recorded in Tables 1 through 4.
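The pipeline above can be sketched in Python as follows. This is an illustrative equivalent of the Matlab interpolate()/xcorr() steps described, not the authors' script; the discrete sum stands in for the integral:

```python
import numpy as np

def normalized_xcorr_peak(phong, measured):
    """Linearly up-sample the shorter curve to the longer one's length,
    normalize each to unit area, then report the peak of the normalized
    cross correlation."""
    a = np.asarray(phong, float)
    b = np.asarray(measured, float)
    n = max(len(a), len(b))
    grid = np.linspace(0.0, 1.0, n)
    a = np.interp(grid, np.linspace(0.0, 1.0, len(a)), a)
    b = np.interp(grid, np.linspace(0.0, 1.0, len(b)), b)
    a /= a.sum()                      # unit area under each curve
    b /= b.sum()
    c = np.correlate(a, b, mode="full")
    return float(c.max() / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Two samplings of the same waveform at different rates correlate near 1.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 200)) + 2.0
coeff = normalized_xcorr_peak(x, x[::2])
```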
Table 1. 10 deg Horizontal, 16.5 deg Altitude

             Red       Green     Blue      White
  Cube 1     0.999981  0.999948  0.999947  0.999972
  Cube 2     0.999982  0.999938  0.99993   0.999979
  Cube 3     0.999979  0.999937  0.999935  0.999971
  Cube 4     0.999971  0.999883  0.999848  0.999913
  Octagon    0.999967  0.999881  0.999826  0.999904

Table 2. 45 deg Horizontal, 16.5 deg Altitude

             Red       Green     Blue      White
  Cube 1     0.999976  0.999912  0.999891  0.99995
  Cube 2     0.999979  0.999931  0.999907  0.999979
  Cube 3     0.999969  0.999828  0.999787  0.999893
  Cube 4     0.999979  0.999954  0.999927  0.999963
  Octagon    0.999942  0.999906  0.999859  0.999908
Table 3. 90 deg Horizontal, 16.5 deg Altitude

             Red       Green     Blue      White
  Cube 1     0.999992  0.999989  0.999949  0.999991
  Cube 2     0.999989  0.999975  0.999974  0.99999
  Cube 3     0.999991  0.999975  0.99996   0.999983
  Cube 4     0.999982  0.999981  0.999974  0.999986
  Octagon    0.999991  0.999964  0.999971  0.999982

Table 4. 135 deg Horizontal, 16.5 deg Altitude

             Red       Green     Blue      White
  Cube 1     0.999993  0.99999   0.999987  0.999991
  Cube 2     0.999984  0.999976  0.999968  0.99998
  Cube 3     0.999991  0.999971  0.999959  0.999977
  Cube 4     0.999976  0.999974  0.99997   0.999976
  Octagon    0.999996  0.999996  0.999993  0.999997
The other method used for comparing the data was visual. A Matlab script was created to graph each color
against its corresponding Phong model. First, the script took the interpolated data that was input to xcorr() and
found the difference in their relative magnitudes. Based on this factor, the Phong model was scaled to match the
real data. Then, the means of the two distributions were aligned. This allows the data to be compared graphically,
giving a more detailed picture of what the data means. To show the periodic nature of the data, two full revolutions
are pictured in the graphs below. The graphs for Cube 2 and the Octagon are included in Figs. 9-16.
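The scale-and-align step can be sketched as below. This is an illustrative Python version of the graphing alignment described above; the exact magnitude factor used in the original Matlab script may differ (peak-to-peak range is assumed here):

```python
import numpy as np

def align_for_plotting(phong, measured):
    """Scale the Phong curve to the measured curve's peak-to-peak range,
    then shift it so the two means coincide, so both can be overlaid on
    one set of axes."""
    p = np.asarray(phong, float)
    m = np.asarray(measured, float)
    scale = np.ptp(m) / np.ptp(p)              # match relative magnitudes
    p_scaled = p * scale
    return p_scaled + (m.mean() - p_scaled.mean())  # align the means

# A Phong curve with the same shape but different scale and offset
# lands exactly on top of the measured curve after alignment.
m = np.array([1.0, 3.0, 5.0, 3.0])
p = np.array([10.0, 20.0, 30.0, 20.0])
aligned = align_for_plotting(p, m)
```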
Fig. 9. Cube 2 -10 deg Horizontal, 16.5 deg Altitude
Fig. 10. Cube 2 -45 deg Horizontal, 16.5 deg Altitude
Fig. 11. Cube 2 - 90 deg Horizontal, 16.5 deg Altitude
Fig. 12. Cube 2 -135 deg Horizontal, 16.5 deg Altitude
Fig. 13. Octagon -10 deg Horizontal, 16.5 deg Altitude
Fig. 14. Octagon - 45 deg Horizontal, 16.5 deg Altitude
Fig. 15. Octagon - 90 deg Horizontal, 16.5 deg Altitude
Fig. 16. Octagon -135 deg Horizontal, 16.5 deg Altitude
Discussion
The cross correlation coefficients were quite high, virtually indistinguishable from a perfect 1 in many
cases. This was initially a cause for concern, as it does not mesh with the disparities apparent in the graphs. For
example, in Fig. 17 there are a series of peaks in the Phong data where the image data actually dips, yet the white
light in this example has a cross correlation coefficient of 0.999908. This seems counterintuitive, but it is important
to remember what cross correlation measures. Because cross correlation compares the curves against each other at
every point, the coefficient represents a best-case measure of how alike the curves are, assuming their magnitudes
and ranges are similar.
Taking this into account, the two sets of light curves are remarkably similar. Based on this data, we were
able to actively fine tune the Phong materials to get “measured Phong parameters” for the objects used. From this set
of results alone, we can see that FiScOR is accurately measuring object illumination in aggregate.
However, if the two sets of data were exactly identical, there would be no sense in using FiScOR. While
the data sets are on the order of 99% similar, the graph comparisons show that there are individual features in the
real data that are completely missing from Phong. For example, in Fig. 12 the peaks in Phong are actually in better
phase with the dips of the real data, especially for the red spectrum. This is because Phong does not account for the
diffuse reflection of specular light. When the light source is at 10 degrees, the diffuse reflected intensity is great
enough that this fraction is insignificant. However, when the light source is at 135 degrees, this component forms a
significant portion of the light and, in the case of the red light, is actually more significant than the diffuse
component which the Phong model sees.
These graphs show that as the light source moves around towards the back of the object, the Phong model
becomes less indicative of reality. Thus, to accurately measure all the light angles associated with small objects in
orbit, something like FiScOR is needed. While the Phong data is accurate in aggregate, the specific features which
would be useful in object identification are not necessarily well predicted by the Phong model.
Another point of interest in the graphs is the differences between the color spectra. While the general
waveform was quite similar, there were features in each color spectrum not present in the others. For example, in
Fig. 13 the green and blue maxima occur in different peaks. If the object is white, the Phong model assumes that its
reflection is identical in all three spectra. The data above show this is clearly not true, thus necessitating the use of a
system like FiScOR.
Note that the magnitude of the red light captured is low across all the graphs. This is due to the specific
brand of white paint used; most white paints are not true white, as was clearly the case here.
Conclusion
The FiScOR system performed extremely well. When the data was compared in aggregate with Phong
models via the cross correlation coefficient, the results were found to be quite close. However, the FiScOR graphs
were found to have features not present in the Phong models. Phong’s inability to model the diffuse reflection of
specular light became quite significant in situations with dim objects.
Our process also found that results differ between the colors, even for white objects. These subtleties could
prove to be quite important when trying to identify objects from the light curves. However, if the images are done in
greyscale, these subtleties are lost.
The next step with this project is to do further work with FiScOR and alternative material types. The color
data will probably be more significant when different material types such as MLI or solar panels are imaged. Thus,
the next step is to investigate these differences. A further evolution of this paper involving additional materials will
be presented at the Advanced Maui Optical and Space Surveillance Technologies Conference by the same authors.
Acknowledgements
None of this project would have been possible without the lab facilities and equipment provided by the
Space Engineering Research Center. In addition, thanks are due to the Research Opportunities for Engineers
Scholarship program, which funded one of the primary authors. Finally, a special thank you is due to Dr. Christian
Bruccoleri for Matlab coding and general mentorship throughout the project.
References
1. David S. F. Portree, Joseph P. Loftus, Jr., Orbital Debris: A Chronology. NASA publication NASA/TP-1999-208856, 1999.
2. V.M. Smirnov et al, "Study of Micrometeoroid and Orbital Debris Effects on the Solar Panels Retrieved from the
Space Station 'MIR'", Space Debris, Volume 2 Number 1 (March, 2000), pp. 1 – 7. doi:10.1023/A:1015607813420
3. William J. Broad and David E. Sanger. “Flexing Muscle, China Destroys Satellite in Test.” New York Times.
January 19th, 2007. <http://www.nytimes.com/2007/01/19/world/asia/19china.html?_r=1&pagewanted=all>
4. "Debris from explosion of Chinese rocket detected by University of Chicago satellite instrument", University of
Chicago press release, 10 August 2000.
5. Antony Milne, Sky Static: The Space Debris Crisis, Greenwood Publishing Group, 2002, ISBN 0-275-97749-8, p.
86.
6. B. T. Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317.
7. Ettouati, I., Mortari, D., and Pollock, T.C. "Space Surveillance Using Star Trackers: Simulations," Paper AAS
06-231 of the 2006 AAS Space Flight Mechanics Meeting Conference, Tampa, FL, January 22-26, 2006
<http://aeweb.tamu.edu/mortari/pubbl/2006/Space%20Surveillance%20using%20Star%20Trackers%20%20Part%20I%20-%20Simulations.pdf>
8. Chee Kai Chua, Kah Fai Leong, Chu Sing Lim (2003). Rapid Prototyping. World Scientific. p. 124. ISBN 978-981-238-117-0.
9. ASTM F2792-10 Standard Terminology for Additive Manufacturing Technologies, copyright ASTM
International, 100 Barr Harbor Drive, West Conshohocken, PA 19428.
10. C. Harvey Palmer, Optics: Experiments and Demonstrations, (The Johns Hopkins Press, Baltimore, 1962), pp.
210-212.
11. Grant Fowles, Introduction to Modern Optics, 2nd ed. (Dover Publications, Inc. New York, 1989), pp. 138-139.
12. P. Jacquinot and B. Roizen-Dossier, “Apodisation,” in Progress in Optics, Vol. 3, edited by E. Wolf
(Amsterdam, North Holland Publishing Company and New York, John Wiley & Sons, 1964), pp. 29-186
13. Carl Zeiss Microscopy. "Bayer mask." <http://www.zeiss.de/c1256b5e0047ff3f/Contents-Frame/777d91c572c7904dc1257068004fcbd7>