Understanding Digital Images:
Did you ever go to a football game where the fans, sitting in a special section of the
arena, were assigned special cards? These cards are numbered in sequence and assigned
to fans sitting in specific seats. When given the signal, the fans hold up their cards over
their heads. The effect is to produce an image that can be seen from across the stadium.
The individual cards contain only a small piece of the total image. It takes all of the
cards held up together to produce the full image.
Digital images are very much like the stadium cards. Instead of cards, digital images are made up of many, many pixels (picture elements). When you view a digital image on your computer, it is like looking at the stadium cards from across the stadium. (Another analogy would be the individual tiles that make up a mosaic.) With a pair of binoculars in the stadium, you can zoom in and see individual fans holding up their cards.
In a computer program, you can zoom in on the image and see individual pixels.
Computers have changed the way we look at images. We can have the computer leave
out certain pixels (crop), or zoom in on certain pixels (magnify), or change a specific
color into a new color. Each pixel can be assigned a number value (RGB or HSV), and that numeric data can be transmitted over the Internet, telephone lines, or satellite links. Stacks of images can be stored as animations and displayed at the movies or on your television or computer monitor.
When a computer displays a digital image, it looks at the correct “column, row, and seat
number” assigned to each pixel. It places each pixel in its correct location. A row of
pixels is referred to as a raster. (Images made up of pixels are sometimes referred to as
raster graphics.) Next, the computer assigns a color based on the number value
contained within each pixel. The result is an image on your screen. Not all digital images
are processed the same way, which explains why some images are in the “jpg” format while others are in the “gif” format. If you print out a digital image, the information is sent to the printer, which carries out a similar process.
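
To make this concrete, here is a minimal sketch in Python with NumPy (an illustration of the idea, not something from the original lesson): an image is just a grid of numbered pixels placed by row and column.

    import numpy as np

    # A tiny 2x3 RGB image: each pixel is a (red, green, blue) triple of
    # 8-bit values, addressed by its row and column ("seat number").
    image = np.zeros((2, 3, 3), dtype=np.uint8)
    image[0, 0] = (255, 0, 0)   # row 0, column 0: pure red
    image[0, 1] = (0, 255, 0)   # row 0, column 1: pure green
    image[1, 2] = (0, 0, 255)   # row 1, column 2: pure blue

    # A "raster" is one row of pixels.
    print(image[0])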
Not all digital images are the same quality. The quality of an image depends in part on its resolution. Resolution is determined by how large or small the pixels are. Generally, the smaller an image’s pixels, the higher the resolution. Another way of looking at it is that, for the same scene, a higher-resolution image contains more pixels.
So if more pixels mean higher resolution, why aren’t all digital images high resolution? Is
there ever a time when you may need an image with a lower resolution? It usually comes down to file size. Digital images can require a great deal of computer memory to store and transmit. Large digital images may slow down transmission through faxes and the Internet.
The total number of pixels in an image is only one factor in calculating file size; the other is color depth. Color depth refers to how the computer processes
the color information contained in the pixel. Color depth is usually referred to in units of
bits, the basic unit of computer memory (24-bit color, 8-bit color, 8-bit grayscale, 1-bit
black and white). The more bits/pixel an image contains, the more computer memory it
requires. The size of the digital image is referred to as the Raw File Size and is calculated by multiplying the number of pixels by the color depth in bits per pixel.
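
As a worked example, here is that calculation in plain Python (the image dimensions are made up for illustration):

    # Raw file size = number of pixels x bits per pixel (no compression).
    width, height = 1024, 768     # pixels (hypothetical image)
    bits_per_pixel = 24           # 24-bit color: 8 bits each for R, G, B

    raw_bits = width * height * bits_per_pixel
    raw_bytes = raw_bits // 8     # 8 bits per byte
    print(raw_bytes)              # 2,359,296 bytes, about 2.25 megabytes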
To further complicate the topic, computer programs often compress or “squeeze out”
unnecessary information from an image to make the file size smaller. Compression is fine for helping images on a web page load faster, but it may have unwanted results if you want to analyze the image in a scientific way.
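
One way to see this for yourself (a sketch, assuming the Pillow imaging library is installed; the file name and image data are hypothetical):

    import numpy as np
    from PIL import Image

    # Save a grayscale image as lossy JPEG, then reload it.
    original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    Image.fromarray(original).save("example.jpg", quality=50)
    reloaded = np.asarray(Image.open("example.jpg"))

    # JPEG discards detail, so pixel values no longer match exactly --
    # harmless on a web page, but a problem for scientific measurement.
    print(np.count_nonzero(original != reloaded), "pixel values changed")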
Humans see in the visible range of the electromagnetic spectrum. Our ordinary digital
cameras are sensitive to the colors of light reflected off our subjects. It should be mentioned here that not all cameras (sensors) “see” visible light. Many scientific instruments use other wavelengths of energy, such as infrared, ultraviolet, and X-rays, to produce digital images. These images contain useful information such as temperature,
elevation and moisture content. These special sensors or cameras can be attached to
telescopes, satellites, or microscopes and help us to visualize the world from afar. This
leads us to our next topic: Remote Sensing.
Remote Sensing
Remote sensing is the ability to analyze and measure phenomena from a distance.
Various types of instruments are used to obtain information, usually in the form of a
digital photograph or digital image. Each instrument used to gather information varies in
the type of sensors it uses. Instruments may measure wavelengths anywhere in the
electromagnetic spectrum. The type of sensor being used will directly affect the spatial resolution of the image. The greater the resolution, the better one is able to measure and observe from a distance. Some remote sensing instruments allow us to visualize things far away
(Satellites, Telescopes, GPS) while other instruments allow us to look deep inside (X-ray,
MRI, Electron Microscopy). Advances in science continue to broaden the definition of
remote sensing with new techniques such as DNA fingerprinting, electrophoresis, and
molecular spectroscopy. Even everyday technology such as TV, radio, Internet and cell
phones can be considered forms of remote sensing.
While remote sensing allows us to gather information from a distance, image processing
allows us to use computers to analyze these images or data. Although the image may
look like a black & white or color photograph, the colors or other wavelengths are
represented by digital numbers (DN). These numbers make up the digital image: each piece of information (color, temperature, etc.) is converted into a number, and each number is displayed on the monitor as a pixel value. These pixels are the dots of color that give the image its structure.
We will use Scion Image software to analyze and process the digital numbers (DN)
contained in digital images. The software will allow you to analyze digital images based
upon the differences in the numbers (DN). Some of the techniques possible with Scion
Image are measuring, counting, density slicing, particle analysis, animation, analyzing Digital Elevation Maps (DEMs), and pseudo-coloring. (Scion Image is a very powerful software package capable of many different processing techniques; for additional information, refer to the user manual that comes with the software. The following techniques serve only as an introduction.)
Measuring:
Measuring distance and area is easily accomplished using satellite images and Scion Image software. Each pixel in a NOAA weather satellite image covers 16 square miles (4 miles x 4 miles). We know this because scientists know the distance from Earth to the satellite and the resolution of the sensors it carries. When the Scion Image software is calibrated (Set Scale) to the known size of one pixel, its measuring tools report real-world values. Scientists can measure the speed, direction, and size of storms.
Oil spills can be seen from space and the rate of spread calculated. Doctors can measure
the size and growth of tumors using MRI or other medical images.
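
Scion Image does this calibration through its Set Scale command; the sketch below shows the same arithmetic in Python (the pixel counts are hypothetical):

    # Calibration: each NOAA pixel covers 4 miles x 4 miles.
    MILES_PER_PIXEL = 4

    # Hypothetical measurements read off the image, in pixels.
    storm_diameter_px = 55     # length of a line drawn across a storm
    oil_slick_area_px = 1200   # count of pixels covered by a slick

    print(storm_diameter_px * MILES_PER_PIXEL, "miles across")
    print(oil_slick_area_px * MILES_PER_PIXEL**2, "square miles of slick")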
Histograms:
The histogram of an image is a statistical graph showing the shades of gray (DN/pixel values) present and the frequency at which each occurs in the image. In other words, a histogram is a graph in which the height of the bar above each pixel value is proportional to the number of pixels having that value. Plotting a
histogram would let a scientist calculate the most common temperature in an area or the
average elevation in a mountain range.
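
Scion Image can plot this directly; as an illustration, here is a minimal histogram computation in Python with NumPy (the image data are random placeholders):

    import numpy as np

    # Hypothetical 8-bit grayscale image (DN values 0-255).
    image = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

    # Count how often each of the 256 possible pixel values occurs.
    counts = np.bincount(image.ravel(), minlength=256)

    print("most frequent DN:", counts.argmax())  # e.g. most common temperature
    print("mean DN:", image.mean())              # e.g. average elevation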
Density Slicing:
Density slicing gives you the ability to highlight a range of pixel values in an image using the LUT (Look Up Table). Pixel values range from 0 (absolute white) to 255 (absolute black). Once the pixel values of interest are colorized, those pixels stand out clearly among the others in the image. Simply put, density slicing makes some
areas of the image more easily seen by assigning them a unique color. One application
would be to make all the areas of a particular temperature be one assigned color.
In addition to being able to see parts of an image more clearly, density slicing also allows
us to quickly isolate some parts of an image from others. Once the area is isolated,
measurements can be made of it using particle analysis. Doctors use density slicing to visually isolate a tumor and then measure its area. Repeating this procedure over time allows them to track changes in its growth.
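
In Scion Image the slice is set interactively with the LUT tool; the sketch below shows the underlying idea in Python with NumPy (the image and the DN range of interest are invented for illustration):

    import numpy as np

    # Hypothetical grayscale image; suppose DN values 120-140 correspond
    # to the temperature range we want to highlight.
    image = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
    lo, hi = 120, 140

    # Density slice: colorize pixels inside the range, leave the rest gray.
    in_slice = (image >= lo) & (image <= hi)
    rgb = np.stack([image, image, image], axis=-1)  # grayscale -> RGB
    rgb[in_slice] = (255, 0, 0)                     # slice shown in red

    print(in_slice.sum(), "pixels fall in the slice")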
Particle Analysis:
Particle analysis will allow one to highlight certain pixel values and take measurements.
The pixels in question can be isolated manually or by using density slicing. Once the program is calibrated, real-world values can be computed for the isolated pixels.
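
As an illustration of the idea (a sketch in Python, assuming NumPy and SciPy; the mask and the calibration are hypothetical):

    import numpy as np
    from scipy import ndimage

    # Hypothetical binary mask from density slicing (True = particle).
    mask = np.zeros((50, 50), dtype=bool)
    mask[5:10, 5:10] = True        # one "particle"
    mask[30:33, 40:45] = True      # another

    # Label each connected region, then measure its area in pixels.
    labels, count = ndimage.label(mask)
    areas_px = ndimage.sum(mask, labels, index=range(1, count + 1))

    MILES_PER_PIXEL = 4            # calibration applied to the isolated pixels
    print(count, "particles, areas:", areas_px * MILES_PER_PIXEL**2, "sq miles")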
Density Calibrating an Image:
Sometimes scientists know values for some pixels and not for others. The values of the
known pixels can help scientists to calculate the unknown values. One example is
temperature. If scientists know the temperature in two different areas, such as Raleigh,
NC and in New York City, they can produce a linear regression plot to calculate
temperatures throughout the range. The same idea could also be applied in measuring
altitude (Digital Elevation Maps). Density calibrating an image mathematically relates pixel values to the real-world quantities they represent.
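
Here is a minimal sketch of that linear regression in Python with NumPy (the DN values and temperatures are invented for illustration):

    import numpy as np

    # Known calibration points: DN values at two sites and the
    # temperatures measured there (hypothetical numbers).
    known_dn = np.array([80, 200])        # e.g. Raleigh, NC and New York City
    known_temp = np.array([72.0, 58.0])   # degrees Fahrenheit

    # Fit a straight line relating DN to temperature.
    slope, intercept = np.polyfit(known_dn, known_temp, deg=1)

    # Any pixel's DN can now be converted to an estimated temperature.
    dn = 150
    print(round(slope * dn + intercept, 1), "degrees F at DN", dn)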
Digital Elevation Maps:
Some remote sensors are capable of measuring and recording elevation (Digital Elevation Maps, or DEMs). These images may appear fuzzy or out of focus, and looking at a DEM with our eyes alone provides little usable information. Computer programs such as Scion Image can produce “3D” images from DEMs to help us see the mountains and valleys. These 3D images can be relative (qualitative) or quantitative when the correct scale is set in the program. Scientists could calculate the
surface topography of the ocean floor or the surface of a distant planet using digital
elevation maps.
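
A sketch of the same idea outside Scion Image, rendering a synthetic DEM as a 3D surface in Python (assuming NumPy and Matplotlib; the terrain is a made-up single peak):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical DEM: each pixel value is an elevation in meters.
    x, y = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
    elevation = 1000 * np.exp(-(x**2 + y**2))   # one synthetic mountain

    # Render the flat, "fuzzy" grid as a 3D surface we can interpret.
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(x, y, elevation, cmap="terrain")
    ax.set_zlabel("Elevation (m)")
    plt.show()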
Animation:
Animations can be produced by sequencing a number of satellite or other images into one
long, repeating file. Computer-generated animations allow us to see changes in weather patterns, movements of oil slicks, melting of glaciers, or deforestation over time.
Almost every local TV weather broadcast includes short animations of how storms are
moving. One important application of quantified animations is in making predictions.
When will the oil slick strike the shore? Where will the hurricane strike land?
It is also possible to take an animation and “un-stack” it into its individual frames.
This lets scientists study and analyze more carefully how processes are occurring. Sometimes
the individual frames can be colorized or quantified and then “re-stacked” back into an
animation. The new animation provides information we could not have seen before.
Weather patterns are often colorized to show temperature or precipitation.
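
In Scion Image frames are handled as a stack; the sketch below mimics the un-stack, quantify, re-stack cycle in Python with NumPy (the frames are random placeholders):

    import numpy as np

    # Hypothetical stack: 10 frames of a 64x64 satellite sequence.
    stack = np.random.randint(0, 256, (10, 64, 64), dtype=np.uint8)

    # "Un-stack" into individual frames for closer analysis...
    frames = [stack[i] for i in range(stack.shape[0])]

    # ...quantify each frame (here, its mean DN, e.g. average cloud cover)...
    for i, frame in enumerate(frames):
        print("frame", i, "mean DN:", round(frame.mean(), 1))

    # ...then "re-stack" the frames into a new animation.
    restacked = np.stack(frames)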
Pseudo-color or False Color Images:
Many of the remote sensors produce digital images that are grayscale. In a grayscale
image pixel values range from 0 (absolute white) to 255 (absolute black). Computer
programs such as Scion Image can “reassign” new colors to the pixels. Scion Image uses
a LUT (color Look Up Table) tool to assign colors to the range of pixel values.
We see pseudo-color images every day. Most newspapers have a weather map of the
United States with temperatures displayed as color (the warmer areas are red and the
colder areas are blue).
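
Scion Image applies such colors through its LUT tool; as an illustration, here is a minimal LUT in Python with NumPy, echoing the weather-map convention of blue for cold and red for warm (the table and image are made up):

    import numpy as np

    # A LUT is a 256-entry table mapping each gray value to an RGB color.
    lut = np.zeros((256, 3), dtype=np.uint8)
    lut[:, 0] = np.arange(256)          # red rises with pixel value
    lut[:, 2] = 255 - np.arange(256)    # blue falls with pixel value

    # Applying the LUT: index the table with the grayscale image.
    gray = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
    pseudo_color = lut[gray]            # a (100, 100, 3) RGB image
    print(pseudo_color.shape)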