Monday, May 5, 2014

Remote Sensing Lab 8


The main goal of this lab is to look at spectral reflectance signatures of different earth surfaces captured by satellite sensors. This lab had three parts. First, it showed how to collect the spectral signatures of features from an image. Then I learned how to graph those signatures, which lets you compare them to each other. The final part of the lab was about interpreting those graphed signatures to determine what kind of land feature each one represents. These spectral signatures can be used for many purposes. One of the biggest is tracking crop health in agriculture. It makes the growing process much less expensive for large farms and much more efficient: instead of paying many workers to go out into the fields to check on the crops, growers can use satellite images to look at crop health. Many things about the crops can be tracked this way, including the moisture content of the soil and plants, the health of the plants, weed invasions, the age of the plants, and how much organic matter is in the soil.

How do you find a spectral reflectance signature? It is pretty easy and straightforward. You take your aerial or satellite photo and use the drawing tool in Erdas Imagine. The polygon tool lets you create a shape on the image; this shape is the area the spectral signature will correspond to. In the case of a farm, you would draw the shape around the field, or around a section of a field you are interested in, and the program will show its signature.

Once you have taken the spectral signature of an area, you can and should graph it to see it in an easy-to-read way. The signature is simply a line graph across the blue, green, red, near-infrared, and middle-infrared bands. It shows the amount of reflectance in each band, and based on those amounts you can get a pretty good idea of what kind of feature is in that area.
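As a rough sketch of what the graphing step works from (the pixel values below are made up, not real sensor data), the signature of an area is just the mean brightness of each band over the pixels inside the polygon:

```python
import numpy as np

# Hypothetical 5-band image chip (rows x cols x bands): blue, green,
# red, near-infrared, middle-infrared. Values are illustrative only.
chip = np.array([
    [[30, 50, 40, 180, 90], [28, 52, 38, 175, 88]],
    [[32, 48, 42, 185, 92], [29, 51, 39, 178, 89]],
], dtype=float)
bands = ["blue", "green", "red", "near-IR", "mid-IR"]

# The signature is the mean brightness of each band over all pixels
# inside the polygon (here, the whole chip).
signature = chip.reshape(-1, chip.shape[-1]).mean(axis=0)
for name, value in zip(bands, signature):
    print(f"{name}: {value:.2f}")
```

Plotting `signature` against the band list gives the line graph described above; the high near-infrared value relative to red is the classic vegetation shape.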

Figuring out what the spectral signatures represent is the interpretation aspect of this lab. Roads, parking lots, lakes, rivers, farm fields, forests, and all other types of land cover and features have their own unique signatures. Knowing what the signatures look like for these features allows us to interpret aerial images better. Sometimes this can be challenging, however. There are so many different kinds of plants and vegetation that the spectral signatures for several kinds can look very similar. This is where hyperspectral sensors and images should be used. Hyperspectral sensors are much more sensitive and collect many more kinds of data, and those extra data can be the key to finding the very small differences between signatures that look nearly identical. Being able to gather, read, and map spectral signatures is a very valuable skill.

 
These are multiple spectral signatures for various kinds of land cover and features.
Sources:
Erdas Imagine 2013

Monday, April 28, 2014

Remote Sensing Lab 7


The main goal of this lab was to develop my skills with photogrammetric tasks on aerial photographs and satellite images. In particular, this lab lays out and helps us understand the mathematical side of calculating photographic scales, measuring areas and perimeters of different features, and calculating relief displacement. Other skills introduced and explained in this lab were stereoscopy and performing orthorectification on satellite images. By the end of the lab I was able to perform difficult and complex photogrammetric tasks.

The first thing I did in the lab was calculate the scale of a vertical aerial photograph. I did this by measuring between two points on the photo with a ruler and comparing that to the real-life distance between those two points. Once I knew both distances, all I had to do was divide the real-life distance by the distance I measured on the photo, which gives the scale of the photo (as 1:x). There is another method as well, which uses the focal length of the camera, the altitude at which the photo was taken, and the elevation of the terrain in the image. To find the scale using this method you take the focal length and divide it by the altitude minus the elevation.
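As a quick numeric sketch of both methods (all distances and camera values below are invented for illustration, not the lab's actual measurements):

```python
# Two ways to estimate the scale of a vertical aerial photo.

# Method 1: compare a distance measured on the photo to the matching
# real-life ground distance between the same two points.
photo_distance_m = 0.05        # 5 cm measured on the photo, in metres
ground_distance_m = 2000.0     # real-life distance between the points
scale_denominator = ground_distance_m / photo_distance_m
print(f"Scale ~ 1:{scale_denominator:.0f}")   # Scale ~ 1:40000

# Method 2: focal length over flying height above the terrain,
# scale = f / (H - h).
focal_length_m = 0.152         # 152 mm camera focal length
flying_altitude_m = 6200.0     # altitude the photo was taken from
terrain_elevation_m = 120.0    # elevation of the ground in the photo
scale = focal_length_m / (flying_altitude_m - terrain_elevation_m)
print(f"Scale ~ 1:{1 / scale:.0f}")           # Scale ~ 1:40000
```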

We can also use aerial photos to find the areas and perimeters of large land features. In this case I found the area and perimeter of a large lagoon. I did this using a method called digitizing: you take a tool and trace the outside of the lagoon, and the program Erdas Imagine calculates the area and perimeter for you.

The next thing I did was calculate the relief displacement of an object in an image from the object's height. In the lab I found the displacement of a smoke stack. To do this I needed three things: the real-life height of the object (which you get by taking its height in the image and multiplying by the scale), the height from which the photo was taken, and the distance between the principal point and the top of the object. You then take the height of the object, multiply it by the distance from the principal point to the top of the object, and divide that number by the height the photo was taken from; this gives the displacement, or how far inward or outward the object leans. If the displacement is a positive number you correct it by moving the top of the object toward the principal point, and the opposite for negative values.
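The calculation itself is short. Here it is as a sketch, d = h × r / H, with invented numbers rather than the lab's actual smoke stack measurements:

```python
# Relief displacement of a vertical object (e.g. a smoke stack).
object_height_m = 120.0    # real-life height of the object (h)
radial_distance_m = 0.08   # principal point to object's top on the photo (r)
flying_height_m = 4000.0   # height the photo was taken from (H)

# d = h * r / H, in the same units as r (metres on the photo here).
displacement = object_height_m * radial_distance_m / flying_height_m
print(f"Displacement on the photo: {displacement * 1000:.1f} mm")

# A positive value means the top is displaced away from the principal
# point, so the correction moves it back toward the principal point.
```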

The second part of the lab was about stereoscopy. The main idea of this section was taking two 2D photos and combining them to create one 3D photo, called an anaglyph. 2D images only have x and y coordinates, but by combining a photo with a DEM, or digital elevation model, you can add the z coordinate, which gives the 3D effect.

This is the anaglyph. If you use 3D glasses you will see the different elevation features contained in the image.
The last part of the lab was about orthorectification. This process is quite time consuming, but the results you get are very accurate. The whole point is to get two images to match up spatially and be spatially accurate. You do this using ground control points, just like in previous lab exercises, but this method uses many more GCPs, which increases the accuracy. In previous exercises I was only correcting x and y errors; this method lets you correct x, y, and z errors. The process is quite difficult to explain, so I will not go into too much detail, but the basic idea is that you use one image to correct the x and y errors and then a second image or a DEM to correct the z errors. Below I have before and after images of the technique. It is very obvious how well this method works for spatial correction.
These are two images before they are corrected. You can see how far off the same places in each photo are.
 
This is a slider shot of the two photos after they have been corrected; you can see how well every land feature matches up.
This is the corrected image as well.

Sources:
Erdas Imagine 2013

Friday, April 18, 2014

Remote Sensing Lab 6

This lab was all about an image preprocessing technique called geometric correction. There are two major types of this technique, and this lab was designed to develop and improve my skills in using them. The two kinds of geometric correction are image-to-map rectification and image-to-image registration. Both are applied to images before the images are read or interpreted, to ensure the accuracy of the image when it is interpreted.
 
The first kind of geometric correction I performed in the lab was image-to-map rectification. To do this I brought in an image of Chicago that was skewed and not spatially accurate, along with a map of Chicago that is spatially accurate. I then used GCPs, or ground control points, to correct the Chicago image. GCPs are points that I placed on the image and on the map in the same spot. To determine how many of these points I needed, I had to look at what order of polynomial the image requires. This is a measure of the distortion in the image: the more distortion there is, the higher the order of polynomial you need. This image had low distortion, so it was a first-order polynomial, which requires at least 3 control points to accurately correct the image. To get these points as close as possible to the exact same spot on both images, you use something called the Root Mean Square error, or RMS. The closer the points are to exactly the same spot on the two images, the lower the RMS. An ideal RMS value is below 2.0. If the RMS value is too high, the correction you are trying to achieve won't happen and the output image will still be distorted. Once the RMS is at a good level, the next step is to resample the image. Resampling was explained in my previous post; in short, it fills in pixels that are missing brightness values. For this first image I applied the nearest neighbor method of resampling. The resampled image is spatially accurate and can now be interpreted correctly.
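As a sketch of what the RMS number actually measures (the coordinates below are hypothetical pixel positions, not the lab's actual GCPs): it is the root of the mean squared distance between where each GCP was placed and where the fitted transformation predicts it should land.

```python
import numpy as np

# Hypothetical GCP positions, in pixels: where each point was placed
# on the image vs. where the fitted transformation predicts it.
placed    = np.array([[100.0, 200.0], [400.0, 120.0], [250.0, 380.0]])
predicted = np.array([[101.2, 199.1], [399.4, 121.0], [250.5, 379.2]])

# Per-GCP residual distance, then the root-mean-square of them.
residuals = np.linalg.norm(placed - predicted, axis=1)
rms = np.sqrt(np.mean(residuals ** 2))
print(f"RMS error: {rms:.3f} px")   # should be well under 2.0
```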

GCPs being placed.
 
Corrected image on original image showing how distorted original was.
 
The second kind of correction is called image-to-image registration. I brought in an image of Sierra Leone that was very badly distorted, along with another image of Sierra Leone (instead of a map, as in the first method) that had already been corrected and is spatially accurate. To correct the distorted image I used the same procedure as before. This image was much more distorted than the first, so I needed to use more GCPs: this was a third-order polynomial, which requires at least 10 GCPs. I placed GCPs on the distorted image and the accurate image in the same locations and got an RMS value of less than 2.0. For this image I kept the RMS value under 0.5 to make sure the output image would be very accurate. Then, just like with the first image, I resampled. This time I used bilinear interpolation, a resampling method that is more spatially accurate than nearest neighbor and gives a smoother output image.
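To give a sense of what the software does with the GCPs, here is a minimal least-squares fit of a first-order (affine) polynomial from hypothetical GCP pairs. A third-order polynomial works the same way but has 10 coefficients per axis, which is why it needs at least 10 GCPs:

```python
import numpy as np

# First-order polynomial: x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y.
# Hypothetical GCP coordinates in the distorted (src) and accurate
# (dst) images.
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst = np.array([[10.0, 20.0], [110.0, 25.0], [5.0, 120.0], [105.0, 125.0]])

# Design matrix [1, x, y]; least squares solves both axes at once,
# giving one coefficient column per output coordinate.
A = np.column_stack([np.ones(len(src)), src])
coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)

# RMS of the fit: how far the transformed GCPs land from their targets.
mapped = A @ coeffs
rms = np.sqrt(np.mean(np.sum((mapped - dst) ** 2, axis=1)))
print(f"Fit RMS: {rms:.4f}")
```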

 

GCPs being placed.
 
 
Original distorted image on top of original corrected image showing the distortion.


Corrected image on top of original corrected image showing spatial accuracy of the correction.

 
Geometric correction is fairly easy to do once you know how and have practiced the skills. The biggest challenge is getting the RMS value down to a good number. Most of the rest of the correction is done by the computer program, but knowing which resampling method to use is also an important skill.

Sources:
ERDAS Imagine 2013
Earth Resources Observation and Science Center, United States Geological Survey

Thursday, April 10, 2014

Remote Sensing Lab 5


This lab was all about analytical processes in remote sensing and learning how and when to use each method. The processes included in this lab are image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. The goal is that by the end of the lab I am able to use these skills and also know when to apply them to a project I may be working on in the future.

The first part of the lab dealt with image mosaicking. Image mosaicking is used when the area you are interested in spans more than one satellite image. You use the mosaic to seamlessly combine the two images into what appears to be one image that includes your whole study area.
There are two different kinds of image mosaic in ERDAS Imagine 2013. The first is called Mosaic Express, which is the easier and less user-intensive way. MosaicPro is much more user intensive, and you have a lot more control over the options that are selected for the mosaic. You also get a better mosaic from MosaicPro than you do with Mosaic Express. As you can see, the colors of the two images in the Mosaic Express output do not match up very well; this is one of its drawbacks. The MosaicPro output is a very good mosaic: you can hardly tell where one image starts and the other ends, which is exactly what you want. The colors match up very well and there is a seamless transition from one image to the other.


Mosaic Express Image
MosaicPro Image

The next part of the lab was all about band ratioing. I did a band ratio using the normalized difference vegetation index (NDVI). Essentially, I took an image that shows vegetation density in different shades of red and turned it into an image that displays the vegetation values in grayscale, so it is easier to tell the difference between shades. In the new image, very white places represent heavy vegetation, and the darker gray and black areas represent places with little or no vegetation: the more vegetation, the brighter the area in the output.
After NDVI
Before NDVI
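The ratio itself is simple: NDVI = (NIR − Red) / (NIR + Red). A minimal sketch on made-up reflectance values:

```python
import numpy as np

# Hypothetical red and near-infrared reflectance bands. Healthy
# vegetation reflects strongly in the near-infrared and absorbs red,
# so vegetated pixels come out close to +1.
red = np.array([[0.08, 0.30], [0.10, 0.28]])
nir = np.array([[0.50, 0.32], [0.55, 0.30]])

ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))

# Rescale the -1..1 range to 0..255 so the result can be viewed as a
# grayscale image: bright pixels = dense vegetation, dark = little.
gray = ((ndvi + 1) / 2 * 255).astype(np.uint8)
```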

Part 3 was all about spatial and spectral image enhancement techniques. First I did a spatial enhancement. The image I was given is an example of a high frequency image. High frequency images have a very large brightness difference over a short distance which can make the image harder to interpret. In order to fix this I applied something called a low pass convolution filter. This filter lessens the brightness differences in an image making it easier to interpret.


Before Low Pass Filter
After Low Pass Filter

You can also have low frequency images, or images that have very little brightness difference over a short distance. These images are often too similar in brightness values throughout which makes picking out features difficult. For an image like this I applied a high pass convolution filter. This kind of filter makes the differences between light and dark more drastic in the image to help with interpretation.

Before High Pass Filter
After High Pass Filter
 Another kind of spatial enhancement is called edge enhancement. I did this to make features in the image stick out more and be more visible, using a Laplacian convolution filter. A Laplacian filter sharpens an image by increasing contrast at certain spots called discontinuities. In the image after edge enhancement was applied, you can see that the rivers stand out as very bright green.

Before Edge Enhancement
After Edge Enhancement
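All three filters above are 3x3 convolutions; only the kernel changes. A minimal sketch on a tiny made-up image with one sharp edge:

```python
import numpy as np

# The three enhancement kernels: a low pass (mean) filter smooths
# brightness differences, a high pass filter exaggerates them, and a
# Laplacian responds strongly at edges (discontinuities).
low_pass  = np.full((3, 3), 1 / 9)
high_pass = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]], dtype=float)
laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)

def convolve(image, kernel):
    """Naive 3x3 convolution, valid region only (no padding)."""
    out = np.zeros((image.shape[0] - 2, image.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

# A tiny image with a sharp vertical edge between dark and bright.
img = np.array([[10.0, 10, 200, 200]] * 4)
smoothed = convolve(img, low_pass)   # edge gets blurred
edges = convolve(img, laplacian)     # large values right at the edge
print(smoothed)
print(edges)
```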
 Next I used a spectral enhancement tool called linear contrast stretch to improve the visual appearance of images. The first linear stretch is a minimum-maximum contrast stretch. To perform this kind of stretch the image has to be Gaussian, meaning that when you look at the histogram of the image there is only one mode, or "hill". When you apply this stretch it takes the two ends of the mode and pulls them in opposite directions, stretching the histogram all the way across the 0-255 range and increasing the contrast of the image. Most images are non-Gaussian, which limits the use of this stretch method.

Before Min-Max Stretch
After Min-Max Stretch


The second linear stretch method is the piecewise stretch. This stretch is more common and is used on non-Gaussian images, ones that have multiple modes or "pieces" in the histogram. When you do this stretch you have to select the beginning and end point of each mode, and then the histogram is again stretched across the full 0-255 range, increasing contrast.
Before Piecewise Stretch
After Piecewise Stretch
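Both stretches are simple remappings of brightness values. Here is a sketch on a made-up band whose values only span 60-120 (the piecewise breakpoints are arbitrary choices for illustration):

```python
import numpy as np

# A hypothetical 8-bit band with poor contrast: values span 60..120.
band = np.array([60, 75, 90, 105, 120], dtype=float)

# Min-max stretch: pull the histogram's ends out to 0 and 255.
minmax = (band - band.min()) / (band.max() - band.min()) * 255

# Piecewise stretch: map chosen breakpoints (here 60->0, 90->200,
# 120->255) so each "piece" of the histogram gets its own slope.
piecewise = np.interp(band, [60, 90, 120], [0, 200, 255])

print(minmax)     # 0, 63.75, 127.5, 191.25, 255
print(piecewise)  # 0, 100, 200, 227.5, 255
```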

The final part of the lab was on binary change detection, or image differencing. Image differencing is exactly what it sounds like: you take two images and look at the differences between them. For this lab the differences I was looking at were the brightness changes of pixels in two images taken 10 years apart. To do this I opened the two images and had ERDAS Imagine 2013 find the pixels that had lost brightness from one image to the other by creating a model. A model is a mathematical equation, but you build it in the form of pictures and symbols, like creating an electrical circuit diagram. I put the two images in the modeler and then chose a function for it to calculate; in this case, the loss of brightness in a pixel. When the program runs this function it creates an output image of all the pixels that lost brightness. I ran the model again, only this time the function was to find all the pixels that did not change in brightness, and it output an image of those pixels. You then take those two output images and put them on top of each other in ArcMap. This allows you to create a map of the pixels that changed, displayed in one color, with the ones that did not change in another color. The map below was created using this method; the green pixels changed (lost brightness) and the other pixels did not change.
Map Created From Image Differencing
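Outside the modeler, the core of the differencing step can be sketched in a few lines (the pixel values and threshold below are made up):

```python
import numpy as np

# Binary change detection: difference two co-registered images taken
# years apart and flag pixels whose brightness dropped.
earlier = np.array([[120, 130], [140, 150]], dtype=float)
later   = np.array([[118, 90],  [141, 100]], dtype=float)

diff = later - earlier
threshold = -20             # how big a drop counts as real change
changed = diff < threshold  # binary mask of "lost brightness" pixels

print(changed)
# [[False  True]
#  [False  True]]
```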


Sources:
ArcMap
ERDAS Imagine 2013

Thursday, March 27, 2014

Remote Sensing Lab 4

When working with satellite images in remote sensing getting the perfect image can be hard to do. There are many things that interfere with the image and distort it or block the part of the image that you are most interested in. This lab is all about different ways to fix problems or process images so that they are more useful and easier to interpret. The five techniques included in this lab are creating a subset, image fusion, haze reduction, using an interpretation key and resampling.


The first technique, creating a subset, is quite simple to do but can be very helpful when looking at an image. Most satellite images cover a very large area, and if you are only interested in a specific little area of the image you can use this tool to extract that area and get rid of the rest. There are multiple ways of doing this, but I am going to explain what I think is the easiest method. You open the image you are working with and insert something called an inquire box. You can drag this box to the part of the image you are interested in and make it any shape and size to contain what you want. You then go to the menu and click Create Subset, which will take the part of the image inside the box and save it as its own image. Below I have a whole image and the subset that I extracted from it.
Original Image with inquire box.
Subset 
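Conceptually the subset operation is just a crop: keep the pixels inside the inquire box and discard the rest. A sketch with made-up bounds:

```python
import numpy as np

# Stand-in for a single band of a large satellite image.
image = np.arange(100 * 100).reshape(100, 100)

# Inquire box corners in pixel coordinates (row/col of the upper-left
# and lower-right corners); these bounds are hypothetical.
top, left, bottom, right = 20, 30, 60, 80
subset = image[top:bottom, left:right]

print(image.shape, "->", subset.shape)   # (100, 100) -> (40, 50)
```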

Image fusion is used to enhance the quality of one image by fusing, or combining, it with another image. The technique in this lab is called pan sharpening. You take a reflective image and fuse it with a panchromatic image, which increases the detail of the reflective image. The reason this works is that the reflective image has a spatial resolution of 30 meters while the panchromatic has 15 meters, so the panchromatic has a smaller pixel size and shows more detail. When you fuse the two images together, the reflective image takes on the spatial resolution of the panchromatic image, so it now has a spatial resolution of 15 meters instead of the original 30 and shows more detail. You can then zoom in further on the image without it becoming distorted, and it is easier to distinguish features of the image.
Pan Sharpened Image
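There are several pan-sharpening algorithms, and I can't say which one the tool uses internally. One common method, the Brovey transform, scales each multispectral band by the ratio of the sharp panchromatic band to the average of the bands. A tiny sketch with made-up values, assuming the 30 m bands were already resampled onto the 15 m grid:

```python
import numpy as np

# Hypothetical reflectance values on the panchromatic (15 m) grid.
red   = np.array([[0.2, 0.4]])
green = np.array([[0.3, 0.5]])
blue  = np.array([[0.1, 0.3]])
pan   = np.array([[0.25, 0.35]])

# Brovey transform: each band keeps its relative color but takes its
# spatial detail from the panchromatic band.
intensity = (red + green + blue) / 3
sharp_red = red * pan / intensity
sharp_green = green * pan / intensity
sharp_blue = blue * pan / intensity
print(sharp_red)
```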

As I mentioned above, there are many things that can get in the way of getting a perfect image and viewing the part of an image you are trying to view. One of these things is haze. In the image it looks like clouds, but in most cases it isn't. It is important to reduce the amount of haze in an image as much as possible, because it will most likely hinder your ability to interpret the image correctly. Using a tool called haze reduction you can remove most of the haze in an image so that it can be more easily interpreted. In ERDAS Imagine all you have to do is click a couple of things, and you can see below how much of a difference using this tool can make.
Before Haze Removal
After Haze Removal

There may be times when just looking at a satellite image isn't enough to interpret it. Using an image interpretation key can be very helpful. In ERDAS you have the ability to link the image you are looking at with Google Earth. By doing this you can look at the remotely sensed image and the same area in Google Earth at the same time. This is very helpful, especially when it comes to zooming in on the image. In Google Earth the detail is much greater because it is a high resolution image, and there are also labels, road networks, and many other items that can be used as reference points. The Google Earth image is also in the true natural color seen in real life, which can be helpful because many satellite images are not in true natural color.

Reflective Image of Eau Claire Area
Google Earth Image or Interpretation Key

The last technique is called resampling, and it is used to increase or decrease the pixel size in an image, depending on what is needed to interpret that image. This is also a fairly simple operation. You open the image you want to work with in ERDAS and go into the spatial options of the image. You can then increase or decrease the pixel size by simply typing in what you would like it changed to. In this lab I took an image and changed the pixel size from 30x30 meters to 20x20 meters. Increasing the pixel size is called resampling down, and decreasing the pixel size is resampling up.
Resample Up Image from 30x30 to 20x20
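A nearest-neighbor version of this resample can be sketched directly: each 20 m output pixel takes the value of whichever 30 m input pixel its center falls in (the pixel values below are made up):

```python
import numpy as np

# A 2x2 chunk of 30 m pixels covers 60 m, which becomes a 3x3 chunk
# of 20 m pixels after the "resample up".
band30 = np.array([[10, 20], [30, 40]], dtype=float)

old_size, new_size = 30.0, 20.0
n = int(band30.shape[0] * old_size / new_size)   # 3 output pixels

# Nearest 30 m pixel index for each 20 m pixel centre (10, 30, 50 m).
centres = (np.arange(n) + 0.5) * new_size
idx = np.minimum((centres / old_size).astype(int), band30.shape[0] - 1)

band20 = band30[np.ix_(idx, idx)]
print(band20)
```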



Sources:
http://www.google.com/earth/
Earth Resources Observation and Science Center, United States Geological Survey
ERDAS Imagine 2013