Wednesday, April 20, 2016

Lab 6: Geometric Correction

Goal


The goal of this lab was to introduce us to the preprocessing skill of geometric correction. The lab was structured to develop our skills in the two major types of geometric correction: image-to-map and image-to-image rectification through polynomial transformation.

Objectives:


1.      Use a 7.5-minute digital raster graphic (DRG) of the Chicago Metropolitan Statistical Area to rectify a Landsat TM image of the same area, using ground control points (GCPs) collected from the DRG.

2.      Use a corrected Landsat TM image for eastern Sierra Leone to rectify a geometrically distorted image of the same area.

Methods

Image-to-Map Rectification

I opened the provided Chicago_drg.img, a USGS 7.5-minute digital raster graphic covering part of Chicago (see Figure 1).
Figure 1 A USGS 7.5-minute digital raster graphic (DRG) covering part of the Chicago region and adjacent areas. A subset is shown to highlight detail.

To rectify the image Chicago_2000 to the Chicago DRG, I used the Multipoint Geometric
Correction tool under Multispectral/Ground Control Points in the ERDAS Imagine interface. The Multipoint Geometric Correction window contains two panes. The left pane holds the input image (Chicago_2000.img), while the right pane holds the reference image (Chicago_drg.img). Each pane contains three windows: the main window shows the entire image, while the two smaller central windows show the areas zoomed into on the input and reference images respectively.
See Figure 2 for a close-up view.
Figure 2 The Multipoint Geometric Correction window with the input image (Chicago_2000.img) and reference image (Chicago_drg.img). 

I chose four sets of ground control points (GCPs) to align the aerial image “Chicago_2000.img” with the reference image (“Chicago_drg.img”). Although only three GCPs are necessary for a first-order polynomial, it is wise to collect more than the minimum in order to improve the fit of the output image. Figure 3 displays the Multipoint Geometric Correction window. The table at the bottom indicates the RMS (root mean square) error for each individual point and for the image as a whole. Notice that the total RMS error is below 0.5, which means the ground control points are accepted as accurate by industry standards.
Figure 3 The Multipoint Geometric Correction window after placement of the ground control points. The RMS Error is less than 0.5. 
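The RMS error reported in that table can be sketched in a few lines of Python. The residual values below are hypothetical, not the actual GCP residuals from the lab:

```python
import math

def gcp_rms(residuals):
    """Root-mean-square error of GCP residuals.

    residuals: list of (dx, dy) offsets, in pixels, between each
    transformed input GCP and its reference location.
    """
    squared = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical residuals for four GCPs, in pixels
residuals = [(0.2, -0.1), (-0.3, 0.2), (0.1, 0.3), (-0.2, -0.2)]
rms = gcp_rms(residuals)
# A total RMS below 0.5 pixels is the usual acceptance threshold
assert rms < 0.5
```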

To explain the process further, Chicago_drg.img served as the reference map to which we rectified/georeferenced the Chicago_2000 image. Using points from the reference image, a list of GCPs was created and used to register the aerial image to the reference image with a first-order transformation. This anchors the aerial image to a known location; the reference image already had a known source and geometric model. We then used the computed transformation matrix to resample the unrectified data.
The interpolation used was the nearest neighbor method wherein each new pixel in the output image is assigned to the pixel nearest it in the input image.
Matrices consist of coefficients that are used in polynomial equations to convert the coordinates of the input image. A first-order transformation was used because the aerial image was already projected onto a plane but not yet rectified to the desired map projection.
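As a rough illustration of these two ideas (not the ERDAS implementation), the sketch below applies a hypothetical first-order (affine) transform and a nearest-neighbor pixel lookup:

```python
def first_order(coords, a, b):
    """Apply a first-order (affine) polynomial transform:
    x' = a0 + a1*x + a2*y ;  y' = b0 + b1*x + b2*y
    """
    x, y = coords
    return (a[0] + a[1] * x + a[2] * y,
            b[0] + b[1] * x + b[2] * y)

def nearest_neighbor(image, xy):
    """Assign the output pixel the value of the nearest input pixel."""
    x, y = xy
    col, row = int(round(x)), int(round(y))
    return image[row][col]

# Hypothetical coefficients: a pure shift of (10, 20), no rotation or scaling
a, b = (10.0, 1.0, 0.0), (20.0, 0.0, 1.0)
print(first_order((5, 5), a, b))  # (15.0, 25.0)
```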

Image-to-Image Rectification


Part two involved performing the rectification process again, this time with two images rather than an image and a map. A third-order polynomial transformation was required because of the extent of the distortion, and third-order transformations require at least 10 GCPs.
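The minimum GCP counts quoted above follow from the number of coefficient pairs in a polynomial of order t, which is (t + 1)(t + 2) / 2; a small sketch:

```python
def min_gcps(order):
    """Minimum number of GCPs needed for a polynomial transform of the
    given order: (t + 1)(t + 2) / 2 coefficient pairs, each pinned
    down by one GCP."""
    return (order + 1) * (order + 2) // 2

assert min_gcps(1) == 3   # first order: 3 GCPs (an affine fit)
assert min_gcps(3) == 10  # third order: 10 GCPs, as used here
```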


Figure 4 The image-to-image transformation with twelve GCPs. Notice that the total RMS error is below 0.05.


Results


Through this laboratory exercise, I developed skills in the image-to-map and image-to-image rectification methods of geometric correction. This type of preprocessing is commonly performed on satellite images before data or information is extracted from them. The results of the rectification processes can be seen below in Figures 5 and 6.
Figure 5 The image-to-map rectified image of Chicago and the surrounding area.

Figure 6 My rectified image compared to the Sierra Leone image that was used as the reference in the transformation process. The color of my image is washed out, but the orientation appears accurate.

Data sources


Data used in this lab were provided by Dr. Cyril Wilson and collected from the following sources:
Satellite images are from the Earth Resources Observation and Science Center, United States Geological Survey.
The digital raster graphic (DRG) is from the Illinois Geospatial Data Clearinghouse.

Wednesday, April 13, 2016

LiDAR Remote Sensing

Goal and Background


LiDAR is a rapidly growing geospatial technology, and its expanding market has many implications for the future of imagery and image collection. The goal of this lab exercise was to practice basic skills involved in processing LiDAR data, particularly LiDAR point clouds in the LAS file format. Objectives of this lab included processing and retrieval of various surface and terrain models, along with processing of intensity images and other point cloud-derived products. We were told to approach this exercise in the role of a GIS manager working on a project for the city of Eau Claire.


Objectives

  1. Acquire a LiDAR point cloud in LAS format for a portion of the city of Eau Claire.
  2. Perform an initial quality check on the data by examining its area and coverage, and verify the current classification of the LiDAR points.
  3. Create a LAS dataset.
  4. Explore the properties of the LAS dataset.
  5. Visualize the LAS dataset as a point cloud in 2D and 3D.
  6. Generate LiDAR derivative products.

Methods


In part one, I downloaded point cloud data into ERDAS Imagine and accessed the tile index file specified for the lab; it is a point cloud dataset of Eau Claire. For practice, I selected the same section in both ERDAS and ArcMap.
I performed a QA/QC check by comparing the minimum and maximum Z values to the known elevation of the area. The dataset was displayed in tiles.
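A minimal sketch of this kind of Z-range sanity check, using hypothetical elevation values rather than the lab's actual dataset:

```python
def z_range_check(z_values, known_min, known_max, tolerance=5.0):
    """Flag a point cloud whose elevation range strays too far from the
    known terrain elevation (a quick QA/QC sanity check). Returns True
    when the min/max Z fall within the expected range plus a tolerance."""
    lo, hi = min(z_values), max(z_values)
    return (lo >= known_min - tolerance) and (hi <= known_max + tolerance)

# Hypothetical Z values (metres) against an assumed local elevation range
sample_z = [240.2, 251.7, 263.4, 287.9]
print(z_range_check(sample_z, known_min=238.0, known_max=290.0))  # True
```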
Part two focused on generating a LAS dataset and exploring LiDAR point clouds with ArcGIS. I created a new LAS dataset in ArcCatalog. LAS files carry a great deal of information in their headers, and there is also ancillary data that can be accessed. Using the external metadata, I assigned the horizontal datum D_North_American_1983 and the North American Vertical Datum of 1988 (NAVD 88) to the dataset. For reference, I added a shapefile of Eau Claire County (see Figure 1), which was removed afterwards.
Figure 1 The grid highlights the area covered by the point cloud dataset of the city of Eau Claire as compared to the greater Eau Claire County.


By default, points are color-coded according to elevation, using all returns. Figure 2 shows the elevation range and a portion of the dataset.
Figure 2 A section of the dataset displaying elevation of all returns.

Below are four different views of the data: elevation, aspect, slope, and contour (see Figure 3).
Figure 3 Four different views of the same subset of the LAS dataset, each displaying a different element.

Next I explored the point clouds by class, return, and profile. ArcMap offers the option to display selected features in 2D and 3D models for more thorough visualization.
Part three involved the generation of LiDAR derivative products. In order to derive DSM and DTM products from the point clouds, I estimated the average nominal pulse spacing (NPS) at which the point clouds were collected. The derived image below (Figure 4) is the result of inputting the LAS dataset into the LAS Dataset to Raster tool and using the binning interpolation method with maximum cell assignment and natural-neighbor void filling.
Figure 4 The derived raster converted from the LAS dataset using the LAS Dataset to Raster tool in ArcMap.
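A simplified sketch of these two steps (estimating NPS under an assumed uniform point density, and binning with a maximum rule); the point coordinates and counts below are hypothetical:

```python
import math

def nominal_pulse_spacing(area_m2, n_points):
    """Estimate average nominal pulse spacing (NPS), assuming a roughly
    uniform point distribution: one point per NPS x NPS cell."""
    return math.sqrt(area_m2 / n_points)

def bin_max(points, cell_size):
    """Binning with the MAX rule: each output cell keeps the highest Z
    that falls inside it (analogous to cell assignment 'maximum' in the
    LAS Dataset to Raster tool; void filling is omitted here)."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key] = max(cells.get(key, z), z)
    return cells

# 1 km^2 tile with 250,000 returns -> NPS of about 2 m
print(round(nominal_pulse_spacing(1_000_000, 250_000), 2))  # 2.0
```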


Using the LAS Dataset to Raster and Hillshade tools, I then created the following derivative products:
  • Digital surface model (DSM) with first return
  • Digital terrain model (DTM)
  • Hillshade of the DSM (see Figure 5)
  • Hillshade of the DTM (see Figure 6)
Figure 5 The hillshade results of the first return elevation model performed on the derived raster.


Figure 6 The hillshade results of the terrain model (ground returns only) of the derived raster.
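The hillshades above were produced with ArcMap's Hillshade tool. As a rough illustration of what that tool computes, the standard hillshade equation for a single cell can be sketched as follows (using the common default sun position of azimuth 315 degrees and altitude 45 degrees):

```python
import math

def hillshade(slope_deg, aspect_deg, altitude=45.0, azimuth=315.0):
    """Illumination value (0-255) for one cell, given its slope and
    aspect, using the standard hillshade equation:
    255 * (cos(zenith)*cos(slope) + sin(zenith)*sin(slope)*cos(azimuth - aspect))
    """
    zenith = math.radians(90.0 - altitude)
    azim = math.radians(azimuth)
    slope = math.radians(slope_deg)
    aspect = math.radians(aspect_deg)
    shade = (math.cos(zenith) * math.cos(slope)
             + math.sin(zenith) * math.sin(slope) * math.cos(azim - aspect))
    return max(0, round(255 * shade))

print(hillshade(0.0, 0.0))      # flat ground: 180
print(hillshade(45.0, 315.0))   # 45-degree slope facing the sun: 255
```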

Finally, a LiDAR intensity image was generated following a procedure similar to the one used to create the DSM and DTM above. The image was displayed in ERDAS Imagine, which automatically enhanced its appearance.

Results


Through this exercise I learned how to use LiDAR data and gained familiarity with some processing tools in ArcMap. I processed surface and terrain models and used LiDAR derivative products as ancillary data to improve image classification of remotely sensed imagery. The final output image was an intensity image of the point cloud data (see Figure 7).

Figure 7 An intensity image produced from the LAS dataset for the city of Eau Claire.

Data sources


The LiDAR point cloud and tile index data are from Eau Claire County (2013), and the Eau Claire County shapefile is from the Mastering ArcGIS, 6th Edition, data by Margaret Price (2014). All data were provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire.