Cheyenne Bottoms Wildlife Area Land Cover Mapping - 2005
In an effort to map the current vegetation communities within Cheyenne Bottoms Wildlife Area (CBWA) in Barton County, Kansas, the Kansas Applied Remote Sensing (KARS) program at The University of Kansas acquired and analyzed aerial imagery from the DuncanTech MS3100 digital multispectral camera. On October 8, 2005, imagery was acquired over the Cheyenne Bottoms Wildlife Area. Post-processing was performed to geo-register and rectify the individual images and then merge them into a seamless mosaic.
Previous mapping efforts that utilized near infrared film and relied exclusively on visual interpretation and the manual digitizing of vegetation communities were necessarily subjective in nature and hard to repeat. This project utilized multispectral digital imagery and examined multiple analysis and classification techniques (supervised, unsupervised, and object-oriented) in an attempt to develop a more quantitative and repeatable methodology for classifying the imagery.
Conclusions and Recommendations
The 2005-2006 land cover mapping project for the Cheyenne Bottoms Wildlife Area turned out to be a more complicated task than originally anticipated. The suite of situations and issues that kept surfacing presented numerous learning opportunities and constantly encouraged researchers to investigate new approaches.
When the imagery was finally flown in mid-October (due to technical delays), researchers were hoping they had not missed the optimal date for separating vegetation types. Early looks at the data showed high-resolution imagery with abundant detail and definable patterns; however, not far into the process it became apparent that this detail also included a large amount of within-class variation, and that between-class differences were less apparent. The high variation found within classes, especially the wheatgrass and undifferentiated emergent wetland classes, was primarily due to the high diversity of plant species associated with these communities (and the proportion of these species present at a given location). Since most vegetation types were going through senescence, most of the vegetation looked "pretty dead," especially after the hot and dry summer. Had it not been for the multispectral capabilities of the camera, which improved visual separation, this project would have been much more difficult. Although considerable overlap in spectral values hindered computer classification, visual differences still aided interpretation. However, since the goal was to find a less subjective method than manual interpretation for classifying the vegetation, the computer classification methods were pursued.
Had the imagery been acquired earlier, at a time when there was better separation between classes, the more common pixel-based classification methods tried first may have been more effective. Future efforts may want to investigate this, though it should not consume all of the available time, and resampling to five meters would be recommended. The classification effort using eCognition also had its difficulties, one downside being that classification results changed with each addition of training sites. When a classification error was found and corrected by turning that cluster into a training site for the correct class, the broader classification parameters for the entire image were affected, sometimes changing previously correct clusters and creating classification errors elsewhere on the image. Another limitation of computer-based classification (pixel and cluster) was that it could not map all the classes used in the previous year's mapping efforts (33 land cover and land use classes). The distinction between land use types cannot be made using spectral information alone; making that distinction is one advantage of manual interpretation and delineation.
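To illustrate why overlapping spectral values hinder pixel-based classification, the sketch below runs a basic unsupervised (k-means) classification on multispectral pixel vectors. This is an illustrative example only, not the project's actual software workflow; the band values, cluster count, and pixel data are invented. When class spectra overlap, pixels from different communities fall into the same spectral cluster.

```python
# Minimal sketch of unsupervised pixel-based classification via k-means.
# All values here are hypothetical toy data, not CBWA imagery.
import numpy as np

def kmeans_classify(pixels: np.ndarray, k: int, n_iter: int = 20,
                    seed: int = 0) -> np.ndarray:
    """Assign each pixel (rows = pixels, cols = spectral bands)
    to one of k spectral clusters with a basic k-means loop."""
    rng = np.random.default_rng(seed)
    # Initialize cluster centers from randomly chosen pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        # Euclidean distance from every pixel to every cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center; keep the old one if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Toy 3-band image flattened to a pixel list: two spectrally distinct groups.
pixels = np.array([[0.10, 0.20, 0.10], [0.12, 0.18, 0.11],
                   [0.80, 0.70, 0.90], [0.82, 0.72, 0.88]])
labels = kmeans_classify(pixels, k=2)
```

On spectrally well-separated data like this toy example the clusters match the true groups; with the overlapping senescent-vegetation spectra described above, the same procedure mixes classes, which is what motivated the cluster-plus-interpretation approach.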
Based on this effort, the recommended approach for future mapping would be a hybrid one: eCognition creates several hierarchies of clusters that are exported to shapefiles, and manual photo interpretation is then performed, with the shapefile clusters aggregated and labeled as necessary according to user interpretation. This hybrid approach would assist with land cover polygon delineation while allowing more control over the classification process and the inclusion of land use categories. The clusters generated provided an accurate delineation of differences in vegetation condition (class), something that is often difficult to achieve with manual digitizing. Additionally, if more detail were needed for a specific area, the finer-level clusters could be used to guide manual modification of the classification. Some management infrastructure features, such as fireguards, levees, access roads, and agricultural fields, would also benefit from manual delineation.
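The analyst-labeling step of the hybrid approach can be sketched as a simple crosswalk from spectral cluster ids to interpreted classes. Everything below is hypothetical (the cluster ids, class names, and areas are invented, and real cluster records would come from the exported shapefiles), but it shows how several spectral clusters can be merged under one land cover label and how land use classes enter only through interpretation.

```python
# Hypothetical sketch of the hybrid labeling step: fine clusters
# (represented here as (id, area) records, as if read from an exported
# shapefile) are aggregated under analyst-assigned class labels.

# Analyst's crosswalk: cluster id -> interpreted land cover / land use class.
crosswalk = {
    1: "wheatgrass",
    2: "wheatgrass",          # two spectral clusters merged into one class
    3: "emergent wetland",
    4: "open water",
    5: "access road",         # land use class added by manual interpretation
}

# Exported cluster records: (cluster id, area in hectares) - invented values.
clusters = [(1, 12.5), (2, 3.1), (3, 40.2), (4, 8.8), (5, 0.6)]

def aggregate(clusters, crosswalk):
    """Sum cluster areas by their analyst-assigned class label."""
    totals = {}
    for cid, area in clusters:
        label = crosswalk.get(cid, "unlabeled")
        totals[label] = totals.get(label, 0.0) + area
    return totals

totals = aggregate(clusters, crosswalk)
```

Because the spectral clustering and the labeling are separate steps, relabeling one cluster here never disturbs the others, avoiding the shifting-results problem noted with adding training sites.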
Acquire imagery in late summer. Early summer may work as well, as long as vegetation classes appear different from one another. Imagery acquired in mid-summer or fall does not work well because everything is either "all green" or "all dead."
Acquire coarser-resolution imagery (1-2 meter, or resample). This will enlarge the image footprint (assisting geo-referencing), reduce the total number of images, and reduce the amount of detail (spectral confusion) in the data to aid classification.
Take more field reference pictures and link them to GIS with coordinates. Field notes should list the common and occasional plant species present as well as a field-based classification assignment.
The multispectral capabilities of the DuncanTech imagery greatly enhance visual interpretation, but due to the large amount of within-class variation, manual interpretation and digitizing may produce better results than pixel- or cluster-based classification methods.
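The resampling recommendation above can be sketched as block averaging: each coarse pixel takes the mean of the fine pixels it covers, which smooths out within-class spectral variation. This is a minimal numpy illustration with an invented toy band and block factor; an operational workflow would use remote sensing software rather than raw arrays.

```python
# Illustrative block-mean resampling of a single band to a coarser cell size.
# The band values and block factor are hypothetical.
import numpy as np

def block_mean(band: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a 2-D band by averaging non-overlapping
    factor x factor blocks (e.g. 0.5 m pixels -> 2 m with factor=4)."""
    rows, cols = band.shape
    # Trim edges so the dimensions divide evenly by the block factor.
    band = band[: rows - rows % factor, : cols - cols % factor]
    return band.reshape(band.shape[0] // factor, factor,
                        band.shape[1] // factor, factor).mean(axis=(1, 3))

fine = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 fine-resolution band
coarse = block_mean(fine, 2)                     # 2x2 coarser band
```

Averaging fine pixels into larger cells suppresses the pixel-to-pixel spectral noise that contributed to the within-class confusion described above, at the cost of spatial detail.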