Integrating contextual information with per-pixel classification for improved land cover classification

J. Stuckens, P. R. Coppin, M. E. Bauer

Research output: Contribution to journal › Article › peer-review

249 Scopus citations

Abstract

A hybrid segmentation procedure to integrate contextual information with per-pixel classification in a metropolitan area land cover classification project is described and evaluated. It is presented as a flexible tool within a commercially available image processing environment, allowing components to be adapted or replaced according to the user's needs, the image type, and the availability of state-of-the-art algorithms. In the case of the Twin Cities metropolitan area of Minnesota, the combination of the Shen and Castan edge detection operator with iterative centroid linkage region growing/merging based on Student's t-tests proved optimal when compared to other more common contextual approaches, such as majority filtering and the Extraction and Classification of Homogeneous Objects classifier. Postclassification sorting further improved the results by reducing residual confusion between urban and bare soil categories. Overall accuracy of the optimal classification technique was 91.4% for a level II classification (10 classes) with a K(e) of 90.5%. The incorporation of contextual information in the classification process improved accuracy by 5.8% and K(e) by 6.5%. As expected, classification accuracy for a simplified level I classification (five classes) was higher with 95.4% and 94.3% for K(e). A second important advantage of the technique is the reduced occurrence of smaller mapping units, resulting in a more attractive classification map compared to traditional per-pixel maximum likelihood classification results. (C) Elsevier Science Inc., 2000.
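The core merge criterion of the optimal approach — iterative centroid linkage region growing/merging driven by Student's t-tests on region statistics — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the region summary statistics, the Welch approximation to the t-statistic, and the threshold value are all assumptions.

```python
import math

def t_statistic(mean1, var1, n1, mean2, var2, n2):
    """Approximate t-statistic between two region means (Welch form,
    assumed here; the paper specifies Student's t-tests for merging)."""
    se = math.sqrt(var1 / n1 + var2 / n2)
    return abs(mean1 - mean2) / se

def should_merge(region_a, region_b, t_threshold=2.0):
    """Merge two adjacent regions when their mean brightness values are
    statistically indistinguishable (t below a hypothetical threshold)."""
    t = t_statistic(region_a["mean"], region_a["var"], region_a["n"],
                    region_b["mean"], region_b["var"], region_b["n"])
    return t < t_threshold

# Hypothetical single-band region statistics for illustration.
a = {"mean": 120.0, "var": 25.0, "n": 50}
b = {"mean": 121.0, "var": 30.0, "n": 40}  # close to a -> merge
c = {"mean": 160.0, "var": 25.0, "n": 50}  # far from a -> keep separate

print(should_merge(a, b))  # True
print(should_merge(a, c))  # False
```

In the paper's procedure, a test of this kind would be evaluated repeatedly as regions grow, with segment boundaries additionally constrained by the Shen and Castan edge-detection output.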

Original language: English (US)
Pages (from-to): 282-296
Number of pages: 15
Journal: Remote Sensing of Environment
Volume: 71
Issue number: 3
DOIs
State: Published - Mar 2000
