ABSTRACT

This chapter presents the fusion of SAR and hyperspectral data for land use/land cover (LULC) classification. LULC classification from satellite data provides useful and reliable information about the Earth's surface. Data from a single sensor are often insufficient to distinguish all LULC features, which makes the integration of information retrieved from multiple sensors important. Fusing data from different satellite sensors enables enhanced delineation of land cover features. Advanced sensors such as hyperspectral imagers and synthetic aperture radar (SAR) have demonstrated their potential for extracting Earth-surface information, and fusing SAR and hyperspectral data compensates for the limitations of each sensor and enhances information retrieval. In this study, Hyperion data were fused separately with fully polarimetric SAR (PolSAR) data from Radarsat-2 (C-band) and from the Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR). The fusion was carried out at three levels: pixel, feature, and decision. For pixel-level fusion, the high-pass filter (HPF), wavelet, and Gram–Schmidt (GS) techniques were chosen because they preserve spectral properties. For feature-level fusion, kernel-based principal component analysis (PCA) was applied to the Hyperion data to extract features, multi-component scattering model (MCSM) decomposition parameters were extracted from the PolSAR data, and a joint feature vector was formed from the features of both datasets. For decision-level fusion, a one-against-all support vector machine (SVM) approach was used to assign class membership based on the membership values of the rule images generated during classification. Non-linear SVM classification was performed on both the individual and the fused images, and overall accuracy (OA), the kappa coefficient, and individual class accuracies were computed for accuracy assessment. The classification results showed that, among the pixel-level approaches, HPF fusion performed best for combining hyperspectral and PolSAR data. Between the two sensor combinations, the Hyperion–Radarsat-2 fusion gave better results at the pixel level, whereas at the feature and decision levels the Hyperion–ALOS PALSAR fusion outperformed it in terms of OA and kappa. A comparative analysis of all the classified fused products across the three fusion levels showed that the feature-level fusion of Hyperion with ALOS PALSAR data best enhanced the LULC features and gave the highest overall classification accuracy.
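
To illustrate the decision-level fusion step summarized above, the sketch below treats the per-class decision scores of one-against-all SVMs, trained separately on hyperspectral-derived and SAR-derived feature stacks, as class-membership "rule images", fuses them, and scores the result with OA and kappa. This is a minimal sketch, not the authors' exact pipeline: the random stand-in arrays, their shapes, the train/test split, and the simple score-averaging rule are illustrative assumptions.

```python
# Minimal sketch of decision-level fusion with one-against-all SVMs.
# Stand-in data and the averaging rule are assumptions for demonstration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Feature stacks: rows are pixels, columns are bands/parameters.
X_hyp = rng.normal(size=(600, 20))   # e.g. kernel-PCA components of Hyperion
X_sar = rng.normal(size=(600, 6))    # e.g. MCSM decomposition parameters
y = rng.integers(0, 5, size=600)     # 5 hypothetical LULC classes

train, test = slice(0, 400), slice(400, 600)

# One-against-all (one-vs-rest) non-linear SVMs, one per sensor.
svm_hyp = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_hyp[train], y[train])
svm_sar = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_sar[train], y[train])

# Per-class decision scores play the role of class-membership "rule images".
scores_hyp = svm_hyp.decision_function(X_hyp[test])
scores_sar = svm_sar.decision_function(X_sar[test])

# Decision-level fusion: average the memberships and keep the strongest class.
fused = np.argmax((scores_hyp + scores_sar) / 2.0, axis=1)

print("OA:   ", accuracy_score(y[test], fused))
print("kappa:", cohen_kappa_score(y[test], fused))
```

In an actual pipeline the per-class scores would come from the rule images of each classified scene, and a maximum or weighted combination rule could replace the simple average used here.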