ABSTRACT

Small Unmanned Aircraft System (sUAS) image collection and preprocessing to produce orthoimages, point clouds, and digital surface models have made great leaps forward in recent years. These advances, together with the wide spectrum of existing and emerging sUAS applications, mandate parallel improvements in the algorithms used for image classification that take into consideration the specific characteristics of sUAS imagery. This chapter presents several alternative approaches to improving classification accuracy by exploiting the redundant information in overlapping sUAS imagery captured in a typical sUAS mission. These include Deep Convolutional Neural Network (DCNN) classification, Fully Convolutional Network (FCN) classification, and contextual relationships under an Object-Based Image Analysis (OBIA) framework. The chapter synthesizes the results of implementing the introduced classification approaches using a case study of high-resolution sUAS images captured over a wetland area in Central Florida.

Key words: sUAS, deep learning, OBIA, land cover mapping, DCNN, FCN, CRF