ABSTRACT

Compared with detailed manual bridge inspection, UAV-based bridge patrol can rapidly scan the bridge surface and bring back high-resolution image data at very low cost. To use these data effectively, Convolutional Neural Network (CNN) based image processing can detect deteriorations such as cracks, spalling, corrosion, and water leakage. Efforts have been made to extract labeled damage photos from past inspection reports and to find an effective way to train a CNN model for damage detection. Although a model trained on this database can achieve an accuracy of about 90% after data augmentation, its ability to recognize damage in real-world UAV images remains low. In this study, mixed training of a CNN model with both UAV-sourced image data and inspection-report-sourced data was conducted to reinforce the model's performance on background regions and undamaged structural members, which are scarce in inspection reports but abundant in real UAV scanning images. UAV videos acquired from several real bridge patrols were used as sources. 4K frames were sliced from the videos and split into small samples. Each sample was then labeled manually into classes including background, several types of damage, and several types of undamaged structural surfaces. Training with the original inspection-report-sourced data, UAV-sourced data, and mixed data was conducted, and the resulting models were compared in terms of damage-recognition accuracy on UAV images.
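As an illustration of the data-preparation step summarized above (slicing 4K UAV video into frames and splitting them into small samples for manual labeling), a minimal sketch using OpenCV is given below. The file names, frame interval, and patch size are assumptions for illustration only, not the exact settings used in this study.

```python
# Minimal sketch (assumed settings): slice a 4K UAV patrol video into frames
# and split each kept frame into small square patches for later manual labeling.
import os
import cv2

VIDEO_PATH = "uav_patrol.mp4"   # hypothetical input video
OUT_DIR = "uav_patches"         # hypothetical output directory
FRAME_STEP = 30                 # keep roughly one frame per second at 30 fps (assumption)
PATCH_SIZE = 224                # common CNN input size (assumption)

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)

frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % FRAME_STEP == 0:
        h, w = frame.shape[:2]
        # split the frame into non-overlapping PATCH_SIZE x PATCH_SIZE tiles
        for y in range(0, h - PATCH_SIZE + 1, PATCH_SIZE):
            for x in range(0, w - PATCH_SIZE + 1, PATCH_SIZE):
                patch = frame[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
                cv2.imwrite(
                    os.path.join(OUT_DIR, f"f{frame_idx:06d}_y{y}_x{x}.png"),
                    patch,
                )
                saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} patches from {frame_idx} frames")
```

The saved patches would then be sorted manually into the background, damage, and undamaged-surface classes before training.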