ABSTRACT

Lining cracks are the most common type of structural damage in tunnels and are critical to their service life. Early identification of cracks is important for monitoring damage and ensuring tunnel safety. The use of deep learning (DL)-based methods to detect cracks on the surface of tunnel linings has attracted increasing attention in recent years. However, DL-based tunnel crack segmentation methods face two major limitations: the lack of sufficient tunnel crack images and the lack of refined image labels, both of which are time-consuming and labor-intensive to obtain in the tunnel environment. To address these problems, a multi-scene deep domain adaptive crack generator called Tunnel-Crack-DatasetGAN (TCDGAN) is proposed, inspired by the generative adversarial architecture DatasetGAN. TCDGAN automatically generates synthetic tunnel crack images and thereby overcomes the aforementioned restrictions. Considering the characteristics of tunnel cracks, three improvements to the original DatasetGAN architecture are proposed. First, the constant learned input of the original style-based generator is replaced by Fourier features, which improves the equivariance of the fine crack branches generated adjacent to the main crack. Second, a strategy called adaptive pseudo augmentation (APA) is introduced to alleviate the overfitting that may occur when too few source crack images are available for initial training. Third, a path-length regularization term is introduced to keep gradients well-behaved so that the model converges reliably during training. By these means, the proposed TCDGAN can generate large numbers of tunnel crack images with high-quality pixel-wise masks from only a handful of source crack images and with minimal human effort. The quality of the synthesized tunnel crack image-mask pairs was evaluated visually, and the strong performance of several representative segmentation models trained on the synthetic dataset demonstrates the feasibility of the proposed method.
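
The abstract itself contains no code; purely as an illustration of the third improvement, a path-length regularization term of the kind used in StyleGAN2-style generators can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation: the function name path_length_penalty, the assumed latent shape [batch, num_ws, w_dim], and the decay value are illustrative assumptions.

```python
import torch

def path_length_penalty(fake_images, latents, pl_mean, decay=0.01):
    """Illustrative path-length regularization (StyleGAN2 family).

    Penalizes deviations of the per-sample Jacobian norm (from style
    latents to image pixels) from its running average, encouraging a
    smooth, well-conditioned mapping and stable gradients.
    """
    b, c, h, w = fake_images.shape
    # Random image-space direction, scaled so the magnitude is resolution-independent.
    noise = torch.randn_like(fake_images) / (h * w) ** 0.5
    # Jacobian-vector product: gradient of <G(w), noise> w.r.t. the latents.
    grads = torch.autograd.grad(
        outputs=(fake_images * noise).sum(), inputs=latents, create_graph=True
    )[0]
    # Per-sample path length, averaged over the broadcast style dimension.
    path_lengths = grads.square().sum(dim=2).mean(dim=1).sqrt()
    # Exponential moving average of the observed path length (the target).
    pl_mean = pl_mean + decay * (path_lengths.mean().detach() - pl_mean)
    # Penalize deviation from the moving-average target.
    penalty = (path_lengths - pl_mean).square().mean()
    return penalty, pl_mean
```

In a hypothetical training loop, pl_mean would start as torch.zeros([]) and the returned penalty would be added, with a small weight and typically only every few iterations, to the generator loss.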