ABSTRACT

Fusion of multimodal data can effectively improve the perception capability of road infrastructure. In this paper, a lightweight deep neural network is proposed to study the fusion segmentation of multimodal images captured under visible, infrared, and polarized light. The results show that different modalities contribute differently to the segmentation of different road materials. In particular, for the recognition of water on the road surface, segmentation performance improved by 35.6% after fusing AoLP (angle of linear polarization) images. With multimodal fusion segmentation, the mIoU (mean intersection over union) improved by 4.2% compared to segmentation of ordinary RGB images alone.
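
The reported mIoU presumably follows the standard definition, averaging the per-class intersection over union across all C segmentation classes; a minimal sketch of that definition:

\[
  \mathrm{mIoU} = \frac{1}{C} \sum_{c=1}^{C}
    \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c + \mathrm{FN}_c}
\]

where \(\mathrm{TP}_c\), \(\mathrm{FP}_c\), and \(\mathrm{FN}_c\) denote the true positives, false positives, and false negatives for class \(c\).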