ABSTRACT

Recent advances in sensor technology enable us to monitor bridges with multiple types of sensors and cameras. These devices produce multi-view sensor data that may consist of video streams from surveillance cameras, signals from accelerometers and strain meters, and so on. Each sensor provides a partial view of the target events on the bridge. Suppose we observe vehicles passing over the bridge: surveillance cameras provide the speed of the vehicles in addition to their identity, while multiple strain meters provide the speed of the vehicles as well as their weight. Different kinds of sensors thus yield the same information, such as vehicle speed, from different views, which improves the accuracy of speed estimation; they also provide complementary information, such as the weight and identity of each vehicle. To utilize such multi-view sensor data, we need to align the events observed in each data stream. This paper proposes a neural-based event alignment system consisting of two components: the first detects events in each sensor's data, and the second aligns the detected events across sensors. The event detection method depends on both the sensor type and the target event. This paper focuses on vehicle-passage events, using signal data collected from accelerometers and strain meters. We apply an event detection method that consists of (i) feature extraction by wavelet transform and (ii) peak detection by a neural reconstruction error model. Once the events are extracted from the sensor data, we align them using a simple yet effective procedure. The accuracy of the proposed system is evaluated using bridge health monitoring data.
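
To make the pipeline concrete, the following minimal Python sketch illustrates the two components on signal data. It is an assumption-laden stand-in, not the paper's implementation: the neural reconstruction error model is replaced here by a simple statistical threshold, the wavelet settings (db4, level 3) are illustrative rather than the paper's choices, and the alignment step is an assumed greedy nearest-neighbor matching of event timestamps, since the abstract describes the procedure only as simple yet effective. All file names and parameters are hypothetical.

    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def extract_features(signal, fs, wavelet="db4", level=3):
        # Wavelet decomposition; the magnitude of the level-1 detail
        # coefficients serves as an event-sensitive envelope. The wavelet
        # family and level are illustrative, not the paper's settings.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        envelope = np.abs(coeffs[-1])     # level-1 details, downsampled by 2
        return envelope, fs / 2.0         # effective sampling rate

    def detect_events(envelope, eff_fs, min_gap_s=1.0, k=3.0):
        # Threshold-based peak detection standing in for the paper's
        # neural reconstruction error model: keep peaks above mean + k*std.
        threshold = envelope.mean() + k * envelope.std()
        peaks, _ = find_peaks(envelope, height=threshold,
                              distance=max(1, int(min_gap_s * eff_fs)))
        return peaks / eff_fs             # approximate event times (s)

    def align_events(times_a, times_b, tolerance_s=0.5):
        # Assumed alignment scheme: greedily pair each event from sensor A
        # with the closest unmatched event from sensor B within a tolerance.
        pairs, used = [], set()
        if len(times_b) == 0:
            return pairs
        for ta in times_a:
            j = int(np.argmin(np.abs(times_b - ta)))
            if j not in used and abs(times_b[j] - ta) <= tolerance_s:
                pairs.append((ta, times_b[j]))
                used.add(j)
        return pairs

    # Usage sketch with hypothetical recordings from two sensors.
    fs = 100.0                            # assumed sampling rate (Hz)
    acc = np.load("accelerometer.npy")    # hypothetical file names
    strain = np.load("strain_meter.npy")
    t_acc = detect_events(*extract_features(acc, fs))
    t_strain = detect_events(*extract_features(strain, fs))
    matched = align_events(t_acc, t_strain)

Each matched pair of timestamps corresponds to one vehicle passage seen from two views, after which view-specific attributes (e.g., speed from one sensor, weight from another) can be fused per event.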