ABSTRACT

Human Activity Recognition (HAR) is one of the most active research topics in the field of human-computer interaction. It has a wide variety of applications in domains such as health rehabilitation, smart homes, smart grids, robotics, and human action prediction. HAR can be carried out through different approaches, including vision-based, sensor-based, radar-based, and Wi-Fi-based methods. Because Wi-Fi devices are ubiquitous and easy to deploy, Wi-Fi-based HAR has gained the interest of both academia and industry in recent years. Wi-Fi-based HAR can be implemented using two channel measurements: Channel State Information (CSI) and Received Signal Strength Indicator (RSSI). Recently, converting CSI data to images has been shown to increase the accuracy of activity prediction. However, previous research has not focused on extracting features from these converted images using image-processing techniques. In this study, we investigate three available CSI-based datasets and take advantage of Deep Learning (DL) with convolutional layers and an edge-detection technique to increase overall system accuracy. The Canny edge detector extracts the most important features of each image, and feeding these features to the DL model strengthens activity prediction. Across the three datasets, we observed accuracy improvements of 5%, 27%, and 37%, respectively.
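
To make the described pipeline concrete, the following sketch illustrates one way the edge-extraction step could precede a convolutional classifier, assuming OpenCV's Canny implementation and a small Keras CNN; the image size, Canny thresholds, and number of activity classes are illustrative placeholders, not values from the paper.

    # Illustrative sketch (not the exact pipeline of this work): Canny edge maps
    # computed from CSI-derived images and fed to a small CNN classifier.
    # Image size (64x64), Canny thresholds (100, 200), and the number of
    # activity classes (6) are assumed for demonstration only.
    import numpy as np
    import cv2
    import tensorflow as tf

    def extract_edges(csi_image: np.ndarray) -> np.ndarray:
        """Apply the Canny edge detector to a CSI-derived grayscale image."""
        img = cv2.normalize(csi_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(img, 100, 200)           # assumed thresholds
        return edges.astype(np.float32) / 255.0     # scale to [0, 1] for the CNN

    def build_cnn(input_shape=(64, 64, 1), num_classes=6) -> tf.keras.Model:
        """A small convolutional classifier operating on the edge maps."""
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])

    # Placeholder data standing in for real CSI-derived images and activity labels.
    csi_images = np.random.rand(8, 64, 64)
    labels = np.random.randint(0, 6, size=8)
    edge_maps = np.stack([extract_edges(x) for x in csi_images])[..., None]

    model = build_cnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(edge_maps, labels, epochs=1, verbose=0)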