ABSTRACT

Human Activity Recognition (HAR) through sensors has gained significant attention in artificial intelligence and ubiquitous computing, driven largely by the availability of low-cost sensors and sensor networks; as a result, HAR based on wearable sensor data has developed rapidly in recent years. HAR is the process of classifying body gestures and motions to infer states of action or behavior during physical activity, such as walking, jogging, standing, jumping, and lying down. Wearable devices gather data on individuals’ activity patterns by constantly measuring various parameters with sensors such as gyroscopes, cameras, microphones, light sensors, compasses, accelerometers, proximity sensors, and the Global Positioning System (GPS). In this work, we evaluated several machine learning (ML) classifiers on the mHealth dataset, tuning their parameters with GridSearchCV to determine the optimal settings, and we propose a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model for recognizing human activities. The experiments were carried out on raw data obtained from the accelerometer and gyroscope sensors. The CNN-LSTM model classified human activities accurately and achieved the best results, with a recall of 98.10%, a precision of 98.01%, and an F-score of 98%, outperforming the ML classifiers on the sensor data. Among the baseline models, which also performed well, the random forest (RF) achieved the best precision of 95.67%, recall of 95.21%, and F-score of 95.23%. These results show that the proposed method is more effective than conventional approaches.
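The abstract's hyperparameter search for the baseline classifiers can be sketched with scikit-learn's GridSearchCV. This is a minimal illustration only: the feature matrix, the parameter grid, and the random forest settings below are assumptions for demonstration, not the paper's actual configuration or the mHealth data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for windowed accelerometer/gyroscope features
# (illustrative shapes only; the real work uses the mHealth dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))    # 300 windows x 12 summary features
y = rng.integers(0, 4, size=300)  # 4 activity classes (e.g. walk, jog, stand, lie)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate hyperparameters (assumed grid); GridSearchCV cross-validates
# every combination and keeps the best-scoring one.
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 10],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,
    scoring="f1_macro",
)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("held-out macro F1:", search.score(X_te, y_te))
```

The same pattern applies to the other ML classifiers mentioned in the abstract: swap in a different estimator and an appropriate parameter grid.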