ABSTRACT

Music plays a significant role in most people's lives: listeners choose music according to their moods and interests, and listening to music is a common way to reduce stress. However, music that does not match the listener's current emotional state can be counterproductive. Existing music players offer no way to select songs based on the user's emotions, such as sadness, joy, anger, or a neutral state. According to recent studies, people respond strongly to music, and music has a significant effect on a person's brain activity. Moreover, as music libraries grow, listeners find it increasingly tedious to create and organize playlists manually, and existing techniques for automated playlist generation tend to be computationally inefficient, imprecise, and unfriendly to users, sometimes requiring additional hardware. The proposed method, which is based on the extraction of facial emotions, generates a playlist automatically, saving time and effort. The goal of this project is to build an application that uses a convolutional neural network (CNN), with features extracted from emotion datasets, to recognize the user's emotion and recommend music accordingly by distinguishing between different emotion classes. Using HTML, CSS, and JavaScript, a smart music player is implemented as a web page whose playlist is set according to the detected emotion. Because it reflects the user's interests and feelings and is delivered as a web page for ease of use, the system is designed to be user-friendly.