SMU Data Science Review

Abstract

Music can stimulate emotions within us; hence it is often called the "language of emotion." This study explores emotion as an additional feature for generating a playlist with a deep learning model, with the goal of improving current music recommendation systems. Emotion labels were collected from human subjects for each song in a sample of the data. Because the effect of music on emotion is subjective and differs from person to person, a considerable number of subjects would be needed to reduce that subjectivity; given limited resources, a portion of the data was labeled with emotion by subjects and the remainder was labeled using an active learning model. A content-based recommendation system was then built using a GAN (Generative Adversarial Network). This research led to two recommendation models, one utilizing emotion and one not. Cosine similarity and Euclidean distance were the two metrics used to judge the validity of the models. The results showed that the model utilizing emotion performed better than the model that did not, but the difference between the two was not statistically significant. One can conclude that there is promise in using emotion as a feature when recommending music. Further research would be needed to mitigate certain obstacles and to utilize better resources for emotional data extraction.
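For reference, the two evaluation metrics mentioned above can be computed as in the minimal sketch below. The feature vectors shown are hypothetical placeholders for a song's learned representation, not values taken from the study's data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (closer to 1 = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Straight-line distance between two feature vectors (closer to 0 = more similar)."""
    return float(np.linalg.norm(a - b))

# Hypothetical feature vectors for a seed song and a recommended song
seed = np.array([0.8, 0.1, 0.5, 0.3])
recommended = np.array([0.7, 0.2, 0.4, 0.35])

print(cosine_similarity(seed, recommended))
print(euclidean_distance(seed, recommended))
```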

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
