Project Team
Students
Bhavika Jain
Computer Science, Mathematics
University Park
Faculty Mentors
Mahfuza Farooque
University Park
School of Electrical Engineering and Computer Science, Computer Science and Engineering
Project
Project Abstract
Social media content can significantly impact users’ mental well-being, especially for those frequently exposed to negative or emotionally charged material. This research aims to understand and mitigate these effects by developing a multi-modal emotion detection framework for social media videos. Sentiment analysis of videos is crucial for mental health because it helps identify and mitigate the impact of harmful content, promoting healthier online interactions (Choudhury et al., 2022). We integrate feature extraction techniques from several modalities: `librosa` and `moviepy` for audio analysis, FER and ResNet-50 for visual frames, TF-IDF and LSTM models for text sentiment, and a neural network for emoji sentiment analysis. Using logistic regression with dynamic weights, our combined model addresses challenges in data collection, feature extraction, and overfitting, resulting in an efficient sentiment classification system with an accuracy of 98.30%. This accuracy surpasses similar approaches presented in recent studies, such as those by Asad et al. (2024) and Chen et al. (2021), highlighting the effectiveness of our approach in capturing and analyzing emotional nuances in social media content. This study underscores the potential of multi-modal analysis to provide deeper insights into user emotions and offers significant implications for enhancing user engagement and content personalization. Challenges such as integrating diverse data sources and addressing potential biases in user-generated content remain a focus of ongoing research.
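To make the fusion step described above concrete, the sketch below shows a minimal late-fusion setup in which one sentiment score per modality (audio, visual, text, emoji) is weighted and then classified with logistic regression. The synthetic scores, the reliability-based weights, and the variable names are illustrative assumptions; the abstract does not specify the actual weighting scheme, and in the real pipeline the scores would come from the `librosa`/`moviepy`, FER/ResNet-50, TF-IDF/LSTM, and emoji models rather than random data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic ground-truth labels: 0 = negative, 1 = positive.
labels = rng.integers(0, 2, size=n)


def modality_score(noise_scale: float) -> np.ndarray:
    """Stand-in for one modality's sentiment score (noisier = less reliable)."""
    return labels + rng.normal(0.0, noise_scale, size=n)


# Columns: audio, visual, text, emoji scores (hypothetical noise levels).
noise_levels = np.array([0.6, 0.4, 0.3, 0.8])
X = np.column_stack([modality_score(s) for s in noise_levels])

# Illustrative "dynamic" weighting: scale each modality by its assumed
# reliability (inverse noise), renormalized to sum to one.
weights = (1.0 / noise_levels) / (1.0 / noise_levels).sum()
X_weighted = X * weights

X_tr, X_te, y_tr, y_te = train_test_split(
    X_weighted, labels, test_size=0.2, random_state=0, stratify=labels
)

# Logistic regression as the late-fusion classifier over modality scores.
fusion = LogisticRegression(max_iter=1000)
fusion.fit(X_tr, y_tr)
print(f"Fusion accuracy: {accuracy_score(y_te, fusion.predict(X_te)):.4f}")
```

Treating each modality as a single weighted feature keeps the fusion classifier small and makes its learned coefficients easy to read as per-modality importances; the reported 98.30% accuracy comes from the authors' full pipeline, not from this toy example.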