Abstract In the rapidly evolving field of artificial intelligence, the importance of multimodal sentiment analysis has never been more evident, especially amid the ongoing COVID-19 pandemic. Our research addresses the critical need to understand public sentiment across the many dimensions of this crisis by integrating data from multiple modalities, such as text, images, audio, and videos sourced from platforms like Twitter. Conventional methods, which focus primarily on text analysis, often fall short in capturing the nuanced intricacies of emotional states, necessitating a more comprehensive approach.
To tackle this challenge, our proposed framework introduces a novel hybrid model, IChOA-CNN-LSTM, which leverages Convolutional Neural Networks (CNNs) for precise image feature extraction, Long Short-Term Memory (LSTM) networks for sequential data analysis, and an Improved Chimp Optimization Algorithm (IChOA) for effective feature fusion. Our model achieves an accuracy of 97.8%, outperforming existing approaches in the field.
Additionally, by integrating the GeoCoV19 dataset, we facilitate a comprehensive analysis that spans linguistic and geographical boundaries, enriching our understanding of global pandemic discourse and providing critical insights for informed decision-making in public health crises. Through this holistic approach and innovative techniques, our research advances multimodal sentiment analysis, offering a robust framework for deciphering the complex interplay of emotions during unprecedented global challenges like the COVID-19 pandemic.
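To make the fusion step concrete: the abstract describes combining per-modality features (CNN image features, LSTM text features) with weights chosen by an optimization algorithm. The sketch below is a minimal, stdlib-only illustration of that idea, with toy feature values and a plain random search standing in for the Improved Chimp Optimization Algorithm; the variable names, data, and fusion rule are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy scores standing in for CNN image features and LSTM text features
# for four tweets (illustrative values, not real model outputs).
image_feats = [0.9, 0.2, 0.8, 0.1]
text_feats = [0.7, 0.4, 0.6, 0.3]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

def fuse(alpha):
    """Weighted fusion of the two modalities with mixing weight alpha."""
    return [alpha * i + (1 - alpha) * t
            for i, t in zip(image_feats, text_feats)]

def accuracy(alpha, threshold=0.5):
    """Threshold the fused scores and measure classification accuracy."""
    preds = [1 if f > threshold else 0 for f in fuse(alpha)]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Random search over alpha, a simple stand-in for the IChOA fusion step:
# the real algorithm would search a much larger fusion-parameter space.
random.seed(0)
best_alpha, best_acc = 0.5, accuracy(0.5)
for _ in range(200):
    candidate = random.random()
    acc = accuracy(candidate)
    if acc > best_acc:
        best_alpha, best_acc = candidate, acc
```

On this toy data the fused scores separate the two classes cleanly; in practice the optimizer would tune many fusion parameters jointly against a validation objective rather than a single scalar weight.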