MuSe 2020 -
The Multimodal Sentiment Analysis
in Real-life Media Challenge
Multimodal Emotion Recognition - Topic Engagement Classification - Trustworthiness Quantification
October 12 - Seattle, United States
The Multimodal Sentiment Analysis Challenge and Workshop (MuSe 2020), focusing on the tasks of sentiment recognition as well as topic engagement and trustworthiness detection, is a satellite event of ACM MM 2020 (Seattle, US, October 2020). It is the first competition aimed at comparing multimedia processing and deep learning methods for automatic, integrated audio-visual and text-based sentiment and emotion sensing under a common set of experimental conditions.
The goal of the Challenge is to provide a common benchmark for multimodal information processing and to bring together the Affective Computing and Sentiment Analysis communities, comparing the merits of multimodal fusion for the three core modalities under well-defined conditions. A further motivation is the need to advance sentiment and emotion recognition systems so that they can handle previously unexplored, fully naturalistic behaviour in large volumes of in-the-wild data, as this is exactly the type of data that both multimedia and human-machine/human-robot communication interfaces face in the real world.
We are calling for teams to participate in three Sub-Challenges:
Multimodal Sentiment in-the-Wild Sub-challenge (MuSe-Wild): Predicting the level of emotional dimensions (arousal, valence) in a time-continuous manner from audio-visual recordings.
Multimodal Emotion-Target Sub-challenge (MuSe-Topic): Predicting the 10-class domain-specific topic as the target of emotion, along with 3-class (low, medium, high) levels of valence and arousal.
Multimodal Trustworthiness Sub-challenge (MuSe-Trust): Predicting the level of trustworthiness of user-generated audio-visual content in a sequential manner, utilising a diverse range of features and, optionally, emotional (arousal and valence) predictions.
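Time-continuous affect prediction tasks such as MuSe-Wild are commonly scored with the Concordance Correlation Coefficient (CCC), which penalises both low correlation and scale/location shifts between the predicted and gold annotation signals. A minimal sketch of this metric (the authoritative evaluation protocol is the one released by the challenge organisers):

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient between two 1-D signals,
    e.g. gold and predicted per-frame arousal or valence traces."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    # Covariance between the two signals (population normalisation).
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    # CCC = 1 only for a perfect match; mean offsets shrink the score
    # even when the Pearson correlation is 1.
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

gold = np.array([0.1, 0.4, 0.3, 0.8, 0.6])
print(ccc(gold, gold))        # perfect prediction -> 1.0
print(ccc(gold, gold + 0.5))  # same shape, constant offset -> below 1.0
```

Unlike plain Pearson correlation, a prediction that tracks the annotation's shape but sits at the wrong absolute level is penalised, which is why CCC is the usual choice for continuous arousal/valence benchmarks.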