Sentiment, Emotion, Physiological-Emotion, and Stress
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the Affective Computing, Sentiment Analysis, and Health Informatics communities to compare the merits of multimodal fusion across a large number of modalities under well-defined conditions.
We are pleased to announce that, now that the challenges have closed, the MuSe 2020 and MuSe 2021 data are available for research projects (academic institutions only)! To get access to the MuSe-CaR and/or Ulm-TSST data sets and the corresponding challenge labels, please download the respective EULA(s) (End User License Agreements).
MuSe 2021 featured four sub-challenges:
Based on last year's MuSe-CaR dataset, extended by a novel gold-standard fusion method:
Multimodal Continuous Emotions in-the-Wild Sub-challenge (MuSe-Wilder): Predicting the level of emotional dimensions (arousal, valence) in a time-continuous manner from audio-visual recordings.
Multimodal Sentiment Sub-challenge (MuSe-Sent): Predicting discrete intensity classes of emotion, based on valence and arousal, for segments of audio-visual recordings.
Based on the novel audio-visual-text Ulm-TSST dataset, covering people in stressed dispositions:
Multimodal Emotional Stress Sub-challenge (MuSe-Stress): Predicting the level of emotional arousal and valence in a time-continuous manner from audio-visual recordings.
Multimodal Physiological-Arousal Sub-challenge (MuSe-Physio): Predicting, as a regression task, the level of psycho-physiological arousal derived from a) human annotations fused with b) galvanic skin response (also known as electrodermal activity, EDA) signals of the stressed participants. Audio-visual recordings as well as other biological signals (heart rate and respiration) are offered for modelling.
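Several of the sub-challenges (MuSe-Wilder, MuSe-Stress, MuSe-Physio) frame emotion prediction as time-continuous regression over feature sequences extracted from the recordings. The snippet below is a minimal, illustrative sketch of such a per-timestep regressor; it is not the official challenge baseline, and the feature dimensionality, model sizes, and data shown are placeholder assumptions.

```python
# Minimal sketch of a time-continuous valence/arousal regressor over
# pre-extracted audio-visual features. Not the official MuSe baseline;
# feature_dim, hidden_dim, and the random tensors are illustrative only.
import torch
import torch.nn as nn

class ContinuousEmotionRegressor(nn.Module):
    def __init__(self, feature_dim=256, hidden_dim=64, num_targets=2):
        super().__init__()
        # Recurrent encoder over the frame-level feature sequence
        self.rnn = nn.LSTM(feature_dim, hidden_dim,
                           batch_first=True, bidirectional=True)
        # Per-timestep head predicting e.g. (arousal, valence)
        self.head = nn.Linear(2 * hidden_dim, num_targets)

    def forward(self, features):
        # features: (batch, time, feature_dim) fused audio-visual features
        out, _ = self.rnn(features)
        return self.head(out)  # (batch, time, num_targets)

# Illustrative usage with random tensors standing in for real features/labels
model = ContinuousEmotionRegressor()
features = torch.randn(4, 500, 256)   # 4 clips, 500 timesteps of features
targets = torch.randn(4, 500, 2)      # continuous arousal/valence traces
loss = nn.MSELoss()(model(features), targets)
loss.backward()
```

The same sequence-to-sequence regression structure applies whether the targets are fused human annotations (MuSe-Wilder, MuSe-Stress) or annotations fused with EDA (MuSe-Physio); only the gold-standard signal changes.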