The MuSe 2021 datasets can only be used for the purpose of benchmarking audio, video, or audiovisual affect and fusion recognition systems, according to the guidelines of MuSe 2021.
The signee, who is responsible for the team, must hold a permanent position at an academic institution. Up to five other researchers affiliated with the same institution (e.g. PhD students) may be named, which will allow them to work with this dataset. We are not responsible for the content or the meaning of the videos.
If unprotected, the data must be stored only on the computers of the signees of this document. If stored on a local network, the data must be subject to user-level access control.
The user is not allowed to use the database for any commercial purpose. The database and annotations are available for non-commercial research purposes only. Commercial purposes include, but are not limited to:
proving the efficiency of commercial systems
testing commercial systems
using screenshots of subjects from the database in advertisements
selling data from the database
The user may not distribute the database in any way. No portion (screenshots, audio clips, etc.) may be distributed in any publication or presentation, with the exception of presentations and documents in the context of this challenge and the workshop. Participants agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the data, the images, or any derived data. They also agree not to further copy, publish, or distribute any portion of the annotations of the dataset. The user will forward all requests for copies of the database to the MuSe database administrators.
Participants may use the raw data as well as the preprocessed scene/background/audio/body-pose etc. features along with the provided information.
The participants are free to use external data for training along with the MuSe data. However, this must be reproducible and clearly discussed in the accompanying paper. The participants are also free to use any commercial or academic feature extractors, pre-trained networks, and libraries.
At the end of the challenges, we may ask the best teams to send us:
- team name, name of team members
- the predictions on the test set (or the submission number of a previous one)
- a link to a Github repository or the zipped source code including parameters for the replication of the results
- a link to an arXiv paper of 2-6 pages describing their proposed methodology, the data used, and the results.
We encourage the participants to submit their solution to our workshop as a paper describing the approach and results. The winners of each sub-challenge must submit a paper in order to be announced as winners. To submit a paper to the Challenge Workshop in the ACM Multimedia proceedings, the report has to meet the conference requirements.