Keynote speakers

A full-day workshop in conjunction with ACM Multimedia 2021

Sidney D’Mello

University of Colorado Boulder, US

Sidney D’Mello (PhD in Computer Science) is an Associate Professor in the Institute of Cognitive Science and Department of Computer Science at the University of Colorado Boulder. He is interested in the dynamic interplay between cognition and emotion while individuals and groups engage in complex real-world activities. He applies insights gleaned from this basic research program to develop intelligent technologies that help people achieve their fullest potential by coordinating what they think and feel with what they know and do. D’Mello has co-edited seven books and published more than 300 journal papers, book chapters, and conference proceedings. His research has received 16 awards at international conferences and has been funded by numerous grants. He serves, or has served, as an Associate Editor and editorial board member for 11 journals. He leads the NSF National Institute for Student-Agent Teaming (2020-2025), which aims to develop AI technologies to facilitate rich socio-collaborative learning experiences for all students.

Getting Really Wild: Challenges and Opportunities of Real-World Multimodal Affect Detection
Abstract: Affect detection in the “real” wild – where people go about their daily routines in their homes and workplaces – is arguably a different problem than affect detection in the lab or in the “quasi” wild (e.g., YouTube videos). How will our affect detection systems hold up when put to the test in the real wild? Some in the Affective Computing community had an opportunity to address this question as part of the MOSAIC (Multimodal Objective Sensing to Assess Individuals with Context [1]) program which ran from 2017 to 2020. Results were sobering, but informative. I’ll discuss those efforts with an emphasis on performance achieved, insights gleaned, challenges faced, and lessons learned.

Panagiotis Tzirakis

hume.ai, US

Dr. Tzirakis is a computer scientist and AI expert with expertise in deep learning and emotion recognition. He earned his PhD with the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London, where he focused on multimodal emotion recognition. He has published in top outlets including Information Fusion, the International Journal of Computer Vision, and several IEEE conference proceedings, on topics including 3D facial motion synthesis, multi-channel speech enhancement, the detection of gibbon calls, and emotion recognition from audio and video. He recently joined the New York startup hume.ai to develop an unbiased and truly empathic AI.

New Directions in Emotion Theory
Abstract: Emotional intelligence is a fundamental component of complete and natural interaction between humans and machines. Toward this goal, several theories of emotion have been exploited in the affective computing domain. Among them, two major approaches to characterizing emotion dominate: categorical models and dimensional models. Whereas categorical models hold that there is a small set of basic emotions that are universal across cultures (e.g., Ekman’s model), dimensional approaches suggest that emotions are not independent but related to one another in a systematic manner (e.g., the Circumplex Model of Affect). Although these models have dominated affective computing research, recent studies in emotion theory have shown that they capture only a small fraction of the variance in what people perceive.
In this talk, I will present new directions in emotion theory that can better capture the emotional behavior of individuals. First, I will discuss the statistical analysis behind key emotions conveyed in human vocalizations, speech prosody, and facial expressions, and how these relate to conventional categorical and dimensional models. Based on these new emotional models, I will describe new datasets we have collected at Hume AI and show the different patterns captured when training deep neural network models.