Third Workshop on Applied
Multimodal Affect Recognition

Workshop at the 26th International Conference on Pattern Recognition

Call for Papers

About The Workshop

Novel applications of affective computing have emerged in recent years in domains ranging from health care to fifth-generation (5G) mobile networks. Many of these applications have achieved improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize how industries and sectors use information gained from recognizing a person's emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Further, multimodal classification methods have proven highly effective at minimizing misclassification error in practice and under dynamic conditions, and they tend to be more stable over time than methods relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition.

To continue the field's trend from lab to practice and to encourage new applications of affective computing, this workshop provides a unique forum for researchers to exchange ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices. Importantly, the workshop also addresses the ethical use of affective computing in real-world scenarios, welcoming discussions on privacy, manipulation of users, and public fears and misconceptions regarding affective computing. This workshop seeks to explore the intersection between theory and ethical applications of affective computing, with a specific focus on multimodal data for affect recognition (e.g., expression and physiological signals) using pattern recognition, computer vision, and image processing.

Workshop Schedule

Coming Soon

Call for Papers

To investigate ethical, applied affect recognition, this workshop will leverage multimodal data that includes, but is not limited to, 2D, 3D, thermal, brain, physiological, and mobile sensor signals. The workshop aims to expose current and emerging applications of affective computing to spark future work, with a specific focus on the ethical considerations of such work, including how to mitigate ethical concerns. Accordingly, workshop topics will focus on questions including, but not limited to:

  • What inter-correlations exist between facial affect (e.g., expression) and other modalities (e.g., EEG)?
  • How can multimodal data be leveraged to create real-world applications of affect recognition such as prediction of stress, real-time ubiquitous emotion recognition, and impact of mood on ubiquitous subject identification?
  • How can we facilitate the collection of multimodal data for applied affect recognition?
  • What are the ethical implications of working on such questions?
  • How can we mitigate the ethical concerns that such work produces?
  • Can we positively address public fears and misconceptions regarding applied affective computing?
To address these questions, AMAR 2022 targets researchers in affective computing, biometrics, BCI, computer vision, human-computer interaction, behavioral sciences, social sciences, and policy making who are interested in leveraging multimodal data for ethical, applied affect recognition.

Topics of interest include, but are not limited to, ethical applications of the following:
  • Health applications with a focus on multimodal affect
  • Multimodal affective computing for cybersecurity applications (e.g., biometrics and IoT security)
  • Inter-correlations and fusion of ubiquitous multimodal data as they relate to applied emotion recognition (e.g., face and EEG data)
  • Leveraging ubiquitous devices to create reliable multimodal applications for emotion recognition
  • Applications using in-the-wild vs. lab-controlled data
  • Facilitation and collection of multimodal data (e.g., ubiquitous data) for applied emotion recognition
  • Engineering applications of multimodal affect (e.g., robotics, social engineering, domain inspired hardware / sensing technologies, etc.)
  • Privacy and security
  • Institutionalized bias
  • Trustworthy applications of affective computing
  • Equal access to ethical applications of affective computing (e.g., medical applications inaccessible due to wealth inequality)
NOTE: Topics that do not demonstrate an existing or potential application of affective computing / emotion recognition are not topics of interest for this workshop.

Workshop candidates are invited to submit papers of up to 6 pages, plus unlimited pages for references, in the IEEE format following the ICPR conference template. Review is single-blind: all persons who have made a substantial contribution to the work should be listed as authors. Submissions to AMAR 2022 should have no substantial overlap with any other paper submitted to ICPR 2022 or any paper already published. Papers presented at AMAR 2022 will appear in the IEEE Xplore digital library.

How to Submit:
Paper submissions will be handled using EasyChair.

Important dates:
Paper submission: June 6, 2022
Decision to Authors: June 20, 2022
Camera-ready papers due: July 6, 2022
Workshop: August 21, 2022

Accepted Papers

Coming Soon

Workshop Organizers

Dr. Shaun Canavan
University of South Florida
Dr. Tempestt Neal
University of South Florida
Dr. Marvin Andujar
University of South Florida
Dr. Saurabh Hinduja
University of Pittsburgh
Dr. Lijun Yin
Binghamton University

Program Committee

  1. Kalaivani Sundararajan, Koh Young Research Canada
  2. David Crandall, Indiana University Bloomington
  3. Sayde King, University of South Florida
  4. Khadija Zanna, Rice University
  5. Nima Karimian, San Jose State University
  6. Huiyuan Yang, Rice University
  7. Venkata Sri Chakra Kumar, Cornell University
