ICMI 2024 onsite Workshop on

Eye Tracking for Multimodal Human-Centric Computing

About the Workshop

Over the last 20 years, eye tracking has evolved from a diagnostic tool into a powerful input modality for real-time interactive systems. This was partly driven by advances in eye tracking hardware in terms of affordability, availability, performance, and form factor. Eye tracking was first used in niche applications in the '80s and '90s and then gathered significant attention through research on gaze-based interaction and gaze-supported multimodal interaction. In the last 10-15 years, a third very promising direction has emerged: eye-based user and context modeling, i.e., treating the eyes as an additional modality that provides rich information about users' (interactive) behavior and their (interaction) context. The eyes reveal information about visual activities, personality, user intents and goals, attention, expertise and other cognitive abilities, and emotions, to name just a few. Eye tracking therefore bears great potential for the development of human-centered multimodal AI systems. Gaze-based multimodal user models can be used, e.g., to generate direct feedback that steers the training of AI systems, or to trigger explicit feedback requests (or show model explanations) when the user seems to disagree with the output of an AI system.

  • [1] Barral, O. et al. 2020. Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations. Proceedings of the 2020 International Conference on Multimodal Interaction (New York, NY, USA, 2020), 163–173.
  • [2] Barz, M. et al. 2021. Automatic Recognition and Augmentation of Attended Objects in Real-time using Eye Tracking and a Head-mounted Display. ACM Symposium on Eye Tracking Research and Applications (New York, NY, USA, May 2021), 1–4.
  • [3] Barz, M. et al. 2020. Visual Search Target Inference in Natural Interaction Settings with Machine Learning. ACM Symposium on Eye Tracking Research and Applications (2020), 1–8.
  • [4] Barz, M. and Sonntag, D. 2021. Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze. Sensors. 21, 12 (Jan. 2021), 4143. DOI:https://doi.org/10.3390/s21124143.
  • [5] Bednarik, R. et al. 2012. What do you want to do next: a novel approach for intent prediction in gaze-based interaction. Proceedings of the Symposium on Eye Tracking Research and Applications (New York, NY, USA, 2012), 83–90.
  • [6] Bulling, A. et al. 2009. Eye movement analysis for activity recognition. Proceedings of the 11th International Conference on Ubiquitous Computing (New York, NY, USA, 2009), 41–50.
  • [7] Bulling, A. et al. 2011. What’s in the Eyes for Context-Awareness? IEEE Pervasive Computing. 10, 2 (Apr. 2011), 48–57. DOI:https://doi.org/10.1109/MPRV.2010.49.
  • [8] Duchowski, A.T. 2018. Gaze-based interaction: A 30 year retrospective. Computers & Graphics. 73, (2018), 59–69. DOI:https://doi.org/10.1016/j.cag.2018.04.002.
  • [9] Hoppe, S. et al. 2018. Eye Movements During Everyday Behavior Predict Personality Traits. Frontiers in Human Neuroscience. 12, (2018). DOI:https://doi.org/10.3389/fnhum.2018.00105.
  • [10] Lallé, S. et al. 2021. Gaze-Driven Adaptive Interventions for Magazine-Style Narrative Visualizations. IEEE Transactions on Visualization and Computer Graphics. 27, 6 (2021), 2941–2952. DOI:https://doi.org/10.1109/TVCG.2019.2958540.
  • [11] Liu, Y. et al. 2009. Who is the expert? Analyzing gaze data to predict expertise level in collaborative applications. 2009 IEEE International Conference on Multimedia and Expo (2009), 898–901.
  • [12] Majaranta, P. and Bulling, A. 2014. Eye Tracking and Eye-Based Human–Computer Interaction. Advances in Physiological Computing. S.H. Fairclough and K. Gilleade, eds. Springer. 39–65.
  • [13] Oviatt, S. et al. eds. 2019. The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions. Association for Computing Machinery and Morgan & Claypool.
  • [14] Oviatt, S. et al. eds. 2017. The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations. Association for Computing Machinery and Morgan & Claypool.
  • [15] Qvarfordt, P. 2017. Gaze-Informed Multimodal Interaction. The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1. Association for Computing Machinery and Morgan & Claypool. 365–402.
  • [16] Sims, S.D. and Conati, C. 2020. A Neural Architecture for Detecting User Confusion in Eye-tracking Data. Proceedings of the 2020 International Conference on Multimodal Interaction (New York, NY, USA, 2020), 15–23.

Call for Papers

The goal of this workshop is to bring together researchers from eye tracking, multimodal human-computer interaction, and artificial intelligence. We welcome contributions on the following topics:

  • Methods and systems to analyze everyday eye movement behavior
  • Real-time vs. post-hoc analysis and modeling
  • Eye tracking in human-centered AI systems
  • Adaptive gaze-based and gaze-supported multimodal user interfaces
  • Eye-based user modeling with limited data
  • Eye tracking for multimodal user modeling
  • Eye-supported multimodal activity and context recognition
  • Computer vision methods for gaze estimation and multimodal behavior analysis
  • Gaze sensing systems: real-world benchmarks, requirements, techniques
  • Privacy-preserving eye tracking
  • Repositories and datasets
  • Focused reviews and meta-analyses

Important Dates

  • Submission Deadline: July 22nd, 2024 (23:59 CEST)
  • Notification of Acceptance: August 11th, 2024 (23:59 CEST)
  • Camera-ready Papers Due: August 16th, 2024 (23:59 CEST)
  • Workshop Date: TBA

Submission Instructions

Submissions should be 4-8 pages long, excluding references, and anonymized for the double-blind review process. Please ensure that your submission adheres to the ACM Sigconf format, which is available on the ACM Template Website.

Submissions can be made through OpenReview.

Organizers


Michael Barz

German Research Center for Artificial Intelligence (DFKI)


Roman Bednarik

University of Eastern Finland


Andreas Bulling

University of Stuttgart


Cristina Conati

University of British Columbia


Daniel Sonntag

University of Oldenburg & German Research Center for Artificial Intelligence (DFKI)