Eye Tracking for Multimodal Human-Centric Computing
About the Workshop
Over the last 20 years, eye tracking has evolved from a diagnostic tool into a powerful input modality for real-time interactive systems. This shift was partly driven by advances in eye tracking hardware: devices have become more affordable, more widely available, more performant, and smaller in form factor. Eye tracking was first used in niche applications in the '80s and '90s and later gained significant attention through research on gaze-based interaction and gaze-supported multimodal interaction [8, 12, 15]. In the last 10-15 years, a third, very promising direction has emerged: eye-based user and context modeling, i.e., treating the eyes as an additional modality that provides rich information about users' (interactive) behavior and their (interaction) context [4, 7, 10, 13, 14]. The eyes reveal information about visual activities [6], personality [9], user intents and goals [3, 5], attention [2], expertise and other cognitive abilities [1, 11], and emotions [16], to name just a few. Eye tracking therefore holds great potential for the development of human-centered multimodal AI systems. Gaze-based multimodal user models can, for example, generate direct feedback to steer the training of AI systems, or trigger explicit feedback requests (or show model explanations) when the user appears to disagree with the output of an AI system.
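As a purely illustrative sketch of that last idea, the snippet below shows how a simple heuristic over fixation features could trigger an explicit feedback request when the user appears to disagree with an AI system's output. All names, features, thresholds, and weights here are hypothetical assumptions chosen for illustration; real systems would use learned models such as those discussed in the references below [e.g., 16].

```python
# Minimal, hypothetical sketch: a heuristic gaze-based "disagreement" signal
# that triggers an explicit feedback request. All thresholds are invented.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float            # gaze position on screen, in pixels
    y: float
    duration_ms: float  # fixation duration in milliseconds

def disagreement_score(fixations: List[Fixation],
                       output_region: Tuple[float, float, float, float]) -> float:
    """Toy heuristic: long total dwell time on, and many revisits to, the
    region showing the AI output are taken as a weak disagreement signal."""
    x0, y0, x1, y1 = output_region
    on_output = [f for f in fixations
                 if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    dwell_ms = sum(f.duration_ms for f in on_output)
    revisits = len(on_output)
    # Cap both signals at 1.0 and average them (arbitrary normalization).
    return 0.5 * min(dwell_ms / 5000.0, 1.0) + 0.5 * min(revisits / 10.0, 1.0)

def maybe_request_feedback(fixations: List[Fixation],
                           output_region: Tuple[float, float, float, float],
                           threshold: float = 0.7) -> bool:
    """Ask for explicit feedback if the disagreement score exceeds a threshold."""
    if disagreement_score(fixations, output_region) >= threshold:
        print("Possible disagreement detected: show an explanation "
              "and request explicit user feedback.")
        return True
    return False

# Example with synthetic fixations clustered on the output region.
fixations = [Fixation(110 + i, 210 + i, 600) for i in range(9)]
maybe_request_feedback(fixations, output_region=(100, 200, 400, 300))
```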
- [1] Barral, O. et al. 2020. Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations. Proceedings of the 2020 International Conference on Multimodal Interaction (New York, NY, USA, 2020), 163–173.
- [2] Barz, M. et al. 2021. Automatic Recognition and Augmentation of Attended Objects in Real-time using Eye Tracking and a Head-mounted Display. ACM Symposium on Eye Tracking Research and Applications (New York, NY, USA, May 2021), 1–4.
- [3] Barz, M. et al. 2020. Visual Search Target Inference in Natural Interaction Settings with Machine Learning. ACM Symposium on Eye Tracking Research and Applications (2020), 1–8.
- [4] Barz, M. and Sonntag, D. 2021. Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze. Sensors. 21, 12 (Jan. 2021), 4143. DOI: https://doi.org/10.3390/s21124143.
- [5] Bednarik, R. et al. 2012. What do you want to do next: a novel approach for intent prediction in gaze-based interaction. Proceedings of the Symposium on Eye Tracking Research and Applications (New York, NY, USA, 2012), 83–90.
- [6] Bulling, A. et al. 2009. Eye movement analysis for activity recognition. Proceedings of the 11th International Conference on Ubiquitous Computing (New York, NY, USA, 2009), 41–50.
- [7] Bulling, A. et al. 2011. What’s in the Eyes for Context-Awareness? IEEE Pervasive Computing. 10, 2 (Apr. 2011), 48–57. DOI: https://doi.org/10.1109/MPRV.2010.49.
- [8] Duchowski, A.T. 2018. Gaze-based interaction: A 30 year retrospective. Computers & Graphics. 73, (2018), 59–69. DOI: https://doi.org/10.1016/j.cag.2018.04.002.
- [9] Hoppe, S. et al. 2018. Eye Movements During Everyday Behavior Predict Personality Traits. Frontiers in Human Neuroscience. 12, (2018). DOI: https://doi.org/10.3389/fnhum.2018.00105.
- [10] Lallé, S. et al. 2021. Gaze-Driven Adaptive Interventions for Magazine-Style Narrative Visualizations. IEEE Transactions on Visualization and Computer Graphics. 27, 6 (2021), 2941–2952. DOI: https://doi.org/10.1109/TVCG.2019.2958540.
- [11] Liu, Y. et al. 2009. Who is the expert? Analyzing gaze data to predict expertise level in collaborative applications. 2009 IEEE International Conference on Multimedia and Expo (2009), 898–901.
- [12] Majaranta, P. and Bulling, A. 2014. Eye Tracking and Eye-Based Human–Computer Interaction. Advances in Physiological Computing. S.H. Fairclough and K. Gilleade, eds. Springer. 39–65.
- [13] Oviatt, S. et al. eds. 2019. The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions. Association for Computing Machinery and Morgan & Claypool.
- [14] Oviatt, S. et al. eds. 2017. The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations. Association for Computing Machinery and Morgan & Claypool.
- [15] Qvarfordt, P. 2017. Gaze-Informed Multimodal Interaction. The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1. Association for Computing Machinery and Morgan & Claypool. 365–402.
- [16] Sims, S.D. and Conati, C. 2020. A Neural Architecture for Detecting User Confusion in Eye-tracking Data. Proceedings of the 2020 International Conference on Multimodal Interaction (New York, NY, USA, 2020), 15–23.
Call for Papers
The goal of this workshop is to bring together researchers from eye tracking, multimodal human-computer interaction, and artificial intelligence. We welcome contributions on the following topics:
- Methods and systems to analyze everyday eye movement behavior
- Real-time vs. post-hoc analysis and modeling
- Eye tracking in human-centered AI systems
- Adaptive gaze-based and gaze-supported multimodal user interfaces
- Eye-based user modeling with limited data
- Eye tracking for multimodal user modeling
- Eye-supported multimodal activity and context recognition
- Computer vision methods for gaze estimation and multimodal behavior analysis
- Gaze sensing systems: real-world benchmarks, requirements, and techniques
- Privacy-preserving eye tracking
- Repositories and datasets
- Focused reviews and meta-analyses
Important Dates
- Submission Deadline: July 31st, 2024 (15:00 CEST), extended from July 22nd, 2024 (23:59 CEST)
- Notification of Acceptance: August 11th, 2024 (23:59 CEST)
- Camera-ready Papers Due: August 16th, 2024 (23:59 CEST)
- Workshop Date: November 4th, 2024, 08:00 to 12:30
Submission Instructions
Submissions should be 4-8 pages, excluding references, and must be anonymized for the double-blind review process. Please ensure that your submission adheres to the ACM SIGCONF format, which is available on the ACM Template Website. Accepted papers will be published in the ICMI Companion Proceedings. Submissions are handled through OpenReview.
Program
The event will take place on Monday, November 4th, 2024, from 08:00 to 12:30 at the Crowne Plaza San José Conference Center, Room 4.
- 08:00: Welcome
- 08:10: Paper Presentations:
  - 3D Gaze Tracking for Studying Collaborative Interactions in Mixed-Reality Environments · Eduardo Davalos, Yike Zhang, Ashwin T S, Joyce Horn Fonteles, Umesh Timalsina, Gautam Biswas
  - Gaze-Informed Vision Transformers: Predicting Driving Decisions Under Uncertainty · Sharath Koorathota, Nikolas Papadopoulos, Jia Li Ma, Shruti Kumar, Xiaoxiao Sun, Arunesh Mittal, Patrick Adelman, Paul Sajda
  - Detecting when Users Disagree with Generated Captions · Omair Shahzad Bhatti, Harshinee Sriram, Abdulrahman Mohamed Selim, Cristina Conati, Michael Barz, Daniel Sonntag
  - Investigating the Impact of Illumination Change on the Accuracy of Head-Mounted Eye Trackers: A Protocol and Initial Results · Mohammadhossein Salari, Roman Bednarik
- 09:30: Coffee Break
- 10:00: Panel Discussion
  - Prof. Christine Lisetti · Florida International University · Connection of eye tracking to engaging virtual social agents and affective computing
  - Prof. Elisabeth André · University of Augsburg · Affective Computing, Embodied Conversational Agents, Multimodal Human-Machine Interaction, and Social Signal Processing
- Discuss methods and systems to analyze everyday eye movement behavior in the context of the panelists' research
- Identify promising applications
- Identify the biggest/key challenges
- 11:00: Break-out Sessions
- Identify the biggest/key challenges in the area and try to develop a research agenda towards addressing them
- Identify promising applications that could motivate further investigations in specific areas
- 12:00: Wrap-up and Discussions
- 12:30: Farewell
Organizers
Michael Barz
German Research Center for Artificial Intelligence (DFKI)
Roman Bednarik
University of Eastern Finland
Andreas Bulling
University of Stuttgart
Cristina Conati
University of British Columbia
Daniel Sonntag
University of Oldenburg & German Research Center for Artificial Intelligence (DFKI)
Program Committee
- Michael Barz · German Research Center for Artificial Intelligence (DFKI)
- Roman Bednarik · University of Eastern Finland
- Omair Shahzad Bhatti · German Research Center for Artificial Intelligence (DFKI)
- Sara-Jane Bittner · German Research Center for Artificial Intelligence (DFKI)
- Andreas Bulling · University of Stuttgart
- Cristina Conati · University of British Columbia
- Mayar Elfares · University of Stuttgart
- Anna Maria Feit · Saarland University
- László Kopácsi · German Research Center for Artificial Intelligence (DFKI)
- Sébastien Lallé · Sorbonne University
- Ziqian Luo · Oracle
- Sarah Malone · Saarland University
- Philipp Müller · German Research Center for Artificial Intelligence (DFKI)
- Abdulrahman Mohamed Selim · German Research Center for Artificial Intelligence (DFKI)
- Daniel Sonntag · University of Oldenburg & German Research Center for Artificial Intelligence (DFKI)
- Harshinee Sriram · University of British Columbia
- Yusuke Sugano · University of Tokyo
- Klaus Weber · University of Augsburg
- Zekun Wu · Saarland University