Map Adaptation; Eye Tracking; Gaze-Based Assistance; Intention Recognition; Gaze-Based Interaction; Activity Recognition; Context
Göbel Fabian, Kiefer Peter, Raubal Martin (2019), FeaturEyeTrack: automatic matching of eye tracking data with map features on interactive maps, in GeoInformatica, 1.
Göbel Fabian, Kwok Tiffany C.K., Rudi David (2019), Look There! Be Social and Share, in Challenges Using Head-Mounted Displays in Shared and Social Spaces, Workshop at CHI 2019, Glasgow, U.K., ETH Zürich, Zürich, Switzerland.
Göbel Fabian, Kiefer Peter (2019), POI-Track: Improving Map-Based Planning with Implicit POI Tracking, in Proceedings of the 2019 ACM Symposium on Eye Tracking Research & Applications (ETRA'19), Denver, Colorado, USA, ACM, New York.
Göbel Fabian, Bakogioannis Nikolaos, Henggeler Katharina, Tschümperlin Roswita, Xu Yang, Kiefer Peter, Raubal Martin (2018), A public gaze-controlled campus map, in ET4S Eye Tracking for Spatial Research, Proceedings of the 3rd International Workshop, Zürich, Switzerland, ETH Zürich, Zürich, Switzerland.
Kiefer Peter (2018), Challenges in gaze-based intention recognition, in Duchowski Andrew, Chuang Lewis, Weiskopf Daniel, Qvarfordt Pernilla (ed.), Dagstuhl Publishing, Dagstuhl, Germany, 44.
Raubal Martin (2018), Detecting Mindless Gaze, in Chuang Lewis, Duchowski Andrew, Weiskopf Daniel, Qvarfordt Pernilla (ed.), Dagstuhl Publishing, Dagstuhl, Germany, 143.
Kiefer Peter, Giannopoulos Ioannis, Raubal Martin, Göbel Fabian, Duchowski Andrew T. (ed.) (2018), ET4S Eye Tracking for Spatial Research, Proceedings of the 3rd International Workshop, ETH Zürich, Zürich, Switzerland.
Göbel Fabian, Kiefer Peter, Giannopoulos Ioannis, Raubal Martin (2018), Gaze Sequences and Map Task Complexity, in 10th International Conference on Geographic Information Science (GIScience 2018), Melbourne, Australia, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany.
Göbel Fabian, Kiefer Peter, Giannopoulos Ioannis, Duchowski Andrew T., Raubal Martin (2018), Improving map reading with gaze-adaptive legends, in Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications (ETRA'18), Warsaw, Poland, ACM, New York.
Duchowski Andrew T., Krejtz Krzysztof, Krejtz Izabela, Biele Cezary, Niedzielska Anna, Kiefer Peter, Raubal Martin, Giannopoulos Ioannis (2018), The index of pupillary activity: Measuring cognitive load vis-à-vis task difficulty with pupil oscillation, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), Montreal, QC, Canada, ACM, New York.
Göbel Fabian, Martin Henry (2018), Unsupervised Clustering of Eye Tracking Data, in Spatial Big Data and Machine Learning in GIScience, Workshop at GIScience 2018, Melbourne, Australia, ETH Zürich, Zürich, Switzerland.
Kiefer Peter, Giannopoulos Ioannis, Anagnostopoulos Vasileios Athanasios, Schöning Johannes, Raubal Martin (2017), Controllability matters: The user experience of adaptive maps, in GeoInformatica, 21(3), 619-641.
Kiefer Peter, Giannopoulos Ioannis, Raubal Martin, Duchowski Andrew (2017), Eye tracking for spatial research: Cognition, computation, challenges, in Spatial Cognition & Computation, 17(1-2), 1-19.
Göbel Fabian, Kiefer Peter, Raubal Martin (2017), FeaturEyeTrack: A vector tile-based eye tracking framework for interactive maps, in Short papers, posters and poster abstracts of the 20th AGILE Conference on Geographic Information Science, Wageningen University & Research, Wageningen, The Netherlands.
Kiefer Peter, Giannopoulos Ioannis, Duchowski Andrew, Raubal Martin (2016), Measuring cognitive load for map tasks through pupil diameter, in Geographic Information Science. GIScience 2016, Montréal, Canada, Springer International Publishing, Cham.
Göbel Fabian, Giannopoulos Ioannis, Raubal Martin (2016), The importance of visual attention for adaptive interfaces, in the 18th International Conference, Florence, Italy, ACM, New York.
The concept of “one map fits all users” is obsolete. Adaptive and context-aware technologies have enabled digital maps that offer personalized assistance by sensing, inferring, and utilizing the map user’s environmental, technological, and user context. Map adaptation is particularly helpful when the interaction possibilities with the map are restricted, or when time constraints apply. Yet despite maps becoming increasingly adaptive, the semantic gap between the contextual information used for adaptation and the user’s high-level cognitive processes that should ultimately be supported remains wide. If a map-based assistance system could reliably infer a user’s intentions, it could provide highly personalized assistance and support the user’s goals optimally. Since vision is the primary sense for perceiving cartographic maps, the user’s gaze behavior while interacting with a map is likely to provide valuable information about the user’s cognitive states, in particular intentions. With eye tracking technology, it is possible to sense the user’s gaze on a cartographic map, relate it to the map contents, and use the acquired information for gaze-based adaptation.

The proposed project envisions intention-aware gaze-based assistance on cartographic maps. A future intention-aware gaze-based assistive map could, for instance, recognize from the user’s gaze that he or she is planning a touristic round trip, and adapt to the user’s needs accordingly. The main objective of this project is to investigate methods for recognizing activities and intentions from gaze data collected from cartographic map users.
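The core step of relating gaze to map contents can be illustrated with a minimal geometric sketch. This is an assumption-laden toy example, not the project's actual algorithm: it assumes gaze samples and features share one map coordinate system, and the feature data and foveal tolerance are hypothetical.

```python
import math

# Hypothetical foveal tolerance: a gaze sample within this distance
# (in map units) of a point feature counts as attention to that feature.
FOVEA_RADIUS = 15.0

def point_in_polygon(x, y, ring):
    """Ray-casting test of point (x, y) against a polygon ring [(x, y), ...]."""
    inside = False
    j = len(ring) - 1
    for i in range(len(ring)):
        xi, yi = ring[i]
        xj, yj = ring[j]
        # Count edge crossings of a horizontal ray to the left of (x, y).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def match_gaze(gaze, features):
    """Return the names of map features hit by one gaze sample (x, y)."""
    x, y = gaze
    hits = []
    for name, geom_type, geom in features:
        if geom_type == "point" and math.dist((x, y), geom) <= FOVEA_RADIUS:
            hits.append(name)
        elif geom_type == "polygon" and point_in_polygon(x, y, geom):
            hits.append(name)
    return hits

# Toy map with one POI and one area feature (hypothetical data).
features = [
    ("station", "point", (100.0, 100.0)),
    ("park", "polygon", [(0, 0), (50, 0), (50, 50), (0, 50)]),
]
print(match_gaze((95.0, 105.0), features))  # gaze lands near the station POI
print(match_gaze((10.0, 10.0), features))   # gaze falls inside the park polygon
```

A production system would of course work on real vector tiles, handle line features via point-to-segment distance, and account for eye tracker noise, but the matching principle is the same.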
The project intends to investigate the following research questions:

1. How can visible map features (points, lines, polygons) be used to assign meaning to a user’s gaze on a cartographic map?
2. How can a map user’s activities be recognized from an incomplete gaze track (i.e., online), and how can a change in the performed activity be detected (the segmentation problem)?
3. How can the recognized basic activities be combined and interpreted at a higher semantic level, i.e., with respect to the map user’s intentions?

Methods used in geographic data processing (e.g., map matching), mobile assistance systems (e.g., trajectory interpretation), activity recognition (e.g., machine learning from gaze tracks), and general plan recognition (e.g., Dynamic Bayes Models) will be extended and adapted to recognize a map user’s activities and intentions from gaze.

The expected outcomes of the project include:
a) algorithmic approaches for the online geometric matching of gaze with cartographic vector features;
b) an evaluation of machine learning techniques for the recognition and segmentation of activities on cartographic maps;
c) a methodology for the recognition of intentions from gaze on cartographic maps;
d) an annotated eye tracking corpus, to be published online with the goal of stimulating further research in the eye tracking, artificial intelligence (AI), and geographic information science (GIScience) research communities.

The eye tracking corpus will be created by collecting gaze data in studies in a controlled experimental setting (remote eye tracking) using various cartographic stimuli and tasks.
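To make the Dynamic Bayes direction concrete, the simplest such model is a hidden Markov model whose forward (filtering) step maintains an online belief over latent activities as gaze-derived observations arrive. This is a hedged sketch only: the activity states, observation symbols, and all probabilities below are invented for illustration and are not the project's models.

```python
# Minimal HMM forward filter: update a belief over latent map activities
# from a stream of gaze-derived observation symbols. All numbers are
# illustrative, not estimated from data.

ACTIVITIES = ["search", "route_planning"]

# P(activity_t | activity_{t-1}): activities tend to persist over time.
TRANSITION = {
    "search": {"search": 0.8, "route_planning": 0.2},
    "route_planning": {"search": 0.2, "route_planning": 0.8},
}
# P(observation | activity): searching favors POI fixations, while
# route planning favors following line features such as roads.
EMISSION = {
    "search": {"poi_fixation": 0.7, "line_following": 0.3},
    "route_planning": {"poi_fixation": 0.2, "line_following": 0.8},
}

def forward_step(belief, obs):
    """One online filtering step: predict via TRANSITION, weight by EMISSION."""
    predicted = {
        a: sum(belief[p] * TRANSITION[p][a] for p in ACTIVITIES)
        for a in ACTIVITIES
    }
    unnorm = {a: predicted[a] * EMISSION[a][obs] for a in ACTIVITIES}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

belief = {"search": 0.5, "route_planning": 0.5}  # uniform prior
for obs in ["line_following", "line_following", "poi_fixation"]:
    belief = forward_step(belief, obs)
    print(obs, belief)
```

Because the filter works on one observation at a time, it addresses the "incomplete gaze track" setting of research question 2; a change point in the dominant activity of the belief corresponds to a segment boundary.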