space-time; visual analytics; empirical evaluation; animation
Davies C., Fabrikant S.I., Hegarty M. (2015), Towards Empirically Verified Cartographic Displays, in Hoffman R., Hancock P., Scerbo M., Parasuraman R. (eds.), Cambridge University Press, Cambridge, UK, Ch. 35.
Fabrikant S.I., Maggi S., Montello D.R. (2014), 3D network spatialization: Does it add depth to 2D representations of semantic proximity?, in Lecture Notes in Computer Science, 8728, 34-47.
Maggi S., Fabrikant S.I. (2014), Embodied Decision Making with Animations, in 8th International Conference on Geographic Information Science, Vienna, Austria.
Ooms A., Fabrikant S.I., Çöltekin A., De Maeyer P. (2014), Eye tracking with geographic coordinates: methodology to evaluate interactive cartographic products, in 8th International Conference on Geographic Information Science, Vienna, Austria.
Andrienko G., Fabrikant S.I., Griffin A.L., Dykes J., Schiewe J. (2014), GeoViz: interactive maps that help people think (Editorial), in International Journal of Geographical Information Science, 28(10), 2009-2012.
Opach T., Golebiowska I., Fabrikant S.I. (2014), How Do People View Multi-Component Animated Maps?, in The Cartographic Journal, 51(4), 330-342.
Maggi S., Fabrikant S.I. (2014), Triangulating eye movement data of animated displays, in CEUR Workshop Proceedings, 1241, 27-31.
Maggi S., Fabrikant S.I. (2013), Animated displays of moving objects and spatiotemporal coordinated events, in GeoViz Hamburg 2013, Hamburg, Germany.
Hurter C., Ersoy O., Fabrikant S.I., Klein T.R., Telea A.C. (2013), Bundled Visualization of Dynamic Graph and Trail Data, in IEEE Transactions on Visualization and Computer Graphics, PrePrints, 1.
Christen M., Vitacco D.A., Huber L., Harboe J., Fabrikant S.I., Brugger P. (2013), Colorful brains: 14 years of display practice in functional neuroimaging, in NeuroImage, 73, 30-39.
Shipley T., Fabrikant S.I., Lautenschütz A.-K. (2013), Creating perceptually salient animated displays of spatially coordinated events, in Lecture Notes in Geoinformation and Cartography, Springer.
Fabrikant S.I., Christophe S., Papastefanou G., Maggi S. (2013), How to measure and visualize emotion when using maps, in 26th International Cartographic Conference, International Cartographic Association, Dresden, Germany.
Wilkening J., Fabrikant S.I. (2013), How Users Interact With a 3D Geo-Browser under Time Pressure, in Cartography and Geographic Information Science, 40(1), 40-52.
Fabrikant S.I., Christophe S., Papastefanou G., Maggi S. (2012), Emotional response to map design aesthetics, in Proceedings (Extended Abstracts), GIScience 2012, Columbus, OH.
Lautenschütz A.-K. (2012), Map readers' assessment of path elements and context to identify movement behaviour in visualisations, in The Cartographic Journal, 49(4), 337-349.
Tuggener S., Çöltekin A., Fabrikant S.I. (2012), Mobility and social inequality: exploring the nexus by means of sequence analysis and geovisualisation, in International Symposium Masculine/Feminine: Geographical Dialogues and Beyond, Grenoble, France.
Griffin A.L., Fabrikant S.I. (2012), More Maps, More Users, More Devices Means More Cartographic Challenges, in The Cartographic Journal, 49(4), 298-301.
The Cartographic Journal (ed.) (2012), Special Issue on Spatial Cognition, Behaviour and Representation, Maney, Leeds.
Ooms K., Çöltekin A., De Maeyer P., Dupont L., Fabrikant S., Incoul A., Kuhn M., Slabbinck H., Vansteenkiste P., Van Der Haegen L., Combining user logging with eye tracking for interactive and dynamic applications, in Behavior Research Methods.
Maggi S., Fabrikant S.I., Hurter C., Imbert J.-P., How Do Display Design and User Characteristics Matter in Animated Visualizations of Movement Data?, in Cartographica.
Synopsis: We request funding for two PhD students over four years to extend the previous consecutive SNF projects PopEye (200021-113745) and PopEye II (200020_126657). This new project extends prior work by focusing specifically on creating perceptually salient and cognitively adequate animated displays, including speed, direction, and attribute changes of moving entities.

Motivation: Perceptually salient and cognitively inspired animated displays are needed to help humans detect relationships in complex space-time data displays more effectively and efficiently, in support of spatio-temporal decision-making. With increasing interest in and use of dynamic depictions to present and explore complex spatio-temporal data, the research community has developed sophisticated visual analytics (VA) tools for experts, often coupled with animated displays, to mine databases for spatio-temporal patterns. The contended advantage of VA is the combination of computational methods with outstanding human capabilities for pattern recognition, imagination, association, and analytical reasoning. However, this claim has not yet been supported by empirical evidence. We still know very little about how exactly dynamic and animated displays are understood by users. As VA tools are created and disseminated, it will be important to investigate what kinds of information users can actually extract from dynamic and animated displays. The (limited) perceptual and cognitive systems of users determine the salience of displayed patterns, and this in turn determines how effective users will be in detecting and reasoning about spatio-temporal phenomena.
Research objective: The proposed work aims, first, at a better understanding of how users explore and extract knowledge from dynamic (interactive) and animated VA displays of moving objects, and second, at deriving empirically based design guidelines for constructing cognitively adequate animations for the effective and efficient depiction and visual analysis of high-resolution, time-referenced spatial data sets. We aim at guidelines generic enough to be useful for all types of moving entities across geographically relevant application domains (e.g., human trajectories and lifelines tracked with location-based services, habitat analyses of tagged animals in wildlife biology, eye-movement analyses in geographic visualization experiments).

This four-year project will be run by an international collaboration including researchers at the Geography Departments of the University of Zurich and The Pennsylvania State University (USA) and the Psychology Department at Temple University (USA). Resources are requested for two PhD students, for collaborative visits by the students and the external collaborators, and for conference and workshop attendance to disseminate research findings.

The research project is organized along two interleaved empirical subprojects, corresponding to the requested PhD student positions: Subproject A (interaction) emphasizes the development and evaluation of cognitively adequate interaction methods, while Subproject B (animation) focuses on empirically validated designs of cognitively adequate and perceptually salient animations.
The research goals leading the two tracks are as follows:

• To identify how coordinated spatio-temporal events are perceived and understood by humans in dynamic and animated graphical displays of movement data, and
• To develop empirically based design guidelines for the construction of perceptually salient and cognitively adequate dynamic and animated displays of movement data, for more effective and efficient spatio-temporal inference and decision-making.

Key outputs are expected along three cyclic work phases. First, knowledge is integrated across the complementary research fields mentioned above at the theoretical level (event perception, animation design). Second, an experimental VA platform is implemented. Third, interaction methods and animated displays are evaluated with users (i.e., empirical evaluation of usefulness and usability).