Scene viewing is an observable process of rapid decision making at the interface between perception, attention, and motor control. Knowledge about the organization of the visual system has stimulated the development of attentional ("saliency") models. Saliency models predict which regions viewers are likely to fixate during scene exploration. Cognitive models of eye-movement control describe the dynamic interaction between perception and visuomotor control. Prior work predicted the distribution of fixation positions with a model that combines spatially limited access to visual-saliency information with a leaky-memory model of the re-inspection of previously fixated positions.
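The leaky-memory idea can be sketched as a simple dynamical update: an activation map over image positions decays exponentially everywhere (the "leak"), while the currently fixated position receives a localized boost, so recently inspected regions remain tagged. This is an illustrative sketch, not the project's actual model; the function name, grid size, and all parameter values are assumptions.

```python
import numpy as np

def update_memory(memory, fixation, decay=0.9, boost=1.0, sigma=1.5):
    """One time step of a toy leaky-memory map over image positions.

    The map decays exponentially everywhere and the fixated position
    receives a Gaussian activation bump, so repeatedly inspected
    regions retain more activation than freshly visited ones.
    All parameter values are illustrative.
    """
    h, w = memory.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    bump = boost * np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2)
                          / (2 * sigma ** 2))
    return decay * memory + (1 - decay) * bump
```

Running a few fixations through this update shows the intended behavior: a twice-fixated location keeps more activation than a location fixated once more recently, which is the signal a model can use to penalize (or encourage) re-inspection.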
The first principal goal of the project is to develop dynamical, hierarchical Bayesian models of attentional selection in scene viewing that account for viewers' familiarity with, and conceptually driven anticipation of, the image content, as well as for viewer-specific distributional properties of fixation patterns. The second principal goal is to turn these generative models of fixation sequences into discriminative models that accurately predict the values of latent variables, such as levels of familiarity, from a given fixation sequence. Work packages include (i) experimental eye-tracking research, (ii) analysis of generative models, (iii) Fisher kernels from generative models, (iv) deep learning, and (v) applications.
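The Fisher-kernel route from a generative to a discriminative model works by differentiating the log-likelihood of an observed sequence with respect to the generative model's parameters; the resulting gradient (the Fisher score) is a fixed-length feature vector that a standard classifier can consume. The sketch below uses a deliberately simple stand-in generative model, an isotropic 2D Gaussian over fixation positions, rather than the project's dynamical scan-path models; the function name and parameterization are assumptions.

```python
import numpy as np

def fisher_score(fixations, mu, sigma2):
    """Fisher score of a fixation sequence under a toy generative model.

    Model: fixation positions are i.i.d. from an isotropic 2D Gaussian
    with mean `mu` and variance `sigma2`. The score is the gradient of
    the sequence log-likelihood with respect to (mu, sigma2); sequences
    of any length map to the same 3-dimensional feature vector.
    """
    d = fixations - mu                          # (n, 2) residuals
    grad_mu = d.sum(axis=0) / sigma2            # d/dmu log p
    grad_s2 = ((d ** 2).sum() / (2 * sigma2 ** 2)
               - d.shape[0] / sigma2)           # d/dsigma2 log p
    return np.append(grad_mu, grad_s2)
```

A useful sanity check of any Fisher score is that it vanishes at the maximum-likelihood parameters of the sequence itself; away from the MLE, the score is nonzero and encodes how the sequence deviates from the model, which is exactly the information a discriminative classifier exploits.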
A web application for visualization of experimental scan paths and output from a generative model is available at https://engbertlab.shinyapps.io/SceneWalk/
S. Makowski, L. Jäger, A. Abdelwahab, N. Landwehr, and T. Scheffer (2018). A discriminative model for identifying readers and assessing text comprehension from eye movements. In Proceedings of the European Conference on Machine Learning (ECML 2018).