Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surroundings. Consequently, behavioral patterns that are characteristic of our actions and their planning typically manifest in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging.
Reinforcement learning for the manipulation of eye tracking data
In this paper, we present an approach based on reinforcement learning for eye tracking data manipulation. It is based on two opposing agents: one tries to classify the data correctly, while the second searches for patterns in the data and manipulates them to hide specific information. We show that our approach can be applied successfully to preserve the privacy of a subject. In addition, it allows us to evaluate the importance of temporal as well as spatial information in eye tracking data for specific classification goals. The approach can also be used for stimulus manipulation, making it interesting for gaze guidance; this work provides the theoretical basis for that application, which is why we have also included a section on how to apply the method to gaze guidance.
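The two-agent idea can be illustrated with a toy sketch. Note that this is not the reinforcement learning setup from the paper: a minimal nearest-centroid classifier stands in for the classifying agent, and a greedy feature-masking loop stands in for the manipulating agent that hides whatever the classifier relies on. All data here are synthetic, and the feature layout is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gaze features: each row summarizes one recording with 8 numbers;
# feature 0 carries the subject-identifying signal, the rest is noise.
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)) + np.r_[3., 0, 0, 0, 0, 0, 0, 0],
               rng.normal(0.0, 1.0, (20, 8)) - np.r_[3., 0, 0, 0, 0, 0, 0, 0]])
y = np.array([0] * 20 + [1] * 20)

def nearest_centroid_acc(X, y):
    """Agent 1: a minimal classifier (nearest class centroid, training accuracy)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def manipulate(X, y, budget=3):
    """Agent 2: greedily hides the feature whose removal hurts the classifier most."""
    X = X.copy()
    for _ in range(budget):
        accs = []
        for j in range(X.shape[1]):
            Xm = X.copy()
            Xm[:, j] = Xm[:, j].mean()   # flatten feature j to its global mean
            accs.append(nearest_centroid_acc(Xm, y))
        j = int(np.argmin(accs))
        X[:, j] = X[:, j].mean()
    return X

acc_before = nearest_centroid_acc(X, y)
acc_after = nearest_centroid_acc(manipulate(X, y), y)
```

Hiding the identifying feature lowers the classifier's accuracy while leaving the remaining data intact, which is the same privacy intuition as the trained manipulation agent, only without the learning loop.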
Encodji: Encoding Gaze Data Into Emoji Space
To this day, a variety of information has been obtained from human eye movements, which hold immense potential for understanding and classifying cognitive processes and states – e.g., through scanpath classification. In this work, we explore the task of scanpath classification through a combination of unsupervised feature learning and convolutional neural networks. As an amusement factor, we use an Emoji space representation as the feature space. This representation is achieved by training generative adversarial networks (GANs) for unpaired scanpath-to-Emoji translation with a cyclic loss. The resulting Emojis are then used to train a convolutional neural network for stimulus prediction, showing an accuracy improvement of more than five percentage points compared to the same network trained solely on the scanpath data. As a side effect, we also obtain a novel, unique Emoji representing each scanpath. Our goal is to demonstrate the applicability and potential of unsupervised feature learning for scanpath classification in a humorous and entertaining way.
Scanpath comparison with Ferns
Scanpath classification can offer insight into the visual strategies of groups such as experts and novices. We propose to use random ferns in combination with saccade angle successions to compare scanpaths. One advantage of our method is that it does not require areas of interest to be computed or annotated. The conditional distributions in random ferns additionally allow for learning angle successions that do not have to be entirely present in a scanpath. We evaluated our approach on two publicly available datasets and improved the classification accuracy by ≈ 10 and ≈ 20 percent, respectively.
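A minimal sketch of the idea, under simplifying assumptions: saccade angles are quantized into a few sectors, each random fern observes a small random subset of positions in a sliding window over the angle succession, and classification combines the class-conditional frequencies of the resulting fern codes (a semi-naive Bayes over ferns). The parameters and the synthetic scanpaths below are illustrative, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
N_BINS, FERN_SIZE, N_FERNS, WINDOW = 4, 2, 5, 4  # illustrative parameters

def angle_bins(scanpath):
    """Quantize saccade directions between consecutive fixations into N_BINS sectors."""
    d = np.diff(np.asarray(scanpath, dtype=float), axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])            # saccade angle in (-pi, pi]
    return ((ang + np.pi) / (2 * np.pi) * N_BINS).astype(int) % N_BINS

# Each fern observes FERN_SIZE random positions inside a sliding window
# over the angle succession.
ferns = [rng.choice(WINDOW, FERN_SIZE, replace=False) for _ in range(N_FERNS)]

def fern_codes(bins):
    """All (fern, code) observations over the quantized angle succession."""
    codes = []
    for start in range(len(bins) - WINDOW + 1):
        win = bins[start:start + WINDOW]
        for f, offsets in enumerate(ferns):
            code = 0
            for o in offsets:
                code = code * N_BINS + int(win[o])
            codes.append((f, code))
    return codes

def train(scanpaths, labels):
    """Class-conditional code frequencies with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(lambda: 1.0))
    for path, y in zip(scanpaths, labels):
        for obs in fern_codes(angle_bins(path)):
            counts[y][obs] += 1
    totals = {y: sum(c.values()) for y, c in counts.items()}
    return counts, totals

def classify(scanpath, counts, totals):
    """Combine the fern likelihoods and pick the most probable class."""
    obs = fern_codes(angle_bins(scanpath))
    scores = {y: sum(np.log(counts[y][o] / totals[y]) for o in obs) for y in counts}
    return max(scores, key=scores.get)
```

On toy data, e.g. one purely horizontal and one purely vertical training scanpath, the classifier assigns similarly oriented test paths to the right group without any annotated areas of interest.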
SubsMatch 2.0 tackles the challenge of quantifying the influence of experimental factors on eye movement sequences. It extracts sequence-sensitive features from eye movements and classifies eye movement sequences based on the frequencies of small subsequences.
Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs.
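The core feature idea, frequencies of small subsequences, can be sketched as follows. The nearest-prototype classifier here is a simple stand-in for the classifier used in the paper, and the letter-coded example sequences in the usage below are synthetic.

```python
from collections import Counter

def ngram_profile(seq, n=2):
    """Relative frequencies of all length-n subsequences of a letter-coded sequence."""
    grams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def profile_distance(p, q):
    """L1 distance between two subsequence-frequency profiles."""
    return sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in p.keys() | q.keys())

def nearest_class(seq, training, n=2):
    """Assign seq to the class whose mean subsequence-frequency profile is
    closest (a simple stand-in for the classifier used in the paper)."""
    prototypes = {}
    for label, seqs in training.items():
        profiles = [ngram_profile(s, n) for s in seqs]
        keys = set().union(*profiles)
        prototypes[label] = {g: sum(p.get(g, 0.0) for p in profiles) / len(profiles)
                             for g in keys}
    profile = ngram_profile(seq, n)
    return min(prototypes, key=lambda y: profile_distance(profile, prototypes[y]))
```

Because only short subsequences are counted, the features stay comparable across recordings of different lengths and experimental designs.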
Note: As external programs are utilized, we cannot ship a ready-to-use program; you have to run the bootstrap script yourself. It will download and install all the necessary programs as well as the data. See the README for details.
SubsMatch is a scanpath comparison tool designed specifically for dynamic scenarios. It is based on a string alignment metric. Fixations are translated to a letter representation by assigning their locations to equiprobabilistic bins.
Distance calculation is performed on the frequencies of subsequences.
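Both steps can be sketched briefly. For illustration, we assume binning is done on the fixation x-coordinate only and subsequences are bigrams; the actual tool operates on full fixation locations.

```python
import string
import numpy as np
from collections import Counter

def to_letters(x_coords, n_bins=4):
    """Assign fixation x-coordinates to equiprobabilistic bins (quantile edges),
    so every letter occurs roughly equally often; one letter per fixation."""
    edges = np.quantile(x_coords, np.linspace(0, 1, n_bins + 1)[1:-1])
    return ''.join(string.ascii_lowercase[i] for i in np.searchsorted(edges, x_coords))

def subsequence_frequencies(s, n=2):
    """Relative frequencies of length-n subsequences of a letter string."""
    counts = Counter(s[i:i + n] for i in range(len(s) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def subsmatch_distance(s1, s2, n=2):
    """L1 distance between the subsequence-frequency profiles of two scanpaths."""
    f1, f2 = subsequence_frequencies(s1, n), subsequence_frequencies(s2, n)
    return sum(abs(f1.get(g, 0.0) - f2.get(g, 0.0)) for g in set(f1) | set(f2))
```

Identical scanpaths have distance zero, and the quantile-based binning keeps the letter alphabet balanced regardless of where on the stimulus the fixations cluster.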
The Matlab implementation together with an evaluation dataset of patients with visual field defects driving in a driving simulator is available here: [Download]
SubsMatch was applied to the gaze movements of microneurosurgeons during tumor removal surgery in order to determine surgeon expertise based on gaze behavior.
The distance matrix of pairwise scanpath comparisons shows expertise clusters: scanpath distances between expert surgeons are small, while distances between an expert and a novice are large. This suggests that the groups are separable.
Scanning patterns with the highest occurrence frequency difference between the groups reveal that expert surgeons exhibit highly repetitive scanning only at the very beginning of stimulus presentation, while novices continue the pattern throughout.
SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies
T. C. Kübler, C. Rothe, U. Schiefer, W. Rosenstiel, and E. Kasneci. Behavior Research Methods online first: 1-17, 2016.