
Analysis and Interpretation


Interpreting eye tracking data can be tricky, with many variables and dependencies at play. On this page we have gathered some valuable recommendations and practical tips for doing eye tracking analysis.


General

Some important things to keep in mind for the best possible results are:

  • For research, conduct pilot studies and have concrete hypotheses to avoid mining data
  • For UX studies, mining data can be helpful in certain scenarios to find insights into user behavior
  • Results are dependent on the task participants are given
  • Aim for statistical significance, typically requiring at least 30 participants
  • Comparing controlled changes between scenes (A/B testing) generally yields better conclusions than drawing conclusions based on recordings from one scene
  • Replay the session to participants afterwards and ask about outliers to gain insights into user behavior, instead of distracting them during the study and risking affecting the data

Fixations

A very common metric to analyze is fixation duration. When doing this, it’s important to understand that fixations do not always correlate with cognitive processing and can mean many different things. The best results are usually gained by asking participants about their outlying fixation data, by combining fixation data with other metrics, or by comparing fixation data through A/B testing.

Below are some tips when interpreting fixation data:

  • Longer fixations are usually an indicator of more in-depth cognitive processing
  • Longer fixations do not always equate to higher user interest and could mean that the user is confused
  • Short fixations on items without many (or any) revisits often signify that those items were ‘distractors’
  • Items with shorter fixation durations but many revisits may indicate that they are interpreted as task-relevant
  • Bursts of short fixations on different items/locations might mean that the user was searching for something
  • Combine fixation data with other biometric or interaction metrics for best results

In this example, the user is searching for a key. We can see a burst of short fixations in various locations until the key has been found.
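
If you want to flag this kind of search behavior programmatically, a minimal Python sketch is shown below. It assumes fixation events have already been produced by a fixation filter; the Fixation type, its field names, and the thresholds are illustrative placeholders rather than part of the Tobii XR SDK, and should be tuned against your own pilot data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Fixation:
    """One fixation event from a fixation filter (illustrative type, not a Tobii SDK class)."""
    target: str          # name of the object or location that was fixated
    duration_ms: float   # fixation duration in milliseconds

def looks_like_search(fixations: List[Fixation],
                      max_duration_ms: float = 150.0,
                      min_burst_length: int = 5) -> bool:
    """Return True if the sequence contains a burst of short fixations on
    different targets, which often indicates visual search.
    The thresholds are placeholders; tune them on your own data."""
    burst = 0
    previous_target = None
    for fix in fixations:
        if fix.duration_ms <= max_duration_ms and fix.target != previous_target:
            burst += 1
            if burst >= min_burst_length:
                return True
        else:
            burst = 0
        previous_target = fix.target
    return False

# Example: short fixations hopping between objects until the key is found.
trace = [Fixation("shelf", 120), Fixation("table", 90), Fixation("drawer", 110),
         Fixation("door", 100), Fixation("hook", 95), Fixation("key", 450)]
print(looks_like_search(trace))  # True
```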


Cross-linking Data

Eye tracking data can be used to support claims and find patterns across data, for example: “Person X said they found the new store layout confusing, which we found consistent with our users’ gaze patterns in the store”. This allows correlating qualitative data from one participant with measured data from a whole group.

Beyond corroborating claims, eye tracking visualizations are useful for getting an overview of aggregated data from a large group, making it possible to find patterns and irregularities in the design and scenario. For example, a heatmap gives you an overview of which objects in a scene receive the most focus and which objects might not be focused on at all.

Here is an example of a perception map, showing an overview of which objects in the scene were looked at the most.
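
At its core, a perception map or heatmap of this kind is an aggregation of focus data across all participants. The Python sketch below shows the idea, assuming you have already exported per-object dwell times from your own logging layer; the tuple format and function name are assumptions for the example, not a Tobii export format.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def aggregate_focus(samples: Iterable[Tuple[str, str, float]]) -> Dict[str, float]:
    """Sum focus time per object across all participants.

    `samples` yields (participant_id, object_name, dwell_seconds) tuples,
    e.g. exported from your own logging layer (assumed format).
    """
    totals: Dict[str, float] = defaultdict(float)
    for _participant, obj, dwell in samples:
        totals[obj] += dwell
    return dict(totals)

recordings = [
    ("p01", "poster", 4.2), ("p01", "shelf", 1.1),
    ("p02", "poster", 3.7), ("p02", "door", 0.4),
]
# Rank objects from most to least focused to build the overview.
for obj, seconds in sorted(aggregate_focus(recordings).items(),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{obj}: {seconds:.1f} s")
```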


Correlated Objects

When a user looks back and forth between two or more objects, and does so multiple times, this is a good indication that the user has mentally correlated those objects. For example, during workplace fire safety training, a person may look multiple times between a fire in the room, an open window, and a flammable canister near the fire. From this, you can tell that the user was cognitively processing these things together. Perhaps they were trying to decide which safety risk they should act upon first.

Here is a visualization example where a line is drawn between correlated objects and a number is displayed, showing how many times the focus has shifted between them.
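
Counting these focus shifts is straightforward once you have an ordered list of focused objects for a session. The Python sketch below illustrates one way to do it; the input format and function name are illustrative assumptions, not an SDK API.

```python
from collections import Counter
from typing import Dict, List, Tuple

def transition_counts(gaze_targets: List[str]) -> Dict[Tuple[str, str], int]:
    """Count how many times focus shifted between each unordered pair of objects.

    `gaze_targets` is the ordered list of focused object names for one session.
    """
    counts: Counter = Counter()
    for previous, current in zip(gaze_targets, gaze_targets[1:]):
        if previous != current:
            counts[tuple(sorted((previous, current)))] += 1
    return dict(counts)

# Fire safety example: focus keeps shifting between the fire and the canister.
session = ["fire", "window", "fire", "canister", "fire", "canister", "window"]
for (a, b), n in transition_counts(session).items():
    print(f"{a} <-> {b}: {n} shifts")
```

Pairs with high counts, such as the fire and the canister above, are the candidates for correlated objects worth visualizing.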


Object-Agnostic Location Data

Sometimes, like in retail, the physical location of an object can be more important than the object itself. By running multiple iterations of a test scene with preset slots where objects can reside, but varying which object is placed in which slot, you can evaluate objects and locations as separate entities. This can, for example, be useful when analyzing optimal product placement in a store layout.

In this example, the shoe locations are shown as colored boxes, shifting from red to green the more focus a location has received relative to all other shoe locations.
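
The aggregation behind this kind of analysis can be kept simple: sum dwell time per object and per slot independently, across all iterations. The Python sketch below illustrates the idea; the log format and identifiers are assumptions made for the example.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def split_object_and_location(samples: Iterable[Tuple[str, str, str, float]]
                              ) -> Tuple[Dict[str, float], Dict[str, float]]:
    """Aggregate dwell time per object and per slot independently.

    `samples` yields (iteration_id, slot_id, object_id, dwell_seconds) tuples.
    Because object-to-slot assignment varies between iterations, the two totals
    let you compare popular objects against popular locations separately.
    """
    per_object: Dict[str, float] = defaultdict(float)
    per_slot: Dict[str, float] = defaultdict(float)
    for _iteration, slot, obj, dwell in samples:
        per_object[obj] += dwell
        per_slot[slot] += dwell
    return dict(per_object), dict(per_slot)

log = [
    ("run1", "slot_eye_level", "shoe_red", 5.0),
    ("run1", "slot_bottom",    "shoe_blue", 1.0),
    ("run2", "slot_eye_level", "shoe_blue", 4.0),  # objects swapped between runs
    ("run2", "slot_bottom",    "shoe_red", 2.0),
]
objects, slots = split_object_and_location(log)
print(objects)  # {'shoe_red': 7.0, 'shoe_blue': 5.0}
print(slots)    # {'slot_eye_level': 9.0, 'slot_bottom': 3.0}
```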