Thank you again for your detailed information and patience.
I only mentioned “intention” in an attempt to better describe that I tried to optimize the execution of the fixation as well as a viewer can perform and contribute it.
Actually, what I do is very simple: fixation detection in exactly the sense used in gaze interaction for activating specific modes, and as a differentiation from smooth pursuit eye movements and saccades.
So no elaborate algorithms beyond scanning the data stream for maximum position deltas, both as absolute values and in relation to timestamp deltas.
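As a rough sketch of that scan (the `GazeSample` fields, units, and microsecond timestamps are my illustrative assumptions here, not the actual stream format):

```python
# Minimal sketch of computing a velocity from consecutive stream samples.
# The sample layout (x, y in normalized display units, t in microseconds)
# is an assumption for illustration, not the real device API.
from dataclasses import dataclass
import math

@dataclass
class GazeSample:
    x: float  # horizontal gaze position
    y: float  # vertical gaze position
    t: float  # timestamp in microseconds

def instantaneous_velocity(prev: GazeSample, curr: GazeSample) -> float:
    """Position delta divided by timestamp delta (units per second)."""
    dt = (curr.t - prev.t) * 1e-6  # microseconds -> seconds
    if dt <= 0:
        return 0.0  # duplicate or out-of-order timestamp
    dist = math.hypot(curr.x - prev.x, curr.y - prev.y)
    return dist / dt
```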
What I need is:
1) just the signal that is triggered when the user “fixates” a spot (effectively: not moving their gaze), which I want to measure by an assumed *very* low average gaze velocity coinciding with a *very* limited variation of the current gaze coordinates,
2) and I need to determine whether the user’s gaze moves at about the same speed as the visual stimulus.
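The two checks could be sketched roughly like this (the thresholds, the window size, and the relative-tolerance comparison are illustrative assumptions, not my actual values):

```python
# Sketch of the two triggers: a dispersion-plus-velocity fixation check
# over a sliding window, and a speed-match check against the stimulus.
# All thresholds below are assumed placeholder values.
from collections import deque

class FixationDetector:
    def __init__(self, vel_thresh=0.05, disp_thresh=0.01, window=3):
        self.vel_thresh = vel_thresh    # "very low" mean velocity (units/s)
        self.disp_thresh = disp_thresh  # "very limited" coordinate spread
        self.window = window
        self.xs = deque(maxlen=window)
        self.ys = deque(maxlen=window)
        self.vels = deque(maxlen=window)

    def feed(self, x, y, velocity):
        """Return True once the window shows a fixation, else False."""
        self.xs.append(x); self.ys.append(y); self.vels.append(velocity)
        if len(self.xs) < self.window:
            return False
        mean_vel = sum(self.vels) / len(self.vels)
        dispersion = (max(self.xs) - min(self.xs)) + (max(self.ys) - min(self.ys))
        return mean_vel < self.vel_thresh and dispersion < self.disp_thresh

def speed_matches_stimulus(gaze_speed, stimulus_speed, tolerance=0.25):
    """Check 2): gaze speed within a relative tolerance of the stimulus speed."""
    if stimulus_speed == 0:
        return gaze_speed == 0
    return abs(gaze_speed - stimulus_speed) / stimulus_speed <= tolerance
```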
Unfortunately both fail:
1) At low SPEM velocities the measured velocities are “not so different” from what I measure for fixations, so state transitions are hard to detect.
2) The discrete velocities of both the fixation and the SPEM data exhibit random, more or less frequent peaks with almost absurd amplitudes, hundreds or even thousands of percent above the average of the measured signal.
As the sampling rate is non-uniform, velocity calculations are obviously more of a guessing game; I am well aware of this.
But if the signal exhibited only the degree of noise I described as “jitter” at the beginning (which is surely a result of the combination of the eye’s physiological micro-oscillations and the non-uniform time deltas), that guess would fit my needs perfectly.
Unfortunately, this tolerable noise is superimposed with those peaks, which result from extremely low timestamp deltas every now and then: very high amplitudes, yet not easily removed by the typical filters used for outlier rejection.
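With toy numbers (all of them assumed, just to illustrate the mechanism), the same micro-oscillation amplitude divided by a much smaller timestamp delta produces exactly such a peak:

```python
# Toy numbers only: the spike is purely an artifact of dividing
# ordinary positional jitter by a near-zero timestamp delta.
jitter = 0.0005           # assumed positional noise between samples
normal_dt = 1 / 60        # assumed nominal inter-sample interval (~60 Hz)
tiny_dt = 0.0002          # assumed occasional near-duplicate timestamp

normal_velocity = jitter / normal_dt  # ordinary "jitter" level
spike_velocity = jitter / tiny_dt     # same jitter, tiny dt -> huge peak
ratio = spike_velocity / normal_velocity  # ~83x the ordinary level
```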
I tried every commonly used filter type, and all permutations of their coefficient settings, but the result is not good enough to generate reliable triggers.
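For example, a running median over the velocity stream, one of the standard outlier filters (the window size here is an illustrative assumption), suppresses an isolated spike, but when the low-dt peaks arrive in clusters they survive, which matches what I observe:

```python
# Running median over the velocity stream. An isolated spike is
# suppressed, but clustered spikes occupy the majority of the
# window and pass straight through.
from collections import deque

def median_filter(stream, window=3):
    buf = deque(maxlen=window)
    for v in stream:
        buf.append(v)
        ordered = sorted(buf)
        yield ordered[len(ordered) // 2]
```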
What I now wonder, and what currently seems to be the crucial point (according to my present understanding):
what causes the eye tracker to raise its sampling frequency so substantially (that seems to me to be the explanation for the brief timestamp deltas) when the gaze is actually as stationary as it practically and physiologically can get?
– Is it because of device-internal data processing requirements (e.g. cache lines that need to be flushed, or something similar),
– or may it be an electronic issue,
– or is there something that I simply did not recognize and that would have to be taken into account?
As for the license: I am aware of Tobii’s licensing terms and believe I comply with them, as I don’t store the data, not even in temporary arrays.
All the data is scanned on the fly.
Just scanning the data stream for triggers…
If no one else has had similar experiences, I would have to consider that this issue is simply an irregularity which can’t be tackled and, in the end, is not worth deeper investigation, as it does not reflect any fundamental concept one would have to understand in order to cure the phenomenon.