Hello,
Having read this interesting post, I am still not sure what conclusions to draw from it.
My ultimate goal is actually a constant sampling rate with the EyeX, because a constant sampling rate would give me access to the gaze velocities I am interested in.
But I understand that the EyeX is not designed for use cases where this typically matters, and that there are other models (the Pro series) that are.
As I am still in a PoC phase and cannot switch the eye tracker model immediately, I am trying to work around this for the time being by compensating for the "missing"/dropped samples. That compensation would be easier the more evenly the samples are distributed along the time axis.
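To illustrate the kind of compensation I have in mind, here is a minimal sketch (plain Python, not tied to any Tobii API; timestamps in microseconds and all names are my own assumptions): dropped samples are reconstructed by linearly interpolating the surrounding valid samples onto a uniform time grid.

```python
# Hypothetical sketch: fill dropped gaze samples by linear interpolation
# onto a uniform time grid. Timestamps in microseconds; the sample layout
# (timestamp, x, y) is an assumption for illustration only.

def resample_uniform(samples, period_us):
    """samples: list of (timestamp_us, x, y), sorted by timestamp.
    Returns samples re-gridded to a fixed period via linear interpolation."""
    if len(samples) < 2:
        return list(samples)
    t0, t_end = samples[0][0], samples[-1][0]
    out, i = [], 0
    t = t0
    while t <= t_end:
        # advance to the recorded segment that contains time t
        while samples[i + 1][0] < t:
            i += 1
        (ta, xa, ya), (tb, xb, yb) = samples[i], samples[i + 1]
        f = (t - ta) / (tb - ta)  # interpolation fraction in [0, 1]
        out.append((t, xa + f * (xb - xa), ya + f * (yb - ya)))
        t += period_us
    return out
```

With e.g. a sample missing at t = 20000 µs between valid samples at 10000 µs and 30000 µs, the gap is filled with the midpoint of its neighbours; of course this is only defensible while the gaze velocity between the neighbours is roughly constant, which is exactly why the distribution of the gaps matters to me.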
From the examination twentythree did so well, and from my own test measurements, it seems that the StreamEngine and the unfiltered gaze data stream of the Interaction API are "better" in this regard than the lightly filtered stream, where "better" really means: they handle sample validity differently, resulting in more or fewer frames being excluded from submission by the tracker.
In my own measurements I haven't been able to pinpoint how "StreamEngine" and "unfiltered" compare to each other, because I have no way to feed both stacks the same input signal.
Therefore I wonder:
1) Can the two gaze data streams (StreamEngine, Interaction API "unfiltered") be classified in general terms of which one, given its technical internals, produces fewer or less randomly distributed sample gaps? I am specifically interested in situations where the gaze velocity changes, even substantially, but stays well below saccadic levels.
2) Is it true, as I believe I have understood, that the timestamp generated by the EyeX and delivered together with the gaze data is the raw time at which the gaze data was recorded in the device (before submission and before the tracker evaluates its validity), and is therefore independent of any latency introduced further down the processing chain?
3) If the StreamEngine turned out to be (substantially) better suited for delivering evenly distributed samples under varying (sub-saccadic) gaze velocities, could the overhead of integrating the unmanaged code into our managed code outweigh that benefit by introducing latency through additional processing?
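For context, this is roughly the velocity estimate the whole question is about: a naive per-sample speed computed from consecutive device-timestamped samples, which is only meaningful if pairs that straddle a dropped-frame gap are excluded. Again plain Python, with the sample layout, units, and gap threshold all being my own assumptions rather than anything from a Tobii API.

```python
# Hypothetical sketch: per-sample gaze speed from device timestamps,
# discarding sample pairs that span a dropped-frame gap (where a naive
# difference quotient would be biased). Units per second; threshold assumed.

def gaze_speeds(samples, max_gap_us=20000):
    """samples: list of (timestamp_us, x, y), sorted by timestamp.
    Returns (timestamp_us, speed) pairs; pairs across large gaps are skipped."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0 or dt > max_gap_us:
            continue  # dropped frame(s) or bad clock: skip this pair
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append((t1, dist / (dt / 1e6)))
    return speeds
```

The more gaps there are, and the more randomly they are distributed, the more of these pairs I have to throw away (or fill in beforehand), which is why the gap behaviour of the two streams matters so much for my use case.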
I am well aware that the eventual implementation would have to use a proper solution, i.e. the Pro series trackers. But I am still in the PoC phase, sorting out whether the features that require this functionality are useful for our application, and we would prefer a hands-on demonstration over a theoretical consideration.
If some of you have answers, that would definitely help a lot.
Thanks