Reply To: Tracking objects with the Eye Gaze Engine
Welcome to the community! Thank you for a good question. I agree completely that object tracking/snapping is a key feature that every gaze-driven application needs.
Right now I’m afraid I cannot share implementation details on the EyeX Engine approach, beyond the fact that it is based on a probability model that takes parameters such as object size and z-order into consideration. I will see if we can share more information as we get closer to the 1.0 release.
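For readers curious what such a probability model might look like, here is a minimal, purely illustrative sketch in Python. The Gaussian distance term, the z-order weighting, and the threshold are my own guesses for the sake of example, not the EyeX Engine's actual formula:

```python
import math

def snap_score(gaze, obj):
    """Score how likely `obj` is the intended gaze target.

    gaze: (x, y) gaze point in screen pixels.
    obj:  dict with 'center' (x, y), 'size' (w, h), and 'z'
          (z-order, 0 = frontmost). The shape of this model is
          an illustrative guess, not the EyeX Engine's model.
    """
    dx = gaze[0] - obj['center'][0]
    dy = gaze[1] - obj['center'][1]
    dist = math.hypot(dx, dy)
    # Larger objects tolerate more gaze noise, so normalize the
    # distance by the object's radius before applying a Gaussian.
    radius = max(obj['size']) / 2.0
    distance_term = math.exp(-(dist / (radius + 1e-6)) ** 2)
    # Frontmost objects (low z-order) are more likely targets.
    z_term = 1.0 / (1.0 + obj['z'])
    return distance_term * z_term

def snap(gaze, objects, threshold=0.05):
    """Return the best-scoring object, or None if nothing is close."""
    best = max(objects, key=lambda o: snap_score(gaze, o))
    return best if snap_score(gaze, best) > threshold else None
```

The threshold matters: without it, the gaze would always snap to *something*, even when the user is looking at empty screen space.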
We have been refining the EyeX Engine's object (interactor) snapping continuously while developing our application software and running user tests. Our primary goal is something that works out of the box and makes it easy for developers to create applications and games without having to worry about filtering raw data or building their own object tracking/snapping model.
While we think most developers will be happy with the EyeX interactors and the out-of-the-box snapping, there will of course always be advanced users (like you?) with the skill, interest, and time to experiment with different approaches. That is why we also expose the raw data from the EyeX Engine, so applications can mix and match interactors with their own raw-data-based algorithms as needed.
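To make the "mix and match" idea concrete, here is a small, hypothetical sketch of the kind of smoothing an application might apply to raw gaze samples before feeding them into its own snapping logic. The class and its API are illustrative only; they are not part of the EyeX SDK, and this is not the engine's own filter:

```python
class GazeSmoother:
    """Exponential smoothing for noisy raw gaze samples.

    A generic illustration of client-side filtering on raw gaze
    data; real applications often use more sophisticated filters.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; lower = smoother, laggier
        self._x = None
        self._y = None

    def update(self, x, y):
        """Feed one raw sample; returns the smoothed gaze point."""
        if self._x is None:
            self._x, self._y = x, y      # seed with the first sample
        else:
            self._x += self.alpha * (x - self._x)
            self._y += self.alpha * (y - self._y)
        return self._x, self._y
```

An application could run the smoothed point through its own snapping model while still using EyeX interactors elsewhere in the UI.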
We have also seen the GDOT video and paper. As you said, it looks like a promising implementation. So far we haven't had time to compare the EyeX Engine with the GDOT algorithm. A good test would be an application in which you could switch between EyeX interactors and GDOT and measure the results. It is not on our roadmap for the coming months, but if some third-party developer (wink wink) or researcher (I will contact the GDOT team) can show that the GDOT approach is far superior, we will have to re-prioritize.
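As a sketch of what such a measurement might look like, here is a tiny, hypothetical harness that computes a hit rate per snapping strategy over the same recorded gaze trace. The function and its inputs are illustrative; you would plug in an interactor-based snapper and a GDOT-style snapper as the strategies:

```python
def compare_strategies(strategies, gaze_samples, target):
    """Hit rate per snapping strategy over one recorded gaze trace.

    strategies:   dict mapping strategy name -> snap function
                  (gaze point -> chosen object).
    gaze_samples: list of (x, y) gaze points recorded while the
                  user was looking at `target`.
    Returns a dict mapping strategy name -> fraction of samples
    for which the strategy picked the intended target.
    """
    return {
        name: sum(fn(g) == target for g in gaze_samples) / len(gaze_samples)
        for name, fn in strategies.items()
    }
```

Replaying the same trace through both strategies keeps the comparison fair, since both see identical input noise.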
Does anyone accept the challenge? 🙂