The only conclusions I could reach from my foray into Tobii/Vive territory are, sadly, these:
a) Tobii has accepted that the only way for the world to access its hardware inside the Vive Pro Eye is through a closed-source, Windows-only library (called something like 'SRanipal').
b) Vive (HTC/Valve) could not care less about Linux, and seems unmoved by the idea that their products might be used in research environments, or anywhere other than strictly Windows-only gaming scenarios. (I refused to download the SRanipal library because its licence explicitly forbids any usage scenario other than the original one, i.e. leisure under Windows.)
c) The only path left for me to access the view-ray data (necessarily under Linux, managing the USB-level conversation with the device directly) seems to pass through reverse engineering, which at present is not viable.
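For context, the reverse-engineering route would mean capturing the device's USB traffic (e.g. with usbmon or Wireshark under Linux) and then decoding the payloads. Purely as a hypothetical illustration of that second step (the packet layout below is invented for the example, not the real Vive Pro Eye protocol), decoding a gaze-ray packet might look like:

```python
import struct
from typing import NamedTuple

class GazeRay(NamedTuple):
    origin: tuple     # (x, y, z), units assumed, hypothetical
    direction: tuple  # (x, y, z) unit vector, hypothetical

# Hypothetical payload layout (NOT the real protocol): six
# little-endian 32-bit floats, origin xyz followed by direction xyz.
PACKET_FMT = "<6f"

def parse_gaze_packet(packet: bytes) -> GazeRay:
    """Decode one captured USB payload under the assumed layout."""
    ox, oy, oz, dx, dy, dz = struct.unpack(PACKET_FMT, packet)
    return GazeRay(origin=(ox, oy, oz), direction=(dx, dy, dz))

# Round-trip a synthetic packet to check the decoder.
raw = struct.pack(PACKET_FMT, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)
ray = parse_gaze_packet(raw)
print(ray.direction)  # (0.0, 0.0, 1.0)
```

Of course, without documentation the real layout would have to be inferred packet by packet, which is exactly the effort I judged not viable at present.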
So the project is all but stopped, through no lack of good will on my side.
I have a question, if you are willing and able to answer it. Where does the integration of the eye images take place in order to compute the viewing rays? Is it done in hardware aboard the device, or are the images uploaded and the computation performed by your library on the host computer?
I ask because the researchers I work for would also be quite interested in obtaining all or part of the raw images captured by the eye tracker (mainly to compute statistics on pupil-size changes in response to stimuli). If the image stream travels to the computer, it would be possible, at least in theory, to capture some or all of those images.
Thanks in advance for any answer you can provide.