- 20/02/2014 at 14:58 #497
I’m developing an eye tracker enabled kiosk application for children, which means that there will be a lot of different users, each spending only a couple of minutes at the kiosk station.
I’m thinking about how to calibrate the eye tracker for each user. First of all, is it sensible to use the same calibration profile for all users? I’m guessing: no. I’m therefore wondering if I could calibrate the device behind-the-scenes, while my application is running. I couldn’t find anything in the EyeX documentation about this, but perhaps there is a way?
As a last resort I’m thinking of requiring each user to go through a calibration process, but they shouldn’t be allowed to access anything on the Windows machine except for my kiosk application. Is there a simple way of launching the calibration process from within my application, without security issues?
TL;DR: How should I integrate a calibration process in my eye tracker enabled kiosk application?

- 21/02/2014 at 09:14 #503
Anders (Participant)
Lots of questions there 🙂 Starting from the top:
Q. Is it sensible to use the same calibration for all users?
A. It depends on the level of accuracy you need. If you only need a rough indication of where the user is looking, then you might be able to do without a calibration. The lack of proper calibration will show up as an offset error in the gaze point of up to several centimeters — which doesn’t matter at all in some applications but ruins everything in other cases. My recommendation would be to give it a try and see if you can do without, especially since the users will only spend a couple of minutes at the kiosk station.
Q. Is it possible to calibrate the device behind-the-scenes?
A. Yes. The low-level Gaze SDK provides an API for calibrating the device. It works roughly like this: you attract the user’s eye gaze to a point on the screen and tell the tracker to record a calibration point at the same time. Repeat for as many points as needed. (This is a simplified description, of course. In practice it’s a bit tricky to get right, because everything is asynchronous and you will need some error handling and so on.)
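The flow described above could be sketched roughly like this in C# (the Gaze SDK also has a .NET binding). Note this is an untested sketch, not working code: `AddCalibrationPointAsync` and `StopCalibrationAsync` are real methods named later in this thread, but `StartCalibrationAsync`, `ComputeAndSetCalibrationAsync`, `Point2D` and the stimulus helper methods are assumptions that should be checked against the Gaze SDK API reference.

```csharp
// Hypothetical sketch of a behind-the-scenes calibration sequence.
// ShowStimulusAt/HideStimulus are placeholder methods for whatever your
// application uses to attract the user's gaze to a point on screen.
private static readonly Point2D[] CalibrationPoints =
{
    new Point2D(0.1, 0.1), new Point2D(0.9, 0.1), new Point2D(0.5, 0.5),
    new Point2D(0.1, 0.9), new Point2D(0.9, 0.9),
};

private void Calibrate(IEyeTracker tracker)
{
    // StartCalibrationAsync is an assumption based on the C API.
    tracker.StartCalibrationAsync(startError =>
    {
        // In real code the points must be added sequentially, waiting for
        // each callback (and for the user's fixation) before moving on.
        foreach (var point in CalibrationPoints)
        {
            ShowStimulusAt(point);
            tracker.AddCalibrationPointAsync(point, pointError =>
            {
                // Handle per-point errors; consider retrying the point.
            });
        }

        // ComputeAndSetCalibrationAsync is also assumed from the C API;
        // it would compute the calibration from the recorded points.
        tracker.ComputeAndSetCalibrationAsync(computeError => HideStimulus());
    });
}
```

Because everything is asynchronous, a real implementation would chain these calls through their callbacks (or wrap them in Tasks) rather than looping synchronously as the sketch suggests.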
Q. Is there a simple way of launching the calibration process from within my application?
A. If you use the low-level Gaze SDK, then the answer is definitely yes. If you use the EyeX SDK, then the answer is no, not at the moment, but we are considering that as a future extension.

- 25/02/2014 at 13:25 #513
Thanks for replying! I’m confident the application will require high accuracy, unfortunately.
I’ll look into using the Gaze SDK, but I’d much prefer not to run my own eye tracking engine. Do you know when the EyeX SDK will offer calibration process integration? I’d imagine several developers would be interested in such a feature, especially as eye tracking is becoming a consumer technology and many of us are designing showcase applications with it.

- 27/02/2014 at 10:11 #520
Anders (Participant)
No, sorry, I don’t. I’m fairly sure that it will happen, but there is no communicated time plan. We’ll announce it on the dev zone as soon as the decision is made, though.

- 02/04/2014 at 01:12 #650
So now I’m looking at implementing a calibration process with the low-level Gaze SDK, and I noticed that there’s no calibration demo for the C# SDK. Does this mean that the C and C# Gaze SDKs are functionally different, or is it possible to calibrate an eye tracker with C#?
The application I’m working on is written in C# WPF and I’d prefer not to call unmanaged C code via P/Invoke. What would be the best way to go about this?

- 03/04/2014 at 10:43 #652
Robert [Tobii] (Participant)
It is possible to calibrate with C#, but we do not have any sample yet.
If you know your WPF, you should be able to build your own calibration UserControl using these methods on your IEyeTracker instance:
IEyeTracker.AddCalibrationPointAsync (for each point you display on screen)
IEyeTracker.StopCalibrationAsync (on error or cancel)

- 03/04/2014 at 16:28 #655
Henke (Participant)
I’m also really interested in some sort of public calibration. @CarlThomme, if it’s a public project I’m interested in following it, or maybe we can cooperate. I have noticed, though, that writing the app from scratch with the assumption that it should work without calibration makes a big difference. You can often achieve the same thing in many different ways, for example by using “layers” with zooming. In my experience, using 9 zones on a 24″ screen is not a big problem, and I can often rewrite the application to work in another way.
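The “9 zones” idea could be sketched like this: map a normalized gaze point to one of nine coarse screen regions, so that the offset error of an uncalibrated tracker (a few centimeters, as noted earlier in the thread) rarely pushes the gaze into the wrong zone. The method name is made up for illustration.

```csharp
// Map a gaze point in normalized screen coordinates (0.0 - 1.0) to one of
// 9 zones in a 3x3 grid. Zone indices run 0..8, left-to-right, top-to-bottom.
static int ZoneForGaze(double x, double y)
{
    int col = Math.Min(2, (int)(x * 3));  // clamp so x == 1.0 stays in column 2
    int row = Math.Min(2, (int)(y * 3));
    return row * 3 + col;
    // e.g. ZoneForGaze(0.5, 0.5) gives 4 (the center zone)
}
```

Fewer, larger targets like this are the main way to make an application robust without per-user calibration.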