Tagged: analytics sdk, calibration
- 06/01/2015 at 18:01 #2317 Stefan Eickelberg (Participant)
From the Analytics SDK 3.0 Developers’ Guide, Section “Calibration Procedure” (p.20):
1. A small animated object should be shown on the screen to catch the user’s attention.
2. When it arrives at a calibration point, let the object rest there for about 0.5 seconds to give the user a chance to focus, then shrink the object to concentrate the user's gaze.
3. Once the object has shrunk, tell the eye tracker to start collecting data for that specific calibration point.
4. Wait for the eye tracker to finish collecting calibration data for the current position.
In the Dev Guide, the calibration procedure for the Analytics SDK 3.0 is described as above. However, all language bindings of the SDK seem to be missing functions to actually trigger recording of calibration data for a specified target. All samples just enter a calibration mode, in which the eye tracker presumably records all the data it can get, regardless of which calibration target (if any at all) is actually displayed. After such a target has been displayed, the addCalibrationPoint(point2d) function is called, which supposedly retrieves a small, well-fitting subset of the recorded data and assigns it to the passed calibration point.
Is there any way to get more control over what data is used in calibration (such as setting start and end points for recording, as in the text above)? What kind of logic is applied to choose the data points used in the calibration procedure?
I am asking because I want to squeeze as much precision out of the X2-60 as possible, and as we all know, it all starts with a good calibration. 🙂

- 09/01/2015 at 10:23 #2334 Anders (Participant)
The calibration API is a bit complex and difficult to describe, so it's perhaps no wonder that you don't find what you expect. But don't worry, it's all there!
“Entering calibration mode” means that the eye tracker clears its calibration buffer and prepares to record calibration data.
Then you start to display calibration targets on the screen. As soon as you have one on the screen, and figure that the user is looking at it, you call the addCalibrationPoint function. This instructs the eye tracker to collect data for the given point. There is a small delay (about 500 ms if I remember right) before the actual recording starts. The operation finishes when enough data has been collected, or after a timeout.
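To make the per-point behavior concrete, here is a minimal sketch of what addCalibrationPoint does according to the description above: wait a short grace period, then collect gaze samples until enough have arrived or a timeout expires. The function name, delay, sample count, and timeout values below are illustrative stand-ins, not the real SDK internals.

```python
import time

def add_calibration_point(point2d, get_gaze_sample,
                          start_delay=0.5, needed=25, timeout=3.0):
    # Grace period before recording actually starts (~500 ms per Anders).
    time.sleep(start_delay)
    samples = []
    deadline = time.monotonic() + timeout
    # Collect until enough data has been gathered, or the timeout expires.
    while len(samples) < needed and time.monotonic() < deadline:
        sample = get_gaze_sample()   # one gaze sample from the tracker
        if sample is not None:       # the tracker may drop samples
            samples.append(sample)
    # The collected data is assigned to the displayed calibration target.
    return point2d, samples

# Usage with a fake gaze source that always "looks at" the target:
target = (0.5, 0.5)
point, data = add_calibration_point(target, lambda: target,
                                    start_delay=0.0, timeout=0.5)
print(point, len(data))  # (0.5, 0.5) 25
```

The blocking behavior falls out naturally: the function only returns once collection for that point has finished, so the caller can move the target to the next position immediately afterwards.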
Finally, you call computeCalibration to compute a new calibration based on the collected data, and stopCalibration to exit calibration mode.

- 09/01/2015 at 11:55 #2339 Stefan Eickelberg (Participant)
> you call the addCalibrationPoint function. This instructs the eye tracker to collect data for the given point. There is a small delay (about 500 ms if I remember right) before the actual recording starts. The operation finishes when enough data has been collected, or after a timeout.
Ah, that’s the information I was missing, thanks. It would probably be a good addition to the dev guide. Since that function was always called AFTER displaying the calibration targets in the code samples, I was under the assumption that previously collected data was used and no recording of new data was triggered.
Am I correct to assume that addCalibrationPoint is a blocking function, so that I can do whatever I want as soon as control is returned to the calling function?

- 11/01/2015 at 14:22 #2347 Anders (Participant)
Yes, addCalibrationPoint is a blocking function. There is also an asynchronous variant called addCalibrationPointAsync.
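Putting the whole thread together, the full sequence described above (enter calibration mode, add each point while its target is displayed, compute, exit) can be sketched against a small stub. Only the function names addCalibrationPoint, computeCalibration, and stopCalibration come from this thread; the startCalibration name for entering calibration mode, and the stub's internals, are assumptions and not the real SDK binding.

```python
class TrackerStub:
    """Hypothetical stand-in for the eye tracker; mirrors only the
    function names mentioned in this thread, not the real SDK."""

    def __init__(self):
        self.buffer = []         # collected (point, samples) pairs
        self.calibrating = False

    def startCalibration(self):
        # Entering calibration mode clears the calibration buffer.
        self.buffer.clear()
        self.calibrating = True

    def addCalibrationPoint(self, point2d):
        # Blocking: on real hardware there is a ~500 ms delay, then gaze
        # samples are recorded for this target until enough are collected.
        assert self.calibrating, "call startCalibration first"
        samples = [point2d] * 30  # stand-in for recorded gaze data
        self.buffer.append((point2d, samples))

    def computeCalibration(self):
        # Fit a calibration from the buffered data (stubbed out here;
        # pretend at least two points are required for a valid fit).
        return len(self.buffer) >= 2

    def stopCalibration(self):
        self.calibrating = False

# A typical five-point calibration pattern (normalized screen coordinates).
targets = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9)]

tracker = TrackerStub()
tracker.startCalibration()
for point in targets:
    # In a real app: animate the target to `point`, let it rest ~0.5 s,
    # shrink it, and only then trigger data collection.
    tracker.addCalibrationPoint(point)
ok = tracker.computeCalibration()
tracker.stopCalibration()
print("calibration ok:", ok)  # calibration ok: True
```

The async variant would presumably accept a completion callback instead of blocking, but its exact signature is not shown in this thread.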