Question about the GazeAwarePanel
Tagged: Gaze-aware behavior, object snapping
19/02/2014 at 04:59 #486 | Cheng Guo (Participant)
Hello,
I have some questions about how the GazeAwarePanels example works. It would be great if someone could help me out 🙂
Q1: In HandleQuery(), the
queryBounds.TryGetRectangularData(out x, out y, out w, out h)
call basically defines the query region. I checked the values of x, y, w, and h; it seems that w = h = 153. I assume the x and y values represent the center of the query bounds, and that w and h are precalculated or predefined values. Am I correct?

Q2: When a query arrives, the program needs to recreate all of its interactors and send them back to the engine, so that the engine can determine which type of events should be raised on which interactor.
I think I can accomplish the same thing by monitoring OnGazePointData() as in the MinimalGazeDataStream sample, determining which object the eye gaze overlaps, and triggering the corresponding behavior myself.
What is the benefit of letting the EyeX Engine keep everything in check rather than maintaining everything in my own program?
Thanks!
19/02/2014 at 17:27 #487 | Robert [Tobii] (Participant)
Hi, thank you for the good questions.
Q1: You are partly right, but x and y are the top-left values of the query rect. w and h have hard-coded values right now, but could potentially be adapted to the application context, distance from the screen, DPI settings, etc.
Q2: You are right: if you want to use the GazeAware or Activatable behaviors, you need to update the EyeX Engine with the locations of your interactors every time the engine asks for them (i.e. whenever the user looks near your interactors).
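To make the query/snapshot cycle concrete, here is a rough sketch of what such a handler could look like, modeled on the shape of the sample's HandleQuery(). The Query type and the Bounds accessor are assumptions about the EyeX .NET bindings; _panels, GetScreenBounds() and AddToSnapshot() are hypothetical helpers standing in for however your application tracks its interactors and builds the snapshot it commits back to the engine:

// A rough sketch only, modeled on the GazeAwarePanels sample.
// Query/Bounds are assumed from the EyeX .NET bindings; _panels,
// GetScreenBounds() and AddToSnapshot() are hypothetical helpers.
private void HandleQuery(Query query)
{
    double x, y, w, h;

    // (x, y) is the TOP-LEFT corner of the query rect, as noted above;
    // w and h are currently hard-coded by the engine (153 px here).
    if (query.Bounds.TryGetRectangularData(out x, out y, out w, out h))
    {
        var queryRect = new System.Windows.Rect(x, y, w, h);

        // Only interactors that overlap the query region need to be
        // included in the snapshot sent back to the engine.
        foreach (var panel in _panels)
        {
            if (queryRect.IntersectsWith(GetScreenBounds(panel)))
            {
                AddToSnapshot(panel);
            }
        }
    }
}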
Technically it is possible to use the OnGazePointData data stream, but then you need to take care of overlapping windows, hit testing, gaze data filtering, object snapping and input handling yourself. It may not sound so hard, but our experience is that many developers get stuck in the details before they get the chance to build any useful interactions. But you are welcome to try 🙂
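If you do want to try that route, here is a small self-contained sketch of the kind of bookkeeping involved: exponential smoothing plus hit testing against a front-to-back list of screen rectangles, so the top-most object wins. Everything here, from the class name to the smoothing factor, is an illustrative choice and not part of the EyeX SDK:

using System.Collections.Generic;
using System.Windows;

class ManualGazeHitTester
{
    private const double Alpha = 0.3; // smoothing factor; tune per application
    private Point _smoothed;
    private bool _hasSample;

    // Targets in front-to-back order, so the top-most hit wins (z-order).
    private readonly List<KeyValuePair<string, Rect>> _targets =
        new List<KeyValuePair<string, Rect>>();

    public void AddTarget(string id, Rect screenBounds)
    {
        _targets.Add(new KeyValuePair<string, Rect>(id, screenBounds));
    }

    // Feed this from your gaze point callback, e.g. OnGazePointData().
    // Returns the id of the object under the (smoothed) gaze, or null.
    public string OnGazePoint(double screenX, double screenY)
    {
        // Exponential smoothing: a crude stand-in for proper gaze filtering.
        _smoothed = _hasSample
            ? new Point(Alpha * screenX + (1 - Alpha) * _smoothed.X,
                        Alpha * screenY + (1 - Alpha) * _smoothed.Y)
            : new Point(screenX, screenY);
        _hasSample = true;

        foreach (var target in _targets)
        {
            if (target.Value.Contains(_smoothed))
            {
                return target.Key;
            }
        }
        return null;
    }
}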
However, the other great benefit of using the high-level interactions provided by the EyeX Engine is giving a consistent experience to the end user. If you are running an operating system with hundreds of gaze-enabled applications and all of them behave differently, filter data differently and use different key bindings, it will be very confusing for the user. So a client-server solution like the EyeX Engine enables a standardized way of interacting with the computer using your eyes, and is one step towards the ultimate goal: integration into operating systems.
20/02/2014 at 03:42 #490 | Cheng Guo (Participant)
Hello Robert,
Thanks for your reply. As I started to process everything by myself, I realized that figuring out the z-order is something I need to do (which I didn't anticipate before :P).
Yes, you are right, providing a uniform experience is important. I can see the benefits when you have multiple gaze-enabled applications running simultaneously.
Could you elaborate a little bit more on “object snapping”? I don’t see this mentioned in the current pre-alpha documentation.
Thanks!
20/02/2014 at 08:55 #492 | Robert [Tobii] (Participant)
Great that you are trying the manual processing as well. It is a good way to learn the possibilities and limitations of using gaze data as an input modality. I am sure there will always be situations where the high-level interactions included in the EyeX Engine are not enough, so it is good to have a backup plan and know how to handle the raw gaze data.
However, in most cases when you want to know “which object is the user looking at?”, it is very helpful to make use of the features that the EyeX Engine provides. The object tracking/snapping discussed earlier in this thread is a way to make it easier for the user to hit small objects even though the gaze data is noisy. A similar technique is used on touch interfaces. I cannot give you any more implementation details right now, but maybe we can share some more info later on if there is demand.
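To give a feel for the general idea only (this is explicitly not how the engine implements it), a naive snapping heuristic might pull a noisy gaze point to the nearest target center within a fixed radius:

using System.Collections.Generic;
using System.Windows;

static class NaiveSnapper
{
    // Returns the center of the closest target within maxSnapDistance,
    // or null if the gaze point is not near any target. Purely
    // illustrative; the engine's actual algorithm is not public.
    public static Point? Snap(Point gaze, IEnumerable<Rect> targets, double maxSnapDistance)
    {
        Point? best = null;
        double bestDistance = maxSnapDistance;

        foreach (var rect in targets)
        {
            var center = new Point(rect.X + rect.Width / 2, rect.Y + rect.Height / 2);
            var distance = (center - gaze).Length; // Point - Point yields a Vector
            if (distance <= bestDistance)
            {
                bestDistance = distance;
                best = center;
            }
        }
        return best;
    }
}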