Basics

On this page, you can read about the basics of designing Hand-Eye Coordination interactions.

Picking Up Objects

When picking up an object, you typically look at it before reaching out to grab it. You can use this to simplify the pick-up process to: look at the object and click.

This works well and feels nice, and it also removes the physical restriction of having to be close enough to an object to pick it up.

Give the user time to perceive that they are actually picking up the object. One way to do this is to make it fly to their hand.

Consider differentiating objects that can be picked up with gaze from other static objects, for example with a specific color, and giving them their own visual feedback when looked at.
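
Below is a minimal sketch of this pick-up flow, assuming a hypothetical per-frame update that receives the currently gazed-at object from the eye tracker; the `Grabbable` type, the `fly_speed` constant, and all other names are illustrative, not part of any real API.

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def lerp(a: Vec3, b: Vec3, t: float) -> Vec3:
    """Linearly interpolate between two points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

@dataclass
class Grabbable:
    position: Vec3
    highlighted: bool = False  # gaze feedback: on while looked at

def update(gazed: Grabbable | None, click: bool, hand_pos: Vec3,
           held: Grabbable | None, dt: float,
           fly_speed: float = 8.0) -> Grabbable | None:
    """Per-frame pick-up logic: look at an object and click to grab it;
    the object then flies toward the hand rather than teleporting, so the
    user has time to perceive that they are picking it up."""
    if held is None and gazed is not None and click:
        held = gazed  # no need to be physically close to the object
    if held is not None:
        held.position = lerp(held.position, hand_pos, min(1.0, fly_speed * dt))
    return held
```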


Gaze Targeting

When launching a projectile from our hands or from a handheld object, we naturally look at the target we want to hit: a gaze target. With the help of eye tracking, we can use this information to make the projectile hit the intended target.

Gaze targeting enables users to hit their intended target more reliably than a more explicit and unnatural method, such as a pointer. It allows for targeting when the hands are occupied, and it is also quicker and less tiring.

The interaction just works, and for the user it can feel like magic.

In this example of gaze targeting, the user picks up cards to gain powers in their hand, then targets enemies by looking at them and releasing the controller button. The power is not affected by physics such as gravity; it follows the trajectory of the hand prior to release and always ends up hitting the focused target.

It’s important to give clear visual feedback when the user gazes at an active target, to tell the user which object will be targeted when they release the handheld object.
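
A minimal sketch of this behavior, assuming the gaze target is sampled at the moment the controller button is released; the `homing` rate and all names are illustrative assumptions, not part of any real API.

```python
import math

Vec3 = tuple[float, float, float]

def steer_projectile(pos: Vec3, vel: Vec3, target: Vec3, dt: float,
                     homing: float = 4.0) -> tuple[Vec3, Vec3]:
    """Per-frame steering: keep the speed from the hand's trajectory at
    release, but bend the direction toward the gaze target so the
    projectile always ends up hitting it. Gravity is deliberately ignored,
    as in the card-power example above."""
    to_target = tuple(target[i] - pos[i] for i in range(3))
    dist = math.sqrt(sum(c * c for c in to_target))
    speed = math.sqrt(sum(c * c for c in vel))
    if speed == 0.0 or dist == 0.0:
        return pos, vel
    t = min(1.0, homing * dt)
    # Blend the current direction with the direction to the target.
    blended = tuple(vel[i] / speed * (1.0 - t) + to_target[i] / dist * t
                    for i in range(3))
    norm = math.sqrt(sum(c * c for c in blended))
    vel = tuple(c / norm * speed for c in blended)
    pos = tuple(pos[i] + vel[i] * dt for i in range(3))
    return pos, vel
```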


Throw Assist

By using the fact that we naturally look at a target before throwing a physical object, we can adjust the trajectory of the throw to be more accurate.

This helps the user achieve their goal of hitting the targets and reduces the frustration of throwing in VR.

Throwing in VR can be frustrating because you cannot feel the weight of the object, and you are not holding the actual object but rather the controller. This makes it harder to judge how hard to throw and often leads to missing the targets.

Consider the balance between always hitting the target and rarely hitting the target. Always hitting the target makes you feel powerful and in control, but removes much of the skill needed. Rarely hitting the target is frustrating, but involves a learning curve where you can improve.

A scenario somewhere in between is usually optimal: a challenge for the user, but not an impossible one. It all boils down to the use case. Sometimes you want to empower the user and give them superpowers, and sometimes you just want to minimize frustration and make the experience more realistic.

The throw should match the user’s expectations of its behavior. For example, a thrown stone should obey the laws of physics so that the throw doesn’t act like a guided missile.
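
One way to implement throw assist, sketched below under the assumption that the gaze target is known at the moment of release: blend the raw release velocity toward the exact ballistic velocity that would hit the target, so the throw still obeys gravity. The `assist_strength` and `flight_time` parameters are illustrative knobs for tuning the always-hit versus rarely-hit balance discussed above.

```python
Vec3 = tuple[float, float, float]
GRAVITY = (0.0, -9.81, 0.0)

def assisted_throw_velocity(release_pos: Vec3, release_vel: Vec3,
                            target: Vec3, flight_time: float,
                            assist_strength: float = 0.5) -> Vec3:
    """Blend the user's raw throw with the ideal ballistic throw.
    From target = p + v*T + g*T^2/2, the ideal release velocity is
    v = (target - p)/T - g*T/2."""
    ideal = tuple((target[i] - release_pos[i]) / flight_time
                  - 0.5 * GRAVITY[i] * flight_time
                  for i in range(3))
    # assist_strength 0.0 keeps the raw throw (pure skill, often frustrating);
    # 1.0 always hits (feels powerful but removes the skill).
    return tuple(release_vel[i] * (1.0 - assist_strength)
                 + ideal[i] * assist_strength
                 for i in range(3))
```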


Telekinesis

Manipulating objects from a distance using gaze and telekinesis lets the user quickly arrange the environment and its objects to their preference. This includes opening doors and drawers as well as moving objects around in the environment.

Selecting objects with a pointer instead of gaze can be tricky. With gaze selection it just works and it’s quicker.

Pointer selection also becomes tiring after a while, and it can be hard to select objects at certain angles.

Having the movement of the object controlled by a telekinesis lerp that depends on controller velocity and rotation makes the experience feel fluid for the user.

Manipulating objects in a zero-gravity environment with air drag makes the experience feel even better.

It’s a good idea to make the telekinesis sensitivity dependent on distance, so that it is less sensitive when objects are closer to the user.
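
A minimal sketch combining the lerped movement with distance-dependent sensitivity; `lerp_speed`, `sensitivity_per_meter`, and all other names are illustrative assumptions.

```python
import math

Vec3 = tuple[float, float, float]

def telekinesis_step(object_pos: Vec3, user_pos: Vec3,
                     controller_delta: Vec3, dt: float,
                     lerp_speed: float = 6.0,
                     sensitivity_per_meter: float = 1.0) -> Vec3:
    """Per-frame telekinesis: the controller's movement this frame drives
    a target point, and the object is lerped toward that point so the
    motion feels fluid rather than snappy."""
    # Less sensitive when the object is close, more sensitive far away.
    sensitivity = sensitivity_per_meter * math.dist(object_pos, user_pos)
    target = tuple(object_pos[i] + controller_delta[i] * sensitivity
                   for i in range(3))
    t = min(1.0, lerp_speed * dt)
    return tuple(object_pos[i] + (target[i] - object_pos[i]) * t
                 for i in range(3))
```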

Another example of telekinesis is moving chess pieces on a chessboard.

The user can select a chess piece simply by looking at it and pressing down a button on the controller; then, by moving their hand, they can decide where to put it down.

In a scenario like this, it can be good to snap the chess piece to the selected tile once released, or, if no valid tile or chess move can be found, snap it back to the tile it started on.
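
A minimal sketch of that release logic, assuming a hypothetical `is_valid_move` predicate that encodes the chess rules:

```python
Tile = tuple[int, int]  # (file, rank) indices on the board

def snap_on_release(drop_x: float, drop_z: float, tile_size: float,
                    start_tile: Tile, is_valid_move) -> Tile:
    """Snap the released piece to the nearest tile if that is a legal
    move; otherwise snap it back to the tile it started on."""
    nearest = (round(drop_x / tile_size), round(drop_z / tile_size))
    return nearest if is_valid_move(start_tile, nearest) else start_tile
```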

Consider the rotation multiplier so that the chess piece can still be rotated, but doesn’t rotate too much.

In certain cases when implementing telekinesis, it makes sense to lock the position along one of the axes, for example the Y-axis when moving a pawn on a flat map. In this case, it can be good to consider locking the rotation as well.
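
As a sketch, such a constraint can be applied after each telekinesis step; the `board_height` parameter is an illustrative assumption.

```python
Vec3 = tuple[float, float, float]

def constrain_to_board(pos: Vec3, board_height: float) -> Vec3:
    """Lock the Y position so the piece glides on the board plane while
    telekinesis moves it in X and Z; rotation is simply never applied."""
    return (pos[0], board_height, pos[2])
```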

Haptic feedback, triggered continuously as the user moves their hand over a distance, can improve the experience: it lets the user know how much hand movement is required to move the selected object, and makes the hand movement feel connected to the physical board.
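
A minimal sketch of this, assuming a hypothetical `pulse` callback that triggers a short controller vibration:

```python
import math

def haptics_on_hand_move(prev_hand_pos, hand_pos, accumulated: float,
                         pulse_every: float = 0.02,
                         pulse=lambda: None) -> float:
    """Fire a short haptic pulse each time the hand has moved another
    `pulse_every` meters, so the hand movement feels connected to the
    physical board. Returns the distance accumulated toward the next pulse."""
    accumulated += math.dist(prev_hand_pos, hand_pos)
    while accumulated >= pulse_every:
        pulse()  # e.g. a light controller vibration
        accumulated -= pulse_every
    return accumulated
```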