Have you looked at the QtEyeTracking sample in the cpp/samples directory of the SDK package? It demonstrates how to set up the environment for Qt. (Which is a bit tricky: make sure to read the README first.)
It doesn't do any actual eye tracking, though, only calibration, and the calibration runs in full-screen mode, so the question of how to map between window coordinates and normalized coordinates remains unanswered. I'm no Qt expert, but I found this on Stack Overflow: find the dimensions of the screen, multiply them by the normalized coordinates to get a point in screen pixels, and then map that point into your window by subtracting geometry().x() and geometry().y() (if I understood the Qt docs correctly).