Demo implementation of a generic gaze-contingent display.
We take one input image and create - via image processing - two images
out of it: an image to show at the screen location where the subject
fixates (according to the eye tracker), and a second image to show in the
periphery of the subject's field of view. These two images are blended into
each other via a Gaussian weight mask (an aperture). The mask is centered
at the center of gaze and allows for a smooth transition between the two
images.

This illustrates an application of OpenGL alpha blending: compositing
two images based on a spatial Gaussian weight mask. The compositing is
done by the graphics hardware.
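The per-pixel math behind this compositing can be sketched in NumPy. This is
not the demo's own code (the demo offloads the blend to the GPU via OpenGL
alpha blending); the function names `gaussian_aperture` and `blend` are
illustrative, and the image sizes and sigma are arbitrary:

```python
import numpy as np

def gaussian_aperture(h, w, cy, cx, sigma):
    # 2D Gaussian weight mask, value 1 at the gaze position (cy, cx),
    # falling off smoothly toward the periphery.
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def blend(foveal, peripheral, mask):
    # Per-pixel convex combination: where mask == 1 the foveal image
    # shows through; where mask == 0 only the peripheral image is seen.
    m = mask[..., None]  # broadcast the mask over the color channels
    return m * foveal + (1.0 - m) * peripheral

# Two toy "images": a bright foveal image and a dark peripheral one.
h, w = 64, 64
foveal = np.ones((h, w, 3))
peripheral = np.zeros((h, w, 3))

# Aperture centered at the (simulated) center of gaze.
mask = gaussian_aperture(h, w, cy=32, cx=32, sigma=10)
out = blend(foveal, peripheral, mask)
```

In the actual demo, this weighted sum is performed by the graphics hardware:
the mask is uploaded as the alpha channel of a texture, and OpenGL's blend
equation computes the same `m * foveal + (1 - m) * peripheral` combination
per pixel at display refresh rates.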

If you set the optional 'usehmd' parameter to 1, the demo will display on
a VR HMD, and if that HMD has a supported eyetracker, it will be used to
move the foveated area gaze-contingently with the user's tracked gaze.

See also: PsychDemos, MovieDemo, DriftDemo
