Essential Concepts
This section outlines essential rendering and foveation concepts that are used throughout the upcoming pages.
Rendering Architectures
Rendering architectures structure how an application produces visible output from basic geometry.
In all architectures the application sends geometry to the GPU, usually as triangles defined by vertices, along with textures, lights and other data. The application then instructs the GPU to produce one or more rasterizations and compositions of this data, which eventually form the visible image.
On modern GPUs each triangle takes a relatively predictable path through the various general-purpose and dedicated hardware units that make up the shading pipeline. Greatly simplified:
- A vertex shader first transforms the triangle’s vertices into some target space, applying animations and generating additional metadata.
- Next, many triangles might be discarded because they are not visible or relevant for the requested target space.
- Each remaining triangle is then broken down into its fragments (pixels), and for each of these a fragment shader computes the resulting output color.
The time it takes to render an image is directly related to how much work must be done in each stage and how efficiently the GPU can perform it. How this process is orchestrated in terms of render targets and computational flow is dictated by the rendering architecture.
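As a rough illustration of this flow, here is a greatly simplified CPU-side sketch in C++. The stage functions are hypothetical placeholders for work a GPU performs in dedicated or programmable hardware; real pipelines involve far more stages and data.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct Vertex   { float x, y, z; };
struct Triangle { Vertex a, b, c; };

// Stage 1: vertex shader - transform each vertex into the target space.
// (Identity here; a real shader applies model/view/projection transforms and animation.)
Vertex vertexShader(const Vertex& v) { return v; }

// Stage 2: discard triangles that cannot contribute to the requested image.
// (Trivial example: cull triangles that are entirely behind the camera.)
bool isVisible(const Triangle& t) { return t.a.z > 0 || t.b.z > 0 || t.c.z > 0; }

// Stage 3: fragment shader - compute the output colour of one fragment.
uint32_t fragmentShader(const Triangle&, int /*x*/, int /*y*/) { return 0xFFFFFFFFu; }

// Coverage determination placeholder; rasterization is described in the next section.
std::vector<std::pair<int, int>> rasterize(const Triangle&) { return {}; }

// Greatly simplified CPU-side model of the path each triangle takes on the GPU.
void renderTriangles(const std::vector<Triangle>& input,
                     std::vector<uint32_t>& renderTarget, int width)
{
    for (Triangle t : input) {
        t.a = vertexShader(t.a);                 // 1. transform vertices
        t.b = vertexShader(t.b);
        t.c = vertexShader(t.c);
        if (!isVisible(t)) continue;             // 2. discard irrelevant triangles
        for (auto [x, y] : rasterize(t))         // 3. break into fragments...
            renderTarget[y * width + x] = fragmentShader(t, x, y);   // ...and shade each
    }
}
```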
Rasterization
During ‘rasterization’, triangles are converted into a series of horizontal lines that are written into the render target (or display buffer).
The term originates from the updating of CRT displays where an electron beam illuminated pixels (picture elements) on the display one line at a time. Each of the lines drawn by the beam is termed a “raster line”. Raster graphics refers to rendering which uses the same line by line principle.
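As a concrete illustration of the line-by-line principle, the sketch below rasterizes a single triangle on the CPU by walking its bounding box one horizontal line at a time and writing the covered pixels. The types, winding convention and coverage test are simplified assumptions, not how any particular GPU implements rasterization.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Point2 { float x, y; };

// Signed area term: its sign tells which side of the edge a->b the point p lies on.
static float edge(Point2 a, Point2 b, Point2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Rasterize one counter-clockwise triangle into a width*height buffer,
// producing the output one horizontal (raster) line at a time.
void rasterizeTriangle(Point2 a, Point2 b, Point2 c, uint32_t colour,
                       std::vector<uint32_t>& target, int width, int height)
{
    const int y0 = std::max(0,          (int)std::floor(std::min({a.y, b.y, c.y})));
    const int y1 = std::min(height - 1, (int)std::ceil (std::max({a.y, b.y, c.y})));
    const int x0 = std::max(0,          (int)std::floor(std::min({a.x, b.x, c.x})));
    const int x1 = std::min(width  - 1, (int)std::ceil (std::max({a.x, b.x, c.x})));

    for (int y = y0; y <= y1; ++y) {          // each y produces one raster line
        for (int x = x0; x <= x1; ++x) {
            const Point2 p{ x + 0.5f, y + 0.5f };          // sample at the pixel centre
            if (edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0)
                target[y * width + x] = colour;            // pixel is covered: write it
        }
    }
}
```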
Shading
This refers to the calculation of the colour (or value) of a pixel which will be written into the render target (or display buffer) during rasterization.
Modern hardware uses ‘shaders’ to perform this calculation. A shader is a heavily parallelised program that runs on the GPU, normally invoked once for every pixel output by rasterization.
Post-processing
This is image processing, usually applied after the main rendering, that performs full-screen image manipulations.
Post-processing passes can include: vignetting, high to low dynamic range conversion, colour grading, film grain, gamma conversion, contact shadows, distortions, fades … and many more.
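As an illustration, a minimal CPU-side sketch of such a pass is shown below, combining a vignette with a gamma conversion. The buffer layout and parameter choices are assumptions; real engines run these passes as full-screen GPU shaders.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Full-screen post-processing pass over a linear-space RGB float buffer:
// darken towards the corners (vignette), then apply a gamma conversion.
void postProcess(std::vector<float>& rgb, int width, int height,
                 float vignetteStrength, float gamma)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Normalised offset from the screen centre (0 at the centre, ~0.7 in corners).
            const float dx = (x + 0.5f) / width  - 0.5f;
            const float dy = (y + 0.5f) / height - 0.5f;
            const float falloff = std::max(
                0.0f, 1.0f - vignetteStrength * 2.0f * std::sqrt(dx * dx + dy * dy));

            float* px = &rgb[(y * width + x) * 3];
            for (int c = 0; c < 3; ++c) {
                const float v = px[c] * falloff;        // vignette
                px[c] = std::pow(v, 1.0f / gamma);      // gamma conversion
            }
        }
    }
}
```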
Forward Rendering
Geometry is transformed to 2-D and rasterized into a buffer. While it is being rasterized, shaders process the colour and lighting of each pixel. After the main rasterization and shading, the image may undergo some post processing before being output to a display device (monitor or VR headset).
VR post-processing includes re-projection, which corrects for movement of the headset since the start of rendering, and a ‘lens warp’ to counteract the distorting effect of the HMD lenses.
This is the traditional method of rendering to displays and is commonly used in VR. The application provides geometry as a set of triangles, together with lights and other information. The renderer then transforms each triangle and rasterizes it directly into a render target; all lighting calculations are performed in the pixel shader during rasterization.
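The defining property is that lighting is evaluated inside the pixel shader while the triangle is rasterized. Below is a minimal CPU-style sketch of such a pixel shader using a simplified diffuse light model; the types and lighting model are illustrative only.

```cpp
#include <algorithm>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 direction; Vec3 colour; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Forward rendering in essence: during the single rasterization pass, the
// pixel shader evaluates every scene light for every pixel it shades.
Vec3 forwardPixelShader(Vec3 normal, Vec3 albedo, const std::vector<Light>& allLights)
{
    Vec3 result{ 0.0f, 0.0f, 0.0f };
    for (const Light& l : allLights) {                                // all lights, every pixel
        const float nDotL = std::max(0.0f, dot(normal, l.direction)); // simple diffuse term
        result.x += albedo.x * l.colour.x * nDotL;
        result.y += albedo.y * l.colour.y * nDotL;
        result.z += albedo.z * l.colour.z * nDotL;
    }
    return result;
}
```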
Forward+ Rendering
This is an optimization of forward rendering. The flow is very similar, with one addition:
Before the main scene is rasterized, the scene lights are processed on the GPU into a 2-dimensional array of lists recording which lights affect each pixel or block (tile) of pixels. The shaders used during the rasterization and shading pass read the list matching their block of pixels and apply only the lights it contains, limiting the amount of lighting work performed per pixel.
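A minimal CPU-side sketch of the binning step is shown below, assuming screen-space point lights with a position and influence radius and fixed-size square tiles; real implementations typically build these lists in a compute shader with more precise frustum or depth-aware tests.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Light already projected into screen space: position plus an influence radius.
struct ScreenLight { float x, y, radius; };

// Build, for every tile of the screen, the list of lights that can affect it.
// Shaders later read only the list for their own tile instead of all scene lights.
std::vector<std::vector<int>> buildTileLightLists(const std::vector<ScreenLight>& lights,
                                                  int width, int height, int tileSize)
{
    const int tilesX = (width  + tileSize - 1) / tileSize;
    const int tilesY = (height + tileSize - 1) / tileSize;
    std::vector<std::vector<int>> tiles(std::size_t(tilesX) * tilesY);

    for (int i = 0; i < int(lights.size()); ++i) {
        const ScreenLight& l = lights[i];
        // Conservative screen-space bounds of the light's influence, in tile units.
        const int tx0 = std::max(0,          int((l.x - l.radius) / tileSize));
        const int tx1 = std::min(tilesX - 1, int((l.x + l.radius) / tileSize));
        const int ty0 = std::max(0,          int((l.y - l.radius) / tileSize));
        const int ty1 = std::min(tilesY - 1, int((l.y + l.radius) / tileSize));

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                tiles[std::size_t(ty) * tilesX + tx].push_back(i);  // light i affects this tile
    }
    return tiles;
}
```

During shading, a pixel at (x, y) would then read the list at tile index (y / tileSize) * tilesX + (x / tileSize) and loop over only those lights.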
Deferred Rendering
In deferred rendering, the shading (lighting) is separated from the rasterization pass. Deferred rendering can reduce the cost of lighting the scene and can make some additional effects possible.
During the rasterization pass, per-pixel data describing the properties of the surface at each pixel is written to a ‘fat’ render target or G-Buffer (geometry buffer), rather than writing color information to a simple render target. G-Buffer data usually contains screen-relative depth, normals/tangents, sampled textures and other lighting-response parameters.
Separate passes (full screen and sub-rectangles) then generate the color information by reading the per-pixel properties and calculating the effect of each light on the pixel. This color information is often added to a ‘lighting accumulation buffer’.
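To make the two phases concrete, here is a minimal CPU-side sketch assuming a G-Buffer of albedo, normal and depth and a simple directional light; real G-Buffer layouts, light types and pass structures vary widely.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One entry per pixel, written during the rasterization pass instead of colour.
// All vectors are pre-sized to width * height elsewhere.
struct GBuffer {
    std::vector<Vec3>  albedo;
    std::vector<Vec3>  normal;
    std::vector<float> depth;
};

struct Light { Vec3 direction; Vec3 colour; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pass 1 (per rasterized pixel): store surface properties, not a final colour.
void writeGBuffer(GBuffer& g, std::size_t index, Vec3 albedo, Vec3 normal, float depth)
{
    g.albedo[index] = albedo;
    g.normal[index] = normal;
    g.depth[index]  = depth;
}

// Pass 2 (full screen, once per light): read the stored properties and add the
// light's contribution into the lighting accumulation buffer.
void accumulateLight(const GBuffer& g, const Light& light, std::vector<Vec3>& accumulation)
{
    for (std::size_t i = 0; i < accumulation.size(); ++i) {
        const float nDotL = std::max(0.0f, dot(g.normal[i], light.direction));
        accumulation[i].x += g.albedo[i].x * light.colour.x * nDotL;
        accumulation[i].y += g.albedo[i].y * light.colour.y * nDotL;
        accumulation[i].z += g.albedo[i].z * light.colour.z * nDotL;
    }
}
```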
The downsides of deferred rendering include a fixed overhead for populating the G-Buffer and poor compatibility with multisample anti-aliasing (MSAA).
Foveated Rendering Techniques
When it comes to applying foveated rendering in an application, there are two basic approaches: static (fixed) foveation and dynamic foveation.
Static Foveated Rendering
Static foveated rendering, sometimes referred to as fixed foveated rendering, uses a fixed foveation region, usually in the center of the display. It exploits the blurring, distortion and occlusion effects resulting from the lenses and internal headset geometry to reduce the rendering load in areas of the display which can never be seen in detail or, in some cases, cannot be seen at all.
Dynamic Foveated Rendering
Also called “eye-tracked” foveated rendering. Dynamic foveated rendering depends on an eye-tracking signal and exploits the reduced visual acuity of peripheral vision to reduce detail where it will not be noticed by the user.
Dynamic foveated rendering can have advantages over static foveated rendering: larger areas of the display can have their processing load reduced, while every viewing direction still presents the maximum clarity available within the optical and physical constraints of the lenses and headset.
Hardware Supported Foveated Rendering
At the time of writing, there are two hardware solutions that allow applications to use foveated rendering with relative ease: Variable Rate Shading (VRS) and Qualcomm Adreno foveation.
Variable Rate Shading
This is usually implemented as an extension of MSAA. MSAA has a fixed ratio between the rasterized resolution and the shading resolution; VRS allows this ratio to be specified per draw call, per primitive, or relative to screen position.
There are two tiers of VRS:
- Tier 1 VRS supports per-draw-call selection of the shader sampling frequency.
- Tier 2 VRS adds per-primitive and screen-space relative selection of the shader sampling frequency.
All types of sampling frequency selection can be used at the same time, with combiner logic determining which setting takes priority.
VRS foveated rendering uses screen space frequency control and so requires Tier 2 VRS support.
The frequency selection combiner logic, which merges the screen-space, draw-call and per-primitive controls, can be used to exclude specific problematic geometry from the foveated frequency control.
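As one concrete, hedged illustration, the sketch below uses the Direct3D 12 Tier 2 VRS API: it fills a screen-space shading-rate image that keeps full rate around an assumed foveal point and coarsens towards the periphery, then binds it with combiners on the command list. The tile thresholds and helper function names are illustrative, and feature-support checks, resource creation and upload are omitted.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>
#include <d3d12.h>

// Fill one byte per screen tile (the tile size is reported by
// D3D12_FEATURE_DATA_D3D12_OPTIONS6::ShadingRateImageTileSize). The thresholds
// and fovea position used here are illustrative.
std::vector<uint8_t> buildFoveationRateImage(int tilesX, int tilesY,
                                             float foveaU, float foveaV)
{
    std::vector<uint8_t> rates(std::size_t(tilesX) * tilesY);
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            const float du = (x + 0.5f) / tilesX - foveaU;
            const float dv = (y + 0.5f) / tilesY - foveaV;
            const float d  = std::sqrt(du * du + dv * dv);   // distance from the fovea
            const D3D12_SHADING_RATE rate =
                d < 0.15f ? D3D12_SHADING_RATE_1X1            // full detail at the fovea
              : d < 0.30f ? D3D12_SHADING_RATE_2X2            // coarser mid ring
                          : D3D12_SHADING_RATE_4X4;           // coarsest periphery
            rates[std::size_t(y) * tilesX + x] = uint8_t(rate);
        }
    }
    return rates;   // uploaded into a DXGI_FORMAT_R8_UINT texture elsewhere
}

void bindFoveation(ID3D12GraphicsCommandList5* cmdList, ID3D12Resource* rateImage)
{
    // Combiners: pass the per-draw/per-primitive rate through, then combine it
    // with the screen-space image so that the coarser of the two is used.
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_MAX,
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(rateImage);
}
```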
More information can be found on the following pages:
Qualcomm Adreno Foveation
Qualcomm Adreno Foveation is an extension to OpenGL ES and is available on some mobile VR platforms. It works similarly to VRS, but offers less fine-grained control over the processing and general configuration.
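As a hedged sketch of what using the extension can look like (via GL_QCOM_texture_foveated), the code below opts an eye-buffer texture into foveation and updates the focal point each frame. Extension availability checks, function-pointer loading via eglGetProcAddress and framebuffer setup are omitted, and the gain values are illustrative.

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Opt the eye-buffer texture into foveated rendering using the 'scaled bin'
// method, where peripheral tiles are rendered at reduced resolution and
// upscaled by the hardware when the texture is resolved.
void enableAdrenoFoveation(GLuint eyeTexture)
{
    glBindTexture(GL_TEXTURE_2D, eyeTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_FOVEATED_FEATURE_BITS_QCOM,
                    GL_FOVEATION_ENABLE_BIT_QCOM |
                    GL_FOVEATION_SCALED_BIN_METHOD_BIT_QCOM);
}

// Update the focal point each frame (for dynamic foveation this comes from eye
// tracking; for static foveation it stays at the lens centre). Coordinates are
// in normalised device coordinates (-1..1); the gain values control how quickly
// resolution falls off away from the focal point and are illustrative here.
void updateFocalPoint(GLuint eyeTexture, float gazeX, float gazeY)
{
    const GLfloat gainX = 4.0f, gainY = 4.0f, foveaArea = 0.0f;
    glTextureFoveationParametersQCOM(eyeTexture, /*layer*/ 0, /*focalPoint*/ 0,
                                     gazeX, gazeY, gainX, gainY, foveaArea);
}
```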