Today is Microsoft's DirectX Developer Day, featuring presentations by luminaries from AMD, NVIDIA and Microsoft themselves discussing the development and adoption of DirectX for graphics. Opening the day with the biggest news, Microsoft unveiled the next iteration of DirectX 12, which will lay the foundation of game development on high performance Windows gaming PCs and XBOX Series X consoles prior to the launch of next-generation hardware. Put simply, DirectX 12 Ultimate will integrate advanced graphics rendering tools such as DXR Raytracing and Variable Rate Shading into the main DirectX 12 API and define hardware-level support for these approaches on both platforms, effectively rolling in many of the innovations that debuted with NVIDIA GeForce RTX in 2018.
Bringing Next-Gen to DirectX 12
The main thrust of DirectX 12 Ultimate is incorporation of four new or revised methodologies into the main API stack to improve graphics performance and fidelity. Hardware DirectX 12 Ultimate certification will mean support for all of these aspects of the API rather than the patchwork levels of support currently outlined by DirectX 12, drawing a stark line between PC graphics hardware eras.
Two of the methodologies outlined will be familiar to those who have followed the development and release of GeForce RTX hardware and SDK revisions since 2018, and all follow a similar theme of dynamic utilisation of hardware resources. Microsoft's decision to bake in support for ray tracing (as part of the revised DXR v1.1) will be the most expected, but NVIDIA have beaten the drum to highlight more than just that particular advancement. In fact, of all the headline tools NVIDIA championed on the release of GeForce RTX, only DLSS is notable by its absence.
Here's a very brief summary of the new aspects to DirectX 12 Ultimate:
DirectX Raytracing v1.1
NVIDIA's GeForce RTX hardware has leveraged Microsoft's DirectX Raytracing (DXR) API since it was added to DX12 in October 2018, and with DX12 Ultimate a new incremental revision will be made to the base toolset. DXR 1.1 allows ray tracing calls to be performed directly on the GPU rather than making a round trip to the CPU, lets ray tracing shaders be loaded more efficiently, and adds support for inline ray tracing.
DXR 1.1 will need to be supported to gain DirectX 12 Ultimate certification; prior versions of DX12 had no such requirements for DXR.
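In real code, DXR 1.1's inline ray tracing is written against HLSL's RayQuery object, but the control-flow idea can be shown in plain C++: the calling code walks the geometry itself and consumes the closest hit in place, rather than handing control to separately dispatched hit/miss shaders. Everything below — the two-triangle "scene" and the Möller–Trumbore intersection routine — is an illustrative stand-in, not DXR API usage.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <limits>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray/triangle intersection: returns the hit distance t,
// or a negative value on a miss.
float intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return -1.0f;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot(e2, q) * inv;                     // distance along the ray
}

// "Inline" query: the caller performs the traversal and uses the result
// immediately -- no separate hit/miss shader dispatch.
float closest_hit(Vec3 orig, Vec3 dir, const std::array<std::array<Vec3, 3>, 2>& tris) {
    float best = std::numeric_limits<float>::infinity();
    for (const auto& t : tris) {
        float d = intersect(orig, dir, t[0], t[1], t[2]);
        if (d > 0.0f && d < best) best = d;      // keep the nearest forward hit
    }
    return best;
}
```

The appeal of the inline form is exactly what the sketch shows: the result is an ordinary return value the caller can branch on, which suits simple shadow or occlusion queries that don't need the full shader-dispatch machinery.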
Variable Rate Shading
Variable Rate Shading is an umbrella term for many different techniques that dynamically allocate shading resources to different regions of a scene as a frame is rendered. Simply put, rather than equitably dividing up resources across the frame, the game engine can specify certain regions which need not be shaded to the highest possible quality, perhaps based on the degree of motion of a particular object or the complexity of a particular texture (or lack thereof). NVIDIA and MachineGames implemented a type of VRS into Wolfenstein: Youngblood last year, supported by NVIDIA's Turing GPU architecture.
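As a rough sketch of the decision VRS hands to the engine, consider choosing a coarser shading rate for screen tiles that are fast-moving or low on texture detail. The thresholds and the ShadingRate values below are illustrative stand-ins, not D3D12 API types:

```cpp
#include <cassert>
#include <cstdint>

// Coarse shading rates analogous to D3D12's 1x1 / 2x2 / 4x4 modes; the
// value is the side length of the pixel block one shader invocation covers.
enum class ShadingRate : uint8_t { Full = 1, Half = 2, Quarter = 4 };

// Pick a per-tile rate from screen-space motion (pixels per frame) and a
// rough texture-detail score in [0, 1]. Thresholds are made up for
// illustration, not taken from any shipping engine.
ShadingRate pick_rate(float motion_px, float detail) {
    if (motion_px > 8.0f && detail < 0.3f) return ShadingRate::Quarter; // fast-moving, flat
    if (motion_px > 2.0f || detail < 0.5f) return ShadingRate::Half;    // modest savings
    return ShadingRate::Full;                                           // full quality
}

// Shader invocations needed to cover a tile at a given rate.
int invocations(int tile_w, int tile_h, ShadingRate r) {
    int s = static_cast<int>(r);
    return ((tile_w + s - 1) / s) * ((tile_h + s - 1) / s);
}
```

A 16×16 tile shaded at the quarter rate needs 16 invocations instead of 256 — the kind of saving that made the Wolfenstein: Youngblood implementation worthwhile.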
Mesh Shading
Mesh Shading gives game developers deeper access to the computational capabilities of compute shaders, letting them bundle together operations on groups of triangles that share one or more vertices rather than performing those operations separately on each vertex of each triangle. This can significantly reduce repeated work.
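A minimal sketch of that saving, assuming a meshlet-like grouping in which triangles index into a small shared vertex list: the per-vertex work runs once per unique vertex rather than three times per triangle. The Meshlet struct and the stand-in transform below are illustrative, not D3D12 types:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A meshlet-style grouping: triangles index into a shared vertex list, so
// per-vertex work runs once per unique vertex instead of once per triangle
// corner. One float stands in for a full vertex attribute set.
struct Meshlet {
    std::vector<float> vertices; // unique vertices
    std::vector<int>   indices;  // 3 entries per triangle
};

// Run the (stand-in) per-vertex transform once per unique vertex; the
// triangles then reference the results through the index list.
std::vector<float> transform_vertices(const Meshlet& m, float scale) {
    std::vector<float> out;
    out.reserve(m.vertices.size());
    for (float v : m.vertices)
        out.push_back(v * scale); // heavy work: |vertices| invocations, not 3 * |triangles|
    return out;
}

// Invocation counts with and without vertex sharing, for comparison.
std::size_t shared_work(const Meshlet& m)   { return m.vertices.size(); }
std::size_t unshared_work(const Meshlet& m) { return m.indices.size(); }
```

Even for a single quad (two triangles, four vertices) the shared path does four transforms instead of six; across dense meshes where most vertices are shared by several triangles, the gap widens considerably.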
One identified example of where Mesh Shading can have notable impact is in Level of Detail calculations, which determine the quality of objects in a frame based on distance from the camera. Traditional techniques typically make use of many calls to the CPU and a set LoD, but by using Mesh Shading to dynamically determine LoD based on GPU load and other factors the calculation can be done in GPU hardware alone.
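The dynamic-LoD idea can be sketched as a small selection function that a mesh shader could evaluate entirely on the GPU. The distance falloff and GPU-load bias below use made-up constants for illustration, not values from any real engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Choose a level-of-detail index (0 = highest quality) from camera
// distance, then bias it by current GPU load. All constants are
// illustrative: detail halves roughly every doubling of distance
// beyond 10 units, and heavy load drops one extra level.
int select_lod(float distance, float gpu_load /* 0..1 */, int max_lod) {
    int lod = (distance <= 10.0f)
        ? 0
        : static_cast<int>(std::log2(distance / 10.0f)) + 1;
    if (gpu_load > 0.9f) lod += 1;   // under pressure, shed more geometry
    return std::min(lod, max_lod);
}
```

The point of doing this in a mesh shader rather than on the CPU is that `gpu_load` and per-object distance are already at hand on the GPU, so the choice costs no draw-call round trips.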
NVIDIA developed the Asteroids LoD demo to demonstrate this use of Mesh Shading, and you can read a breakdown of it here.
Sampler Feedback/Texture-Space Shading
Currently games tend to render each frame from scratch, going through the process of rasterisation and then shading to determine the camera's view. Sampler Feedback decouples the process of rasterisation and shading in the rendering pipeline, making it possible to re-use calculations made when rendering prior frames in the shading process of a subsequent frame by referencing the sampled textures. This can be particularly useful when sampling and shading algorithms are very complex, taking up plenty of computation cycles.
Texture-Space Shading determines whether new shader calculations need to be performed for particular pixels, and utilises Sampler Feedback to repurpose prior computations where appropriate. For instance, at times it may be possible to re-use the results of shader colour calculations from a previous frame; while an object's observation angle may have changed, its colour won't have. You could therefore potentially limit shader calculations for certain pixels to once every few frames, allocating shader resources elsewhere and boosting overall fps.
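A CPU-side sketch of that reuse, assuming a cache of shading results keyed by texel and aged by frame number — the real mechanism lives on the GPU, where Sampler Feedback records which texels were actually touched:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Cache expensive shading results in texture space and only recompute a
// texel when its cached entry is older than max_age frames. A stand-in
// for Texture-Space Shading: the view angle can change every frame while
// the cached colour is simply re-used.
struct ShadingCache {
    struct Entry { float colour; uint32_t frame; };
    std::unordered_map<uint64_t, Entry> texels;
    uint32_t recomputes = 0;  // how often the expensive path actually ran

    template <typename ShadeFn>
    float shade(uint64_t texel, uint32_t frame, uint32_t max_age, ShadeFn fn) {
        auto it = texels.find(texel);
        if (it != texels.end() && frame - it->second.frame < max_age)
            return it->second.colour;   // fresh enough: reuse a prior frame's result
        ++recomputes;                   // expensive path: run the shader stand-in
        float c = fn(texel);
        texels[texel] = {c, frame};
        return c;
    }
};
```

With `max_age` of 3, a texel sampled every frame is shaded only once every third frame — exactly the "once every few frames" budget described above, with the freed cycles available elsewhere.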
NVIDIA's Asteroids demo in action
These underlying principles are complex, and so it's well worth reading the relevant parts of NVIDIA's GeForce RTX primer which describes them in less abstract terms. Microsoft also go on to outline the new capabilities of DXR 1.1 and Sampler Feedback in a developer-centric manner in their announcement blog.
DirectX 12 Ultimate will serve as the backbone for games development on XBOX Series X, and broad-based support for the API on Windows PCs will also streamline some aspects of porting any given project. Obviously it will also incorporate a DirectX 12 fall-back mode for older and less capable hardware, but the idea is to bring developers up to speed and give them the tools they need to take advantage of next-generation hardware on both platforms.
NVIDIA have confirmed that GeForce RTX hardware will support DirectX 12 Ultimate via a driver update when the Windows 10 2020 update that rolls out the API debuts later this year. AMD meanwhile have stated that their RDNA2 architecture has 'full support' for DirectX 12 Ultimate, all but confirming these features on the next generation of AMD PC graphics hardware alongside the next-generation XBOX's custom SoC.
GeForce GTX 16-series GPUs based on NVIDIA's Turing architecture will be stuck in something of a no-man's-land, supporting some features (such as VRS) but not others (DXR etc.), and so will not be DX12 Ultimate compliant. Expectations for AMD's Radeon RX 5000-series are more cut and dried: no support for any DirectX 12 Ultimate-exclusive features is likely until RDNA2 hardware is released.
SOURCE: [url=]Microsoft DirectX 12 Ultimate Announcement Blog[/url], NVIDIA Companion, AMD Companion