The GPU compute portion of Leo is fairly simple to explain, as you can see in the video below. Lights are one of the most complex parts of a 3D scene to render properly, and various techniques have evolved to allow complex lighting to, well, basically work. Some work better than others, but most are computationally painful or, like deferred rendering, come with other rather serious drawbacks.
Leo at GDC
At GDC this year, we asked AMD to explain the GPU compute functions in Leo, and as you can see, they did. Instead of computing illumination globally, Leo breaks the screen up into 32×32 pixel tiles, a bit over 2000 of them for a 1080p screen, and then uses compute functions to figure out which lights are visible in each tile. This drops the lighting workload per tile by a massive amount and takes very little GPU time too.
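To make the idea concrete, here is a minimal CPU-side sketch of that per-tile light culling. This is our own illustration, not AMD's actual compute shader: the real thing runs on the GPU and tests lights against per-tile view frusta, while this simplification tests a light's screen-space bounding circle against each tile's rectangle. The `cull_lights` function and its `(x, y, radius)` light format are ours, invented for the example.

```python
import math

# Screen and tile dimensions as described above.
SCREEN_W, SCREEN_H = 1920, 1080
TILE = 32

tiles_x = math.ceil(SCREEN_W / TILE)  # 60 tiles across
tiles_y = math.ceil(SCREEN_H / TILE)  # 34 tiles down
# 60 * 34 = 2040 tiles -- "a bit over 2000" for 1080p.

def cull_lights(lights):
    """Return a dict mapping each (tx, ty) tile to the indices of
    lights affecting it. `lights` is a list of (x, y, radius)
    tuples in pixel coordinates -- a 2D stand-in for the real
    per-tile frustum test the compute shader performs."""
    per_tile = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0, y0 = tx * TILE, ty * TILE
            x1, y1 = x0 + TILE, y0 + TILE
            visible = []
            for i, (lx, ly, r) in enumerate(lights):
                # Closest point on the tile rectangle to the
                # light's center; the light touches the tile if
                # that point lies within the light's radius.
                cx = min(max(lx, x0), x1)
                cy = min(max(ly, y0), y1)
                if (lx - cx) ** 2 + (ly - cy) ** 2 <= r * r:
                    visible.append(i)
            per_tile[(tx, ty)] = visible
    return per_tile
```

The payoff is that the per-pixel shading loop only walks its own tile's short light list instead of every light in the scene, which is why the technique scales to light counts that would choke a classic forward renderer.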
In this case, a forward renderer plus GPU compute means the difference between functional, and very nice if I do say so myself, lighting and a non-functional demo. The new version mentioned in the video can be found here, grab it and play around. We would tell you more about it, but it sadly only runs on Windows. S|A
Disclosure: Although SemiAccurate has a writer named Leo, he is not the Leo in this demo, eerie similarities aside.
Editor's note: You can learn more about this type of material at AFDS 2012, specifically the Heterogeneous Compute and Consumer Graphics tracks. More articles of this type can be found on SemiAccurate's AFDS 2012 links page. Special for our readers: if you register for AFDS 2012 and use promo code SEMI12, you get $50 off.