The GPU compute portion of Leo is fairly simple to explain, as you can see in the video below. Lights are one of the most complex parts of a 3D scene to render properly, and various techniques have evolved to allow complex lighting to, well, basically work. Some work better than others, but most are computationally painful or carry other rather serious drawbacks, as deferred rendering does.
Leo at GDC
At GDC this year, we asked AMD to explain the GPU compute functions in Leo, and as you can see, they did. Instead of computing illumination globally, Leo breaks the screen up into 32×32 pixel tiles, a bit over 2000 of them for a 1080p screen, and then uses compute functions to figure out which lights are visible in which tile. This drops the lighting workload per tile by a massive amount and takes very little GPU time too.
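To make the idea concrete, here is a minimal CPU-side sketch of that tile-based light culling, written in Python rather than a compute shader. This is not AMD's actual code; the tile size and 1080p resolution come from the article, while the light representation (a screen-space position plus an effective radius) and the function names are assumptions for illustration:

```python
import math

TILE = 32          # tile size in pixels, per the article
WIDTH, HEIGHT = 1920, 1080  # 1080p screen

def tile_grid(width, height, tile=TILE):
    # Ceil-divide: partial tiles at the right and bottom edges
    # still need a full tile of their own.
    return (math.ceil(width / tile), math.ceil(height / tile))

def cull_lights(lights, width=WIDTH, height=HEIGHT, tile=TILE):
    """Assign each light (x, y, radius) in screen space to the tiles
    it can possibly touch. Returns {(tx, ty): [light indices]}, so the
    shading pass for a tile only loops over that tile's short list
    instead of every light in the scene."""
    tiles_x, tiles_y = tile_grid(width, height, tile)
    per_tile = {}
    for i, (x, y, r) in enumerate(lights):
        # Clamp the light's screen-space bounding box to the grid.
        x0 = max(0, int((x - r) // tile))
        y0 = max(0, int((y - r) // tile))
        x1 = min(tiles_x - 1, int((x + r) // tile))
        y1 = min(tiles_y - 1, int((y + r) // tile))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                per_tile.setdefault((tx, ty), []).append(i)
    return per_tile
```

At 1080p this yields a 60×34 grid, 2040 tiles in all, which is where the "a bit over 2000" figure comes from. A small light only lands in a handful of those tiles, so per-tile shading cost stays low no matter how many lights are in the scene. On the GPU, each tile maps naturally to a compute thread group that builds its light list in parallel.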
In this case, a forward renderer plus GPU compute means the difference between functional (and very nice, if I do say so myself) lighting and a non-functional demo. The new version mentioned in the video can be found here; grab it and play around. We would tell you more about it, but it sadly only runs on Windows. S|A
Disclosure: Although SemiAccurate has a writer named Leo, he is not the Leo in this demo, eerie similarities aside.
Editor's note: You can learn more about this type of material at AFDS 2012, specifically the Heterogeneous Compute and Consumer Graphics tracks. More articles of this type can be found on SemiAccurate's AFDS 2012 links page. As a special for our readers: if you register for AFDS 2012 and use promo code SEMI12, you get $50 off.