The Baking Show

In game graphics, everything that can be baked should be baked.

– Captain Honora

Did you first hear of raytracing 5 years ago, or 20 years ago? That likely dictates your view of the technology, and what you imagine the point of it is.

When I first heard of raytracing, it was around 2004 with the early versions of Blender. Before Cycles was around, there were two choices for AO: approximate, or brute-force raytraced. We'd spend hours, and later in the 2010s minutes, to calculate a single AO pass.

Games' ambient occlusion, in comparison, is always a pale imitation of the ground truth (often abbreviated GT), which is typically modeled as a pre-calculated, brute-force raytraced AO.

So first there was screen-space AO, which you might know as SSAO in games. HBAO was introduced by Nvidia in a 2008 SIGGRAPH paper led by Louis Bavoil and Miguel Sainz. An improvement over SSAO in terms of artefacts, it's still a bit expensive to compute per frame. It adds to the cost of every frame and learns nothing.

The solution is to bake the ground-truth AO to a texture once, for as many objects as possible, instead of having to compute it every frame.
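For the curious, here's roughly what that looks like in Blender's Python API. A minimal sketch, assuming Cycles, a UV-unwrapped active object, and an existing material to hang the image node on; untested, and the sample count and resolution are just placeholders:

```python
# Minimal sketch: bake ground-truth AO into a texture, once.
import bpy

scene = bpy.context.scene
obj = bpy.context.active_object

scene.render.engine = 'CYCLES'  # the ray tracer computes the ground truth
scene.cycles.samples = 256      # expensive, but we only pay this cost once

# Image that will receive the bake.
ao_img = bpy.data.images.new("AO_bake", width=1024, height=1024)

# Cycles writes the bake into the *active* Image Texture node of the material.
nodes = obj.active_material.node_tree.nodes
img_node = nodes.new('ShaderNodeTexImage')
img_node.image = ao_img
nodes.active = img_node

bpy.ops.object.bake(type='AO')  # the actual bake

ao_img.filepath_raw = "//AO_bake.png"
ao_img.file_format = 'PNG'
ao_img.save()                   # now it's just a texture the game reads back
```

That's the whole trick: pay for 256 samples once at build time, ship a plain PNG.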

So while RTX, VXGI, SDFGI, and Lumen free game developers from having to think about baked lighting workflows, there is still an incredible efficiency gain in baking whatever objects do not move.

I think we often overestimate how much of the level will move or be destructible, versus the number of times players get stuck in the sewers of FPS and third-person adventure games because the interior lighting is not baked and they cannot see grey-on-grey details.

“but wait, isn’t your game stylized anyway?”

Yes, but you would be surprised how much baked AO helps the polish and read of a character at a distance.

Just to reiterate why we're doing this: it's orders of magnitude faster on the GPU to read baked RGB values from a texture than to calculate even a crude approximation of the same thing per pixel. To keep the cost manageable, realtime AO in games is done at low sampling rates such as 1 sample per pixel (1 spp) or 2 spp. Denoising is the only thing that makes this technique viable. So this sells GPUs with more ML cores, and we get to where we are today… but I digress. Worked great for Nvidia tho.

1 spp, baby!
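If you want to feel that noise in numbers, here's a toy illustration (plain NumPy, my own, no renderer involved): treat AO at a point as the probability that a random hemisphere ray is occluded, and watch the per-pixel noise shrink as the sample count grows.

```python
# Toy model: each AO sample is a 0/1 "did the ray hit something?" answer,
# so a 1 spp estimate per pixel is pure coin-flip noise.
import numpy as np

rng = np.random.default_rng(0)
true_ao = 0.3       # pretend 30% of the hemisphere is occluded
n_pixels = 10_000   # a pile of pixels that all share this ground truth

for spp in (1, 4, 64, 256):
    # Each pixel averages `spp` binary visibility samples.
    hits = rng.random((n_pixels, spp)) < true_ao
    estimate = hits.mean(axis=1)
    print(f"{spp:4d} spp -> mean {estimate.mean():.3f}, "
          f"noise (stddev) {estimate.std():.3f}")

# At 1 spp every pixel reads either 0.0 or 1.0 -- only a denoiser
# (or a bake with many samples) turns that into a clean image.
```

The mean is right at any sample count; it's the variance that kills you at 1 spp.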

So it's faster to read even a big texture recording what the ground truth looked like at one point in time than to recalculate it per frame, even as just one noisy-ass sample per pixel.

Because even at just 1920×1080 px (Full HD), that's about two million pixel operations per frame. You can see how this gets huge, especially for 2K and 4K laptops without a discrete GPU.
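Back-of-the-envelope, my arithmetic, assuming 60 fps and a single AO ray per pixel:

```python
# Rays per second needed just for per-frame AO at 1 spp, 60 fps.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
FPS, SPP = 60, 1

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels -> {w * h * SPP * FPS / 1e6:.0f} M rays/s")

# 1080p: 2,073,600 pixels -> 124 M rays/s
# 4K:    8,294,400 pixels -> 498 M rays/s -- before bounces or shading.
```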

There is no way they can handle the demand. So we have a weird Frankenstein world where the industry refuses to see that the majority of sales are on hardware that could not play the game at all without rendering at a lower resolution and then upscaling. So we're compounding temporal artefacts, upscaling, and AI upscaling, sometimes even on the same frame.

This feels like a strange soapbox to stand on, as someone who loves raytracing so much, but raytracing should be done once, and to a texture. So if you want dynamic lighting, you know you will need to carefully construct the levels so they look good. For interiors you still *have* to bake if you want the highest-quality visuals on a broad range of hardware (not just the beefy desktop GPUs with RT cores).

We're mining the entire planet, in part, to solve through hardware what is a bad software choice. In an infinite-resource scenario I totally get it though; what I'm talking about are all "legacy" concerns, from the standpoint of future hardware for which 64 spp raytraced AO would be no big deal. But we don't get an infinite number of "new generation of tech, everyone buys a new GPU" type events left, from an economic standpoint or an ecological one. Also, most people don't care what a GPU is, and a $1000 spend on a gaming rig is excessive.

So we have to contend with hardware that ran Half-Life 2 and Warcraft III perfectly, but cannot run The Long Dark (Unity engine). Can we still make games that look great on this kind of hardware?

Ideally any game would come with two distinct visual styles and rendering modes: eco-retro (thermal efficiency, reduction of bottlenecks, static lighting) and next-gen (pushing the hardware to its limit). All console builds and the Nintendo Switch would be on eco-retro. See also Case study: Miracle ports, IDG.

Anyway, that's a super long-winded way to say we're baking AO and lightmaps, and dang, it's not fun to do that in Blender. Gonna dig up my pre-Adobe builds of Substance Painter to improve the workflow. I should start a petition to improve the baking workflow for Blender newcomers. 😂 Maybe I should even tweak the code myself and upload it as a proposal 🤔 We could call this system open source. Amazing.
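Something like this hypothetical batch helper is the kind of quality-of-life fix I mean: select a bunch of meshes, get an AO map for each, done. Same assumptions as the earlier sketch (Cycles, UVs, materials in place), and no error handling:

```python
# Hypothetical quality-of-life helper (a sketch, not a real add-on):
# batch-bake an AO map for every selected mesh, one image each.
import bpy

def bake_ao_for_selected(size=1024):
    targets = [o for o in bpy.context.selected_objects
               if o.type == 'MESH' and o.active_material]
    for obj in targets:
        # Bake one object at a time so Cycles' target is unambiguous.
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.context.view_layer.objects.active = obj

        img = bpy.data.images.new(f"{obj.name}_AO", size, size)
        nodes = obj.active_material.node_tree.nodes
        node = nodes.new('ShaderNodeTexImage')
        node.image = img
        nodes.active = node  # Cycles bakes into the active image node

        bpy.ops.object.bake(type='AO')
        img.filepath_raw = f"//{obj.name}_AO.png"
        img.file_format = 'PNG'
        img.save()

bake_ao_for_selected()
```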

I hope you enjoyed this longer episode on baking. As you can see, I have a lighting soapbox to rant from any day of the week. You can totally @ me for controversial retro-GPU lighting hot takes. That will be approximately (3) people on earth interested in this truly niche content. Enjoy.

– Honora out.

