Why lightmaps also make economical sense for gamedev.
Most of the devices people already own that *could* play games are phones with integrated ARM GPUs, or thin laptops without dedicated GPUs. Targeting an RTX 2060 as the min-spec means roughly 80% of potential players won't be able to run the game.
Most of the per-frame cost these days goes to lighting and shadows. Shadow maps essentially require re-rendering the entire scene from every shadow-casting light's perspective, every frame. With lightmaps we do this work offline by raytracing into a lightmap texture, and we do it once, not per frame.
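The core of that offline step can be sketched in a few lines. This is a toy, not our baker: the names (`bake_lightmap`, `ray_hits_sphere`), the single point light, and the one spherical occluder are all hypothetical, and real bakers shoot many rays per texel rather than one hard shadow ray. But it shows the shape of the idea: walk every texel once, trace toward the light, store the result in a texture.

```python
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    # direction must be normalized; True if the segment
    # [origin, origin + max_t * direction] intersects the sphere
    oc = [origin[i] - center[i] for i in range(3)]
    b = sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 0.0 < t < max_t

def bake_lightmap(width, height, light_pos, occluder_center, occluder_radius):
    # One texel per unit of the floor plane y = 0; each texel stores a
    # scalar irradiance. Runs once, offline -- never per frame.
    lightmap = []
    for z in range(height):
        row = []
        for x in range(width):
            p = (x + 0.5, 0.0, z + 0.5)  # texel's world position
            to_light = [light_pos[i] - p[i] for i in range(3)]
            dist = math.sqrt(sum(d * d for d in to_light))
            d = [c / dist for c in to_light]
            if ray_hits_sphere(p, d, occluder_center, occluder_radius, dist):
                row.append(0.0)  # shadow ray blocked: texel is in shadow
            else:
                # Lambert term (floor normal is +y) with inverse-square falloff
                row.append(max(d[1], 0.0) / (dist * dist))
        lightmap.append(row)
    return lightmap
```

At runtime the shader just samples this texture: no shadow-map passes, no per-frame ray tracing.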
At the core of lightmapping is a texture that records light color, shadow information, ambient occlusion (AO), and bounce lighting all at once. The *only* downside of this technique is that it is rarely possible to bake data for things the player can modify, such as a hand-placed building. We think this is solvable by shading only the dynamic buildings and ships with something like voxel GI (the VoxelGI node in Godot Engine).
When you use RTX, the GPU is trying to compute this same lighting in real time with far fewer samples. So it makes total sense that the result is grainy, expensive, or leans on temporal reprojection, which introduces artefacts such as ghosting and smearing. But since we are rendering to a texture offline, with all the time in the world, we can just crank up the samples.
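The sample-count tradeoff is easy to demonstrate with a toy Monte Carlo estimator (hypothetical Python, not engine code). Here we estimate how much sky a surface point can see when an infinite wall blocks the -x half of its hemisphere, so the true answer is exactly 0.5. Noise in the estimate shrinks roughly as 1/√N, which is why an offline baker with thousands of samples per texel comes out clean while a few-samples-per-pixel real-time tracer comes out grainy.

```python
import random
import statistics

def estimate_visibility(n_samples, rng):
    """Monte Carlo estimate of sky visibility at a surface point.

    Directions are drawn uniformly inside the upper half of the unit
    ball by rejection sampling (only the sign of x matters here); an
    infinite wall on the -x side occludes every direction with x < 0,
    so the true visibility is exactly 0.5.
    """
    visible = 0
    for _ in range(n_samples):
        while True:  # rejection-sample a direction in the upper half ball
            x = rng.uniform(-1.0, 1.0)
            y = rng.uniform(-1.0, 1.0)
            z = rng.uniform(-1.0, 1.0)
            if 1e-9 < x * x + y * y + z * z <= 1.0 and y > 0.0:
                break
        if x >= 0.0:  # not blocked by the wall
            visible += 1
    return visible / n_samples

# Repeat the "bake" many times at two sample budgets and compare noise:
rng = random.Random(1)
few = [estimate_visibility(16, rng) for _ in range(200)]    # real-time-ish budget
many = [estimate_visibility(1024, rng) for _ in range(200)] # offline budget
noisy, clean = statistics.pstdev(few), statistics.pstdev(many)
```

With 64x the samples the standard deviation drops by roughly 8x; grain in a path-traced frame is exactly this variance, made visible.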
Bounce lighting: see the orange wall of the hangar bouncing its color onto the nearby building?