@bazhenovc
Last active September 1, 2025 10:51
The Sane Rendering Manifesto

The goal of this manifesto is to provide a set of easy-to-follow, reasonable rules for realtime and video game renderers.

These rules highly prioritize image clarity/stability and pleasant gameplay experience over photorealism and excess graphics fidelity.

Keep in mind that shipping a game takes priority over everything else; it is allowed to break the rules of this manifesto when there are no other good options for shipping the game.

Do not use dynamic resolution.

Fractional upscaling makes the game look bad on most monitors, especially if the scale factor changes over time.

What is allowed:

  1. Rendering to an internal buffer at an integer scale factor, followed by a blit to native resolution with point/nearest filtering.
  2. Integer scale factors that match the monitor resolution exactly after upscaling.
  3. The scale factor should be fixed and determined by the quality preset in the settings.

What is not allowed:

  1. Adjusting the scale factor dynamically at runtime.
  2. Fractional scale factors.
  3. Any integer scale factor that doesn't exactly match the monitor/TV resolution after upscale.
  4. Rendering opaque and translucent objects at different resolutions.

Implementation recommendations:

  1. Render at a lower resolution internally, but output at native resolution.
  2. Render to a lower-resolution render target, then do an integer upscale and run postprocessing at native resolution.
  3. Use letterboxing to work around weird resolutions.
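The recommendations above can be sketched in a few lines. This is a minimal example, and the struct and function names are illustrative rather than taken from any particular engine: it picks the largest integer scale factor that fits the native resolution, then computes letterbox margins so the upscaled image is centered.

```cpp
#include <algorithm>

// Result of integer-scaling an internal resolution up to a native display.
struct ScaledOutput {
    int scale;    // integer upscale factor for the point/nearest blit
    int offsetX;  // letterbox margin on the left/right, in native pixels
    int offsetY;  // letterbox margin on the top/bottom, in native pixels
};

ScaledOutput computeIntegerScale(int internalW, int internalH,
                                 int nativeW, int nativeH) {
    // Largest integer factor that fits in both dimensions, never below 1.
    int scale = std::max(1, std::min(nativeW / internalW,
                                     nativeH / internalH));
    ScaledOutput out;
    out.scale = scale;
    // Center the scaled image; leftover native pixels become letterbox bars.
    out.offsetX = (nativeW - internalW * scale) / 2;
    out.offsetY = (nativeH - internalH * scale) / 2;
    return out;
}
```

For example, a 640x360 internal buffer on a 1920x1080 display scales by exactly 3 with no letterboxing, while the same buffer on a 1280x1024 display scales by 2 with vertical letterbox bars.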

Do not render at lower refresh rates.

Low refresh rates (under 60Hz) increase input latency and make the gameplay experience worse for the player.

What is allowed:

  1. For high refresh rate monitors (90Hz, 120Hz, 240Hz, etc.) it is allowed to render at 60Hz.
  2. It is always allowed to render at the highest refresh rate the hardware supports, even if it's lower than 60Hz (for example, an incorrect cable/HW configuration, or the user explicitly configured power/battery saving settings).
  3. Offering alternative graphics presets to reach target refresh rate.

What is not allowed:

  1. Explicitly targeting 30Hz refresh rate during development.
  2. Using any kind of frame generation - it does not improve input latency, which is the whole point of having higher refresh rates.

Implementation recommendations:

  1. Decouple your game logic update from the rendering code.
  2. Use GPU-driven rendering to avoid CPU bottlenecks.
  3. Try to target native monitor refresh rate and use the allowed integer scaling to match it.
  4. Use vendor-specific low-latency input libraries.
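Recommendation 1 above is commonly implemented with a fixed-timestep accumulator loop. The sketch below uses illustrative names and a placeholder logic step: game logic always advances at a fixed rate, while rendering runs as fast as the display allows and blends between the two most recent logic states.

```cpp
// Game state is deliberately trivial here; a real game would carry
// transforms, physics state, etc.
struct GameState { double x = 0.0; };

class FixedStepLoop {
public:
    explicit FixedStepLoop(double stepSeconds) : step(stepSeconds) {}

    // Advance logic by as many whole fixed steps as fit into elapsed time.
    void advance(double frameSeconds) {
        accumulator += frameSeconds;
        while (accumulator >= step) {
            previous = current;
            current.x += 10.0 * step;  // placeholder logic: move 10 units/s
            accumulator -= step;
        }
    }

    // Blend factor in [0, 1) for interpolating between logic states.
    double alpha() const { return accumulator / step; }

    GameState previous, current;

private:
    double step;
    double accumulator = 0.0;
};
```

The renderer then draws `previous` blended toward `current` by `alpha()`, so the presented image stays smooth at any display refresh rate while the simulation rate stays fixed and deterministic.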

Do not use temporal amortization.

If you cannot compute something within the duration of 1 frame, then stop and rethink what you are doing.

You are making a game, make sure it looks great in motion first and foremost. Nobody cares how good your game looks on static screenshots.

In many cases, bad TAA or unstable temporally amortized effects are an accessibility issue that can cause health problems for your players.

What is allowed:

  1. Ray tracing is allowed as long as the work is not distributed across multiple frames.
  2. Any kind of lighting or volume integration is allowed as long as it can be computed or converged within 1 rendering frame.
  3. Variable rate shading is allowed as long as it does not change the shading rate based on the viewing angle and does not introduce aliasing.

What is not allowed:

  1. Reusing view-dependent computation results from previous frames.
  2. TAA, including AI-assisted TAA. It has never looked good in motion; even with AI it breaks on translucent surfaces and particles.
  3. Trying to interpolate or denoise missing data in cases of disocclusion or fast camera movement.

Implementation recommendations:

  1. Prefilter your roughness textures with vMF filtering.
  2. Use AI-based tools to generate LOD and texture mipmaps.
  3. Use AI-based tools to assist with roughness texture prefiltering: take a supersampled image as input and train the AI to prefilter it so that there is less shader aliasing.
  4. Enforce consistent texel density in the art production pipeline.
  5. Enforce triangle density constraints in the art production pipeline.
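For recommendation 1 (vMF roughness prefiltering), one common closed-form approach fits a vMF lobe to the averaged normal in each mip texel and folds the lobe's variance into the GGX roughness, so specular highlights alias less at a distance. The exact kappa-to-roughness mapping varies between engines, so treat the constants and the variance combination below as assumptions, not a definitive implementation:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of Toksvig-style / vMF roughness prefiltering. The length of the
// averaged normal in a mip texel encodes how much the underlying normals
// vary: a shorter average normal means more variance, which is folded into
// the GGX roughness.
float prefilterRoughness(float baseRoughness, float avgNormalLength) {
    // Clamp to avoid division by zero when all normals are identical.
    float len = std::min(std::max(avgNormalLength, 1e-4f), 0.9999f);

    // vMF concentration estimate from the mean resultant length.
    float kappa = len * (3.0f - len * len) / (1.0f - len * len);

    // Treat 1/kappa as the extra variance contributed by the normals.
    float variance = 1.0f / kappa;

    // Add that variance to the GGX lobe "variance" (alpha squared, with the
    // common alpha = roughness^2 remap), then map back to roughness.
    float alpha = baseRoughness * baseRoughness;
    float alphaSq = alpha * alpha + variance;
    return std::pow(alphaSq, 0.25f);
}
```

With a fully coherent texel (average normal length near 1) the base roughness passes through unchanged; as the normals diverge, the returned roughness grows toward 1, which is the intended anti-aliasing behavior.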
@bazhenovc (Author)

@whoisKomet this is a valid idea and it was used before, it's slightly cheaper than regular LPV and slightly lower quality. Makes perfect sense for VR or games that are not rendering shadowmaps for some reason.

@whoisKomet

@bazhenovc Interesting, is there a game you could point me to that uses this, just for reference? This paper is personally the first time I've encountered something like this, but I wouldn't know where to start looking anyway.

And regarding LPVs, from where I see it these two techniques complement each other really well if put together. Having the precomputed VPL locations and colors already in the scene bypasses the need for an RSM for each light source. Each VPL would only need to check its visibility in a regular shadow map, and then the injection and propagation steps could be performed as usual. I wonder if the removal of the extra shadow buffers is enough to compensate for storing the VPLs beforehand, though. Then again, I am basing my knowledge of LPVs on the original 2009 paper by Kaplanyan, so if there is a more updated version available this might be irrelevant (and I'd like to be made aware of it if possible).

@bazhenovc (Author) commented Aug 7, 2025

@whoisKomet

Interesting, is there a game you could point me to that uses this, just for reference?

PC/DX11 version of Ghost Recon: Future Soldier (released in 2012) used this exact idea if I recall correctly (I didn't directly work on the GI implementation), but my memory is hazy and as far as I know it wasn't published, so you'll have to take my word for it. It was discussed at several conference afterparties, and it's likely that more games from that era used it.

Having the precomputed VPL locations and colors already in the scene bypasses the need for a RSM for each light source

If you're rendering a shadow map already, extending it to RSM isn't that expensive. VR/Mobile games often don't render shadow maps so it's an important feature there, otherwise there's very little benefit.

Another thing to consider is that rendering 6000 visible point lights isn't exactly trivial either, LPV at least decouples that and sampling cost is fixed.

Also, cached shadow maps aren't exactly new either (i.e. https://gpuzen.blogspot.com/2019/05/gpu-zen-2-parallax-corrected-cached.html), and having a cached RSM is a trivial extension of that (albeit being borderline banned by this manifesto lol).

@whoisKomet

Another thing to consider is that rendering 6000 visible point lights isn't exactly trivial either, LPV at least decouples that and sampling cost is fixed.

Fair enough. I think the main concern I have with LPVs is light bleeding, which seems to be sufficiently addressed in the original paper but may still appear when used in scenes with the geometric complexity of current generation titles (which in itself might already be problematic anyway lmao). Rendering directly with the VPLs in theory allows shadow maps or ISMs to brute force through the visibility problem, but even then ISMs aren't trivial and shadow maps aren't much better. At that point it would be smarter to use the VPLs generated by the RSM anyway, so I see your point.

Still, I wonder why LPVs aren't mentioned much at all anymore. They debuted with Crysis 2 IIRC, were added to UE4 for a while, and then suddenly left the conversation altogether. I would imagine that something equally performant (and temporally stable) has superseded them, but it clearly hasn't made any headlines yet.

@bazhenovc (Author) commented Aug 8, 2025

LPV is inherently low-frequency and cannot do high-frequency details, I'm personally fine with that but a lot of my peers disagree with me.

Light leaking is also a problem, and it cannot do indirect specular.
