I’ve written here earlier about Epic Games’ Unreal Engine 5, which incorporates their new “Nanite” rendering pipeline, whose goal is to render arbitrarily complicated scenes in essentially constant time without forcing model creators to produce explicit versions of each object at multiple levels of detail. If you haven’t seen Nanite yet, watch the demonstrations, done by independent developers, in that post. The following video tries to push Nanite to its limit, and the developer gave up only after creating a scene of a million objects comprising 120 billion triangle faces.
How do they do it? The fundamental trick that makes much of computer graphics possible is to recognise that regardless of how complex the model may be, what you ultimately have to produce are the colour components of the pixels on the screen, and that number is fixed by the hardware configuration and doesn’t depend on the model. For example, a “4K” screen has around 8.3 million pixels, so that’s all you ultimately need to compute for each frame. The key trick is to project each pixel into the model and compute only how the objects it intersects affect its colour. These will almost always be a tiny fraction of the whole model. This process was called “quick reject” when I was learning this stuff, but now seems to be called “culling”. Whatever you call it, doing it right accounts for about 90% of what you need to do graphics quickly.
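To make the idea concrete, here is a minimal sketch in Python of the quick-reject style of test, using ray–sphere intersection as a stand-in for real scene geometry. This is my own illustration, not Nanite’s actual algorithm (Nanite works on hierarchical clusters of triangles, not spheres), and all the names are hypothetical: before paying for the full intersection test, a cheap check discards spheres that cannot possibly be hit by the ray through a given pixel.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t >= 0, direction
    assumed unit-length) intersects the sphere.

    Illustrative quick reject: a sphere whose centre lies behind the ray
    origin, and which doesn't contain the origin, can never be hit, so
    we skip the more expensive closest-approach test for it.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Vector from ray origin to sphere centre.
    lx, ly, lz = cx - ox, cy - oy, cz - oz
    dist2 = lx * lx + ly * ly + lz * lz          # squared distance to centre
    t_closest = lx * dx + ly * dy + lz * dz      # projection onto the ray
    if t_closest < 0 and dist2 > radius * radius:
        return False  # quick reject: sphere is entirely behind the ray
    # Full test: squared distance from the centre to the ray's closest point.
    d2 = dist2 - t_closest * t_closest
    return d2 <= radius * radius

# Per-pixel culling in miniature: each pixel's ray only "sees" the
# (usually tiny) subset of objects it actually intersects.
spheres = [((0.0, 0.0, 5.0), 1.0), ((50.0, 0.0, 5.0), 1.0)]
ray = ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
visible = [s for s in spheres if ray_hits_sphere(ray[0], ray[1], *s)]
```

In a real renderer the rejection test runs against bounding volumes in a spatial hierarchy rather than against every object individually, which is what keeps the per-pixel cost roughly independent of total scene size.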
The following hour-long video, hosted by one of the developers of Nanite, explains how it delivers such stunning performance. There is no one central trick but rather a collection of complicated and difficult tricks, optimised around the capabilities of contemporary graphics processing units (GPUs) and central processing units (CPUs), assigning to each the work it does most efficiently.