Lightcuts in Blender: preliminary tests

This weekend I’ve been running some preliminary experiments to get a feel for what a lightcuts implementation in Blender could deliver.

In brief, the lightcuts algorithm is a way to render huge numbers of point lights quickly, by adaptively selecting a different subset of the available lights at each pixel according to a perceptual error metric. This means it’s possible to get an idea of how the results would look without actually implementing it, if you are willing to put up with huge render times.
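
To make the idea concrete, here is a minimal sketch of the per-pixel cut selection, assuming a binary light tree has already been built. The names, the Lambertian-only shading and the crude error bound are mine for illustration, not part of any existing Blender code or of a finished implementation.

```python
import heapq
import math

class LightNode:
    def __init__(self, intensity, representative, children=None):
        self.intensity = intensity            # summed intensity of the cluster
        self.representative = representative  # position of one representative light
        self.children = children or []        # empty for individual lights

def estimate(node, point, normal):
    """Shade with the cluster's representative light (Lambertian only)."""
    dx = node.representative[0] - point[0]
    dy = node.representative[1] - point[1]
    dz = node.representative[2] - point[2]
    dist2 = dx * dx + dy * dy + dz * dz
    dist = math.sqrt(dist2)
    cos_t = max(0.0, (normal[0] * dx + normal[1] * dy + normal[2] * dz) / dist)
    return node.intensity * cos_t / dist2

def error_bound(node):
    """Crude upper bound on the cluster's possible contribution."""
    return node.intensity

def lightcut(root, point, normal, rel_error=0.02):
    def bound(node):
        # Individual lights are evaluated exactly, so they carry no error.
        return error_bound(node) if node.children else 0.0

    heap = [(-bound(root), id(root), root)]
    total = estimate(root, point, normal)
    # Refine the cluster with the largest error bound until every bound
    # drops below ~2% of the current estimate (the perceptual threshold
    # used in the lightcuts paper).
    while heap and -heap[0][0] > rel_error * total:
        _, _, node = heapq.heappop(heap)
        total -= estimate(node, point, normal)
        for child in node.children:
            total += estimate(child, point, normal)
            heapq.heappush(heap, (-bound(child), id(child), child))
    return total

# Tiny usage example: two lights clustered under one root node.
a = LightNode(1.0, (2.0, 0.0, 1.0))
b = LightNode(0.5, (-1.0, 0.0, 2.0))
root = LightNode(1.5, (2.0, 0.0, 1.0), children=[a, b])
print(lightcut(root, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```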

The first batch of tests concerned environment lighting. It required converting light probes into collections of point lights: to do this, I wrote a quick Python script that performs a Hammersley sampling of a light probe and places a new directional light along each sample direction, with matching color and intensity (I was lucky that Blender.Image.getPixelHDR recently landed in svn) and the 1-sample “ray shadow” mode set.
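
For reference, the sampling side of that script boils down to something like the sketch below. The probe lookup and lamp creation are left as placeholder callbacks (probe_lookup, add_sun_lamp), since the exact Blender API calls aren’t the interesting part; the solid-angle scaling is just one reasonable way to keep the total energy roughly independent of the sample count.

```python
import math

def radical_inverse_base2(i):
    """Van der Corput radical inverse in base 2."""
    result, f = 0.0, 0.5
    while i:
        if i & 1:
            result += f
        i >>= 1
        f *= 0.5
    return result

def hammersley_sphere(n):
    """n Hammersley points mapped uniformly onto the unit sphere."""
    for i in range(n):
        u = i / float(n)
        v = radical_inverse_base2(i)
        z = 1.0 - 2.0 * u                    # uniform in [-1, 1]
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * v
        yield (r * math.cos(phi), r * math.sin(phi), z)

def probe_to_lights(n_samples, probe_lookup, add_sun_lamp):
    solid_angle = 4.0 * math.pi / n_samples
    for direction in hammersley_sphere(n_samples):
        color = probe_lookup(direction)      # HDR radiance along this direction
        # Scale by the solid angle each sample represents so the overall
        # brightness doesn't change with the number of samples.
        energy = [c * solid_angle for c in color]
        add_sun_lamp(direction, energy)      # directional lamp, 1-sample ray shadow
```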

As a result, you should get something very similar to an AO pass with sky texture. Actually, it should be exactly the same calculation (bar any “distance” tweaks), but instead of sampling the environment using a different jitter table per pixel, you use the same sampling scheme over the whole scene. That of course defeats the very purpose of jittering, which is trading banding for noise, but please remember that lightcuts is designed to deal with very large collections of lights.

Environment lighting comparison

So far, I admit, it’s no big deal. Granted, with actual lights you also get specular highlights, as in the pictures, and AO shows a bit of its bias problem. But as far as environment lighting is concerned, even after a successful implementation is completed, the best you can get is still pretty similar to what you already have.

Things get more interesting if we add indirect lighting to the mix. Lightcuts is claimed to interact well with Instant Radiosity schemes: that is, you place a large number of small lights where the primary light hits the scene, in order to simulate the first bounce of indirect lighting. To accomplish this, I first rendered some color-coded maps from the light’s point of view: positions, normals, and color. (By the way, this caused me some headaches because the linear blending texture saturates at 0 and 1, which is a pity in the float-buffer era.)

Color maps used for indirect lighting

Afterwards, I wrote another script that, given those textures as input, places a number of 180° spot lights into the scene, with color and intensity matching the hit surfaces’ color and their orientation with respect to the light.
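
The core of that script looks roughly like the sketch below. The map reading and lamp creation are placeholder callbacks (read_pixel, add_spot_lamp) since they depend on the Blender API in use, and the [0, 1] remapping reflects the fact that positions and normals were baked into colour channels; none of these names come from Blender itself.

```python
import math

def decode_position(rgb, bounds_min, bounds_max):
    """Remap a [0, 1] colour back to a scene-space position."""
    return [lo + c * (hi - lo) for c, lo, hi in zip(rgb, bounds_min, bounds_max)]

def decode_normal(rgb):
    """Remap a [0, 1] colour back to a unit normal."""
    n = [2.0 * c - 1.0 for c in rgb]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return [c / length for c in n]

def place_indirect_lights(width, height, read_pixel, add_spot_lamp,
                          light_pos, light_color, bounds_min, bounds_max,
                          step=8):
    """Walk the baked maps and spawn one virtual point light per sample."""
    placed = 0
    for y in range(0, height, step):
        for x in range(0, width, step):
            pos = decode_position(read_pixel('position', x, y),
                                  bounds_min, bounds_max)
            nrm = decode_normal(read_pixel('normal', x, y))
            albedo = read_pixel('color', x, y)
            # Cosine term of the primary light at this hit point.
            to_light = [a - b for a, b in zip(light_pos, pos)]
            dist = math.sqrt(sum(c * c for c in to_light)) or 1.0
            cos_t = max(0.0, sum(n * d for n, d in zip(nrm, to_light)) / dist)
            # The new lamp re-emits the light it receives, tinted by the
            # surface colour, over a 180-degree cone facing along the normal.
            energy = [lc * a * cos_t for lc, a in zip(light_color, albedo)]
            add_spot_lamp(pos, nrm, energy, cone_angle=180.0)
            placed += 1
    return placed
```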

Indirect lighting tests

In these test scenes the effect of indirect lighting is exaggerated a bit to make it easier to see, but you get the idea. It is important to stress that Instant Radiosity is an approximate technique with shortcomings of its own that would need to be properly addressed in a serious implementation. Still, it looks promising to me.

Significant savings in render times should come when you employ all those lighting sources together: environment lighting, area lights and indirect lighting. This is because lightcuts ultimately just deals with a collection of point lights, without caring about their original purpose. My hope is that this would let artists set up complex lighting configurations with several area lights without worrying too much about render times.
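
To make that unification concrete, here is a naive bottom-up construction of the light tree assumed in the cut-selection sketch earlier: the leaves can be environment samples, area-light samples or indirect virtual lights alike. The O(n³) nearest-pair merging is only for illustration; real implementations use much better clustering heuristics.

```python
def build_light_tree(leaves):
    """Greedy bottom-up clustering into the LightNode tree used above."""
    nodes = list(leaves)
    while len(nodes) > 1:
        # Find the two clusters whose representatives are closest together.
        best = None
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                d = sum((a - b) ** 2 for a, b in
                        zip(nodes[i].representative, nodes[j].representative))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        a, b = nodes[i], nodes[j]
        nodes.pop(j)
        nodes.pop(i)
        # The brighter light of the pair is kept as the representative.
        rep = a.representative if a.intensity >= b.intensity else b.representative
        nodes.append(LightNode(a.intensity + b.intensity, rep, children=[a, b]))
    return nodes[0]
```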

As a closing note: top blenderheads, like @ndy, are known to use lots of lights in their renderings; nonetheless, I guess even they would be shocked to see something like this:

Thousands of lights

Of course, in an actual implementation, end users wouldn’t be able to see all these individual lights in the 3D view… fortunately!