Friday, September 09, 2005

DIM Shadows

Jeff implemented my idea of using two Diffuse Irradiance Maps (DIM) on objects, one for lit areas and one for shadows, and it totally owns!

First, a bit about diffuse irradiance maps. A DIM is a type of environment map that stores precomputed diffuse lighting derived from a reflection map (also referred to as a light probe in this case). Every pixel in the light probe is treated as a separate directional light source pointing at the center, which lets arbitrarily complex lighting conditions be defined by, say, a panoramic photograph like the ones Paul Debevec makes. The catch is that every point on the surface of a model would need to consider every pixel in the light probe as a light source, or at least every pixel within the hemisphere about the surface normal, which is nowhere near feasible in real time (at least not by brute force). Instead, a DIM is calculated through a process of diffuse convolution: for each normal N, the lighting contributions from all pixels in the light probe (really just the ones in the hemisphere around N) are summed up and stored in a new environment map, the DIM. Any point on the surface of a model can then look up the sum of all the light sources on the fly using its normal.
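Here's a minimal sketch of that diffuse convolution, in Python rather than whatever HDRShop actually does internally. It assumes the probe has already been unpacked into (direction, radiance, solid angle) entries; that representation and the function name are mine, not HDRShop's:

```python
def convolve_diffuse(probe, normal):
    """One DIM texel: sum cosine-weighted radiance from every probe
    texel in the hemisphere around `normal` (Lambertian diffuse).
    `probe` is a list of (direction, rgb_radiance, solid_angle)."""
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for direction, radiance, solid_angle in probe:
        # Cosine term: texels behind the surface contribute nothing.
        cos_theta = sum(d * n for d, n in zip(direction, normal))
        if cos_theta <= 0.0:
            continue
        w = cos_theta * solid_angle
        for i in range(3):
            total[i] += radiance[i] * w
        weight_sum += w
    # Normalize so a constant-radiance probe returns that radiance.
    return [c / weight_sum for c in total] if weight_sum else total
```

Building the full DIM just means running this once per output texel, with `normal` taken from the texel's direction — which is exactly why the cost blows up with resolution.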

So Josh, Mark, and I went out into a prairie, took a fairly high-resolution HDR panorama and generated two DIMs from it using Debevec's HDRShop.

Light probe of a prairie


Sunlight DIM of a prairie


Shadow DIM of a prairie

The light probe had to be scaled down quite a bit because diffuse convolution is O(N^2). HDRShop will tell you just how ridiculously long a full-res convolution will take before it starts (and ten thousand seconds is a lot longer than it sounds). Our 12.6-megapixel images would have taken about 229 million seconds, or roughly 7.25 years. We ended up generating 360 by 180 pixel DIMs, which took about half an hour to compute.
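The numbers are easy to sanity-check. Treating HDRShop's full-resolution estimate as fixing a cost constant and naively scaling by pixel count squared lands in the same ballpark as the half hour we saw (the constant `k` and the use of raw pixel counts are my simplification, not HDRShop's actual cost model):

```python
# Naive O(N^2) cost model fitted to HDRShop's own full-res estimate.
full_res_pixels = 12.6e6
full_res_seconds = 229e6                      # HDRShop's estimate
k = full_res_seconds / full_res_pixels ** 2   # seconds per pixel^2

years = full_res_seconds / (365.25 * 24 * 3600)   # ~7.26 years

dim_pixels = 360 * 180
dim_minutes = k * dim_pixels ** 2 / 60   # ~100 min under this naive scaling
```

The naive model overshoots the observed half hour by a factor of a few, which is about what you'd expect when input and output resolutions don't scale together, but the headline point survives: full resolution is years, thumbnail resolution is lunch.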

The sunlight DIM was computed from the original light probe normally. The shadow DIM was computed from a light probe where the sun was edited out in Photoshop. This simulates an object blocking out all the light emanating from the pixels that define the sun in the reflection map, leaving only the lighting the object would receive from ambient sources like the sky. Finally, in the program itself, we compute shadows with shadow volumes or shadow mapping based on a rough estimate of the sun direction in the light probe. Fragments that end up in shadow get drawn with the shadow DIM, fragments in the light with the sunlight DIM.
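The per-fragment logic boils down to a branch on the shadow test. Sketched in Python rather than the actual shader code, with the DIMs as lookup callables from normal to RGB irradiance; the function and parameter names are made up for illustration:

```python
def shade_fragment(normal, in_shadow, sunlit_dim, shadow_dim, albedo):
    """Pick the irradiance map based on the shadow test, then
    modulate by the surface albedo. `sunlit_dim` and `shadow_dim`
    map a surface normal to RGB irradiance (e.g. DIM texture lookups)."""
    irradiance = shadow_dim(normal) if in_shadow else sunlit_dim(normal)
    return [a * e for a, e in zip(albedo, irradiance)]
```

In a real shader both lookups would be cube-map (or sphere-map) texture fetches indexed by the normal, and the `in_shadow` flag would come from the shadow-map or shadow-volume test.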

The results are amazing. Here are some screenshots of Jeff's SpeedTree shadow-mapping demo:


1 Comment:

Anonymous said...

You might want to look into HDRIE. It's a tool I worked on back in my undergrad days that does a lot of things similar to HDRShop, but throws in some extras.

Main reason I mention it is because it does the whole spherical harmonics approach to approximating the diffuse convolution, so it will get a very nearly equivalent result and run many times faster.

http://www.acm.uiuc.edu/siggraph/eoh_projects/eoh2002.html

10:30 AM  
