Tuesday, May 8, 2012

Subsurface Scattering Update

I just finished rendering a very large, high-quality image with Monte Carlo subsurface scattering. This model, Lucy, shows off the effects of subsurface scattering better than the bunny because it has much more variation in shape and thickness. For this render I went for the look of an exotic blue stone: I made the material moderately backward scattering, and gave it a pretty high index of refraction:

Monte Carlo subsurface scattering. Click to enlarge (actual size is 800x1200).
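As a side note on what "moderately backward scattering" means: the usual way to express this is a phase function with a negative asymmetry parameter g, the most common choice being Henyey-Greenstein. Here's a minimal sketch of HG sampling, for illustration only (simplified, not lifted verbatim from my renderer):

```cpp
#include <cmath>
#include <random>

// Minimal sketch of importance sampling the Henyey-Greenstein phase function.
// cosTheta is measured relative to the incoming propagation direction, so a
// negative asymmetry parameter g gives predominantly backward scattering.
// The azimuthal angle is sampled uniformly in [0, 2*pi) as usual.
double sampleHenyeyGreensteinCosTheta(double g, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double u = uniform(rng);
    if (std::abs(g) < 1e-3) {
        // Nearly isotropic: sample cos(theta) uniformly in [-1, 1].
        return 1.0 - 2.0 * u;
    }
    double sq = (1.0 - g * g) / (1.0 - g + 2.0 * g * u);
    return (1.0 + g * g - sq * sq) / (2.0 * g);
}
```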

Now, I'm using this Monte Carlo render as a reference for my approximate multiple scattering implementation. I've been testing that implementation, cross-referencing the math with sources beyond the Jensen 2002 paper (including the PBR book, other Jensen papers, and the Quantized Diffusion paper), and adding new features. I've corrected a few issues and handled a few corner cases, and now, after a lot of iterations (which I may post some images of later), it's working very well overall. Here's the current state of my hierarchical point-cloud, dipole-diffusion-based multiple scattering:

Dipole diffusion approximation to multiple scattering.

As you can see, it's pretty similar to the Monte Carlo reference. Some differences are expected because it's an approximation, and because it assumes a semi-infinite, homogeneous slab of material, which is clearly not the case in this render (the bunny is a little better in this regard). I do question whether the hue difference is larger than it should be; however, I believe this might be primarily due to the shortcomings of photon diffusion theory, which I'll elaborate on later in the post.
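For context, the core quantity I'm evaluating at each point-cloud sample is the dipole diffuse reflectance Rd(r) from Jensen et al. 2001 (which the 2002 hierarchical paper builds on). Here's a condensed single-channel sketch using the classical diffusion constants (my implementation now deviates from this in the ways described below):

```cpp
#include <cmath>

// Condensed sketch of the classic dipole diffuse reflectance Rd(r) from
// Jensen et al. 2001, for a single channel. r is the distance between the
// illumination (sample) point and the point being shaded.
//   sigmaA  - absorption coefficient
//   sigmaSp - reduced scattering coefficient, sigma_s * (1 - g)
//   eta     - relative index of refraction
double dipoleRd(double r, double sigmaA, double sigmaSp, double eta)
{
    const double PI = 3.14159265358979323846;

    // Diffuse Fresnel reflectance (the rational fit used by Jensen et al.)
    // and the resulting internal-reflection boundary term A.
    double Fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
    double A = (1.0 + Fdr) / (1.0 - Fdr);

    double sigmaTp = sigmaA + sigmaSp;                  // reduced extinction
    double alphaP  = sigmaSp / sigmaTp;                 // reduced albedo
    double sigmaTr = std::sqrt(3.0 * sigmaA * sigmaTp); // effective transport coefficient

    double zr = 1.0 / sigmaTp;                // depth of the real source
    double zv = zr * (1.0 + 4.0 / 3.0 * A);   // height of the virtual source

    double dr = std::sqrt(r * r + zr * zr);   // distance to the real source
    double dv = std::sqrt(r * r + zv * zv);   // distance to the virtual source

    double realTerm    = zr * (sigmaTr * dr + 1.0) * std::exp(-sigmaTr * dr) / (dr * dr * dr);
    double virtualTerm = zv * (sigmaTr * dv + 1.0) * std::exp(-sigmaTr * dv) / (dv * dv * dv);

    return alphaP / (4.0 * PI) * (realTerm + virtualTerm);
}
```

In the hierarchical evaluation, each point-cloud sample contributes roughly Rd(distance) times its irradiance times its area, with distant samples aggregated into clusters.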

One notable source of inaccuracy in the Jensen 2002 paper (and many other sources) is that they use classical diffusion and derive certain equations ad hoc, without maximum physical accuracy. I'm currently working on mitigating some of these inaccuracies by implementing a few of the state-of-the-art improvements presented in the Quantized Diffusion paper, some of which were inspired by decades-old work in the field of neutron transport. (As the authors mention in their video, and as I've observed and read about myself, cross-pollination between fields is very important for innovation and creativity; if we pay attention to other fields, we won't have to spend as much time reinventing the wheel.) I've already implemented Grosjean's approximation for diffusion, as well as the improved boundary condition (A, which accounts for the change in fluence rate due to internal reflectance) described in the paper.
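Concretely, the switch from classical diffusion to Grosjean's approximation is small in code terms: it swaps the diffusion coefficient (and therefore the effective transport coefficient) for a version that stays accurate at lower albedos. Here's a sketch of the difference as I understand it from the Quantized Diffusion paper and related work (I'm not reproducing the improved A term here):

```cpp
#include <cmath>

// Classical diffusion coefficient (what Jensen et al. use implicitly):
//   D = 1 / (3 * sigmaTp)
double classicalD(double sigmaA, double sigmaSp)
{
    return 1.0 / (3.0 * (sigmaA + sigmaSp));
}

// Grosjean's approximation to the diffusion coefficient, which remains
// accurate at lower albedos:
//   D = (2*sigmaA + sigmaSp) / (3 * (sigmaA + sigmaSp)^2)
double grosjeanD(double sigmaA, double sigmaSp)
{
    double sigmaTp = sigmaA + sigmaSp;
    return (2.0 * sigmaA + sigmaSp) / (3.0 * sigmaTp * sigmaTp);
}

// In both cases the effective transport coefficient is
//   sigmaTr = sqrt(sigmaA / D)
// which reduces to the familiar sqrt(3 * sigmaA * sigmaTp) in the classical case.
double effectiveTransportCoefficient(double sigmaA, double D)
{
    return std::sqrt(sigmaA / D);
}
```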

I also implemented the diffuse reflectance part of the BRDF approximation from the Jensen 2001 paper. When computing irradiance during the pre-pass, I use the BRDF approximation when a ray hits a subsurface scattering object (because the point cloud has not been generated yet; it's actively in the process of being generated). This makes the irradiance much more accurate. Here's an image rendered using the BRDF approximation instead of the diffusion approximation:

The diffuse reflectance part of the BRDF approximation (no subsurface scattering in this image). Notice that it's totally opaque. This is how the object looks to rays during the irradiance computation stage of the pre-pass.
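For reference, the diffuse reflectance part of that BRDF approximation is a simple closed-form function of the reduced albedo and the internal-reflection parameter A. A single-channel sketch:

```cpp
#include <cmath>

// Sketch of the diffuse reflectance term of the BRDF approximation from
// Jensen et al. 2001, for a single channel.
//   alphaP - reduced albedo, sigmaSp / (sigmaSp + sigmaA)
//   A      - internal-reflection boundary term, (1 + Fdr) / (1 - Fdr)
double brdfDiffuseReflectance(double alphaP, double A)
{
    double s = std::sqrt(3.0 * (1.0 - alphaP));
    return 0.5 * alphaP * (1.0 + std::exp(-4.0 / 3.0 * A * s)) * std::exp(-s);
}
```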

An important piece that I haven't added yet is the single scattering term. I did run a test using my Monte Carlo subsurface scattering with limited depth, and it seems that single scattering won't have much effect overall in this image. However, it will definitely improve the appearance of the very thin areas, such as the hand, the edges of the dress, and the bottom of the torch, which are dominated by single scattering, and which seem to be the most inaccurate areas in the diffusion render. Unlike the original formulation, Grosjean's approximation correctly decouples multiple and single scattering. I'll also add the single scattering term to the BRDF approximation.

While most of the differences in the images are expected, the hue discrepancy bothers me a little. However, it's likely that the hue discrepancy is to be expected simply because of the limitations of the diffusion approximation, in particular the fact that diffusion theory assumes that the scattering coefficient is much higher than the absorption coefficient, i.e. that the albedo is relatively high (see the first paragraph of the introduction here, and see here). This would imply that the blue channel is the most accurate, which seems to be the case when I compare the individual channels in Photoshop. The reason the hue shifts so much is likely that the R, G, and B wavelengths used in the image have very different absorption coefficients, so they are affected by the inaccuracies in the diffusion approximation to different degrees. From my testing so far, I don't think there are any bugs in my diffusion code, and I'm confident that my Monte Carlo subsurface scattering is working correctly (the Monte Carlo version is also more intuitive and easier to check just by looking at the code), but I'll want to look into both of these things more deeply just in case. (As an aside, it's very useful to implement things multiple ways and then be able to compare them and make sure the results are the same. I've done this with other things in Photorealizer, too. One example that comes to mind is direct vs. passive light sampling.)
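To make the albedo point concrete, here's the kind of per-channel sanity check I mean, with made-up coefficients purely for illustration (not the values used in this render): the channel with the highest reduced albedo is the one diffusion theory handles best.

```cpp
#include <cstdio>

int main()
{
    // Illustrative (made-up) per-channel coefficients, not the render's values.
    // Diffusion theory is most accurate when sigmaSp >> sigmaA, i.e. when the
    // reduced albedo alphaP = sigmaSp / (sigmaSp + sigmaA) is close to 1.
    const char *names[3]   = { "R", "G", "B" };
    double      sigmaA[3]  = { 0.80, 0.40, 0.05 };  // absorption
    double      sigmaSp[3] = { 2.0,  2.2,  2.5  };  // reduced scattering

    for (int c = 0; c < 3; ++c) {
        double alphaP = sigmaSp[c] / (sigmaSp[c] + sigmaA[c]);
        std::printf("%s: reduced albedo = %.3f\n", names[c], alphaP);
    }
    return 0;
}
```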

Just yesterday, I solved a significant issue related to the diffuse Fresnel reflectance term Fdr and the corresponding transmittance term Fdt, which had resulted from problems in my sources. I'll elaborate on this more in my next post.
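For context, these terms are commonly computed with a rational fit in eta. Here's the fit as it appears in Jensen et al. 2001 (I'll get into what was wrong in my sources in the next post):

```cpp
// The rational fit for the diffuse (hemispherically averaged) Fresnel
// reflectance Fdr(eta) used by Jensen et al. 2001, valid for eta > 1.
// eta is the relative index of refraction.
double diffuseFresnelReflectance(double eta)
{
    return -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta;
}

// The corresponding diffuse Fresnel transmittance.
double diffuseFresnelTransmittance(double eta)
{
    return 1.0 - diffuseFresnelReflectance(eta);
}
```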

On another Fresnel-related note, I spent some time trying to figure out how many Fresnel terms to include in the final irradiance. Some papers and sources include two Fresnel transmittance terms, one for the incoming light and one for the outgoing light; the Quantized Diffusion paper includes two but then also adds a new normalization term to correct past papers; and one of the papers seems to include only one Fresnel term. Originally, including two Fresnel terms didn't make complete sense to me. Here's a thought experiment to see why: Imagine you have a glass of milk (or just water) that doesn't absorb any light at all. Then all of the light that goes into the milk (around 93% on average, according to the Fresnel equations) would eventually come back out. On average, around 75% of the light that went in would come back out on the first internal bounce, then 75% of the remaining 25% on the second internal bounce, and so on. Since there would be no absorption, what else could the light do other than eventually come back out? (Unless it got trapped in there somehow!) This would imply that the Fresnel transmittance only applies once, not twice! However, after reviewing some of the formulas, it looks like the boundary condition A in the diffusion approximation takes internal reflectance into account, so it seems that including two Fresnel terms might make sense after all; however, I haven't examined it closely enough to be certain about that yet.
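Here's the thought experiment as arithmetic, using the same rough numbers as above: the internal bounces form a geometric series, so everything that enters eventually escapes.

```cpp
#include <cstdio>

int main()
{
    // Rough averages from the thought experiment above, not exact values:
    // ~93% of incident light is transmitted into the non-absorbing medium,
    // and ~75% of the light hitting the boundary from inside escapes per bounce.
    double enteringFraction = 0.93;
    double internalEscape   = 0.75;

    double inside  = enteringFraction;  // light still bouncing around inside
    double escaped = 0.0;               // light that has come back out

    for (int bounce = 0; bounce < 100; ++bounce) {
        escaped += inside * internalEscape;
        inside  *= (1.0 - internalEscape);
    }

    // With no absorption, escaped converges to the entering fraction (0.93),
    // i.e. everything that went in eventually comes back out.
    std::printf("escaped = %.6f of the incident light\n", escaped);
    return 0;
}
```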

While the hierarchy traversal and diffusion approximation evaluation are already quite fast algorithmically, there's still room for performance improvement, because I haven't optimized the code much yet (doing so earlier would have been premature optimization). The slowest part, though, is the irradiance computation stage of the pre-pass, which I've been performing using path tracing (with direct illumination and all of the other features of my path tracing core). It would be very fast with direct illumination only and no global illumination (which is all that some renderers do), but also less accurate. Also, the irradiance computation in the pre-pass is not multi-threaded, which makes it take even longer in wall-clock terms.
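For what it's worth, multi-threading that stage should be straightforward, since each point's irradiance is computed independently of the others. A rough sketch of the shape it could take (the types and names here are placeholders, not my actual code):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder types and names for illustration only (not the renderer's
// actual API). Each point-cloud sample's irradiance is independent of the
// others, so the pre-pass can be parallelized over samples.
struct PointSample { double irradiance = 0.0; /* position, normal, area, ... */ };

void computeIrradiance(PointSample &s)
{
    // Stand-in for the real work: path tracing the irradiance at this sample.
    s.irradiance = 1.0;
}

void computeIrradiancesParallel(std::vector<PointSample> &samples)
{
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> threads;

    for (unsigned t = 0; t < numThreads; ++t) {
        threads.emplace_back([&samples, t, numThreads]() {
            // Interleave samples across threads; each thread should also use
            // its own random number generator stream.
            for (std::size_t i = t; i < samples.size(); i += numThreads) {
                computeIrradiance(samples[i]);
            }
        });
    }
    for (std::thread &th : threads) {
        th.join();
    }
}
```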

By the way, I made it so that I can generate separate point clouds per object (or not, if I choose). A point cloud can be associated with any container object in my scene graph, and it is generated using all of the descendants of that node. I also made it work with instancing and transformations. This is a more flexible approach than many renderers take.
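Roughly, the idea is that a point cloud hangs off a container node instead of being one global structure. A simplified sketch with placeholder names (not my actual class layout):

```cpp
#include <memory>
#include <vector>

// Simplified sketch with placeholder names (not the renderer's actual classes).
// Any container node in the scene graph can own a point cloud; the cloud is
// built from all of that node's descendants with their transforms applied,
// which is what makes instancing and transformations work.
struct Matrix4 { double m[16]; };         // placeholder transform type
struct PointCloud { /* hierarchical irradiance samples */ };

struct SceneNode {
    Matrix4 localTransform;
    std::vector<std::shared_ptr<SceneNode>> children;
    std::shared_ptr<PointCloud> pointCloud;  // optional; null for most nodes
};
```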

Sometime soon, I plan on adding wavelength-dependent scattering coefficients and implementing some other useful features as well.

I also wanted to mention that the dipole diffusion approximation assumes isotropic scattering, so it uses the reduced scattering coefficients. Because of this, a diffusion image would technically be more comparable to a reduced and isotropic Monte Carlo reference. So I rendered a version of the Monte Carlo render at the top of the post using reduced scattering coefficients and isotropic scattering. In this case the difference is fairly subtle, but that wouldn't always be the case. Here's the image:

This image is the same as the Monte Carlo subsurface scattering image at the top of the post, except this time using reduced scattering coefficients and isotropic scattering.
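For anyone unfamiliar with the reduction, it's just the similarity relation: keep absorption the same, set g to zero, and scale the scattering coefficient down accordingly.

```cpp
// Similarity relation used for the "reduced and isotropic" reference render:
// replace anisotropic scattering (asymmetry parameter g) with isotropic
// scattering and a reduced scattering coefficient; absorption is unchanged.
double reducedScatteringCoefficient(double sigmaS, double g)
{
    return sigmaS * (1.0 - g);
}
```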

And finally, here's an earlier render with forward scattering and a lower index of refraction, giving it a more organic look:

Earlier Monte Carlo render with scattering properties that make it look a little more organic.
