Blackbody Rendering

Posted: December 29th, 2010

In between bouts of festive over-eating I added support for blackbody emission to my fluid simulator and thought I'd describe what was involved.

Briefly, a blackbody is an idealised substance that gives off light when heated. Planck's formula describes the emitted spectral radiance per unit wavelength, with units W·sr⁻¹·m⁻²·m⁻¹, for a given temperature in kelvin.
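For reference, Planck's formula for the spectral radiance of a blackbody at temperature \(T\) is:

\[B_{\lambda}(\lambda, T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k_B T)} - 1}\]

where \(h\) is Planck's constant, \(c\) is the speed of light and \(k_B\) is Boltzmann's constant.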

Radiance has units W·sr⁻¹·m⁻², so we need a way to convert the wavelength-dependent power distribution given by Planck's formula to a radiance value in RGB that we can use in our shader / ray-tracer.

The typical way to do this is as follows:

  1. Integrate Planck's formula against the CIE XYZ colour matching functions (available as part of PBRT in 1nm increments)
  2. Convert from XYZ to linear sRGB (do not perform gamma correction yet)
  3. Render as normal
  4. Perform tone-mapping / gamma correction

We are throwing away spectral information by projecting into XYZ, but a quick dimensional analysis shows that we at least end up with the correct units (because the integration is with respect to wavelength measured in metres, the extra m⁻¹ is removed).
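As a sketch of steps 1 and 2, the integration and conversion might look something like the snippet below. The CIE table names here are placeholders for whatever data you have available (assuming the 1nm tables from 360nm to 830nm that ship with PBRT), and the matrix is the standard XYZ to linear sRGB (D65) transform:

#include <cmath>

// hypothetical tables holding the CIE 1931 colour matching functions
// sampled at 1nm intervals from 360nm to 830nm (471 entries), e.g.
// the data that ships with PBRT
extern const float CIE_X[471];
extern const float CIE_Y[471];
extern const float CIE_Z[471];

// Planck's formula: spectral radiance in W·sr^-1·m^-2·m^-1 for a
// wavelength lambda in metres and temperature t in kelvin
double PlancksLaw(double lambda, double t)
{
    const double h = 6.62607e-34; // Planck's constant
    const double c = 2.99792e8;   // speed of light
    const double k = 1.38065e-23; // Boltzmann's constant

    return (2.0*h*c*c) / (std::pow(lambda, 5.0)*(std::exp(h*c/(lambda*k*t)) - 1.0));
}

// integrate Planck's formula against the matching functions (step 1)
// and convert XYZ to linear sRGB without gamma (step 2)
void BlackbodyToRGB(double t, double rgb[3])
{
    double x = 0.0, y = 0.0, z = 0.0;
    const double dlambda = 1.0e-9; // 1nm step, in metres

    for (int i = 0; i < 471; ++i)
    {
        double lambda = (360.0 + i)*1.0e-9;
        double b = PlancksLaw(lambda, t);

        x += b*CIE_X[i]*dlambda;
        y += b*CIE_Y[i]*dlambda;
        z += b*CIE_Z[i]*dlambda;
    }

    // standard XYZ -> linear sRGB (D65) matrix
    rgb[0] =  3.2406*x - 1.5372*y - 0.4986*z;
    rgb[1] = -0.9689*x + 1.8758*y + 0.0415*z;
    rgb[2] =  0.0557*x - 0.2040*y + 1.0570*z;
}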

I was going to write more about the colour conversion process, but I didn't want to add to the confusion out there by accidentally misusing terminology. Instead, here are a couple of papers describing the conversion from Spectrum->RGB and RGB->Spectrum; questions about these come up all the time on various forums and I think these two papers do a good job of providing background and clarifying the process:

And some more general colour space links:

Here is a small sample of linear sRGB radiance values for different blackbody temperatures:

1000K: 1.81e-02, 1.56e-04, 1.56e-04
2000K: 1.71e+03, 4.39e+02, 4.39e+02
4000K: 5.23e+05, 3.42e+05, 3.42e+05
8000K: 9.22e+06, 9.65e+06, 9.65e+06

It's clear from the range of values that we need some sort of exposure control and tone-mapping. I simply picked a temperature near the upper end of my range (around 3000K) and scaled intensities relative to it, before applying Reinhard tone mapping and gamma correction. You can also perform more advanced mapping that takes into account the adaptation of the human visual system, as described in Physically Based Modeling and Animation of Fire.
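For illustration, a minimal sketch of that mapping; the exposure constant here is hypothetical and would be chosen so your reference temperature lands in a sensible range:

#include <cmath>

// map linear sRGB radiance to a displayable value: exposure scale,
// basic Reinhard curve, then gamma; the exposure constant is a
// hypothetical value picked so a ~3000K emitter lands near 1.0
void ToneMap(const double rgbIn[3], double rgbOut[3])
{
    const double exposure = 1.0e-5; // hypothetical, scene dependent
    const double invGamma = 1.0/2.2;

    for (int i = 0; i < 3; ++i)
    {
        double c = rgbIn[i]*exposure;      // exposure control
        c = c/(1.0 + c);                   // Reinhard tone mapping
        rgbOut[i] = std::pow(c, invGamma); // gamma correction
    }
}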

Again, the hardest part was setting up the simulation parameters to get the look you want; here's one I spent at least 4 days tweaking:

Simulation time is ~30s a frame (10 substeps) on a 128^3 grid tracking temperature, fuel, smoke and velocity. Most of that time is spent in the tri-cubic interpolation during advection; I've been meaning to try MacCormack advection to see if it's a net win.

There are some pretty obvious artifacts due to the tri-linear interpolation on the GPU, which would be helped by a higher resolution grid or by performing tri-cubic interpolation manually in the shader.

Inspired by Kevin Beason's work-in-progress videos, I put together a collection of my own failed tests, which I think are quite amusing:



Adventures in Fluid Simulation

Posted: November 1st, 2010

I have to admit to being simultaneously fascinated and slightly intimidated by the fluid simulation crowd. I've been watching the videos on Ron Fedkiw's page for years and am still in awe of his results, which sometimes seem little short of magic.

Recently I resolved to write my first fluid simulator and purchased a copy of Fluid Simulation for Computer Graphics by Robert Bridson.

Like a lot of developers, my first exposure to the subject was Jos Stam's Stable Fluids paper and his more accessible Fluid Dynamics for Games presentation. While the ideas are undeniably great, I never came away feeling like I truly understood the concepts or the mathematics behind them.

I'm happy to report that Bridson's book has helped change that. It includes a review of vector calculus in the appendix that is given in a wonderfully straightforward and concise manner; Bridson takes almost nothing for granted and gives lots of real-world examples, which helps with some of the less intuitive concepts.

I'm planning a bigger post on the subject but I thought I'd write a quick update with my progress so far.

I started out with a 2D simulation similar to Stam's demos. Having a 2D implementation that you're confident in is really useful when you want to quickly try out different techniques, and for sanity checking results when things go wrong in 3D (and they will).

Before you write the 3D sim though, you need a way of visualising the data. I spent quite a while on this and implemented a single-scattering model using brute force ray-marching on the GPU.

I did some tests with a procedural pyroclastic cloud model, which you can see below. This runs at around 25ms on my MacBook Pro (NVIDIA 320M), but you can dial the sample counts up and down to suit:

Here's a simplified GLSL snippet of the volume rendering shader. It's not at all optimised, apart from some branches to skip over empty space and an assumption that absorption varies linearly with density:

uniform sampler3D g_densityTex;
uniform vec3 g_lightPos;
uniform vec3 g_lightIntensity;
uniform vec3 g_eyePos;
uniform float g_absorption;

void main()
{
    // diagonal of the cube
    const float maxDist = sqrt(3.0);

    const int numSamples = 128;
    const float scale = maxDist/float(numSamples);

    const int numLightSamples = 32;
    const float lscale = maxDist / float(numLightSamples);

    // assume all coordinates are in texture space
    vec3 pos = gl_TexCoord[0].xyz;
    vec3 eyeDir = normalize(pos-g_eyePos)*scale;

    // transmittance
    float T = 1.0;
    // in-scattered radiance
    vec3 Lo = vec3(0.0);

    for (int i=0; i < numSamples; ++i)
    {
        // sample density
        float density = texture3D(g_densityTex, pos).x;

        // skip empty space
        if (density > 0.0)
        {
            // attenuate ray-throughput
            T *= 1.0-density*scale*g_absorption;
            if (T <= 0.01)
                break;

            // point light dir in texture space
            vec3 lightDir = normalize(g_lightPos-pos)*lscale;

            // sample light
            float Tl = 1.0; // transmittance along light ray
            vec3 lpos = pos + lightDir;

            for (int s=0; s < numLightSamples; ++s)
            {
                float ld = texture3D(g_densityTex, lpos).x;
                Tl *= 1.0-g_absorption*lscale*ld;

                if (Tl <= 0.01)
                    break;

                lpos += lightDir;
            }

            vec3 Li = g_lightIntensity*Tl;

            Lo += Li*T*density*scale;
        }

        pos += eyeDir;
    }

    gl_FragColor.xyz = Lo;
    gl_FragColor.w = 1.0-T;
}

I'm pretty sure there's a whole post on the ways this could be optimised, but I'll save that for next time. Also, this example shader doesn't have any wavelength-dependent variation; making your absorption coefficient different for each channel looks much more interesting, and having a different coefficient for your primary and shadow rays also helps. You can see this effect in the videos.

To create the cloud-like volume texture in OpenGL I use a displaced distance field like this (see the SIGGRAPH course for more details):

// create a volume texture with n^3 texels and base radius r
GLuint CreatePyroclasticVolume(int n, float r)
{
    GLuint texid;
    glGenTextures(1, &texid);

    GLenum target = GL_TEXTURE_3D;
    GLenum filter = GL_LINEAR;
    GLenum address = GL_CLAMP_TO_BORDER;

    glBindTexture(target, texid);

    glTexParameteri(target, GL_TEXTURE_MAG_FILTER, filter);
    glTexParameteri(target, GL_TEXTURE_MIN_FILTER, filter);

    glTexParameteri(target, GL_TEXTURE_WRAP_S, address);
    glTexParameteri(target, GL_TEXTURE_WRAP_T, address);
    glTexParameteri(target, GL_TEXTURE_WRAP_R, address);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    unsigned char *data = new unsigned char[n*n*n];
    unsigned char *ptr = data;

    float frequency = 3.0f / n;
    float center = n / 2.0f + 0.5f;

    for (int x=0; x < n; ++x)
    {
        for (int y=0; y < n; ++y)
        {
            for (int z=0; z < n; ++z)
            {
                float dx = center-x;
                float dy = center-y;
                float dz = center-z;

                float off = fabsf(Perlin3D(x*frequency,
                               y*frequency,
                               z*frequency,
                               5,
                               0.5f));

                float d = sqrtf(dx*dx+dy*dy+dz*dz)/(n);

                *ptr++ = ((d-off) < r)?255:0;
            }
        }
    }

    // upload
    glTexImage3D(target,
                 0,
                 GL_LUMINANCE,
                 n,
                 n,
                 n,
                 0,
                 GL_LUMINANCE,
                 GL_UNSIGNED_BYTE,
                 data);

    delete[] data;

    return texid;
}

An excellent introduction to volume rendering is the SIGGRAPH 2010 course Volumetric Methods in Visual Effects; see also Kyle Hayward's Volume Rendering 101 for some GPU specifics.

Once I had the visualisation in place, porting the fluid simulation to 3D was actually not too difficult. I spent most of my time tweaking the initial conditions to get the smoke to behave in a way that looks interesting; you can see one of my more successful simulations below:

Currently the simulation runs entirely on the CPU, using a 128^3 grid with monotonic tri-cubic interpolation and vorticity confinement as described in Visual Simulation of Smoke by Fedkiw et al. I'm fairly happy with the result, but perhaps I have the vorticity confinement cranked a little high.
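As a reminder, the confinement force from that paper computes the vorticity \(\boldsymbol{\omega}\) from the velocity field and applies a force towards regions of high vorticity magnitude:

\[\boldsymbol{\omega} = \nabla \times \mathbf{u}, \qquad \mathbf{N} = \frac{\nabla|\boldsymbol{\omega}|}{|\nabla|\boldsymbol{\omega}||}, \qquad \mathbf{f}_{conf} = \epsilon h \,(\mathbf{N} \times \boldsymbol{\omega})\]

where \(h\) is the grid spacing and \(\epsilon\) is the user parameter I suspect I have cranked a little high.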

Nothing is optimised, so it's running at about 1.2s a frame on my 2.66GHz Core 2 MacBook.

Future work is to port the simulation to OpenCL and implement some more advanced features. Specifically I'm interested in A Vortex Particle Method for Smoke, Water and Explosions which Kevin Beason describes on his fluid page (with some great videos).

On a personal note, I resigned from LucasArts a couple of weeks ago and am looking forward to some time off back in New Zealand with my family and friends. Just in time for the Kiwi summer!

Links

GPU Gems - Fluid Simulation on the GPU
GPU Gems 3 - Real-Time Rendering and Simulation of 3D Fluids
Fluid Simulation For Computer Graphics: A Tutorial in Grid Based and Particle Based Methods



Faster Fog

Posted: June 10th, 2010

Cedrick at Lucas suggested some nice optimisations for the in-scattering equation I posted last time.

I had left off at:

\[L_{s} = \frac{\sigma_{s}I}{v}\left( \tan^{-1}\left(\frac{d+b}{v}\right) - \tan^{-1}\left(\frac{b}{v}\right) \right)\]

But we can remove one of the two inverse trigonometric functions by using the following identity (valid when \(xy > -1\)):

\[\tan^{-1}x - \tan^{-1}y = \tan^{-1}\frac{x-y}{1+xy}\]

Which simplifies the expression for \(L_{s}\) to:

\[L_{s} = \frac{\sigma_{s}I}{v}\tan^{-1}\left(\frac{x-y}{1+xy}\right)\]

With \(x\) and \(y\) being replaced by:

\[\begin{array}{lcl} x = \frac{d+b}{v} \\ y = \frac{b}{v}\end{array}\]

So the updated GLSL snippet looks like:

float InScatter(vec3 start, vec3 dir, vec3 lightPos, float d)
{
   vec3 q = start - lightPos;

   // calculate coefficients
   float b = dot(dir, q);
   float c = dot(q, q);
   float s = 1.0f / sqrt(c - b*b);

   // after a little algebraic re-arrangement; note that x here is
   // d/v, i.e. the difference (x - y) from the identity above, and
   // the identity's denominator 1 + xy becomes 1 + (x+y)*y
   float x = d*s;
   float y = b*s;
   float l = s * atan( x / (1.0 + (x+y)*y) );

   return l;
}

Of course, it's always good to verify your 'optimisations'; ideally I would take GPU timings, but the next best thing is to run it through NVShaderPerf and check the cycle counts:

Original (2x atan()):

Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
Results 76 cycles, 10 r regs, 2,488,320,064 pixels/s

Updated (1x atan()):

Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
Results 55 cycles, 8 r regs, 3,251,200,103 pixels/s

A tasty 28% reduction in cycle count!

Another idea is to use an approximation of atan(). Robin Green has some great articles about faster math functions in which he discusses how you can range-reduce to [0, 1] and approximate using minimax polynomials.

My first attempt was much simpler: looking at its graph, we can see that atan() is almost linear near 0 and asymptotically approaches \(\pi/2\).

Perhaps the simplest approximation we could try would be something like:

\[\tan^{-1}(x) \approx \min\left(x, \frac{\pi}{2}\right)\]

Which looks like:

float atanLinear(float x)
{
   return clamp(x, -0.5*kPi, 0.5*kPi);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 34 cycles, 8 r regs, 4,991,999,816 pixels/s

Pretty ugly, but even though the maximum error here is huge (~0.43 relative), visually the difference is surprisingly small.

Still, I thought I'd try for something more accurate. I used a 3rd degree minimax polynomial to approximate the range [0, 1], which gave something practically identical to atan() for my purposes (~0.0052 max relative error):

float MiniMax3(float x)
{
   return ((-0.130234*x - 0.0954105)*x + 1.00712)*x - 0.00001203333;
}

float atanMiniMax3(float x)
{
   // range reduction
   if (x < 1)
      return MiniMax3(x);
   else
      return kPi*0.5 - MiniMax3(1.0/x);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 40 cycles, 8 r regs, 4,239,359,951 pixels/s

Disclaimer: This isn't designed as a general replacement for atan(), for a start it doesn't handle values of x < 0 and it hasn't had anywhere near the love put into other approximations you can find online (optimising for floating point representations for example).

As a bonus I found that putting the polynomial evaluation into Horner form shaved 4 cycles from the shader.

Cedrick also had an idea to use something a little different:

\[\tan^{-1}(x) \approx \frac{\pi}{2}\left(\frac{kx}{1+kx}\right)\]

This might look familiar to some as the basic Reinhard tone mapping curve! We eyeballed values for k until we had one that looked close (you can tell I'm being very rigorous here); in the end k=1 was close enough and is one cycle faster :)

float atanRational(float x)
{
   return kPi*0.5*x / (1.0+x);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 34 cycles, 8 r regs, 4,869,120,025 pixels/s

To get it down to 34 cycles we had to expand out the expression for x and perform some more grouping of terms, which shaved another cycle and a register off it. I was surprised to see the rational approximation come so close in performance to the linear one; I guess the scheduler is doing a good job of hiding some work there.
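For reference, here's my reconstruction of that fused expression (substituting the rational approximation into the simplified InScatter and grouping terms; not necessarily the exact form we used):

const float kPi = 3.14159265f;

// folding atan(w) ~= kPi*0.5*w/(1+w) into l = s*atan(x/(1+(x+y)*y))
// collapses the whole thing into a single rational expression
float InScatterRational(float x, float y, float s)
{
   return s * kPi * 0.5f * x / (1.0f + x + (x + y)*y);
}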

In the end all three approximations gave pretty good visual results:

Original (cycle count 76):

MiniMax3, Error 8x (cycle count 40):

Rational, Error 8x (cycle count 34):

Linear, Error 8x (cycle count 34):

Links:

http://realtimecollisiondetection.net/blog/?p=9

http://www.research.scea.com/gdc2003/fast-math-functions.html



In-Scattering Demo

Posted: May 29th, 2010

This demo shows an analytic solution to the differential in-scattering equation for light in participating media. It's a similar but simplified version of equations found in [1], [2] and as I recently discovered [3]. However I thought showing the derivation might be interesting for some out there, plus it was a good excuse for me to brush up on my \(\LaTeX\).

You might notice I also updated the site's theme, unfortunately you need a white background to make wordpress.com LaTeX rendering play nice with RSS feeds (other than that it's very convenient).

Download the demo here

The demo uses GLSL and shows point and spot lights in a basic scene with some tweakable parameters:

Background

Given a view ray defined as:

\[\mathbf{x}(t) = \mathbf{p} + t\mathbf{d}\]

We would like to know the total amount of light scattered towards the viewer (in-scattered) due to a point light source. For the purposes of this post I will only consider single scattering within isotropic media.

The differential equation that describes the change in radiance due to light scattered into the view direction inside a differential volume is given in PBRT (p578); if we assume equal scattering in all directions we can write it as:

\[dL_{s}(t) = \sigma_{s}L_{i}(t)\,dt\]

Where \(\sigma_{s}\) is the scattering coefficient, which I will assume includes the \(\frac{1}{4\pi}\) normalization term of an isotropic phase function. For a point light source at distance \(d\) with intensity \(I\) we can calculate the incident radiance at a receiving point as:

\[L_{i} = \dfrac{I}{d^2}\]

Plugging in the equation for a point along the view ray we have:

\[L_{i}(t) = \dfrac{I}{|\mathbf{x}(t)-\mathbf{s}|^2}\]

Where \(\mathbf{s}\) is the light source position. The solution to the differential equation above is then given by:

\[L_{s} = \int_{0}^{d} \sigma_{s}L_{i}(t) \, dt\]

\[L_{s} = \int_{0}^{d} \frac{\sigma_{s}I}{|\mathbf{x}(t)-\mathbf{s}|^2}\,dt\]

To find this integral in closed form we need to expand the distance calculation in the denominator into something we can deal with more easily:

\[L_{s} = \sigma_{s}I\int_0^d{\dfrac{dt}{(\mathbf{p} + t\mathbf{d} - \mathbf{s})\cdot(\mathbf{p} + t\mathbf{d} - \mathbf{s})}}\]

Expanding the dot product and gathering terms, we have:

\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{(\mathbf{d}\cdot\mathbf{d})t^2 + 2(\mathbf{m}\cdot\mathbf{d})t + \mathbf{m}\cdot\mathbf{m} }\]

Where \(\mathbf{m} = (\mathbf{p}-\mathbf{s})\).

Now we have something a bit more familiar; because the direction vector is unit length we can remove the coefficient from the quadratic term, and we have:

\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{t^2 + 2bt + c}\]

Here \(b = \mathbf{m}\cdot\mathbf{d}\) and \(c = \mathbf{m}\cdot\mathbf{m}\). At this point you could look up the integral in standard tables, but I'll continue to simplify it for completeness. Completing the square we obtain:

\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{ (t^2 + 2bt + b^2) + (c-b^2)}\]

Making the substitution \(u = (t + b)\), \(v = (c-b^2)^{1/2}\) and updating our limits of integration, we have:

\[L_{s} = \sigma_{s}I\int_{b}^{b+d}\frac{du}{ u^2 + v^2}\]

\[L_{s} = \sigma_{s}I \left[ \frac{1}{v}\tan^{-1}\frac{u}{v} \right]_b^{b+d}\]

Finally giving:

\[L_{s} = \frac{\sigma_{s}I}{v}\left( \tan^{-1}\frac{d+b}{v} - \tan^{-1}\frac{b}{v} \right)\]

This is what we will evaluate in the pixel shader; here's the GLSL snippet for the integral evaluation (a direct translation of the equation above):

float InScatter(vec3 start, vec3 dir, vec3 lightPos, float d)
{
   // light to ray origin
   vec3 q = start - lightPos;

   // coefficients
   float b = dot(dir, q);
   float c = dot(q, q);

   // evaluate integral
   float s = 1.0f / sqrt(c - b*b);
   float l = s * (atan( (d + b) * s ) - atan( b*s ));

   return l;
}

Where d is the distance traveled, computed by finding the entry / exit points of the ray with the volume.
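For completeness, here's one way to compute those entry / exit points with a standard slab test; this helper is my illustration rather than code from the demo:

#include <cfloat>

// intersect a ray (origin p, unit direction dir) against an
// axis-aligned box [bmin, bmax]; on a hit returns true with the
// entry/exit distances in t0/t1, so d = t1 - t0
bool IntersectRayAABB(const float p[3], const float dir[3],
                      const float bmin[3], const float bmax[3],
                      float& t0, float& t1)
{
    t0 = 0.0f; // clamp the entry point to the ray origin
    t1 = FLT_MAX;

    for (int a = 0; a < 3; ++a)
    {
        // slab distances along this axis (IEEE inf handles dir[a] == 0)
        float inv = 1.0f / dir[a];
        float tn = (bmin[a] - p[a])*inv;
        float tf = (bmax[a] - p[a])*inv;

        if (tn > tf) { float tmp = tn; tn = tf; tf = tmp; }

        if (tn > t0) t0 = tn;
        if (tf < t1) t1 = tf;

        if (t0 > t1)
            return false;
    }

    return true;
}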

To make the effect more interesting it is possible to incorporate a particle system. I apply the same scattering shader to each particle and treat it as a thin slab to obtain an approximate depth, then simply multiply by a noise texture at the end.

Optimisations

  • As it is, the code above only supports lights with infinite extent, which implies drawing the entire frame for each light. It would be possible to limit it to a volume, but you'd want to add a falloff to the effect to avoid a sharp transition at the boundary.
  • Performing the full evaluation per-pixel for the particles is probably unnecessary; doing it at a lower frequency (per-vertex or even per-particle) would probably look acceptable.

Notes

  • Generally objects appear to have wider specular highlights and more ambient lighting in the presence of participating media. [1] discusses this in detail, but you can fudge it by lowering the specular power in your materials as the scattering coefficient increases.
  • According to Rayleigh scattering, blue light at the shorter-wavelength end of the visible spectrum is scattered considerably more than red light (see the note after this list). It's simple to account for this wavelength dependence by making the scattering coefficient a constant vector weighted towards the blue component; I found this helps add to the realism of the effect.
  • I'm curious to know how the torch light was done in Alan Wake, as it seems to be high quality (not just billboards) with multiple light shafts... maybe someone out there knows?
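The note mentioned above: Rayleigh scattering varies with the inverse fourth power of wavelength,

\[\sigma_{s}(\lambda) \propto \frac{1}{\lambda^4}\]

so blue light at ~450nm is scattered roughly \((650/450)^4 \approx 4.4\) times more strongly than red light at ~650nm.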

References

[1] Sun, B., Ramamoorthi, R., Narasimhan, S. G., and Nayar, S. K. 2005. A practical analytic single scattering model for real time rendering.

[2] Wenzel, C. 2006. Real-time atmospheric effects in games.

[3] Zhou, K., Hou, Q., Gong, M., Snyder, J., Guo, B., and Shum, H. 2007. Fogshop: Real-Time Design and Rendering of Inhomogeneous, Single-Scattering Media.

Related

[4] Engelhardt, T. and Dachsbacher, C. 2010. Epipolar sampling for shadows and crepuscular rays in participating media with single scattering.

[5] Volumetric Shadows using Polygonal Light Volumes (upcoming HPG2010)



Stochastic Pruning for Real-Time LOD

Posted: January 12th, 2010

Rendering plants efficiently has always been a challenge in computer graphics; a relatively new technique that addresses this is Pixar's stochastic pruning algorithm. It was originally developed for rendering the desert scenes in Cars, and Weta also claim to have used the same technique on Avatar.

Although designed with offline rendering in mind it maps very naturally to the GPU and real-time rendering. The basic algorithm is this:

  1. Build your mesh of N elements (in the case of a tree the elements would be leaves, usually represented by quads)
  2. Sort the elements in random order (a robust way of doing this is to use the Fisher-Yates shuffle; see the sketch after this list)
  3. Calculate the proportion U of elements to render based on distance to the object.
  4. Draw N*U unpruned elements with area scaled by 1/U
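Here's the element-wise shuffle from step 2 as a minimal sketch; it assumes 16-bit indices and four indices per quad, matching the draw call further below:

#include <cstdlib>

// Fisher-Yates shuffle applied element-wise to an index buffer,
// where each element (quad) is a group of 4 consecutive indices
void ShuffleElements(unsigned short* indices, int numElements)
{
    for (int i = numElements - 1; i > 0; --i)
    {
        // pick a random element in [0, i] (rand() is fine here,
        // the shuffle only needs to happen once at build time)
        int j = rand() % (i + 1);

        // swap all 4 indices of elements i and j
        for (int k = 0; k < 4; ++k)
        {
            unsigned short tmp = indices[i*4 + k];
            indices[i*4 + k] = indices[j*4 + k];
            indices[j*4 + k] = tmp;
        }
    }
}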

So putting this onto the GPU is straightforward: pre-shuffle your index buffer (element-wise, as above), and when you come to draw you can calculate the unpruned element count using something like:

// calculate scaled distance to viewer
float z = max(1.0f, Length(viewerPos-objectPos)/pruneStartDistance);
// distance at which half the leaves will be pruned
float h = 2.0f;
// proportion of elements unpruned
float u = powf(z, -Log(h, 2));
// actual element count
int m = ceil(numElements * u);
// scale factor
float s = 1.0f / u;

Then just submit a modified draw call for m quads:

glDrawElements(GL_QUADS, m*4, GL_UNSIGNED_SHORT, 0);

The scale factor computed above preserves the total surface area across all elements, which ensures consistent pixel coverage at any distance. The area scaling can be performed efficiently in the vertex shader, meaning no CPU involvement is necessary (aside from setting up the parameters, of course). In a basic implementation you would see elements pop in and out as the distance changes, but this can be helped by having a transition window that scales elements down before they become pruned (discussed in the original paper).

Tree unpruned

Tree pruned to 10% of original

Billboards still have their place but it seems like this kind of technique could have applications for many effects, grass and particle systems being obvious ones.

I've updated my previous tree demo with an implementation of stochastic pruning and a few other changes:

  • Fixed some bugs with ATI driver compatibility
  • Preetham based sky-dome
  • Optimised shadow map generation
  • Some new example plants
  • Tweaked leaf and branch shaders

You can download the demo here

I use the Self-organising tree models for image synthesis algorithm (from SIGGRAPH 2009), which I have posted about previously, to generate the trees.

While I was researching I also came across Physically Guided Animation of Trees from Eurographics 2009, they have some great videos of real-time animated trees.

I've also posted my collection of plant modelling papers onto Mendeley (great tool for organising pdfs!).

Tree pruned to 70% of original
