<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Miles Macklin</title>
    <link>http://blog.mmacklin.com/</link>
    <language>en-us</language>
    <author></author>
    <copyright>(C) 2017</copyright>
    <lastBuildDate>Wed, 17 May 2017 00:00:00 &#43;0000</lastBuildDate>

    
      
    
      
    
      
    
      
    
      
    
      
        <item>
          <title>XPBD slides and stiffness</title>
          <link>http://blog.mmacklin.com/2016/10/12/xpbd-slides-and-stiffness/</link>
          <pubDate>Wed, 12 Oct 2016 05:16:03 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2016/10/12/xpbd-slides-and-stiffness/</guid>
          <description>&lt;p&gt;The slides for my talk on XPBD are now up on the publications page, or available &lt;a href=&#34;http://mmacklin.com/xpbd_slides.pdf&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I spoke to someone who had already implemented the method and who was surprised to find they needed to use very small compliance values in the range of 10&lt;nobr&gt;&lt;sup&gt;-6&lt;/sup&gt;&lt;/nobr&gt; to get stiffness comparable to regular PBD.&lt;/p&gt;

&lt;p&gt;The reason for this is that, unlike stiffness in PBD, compliance in XPBD has a direct correspondence to engineering stiffness, i.e.: Young&#39;s modulus. Most real-world materials have a Young&#39;s modulus of several GPa (&lt;nobr&gt;10&lt;sup&gt;9&lt;/sup&gt; N/m&lt;sup&gt;2&lt;/sup&gt;&lt;/nobr&gt;), and because compliance is simply inverse stiffness it must be correspondingly small.&lt;/p&gt;

&lt;p&gt;Below are stiffness values for some common materials. I have listed the compliance in the right-hand column, which is of course just the reciprocal of the stiffness.&lt;/p&gt;

&lt;table&gt;
&lt;tr&gt;&lt;th&gt;Material&lt;/th&gt;&lt;th&gt;Stiffness (N/m^2)&lt;/th&gt;&lt;th&gt;Compliance (m^2/N)&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Concrete&lt;/td&gt;&lt;td&gt;25.0 x 10&lt;sup&gt;9&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;0.04 x 10&lt;sup&gt;-9&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Wood&lt;/td&gt;&lt;td&gt;6.0 x 10&lt;sup&gt;9&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;0.16 x 10&lt;sup&gt;-9&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Leather&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;8&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;-8&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Tendon&lt;/td&gt;&lt;td&gt;5.0 x 10&lt;sup&gt;7&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;0.2 x 10&lt;sup&gt;-7&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rubber&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;6&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;-6&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Muscle&lt;/td&gt;&lt;td&gt;5.0 x 10&lt;sup&gt;3&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;0.2 x 10&lt;sup&gt;-3&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Fat&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;3&lt;/sup&gt;&lt;/td&gt;&lt;td&gt;1.0 x 10&lt;sup&gt;-3&lt;/sup&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;Note that for deformable materials, like soft tissue, the material stiffness depends heavily on strain. Muscle, for example, may become up to three orders of magnitude stiffer as it is stretched, and even the surrounding temperature can have a large impact on these materials. The values listed here should only be used as a rough guide for a material under low strain.&lt;/p&gt;

&lt;p&gt;This is probably not a very convenient range for artists to work with, so it can make sense to expose a parameter in the [0,1] range, map it to the desired stiffness range, and then take the reciprocal to obtain compliance.&lt;/p&gt;
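
&lt;p&gt;As a rough illustration (a sketch of one possible mapping, not something from the paper), a helper like the following could turn an artist-facing slider into a compliance value. Interpolating in log space keeps the slider useful across the many orders of magnitude in the table above:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// map an artist-facing parameter s in [0,1] to an XPBD compliance value
// minStiffness/maxStiffness are the desired stiffness bounds in N/m^2,
// e.g. fat (1.0e3f) through concrete (25.0e9f) from the table above
float StiffnessToCompliance(float s, float minStiffness, float maxStiffness)
{
    // interpolate in log space so the slider covers several orders of magnitude
    float logStiffness = logf(minStiffness) + s*(logf(maxStiffness) - logf(minStiffness));
    float stiffness = expf(logStiffness);

    // compliance is simply the inverse stiffness
    return 1.0f/stiffness;
}
&lt;/pre&gt;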

&lt;h5 id=&#34;sources&#34;&gt;Sources&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a style=&#34;word-break: break-all;&#34; href=&#34;http://biomechanics.stanford.edu/me338/me338_project02.pdf&#34;&gt;biomechanics.stanford.edu/me338/me338_project02.pdf&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a style=&#34;word-break: break-all;&#34; href=&#34;http://www-mdp.eng.cam.ac.uk/web/library/enginfo/cueddatabooks/materials.pdf&#34;&gt;mdp.eng.cam.ac.uk/web/library/enginfo/cueddatabooks/materials.pdf&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a style=&#34;word-break: break-all;&#34; href=&#34;http://www-mech.eng.cam.ac.uk/profiles/fleck/papers/259.pdf&#34;&gt;www-mech.eng.cam.ac.uk/profiles/fleck/papers/259.pdf&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a style=&#34;word-break: break-all;&#34; href=&#34;http://brl.illinois.edu/Publications/1996/Chen-UFFC-191-1996.pdf&#34;&gt;brl.illinois.edu/Publications/1996/Chen-UFFC-191-1996.pdf&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;/ol&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>XPBD</title>
          <link>http://blog.mmacklin.com/2016/09/15/xpbd/</link>
          <pubDate>Thu, 15 Sep 2016 00:54:50 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2016/09/15/xpbd/</guid>
          <description>&lt;p&gt;Anyone who has worked with Position-Based Dynamics (PBD) in production will know that the constraint stiffness is heavily dependent on the number of iterations performed by the solver. Regardless of how you set the stiffness coefficients, the solver will converge to an infinitely stiff solution given enough iterations.&lt;/p&gt;

&lt;p&gt;We have a new paper that solves this iteration count and time step dependent stiffness with a very small addition to the original algorithm. Here is the abstract:&lt;/p&gt;

&lt;blockquote&gt;
We address the long-standing problem of iteration count and time-step dependent constraint stiffness in position-based dynamics (PBD). We introduce a simple extension to PBD that allows it to accurately and efficiently simulate arbitrary elastic and dissipative energy potentials in an implicit manner. In addition, our method provides constraint force estimates, making it applicable to a wider range of applications, such as those requiring haptic user-feedback. We compare our algorithm to more expensive non-linear solvers and find it produces visually similar results while maintaining the simplicity and robustness of the PBD method.
&lt;/blockquote&gt;

&lt;p&gt;The method is derived from an implicit integration scheme, and produces results very close to those given by more complex Newton-based solvers, as you can see in the submission video:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/jrvJFzrF3kg?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;
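
&lt;p&gt;For a feel for how small the change is, here is a rough sketch (my own illustration, not code from the paper) of the compliant update for a single distance constraint. Setting the compliance to zero recovers the standard PBD projection, and the accumulated multiplier lambda is what provides the constraint force estimates mentioned in the abstract:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// sketch of an XPBD step for one distance constraint between two particles
// x0, x1: particle positions (3 floats each), w0, w1: inverse masses
// compliance: inverse stiffness for this constraint, dt: time step
// lambda: Lagrange multiplier accumulated over the solver iterations
//         (reset to zero at the start of each time step)
void SolveDistanceConstraint(float x0[3], float x1[3], float w0, float w1,
                             float restLength, float compliance, float dt,
                             float&amp;amp; lambda)
{
    float d[3] = { x1[0]-x0[0], x1[1]-x0[1], x1[2]-x0[2] };
    float len = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);

    float C = len - restLength;            // constraint error
    float alphaTilde = compliance/(dt*dt); // time step scaled compliance

    // multiplier update; alphaTilde = 0 gives the usual PBD projection
    float dLambda = (-C - alphaTilde*lambda)/(w0 + w1 + alphaTilde);
    lambda += dLambda;

    // move each particle along the constraint gradient, weighted by inverse mass
    for (int i=0; i &amp;lt; 3; ++i)
    {
        float n = d[i]/len;
        x0[i] -= n*w0*dLambda;
        x1[i] += n*w1*dLambda;
    }
}
&lt;/pre&gt;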

&lt;p&gt;I will be presenting the paper at &lt;a href=&#34;https://mig2016.inria.fr&#34;&gt;Motion in Games (MIG)&lt;/a&gt; in San Francisco next month. If you&#39;re in the area you should attend; these smaller conferences are usually very nice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://mmacklin.com/xpbd.pdf&#34;&gt;Download Paper (PDF, 2mb)&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://mmacklin.com/xpbd_supplementary.mp4&#34;&gt;Download Video (MP4, 51mb)&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;
 [...]</description>
        </item>
      
    
      
    
      
        <item>
          <title>New SIGGRAPH paper</title>
          <link>http://blog.mmacklin.com/2014/05/15/new-siggraph-paper/</link>
          <pubDate>Thu, 15 May 2014 00:28:00 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2014/05/15/new-siggraph-paper/</guid>
          <description>&lt;p&gt;Just a quick note to say that the pre-print for our paper on particle physics for real-time applications is now available. Visit the &lt;a href=&#34;http://blog.mmacklin.com/flex&#34;&gt;project page&lt;/a&gt; for all the downloads, or check out the submission video below:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/94622661&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;The paper contains most of the practical knowledge and insight about Position-Based Dynamics that I gained while developing Flex. In addition, it introduces a few new features such as implicit friction and smoke simulation.&lt;/p&gt;

&lt;p&gt;As noted in the paper, unified solvers are common in offline VFX, but are relatively rare in games. In fact, it was my experience at Rocksteady working on Batman: Arkham Asylum that helped inspire this work. The Batman universe has all these great characters with unique special powers, and I think a tool like this would have found many applications (e.g.: a Clayface boss fight). Particle based methods have their limitations, and traditional rigid-body physics engines will still be important, but I think frameworks like this can be a great addition to the toolbox.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
    
      
    
      
    
      
    
      
        <item>
          <title>FLEX</title>
          <link>http://blog.mmacklin.com/2013/11/13/flex/</link>
          <pubDate>Wed, 13 Nov 2013 03:51:40 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2013/11/13/flex/</guid>
          <description>&lt;p&gt;FLEX is the name of the new GPU physics solver I have been working on at NVIDIA. It was announced at the Montreal editor&#39;s day a few weeks ago, and today we have released some more information in the form of a video trailer and a Q&amp;amp;A with the PhysX fan site.&lt;/p&gt;

&lt;p&gt;The solver builds on my Position Based Fluids work, but adds many new features such as granular materials, clothing, pressure constraints, lift + drag model, rigid bodies with plastic deformation, and more. Check out the video below and see the article for more details on what FLEX can do.&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/lXxjleVS6pE?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;Full Video (130mb): &lt;a href=&#34;http://mmacklin.com/flex_demo_reel.mp4&#34;&gt;http://mmacklin.com/flex_demo_reel.mp4&lt;/a&gt;&lt;br&gt;
Full Article: &lt;a href=&#34;http://physxinfo.com/news/11860/introducing-nvidia-flex-unified-gpu-physx-solver/&#34;&gt;http://physxinfo.com&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
    
      
    
      
        <item>
          <title>SIGGRAPH slides</title>
          <link>http://blog.mmacklin.com/2013/07/25/siggraph-slides/</link>
          <pubDate>Thu, 25 Jul 2013 18:08:59 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2013/07/25/siggraph-slides/</guid>
          <description>&lt;p&gt;Slides for my SIGGRAPH presentation of Position Based Fluids are available here:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://mmacklin.com/pbf_slides.pdf&#34;&gt;http://mmacklin.com/pbf_slides.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the presentation I showed videos of some more recent results including two-way coupling of fluids with clothing and rigid bodies. They&#39;re embedded below:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/LMBeC_Ht2Lk?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/-4wjJr2XwNo?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/GqEAV2xIpiQ?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/2fcdK_hWtMg?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;Overall it has been a great SIGGRAPH; I met tons of new people who provided lots of inspiration for new research ideas. Thanks!&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
        <item>
          <title>Real-Time Video Capture with FFmpeg</title>
          <link>http://blog.mmacklin.com/2013/06/11/real-time-video-capture-with-ffmpeg/</link>
          <pubDate>Tue, 11 Jun 2013 23:46:33 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2013/06/11/real-time-video-capture-with-ffmpeg/</guid>
          <description>&lt;p&gt;Working on a distributed team means that often the best way to share new results is via video captures of simulations. Previously I would do this by dumping uncompressed frames from OpenGL to disk, and then compressing with FFmpeg. I prefer this over tools like Fraps because it gives more control over compression quality, and has no watermarking or time limits.&lt;/p&gt;

&lt;p&gt;The problem with this method is simply that saving uncompressed frames generates a large amount of data that quickly fills up the write cache and slows down the whole system during capture; it also makes FFmpeg disk-bound on reads during encoding.&lt;/p&gt;

&lt;p&gt;Thankfully there is a better alternative: by using a direct pipe between the app and FFmpeg you can avoid this disk IO entirely. I couldn&#39;t find a concise example of this on the web, so here&#39;s how to do it in a Win32 GLUT app.&lt;/p&gt;

&lt;p&gt;At startup:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;stdio.h&amp;gt;

// start ffmpeg telling it to expect raw rgba 720p-60hz frames
// -i - tells it to read frames from stdin
const char* cmd = &#34;ffmpeg -r 60 -f rawvideo -pix_fmt rgba -s 1280x720 -i - &#34;
                  &#34;-threads 0 -preset fast -y -pix_fmt yuv420p -crf 21 -vf vflip output.mp4&#34;;

// open pipe to ffmpeg&#39;s stdin in binary write mode
FILE* ffmpeg = _popen(cmd, &#34;wb&#34;);

// framebuffer readback buffer; width and height must match the -s flag above
int* buffer = new int[width*height];
&lt;/pre&gt;

&lt;p&gt;After rendering each frame, grab back the framebuffer and send it straight to the encoder:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
glutSwapBuffers();
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

fwrite(buffer, sizeof(int)*width*height, 1, ffmpeg);
&lt;/pre&gt;

&lt;p&gt;When you&#39;re done, just close the stream as follows:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
_pclose(ffmpeg);
&lt;/pre&gt;

&lt;p&gt;With these settings FFmpeg generates a nice H.264 compressed mp4 file, and almost manages to keep up with my real-time simulations.&lt;/p&gt;

&lt;p&gt;This has vastly improved my workflow, so I hope someone else finds it useful.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Update:&lt;/b&gt; Added -pix_fmt yuv420p to the output params to generate files compatible with Windows Media Player and Quicktime.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Update:&lt;/b&gt; For OSX / Linux, change:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;FILE* ffmpeg = _popen(cmd, &#34;wb&#34;);&lt;/pre&gt;
&lt;p&gt;into:&lt;/p&gt;
&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;FILE* ffmpeg = popen(cmd, &#34;w&#34;);&lt;/pre&gt;
 [...]</description>
        </item>
      
    
      
    
      
        <item>
          <title>Position Based Fluids</title>
          <link>http://blog.mmacklin.com/2013/04/24/position-based-fluids/</link>
          <pubDate>Wed, 24 Apr 2013 06:33:12 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2013/04/24/position-based-fluids/</guid>
          <description>&lt;p&gt;Position Based Fluids (PBF) is the title of our paper that has been accepted for presentation at SIGGRAPH 2013. I&#39;ve set up a project page where you can download the paper and all the related content here:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://blog.mmacklin.com/publications&#34;&gt;http://blog.mmacklin.com/publications&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have continued working on the technique since the submission, mainly improving the rendering, and adding features like spray and foam (based on the excellent paper from the University of Freiburg: &lt;a href=&#34;http://cg.informatik.uni-freiburg.de/publications/2012_CGI_sprayFoamBubbles.pdf&#34;&gt;Unified Spray, Foam and Bubbles for Particle-Based Fluids&lt;/a&gt;). You can see the results in action below, but I recommend checking out the project page and downloading the videos; they look great at full resolution and 60Hz.&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/F5KuP6qEuew?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe frameborder=&#34;0&#34; src=&#34;https://www.youtube.com/embed/mgYztcjOvRQ?controls=2&#34; class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
        <item>
          <title>2D FEM</title>
          <link>http://blog.mmacklin.com/2012/06/27/2d-fem/</link>
          <pubDate>Wed, 27 Jun 2012 10:40:40 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2012/06/27/2d-fem/</guid>
          <description>&lt;p&gt;This post is about generating meshes for finite element simulations. I&#39;ll be covering other aspects of FEM based simulation in a later post, until then I recommend checking out Matthias Müller&#39;s very good introduction in the SIGGRAPH 2008 Real Time Physics course &lt;a href=&#34;#ref1&#34;&gt;[1]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After spending the last few weeks reading, implementing and debugging meshing algorithms I have a new-found respect for people in this field. It is amazing how many ways meshes can &amp;quot;go wrong&amp;quot;; even the experts have it tough:&lt;/p&gt;

&lt;blockquote&gt;“I hate meshes. I cannot believe how hard this is. Geometry is hard.” — David Baraff, Senior Research Scientist, Pixar Animation Studios&lt;/blockquote&gt;

&lt;p&gt;Meshing algorithms are hard, but unless you are satisfied simulating cantilever beams and simple geometric shapes you will eventually need to deal with them.&lt;/p&gt;

&lt;p&gt;My goal was to find an algorithm that would take an image as input, and produce as output a &lt;i&gt;good quality&lt;/i&gt; triangle mesh that conformed to the boundary of any non-zero regions in the image.&lt;/p&gt;

&lt;p&gt;My first attempt was to perform a coarse grained edge detect and generate a &lt;a href=&#34;http://en.wikipedia.org/wiki/Delaunay_triangulation&#34;&gt;Delaunay triangulation&lt;/a&gt; of the resulting point set. The input image and the result of a low-res edge detect:&lt;/p&gt;

&lt;div class=&#34;aligncenter&#34; style=&#34;text-align: center; padding: 16px; &#34;&gt;
&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/armadillo.jpg&#34; alt=&#34;&#34; title=&#34;armadillo&#34; width=&#34;30%&#34;/&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_figure1.png&#34; width=&#34;30%&#34; alt=&#34;&#34; title=&#34;Coarse edge detect&#34; /&gt;
&lt;/div&gt;

&lt;p&gt;This point set can be converted to a mesh by any Delaunay triangulation method; the &lt;a href=&#34;http://en.wikipedia.org/wiki/Bowyer%E2%80%93Watson_algorithm&#34;&gt;Bowyer-Watson algorithm&lt;/a&gt; is probably the simplest. It works by inserting one point at a time, removing any triangles whose circumcircle is encroached by the new point, and re-tessellating the surrounding edges. A nice feature is that the algorithm has a direct analogue for tetrahedral meshes: triangles become tetrahedra, edges become faces and circumcircles become circumspheres.&lt;/p&gt;

&lt;p&gt;Here&#39;s an illustration of how Bowyer/Watson proceeds to insert the point in red into the mesh:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_del_1.png&#34; alt=&#34;&#34; title=&#34;Delaunay_1&#34; width=&#34;25%&#34; class=&#34;alignnone&#34; /&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_del_1a.png&#34; alt=&#34;&#34; title=&#34;Delaunay_1a&#34; width=&#34;25%&#34; class=&#34;alignnone&#34; /&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_del_2.png&#34; alt=&#34;&#34; title=&#34;Delaunay_2&#34; width=&#34;25%&#34; class=&#34;alignnone&#34; /&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_del_3.png&#34; alt=&#34;&#34; title=&#34;Delaunay_3&#34; width=&#34;25%&#34; class=&#34;alignnone&#34; /&gt;&lt;/p&gt;

&lt;p&gt;And here is the Delaunay triangulation of the Armadillo point set:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_figure2.png&#34; alt=&#34;&#34; title=&#34;Delaunay Triangulation&#34; class=&#34;aligncenter&#34; /&gt;&lt;/p&gt;

&lt;p&gt;As you can see, Delaunay triangulation algorithms generate the convex hull of the input points. But we want a mesh that conforms to the shape boundary. One way to fix this is to sample the image at each triangle&#39;s centroid; if the sample lies outside the shape then simply throw away the triangle. This produces:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_figure3.png&#34; alt=&#34;&#34; title=&#34;Trimmed Delaunay&#34; class=&#34;aligncenter&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Much better! Now we have a reasonably good approximation of the input shape. Unfortunately, FEM simulations don&#39;t work well with long thin &amp;quot;sliver&amp;quot; triangles. This is due to interpolation error, and because a small movement in one of the triangle&#39;s vertices produces large forces, which leads to inaccuracy and small time steps &lt;a href=&#34;#ref2&#34;&gt;[2]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we look at ways to improve triangle quality it&#39;s worth talking about how to measure it. One measure that works well in 2D is the ratio of the triangle&#39;s circumradius to its shortest edge. A smaller ratio indicates a higher quality triangle, which intuitively seems reasonable: long, skinny triangles have a large circumradius but one very short edge:&lt;/p&gt;

&lt;div class=&#34;aligncenter&#34; style=&#34;text-align: center; padding: 16px; &#34;&gt;
&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_good_quality.png&#34; alt=&#34;&#34; title=&#34;Good quality triangle&#34; width=&#34;30%&#34; /&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/fem_poor_quality.png&#34; alt=&#34;&#34; title=&#34;fem_poor_quality&#34; width=&#34;30%&#34; /&gt;&lt;/div&gt;

&lt;p&gt;The triangle on the left, which is equilateral, has a ratio of ~0.58 and is the best possible triangle by this measure. The triangle on the right has a ratio of ~8.7; note that the circumcenters of sliver triangles tend to fall outside of the triangle itself.&lt;/p&gt;
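
&lt;p&gt;Computing this measure is straightforward; a small sketch is below (using the standard identity that the circumradius is the product of the edge lengths divided by four times the area):&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// circumradius to shortest edge ratio for a 2D triangle (smaller is better)
float TriangleQuality(const float a[2], const float b[2], const float c[2])
{
    // edge lengths
    float ab = sqrtf((b[0]-a[0])*(b[0]-a[0]) + (b[1]-a[1])*(b[1]-a[1]));
    float bc = sqrtf((c[0]-b[0])*(c[0]-b[0]) + (c[1]-b[1])*(c[1]-b[1]));
    float ca = sqrtf((a[0]-c[0])*(a[0]-c[0]) + (a[1]-c[1])*(a[1]-c[1]));

    // twice the triangle area from the 2D cross product of two edges
    float area2 = fabsf((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]));

    // circumradius R = (ab*bc*ca)/(4*area)
    float circumradius = (ab*bc*ca)/(2.0f*area2);

    float shortestEdge = fminf(ab, fminf(bc, ca));

    return circumradius/shortestEdge;
}
&lt;/pre&gt;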

&lt;h3&gt;Delaunay refinement&lt;/h3&gt;

&lt;p&gt;Methods such as &lt;a href=&#34;http://en.wikipedia.org/wiki/Chew&#39;s_second_algorithm&#34;&gt;Chew&#39;s algorithm&lt;/a&gt; and &lt;a href=&#34;http://en.wikipedia.org/wiki/Ruppert%27s_algorithm&#34;&gt;Ruppert&#39;s algorithm&lt;/a&gt; are probably the most well known refinement algorithms. They attempt to improve mesh quality while maintaining the &lt;a href=&#34;http://en.wikipedia.org/wiki/Delaunay_triangulation#Properties&#34;&gt;Delaunay property&lt;/a&gt; (no vertex encroaching a triangle&#39;s circumcircle). This is typically done by inserting the circumcenter of low-quality triangles and subdividing edges.&lt;/p&gt;

&lt;p&gt;Jonathan Shewchuk&#39;s &lt;a href=&#34;http://www.cs.berkeley.edu/~jrs/papers/2dj.pdf&#34;&gt;&amp;quot;ultimate guide&amp;quot;&lt;/a&gt; has everything you need to know, and there is &lt;a href=&#34;http://www.cs.cmu.edu/~quake/triangle.html&#34;&gt;Triangle&lt;/a&gt;, an open source tool to generate high quality triangulations.&lt;/p&gt;

&lt;p&gt;Unfortunately these algorithms require an accurate polygonal boundary as input, as the output is sensitive to the input segment lengths. They are also famously difficult to implement robustly and efficiently; I spent most of my time implementing Ruppert&#39;s algorithm only to find the next methods produced better results with much simpler code.&lt;/p&gt;

&lt;h3&gt;Variational Methods&lt;/h3&gt;

&lt;p&gt;Variational (energy based) algorithms improve the mesh through a series of optimization steps that attempt to minimize a global energy function. I adapted the approach in Variational Tetrahedral Meshing &lt;a href=&#34;#ref3&#34;&gt;[3]&lt;/a&gt; to 2D and found it produced great results; this is the method I settled on, so I&#39;ll go into some detail.&lt;/p&gt;

&lt;p&gt;The algorithm proceeds as follows:&lt;/p&gt;

&lt;ol style=&#34;font-family: courier; font-size: 12px;&#34;&gt;
&lt;li&gt;Generate a set of uniformly distributed points interior to the shape P&lt;/li&gt;
&lt;li&gt;Generate a set of points on the boundary of the shape B&lt;/li&gt;
&lt;li&gt;Generate a Delaunay triangulation of P&lt;/li&gt;
&lt;li&gt;Optimize boundary points by moving them to the average of their neighbours in B&lt;/li&gt;
&lt;li&gt;Optimize interior points by moving them to the centroid of their Voronoi cell (area weighted average of connected triangle circumcenters)&lt;/li&gt;
&lt;li&gt;Unless the stopping criteria are met, go to 3.&lt;/li&gt;
&lt;li&gt;Remove boundary sliver triangles&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The core idea is that of repeated triangulation (3) and relaxation (4,5); it&#39;s a somewhat similar process to Lloyd&#39;s clustering, coincidentally the same algorithm I had used to generate surfel hierarchies for global illumination sims in the past.&lt;/p&gt;
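
&lt;p&gt;As a rough sketch of the interior relaxation step (5), the update for a single vertex might look something like this (the incident triangle list is assumed to come from the current triangulation):&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// circumcenter of the 2D triangle (a, b, c)
void Circumcenter(const float a[2], const float b[2], const float c[2], float out[2])
{
    float bx = b[0]-a[0], by = b[1]-a[1];
    float cx = c[0]-a[0], cy = c[1]-a[1];

    float d = 2.0f*(bx*cy - by*cx);

    out[0] = a[0] + (cy*(bx*bx + by*by) - by*(cx*cx + cy*cy))/d;
    out[1] = a[1] + (bx*(cx*cx + cy*cy) - cx*(bx*bx + by*by))/d;
}

// move one interior vertex to the centroid of its Voronoi cell, approximated as the
// area weighted average of the circumcenters of its incident triangles
// verts: packed xy positions, tris: numTris index triples into verts (the incident triangles)
void RelaxInteriorVertex(float* verts, int vertex, const int* tris, int numTris)
{
    float sum[2] = { 0.0f, 0.0f };
    float weight = 0.0f;

    for (int i=0; i &amp;lt; numTris; ++i)
    {
        const float* a = &amp;amp;verts[tris[i*3+0]*2];
        const float* b = &amp;amp;verts[tris[i*3+1]*2];
        const float* c = &amp;amp;verts[tris[i*3+2]*2];

        float area = 0.5f*fabsf((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]));

        float cc[2];
        Circumcenter(a, b, c, cc);

        sum[0] += cc[0]*area;
        sum[1] += cc[1]*area;
        weight += area;
    }

    verts[vertex*2+0] = sum[0]/weight;
    verts[vertex*2+1] = sum[1]/weight;
}
&lt;/pre&gt;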

&lt;p&gt;Here&#39;s an animation of 7 iterations on the Armadillo, note the number of points stays the same throughout (another nice property):&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2012/06/figure_variational.gif&#34; alt=&#34;&#34; title=&#34;figure_variational&#34; class=&#34;aligncenter size-full wp-image-1507&#34; /&gt;&lt;/p&gt;

&lt;p&gt;It&#39;s interesting to see how much the quality improves after the very first step. Although Alliez et al. &lt;a href=&#34;#ref3&#34;&gt;[3]&lt;/a&gt; don&#39;t provide any guarantees on the resulting mesh quality I found the algorithm works very well on a variety of input images with a fixed number of iterations.&lt;/p&gt;

&lt;p&gt;This is the algorithm I ended up using but I&#39;ll quickly cover a couple of alternatives for completeness.&lt;/p&gt;

&lt;h3&gt;Structured Methods&lt;/h3&gt;

&lt;p&gt;These algorithms typically start by tiling interior space using a BCC (body centered cubic) lattice which is simply two interleaved grids. They then generate a Delaunay triangulation and throw away elements lying completely outside the region of interest.&lt;/p&gt;

&lt;p&gt;As usual, handling boundaries is where the real challenge lies. Molino et al. &lt;a href=&#34;#ref4&#34;&gt;[4]&lt;/a&gt; use a force based simulation to push grid points towards the boundary. Isosurface Stuffing &lt;a href=&#34;#ref5&#34;&gt;[5]&lt;/a&gt; refines the boundary by directly moving vertices to the zero-contour of a signed distance field, or inserts new vertices if moving the existing lattice nodes would generate a poor quality triangle.&lt;/p&gt;

&lt;p&gt;Lattice based methods are typically very fast and don&#39;t suffer from the numerical robustness issues of algorithms that rely on triangulation. However if you plan on fracturing the mesh along element boundaries then this regular nature is exposed and looks quite unconvincing.&lt;/p&gt;

&lt;h3 id=&#34;simplification-methods&#34;&gt;Simplification Methods&lt;/h3&gt;

&lt;p&gt;Another approach is to start with a very fine-grained mesh and progressively simplify it in the style of Progressive Meshes&lt;a href=&#34;#ref6&#34;&gt; [6]&lt;/a&gt;. Barbara Cutler&#39;s &lt;a href=&#34;http://people.csail.mit.edu/bmcutler/PROJECTS/PHD/index.html&#34;&gt;thesis&lt;/a&gt; and associated &lt;a href=&#34;http://people.csail.mit.edu/bmcutler/PROJECTS/SGP04/index.html&#34;&gt;paper&lt;/a&gt; discusses the details and very helpfully provides &lt;a href=&#34;http://people.csail.mit.edu/bmcutler/PROJECTS/SGP04/meshes/index.html&#34;&gt;the resulting tetrahedral meshes&lt;/a&gt;, but the implementation appears to be considerably more complex than variational methods and relies on quite a few heuristics to get good results.&lt;/p&gt;

&lt;h3 id=&#34;simulation&#34;&gt;Simulation&lt;/h3&gt;

&lt;p&gt;Now that the mesh is ready it&#39;s time for the fun part (apologies if you really love meshing). This simple simulation uses co-rotational linear FEM with a semi-implicit time-stepping scheme:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/44652965&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;(Armadillo and Bunny images courtesy of the &lt;a href=&#34;http://graphics.stanford.edu/data/3Dscanrep/&#34;&gt;Stanford Scanning Repository&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Pre-built binaries for OSX/Win32 here: &lt;a href=&#34;http://mmacklin.com/fem.zip&#34;&gt;http://mmacklin.com/fem.zip&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source code is available on Github: &lt;a href=&#34;https://github.com/mmacklin/sandbox/tree/master/projects/fem&#34;&gt;https://github.com/mmacklin/sandbox/tree/master/projects/fem&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Refs:&lt;/h3&gt;

&lt;p&gt;&lt;a name=&#34;ref1&#34;&gt;&lt;/a&gt;[1] Matthias Müller, Jos Stam, Doug James, and Nils Thürey. Real time physics: class notes. In ACM SIGGRAPH 2008 classes &lt;a href=&#34;http://www.matthiasmueller.info/realtimephysics/index.html&#34;&gt;http://www.matthiasmueller.info/realtimephysics/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a name=&#34;ref2&#34;&gt;&lt;/a&gt;[2] Jonathan Richard Shewchuk. 2002. What Is a Good Linear Finite Element? Interpolation, Conditioning, Anisotropy, and Quality Measures, unpublished preprint. &lt;a href=&#34;http://www.cs.berkeley.edu/~jrs/papers/elemj.pdf&#34;&gt;http://www.cs.berkeley.edu/~jrs/papers/elemj.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a name=&#34;ref3&#34;&gt;&lt;/a&gt;[3] Pierre Alliez, David Cohen-Steiner, Mariette Yvinec, and Mathieu Desbrun. 2005. Variational tetrahedral meshing. &lt;a href=&#34;ftp://ftp-sop.inria.fr/prisme/alliez/vtm.pdf&#34;&gt;ftp://ftp&amp;#8209;sop.inria.fr/prisme/alliez/vtm.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a name=&#34;ref4&#34;&gt;&lt;/a&gt;[4] Molino, Bridson, et al. - 2003. A Crystalline, Red Green Strategy for Meshing Highly Deformable Objects with Tetrahedra &lt;a href=&#34;http://www.math.ucla.edu/~jteran/papers/MBTF03.pdf&#34;&gt;http://www.math.ucla.edu/~jteran/papers/MBTF03.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a name=&#34;ref5&#34;&gt;&lt;/a&gt;[5] François Labelle and Jonathan Richard Shewchuk. 2007. Isosurface stuffing: fast tetrahedral meshes with good dihedral angles. In ACM SIGGRAPH 2007 papers &lt;a href=&#34;http://www.cs.berkeley.edu/~jrs/papers/stuffing.pdf&#34;&gt;http://www.cs.berkeley.edu/~jrs/papers/stuffing.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a name=&#34;ref6&#34;&gt;&lt;/a&gt;[6] Hugues Hoppe. 1996. Progressive meshes. &lt;a href=&#34;http://research.microsoft.com/en-us/um/people/hoppe/pm.pdf&#34;&gt;http://research.microsoft.com/en-us/um/people/hoppe/pm.pdf&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
    
      
        <item>
          <title>Implicit Springs</title>
          <link>http://blog.mmacklin.com/2012/05/04/implicitsprings/</link>
          <pubDate>Fri, 04 May 2012 11:43:39 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2012/05/04/implicitsprings/</guid>
          <description>&lt;p&gt;This is a quick post to document some work I did while writing a mass spring simulation using an implicit integrator. Implicit, or backward Euler integration is well described in David Baraff&#39;s &lt;a href=&#34;http://www.pixar.com/companyinfo/research/pbm2001/&#34;&gt;Physically Based Modelling SIGGRAPH course&lt;/a&gt; and this post assumes some familiarity with it.&lt;/p&gt;

&lt;p&gt;Springs are a workhorse in physical simulation, once you have unconditionally stable springs you can use them to model just about anything, from rigid bodies to &lt;a href=&#34;http://www.rhythm.com/~tae/wichita.pdf&#34;&gt;drool and snot&lt;/a&gt;. For example, Industrial Light &amp;amp; Magic used a tetrahedral mesh with edge and altitude springs to model the damage to ships in Avatar (see &lt;a target=&#34;_blank&#34; href=&#34;http://physbam.stanford.edu/~mlentine/images/deformingrigids.pdf&#34;&gt; Avatar: Bending Rigid Bodies&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;If you sit down and try to implement an implicit integrator, one of the first things you need is the Jacobian of the particle forces with respect to the particle positions and velocities. The rest of this post shows how to derive these Jacobians for a basic Hookean spring in a form ready to be plugged into a linear system solver (I use a hand-rolled conjugate gradient solver; see Jonathan Shewchuk&#39;s &lt;a target=&#34;_blank&#34; href=&#34;http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf&#34;&gt;painless introduction&lt;/a&gt; for details, it is all of about 20 lines of code to implement).&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\renewcommand{\v}[1]{\mathbf{#1}} \newcommand{\uv}[1]{\mathbf{\widehat{#1}}} \newcommand\ddx[1]{\frac{\partial#1}{\partial \v{x} }} \newcommand\dd[2]{\frac{\partial#1}{\partial #2}}\]&lt;/span&gt;&lt;/p&gt;

&lt;h3&gt;Vector Calculus Basics&lt;/h3&gt;

&lt;p&gt;In order to calculate the force Jacobians we first need to know how to calculate the derivatives of some basic geometric quantities with respect to a vector.&lt;/p&gt;

&lt;p&gt;In general the derivative of a scalar valued function with respect to a vector is defined as the following row vector of partial derivatives:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[ \ddx{f} = \begin{bmatrix} \dd{f}{x_i} &amp; \dd{f}{x_j} &amp; \dd{f}{x_k} \end{bmatrix}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;And for a vector valued function with respect to a vector:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\ddx{\v{f}} = \begin{bmatrix} \dd{f_i}{x_i} &amp; \dd{f_i}{x_j} &amp; \dd{f_i}{x_k} \\ \dd{f_j}{x_i} &amp; \dd{f_j}{x_j} &amp; \dd{f_j}{x_k} \\ \dd{f_k}{x_i} &amp; \dd{f_k}{x_j} &amp; \dd{f_k}{x_k} \end{bmatrix}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Applying the first definition to the dot product of two vectors we can calculate the derivative with respect to one of the vectors:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\ddx{\v{x}^T \cdot \v{y}} = \v{y}^T \]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Note that I&#39;ll explicitly keep track of whether vectors are row or column vectors as it will help keep things straight later on.&lt;/p&gt;

&lt;p&gt;The derivative of a vector magnitude with respect to the vector, gives the normalized vector transposed:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\ddx{|\v{x}|} = \left(\frac{\v{x}}{|\v{x}|}\right)^T = \uv{x}^T \]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;The derivative of a normalized vector &lt;span  class=&#34;math&#34;&gt;\(\v{\widehat{x}} = \frac{\v{x}}{|\v{x}|} \)&lt;/span&gt; can be obtained using the quotient rule:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\ddx{\uv{x}} = \frac{\v{I}|\v{x}| - \v{x}\cdot\uv{x}^T}{|\v{x}|^2}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;span  class=&#34;math&#34;&gt;\(\v{I}\)&lt;/span&gt; is the &lt;span  class=&#34;math&#34;&gt;\(n\)&lt;/span&gt; x &lt;span  class=&#34;math&#34;&gt;\(n\)&lt;/span&gt; identity matrix and n is the dimension of &lt;span  class=&#34;math&#34;&gt;\(x\)&lt;/span&gt;. The product of a column vector and a row vector &lt;span  class=&#34;math&#34;&gt;\(\uv{x}\cdot\uv{x}^T\)&lt;/span&gt; is the outer product, which is an &lt;span  class=&#34;math&#34;&gt;\(n\)&lt;/span&gt; x &lt;span  class=&#34;math&#34;&gt;\(n\)&lt;/span&gt; matrix that can be constructed using the standard matrix multiplication definition.&lt;/p&gt;

&lt;p&gt;Dividing through by &lt;span  class=&#34;math&#34;&gt;\(|\v{x}|\)&lt;/span&gt; we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\ddx{\uv{x}} = \frac{\v{I} - \uv{x}\cdot\uv{x}^T}{\v{|x|}}\]&lt;/span&gt;&lt;/p&gt;

&lt;h3&gt;Jacobian of Stretch Force&lt;/h3&gt;

&lt;p&gt;Now we are ready to compute the Jacobian of the spring forces. Recall the equation for the elastic force on a particle &lt;span  class=&#34;math&#34;&gt;\(i\)&lt;/span&gt; due to an undamped Hookean spring:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\v{F_s} = -k_s(|\v{x}_{ij}| - r)\uv{x}_{ij}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;span  class=&#34;math&#34;&gt;\(\v{x}_{ij} = \v{x}_i - \v{x}_j\)&lt;/span&gt; is the vector between the two connected particle positions, &lt;span  class=&#34;math&#34;&gt;\(r\)&lt;/span&gt; is the rest length and &lt;span  class=&#34;math&#34;&gt;\(k_s\)&lt;/span&gt; is the stiffness coefficient.&lt;/p&gt;

&lt;p&gt;The Jacobian of this force with respect to particle &lt;span  class=&#34;math&#34;&gt;\(i\)&lt;/span&gt;&#39;s position is obtained by using the product rule for the two &lt;span  class=&#34;math&#34;&gt;\(\v{x}_i\)&lt;/span&gt; dependent terms in &lt;span  class=&#34;math&#34;&gt;\(\v{F_s}\)&lt;/span&gt;:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_s}}{\v{x}_i} = -k_s\left[(|\v{x}_{ij}| - r)\dd{\uv{x}_{ij}}{\v{x}_i} + \uv{x}_{ij}\dd{(|\v{x}_{ij}| - r)}{\v{x}_i}\right]\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Using the previously derived formulas for the derivative of a vector magnitude and normalized vector we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_s}}{\v{x}_i} = -k_s\left[(|\v{x}_{ij}| - r)\left(\frac{\v{I} - \uv{x}_{ij}\cdot \uv{x}_{ij}^T}{|\v{x}_{ij}|}\right) + \uv{x}_{ij}\cdot\uv{x}_{ij}^T\right]\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Dividing the first two terms through by &lt;span  class=&#34;math&#34;&gt;\(|\v{x}_{ij}|\)&lt;/span&gt;:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_s}}{\v{x}_i} = -k_s\left[(1 - \frac{r}{|\v{x}_{ij}|})\left(\v{I} - \uv{x}_{ij}\cdot \uv{x}_{ij}^T\right) + \uv{x}_{ij}\cdot \uv{x}_{ij}^T\right]\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Due to the symmetry in the definition of &lt;span  class=&#34;math&#34;&gt;\(\v{x}_{ij}\)&lt;/span&gt; we have the following force derivative with respect to the opposite particle:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_s}}{\v{x}_j}  = -\dd{\v{F_s}}{\v{x}_i}\]&lt;/span&gt;&lt;/p&gt;

&lt;h3&gt;Jacobian of Damping Force&lt;/h3&gt;

&lt;p&gt;The equation for the damping force on a particle &lt;span  class=&#34;math&#34;&gt;\(i\)&lt;/span&gt; due to a spring is:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\v{F_d} = -k_d\cdot\uv{x}_{ij}(\v{v}_{ij}\cdot \uv{x}_{ij})\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;span  class=&#34;math&#34;&gt;\(\v{v}_{ij} = \v{v}_i-\v{v}_j\)&lt;/span&gt; is the relative velocity of the two particles. This is the preferred formulation because it damps only relative velocity along the spring axis.&lt;/p&gt;

&lt;p&gt;Taking the derivative with respect to &lt;span  class=&#34;math&#34;&gt;\(\v{v}_i\)&lt;/span&gt;:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_d}}{\v{v}_i} = -k_d\cdot\uv{x}_{ij}\cdot\uv{x}_{ij}^T\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;As with stretching, the force on the opposite particle is simply negated:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\dd{\v{F_d}}{\v{v}_j} = -\dd{\v{F_d}}{\v{v}_i} \]&lt;/span&gt;&lt;/p&gt;
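
&lt;p&gt;To make this concrete, here is a rough sketch of both Jacobians in code (my own illustration using plain float arrays; as noted above, the Jacobians with respect to the opposite particle are just the negations):&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// x is the difference vector x_ij = x_i - x_j, ks the stiffness, r the rest length
// J receives the 3x3 Jacobian dFs/dx_i; dFs/dx_j is simply -J
void StretchJacobian(const float x[3], float ks, float r, float J[3][3])
{
    float len = sqrtf(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    float n[3] = { x[0]/len, x[1]/len, x[2]/len }; // normalized spring axis

    for (int i=0; i &amp;lt; 3; ++i)
    {
        for (int j=0; j &amp;lt; 3; ++j)
        {
            float identity = (i == j) ? 1.0f : 0.0f;
            float outer = n[i]*n[j]; // outer product term

            // dFs/dx_i = -ks*[(1 - r/|x|)*(I - n*n^T) + n*n^T]
            J[i][j] = -ks*((1.0f - r/len)*(identity - outer) + outer);
        }
    }
}

// J receives the 3x3 Jacobian dFd/dv_i; dFd/dv_j is simply -J
void DampingJacobian(const float x[3], float kd, float J[3][3])
{
    float len = sqrtf(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    float n[3] = { x[0]/len, x[1]/len, x[2]/len };

    // dFd/dv_i = -kd*n*n^T
    for (int i=0; i &amp;lt; 3; ++i)
        for (int j=0; j &amp;lt; 3; ++j)
            J[i][j] = -kd*n[i]*n[j];
}
&lt;/pre&gt;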

&lt;p&gt;Note that implicit integration introduces its own artificial damping, so you might find it&#39;s not necessary to add as much additional damping as you would with an explicit integration scheme.&lt;/p&gt;

&lt;p&gt;I&#39;ll be going into more detail about implicit methods and FEM in subsequent posts, stay tuned!&lt;/p&gt;

&lt;h3&gt;Refs&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;http://www.pixar.com/companyinfo/research/pbm2001&#34;&gt;[Baraff Witkin] - Physically Based Modelling, SIGGRAPH course&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://run.usc.edu/cs599-s10/cloth/baraff-witkin98.pdf&#34;&gt;[Baraff Witkin] - Large Steps in Cloth Simulation&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://njoubert.com/teaching/cs184_sp09/section/simulation.pdf&#34;&gt;[N. Joubert] - An Introduction to Simulation&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://davidpritchard.org/freecloth/docs/report.pdf&#34;&gt;[D Prichard] - Implementing Baraff and Witkin&#39;s Cloth Simulation&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://graphics.snu.ac.kr/~kjchoi/publication/cloth.pdf&#34;&gt;[Choi] - Stable but Responsive Cloth&lt;/a&gt;&lt;br&gt;
Numerical Recipes, 3rd edition 2007 - ch17.5&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>New Look</title>
          <link>http://blog.mmacklin.com/2012/05/03/new-look/</link>
          <pubDate>Thu, 03 May 2012 00:29:09 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2012/05/03/new-look/</guid>
          <description>&lt;p&gt;Hi all, welcome to my new site. I&#39;ve moved to my own hosting and have updated a few things - a new theme and a switch to MathJax for equation rendering. Apologies to RSS readers who will now see only a bunch of LaTeX code, but it is currently by far the easiest way to put decent looking equations in a web page.&lt;/p&gt;

&lt;p&gt;It&#39;s been a little over a year since I started working at NVIDIA and, not coincidentally, since my last blog post. I&#39;m really enjoying working more on the simulation side of things; it makes a nice change from pure rendering, and the PhysX team is full of über-talented people who I&#39;m learning a lot from.&lt;/p&gt;

&lt;p&gt;I&#39;ve got some simulation related posts (from a graphics programmer&#39;s perspective) planned over the next few months, I hope you enjoy them!&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Blackbody Rendering</title>
          <link>http://blog.mmacklin.com/2010/12/29/blackbody-rendering/</link>
          <pubDate>Wed, 29 Dec 2010 08:21:08 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/12/29/blackbody-rendering/</guid>
          <description>&lt;p&gt;In between bouts of festive over-eating I added support for blackbody emission to my fluid simulator and thought I&#39;d describe what was involved.&lt;/p&gt;

&lt;p&gt;Briefly, a &lt;a href=&#34;http://en.wikipedia.org/wiki/Black_body&#34;&gt;blackbody&lt;/a&gt; is an idealised substance that gives off light when heated. &lt;a href=&#34;http://en.wikipedia.org/wiki/Planck&#39;s_law&#34;&gt;Planck&#39;s formula&lt;/a&gt; describes the intensity of light per-wavelength with units &lt;strong&gt;W·sr&lt;sup&gt;-1&lt;/sup&gt;·m&lt;sup&gt;-2&lt;/sup&gt;·m&lt;sup&gt;-1&lt;/sup&gt;&lt;/strong&gt; for a given temperature in Kelvins.&lt;/p&gt;

&lt;p&gt;Radiance has units &lt;strong&gt;W·sr&lt;sup&gt;-1&lt;/sup&gt;·m&lt;sup&gt;-2&lt;/sup&gt;&lt;/strong&gt; so we need a way to convert the wavelength dependent power distribution given by Planck&#39;s formula to a radiance value in RGB that we can use in our shader / ray-tracer.&lt;/p&gt;

&lt;p&gt;The typical way to do this is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Integrate Planck&#39;s formula against the CIE XYZ colour matching functions (available as part of &lt;a href=&#34;https://github.com/mmp/pbrt-v2/blob/master/src/core/spectrum.cpp&#34;&gt;PBRT&lt;/a&gt; in 1nm increments)&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Convert from &lt;a href=&#34;http://en.wikipedia.org/wiki/CIE_1931_color_space&#34;&gt;XYZ&lt;/a&gt; to linear &lt;a href=&#34;http://en.wikipedia.org/wiki/SRGB&#34;&gt;sRGB&lt;/a&gt; (do not perform gamma correction yet)&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Render as normal&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Perform tone-mapping / gamma correction&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We are throwing away spectral information by projecting into XYZ, but a quick dimensional analysis shows that we now at least have the correct units (because the integration is with respect to &lt;em&gt;dλ&lt;/em&gt; measured in meters, the extra &lt;strong&gt;m&lt;sup&gt;-1&lt;/sup&gt;&lt;/strong&gt; is removed).&lt;/p&gt;
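
&lt;p&gt;As a rough sketch of step 1 (my own illustration; the CIE matching function tables are assumed to be sampled every 1nm, e.g. the 360-830nm tables that ship with PBRT), the integration is just a sum over wavelengths:&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// Planck&#39;s law: spectral radiance in W·sr^-1·m^-2·m^-1 for a wavelength in meters
// and a temperature in Kelvins
double PlanckRadiance(double lambda, double t)
{
    const double h = 6.62607e-34;  // Planck&#39;s constant
    const double c = 2.99792458e8; // speed of light
    const double k = 1.38065e-23;  // Boltzmann&#39;s constant

    return (2.0*h*c*c)/(pow(lambda, 5.0)*(exp(h*c/(lambda*k*t)) - 1.0));
}

// integrate Planck&#39;s formula against the CIE XYZ matching functions
// cieX/cieY/cieZ hold numSamples values sampled every 1nm starting at lambdaStart (nm)
void BlackbodyXYZ(double t, const float* cieX, const float* cieY, const float* cieZ,
                  int numSamples, double lambdaStart, double xyz[3])
{
    xyz[0] = xyz[1] = xyz[2] = 0.0;

    const double dLambda = 1.0e-9; // 1nm step expressed in meters

    for (int i=0; i &amp;lt; numSamples; ++i)
    {
        double lambda = (lambdaStart + i)*1.0e-9; // wavelength in meters
        double le = PlanckRadiance(lambda, t);

        xyz[0] += le*cieX[i]*dLambda;
        xyz[1] += le*cieY[i]*dLambda;
        xyz[2] += le*cieZ[i]*dLambda;
    }
}
&lt;/pre&gt;

&lt;p&gt;The resulting XYZ radiance can then be pushed through a standard XYZ to linear sRGB matrix (step 2) before rendering.&lt;/p&gt;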

&lt;p&gt;I was going to write more about the colour conversion process, but I didn&#39;t want to add to the confusion out there by accidentally misusing terminology. Instead here are a couple of papers describing the conversion from Spectrum-&amp;gt;RGB and RGB-&amp;gt;Spectrum, questions about these come up all the time on various forums and I think these two papers do a good job of providing background and clarifying the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.anyhere.com/gward/papers/egwr02/index.html&#34;&gt;Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.9608&#34;&gt;An RGB to Spectrum Conversion for Reflectances&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And some more general colour space links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://graphics.stanford.edu/courses/cs148-10-summer/docs/2010--kerr--cie_xyz.pdf&#34;&gt;The CIE XYZ and xyY Color Spaces by Douglas Kerr&lt;/a&gt; (particularly good)&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://renderwonk.com/publications/s2010-color-course/&#34;&gt;SIGGRAPH 2010: Color Enhancement and Rendering in Film and Game Production&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;ftp://rtfm.mit.edu/pub/usenet/news.answers/graphics/colorspace-faq&#34;&gt;Color Space FAQ&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a small sample of linear sRGB radiance values for different Blackbody temperatures:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
1000K: 1.81e-02, 1.56e-04, 1.56e-04
2000K: 1.71e+03, 4.39e+02, 4.39e+02
4000K: 5.23e+05, 3.42e+05, 3.42e+05
8000K: 9.22e+06, 9.65e+06, 9.65e+06
&lt;/pre&gt;

&lt;p&gt;It&#39;s clear from the range of values that we need some sort of exposure control and tone-mapping. I simply picked a temperature in the upper end of my range (around 3000K) and scaled intensities around it before applying Reinhard tone mapping and gamma correction. You can also perform more advanced mapping by taking into account the human visual system adaptation as described in &lt;a href=&#34;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.3511&amp;amp;rep=rep1&amp;amp;type=pdf&#34;&gt;Physically Based Modeling and Animation of Fire&lt;/a&gt;.&lt;/p&gt;
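
&lt;p&gt;The simple version amounts to something like this per channel (a sketch only; the exposure value is whatever scale factor brings your chosen reference temperature into a sensible range):&lt;/p&gt;

&lt;pre class=&#34;prettyprint lang-cc&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// exposure control, Reinhard tone mapping and gamma correction for one linear sRGB channel
float ToneMap(float linear, float exposure)
{
    float c = linear*exposure;  // exposure control
    c = c/(1.0f + c);           // Reinhard tone mapping
    return powf(c, 1.0f/2.2f);  // gamma correction for display
}
&lt;/pre&gt;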

&lt;p&gt;Again, the hardest part was setting up the simulation parameters to get the look you want; here&#39;s one I spent at least 4 days tweaking:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/18232573&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;Simulation time is ~30s a frame (10 substeps) on a 128^3 grid tracking temperature, fuel, smoke and velocity. Most of that time is spent in the tri-cubic interpolation during advection; I&#39;ve been meaning to try MacCormack advection to see if it&#39;s a net win.&lt;/p&gt;

&lt;p&gt;There are some pretty obvious artifacts due to the tri-linear interpolation on the GPU; these would be helped by a higher resolution grid or by manually performing tri-cubic interpolation in the shader.&lt;/p&gt;

&lt;p&gt;Inspired by Kevin Beason&#39;s work in progress videos I put together a collection of my own failed tests which I think are quite amusing:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/18232467&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
    
      
        <item>
          <title>Adventures in Fluid Simulation</title>
          <link>http://blog.mmacklin.com/2010/11/01/adventures-in-fluid-simulation/</link>
          <pubDate>Tue, 02 Nov 2010 01:34:27 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/11/01/adventures-in-fluid-simulation/</guid>
          <description>&lt;p&gt;I have to admit to being simultaneously fascinated and slightly intimidated by the fluid simulation crowd. I&#39;ve been watching the videos on &lt;a href=&#34;http://physbam.stanford.edu/~fedkiw/&#34;&gt;Ron Fedkiw&#39;s page&lt;/a&gt; for years and am still in awe of his results, which sometimes seem little short of magic.&lt;/p&gt;

&lt;p&gt;Recently I resolved to write my first fluid simulator and purchased a copy of &lt;a href=&#34;http://www.cs.ubc.ca/~rbridson/fluidbook/&#34;&gt;Fluid Simulation for Computer Graphics&lt;/a&gt; by Robert Bridson.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;http://blog.mmacklin.com/wp-content/uploads/2011/08/fluidbook.jpg&#34; alt=&#34;&#34; title=&#34;fluidbook&#34; style=&#34;float: right; margin: 16px&#34; width=&#34;125&#34; height=&#34;185&#34; class=&#34;alignright size-full wp-image-1461&#34; /&gt; Like a lot of developers my first exposure to the subject was &lt;a href=&#34;http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/ns.pdf&#34;&gt;Jos Stam&#39;s stable fluids paper&lt;/a&gt; and his more accessible &lt;a href=&#34;http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/GDC03.pdf&#34;&gt;Fluid Dynamics for Games&lt;/a&gt; presentation. While the ideas are undeniably great, I never came away feeling like I truly understood the concepts or the mathematics behind it.&lt;/p&gt;

&lt;p&gt;I&#39;m happy to report that Bridson&#39;s book has helped change that. It includes a review of vector calculus in the appendix that is given in a wonderfully straight-forward and concise manner; Bridson takes almost nothing for granted and gives lots of real-world examples, which help with some of the less intuitive concepts.&lt;/p&gt;

&lt;p&gt;I&#39;m planning a bigger post on the subject but I thought I&#39;d write a quick update with my progress so far.&lt;/p&gt;

&lt;p&gt;I started out with a 2D simulation similar to Stam&#39;s demos; having a 2D implementation that you&#39;re confident in is really useful when you want to quickly try out different techniques and sanity check results when things go wrong in 3D (and they will).&lt;/p&gt;

&lt;p&gt;Before you write the 3D sim though, you need a way of visualising the data. I spent quite a while on this and implemented a single-scattering model using brute force ray-marching on the GPU.&lt;/p&gt;

&lt;p&gt;I did some tests with a procedural pyroclastic cloud model which you can see below, this runs at around 25ms on my MacBook Pro (NVIDIA 320M) but you can dial the sample counts up and down to suit:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/16159247&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;Here&#39;s a simplified GLSL snippet of the volume rendering shader. It&#39;s not at all optimised, apart from some branches to skip over empty space and an assumption that absorption varies linearly with density:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
uniform sampler3D g_densityTex;
uniform vec3 g_lightPos;
uniform vec3 g_lightIntensity;
uniform vec3 g_eyePos;
uniform float g_absorption;

void main()
{
    // diagonal of the cube
    const float maxDist = sqrt(3.0);

    const int numSamples = 128;
    const float scale = maxDist/float(numSamples);

    const int numLightSamples = 32;
    const float lscale = maxDist / float(numLightSamples);

    // assume all coordinates are in texture space
    vec3 pos = gl_TexCoord[0].xyz;
    vec3 eyeDir = normalize(pos-g_eyePos)*scale;

    // transmittance
    float T = 1.0;
    // in-scattered radiance
    vec3 Lo = vec3(0.0);

    for (int i=0; i &amp;lt; numSamples; ++i)
    {
        // sample density
        float density = texture3D(g_densityTex, pos).x;

        // skip empty space
        if (density &amp;gt; 0.0)
        {
            // attenuate ray-throughput
            T *= 1.0-density*scale*g_absorption;
            if (T &amp;lt;= 0.01)
                break;

            // point light dir in texture space
            vec3 lightDir = normalize(g_lightPos-pos)*lscale;

            // sample light
            float Tl = 1.0; // transmittance along light ray
            vec3 lpos = pos + lightDir;

            for (int s=0; s &amp;lt; numLightSamples; ++s)
            {
                float ld = texture3D(g_densityTex, lpos).x;
                Tl *= 1.0-g_absorption*lscale*ld;

                if (Tl &amp;lt;= 0.01)
                    break;

                lpos += lightDir;
            }

            vec3 Li = g_lightIntensity*Tl;

            Lo += Li*T*density*scale;
        }

        pos += eyeDir;
    }

    gl_FragColor.xyz = Lo;
    gl_FragColor.w = 1.0-T;
}
&lt;/pre&gt;

&lt;p&gt;I&#39;m pretty sure there&#39;s a whole post on the ways this could be optimised but I&#39;ll save that for next time.  Also, this example shader doesn&#39;t have any wavelength dependent variation.  Making your absorption coefficient different for each channel looks much more interesting, and having a different coefficient for your primary and shadow rays also helps; you can see this effect in the videos.&lt;/p&gt;

&lt;p&gt;To create the cloud like volume texture in OpenGL I use a displaced distance field like this (see the SIGGRAPH course for more details):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// create a volume texture with n^3 texels and base radius r
// (assumes a typedef unsigned char byte; and a fractal noise function
// Perlin3D(x, y, z, octaves, persistence) defined elsewhere)
GLuint CreatePyroclasticVolume(int n, float r)
{
    GLuint texid;
    glGenTextures(1, &amp;amp;texid);

    GLenum target = GL_TEXTURE_3D;
    GLenum filter = GL_LINEAR;
    GLenum address = GL_CLAMP_TO_BORDER;

    glBindTexture(target, texid);

    glTexParameteri(target, GL_TEXTURE_MAG_FILTER, filter);
    glTexParameteri(target, GL_TEXTURE_MIN_FILTER, filter);

    glTexParameteri(target, GL_TEXTURE_WRAP_S, address);
    glTexParameteri(target, GL_TEXTURE_WRAP_T, address);
    glTexParameteri(target, GL_TEXTURE_WRAP_R, address);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    byte *data = new byte[n*n*n];
    byte *ptr = data;

    float frequency = 3.0f / n;
    float center = n / 2.0f + 0.5f;

    for(int x=0; x &amp;lt; n; x++)
    {
        for (int y=0; y &amp;lt; n; ++y)
        {
            for (int z=0; z &amp;lt; n; ++z)
            {
                float dx = center-x;
                float dy = center-y;
                float dz = center-z;

                float off = fabsf(Perlin3D(x*frequency,
                               y*frequency,
                               z*frequency,
                               5,
                               0.5f));

                float d = sqrtf(dx*dx+dy*dy+dz*dz)/(n);

                *ptr++ = ((d-off) &amp;lt; r)?255:0;
            }
        }
    }

    // upload
    glTexImage3D(target,
                 0,
                 GL_LUMINANCE,
                 n,
                 n,
                 n,
                 0,
                 GL_LUMINANCE,
                 GL_UNSIGNED_BYTE,
                 data);

    delete[] data;

    return texid;
}
&lt;/pre&gt;

&lt;p&gt;An excellent introduction to volume rendering is the SIGGRAPH 2010 course, &lt;a href=&#34;http://magnuswrenninge.com/volumetricmethods&#34;&gt;Volumetric Methods in Visual Effects&lt;/a&gt; and Kyle Hayward&#39;s &lt;a href=&#34;http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html&#34;&gt;Volume Rendering 101&lt;/a&gt; for some GPU specifics.&lt;/p&gt;

&lt;p&gt;Once I had the visualisation in place, porting the fluid simulation to 3D was actually not too difficult. I spent most of my time tweaking the initial conditions to get the smoke to behave in a way that looks interesting; you can see one of my more successful simulations below:&lt;/p&gt;

&lt;p&gt;&lt;div class=&#34;videocontainer&#34;&gt;&lt;iframe src=&#34;https://player.vimeo.com/video/16357651&#34; frameborder=&#34;0&#34; webkitallowfullscreen mozallowfullscreen allowfullscreen class=&#34;video&#34;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;Currently the simulation runs entirely on the CPU using a 128^3 grid with monotonic tri-cubic interpolation and vorticity confinement as described in &lt;a href=&#34;http://graphics.ucsd.edu/~henrik/papers/smoke/smoke.pdf&#34;&gt;Visual Simulation of Smoke&lt;/a&gt; by Fedkiw.  I&#39;m fairly happy with the result but perhaps I have the vorticity confinement cranked a little high.&lt;/p&gt;
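
&lt;p&gt;For reference, the confinement force in that paper is built from the local vorticity &lt;span  class=&#34;math&#34;&gt;\(\omega = \nabla \times \mathbf{u}\)&lt;/span&gt; and added back into the velocity field each step:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\mathbf{N} = \frac{\nabla|\omega|}{|\nabla|\omega||}, \qquad \mathbf{f}_{conf} = \epsilon h \, (\mathbf{N} \times \omega)\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;where &lt;span  class=&#34;math&#34;&gt;\(h\)&lt;/span&gt; is the grid spacing and &lt;span  class=&#34;math&#34;&gt;\(\epsilon\)&lt;/span&gt; controls the strength of the effect.&lt;/p&gt;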

&lt;p&gt;Nothing is optimised so it&#39;s running at about 1.2s a frame on my 2.66GHz Core 2 MacBook.&lt;/p&gt;

&lt;p&gt;Future work is to port the simulation to OpenCL and implement some more advanced features.  Specifically I&#39;m interested in &lt;a href=&#34;http://physbam.stanford.edu/~fedkiw/papers/stanford2005-01.pdf&#34;&gt;A Vortex Particle Method for Smoke, Water and Explosions&lt;/a&gt; which &lt;a href=&#34;http://www.kevinbeason.com/&#34;&gt;Kevin Beason&lt;/a&gt; describes on his &lt;a href=&#34;http://www.kevinbeason.com/scs/fluid/&#34;&gt;fluid page&lt;/a&gt; (with some great videos).&lt;/p&gt;

&lt;p&gt;On a personal note, I resigned from LucasArts a couple of weeks ago and am looking forward to some time off back in New Zealand with my family and friends.  Just in time for the Kiwi summer!&lt;/p&gt;

&lt;h3 id=&#34;links&#34;&gt;Links&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html&#34;&gt;GPU Gems - Fluid Simulation on the GPU&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://http.developer.nvidia.com/GPUGems3/gpugems3_ch30.html&#34;&gt;GPU Gems 3 - Real-Time Rendering and Simulation of 3D Fluids&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.colinbraley.com/Pubs/FluidSimColinBraley.pdf&#34;&gt;Fluid Simulation For Computer Graphics: A Tutorial in Grid Based and Particle Based Methods&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;/ol&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
    
      
        <item>
          <title>Tracing</title>
          <link>http://blog.mmacklin.com/2010/10/03/tracing/</link>
          <pubDate>Sun, 03 Oct 2010 23:37:47 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/10/03/tracing/</guid>
          <description>&lt;p&gt;Gregory Pakosz reminded me to write a follow up on my path tracing efforts since my &lt;a href=&#34;http://mmack.wordpress.com/2009/12/02/path-tracing/&#34;&gt;last post&lt;/a&gt; on the subject. &lt;/p&gt;

&lt;p&gt;It&#39;s good timing because the friendly work-place competition between &lt;a href=&#34;http://imdoingitwrong.wordpress.com/&#34;&gt;Tom&lt;/a&gt; and me has been in full swing. The great thing about ray tracing is that there are many opportunities for optimisation at all levels of computation.  This keeps you &#34;hooked&#34; by constantly offering decent speed increases for relatively little effort.&lt;/p&gt;

&lt;p&gt;Tom had an existing BIH (&lt;a href=&#34;http://en.wikipedia.org/wiki/Bounding_interval_hierarchy&#34;&gt;bounding interval hierarchy&lt;/a&gt;) implementation that was doing a pretty good job, so I had some catching up to do.  Previously I had a positive experience using a BVH (&lt;a href=&#34;http://en.wikipedia.org/wiki/Bounding_volume_hierarchy&#34;&gt;AABB tree&lt;/a&gt;) in a games context so I decided to go that route.&lt;/p&gt;

&lt;p&gt;Our benchmark scene was &lt;a href=&#34;http://www.crytek.com/cryengine/cryengine3/downloads&#34;&gt;Crytek&#39;s Sponza&lt;/a&gt; with the camera positioned in the center of the model looking down the z-axis.  This might not be the most representative case but was good enough for comparing primary ray speeds.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/sponza_bench.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1070&#34; title=&#34;sponza_bench&#34; src=&#34;./wp-content/uploads/2010/10/sponza_bench.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here&#39;s a rough timeline of the performance progress (all timings were taken from my 2.6GHz i7 running 8 worker threads):&lt;/p&gt;

&lt;table style=&#34;text-align:left;&#34;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;th&gt;Optimisation&lt;/th&gt;
&lt;th&gt;Rays/second&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Baseline (median split)&lt;/td&gt;
&lt;td&gt;91246&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tweak compiler settings (/fp:fast /sse2 /Ot)&lt;/td&gt;
&lt;td&gt;137486&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Non-recursive traversal&lt;/td&gt;
&lt;td&gt;145847&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traverse closest branch first&lt;/td&gt;
&lt;td&gt;146822&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Surface area heuristic&lt;/td&gt;
&lt;td&gt;1.27589e+006&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Surface area heuristic (exhaustive)&lt;/td&gt;
&lt;td&gt;1.9375e+006&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimized ray-AABB&lt;/td&gt;
&lt;td&gt;2.14232e+006&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VS2008 to VS2010&lt;/td&gt;
&lt;td&gt;2.47746e+006&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
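
&lt;p&gt;For anyone unfamiliar with it, the surface area heuristic rows pick each split by minimising the standard greedy cost estimate:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[C(L,R) = C_{trav} + \frac{A_{L}}{A}N_{L}C_{isect} + \frac{A_{R}}{A}N_{R}C_{isect}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;where &lt;span  class=&#34;math&#34;&gt;\(A_{L}\)&lt;/span&gt; and &lt;span  class=&#34;math&#34;&gt;\(A_{R}\)&lt;/span&gt; are the surface areas of the child bounds, &lt;span  class=&#34;math&#34;&gt;\(A\)&lt;/span&gt; is the parent&#39;s, and &lt;span  class=&#34;math&#34;&gt;\(N_{L}\)&lt;/span&gt;, &lt;span  class=&#34;math&#34;&gt;\(N_{R}\)&lt;/span&gt; are the primitive counts on each side.&lt;/p&gt;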

&lt;p&gt;You can see the massive difference tree quality has on performance.  What I found surprising though was the effect switching to VS2010 had: 15% faster is impressive for a single compiler revision.&lt;/p&gt;

&lt;p&gt;I played around with a quantized BVH which reduced node size from 32 bytes to 11 but I couldn&#39;t get the decrease in cache traffic to outweigh the cost in decoding the nodes.  If anyone has had success with this I&#39;d be interested in the details.&lt;/p&gt;
&lt;p&gt;Algorithmically it is a uni-directional path tracer with multiple importance sampling.  Of course importance sampling doesn&#39;t make individual samples faster, but it allows you to take fewer samples in total than you would have to otherwise.&lt;/p&gt;
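&lt;p&gt;As background, the usual way the different sampling strategies (e.g. BRDF and light samples) are combined is with a weight such as Veach&#39;s balance heuristic:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[w_{i}(x) = \frac{n_{i}\,p_{i}(x)}{\sum_{j} n_{j}\,p_{j}(x)}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;which keeps the combined estimator unbiased while letting whichever strategy has the higher probability density dominate.&lt;/p&gt;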
&lt;p&gt;So, time for some pictures:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/sponza_plus.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1042&#34; title=&#34;Sponza&#34; src=&#34;./wp-content/uploads/2010/10/sponza_plus.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/classroom_neon.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1036&#34; title=&#34;Classroom (from LuxRender distribution)&#34; src=&#34;./wp-content/uploads/2010/10/classroom_neon.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/matte_lucy_big.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1040&#34; title=&#34;Lucy&#34; src=&#34;./wp-content/uploads/2010/10/matte_lucy_big.png&#34; alt=&#34;&#34;/&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/gold_statuette_exp.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1039&#34; title=&#34;Thai Statuette&#34; src=&#34;./wp-content/uploads/2010/10/gold_statuette_exp.png&#34; alt=&#34;&#34;  /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/gold_dragon.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1038&#34; title=&#34;Dragon&#34; src=&#34;./wp-content/uploads/2010/10/gold_dragon.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/10/bunny_fresnel.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1035&#34; title=&#34;Bunny&#34; src=&#34;./wp-content/uploads/2010/10/bunny_fresnel.png&#34; alt=&#34;&#34;  /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Despite being the lowest poly models, Sponza (200k triangles) and the classroom (250k triangles) were by far the most difficult for the renderer; they both took 10+ hours and still have visible noise.  In contrast the gold statuette (10 million triangles) took only 20 mins to converge!&lt;/p&gt;

&lt;p&gt;This is mainly because the architectural models have a mixture of very large and very small polygons which creates deep trees with large nodes near the root.  I think a kd-tree which splits or duplicates primitives might be more effective in this case.&lt;/p&gt;

&lt;p&gt;A fun way to break your spatial hierarchy is simply to add a ground plane.  Until I performed an exhaustive split search, adding a large two-triangle ground plane could slow down tracing by as much as 50%.&lt;/p&gt;

&lt;p&gt;Of course these numbers are peanuts compared to what people are getting with GPU or SIMD packet tracers, &lt;a href=&#34;http://www.tml.tkk.fi/~timo/&#34;&gt;Timo Aila&lt;/a&gt; reports speeds of 142 million rays/second on similar scenes using a GPU tracer in &lt;a href=&#34;http://www.tml.tkk.fi/~timo/publications/aila2009hpg_paper.pdf&#34;&gt;this paper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Writing a path tracer has been a great education for me and I would encourage anyone interested in getting a better grasp on computer graphics to get a copy of PBRT and have a go at it.  It&#39;s easy to get started and seeing the finished product is hugely rewarding.&lt;/p&gt;

&lt;p&gt;&lt;h3 style=&#34;text-align:left;&#34;&gt;Model credits:&lt;/h3&gt;&lt;br&gt;
Sponza - &lt;a href=&#34;http://www.crytek.com/cryengine/cryengine3/downloads&#34;&gt;Crytek&lt;/a&gt;&lt;br&gt;
Classroom - &lt;a href=&#34;http://src.luxrender.net/luxrays/&#34;&gt;LuxRender&lt;/a&gt;&lt;br&gt;
Thai Statuette, Dragon, Bunny, Lucy - &lt;a href=&#34;http://graphics.stanford.edu/data/3Dscanrep/&#34;&gt;Stanford scanning repository&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Faster Fog</title>
          <link>http://blog.mmacklin.com/2010/06/10/faster-fog/</link>
          <pubDate>Fri, 11 Jun 2010 03:24:11 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/06/10/faster-fog/</guid>
          <description>&lt;p&gt;&lt;a href=&#34;http://ccollomb.free.fr/blog/&#34;&gt;Cedrick&lt;/a&gt; at Lucas suggested some nice optimisations for the in-scattering equation I posted &lt;a href=&#34;http://mmack.wordpress.com/2010/05/29/in-scattering-demo/&#34;&gt;last time&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I had left off at:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \frac{\sigma_{s}I}{v}( \tan^{-1}\left(\frac{d+b}{v}\right) - \tan^{-1}\left(\frac{b}{v}\right) )\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;But we can remove one of the two inverse trigonometric functions by using the following identity:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\tan^{-1}x - \tan^{-1}y = \tan^{-1}\frac{x-y}{1+xy}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Which simplifies the expression for &lt;span  class=&#34;math&#34;&gt;\(L_{s}\)&lt;/span&gt; to:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \frac{\sigma_{s}I}{v}( \tan^{-1}\frac{x-y}{1+xy} )\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;With &lt;span  class=&#34;math&#34;&gt;\(x\)&lt;/span&gt; and &lt;span  class=&#34;math&#34;&gt;\(y\)&lt;/span&gt; being replaced by:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\begin{array}{lcl} x = \frac{d+b}{v} \\ y = \frac{b}{v}\end{array}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;So the updated GLSL snippet looks like:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float InScatter(vec3 start, vec3 dir, vec3 lightPos, float d)
{
   vec3 q = start - lightPos;

   // calculate coefficients
   float b = dot(dir, q);
   float c = dot(q, q);
   float s = 1.0f / sqrt(c - b*b);

   // after a little algebraic re-arrangement
   float x = d*s;
   float y = b*s;
   float l = s * atan( (x) / (1.0+(x+y)*y));

   return l;
}
&lt;/pre&gt;

&lt;p&gt;Of course it&#39;s always good to verify your &#39;optimisations&#39;; ideally I would take GPU timings, but the next best thing is to run it through NVShaderPerf and check the cycle counts:&lt;/p&gt;

&lt;p&gt;Original (2x atan()):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
Results 76 cycles, 10 r regs, 2,488,320,064 pixels/s
&lt;/pre&gt;

&lt;p&gt;Updated (1x atan()):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
Results 55 cycles, 8 r regs, 3,251,200,103 pixels/s
&lt;/pre&gt;

&lt;p&gt;A tasty 25% reduction in cycle count!&lt;/p&gt;

&lt;p&gt;Another idea is to use an approximation of atan(), Robin Green has some great articles about &lt;a href=&#34;http://www.research.scea.com/gdc2003/fast-math-functions.html&#34;&gt;faster math functions&lt;/a&gt; where he discusses how you can range reduce to 0-1 and approximate using &lt;a href=&#34;http://mathworld.wolfram.com/MinimaxPolynomial.html&#34;&gt;minimax polynomials&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My first attempt was much simpler: looking at its graph we can see that atan() is almost linear near 0 and asymptotically approaches pi/2.&lt;br&gt;
&lt;p style=&#34;text-align:center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/atan.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-878&#34; title=&#34;atan&#34; src=&#34;./wp-content/uploads/2010/06/atan.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;br&gt;
Perhaps the simplest approximation we could try would be something like:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\tan^{-1}(x) \approx min(x, \frac{\pi}{2})\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Which looks like:&lt;br&gt;
&lt;p style=&#34;text-align:center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/atan_approx.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-879&#34; title=&#34;atan_approx&#34; src=&#34;./wp-content/uploads/2010/06/atan_approx.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float atanLinear(float x)
{
   return clamp(x, -0.5*kPi, 0.5*kPi);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 34 cycles, 8 r regs, 4,991,999,816 pixels/s
&lt;/pre&gt;

&lt;p&gt;Pretty ugly, but even though the maximum error here is huge (~0.43 relative), visually the difference is &lt;a href=&#34;./wp-content/uploads/2010/06/linear.png&#34;&gt;surprisingly small&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Still, I thought I&#39;d try for something more accurate: a 3rd degree minimax polynomial approximating the range 0-1, which gave something practically identical to atan() for my purposes (~0.0052 max relative error):&lt;br&gt;
&lt;p style=&#34;text-align:center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/atan_minimax.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-906&#34; title=&#34;atan_minimax&#34; src=&#34;./wp-content/uploads/2010/06/atan_minimax.png&#34; alt=&#34;&#34; width=&#34;90%&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;
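
&lt;p&gt;The range reduction in the snippet below relies on the identity&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\tan^{-1}(x) = \frac{\pi}{2} - \tan^{-1}\left(\frac{1}{x}\right)\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;for positive x, so the polynomial only ever has to be accurate on [0,1].&lt;/p&gt;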

&lt;pre class=&#34;prettyprint&#34;&gt;
float MiniMax3(float x)
{
   return ((-0.130234*x - 0.0954105)*x + 1.00712)*x - 0.00001203333;
}

float atanMiniMax3(float x)
{
   // range reduction
   if (x &amp;lt; 1)
      return MiniMax3(x);
   else
      return kPi*0.5 - MiniMax3(1.0/x);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 40 cycles, 8 r regs, 4,239,359,951 pixels/s
&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This isn&#39;t designed as a general replacement for atan(), for a start it doesn&#39;t handle values of x &amp;lt; 0 and it hasn&#39;t had anywhere near the love put into other approximations you can find online (optimising for floating point representations for example).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As a bonus I found that putting the polynomial evaluation into &lt;a href=&#34;http://en.wikipedia.org/wiki/Horner_scheme&#34;&gt;Horner form&lt;/a&gt; shaved 4 cycles from the shader.&lt;/p&gt;
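
&lt;p&gt;Written out, the polynomial above is just&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[p(x) = -0.130234x^3 - 0.0954105x^2 + 1.00712x - 0.00001203333\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;evaluated as &lt;span  class=&#34;math&#34;&gt;\(((ax + b)x + c)x + d\)&lt;/span&gt;, trading the separate powers for a chain of multiply-adds.&lt;/p&gt;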

&lt;p&gt;Cedrick also had an idea to use something a little different:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\tan^{-1}(x) \approx \frac{\pi}{2}\left(\frac{kx}{1+kx}\right)\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This might look familiar to some as the basic Reinhard &lt;a href=&#34;http://filmicgames.com/archives/category/tonemapping&#34;&gt;tone mapping&lt;/a&gt; curve!  We eyeballed values for k until we had one that looked close (you can tell I&#39;m being very rigorous here); in the end k=1 was close enough and is one cycle faster :)&lt;br&gt;
&lt;p style=&#34;text-align:center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/atan_rational1.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-1003&#34; title=&#34;atan_rational&#34; src=&#34;./wp-content/uploads/2010/06/atan_rational1.png&#34; alt=&#34;&#34; width=&#34;90%&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float atanRational(float x)
{
   return kPi*0.5*x / (1.0+x);
}

// Fragment Performance Setup: Driver 174.74, GPU G80, Flags 0x1000
// Results 34 cycles, 8 r regs, 4,869,120,025 pixels/s
&lt;/pre&gt;

&lt;p&gt;To get it down to 34 cycles we had to expand out the expression for x and perform some more grouping of terms, which shaved another cycle and a register off it.  I was surprised to see the rational approximation be so close in terms of performance to the linear one; I guess the scheduler is doing a good job of hiding some work there.&lt;/p&gt;

&lt;p&gt;In the end all three approximations gave pretty good visual results:&lt;/p&gt;

&lt;p&gt;Original (cycle count 76):&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/original.png&#34;&gt;&lt;img title=&#34;original&#34; src=&#34;./wp-content/uploads/2010/06/original.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MiniMax3, Error 8x (cycle count 40):&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/minimax3.png&#34;&gt; &lt;img title=&#34;minimax3&#34; src=&#34;./wp-content/uploads/2010/06/minimax3.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/minimax3_diff.png&#34;&gt;&lt;img title=&#34;minimax3_diff&#34; src=&#34;./wp-content/uploads/2010/06/minimax3_diff.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rational, Error 8x (cycle count 34):&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/rational.png&#34;&gt;&lt;img title=&#34;rational&#34; src=&#34;./wp-content/uploads/2010/06/rational.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/rational_diff2.png&#34;&gt;&lt;img class=&#34;alignnone&#34; title=&#34;rational_diff&#34; src=&#34;./wp-content/uploads/2010/06/rational_diff2.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Linear, Error 8x (cycle count 34):&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/linear.png&#34;&gt;&lt;img title=&#34;linear&#34; src=&#34;./wp-content/uploads/2010/06/linear.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;a href=&#34;./wp-content/uploads/2010/06/linear_diff.png&#34;&gt;&lt;img title=&#34;linear_diff&#34; src=&#34;./wp-content/uploads/2010/06/linear_diff.png?w=150&#34; alt=&#34;&#34; width=&#34;150&#34; height=&#34;86&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://realtimecollisiondetection.net/blog/?p=9&#34;&gt;http://realtimecollisiondetection.net/blog/?p=9&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://www.research.scea.com/gdc2003/fast-math-functions.html&#34;&gt;http://www.research.scea.com/gdc2003/fast-math-functions.html&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    
      
        <item>
          <title>In-Scattering Demo</title>
          <link>http://blog.mmacklin.com/2010/05/29/in-scattering-demo/</link>
          <pubDate>Sat, 29 May 2010 23:32:07 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/05/29/in-scattering-demo/</guid>
          <description>&lt;p&gt;This demo shows an analytic solution to the differential in-scattering equation for light in participating media. It&#39;s a similar but simplified version of equations found in &lt;a href=&#34;http://www.eecs.berkeley.edu/~ravir/papers/singlescat/scattering.pdf&#34;&gt;[1]&lt;/a&gt;, &lt;a href=&#34;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.3787&amp;amp;rep=rep1&amp;amp;type=pdf&#34;&gt;[2]&lt;/a&gt; and, as I recently discovered, &lt;a href=&#34;http://research.microsoft.com/en-us/um/people/johnsny/papers/fogshop-pg.pdf&#34;&gt;[3]&lt;/a&gt;. However I thought showing the derivation might be interesting for some out there, plus it was a good excuse for me to brush up on my &lt;span  class=&#34;math&#34;&gt;\(\LaTeX\)&lt;/span&gt;.&lt;/p&gt;

&lt;p&gt;You might notice I also updated the site&#39;s theme; unfortunately you need a white background to make wordpress.com LaTeX rendering play nice with RSS feeds (other than that it&#39;s very convenient).&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://mmacklin.dreamhosters.com/FogVolumes.zip&#34;&gt;Download the demo here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The demo uses GLSL and shows point and spot lights in a basic scene with some tweakable parameters:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/05/fogvolumes1.png&#34;&gt;&lt;img class=&#34;aligncenter size-full wp-image-712&#34; title=&#34;FogVolumes1&#34; src=&#34;./wp-content/uploads/2010/05/fogvolumes1.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;

&lt;p&gt;Given a view ray defined as:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[\mathbf{x}(t) = \mathbf{p} + t\mathbf{d}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;We would like to know the total amount of light scattered towards the viewer (in-scattered) due to a point light source. For the purposes of this post I will only consider single scattering within isotropic media.&lt;/p&gt;

&lt;p&gt;The differential equation that describes the change in radiance due to light scattered into the view direction inside a differential volume is given in &lt;a href=&#34;http://www.pbrt.org/&#34;&gt;PBRT&lt;/a&gt; (p578); if we assume equal scattering in all directions we can write it as:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[dL_{s}(t) = \sigma_{s}L_{i}(t)\,dt\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;span  class=&#34;math&#34;&gt;\(\sigma_{s}\)&lt;/span&gt; is the scattering probability, which I will assume includes the &lt;span  class=&#34;math&#34;&gt;\(\frac{1}{4\pi}\)&lt;/span&gt; normalization term for an isotropic phase function. For a point light source at distance d with intensity I we can calculate the radiant intensity at a receiving point as:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{i} = \frac{I}{d^2}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Plugging in the equation for a point along the view ray we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{i}(t) = \frac{I}{|\mathbf{x}(t)-\mathbf{s}|^2}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where s is the light source position. The solution to the differential equation above is then given by:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \int_{0}^{d} \sigma_{s}L_{i}(t) \, dt\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \int_{0}^{d} \frac{\sigma_{s}I}{|\mathbf{x}(t)-\mathbf{s}|^2}\,dt\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;To find this integral in closed form we need to expand the distance calculation in the denominator into something we can deal with more easily:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I\int_0^d{\frac{dt}{(\mathbf{p} + t\mathbf{d} - \mathbf{s})\cdot(\mathbf{p} + t\mathbf{d} - \mathbf{s})}}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Expanding the dot product and gathering terms, we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{(\mathbf{d}\cdot\mathbf{d})t^2 + 2(\mathbf{m}\cdot\mathbf{d})t + \mathbf{m}\cdot\mathbf{m} }\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Where &lt;span  class=&#34;math&#34;&gt;\(\mathbf{m} = (\mathbf{p}-\mathbf{s})\)&lt;/span&gt;.&lt;/p&gt;

&lt;p&gt;Now we have something a bit more familiar; writing &lt;span  class=&#34;math&#34;&gt;\(b = \mathbf{m}\cdot\mathbf{d}\)&lt;/span&gt; and &lt;span  class=&#34;math&#34;&gt;\(c = \mathbf{m}\cdot\mathbf{m}\)&lt;/span&gt;, and noting that the direction vector is unit length so the coefficient on the quadratic term drops out, we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{t^2 + 2bt + c}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;At this point you could look up the integral in standard tables but I&#39;ll continue to simplify it for completeness.  Completing the square we obtain:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I\int_{0}^{d}\frac{dt}{ (t^2 + 2bt + b^2) + (c-b^2)}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Making the substitution &lt;span  class=&#34;math&#34;&gt;\(u = t + b\)&lt;/span&gt;, &lt;span  class=&#34;math&#34;&gt;\(v = (c-b^2)^{1/2}\)&lt;/span&gt; and updating our limits of integration, we have:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I\int_{b}^{b+d}\frac{du}{ u^2 + v^2}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \sigma_{s}I \left[ \frac{1}{v}\tan^{-1}\frac{u}{v} \right]_b^{b+d}\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Finally giving:&lt;/p&gt;

&lt;p&gt;&lt;span  class=&#34;math&#34;&gt;\[L_{s} = \frac{\sigma_{s}I}{v}( \tan^{-1}\frac{d+b}{v} - \tan^{-1}\frac{b}{v} )\]&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This is what we will evaluate in the pixel shader, here&#39;s the GLSL snippet for the integral evaluation (direct translation of the equation above):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float InScatter(vec3 start, vec3 dir, vec3 lightPos, float d)
{
   // light to ray origin
   vec3 q = start - lightPos;

   // coefficients
   float b = dot(dir, q);
   float c = dot(q, q);

   // evaluate integral
   float s = 1.0f / sqrt(c - b*b);
   float l = s * (atan( (d + b) * s) - atan( b*s ));

   return l;
}
&lt;/pre&gt;

&lt;p&gt;Where d is the distance traveled, computed by finding the entry / exit points of the ray with the volume.&lt;/p&gt;
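
&lt;p&gt;For a box-shaped volume those entry and exit points can be found with the usual slab test; a minimal sketch (not the demo&#39;s code) looks like this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// sketch: entry/exit distances of a ray against an axis-aligned box volume,
// the marched distance is then d = tFar - max(tNear, 0.0)
bool IntersectBox(vec3 origin, vec3 dir, vec3 boxMin, vec3 boxMax,
                  out float tNear, out float tFar)
{
   vec3 invDir = 1.0 / dir;
   vec3 t0 = (boxMin - origin) * invDir;
   vec3 t1 = (boxMax - origin) * invDir;

   vec3 tMin = min(t0, t1);
   vec3 tMax = max(t0, t1);

   tNear = max(max(tMin.x, tMin.y), tMin.z);
   tFar  = min(min(tMax.x, tMax.y), tMax.z);

   return tFar &amp;gt;= max(tNear, 0.0);
}
&lt;/pre&gt;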

&lt;p&gt;To make the effect more interesting it is possible to incorporate a particle system: I apply the same scattering shader to each particle, treat it as a thin slab to obtain an approximate depth, then simply multiply by a noise texture at the end.&lt;br&gt;
&lt;p style=&#34;text-align: center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/05/fogvolumes2.png&#34;&gt;&lt;img class=&#34;aligncenter&#34; title=&#34;FogVolumes2&#34; src=&#34;./wp-content/uploads/2010/05/fogvolumes2.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;br&gt;
&lt;p style=&#34;text-align: center;&#34;&gt;&lt;a href=&#34;./wp-content/uploads/2010/05/fogvolumes4.png&#34;&gt;&lt;img class=&#34;aligncenter&#34; title=&#34;FogVolumes4&#34; src=&#34;./wp-content/uploads/2010/05/fogvolumes4.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;

&lt;h3 id=&#34;optimisations&#34;&gt;Optimisations&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;As it is above, the code only supports lights with infinite extent, which implies drawing the entire frame for each light.  It would be possible to limit it to a volume, but you&#39;d want to add a falloff to the effect to avoid a sharp transition at the boundary.&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Performing the full evaluation per-pixel for the particles is probably unnecessary; doing it at a lower frequency, per-vertex or even per-particle, would probably look acceptable.&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&#34;notes&#34;&gt;Notes&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Generally objects appear to have wider specular highlights and more ambient lighting in the presence of participating media.  &lt;a href=&#34;http://www.eecs.berkeley.edu/%7Eravir/papers/singlescat/scattering.pdf&#34;&gt;[1]&lt;/a&gt; discusses this in detail, but you can fudge it by lowering the specular power in your materials as the scattering coefficient increases.&lt;br&gt;&lt;/li&gt;
&lt;li&gt;According to &lt;a href=&#34;http://en.wikipedia.org/wiki/Rayleigh_scattering&#34;&gt;Rayleigh scattering&lt;/a&gt;, blue light at the lower end of the spectrum is scattered considerably more than red light.  It&#39;s simple to account for this wavelength dependence by making the scattering coefficient a constant vector weighted towards the blue component (see the snippet after these notes).  I found this helps add to the realism of the effect.&lt;br&gt;&lt;/li&gt;
&lt;li&gt;I&#39;m curious to know how the torch light was done in Alan Wake as it seems to be high quality (not just billboards) with multiple light shafts.. maybe someone out there knows?&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;
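
&lt;p&gt;As an illustrative example of the wavelength dependent coefficient mentioned in the notes above (the names are mine, not the demo&#39;s), the scalar coefficient simply becomes a vector weighted towards blue when the scattered light is accumulated:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// sketch: per-channel scattering coefficient weighted towards blue
uniform vec3 g_scatterCoeff;     // e.g. vec3(0.2, 0.4, 0.8)
uniform vec3 g_lightIntensity;

vec3 ScatteredLight(vec3 start, vec3 dir, vec3 lightPos, float d)
{
   // InScatter() is the scalar integral evaluated above
   return g_scatterCoeff * g_lightIntensity * InScatter(start, dir, lightPos, d);
}
&lt;/pre&gt;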

&lt;h3 id=&#34;references&#34;&gt;References&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;http://www.eecs.berkeley.edu/~ravir/papers/singlescat/scattering.pdf&#34;&gt;Sun, B., Ramamoorthi, R., Narasimhan, S. G., and Nayar, S. K. 2005. A practical analytic single scattering model for real time rendering. &lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.3787&amp;amp;rep=rep1&amp;amp;type=pdf&#34;&gt;Wenzel, C. 2006. Real-time atmospheric effects in games. &lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://research.microsoft.com/en-us/um/people/johnsny/papers/fogshop-pg.pdf&#34;&gt;Zhou, K., Hou, Q., Gong, M., Snyder, J., Guo, B., and Shum, H. 2007. Fogshop: Real-Time Design and Rendering of Inhomogeneous, Single-Scattering Media. &lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.vis.uni-stuttgart.de/eng/research/pub/pub2010/espmss10.pdf&#34;&gt;Engelhardt, T. and Dachsbacher, C. 2010. Epipolar sampling for shadows and crepuscular rays in participating media with single scattering.&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.cse.chalmers.se/~billeter/pub/volumetric/index.html&#34;&gt;Volumetric Shadows using Polygonal Light Volumes&lt;/a&gt;&lt;br&gt;&lt;/li&gt;
&lt;/ol&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Threading Fun</title>
          <link>http://blog.mmacklin.com/2010/05/24/threading-fun/</link>
          <pubDate>Tue, 25 May 2010 06:01:41 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/05/24/threading-fun/</guid>
          <description>&lt;p&gt;So we had an interesting threading bug at work today which I thought I&#39;d write up here as I hadn&#39;t seen this specific problem before (note I didn&#39;t write this code, I just helped debug it).  The setup was a basic single-producer single-consumer arrangement, something like this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#include &amp;lt;Windows.h&amp;gt;
#include &amp;lt;cassert&amp;gt;

volatile LONG gAvailable = 0;

// thread 1
DWORD WINAPI Producer(LPVOID)
{
    while (1)
    {
        InterlockedIncrement(&amp;amp;gAvailable);
    }
}

// thread 2
DWORD WINAPI Consumer(LPVOID)
{
    while (1)
    {
        // pull available work with a limit of 5 items per iteration
        LONG work = min(gAvailable, 5);

        // this should never fire.. right?
        assert(work &amp;lt;= 5);

        // update available work
        InterlockedExchangeAdd(&amp;amp;gAvailable, -work);
    }
}

int main(int argc, char* argv[])
{
    HANDLE h[2];

    h[0] = CreateThread(0, 0, Consumer, NULL, 0, 0);
    h[1] = CreateThread(0, 0, Producer, NULL, 0, 0);

    WaitForMultipleObjects(2, h, TRUE, INFINITE);

    return 0;
}
&lt;/pre&gt;

&lt;p&gt;So where&#39;s the problem?  What would make the assert fire?&lt;/p&gt;

&lt;p&gt;We triple-checked the logic and couldn&#39;t see anything wrong (it was more complicated than the example above, so there were a number of possible culprits), and unlike the example above there were no asserts, just a hung thread at some later stage of execution.&lt;/p&gt;

&lt;p&gt;Unfortunately the bug reproduced only once every other week so we knew we had to fix it while I had it in a debugger.  We checked all the relevant in-memory data and couldn&#39;t see any that had obviously been overwritten (&amp;quot;memory stomp&amp;quot; is usually the first thing called out when these kinds of bugs show up).&lt;/p&gt;

&lt;p&gt;It took us a while but eventually we checked the disassembly for the call to min().  Much to our surprise it was performing two loads of gAvailable instead of the one we had expected!&lt;/p&gt;

&lt;p&gt;This happened to be on X360 but the same problem occurs on Win32, here&#39;s the disassembly for the code above (VS2010 Debug):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// calculate available work with a limit of 5 items per iteration
LONG work = min(gAvailable, 5);

// (1) read gAvailable, compare against 5
002D1457  cmp         dword ptr [gAvailable (2D7140h)],5
002D145E  jge         Consumer+3Dh (2D146Dh)

// (2) read gAvailable again, store on stack
002D1460  mov         eax,dword ptr [gAvailable (2D7140h)]
002D1465  mov         dword ptr [ebp-0D0h],eax
002D146B  jmp         Consumer+47h (2D1477h)
002D146D  mov         dword ptr [ebp-0D0h],5

// (3) store gAvailable from (2) in &#39;work&#39;
002D1477  mov         ecx,dword ptr [ebp-0D0h]
002D147D  mov         dword ptr [work],ecx
&lt;/pre&gt;

&lt;p&gt;The question is what happens between (1) and (2)?  Well the answer is that any other thread can add to gAvailable, meaning that the value stored at (3) can now be &amp;gt; 5.&lt;/p&gt;

&lt;p&gt;In this case the simple solution was to read gAvailable outside of the call to min():&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// pull available work with a limit of 5 items per iteration
LONG available = gAvailable;
LONG work = min(available, 5);
&lt;/pre&gt;

&lt;p&gt;Maybe this is obvious to some people but it sure caused me and some smart people a headache for a few hours :)&lt;/p&gt;

&lt;p&gt;Note that you may not see the problem in some build configurations, depending on whether or not the compiler generates code to perform the second read of the variable after the comparison.  As far as I know there are no guarantees about what it may or may not do in this case; FWIW we had the problem in a release build with optimisations enabled.&lt;/p&gt;

&lt;p&gt;Big props to Tom and &lt;a href=&#34;http://twitter.com/aruslan&#34;&gt;Ruslan&lt;/a&gt; at Lucas for helping track this one down.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
        <item>
          <title>GOW III: Shadows</title>
          <link>http://blog.mmacklin.com/2010/03/11/gow-iii-shadows/</link>
          <pubDate>Fri, 12 Mar 2010 06:09:31 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/03/11/gow-iii-shadows/</guid>
          <description>&lt;p&gt;I checked out this session at GDC today - I&#39;ll try and sum up the main takeaways (at least for me):&lt;/p&gt;

&lt;p&gt;&lt;ul&gt;&lt;br&gt;
&lt;li&gt;Artist controlled cascaded shadow maps, each cascade is accumulated into a &#39;white buffer&#39; (new term coined?) in deferred style passes using standard PCF filtering&lt;/li&gt;&lt;br&gt;
&lt;li&gt;Shadow accumulation pass re-projects world space position from an FP32 depth buffer (separate from the main depth buffer).  The motivation for the separate depth buffer is performance so I assume they store linear depth which means they can reconstruct the world position using just a single multiply-add (saving a reciprocal).&lt;/li&gt;&lt;br&gt;
&lt;li&gt;They have the ability to tile individual cascades to achieve arbitrary levels of sampling within a fixed size memory (render cascade tile, apply into white buffer, repeat)&lt;/li&gt;&lt;br&gt;
&lt;li&gt;Often up to 9 mega-texel resolution used for in game scenes&lt;/li&gt;&lt;br&gt;
&lt;li&gt;White buffer is blended to using MIN blend mode to avoid double darkening (old school)&lt;/li&gt;&lt;br&gt;
&lt;li&gt;Invisible &#39;caster only&#39; geometry to make baked shadows match on dynamic objects&lt;/li&gt;&lt;br&gt;
&lt;li&gt;Stencil bits used to mask off baked geometry, fore-ground, back-ground characters&lt;/li&gt;&lt;br&gt;
&lt;/ul&gt;&lt;br&gt;
&amp;nbsp;&lt;br&gt;
The most interesting part (in my opinion) was the optimisation work: Ben creates a light-direction-aligned 8x8x4 grid that he renders extruded bounding spheres into (on the SPUs).  Each cell records whether or not it is in shadow and the rough bounds of that shadow.  To take advantage of this information the accumulation pass (where the expensive filtering is done) breaks the screen up into tiles, checks each tile against the volume and adjusts its depth and 2D bounds accordingly, potentially rejecting entire tiles.&lt;/p&gt;

&lt;p&gt;Looking forward to the rest of the talks; this is my first year at GDC and it&#39;s pretty great :)&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Stochastic Pruning (2)</title>
          <link>http://blog.mmacklin.com/2010/02/07/stochastic-pruning-2/</link>
          <pubDate>Sun, 07 Feb 2010 07:39:41 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/02/07/stochastic-pruning-2/</guid>
          <description>&lt;p&gt;A quick update for anyone who was having problems running my stochastic pruning demo on NVIDIA cards, I&#39;ve updated &lt;a href=&#34;http://mmacklin.dreamhosters.com/Plant.zip&#34;&gt;the demo&lt;/a&gt; with a fix (I had forgotten to disable a vertex array).&lt;/p&gt;

&lt;p&gt;While I was at it I added some grass:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2010/02/tree_large.png&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2010/02/tree_large.png&#34; alt=&#34;&#34; title=&#34;tree_large&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-595&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The grass uses stochastic pruning but still generates a lot of geometry; it&#39;s just one grass tile flipped around and rendered multiple times.  I wanted to see if it would be practical for games to render grass using pure geometry, but really you&#39;d need to be much more aggressive with the LOD (Update: apparently the same technique was used in Flower, see comments).&lt;/p&gt;

&lt;p&gt;Kevin Boulanger has done some impressive real time &lt;a href=&#34;http://www.kevinboulanger.net/grass.html&#34;&gt;grass rendering&lt;/a&gt; using 3 levels of detail with transitions.  Cool stuff and quite practical by the looks of it.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Stochastic Pruning for Real-Time LOD</title>
          <link>http://blog.mmacklin.com/2010/01/12/stochastic-pruning-for-real-time-lod/</link>
          <pubDate>Tue, 12 Jan 2010 07:28:31 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2010/01/12/stochastic-pruning-for-real-time-lod/</guid>
          <description>&lt;p&gt;Rendering plants efficiently has always been a challenge in computer graphics; a relatively new technique to address this is &lt;a href=&#34;http://graphics.pixar.com/library/StochasticPruning/paper.pdf&#34;&gt;Pixar&#39;s stochastic pruning algorithm&lt;/a&gt;.  It was originally developed for rendering the desert scenes in Cars, and Weta also &lt;a href=&#34;http://www.cgw.com/Publications/CGW/2009/Volume-32-Issue-12-Dec-2009-/CG-In-Another-World.aspx&#34;&gt;claim&lt;/a&gt; to have used the same technique on Avatar.&lt;/p&gt;

&lt;p&gt;Although designed with offline rendering in mind it maps very naturally to the GPU and real-time rendering.  The basic algorithm is this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build your mesh of N elements (in the case of a tree the elements would be leaves, usually represented by quads)&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Sort the elements in random order (a robust way of doing this is to use the &lt;a href=&#34;http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle&#34;&gt;Fisher-Yates shuffle&lt;/a&gt;)&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Calculate the proportion U of elements to render based on distance to the object.&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Draw N*U unpruned elements with area scaled by 1/U&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So putting this onto the GPU is straightforward: pre-shuffle your index buffer (element-wise), then when you come to draw you can calculate the unpruned element count using something like:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// calculate scaled distance to viewer
float z = max(1.0f, Length(viewerPos-objectPos)/pruneStartDistance);
// distance at which half the leaves will be pruned
float h = 2.0f;
// proportion of elements unpruned
float u = powf(z, -Log(h, 2));
// actual element count
int m = ceil(numElements * u);
// scale factor
float s = 1.0f / u;
&lt;/pre&gt;

&lt;p&gt;Then just submit a modified draw call for m quads:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
glDrawElements(GL_QUADS, m*4, GL_UNSIGNED_SHORT, 0);
&lt;/pre&gt;

&lt;p&gt;The scale factor computed above preserves the total global surface area of all elements, which ensures consistent pixel coverage at any distance.  The scaling by area can be performed efficiently in the vertex shader, meaning no CPU involvement is necessary (aside from setting up the parameters of course).  In a basic implementation you would see elements pop in and out as you change distance, but this can be helped by having a transition window that scales elements down before they become pruned (discussed in the original paper).&lt;/p&gt;
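
&lt;p&gt;A minimal sketch of that vertex shader scaling (the attribute and uniform names here are mine): each leaf vertex carries the centre of its quad, and edge lengths are scaled by the square root of s so that area scales by s:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// sketch: scale each leaf quad about its centre so its area scales by s = 1/u
attribute vec3 a_position;        // leaf quad vertex
attribute vec3 a_center;          // centre of the quad this vertex belongs to
uniform float u_scale;            // s, computed on the CPU as above
uniform mat4  u_modelViewProjection;

void main()
{
   vec3 p = a_center + (a_position - a_center)*sqrt(u_scale);
   gl_Position = u_modelViewProjection*vec4(p, 1.0);
}
&lt;/pre&gt;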

&lt;figure&gt;&lt;a href=&#34;./wp-content/uploads/2010/01/tree_unpruned.png&#34;&gt;&lt;img class=&#34;size-full wp-image-535&#34; title=&#34;tree_unpruned&#34; src=&#34;./wp-content/uploads/2010/01/tree_unpruned.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;figcaption&gt;Tree unpruned&lt;/figcaption&gt;&lt;/figure&gt;

&lt;figure&gt;&lt;a href=&#34;./wp-content/uploads/2010/01/tree_pruned.png&#34;&gt;&lt;img class=&#34;size-full wp-image-534&#34; title=&#34;tree_pruned&#34; src=&#34;./wp-content/uploads/2010/01/tree_pruned.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;figcaption&gt;Tree pruned to 10% of original&lt;/figcaption&gt;&lt;/figure&gt;

&lt;p&gt;Billboards still have their place but it seems like this kind of technique could have applications for many effects, grass and particle systems being obvious ones.&lt;/p&gt;

&lt;p&gt;I&#39;ve updated my previous tree demo with an implementation of stochastic pruning and a few other changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed some bugs with ATI driver compatibility&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Preetham based sky-dome&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Optimised shadow map generation&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Some new example plants&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Tweaked leaf and branch shaders&lt;br&gt;
&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can download the demo &lt;a href=&#34;http://mmacklin.dreamhosters.com/Plant.zip&#34;&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use the &lt;a href=&#34;http://algorithmicbotany.org/papers/selforg.sig2009.html&#34;&gt;Self-organising tree models for image synthesis&lt;/a&gt; algorithm (from SIGGRAPH09) to generate the trees which I have posted about &lt;a href=&#34;http://mmack.wordpress.com/2009/09/28/trees-the-green-kind/&#34;&gt;previously&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While I was researching I also came across &lt;a href=&#34;http://www.cg.tuwien.ac.at/research/publications/2009/Habel_09_PGT/&#34;&gt;Physically Guided Animation of Trees&lt;/a&gt; from Eurographics 2009, they have some great videos of real-time animated trees.&lt;/p&gt;

&lt;p&gt;I&#39;ve also posted &lt;a href=&#34;http://www.mendeley.com/collections/729981/Algorithmic-Botany/&#34;&gt;my collection of plant modelling papers&lt;/a&gt; onto Mendeley (great tool for organising pdfs!).&lt;/p&gt;

&lt;figure&gt;&lt;a href=&#34;./wp-content/uploads/2010/01/tree_lowsym1.png&#34;&gt;&lt;img class=&#34;size-full wp-image-527&#34; title=&#34;tree_lowsym&#34; src=&#34;./wp-content/uploads/2010/01/tree_lowsym1.png&#34; alt=&#34;&#34; /&gt;&lt;/a&gt;&lt;figcaption&gt;Tree pruned to 70% of original&lt;/figcaption&gt;&lt;/figure&gt;
 [...]</description>
        </item>
      
    
      
    
      
        <item>
          <title>Sky</title>
          <link>http://blog.mmacklin.com/2009/12/31/sky/</link>
          <pubDate>Thu, 31 Dec 2009 22:46:48 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/12/31/sky/</guid>
          <description>&lt;p&gt;I had been meaning to implement &lt;a href=&#34;http://www.cs.utah.edu/~shirley/papers/sunsky/sunsky.pdf&#34;&gt;Preetham&#39;s analytic sky model&lt;/a&gt; ever since I first came across it years ago.  Well I finally got around to it and was pleased to find it&#39;s one of those few papers that gives you pretty much everything you need to put together an implementation (although with over 50 unique constants you need to be careful with your typing).&lt;/p&gt;

&lt;p&gt;I integrated it into my path tracer which made for some nice images:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2009/12/sky_t2.png&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/sky_t2.png&#34; alt=&#34;&#34; title=&#34;sky_t2&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-459&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;./wp-content/uploads/2009/12/sky_t2_am.png&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/sky_t2_am.png&#34; alt=&#34;&#34; title=&#34;sky_t2_am&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-456&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also a small &lt;a href=&#34;http://www.youtube.com/watch?v=Ptrq16x20rk&#34;&gt;video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It looks like the technique has been surpassed now by &lt;a href=&#34;http://hal.archives-ouvertes.fr/docs/00/28/87/58/PDF/article.pdf&#34;&gt;Precomputed Atmospheric Scattering&lt;/a&gt; but it&#39;s still useful for generating environment maps / SH lights.&lt;/p&gt;

&lt;p&gt;I also fixed a load of bugs in my path tracer. I was surprised to find that on my new i7 quad-core (8 logical threads) renders with 8 worker threads were only twice as fast as with a single worker; given the embarrassingly parallel nature of path-tracing you would expect at least a factor of 4 decrease in render time.&lt;/p&gt;

&lt;p&gt;It turns out the problem was contention in the OS allocator; as I allocate BRDF objects per-intersection there was a lot of overhead there (more than I had expected).  I added a per-thread memory arena: each worker thread has a pool of memory to allocate from linearly during a trace, allocations are never freed, and the pool is just reset per-path.&lt;/p&gt;

&lt;p&gt;This had the following effect on render times:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;1 thread:  128709ms-&amp;gt;35553ms  (3.6x faster)
8 threads: 54071ms-&amp;gt;8235ms   (6.5x faster!)&lt;/pre&gt;

&lt;p&gt;You might also notice that the total speed-up is not linear with the number of workers.  It tails off as the 4 &#39;real&#39; execution units are used up, so hyper-threading doesn&#39;t seem to be too effective here; I suspect this is due to such simple scenes not providing enough opportunity for swapping the thread states.&lt;/p&gt;

&lt;p&gt;The HT numbers seems to roughly agree with what people are &lt;a href=&#34;http://ompf.org/forum/viewtopic.php?f=1&amp;amp;t=1076&amp;amp;p=10626&amp;amp;hilit=hyperthreading#p10626&#34;&gt;reporting&lt;/a&gt; on the Ompf forums (~20% improvement).&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Path Tracing</title>
          <link>http://blog.mmacklin.com/2009/12/02/path-tracing/</link>
          <pubDate>Thu, 03 Dec 2009 06:59:48 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/12/02/path-tracing/</guid>
          <description>&lt;p&gt;A few of us at work have been having a friendly path-tracing competition (greets to Tom &amp;amp; Dom).  It&#39;s been a lot of fun and comparing images in the office each Monday morning is a great motivation to get features in and renders out.  I thought I&#39;d write a post about it to record my progress and gather links to some reference material.&lt;/p&gt;

&lt;p&gt;Here&#39;s a list of features I&#39;ve implemented so far and some pics below:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Monte-Carlo path tracing with explicit area light sampling at each step&lt;/li&gt;
    &lt;li&gt;Stratified image sampling&lt;/li&gt;
    &lt;li&gt;Importance sampled Lambert and Blinn BRDFs&lt;/li&gt;
    &lt;li&gt;Sphere, Plane, Disc, Metaball and Distance Field primitives (no triangles yet)&lt;/li&gt;
    &lt;li&gt;Multi-threaded tile renderer&lt;/li&gt;
    &lt;li&gt;Cross-compiles for PS3 on Linux (runs on SPUs)&lt;/li&gt;
    &lt;li&gt;Quite general shade-trees with Perlin noise etc&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;br&gt;
&lt;a href=&#34;./wp-content/uploads/2009/12/shinysphere1.jpg&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/shinysphere1.jpg&#34; alt=&#34;&#34; title=&#34;shinysphere&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-424&#34; /&gt;&lt;/a&gt;
&lt;a href=&#34;./wp-content/uploads/2009/12/shinyblob.jpg&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/shinyblob.jpg&#34; alt=&#34;&#34; title=&#34;shinyblob&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-415&#34; /&gt;&lt;/a&gt;
&lt;a href=&#34;./wp-content/uploads/2009/12/diffuse_spheres.jpg&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/diffuse_spheres.jpg&#34; alt=&#34;&#34; title=&#34;diffuse_spheres&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-429&#34; /&gt;&lt;/a&gt;
&lt;/p&gt;

Sphere-tracing the distance fields produced some cool effects (the blobby sphere above).  I first heard about the technique from &lt;a href=&#34;http://www.iquilezles.org/www/&#34;&gt;Inigo Quilez&lt;/a&gt; who used it to generate an amazing image in his &lt;a href=&#34;http://www.iquilezles.org/www/articles/raymarchingdf/raymarchingdf.htm&#34;&gt;slisesix&lt;/a&gt; demo; he has some good descriptions on his page, but for the details I would check out these papers:

&lt;ul&gt;
    &lt;li&gt;&lt;a href=&#34;http://graphics.cs.uiuc.edu/~jch/papers/zeno.pdf&#34;&gt;Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&#34;http://graphics.cs.uiuc.edu/~jch/papers/dl.pdf&#34;&gt;A Lipschitz Method for Accelerated Volume Rendering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
&lt;br&gt;
And for global illumination and path-tracing in general:
&lt;ul&gt;
    &lt;li&gt;&lt;a href=&#34;http://graphics.pixar.com/library/HQRenderingCourse/paper.pdf&#34;&gt;High Quality Rendering using Ray Tracing and Photon Mapping (SIGGRAPH 2007)&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&#34;http://www.pbrt.org/&#34;&gt;Physically Based Rendering&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&#34;http://graphics.stanford.edu/papers/veach_thesis/&#34;&gt;Robust Monte Carlo Methods for Light Transport Simulation&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&#34;http://www.cs.ucl.ac.uk/teaching/4074/Jesper/Matt_Pharr_reading.htm&#34;&gt;Notes from Matt Pharr on implementing your first path tracer&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&#34;http://www.kevinbeason.com/scs/pane/&#34;&gt;Kevin Beason&#39;s renderer Pane&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/p&gt;
&lt;p&gt;
&lt;br&gt;
Also, this is what happens when you push Perlin too far:

&lt;a href=&#34;./wp-content/uploads/2009/12/devilmusic.jpg&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/12/devilmusic.jpg&#34; alt=&#34;&#34; title=&#34;devilmusic&#34; class=&#34;aligncenter size-full wp-image-416&#34; style=&#34;width: 256px; height: 256px;&#34; /&gt;&lt;/a&gt;
&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Trees (the green kind)</title>
          <link>http://blog.mmacklin.com/2009/09/28/trees-the-green-kind/</link>
          <pubDate>Mon, 28 Sep 2009 17:13:26 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/09/28/trees-the-green-kind/</guid>
          <description>&lt;p&gt;I&#39;ve always had an interest in computer generated plants so I was pleased to read &lt;a href=&#34;http://algorithmicbotany.org/papers/selforg.sig2009.html&#34;&gt;Self-organising tree models for image synthesis&lt;/a&gt; from Siggraph this year.&lt;/p&gt;

&lt;p&gt;The paper basically pulls together a bunch of techniques that have been around for a while and uses them to generate some really good looking tree models.&lt;/p&gt;

&lt;p&gt;Seeing as I&#39;ve had a bit of time on my hands between Batman and before I start at LucasArts I decided to put together an implementation in OpenGL (being a games programmer I want realtime feedback).&lt;/p&gt;

&lt;p&gt;Some screenshots below and a Win32 executable available - &lt;a href=&#34;http://mmacklin.dreamhosters.com/Plant.zip&#34;&gt;Plant.zip&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/09/tree_1.jpg&#34; alt=&#34;tree_1&#34; title=&#34;tree_1&#34; class=&#34;aligncenter size-full wp-image-323&#34; /&gt;&lt;br&gt;
&lt;img src=&#34;./wp-content/uploads/2009/09/tree_2_bare.jpg&#34; alt=&#34;tree_2_bare&#34; title=&#34;tree_2_bare&#34; class=&#34;aligncenter size-full wp-image-325&#34; /&gt;&lt;br&gt;
&lt;img src=&#34;./wp-content/uploads/2009/09/tree_2_leaves.jpg&#34; alt=&#34;tree_2_leaves&#34; title=&#34;tree_2_leaves&#34; class=&#34;aligncenter size-full wp-image-326&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Some Notes:&lt;/p&gt;

&lt;p&gt;I implemented both the space colonisation and shadow propagation methods.  The space colonisation is nice in that you can draw where the plant should grow by placing space samples with the mouse; this allows some pretty funky topiary, but I found it difficult to grow convincing real-world plants with this method.  The demo only uses the shadow propagation method.&lt;/p&gt;

&lt;p&gt;Creating the branch geometry from generalised cylinders requires generating a continuous coordinate frame along a curve without any twists or knots.  I used a parallel transport frame for this which worked out really nicely; these two papers describe the technique and the problem:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://www.cs.indiana.edu/pub/techreports/TR425.pdf&#34;&gt;Parallel Transport Approach to Curve Framing&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.8658&#34;&gt;Quaternion Gauss Maps and Optimal Framings of Curves and Surfaces (1998) &lt;/a&gt;&lt;/p&gt;
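
&lt;p&gt;The transport step itself is tiny; here is a sketch in GLSL-style vector notation (not the demo&#39;s code) that rotates the previous frame&#39;s normal by the rotation taking the previous tangent onto the new one:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// sketch: parallel transport of the frame normal n0 from tangent t0 to tangent t1
// (assumes t0 and t1 are unit length)
vec3 ParallelTransport(vec3 n0, vec3 t0, vec3 t1)
{
   vec3 axis = cross(t0, t1);
   float s = length(axis);     // sin of the angle between the tangents
   float c = dot(t0, t1);      // cos of the angle

   if (s &amp;lt; 1e-6)            // tangents nearly parallel, keep the old normal
      return n0;

   axis /= s;

   // Rodrigues rotation of n0 about the axis
   return n0*c + cross(axis, n0)*s + axis*dot(axis, n0)*(1.0 - c);
}
&lt;/pre&gt;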

&lt;p&gt;Getting the lighting and leaf materials to look vaguely realistic took quite a lot of tweaking and I&#39;m not totally happy with it.  Until I implemented self-shadowing on the trunk and leaves it looked very weird.  Also you need to account for the transmission you get through the leaves when looking toward the light:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/09/tree_inside.jpg&#34; alt=&#34;tree_inside&#34; title=&#34;tree_inside&#34; class=&#34;aligncenter size-full wp-image-327&#34; /&gt;&lt;/p&gt;

&lt;p&gt;There is a nice &lt;a href=&#34;http://http.developer.nvidia.com/GPUGems3/gpugems3_ch04.html&#34;&gt;article&lt;/a&gt; in GPU Gems 3 on how SpeedTree do this.&lt;/p&gt;

&lt;p&gt;The leaves are normal mapped with a simple Phong specular, I messed about with various modified diffuse models like half-Lambert but eventually just went with standard Lambert.  It would be interesting to use a more sophisticated ambient term.&lt;/p&gt;

&lt;p&gt;There is still a lot of scope for performance optimisation; the leaves are alpha-tested right now so it&#39;s doing loads of redundant fragment shader work (something like &lt;a href=&#34;http://www.humus.name/index.php?page=Cool&amp;amp;ID=8&#34;&gt;Emil Persson&#39;s particle trimmer&lt;/a&gt; would be useful here).&lt;/p&gt;

&lt;p&gt;If you want to take a look at the source code drop me an &lt;a href=&#34;http://mmack.wordpress.com/about/&#34;&gt;email&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Known issues:&lt;/p&gt;

&lt;p&gt;On my NVIDIA card when the vert count is &amp;gt; 10^6 it runs like a dog; I need to break it up into smaller vertex buffers.&lt;/p&gt;

&lt;p&gt;Some ATI mobile drivers don&#39;t like the variable number of shadow mapping samples.  If that&#39;s your card then I recommend hacking the shaders to disable it.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Atomic float&#43;</title>
          <link>http://blog.mmacklin.com/2009/08/19/atomic-float/</link>
          <pubDate>Wed, 19 Aug 2009 12:50:56 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/08/19/atomic-float/</guid>
          <description>&lt;p&gt;No hardware I know of has atomic floating point operations but here&#39;s a handy little code snippet from Matt Pharr over on the &lt;a href=&#34;http://groups.google.com/group/pbrt&#34;&gt;PBRT mailing list&lt;/a&gt; which emulates the same functionality using an atomic compare and swap:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
inline float AtomicAdd(volatile float *val, float delta) {
     union bits { float f; int32_t i; };
     bits oldVal, newVal;
     do {
         oldVal.f = *val;
         newVal.f = oldVal.f + delta;
     } while (AtomicCompareAndSwap(*((AtomicInt32 *)val),
         newVal.i, oldVal.i) != oldVal.i);
     return newVal.f;
}
&lt;/pre&gt;
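
&lt;p&gt;For what it&#39;s worth, the same compare-and-swap loop written against C++11&#39;s std::atomic (which postdates this post) would look roughly like this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#include &amp;lt;atomic&amp;gt;

// emulate a floating point atomic add with a compare-and-swap loop
inline float AtomicAdd(std::atomic&amp;lt;float&amp;gt;&amp;amp; val, float delta)
{
    float oldVal = val.load();

    // compare_exchange_weak reloads oldVal whenever another thread wins the race
    while (!val.compare_exchange_weak(oldVal, oldVal + delta))
        ;

    return oldVal + delta;
}
&lt;/pre&gt;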

&lt;p&gt;In unrelated news, I&#39;ve taken a job at LucasArts which I&#39;ll be starting soon.  It&#39;s sad to say goodbye to Rocksteady; they&#39;re a great company to work for and I&#39;ll miss the team there.&lt;/p&gt;

&lt;p&gt;Looking forward to San Francisco though: it&#39;s 12 hours closer to my home town (Auckland, New Zealand), and maybe now I can finally get along to Siggraph or GDC.  If anyone has some advice on where to live there please let me know!&lt;/p&gt;

&lt;p&gt;Also a few weeks in between jobs so hopefully time to write some code and finish off all the tourist activities we never got around to in London.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Fin!</title>
          <link>http://blog.mmacklin.com/2009/08/14/fin/</link>
          <pubDate>Fri, 14 Aug 2009 12:49:11 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/08/14/fin/</guid>
          <description>&lt;p&gt;&lt;a href=&#34;http://www.batmanarkhamasylum.com&#34;&gt;Batman: Arkham Asylum&lt;/a&gt; is finished and the demo is up on PSN and Xbox Live.  I was pretty much responsible for the PS3 version on the engineering side so anything wrong with it is ultimately my fault.  I think most PS3 engineers working on a cross platform title will tell you that there is always some apprehension of the &#39;side by side comparisons&#39; which are so popular these days.  This one popped up pretty quickly after the demo was released:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://www.eurogamer.net/articles/digitalfoundry-batman-demo-showdown-blog-entry&#34;&gt;http://www.eurogamer.net/articles/digitalfoundry-batman-demo-showdown-blog-entry&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The article is quite accurate (unlike some of the comments) and it was generally very positive which is great to see as we put a lot of effort into getting parity between the two console versions.&lt;/p&gt;

&lt;p&gt;The game has been getting a good &lt;a href=&#34;http://img229.imageshack.us/img229/8589/batmanaagireview.jpg&#34;&gt;reception &lt;/a&gt;which is especially nice given that Batman games have a long tradition of being terrible.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Tim Sweeney&#39;s HPG talk</title>
          <link>http://blog.mmacklin.com/2009/08/14/tim-sweeneys-hpg-talk/</link>
          <pubDate>Fri, 14 Aug 2009 12:14:37 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/08/14/tim-sweeneys-hpg-talk/</guid>
          <description>&lt;p&gt;This link was going round our office, a discussion over at Lambda the Ultimate regarding Tim Sweeney&#39;s HPG talk.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://lambda-the-ultimate.org/node/3560&#34;&gt;http://lambda-the-ultimate.org/node/3560&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tim chimes in a bit further down in the comments.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Handy hints for Bovine occlusion</title>
          <link>http://blog.mmacklin.com/2009/07/24/handy-hints-for-bovine-occlusion/</link>
          <pubDate>Fri, 24 Jul 2009 10:42:55 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/07/24/handy-hints-for-bovine-occlusion/</guid>
          <description>&lt;p&gt;Code517E &lt;a href=&#34;http://c0de517e.blogspot.com/2009/07/analytic-diffuse-shading.html&#34;&gt;recently &lt;/a&gt; reminded me of a site I&#39;ve used before when looking up form factors for various geometric configurations.&lt;/p&gt;

&lt;p&gt;One I had missed the first time though is the differential element on ceiling, floor or wall to cow.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://www.me.utexas.edu/~howell/sectionb/B-68.html&#34;&gt;http://www.me.utexas.edu/~howell/sectionb/B-68.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Very handy if you&#39;re writing a farmyard simulator I&#39;m sure.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Particle lighting</title>
          <link>http://blog.mmacklin.com/2009/06/15/particle-lighting/</link>
          <pubDate>Mon, 15 Jun 2009 22:39:50 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/06/15/particle-lighting/</guid>
          <description>&lt;p&gt;I put together an implementation of the particle shadowing technique NVIDIA &lt;a href=&#34;http://www.youtube.com/watch?v=xh2q_p6hQEo&#34;&gt;showed off&lt;/a&gt; a while ago.  My original intention was to do a survey of particle lighting techniques, in the end I just tried out two different methods that I thought sounded promising.&lt;/p&gt;

&lt;p&gt;The first was the one ATI used in the Ruby White Out demo; the best takeaway from it is that they write out the min distance, max distance and density in one pass.  You can do this by setting your RGB blend mode to GL_MIN, your alpha blend mode to GL_ADD, and writing out r=z, g=1-z, b=0, a=density for each particle (you can reconstruct the max depth from min(1-z); think of it as the minimum distance from an end point).  Here&#39;s a screen:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/06/smoke2.jpg&#34; alt=&#34;smoke2&#34; title=&#34;smoke2&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-281&#34; /&gt;&lt;/p&gt;
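
&lt;p&gt;The render state for that pass looks roughly like this (a sketch; the function name is made up, and it assumes separate blend equations are available, e.g. GL 2.0 or EXT_blend_equation_separate, plus a float RGBA render target):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// blend state for writing min depth, max depth and density in a single pass
void SetDepthDensityBlendState()
{
    glEnable(GL_BLEND);

    // RGB takes the per-pixel minimum, alpha accumulates additively
    glBlendEquationSeparate(GL_MIN, GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);    // factors are ignored by GL_MIN but needed for the additive alpha

    // the particle shader then outputs r = z, g = 1 - z, b = 0, a = density,
    // so after the pass: minDepth = r, maxDepth = 1 - g, density = a
}
&lt;/pre&gt;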

&lt;p&gt;The technique needs a bit of fudging to look OK: blur the depths and add some smoothing functions.  It only works for mostly convex objects, so it&#39;s best suited to amorphous blobs (clouds maybe).  Performance wise it is probably the best candidate for current-gen consoles.&lt;br&gt;
&lt;a href=&#34;http://ati.amd.com/developer/gdc/2007/ArtAndTechnologyOfWhiteout(Siggraph07).pdf&#34;&gt;http://ati.amd.com/developer/gdc/2007/ArtAndTechnologyOfWhiteout(Siggraph07).pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IMO the NVIDIA technique is much nicer visually, it gives you fairly accurate self shadowing which looks great but is considerably more expensive.  I won&#39;t go into the implementation details too much as the paper does a pretty good job at describing it.&lt;br&gt;
&lt;a href=&#34;http://developer.download.nvidia.com/compute/cuda/sdk/website/projects/smokeParticles/doc/smokeParticles.pdf&#34;&gt;http://developer.download.nvidia.com/compute/cuda/sdk/website/projects/smokeParticles/doc/smokeParticles.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Nvidia demo uses 32k particles and 32 slices, but you can get pretty decent results with far fewer.  Here&#39;s a pic of my implementation running on my trusty 7600 with 1000 particles and 10 slices through the volume:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/06/smoke1.jpg&#34; alt=&#34;smoke1&#34; title=&#34;smoke1&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-279&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately you need a lot of fairly transparent particles, otherwise there are noticeable artifacts as particles change order and end up in different slices.  You can improve this by using a non-linear distribution of slices so that you use more slices up front (which works nicely because the extinction of light in participating media is exponential).&lt;/p&gt;
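
&lt;p&gt;The biased slice spacing can be as simple as something like this (just a sketch; the function name and falloff constant are made up):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#include &amp;lt;math.h&amp;gt;

// returns the depth of slice boundary i (of numSlices) through a volume of the given depth,
// biased so slices nearest the light (small t) are thinner and get more resolution
float SliceDepth(int i, int numSlices, float volumeDepth, float falloff)
{
    float t = float(i) / float(numSlices);

    // exponential remap, normalised so t=1 still maps to the far side of the volume
    float biased = (expf(falloff*t) - 1.0f) / (expf(falloff) - 1.0f);

    return biased*volumeDepth;
}
&lt;/pre&gt;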

&lt;p&gt;Looking forward to tackling some surface shaders next.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Code charity</title>
          <link>http://blog.mmacklin.com/2009/04/01/code-charity/</link>
          <pubDate>Wed, 01 Apr 2009 23:11:17 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/04/01/code-charity/</guid>
          <description>&lt;p&gt;A friend just sent me this:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://playpower.org/&#34;&gt;http://playpower.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It&#39;s a non-profit organisation with the goal of developing educational games for developing countries that run on 8-bit NES hardware.  The old Nintendo chips are now patent-free and clones are very common:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://picasaweb.google.co.in/dereklomas/TVComputer&#34;&gt;&lt;img src=&#34;./wp-content/uploads/2009/03/nes.jpg&#34; alt=&#34;nes&#34; title=&#34;nes&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-241&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;They&#39;re trying to recruit programmers with a social conscience.  I&#39;m not old-school enough to know 8-bit assembly, but I wouldn&#39;t mind learning... who needs GPUs anyway!&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Red balls</title>
          <link>http://blog.mmacklin.com/2009/04/01/red-balls/</link>
          <pubDate>Wed, 01 Apr 2009 23:00:13 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/04/01/red-balls/</guid>
          <description>&lt;p&gt;A small update on my global illumination renderer, I&#39;ve ported the radiance transfer to the GPU. It was fairly straight forward as my CPU tree structure was already set up for easy GPU traversal, basically just a matter of converting array offsets into texture coordinates and packing into an indices texture.&lt;/p&gt;

&lt;p&gt;The hardest part is of course wrangling OpenGL to do what you want and give you a proper error message.  This site is easily the best starting point I found for GPGPU stuff:&lt;br&gt;
&lt;a href=&#34;http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html&#34;&gt;http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So here&#39;s an image: there are 7850 surfels and it runs in about 20ms on my old school NVidia 7600, so it&#39;s still at least an order of magnitude or two slower than you would need for typical game scenes.  But besides that it&#39;s fun to pull area lights around in real time.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/04/balls2.jpg&#34; alt=&#34;balls2&#34; title=&#34;balls2&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-249&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Not as much colour bleeding as you might expect; there is some, but it is subtle.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
        <item>
          <title>Tree traversals</title>
          <link>http://blog.mmacklin.com/2009/02/22/tree-traversals/</link>
          <pubDate>Sun, 22 Feb 2009 21:37:27 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/02/22/tree-traversals/</guid>
          <description>&lt;p&gt;I changed my surfel renderer over to use a pre-order traversal layout for the nodes, this generally gives better cache utilisation and I did see a small speed up from using it.  The layout is quite nice because to traverse your tree you just linearly iterate over your array and whenever you find a subtree you want to skip you just increment your node pointer by the size of that subtree (which is precomputed, see Real Time Collision Detection 6.6.2).&lt;/p&gt;

&lt;p&gt;The best optimisation though comes from compacting the size of the surfel data, which again improves the cache performance.  As some parts of the traversal don&#39;t need all of the surfel data it seems to make sense to split things out, for instance to store the hierarchy information and the area separately from the colour/irradiance information.&lt;/p&gt;

&lt;p&gt;In fact it seems like when generalised, this idea leads you to the &lt;a href=&#34;http://software.intel.com/en-us/articles/how-to-manipulate-data-structure-to-optimize-memory-use-on-32-bit-intel-architecture/&#34;&gt;structure of arrays&lt;/a&gt; (SOA) layout, which essentially provides the finest grained breakdown where you only pull into the cache what you use and for all the nodes that you skip over there is no added cost.&lt;/p&gt;

&lt;p&gt;I haven&#39;t done any timings to see how much of a win this would actually be, mainly because dealing with SOA data is so damn cumbersome.&lt;/p&gt;

&lt;p&gt;It definitely seems like something you should do after you&#39;ve done all your hierarchy building and node shuffling which is just so much more intuitive with structures.  Then you can just &#39;bake&#39; it down to SOA format and throw it at the GPU/SIMD.&lt;/p&gt;
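
&lt;p&gt;The bake step is nothing fancy; roughly something like this (a sketch with made-up field names):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#include &amp;lt;vector&amp;gt;

struct Surfel
{
    float pos[3];
    float normal[3];
    float area;
};

// the baked structure-of-arrays layout, traversal code can pull in only the channels it touches
struct SurfelSoA
{
    std::vector&amp;lt;float&amp;gt; posX, posY, posZ;
    std::vector&amp;lt;float&amp;gt; area;
};

void Bake(const std::vector&amp;lt;Surfel&amp;gt;&amp;amp; in, SurfelSoA&amp;amp; out)
{
    for (size_t i=0; i &amp;lt; in.size(); ++i)
    {
        out.posX.push_back(in[i].pos[0]);
        out.posY.push_back(in[i].pos[1]);
        out.posZ.push_back(in[i].pos[2]);
        out.area.push_back(in[i].area);
    }
}
&lt;/pre&gt;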
 [...]</description>
        </item>
      
    
      
        <item>
          <title>PBRT</title>
          <link>http://blog.mmacklin.com/2009/02/22/pbrt/</link>
          <pubDate>Sun, 22 Feb 2009 20:25:55 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/02/22/pbrt/</guid>
          <description>&lt;p&gt;I just bought a copy of &lt;a href=&#34;http://www.pbrt.org/&#34;&gt;Physically Based Rendering&lt;/a&gt;, I&#39;ve been meaning to get one for ages as it&#39;s often recommended and I thought it might be useful given my recent interest in global illumination. I&#39;m also hoping to get a more formal background in rendering rather than the hacktastic world of real time.&lt;/p&gt;

&lt;p&gt;The subjects covered are broad and it&#39;s very readable.  It&#39;s my first exposure to &lt;a href=&#34;http://en.wikipedia.org/wiki/Literate_programming&#34;&gt;literate programming&lt;/a&gt;, where essentially the book describes and contains the full implementation of a program.  In fact the source code for their renderer is generated (tangled) from the definition of the book before compilation.&lt;/p&gt;

&lt;p&gt;The only problem is the amount it weighs; I like to read on the tube, but 1000 page hardbacks aren&#39;t exactly light reading.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>(Almost) realtime GI</title>
          <link>http://blog.mmacklin.com/2009/01/21/almost-realtime-gi/</link>
          <pubDate>Wed, 21 Jan 2009 23:40:01 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/01/21/almost-realtime-gi/</guid>
          <description>&lt;p&gt;After my initial implementation of surfel based illumination I&#39;ve extended it to do hierarchical clustering of surfels based on a similar idea to the one presented in GPU Gems 2.&lt;/p&gt;

&lt;p&gt;A few differences:&lt;/p&gt;

&lt;p&gt;I&#39;m using a k-means clustering to build the approximation hierarchy bottom up.  A couple of iterations of Lloyd&#39;s algorithm provides pretty good results.  Really you could get away with one iteration.&lt;/p&gt;

&lt;p&gt;To seed the clustering I&#39;m simply selecting every n&#39;th surfel from the source input.  At first I thought I should be choosing a random spatial distribution, but it turns out clustering based on regular intervals in the mesh works well.  This is because you will end up with more detail in highly tessellated places, which is what you want (assuming your input has been cleaned).&lt;/p&gt;

&lt;p&gt;For example, a two tri wall will be clustered into one larger surfel, whereas a highly tessellated sphere will be broken into more clusters.&lt;/p&gt;

&lt;p&gt;The error metric I used to do the clustering is this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
Error = (1.0f + (1.0f - Dot(surfel.normal, cluster.normal))) * Length(surfel.pos - cluster.pos);
&lt;/pre&gt;

&lt;p&gt;So it&#39;s a combination of how aligned the surfel and cluster are and how far away they are from each other.  You can experiment with weighting each of those metrics individually, but this simple combination seems to give good results.&lt;/p&gt;

&lt;p&gt;When summing up the surfels to form your representative cluster surfel you want to:&lt;/p&gt;

&lt;p&gt;a) sum the area&lt;br&gt;
b) average the position, normal, emission and albedo weighted by area&lt;/p&gt;

&lt;p&gt;Weighting by area is quite important and necessary for the emissive value or you&#39;ll basically be adding energy into the simulation.&lt;/p&gt;
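
&lt;p&gt;In code, building the representative surfel looks roughly like this (a sketch, the Surfel fields are made up):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#include &amp;lt;math.h&amp;gt;

struct Surfel
{
    float pos[3], normal[3], emission[3], albedo[3];
    float area;
};

// the cluster surfel: areas sum, everything else is an area-weighted average
Surfel MakeCluster(const Surfel* surfels, int count)
{
    Surfel c = {};
    float totalArea = 0.0f;

    for (int i=0; i &amp;lt; count; ++i)
    {
        const Surfel&amp;amp; s = surfels[i];
        totalArea += s.area;

        for (int k=0; k &amp;lt; 3; ++k)
        {
            c.pos[k]      += s.pos[k]*s.area;
            c.normal[k]   += s.normal[k]*s.area;
            c.emission[k] += s.emission[k]*s.area;
            c.albedo[k]   += s.albedo[k]*s.area;
        }
    }

    for (int k=0; k &amp;lt; 3; ++k)
    {
        c.pos[k]      /= totalArea;
        c.emission[k] /= totalArea;
        c.albedo[k]   /= totalArea;
    }

    // re-normalise the averaged normal
    float len = sqrtf(c.normal[0]*c.normal[0] + c.normal[1]*c.normal[1] + c.normal[2]*c.normal[2]);
    for (int k=0; k &amp;lt; 3; ++k)
        c.normal[k] /= len;

    c.area = totalArea;
    return c;
}
&lt;/pre&gt;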

&lt;p&gt;Then you get to the traversal: Bunnell recommended skipping a sub-tree of surfels if the distance to the query point is &amp;gt; 4*radius of the cluster surfel.  That gives results practically identical to the brute force algorithm, and I think you can be more aggressive without losing much quality at all.&lt;/p&gt;

&lt;p&gt;I get between a 4-6x speed up using the hierarchy over brute force.  Not quite realtime yet, but I haven&#39;t optimised the tree structure at all, and I&#39;m also not running it on the GPU :)&lt;/p&gt;

&lt;p&gt;It seems like every post I make here has to reference Christer Ericson somehow but I really recommend his &lt;a href=&#34;http://realtimecollisiondetection.net/books/rtcd/&#34;&gt;book&lt;/a&gt; for ideas about optimising bounding volumes.  Loads of good stuff in there that I have yet to implement.&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://www1.cs.columbia.edu/~ravir/6160/papers/SHExp.pdf&#34;&gt;Real-time Soft Shadows in Dynamic Scenes using Spherical Harmonic Exponentiation&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://www.cs.cornell.edu/~kb/projects/lightcuts/&#34;&gt;Lightcuts: a scalable approach to illumination&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Indirect illumination</title>
          <link>http://blog.mmacklin.com/2009/01/11/indirect-illumination/</link>
          <pubDate>Sun, 11 Jan 2009 23:44:57 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/01/11/indirect-illumination/</guid>
          <description>&lt;p&gt;It&#39;s been a while since I checked in on the state of the art in global illumination but there is some seriously cool research happening at the moment.&lt;/p&gt;

&lt;p&gt;I liked the basic idea Dreamworks used on Shrek2 (&lt;a href=&#34;http://www.tabellion.org/et/paper/siggraph_2004_gi_for_films.pdf&#34;&gt;An Approximate Global Illumination System for Computer Generated Films&lt;/a&gt;) which stores direct illumination in light maps and then runs a final gather pass on that to calculate one bounce of indirect.  It might be possible to adapt this to real-time if you could pre-compute and store the sample coordinates for each point..&lt;/p&gt;

&lt;p&gt;However the current state of the art seems to be the point based approach that was first presented by Michael Bunnell with his GPU Gems 2 article &lt;a href=&#34;http://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch14.pdf&#34;&gt;Dynamic Ambient Occlusion and Indirect Lighting&lt;/a&gt;, he approximates the mesh as a set of oriented discs and computes the radiance transfer dynamically on the GPU.&lt;/p&gt;

&lt;p&gt;Turns out Pixar took this idea and now use it on all their movies: Pirates of the Caribbean, Wall-E, etc.  The technique is described here in &lt;a href=&#34;http://graphics.pixar.com/library/PointBasedColorBleeding/paper.pdf&#34;&gt;Point Based Approximate Color Bleeding&lt;/a&gt;.  Bunnell&#39;s original algorithm was O(N^2) in the number of surfels, but he used a clustered hierarchy to get that down to O(N log N); Pixar use an octree which stores a more accurate spherical harmonic approximation at each node.&lt;/p&gt;

&lt;p&gt;What&#39;s really interesting is how far Bunnell has pushed this idea, if you read through Fantasy Lab&#39;s &lt;a href=&#34;http://www.freepatentsonline.com/7408550.html&#34;&gt;recent patent&lt;/a&gt; (August 2008), there are some really nice ideas in there that I haven&#39;t seen published anywhere.&lt;/p&gt;

&lt;p&gt;Here&#39;s a summary of what&#39;s new since the GPU Gems 2 article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixed the slightly cumbersome multiple shadow passes by summing &#39;negative illumination&#39; from back-facing surfels&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Takes advantage of temporal coherence by simply iterating the illumination map each frame&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Added some directional information by subdividing the hemisphere into quadrants&lt;br&gt;&lt;/li&gt;
&lt;li&gt;Threw in some nice subdivision surface stuff at the end&lt;br&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anyway, I knocked up a prototype of the indirect illumination technique and it seems to work quite well.  The patent leaves out loads of information (and spends two pages describing what a GPU is), but it&#39;s not too difficult to work out the details (note the form factor calculation is particularly simplified).&lt;/p&gt;

&lt;p&gt;Here are the results from a &lt;em&gt;very&lt;/em&gt; low resolution mesh.  In reality you would prime your surfels with direct illumination calculated in the traditional way with shadow mapping / shaders and then let the sim bounce it around, but in this case I&#39;ve done the direct lighting using his method as well.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/01/gi_shot1.jpg&#34; alt=&#34;gi_shot1&#34; title=&#34;gi_shot1&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-206&#34; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2009/01/gi_shot2.jpg&#34; alt=&#34;gi_shot2&#34; title=&#34;gi_shot2&#34; width=&#34;90%&#34; class=&#34;aligncenter size-full wp-image-207&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Disclaimer: some artifacts are visible here due to the way illumination is baked back to the mesh and there aren&#39;t really enough surfels to capture all the fine detail but it&#39;s quite promising.&lt;/p&gt;

&lt;p&gt;This seems like a perfect job for CUDA or Larrabee as the whole algorithm can be run in parallel.  You can do it purely through DirectX or OpenGL but it&#39;s kind of nasty.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Branch free Clamp()</title>
          <link>http://blog.mmacklin.com/2009/01/09/branch-free-clamp/</link>
          <pubDate>Fri, 09 Jan 2009 23:03:37 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/01/09/branch-free-clamp/</guid>
          <description>&lt;p&gt;One of my work mates had some code with a lot of floating point clamps in it the other day so I wrote this little branch free version using the PS3&#39;s floating point select intrinsic:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float Clamp(float x, float lower, float upper)
{
    float t = __fsels(x-lower, x, lower);
    return __fsels(t-upper, upper, t);
}
&lt;/pre&gt;

&lt;p&gt;__fsels basically does this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
float __fsels(float x, float a, float b)
{
    return (x &gt;= 0.0f) ? a : b;
}
&lt;/pre&gt;

&lt;p&gt;I measured it to be 8% faster than a standard implementation; not a whole lot, but quite fun to write.  The SPUs have quite general selection functionality which is more useful; there&#39;s some stuff about it here:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://realtimecollisiondetection.net/blog/?p=90&#34;&gt;http://realtimecollisiondetection.net/blog/?p=90&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Not sure about this free Wordpress code formatting, I may have to move it to my own host soon)&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Two threads, one cache line</title>
          <link>http://blog.mmacklin.com/2009/01/09/two-threads-one-cache-line/</link>
          <pubDate>Fri, 09 Jan 2009 22:20:30 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2009/01/09/two-threads-one-cache-line/</guid>
          <description>&lt;p&gt;An interesting thread going around the &lt;a href=&#34;https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list&#34;&gt;GDA mailing list&lt;/a&gt; at the moment about multithreaded programming reminded me of a little test app I wrote a while back to measure the cost of two threads accessing memory on the same cache line.&lt;/p&gt;

&lt;p&gt;The program basically creates two threads which increment a variable a large number of times measuring the time it takes to complete with different distances between the write addresses.  Something like this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
__declspec (align (512)) volatile int B[128]; 


DWORD WINAPI ThreadProc(LPVOID param)
{
    // read/write the address a whole lot
    for (int i=0; i &amp;lt; 10000000; ++i)
    {
        (*(volatile int*)param)++;
    }

    return 0;
}


int main()
{
    volatile int* d1 = &amp;amp;B[0];
    volatile int* d2 = &amp;amp;B[127];

    while (d1 != d2)
    {
        HANDLE threads[2];

        // QPC wrapper
        double start = GetSeconds();

        threads[0] = CreateThread(NULL, 0, ThreadProc, (void*)d1, 0, NULL);
        threads[1] = CreateThread(NULL, 0, ThreadProc, (void*)d2, 0, NULL);

        WaitForMultipleObjects(2, threads, TRUE, INFINITE);

        double end = GetSeconds();

        --d2;

        cout &amp;lt;&amp;lt; (d2-d1) * sizeof(int) &amp;lt;&amp;lt; &amp;quot;bytes apart: &amp;quot; &amp;lt;&amp;lt; (end-start)*1000.0f &amp;lt;&amp;lt; &amp;quot;ms&amp;quot; &amp;lt;&amp;lt; endl;
    }
    
    int i;
    cin &amp;gt;&amp;gt; i;
    return 0;
}
&lt;/pre&gt;

&lt;p&gt;On my old P4 with a 64 byte cache line these are the results:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
128bytes apart: 17.4153ms
124bytes apart: 17.878ms
120bytes apart: 17.4028ms
116bytes apart: 17.3625ms
112bytes apart: 17.959ms
108bytes apart: 18.0241ms
104bytes apart: 17.2938ms
100bytes apart: 17.6643ms
96bytes apart: 17.5377ms
92bytes apart: 19.3156ms
88bytes apart: 17.2013ms
84bytes apart: 17.9361ms
80bytes apart: 17.1321ms
76bytes apart: 17.5997ms
72bytes apart: 17.4634ms
68bytes apart: 17.6562ms
64bytes apart: 17.4704ms
60bytes apart: 17.9947ms
56bytes apart: 149.759ms ***
52bytes apart: 151.64ms
48bytes apart: 150.132ms
44bytes apart: 125.318ms
40bytes apart: 160.33ms
36bytes apart: 147.889ms
32bytes apart: 152.42ms
28bytes apart: 157.003ms
24bytes apart: 149.552ms
20bytes apart: 142.372ms
16bytes apart: 136.908ms
12bytes apart: 145.691ms
8bytes apart: 146.768ms
4bytes apart: 128.408ms
0bytes apart: 125.655ms
&lt;/pre&gt;

&lt;p&gt;You can see when it gets to 56 bytes there is a large penalty (9x slower!) as it brings the cache-coherency protocol into play which forces the processor to reload from main memory.&lt;/p&gt;

&lt;p&gt;Actually it turns out this is called &amp;quot;false sharing&amp;quot; and it&#39;s quite well known; the common solution is to pad your shared data to be at least one cache line apart.&lt;/p&gt;
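
&lt;p&gt;For example, something like this keeps two counters on separate lines (a sketch assuming a 64 byte cache line):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
#define CACHE_LINE_SIZE 64

// pad each counter out to a full cache line so neighbouring entries never share one
struct PaddedCounter
{
    volatile int value;
    char pad[CACHE_LINE_SIZE - sizeof(int)];
};

PaddedCounter counters[2];   // counters[0] and counters[1] no longer falsely share
&lt;/pre&gt;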

&lt;p&gt;Refs:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://software.intel.com/en-us/articles/reduce-false-sharing-in-net/&#34;&gt;http://software.intel.com/en-us/articles/reduce-false-sharing-in-net/&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://www.ddj.com/embedded/196902836&#34;&gt;http://www.ddj.com/embedded/196902836&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>More Metaballs</title>
          <link>http://blog.mmacklin.com/2008/12/28/more-metaballs/</link>
          <pubDate>Sun, 28 Dec 2008 16:46:09 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/12/28/more-metaballs/</guid>
          <description>&lt;p&gt;So after running my Metaballs demo on my girlfriends laptop it appears to be GPU limited due to fillrate.  This is mainly due to the overkill number of particles in my test setup and the fact they hang round for so long, but the technique is fillrate heavy so it might be a problem.&lt;/p&gt;

&lt;p&gt;It&#39;d be nice to do a multithreaded CPU implementation to see how that compares, but the advantage of the current method is that it keeps the CPU free to do other things.&lt;/p&gt;

&lt;p&gt;You could probably get more performance in some cases by uploading all the metaballs as shader parameters (or as a texture) and evaluating them directly in the pixel shader.  Also I just realised I could probably render the &#39;density&#39; texture at a lower resolution for a cheap speedup.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>GPU Metaballs</title>
          <link>http://blog.mmacklin.com/2008/12/20/gpu-metaballs/</link>
          <pubDate>Sat, 20 Dec 2008 14:10:56 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/12/20/gpu-metaballs/</guid>
          <description>&lt;p&gt;I&#39;ve been meaning to implement this idea for ages, it&#39;s a GPU implementation of 2D metaballs.  It&#39;s very simple, very fast and doesn&#39;t even require hardware shader support.  Seems like the kind of effect that could be useful in some little game..&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;./wp-content/uploads/2008/12/metaballs1.gif&#34; alt=&#34;metaballs&#34; title=&#34;metaballs&#34; class=&#34;aligncenter size-full wp-image-161&#34; /&gt;&lt;/p&gt;

&lt;p&gt;It&#39;s quite fun to play with (use left drag to rotate the emitter), so I might try and fancy it up with some better graphics and collision.&lt;/p&gt;

&lt;p&gt;The demo is below.  I&#39;ll probably release source at some stage, but at the moment it&#39;s all tied up in my dev framework which needs to be cleaned up (the exe is also larger than it should be as I have all sorts of crap linked in).&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://mmacklin.dreamhosters.com/Metaballs.zip&#34;&gt;Demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The simulation is just done using my simple particle system but it would be fun to implement smoothed particle hydrodynamics..&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Sprite mipmaps</title>
          <link>http://blog.mmacklin.com/2008/11/03/sprite-mipmaps/</link>
          <pubDate>Mon, 03 Nov 2008 22:31:35 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/11/03/sprite-mipmaps/</guid>
          <description>&lt;p&gt;Information on how to correctly make sprite textures is just not on the web anywhere.  There are so many subtle problems with making transparent textures for use in a 3D engine, here are some of the things I learned recently from my hobby project:&lt;/p&gt;

&lt;p&gt;Generally the easiest way is to paint on a transparent background, that is, don&#39;t try and paint the alpha channel yourself.  That&#39;s because it&#39;s a complete nightmare trying to match up your RGB and alpha channels correctly.  This is one of the reasons why you see loads of games with dark &#39;halos&#39; around their particle textures (although it&#39;s not the only reason).  Unless you want to get really tricky and try to paint your own pre-multiplied texture it&#39;s not worth it; just create a new image in Photoshop with a transparent background and export as PNG.&lt;/p&gt;

&lt;p&gt;Note: the Photoshop Targa exporter won&#39;t export the alpha channel properly from a transparent image, it only supports alpha if you explicitly add the alpha channel to an RGB image and paint it yourself.  This is a slight pain because I&#39;ve always favoured Targa as an image file format that&#39;s easy to read / write but in this case Photoshop drops the ball once again.&lt;/p&gt;

&lt;p&gt;OK, now you have your image exported.  The image is currently NOT pre-multiplied; it may look like it in Photoshop and most image viewers, but on disc it is not.&lt;/p&gt;

&lt;p&gt;Read the file into ram using libPNG or your PNG library of choice.&lt;/p&gt;

&lt;p&gt;Generate mip maps... this is where it gets interesting.  If you just call something like gluBuild2DMipmaps() your texture will be wrong: the lower level mipmaps will have a dark halo around the edges.  What most artists do in this case is convert it to a non-transparent image and start painting some bright colour around the edges of their sprite in the RGB channels, which is the equivalent of a really nasty filthy hack from which no good can come.  You can never paint just the right colour in there and can never get it in just the right place; if you&#39;re having this problem go talk to your coders and get them to implement a better texture import pipeline (see below):&lt;/p&gt;

&lt;p&gt;The fix is really quite simple and you should probably have already guessed that it is to use pre-multiplied alpha.  If you haven&#39;t heard of it before read here:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]&#34;&gt;Tom Forsyth&#39;s blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://keithp.com/~keithp/porterduff/&#34;&gt;http://keithp.com/~keithp/porterduff/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although it&#39;s not mentioned on those sites, pre-multiplication is also the solution to building mipmaps correctly.  Let&#39;s imagine you don&#39;t pre-multiply before you downscale to build your mipmap; you have four source RGBA texels from the higher level:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;t1, t2, t3, t4
&lt;/pre&gt;
In some kind of box filter arrangement, then your destination pixel in the next lower mip is computed like this:

&lt;pre class=&#34;prettyprint&#34;&gt;lowermip.rgb = (t1.rgb + t2.rgb + t3.rgb + t4.rgb) / 4
lowermip.a = (t1.a + t2.a + t3.a + t4.a) /4&lt;/pre&gt;

&lt;p&gt;And when you come to render with your blend function set to GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA you get:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;framebuffer.rgb = lowermip.rgb * lowermip.a + framebuffer.rgb * (1.0-lowermip.a)&lt;/pre&gt;

&lt;p&gt;It should be fairly obvious why this doesn&#39;t work: the filtered RGB channel will contain information from neighbouring pixels even if those pixels have a zero alpha.  As the background in most exported source textures is black, it&#39;s the equivalent of &#39;pulling in&#39; black to your lower mip RGB channel.&lt;/p&gt;

&lt;p&gt;The very simple solution is to use pre-multiplication which makes your lower mip calculation like this:&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;lowermip.rgb = (t1.rgb*t1.a + t2.rgb*t2.a + t3.rgb*t3.a + t4.rgb*t4.a) / 4;
lowermip.a =  (t1.a + t2.a + t3.a + t4.a) / 4;&lt;/pre&gt;

&lt;p&gt;So now you&#39;re weighting each pixel&#39;s contribution to your lower mip by the appropriate alpha value.  And when you render you set your blend modes to GL_ONE, GL_ONE_MINUS_SRC_ALPHA.  Not only do your mips work correctly, you can now &#39;mix&#39; additive and alpha blending, which is great for fire turning into smoke for instance (see references below).&lt;/p&gt;
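
&lt;p&gt;In code, one level of the pre-multiplied box filter looks roughly like this (a sketch assuming a straight-alpha 8-bit RGBA source with even dimensions):&lt;/p&gt;

&lt;pre class=&#34;prettyprint&#34;&gt;
// downsample one RGBA8 mip level with a 2x2 box filter, weighting colour by alpha;
// the result is stored pre-multiplied, ready for GL_ONE, GL_ONE_MINUS_SRC_ALPHA blending
void DownsamplePremultiplied(const unsigned char* src, int srcWidth, int srcHeight, unsigned char* dst)
{
    int dstWidth  = srcWidth/2;
    int dstHeight = srcHeight/2;

    for (int y=0; y &amp;lt; dstHeight; ++y)
    {
        for (int x=0; x &amp;lt; dstWidth; ++x)
        {
            float rgb[3] = { 0.0f, 0.0f, 0.0f };
            float alpha = 0.0f;

            // average the 2x2 block of source texels
            for (int j=0; j &amp;lt; 2; ++j)
            {
                for (int i=0; i &amp;lt; 2; ++i)
                {
                    const unsigned char* t = &amp;amp;src[((y*2 + j)*srcWidth + (x*2 + i))*4];
                    float a = t[3] / 255.0f;

                    for (int k=0; k &amp;lt; 3; ++k)
                        rgb[k] += (t[k] / 255.0f)*a;    // weight colour by alpha (pre-multiply)

                    alpha += a;
                }
            }

            unsigned char* d = &amp;amp;dst[(y*dstWidth + x)*4];

            for (int k=0; k &amp;lt; 3; ++k)
                d[k] = (unsigned char)(rgb[k]*0.25f*255.0f + 0.5f);

            d[3] = (unsigned char)(alpha*0.25f*255.0f + 0.5f);
        }
    }
}
&lt;/pre&gt;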

&lt;p&gt;You don&#39;t need to export your image pre-multiplied, I fix mine up at import / asset cooking time which is probably preferable so you can keep working on the PNG source asset in a convenient way.&lt;/p&gt;

&lt;p&gt;This is really just the basics and mip-map filtering has had a lot of other good stuff written about it:&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;http://number-none.com/product/Mipmapping,%20Part%201/index.html&#34;&gt;http://number-none.com/product/Mipmapping,%20Part%201/index.html&lt;/a&gt;&lt;br&gt;
&lt;a href=&#34;http://number-none.com/product/Mipmapping,%20Part%202/index.html&#34;&gt;http://number-none.com/product/Mipmapping,%20Part%202/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also see &lt;a href=&#34;http://code.google.com/p/nvidia-texture-tools/&#34;&gt;http://code.google.com/p/nvidia-texture-tools/&lt;/a&gt; for some image manipulation source code (the Nvidia Photoshop DDS plugin also has an option to modulate your image by alpha when generating mip-maps).&lt;/p&gt;

&lt;p&gt;This is a great little paper about soft-edged particles and pre-multiplied alpha (even though it&#39;s not referred to as such): &lt;a href=&#34;http://wscg.zcu.cz/WSCG2006/Papers_2006/Short/A73-full.pdf&#34;&gt;Soft Edges and Burning Things: Enhanced Real-Time Rendering of Particle Systems&lt;/a&gt;&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>iPhone GL specs</title>
          <link>http://blog.mmacklin.com/2008/09/06/iphone-gl-specs/</link>
          <pubDate>Sat, 06 Sep 2008 12:23:52 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/09/06/iphone-gl-specs/</guid>
          <description>&lt;p&gt;So I&#39;ve been trying to find out a bit more about the iPhone&#39;s capabilities, &lt;a href=&#34;http://www.glbenchmark.com/phonedetails.jsp?benchmark=pro&amp;amp;D=Apple%20iPhone&amp;amp;testgroup=gl&#34;&gt;this site&lt;/a&gt; has some good details.  No shaders, roughly enough power for 15000 lit triangles @ 30hz.  Not too shabby, I wonder about the power consumption and if certain hardware paths drain the battery faster than others.  It would be nice to make something that people can play for more than 3 hours.&lt;/p&gt;

&lt;p&gt;So the word on the street is that O2 are bringing out a pre-pay iPhone in December.  I&#39;m currently debating whether to wait for that or just get an iPod touch, which you can still develop for using pretty much the same feature set.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>Parallelisation and cups of tea</title>
          <link>http://blog.mmacklin.com/2008/07/18/parallelisation-and-cups-of-tea/</link>
          <pubDate>Fri, 18 Jul 2008 08:51:54 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/07/18/parallelisation-and-cups-of-tea/</guid>
          <description>&lt;p&gt;While I was at Sony I spent a lot of time thinking about making tasks run concurrently and how they should be scheduled to maximise performance.&lt;/p&gt;

&lt;p&gt;Recently this kind of analysis has been spilling over into my everyday life and you start seeing ways you could parallelise all sorts of tasks.  Of course this is just common sense and something we all do naturally to different degrees.&lt;/p&gt;

&lt;p&gt;A simple example is making a cup of tea, you don&#39;t want to get the tea bags and the cups first, no, that would be a waste of precious milliseconds.  First you switch on the jug and then get to work on the rest of the process in parallel.  I imagine anyone who has worked in a kitchen before would be experts at this kind of scheduling and could probably do my job better than I can.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
        <item>
          <title>New job</title>
          <link>http://blog.mmacklin.com/2008/07/17/5/</link>
          <pubDate>Thu, 17 Jul 2008 23:25:31 &#43;0000</pubDate>
          <author></author>
          <guid>http://blog.mmacklin.com/2008/07/17/5/</guid>
          <description>&lt;p&gt;So I took a new job today, working for Rocksteady in London.  Quite exciting, they use Unreal3 tech which I&#39;ve always respected (and stolen ideas from).  It&#39;s a big game title and technically I don&#39;t know what it is.. but it&#39;s a big franchise and should do well.&lt;/p&gt;

&lt;p&gt;Plus working with friends in Kentish Town, s&#39;all good.&lt;/p&gt;
 [...]</description>
        </item>
      
    
      
    
      
    

  </channel>
</rss>
