Subject: Re: Fur paper (2)
From: Martin Prazak
Date: Sat, 19 Jan 2008 11:59:22 +0100
To: Erik Reinhard

Hi Erik,

So, the actual algorithm looks like this:

Depth recovery:
It's implemented the same way as in your paper. I wanted to experiment with a direct application of lighting model equations, but there was no time for this.
Khan, E. A., Reinhard, E., Fleming, R. W., and Bülthoff, H. H. 2006. Image-based material editing. ACM Trans. Graph. 25, 3 (Jul. 2006), 654-663.
I suppose you know this one :)
I will not mention the papers you already have there, like "Depth discrimination from shading under diffuse lighting" and so on.
But you don't have the "mental eye" one and the "on seeing stuff" one there; I am not sure how important they are, but they were a nice read... :)

- luminance map
- conversion to logarithmic domain
Durand, F. and Dorsey, J. 2002. Fast bilateral filtering for the display of high-dynamic-range images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (San Antonio, Texas, July 23-26, 2002). SIGGRAPH '02. ACM, New York, NY, 257-266.
- bilateral filter, implemented according to the newer paper:
Chen, J., Paris, S., and Durand, F. 2007. Real-time edge-aware image processing with the bilateral grid. In ACM SIGGRAPH 2007 Papers (San Diego, California, August 05-09, 2007). SIGGRAPH '07. ACM, New York, NY, 103.
- inversion of sigmoidal compression
- a few times the "magic" function from your paper

The depth map is then represented as a luminance map and used in rendering (a 32-bit grayscale TIFF image).
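The front end of the steps above could be sketched roughly like this. This is a brute-force bilateral filter for illustration only; the actual implementation uses the bilateral grid from the Chen et al. paper, and the sigmoid inversion and "magic" function steps are omitted:

```cpp
#include <cmath>
#include <vector>

// Convert an RGB pixel to luminance (Rec. 601 weights).
double luminance(double r, double g, double b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Brute-force bilateral filter of a log-luminance image (width*height, row-major).
// sigma_s: spatial sigma in pixels; sigma_r: range sigma in log-luminance units.
std::vector<double> bilateralFilter(const std::vector<double>& img,
                                    int width, int height,
                                    double sigma_s, double sigma_r) {
    std::vector<double> out(img.size());
    const int radius = static_cast<int>(2.0 * sigma_s);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const double center = img[y * width + x];
            double sum = 0.0, weight = 0.0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                    const double v = img[ny * width + nx];
                    // spatial Gaussian times range Gaussian
                    const double ws = std::exp(-(dx * dx + dy * dy) / (2.0 * sigma_s * sigma_s));
                    const double wr = std::exp(-(v - center) * (v - center) / (2.0 * sigma_r * sigma_r));
                    sum += ws * wr * v;
                    weight += ws * wr;
                }
            }
            out[y * width + x] = sum / weight;
        }
    }
    return out;
}
```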

Generation of the fur:
- the gradient computation is also based on your paper
- then a polygonal mesh is constructed (2 triangular polygons per pixel)
- the number of hairs per polygon is computed using the formula haircount = floor(polygon.GetArea() * hairs_per_unit) + (float)rand()/(float)RAND_MAX; which for high hair densities can even give more than 1 hair per pixel (this is not much used so far); the result is then multiplied by the fur mask value
- the actual placement of a hair is again random inside the polygon, using barycentric coordinates:
        l1 = (float)rand()/(float)RAND_MAX;
        l2 = (float)rand()/(float)RAND_MAX;
        if(l1 + l2 > 1) { l1 = 1 - l1; l2 = 1 - l2; } /* fold back into the triangle */
        base = vertices[0] * l1 + vertices[1] * l2 + vertices[2] * (1.0 - l1 - l2);
- the normal is also interpolated the same way
- the normal, base point and length (which can be adjusted by another map) are then used as input parameters for a simple particle simulation: the normal is randomly perturbed (each component can change by up to +-30%) and used as the starting particle velocity. The particles are attracted by gravity (always the vector [0,-1,0]) and integration is based on the simplest Euler method. No interaction between particles, or with the underlying surface, is implemented (it would be necessary for really long and curly hair, but then the whole simulation would become much more complex, because it would turn into a large system of equations rather than a simple numerical simulation)
- then we have the description of the fur shape
Inspiration for the particle system came from this paper:
N. Magnenat-Thalmann, S. Hadap, and P. Kalra. State of the art in hair simulation. Proceedings of International Workshop on Human Modeling and Animation 2000, 3--9, 2000.
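The placement and simulation steps above can be sketched as follows. The random numbers are passed in explicitly here for clarity, and the names (Vec3, eulerStep, hairCount) are illustrative, not from the actual code:

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Stochastic hair count per polygon: integer part plus a random fraction
// (u is uniform in [0,1)), then scaled by the fur mask value.
int hairCount(float area, float hairsPerUnit, float mask, float u) {
    return static_cast<int>((std::floor(area * hairsPerUnit) + u) * mask);
}

// Uniform random point inside a triangle via barycentric coordinates,
// folding (l1, l2) back into the valid simplex as in the snippet above.
Vec3 randomPointInTriangle(const Vec3 v[3], float l1, float l2) {
    if (l1 + l2 > 1.0f) { l1 = 1.0f - l1; l2 = 1.0f - l2; }
    return v[0] * l1 + v[1] * l2 + v[2] * (1.0f - l1 - l2);
}

// One explicit Euler step of a hair particle: gravity is the constant [0,-1,0].
void eulerStep(Vec3& pos, Vec3& vel, float dt) {
    const Vec3 gravity = {0.0f, -1.0f, 0.0f};
    pos = pos + vel * dt;   // advance position with current velocity
    vel = vel + gravity * dt; // then accelerate towards gravity
}
```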

Hair representation:
- to avoid aliasing issues, each hair is rendered as a flat ribbon always facing the camera, and the lighting model is computed as an integration around the half-cylinder of the hair surface (see below)
- the cone integration is discussed in:
J.T. Kajiya and T.L. Kay. Rendering fur with three dimensional textures. Computer Graphics, 23(3):271–280, 1989.
- I am not sure where the idea of "ribbon-like" curves came from; according to Thalmann's survey, it's from
DALDEGAN, A., AND MAGNENAT-THALMANN, N. Creating virtual fur and hair styles for synthetic actors. In Communicating with Virtual Worlds (1993), N. Magnenat-Thalmann and D. Thalmann, Eds., Springer-Verlag.
but I haven't read this one, to be honest.
- the curves are triangle-shaped (the root is thick, the tip is thin) and become more transparent towards the tip (opacity = (1-sqrt(v))*root_opacity, where v -> 1 linearly towards the tip), and they have a rounded root to avoid artifacts (computed in the color shader too)
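The opacity taper is simple enough to state directly; this is just the formula above, with v running from 0 at the root to 1 at the tip:

```cpp
#include <cmath>

// Opacity along the ribbon: v is the normalized position along the hair
// (0 = root, 1 = tip); opacity = (1 - sqrt(v)) * root_opacity.
float ribbonOpacity(float v, float root_opacity) {
    return (1.0f - std::sqrt(v)) * root_opacity;
}
```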

Hair lighting model:
- the first model was the original Kajiya-Kay model:
J.T. Kajiya and T.L. Kay. Rendering fur with three dimensional textures. Computer Graphics, 23(3):271–280, 1989.
but it's too "sharp".
- more general is Banks's extension, which gives better results (illustration 2 in the original paper), but I haven't implemented the clamping, which I suppose could have made the result more realistic... :(
Banks, D. C. 1994. Illumination in diverse codimensions. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '94. ACM, New York, NY, 327-334.
- originally I used the description from another paper, which was much simpler:
Lengyel, J., Praun, E., Finkelstein, A., and Hoppe, H. 2001. Real-time fur over arbitrary surfaces. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, I3D '01. ACM, New York, NY, 227-232.
- the new model is based on my nightmares and Marschner's paper: the actual analytical solution to ray tracing inside a filament, with Gaussians used to approximate inaccuracies in the hair shape and to make the numerical solution look more realistic, since its most important TRT (transmission - reflection inside the hair - transmission) component contains "glitches" with infinite intensity.
Marschner, S. R., Jensen, H. W., Cammarano, M., Worley, S., and Hanrahan, P. 2003. Light scattering from human hair fibers. ACM Trans. Graph. 22, 3 (Jul. 2003), 780-791.
I had to adjust quite a few things which were not described exactly in the paper. I don't think it's as interesting or important, but if you want, I can make a list and give you quite a few "scientific" graphs from when I was trying to figure out what's wrong in the model. It's kind of strange, because these graphs were not published to my knowledge, and a direct implementation of the paper is quite tricky and almost impossible to verify.
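For reference, a minimal sketch of the Kajiya-Kay terms mentioned above (the Banks extension and the Marschner model are more involved and not reproduced here); t, l and e are assumed to be unit tangent, light and eye vectors:

```cpp
#include <algorithm>
#include <cmath>

float dot3(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Kajiya-Kay hair shading for one light:
//   diffuse  ~ kd * sin(t, l)
//   specular ~ ks * ((t.l)(t.e) + sin(t,l) sin(t,e))^p
float kajiyaKay(const float t[3], const float l[3], const float e[3],
                float kd, float ks, float p) {
    const float tl = dot3(t, l);
    const float te = dot3(t, e);
    // sin from cos via the identity sin^2 + cos^2 = 1 (clamped for safety)
    const float sin_tl = std::sqrt(std::max(0.0f, 1.0f - tl * tl));
    const float sin_te = std::sqrt(std::max(0.0f, 1.0f - te * te));
    const float diffuse  = kd * sin_tl;
    const float specular = ks * std::pow(std::max(0.0f, tl * te + sin_tl * sin_te), p);
    return diffuse + specular;
}
```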

Lighting of the scene:
- the lighting of the scene is based either on the original IBME paper, where the half-sphere mapping is used, or on a simple distant light source
- when the half-sphere mapping is used, the deep shadow algorithm is used to compute shadows. This is only possible with many light sources all around the scene with quite similar intensities; otherwise the shadows would reveal the underlying shape too much. It is also necessary to shift the shadow map to avoid self-shadowing of every hair, but this can cause problems when shading the underlying surface the same way, because it introduces artifacts. It's nicely described here:
Ahokas, T. 2002. Shadow Maps. Helsinki University of Technology
and the original deep shadow maps paper is:
Lokovic, T. and Veach, E. 2000. Deep shadow maps. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 385-392.

Composition with background:
- the compositing is done in LDR using an imager shader in Renderman. It's just a really simple thing... :)

Rendering of the whole thing:
- it's based on 3Delight, an implementation of the Renderman interface. I couldn't find an original Renderman paper, so I am not sure if there is any, but there has been a course on Renderman at practically every Siggraph for the past 10 years...
- the main idea was to use different shaders to get stuff done. So...
    - imager shader "background_texture" can map the texture back to the original image (not used anymore)
    - shader "fur" is a surface shader with Kajiya / Kay / Banks lighting model
    - shader "fur2" is Marschner's model
    - shader "plastic_background" is used for the underlying polygon, which serves as the underlying shape and also recomposites the whole thing into the background as a texture on the polygon
    - "shadowdistant2" is the distant-light-with-shadows shader as shown in one of the Renderman tutorials from Siggraph (I can look it up exactly, if you want to add it to the references, but it's a standard way to do it)
    - shader "shape" creates the underlying shape from the polygon using displacement mapping based on the estimated-depth 32-bit grayscale texture

So, I hope that's all... If there is something missing, please send me an email. There is not much innovation in here, except for a few tiny fixes to things from the papers.


Erik Reinhard wrote:

Hey Martin,

Do you think you could possibly send me the input image
that you used for furry_hat? I'd like to put them side-by-side
in the paper... (I'll continue with this tomorrow, so no
immediate hurry)



Erik Reinhard

Quoting Martin Prazak <>:

Hi Erik,

Thank you for the emails! Wow, how can you write this thing so fast? :)

The accents are right, thanks! Just the university name is "Brno
University of Technology" for both me and Pavel. Also, I am not sure,
but the "googlemail" email address is not very nice; maybe I'll try to
ask for an address in Dublin to make it look better.

Maybe it would be good to mention that the "dark-is-deep" assumption
works only under diffuse or not-so-obvious direct lighting, because
otherwise the shadows would ruin the resulting shape... Otherwise it's
really great! :)

About the techniques, I'll try to make a list of them and send it to
you today.

The exploring of the parameter space... Well, I'll try, at least to
some extent, but it's a painful process on my poor old laptop...

And also, how do you like the last images of "hat"?


Erik Reinhard wrote:

Hi Guys,

Here is version 2 of our fur rendering paper. The updates
are only minor, but include Diego's.

I did correct Pavel's e-mail address, so that hopefully he
now receives my e-mail as well. It's late, sorry for being

Please all check that I have typed your name correctly!
Martin, did I get the accents right?

Tania, I think it is decision time for you. You have an
official name and a call name. This is the right time to
choose under which name you would like to publish your
papers. For career management, it would be best to choose
now, and then always stick with your choice. Otherwise
you will end up with a CV that might raise questions for
future employers...



Erik Reinhard