Subject: Re: Fur paper (2)

From: Martin Prazak

Date: Sat, 19 Jan 2008 11:59:22 +0100

To: Erik Reinhard

Hi Erik,

So, the actual algorithm looks like this:

Khan, E. A., Reinhard, E., Fleming, R. W., and Bülthoff, H. H. 2006. Image-based material editing.

I suppose you know this one :)

I will not mention the papers you already have there, like "Depth discrimination from shading under diffuse lighting" and so on.

But you don't have the "mental eye" one and the "on seeing stuff" one there; I am not sure how important they are, but they were nice reading... :)

- luminance map

- conversion to logarithmic domain

Durand, F. and Dorsey, J. 2002. Fast bilateral filtering for the display of high-dynamic-range images. In Proceedings of SIGGRAPH 2002.

- bilateral filter, implementation according to the new paper

Chen, J., Paris, S., and Durand, F. 2007. Real-time edge-aware image processing with the bilateral grid. In Proceedings of SIGGRAPH 2007.

- inversion of sigmoidal compression

- the "magic" function from your paper, applied a few times

The depth map is then represented as a luminance map and used in rendering (a 32-bit grayscale TIFF image).
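The luminance-to-depth steps above could be sketched roughly like this (a brute-force bilateral filter for clarity only; the actual implementation follows the much faster Durand/Dorsey and Chen et al. formulations, and the sigmoid inversion and "magic" function are omitted here; all names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Log-luminance of an RGB pixel (Rec. 709 weights); a small epsilon avoids log(0).
double logLuminance(double r, double g, double b) {
    double Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    return std::log(Y + 1e-6);
}

// Brute-force bilateral filter on a grayscale image of size w x h.
// sigmaS: spatial falloff in pixels, sigmaR: range falloff in log-luminance.
std::vector<double> bilateralFilter(const std::vector<double>& img,
                                    int w, int h,
                                    double sigmaS, double sigmaR) {
    std::vector<double> out(img.size());
    int radius = (int)std::ceil(2.0 * sigmaS);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double center = img[y * w + x];
            double sum = 0.0, wsum = 0.0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx < 0 || xx >= w || yy < 0 || yy >= h) continue;
                    double v = img[yy * w + xx];
                    // spatial weight times range weight (both Gaussian)
                    double ws = std::exp(-(dx * dx + dy * dy) / (2.0 * sigmaS * sigmaS));
                    double wr = std::exp(-(v - center) * (v - center) / (2.0 * sigmaR * sigmaR));
                    sum += ws * wr * v;
                    wsum += ws * wr;
                }
            out[y * w + x] = sum / wsum;
        }
    return out;
}
```

The filtered result would then go through the inverted sigmoidal compression and the reshaping function before being written out as the 32-bit grayscale depth map.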

- gradient also based on your paper

- then a polygonal mesh is constructed (2 triangular polygons per pixel)
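The mesh construction might look like the sketch below (my assumptions, not the actual code: one vertex per pixel of the depth map, laid out row-major, two triangles per pixel cell):

```cpp
#include <cassert>
#include <vector>

struct Tri { int a, b, c; };  // indices into the (w * h) vertex grid

// Build two triangles per pixel cell of a w x h depth map.
std::vector<Tri> buildGridMesh(int w, int h) {
    std::vector<Tri> tris;
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            int i = y * w + x;                          // top-left vertex of the cell
            tris.push_back({i, i + 1, i + w});          // upper-left triangle
            tris.push_back({i + 1, i + w + 1, i + w});  // lower-right triangle
        }
    return tris;
}
```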

- the number of hairs per polygon is computed using the formula

haircount = floor(polygon.GetArea() * hairs_per_unit) + (float)rand()/(float)RAND_MAX;

which for high hair densities can even give more than 1 hair per pixel (it's not used much so far); the result is then multiplied by the fur mask value

- the actual placement of hair is again random inside the polygon using barycentric coords

l1 = (float)rand()/(float)RAND_MAX;
l2 = (float)rand()/(float)RAND_MAX;
if(l1 + l2 > 1) { l1 = 1 - l1; l2 = 1 - l2; }

base = vertices[0] * l1 + vertices[1] * l2 + vertices[2] * (1.0 - l1 - l2);

- the normal is also interpolated the same way
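Put together, the per-polygon placement could look like this (a hypothetical Vec3 type; the fur-mask multiplication is left out, and the count uses a stochastic-rounding variant of the formula above):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 {
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

double frand() { return (double)rand() / (double)RAND_MAX; }

// Expected hair count for a polygon, with a random fractional carry so that
// fractional densities average out correctly over many polygons.
int hairCount(double area, double hairsPerUnit) {
    return (int)std::floor(area * hairsPerUnit + frand());
}

// Uniform sample inside a triangle via folded barycentric coordinates;
// the same weights interpolate the vertex normals.
void sampleHair(const Vec3 v[3], const Vec3 n[3], Vec3& base, Vec3& normal) {
    double l1 = frand(), l2 = frand();
    if (l1 + l2 > 1.0) { l1 = 1.0 - l1; l2 = 1.0 - l2; }
    double l3 = 1.0 - l1 - l2;
    base   = v[0] * l1 + v[1] * l2 + v[2] * l3;
    normal = n[0] * l1 + n[1] * l2 + n[2] * l3;
}
```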

- the normal, base point and length (which can be adjusted by another map) are then used as input parameters for a simple particle simulation, where the normal is randomly perturbed (each component can change by up to +-30%) and used as the starting particle velocity. The particles are attracted by gravity (always the vector [0,-1,0]) and integration is based on the simplest Euler method. No interaction between particles, or with the underlying surface, is implemented (it would be necessary for really long and curly hair, but then the whole simulation would become much more complex, because it would turn into a large system of equations rather than a simple numerical simulation)
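A minimal sketch of that per-strand step (hypothetical names; a jittered copy of the normal as initial velocity, gravity (0,-1,0), forward Euler):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

double frand() { return (double)rand() / (double)RAND_MAX; }

// Trace one strand: start at 'base' with a jittered copy of 'normal' as the
// initial velocity, pull it down by gravity, and record the points visited.
std::vector<Vec3> growStrand(Vec3 base, Vec3 normal, int steps, double dt) {
    auto jitter = [](double c) { return c * (1.0 + 0.6 * frand() - 0.3); };  // +-30% per component
    Vec3 p = base;
    Vec3 v = { jitter(normal.x), jitter(normal.y), jitter(normal.z) };
    const Vec3 g = { 0.0, -1.0, 0.0 };
    std::vector<Vec3> pts = { p };
    for (int i = 0; i < steps; ++i) {
        // forward Euler: integrate velocity, then position
        v.x += g.x * dt; v.y += g.y * dt; v.z += g.z * dt;
        p.x += v.x * dt; p.y += v.y * dt; p.z += v.z * dt;
        pts.push_back(p);
    }
    return pts;
}
```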

- then we have the description of the fur shape

Inspiration for the particle system was from this paper

N. Magnenat-Thalmann, S. Hadap, and P. Kalra.

- the cone integration discussion:

J.T. Kajiya and T.L. Kay. Rendering fur with three dimensional textures. Computer Graphics, 23(3):271–280, 1989.

- I am not sure where the idea of "ribbon-like" curves came from; according to Thalmann's survey it's from

DALDEGAN, A., AND MAGNENAT-THALMANN, N. Creating virtual fur and hair styles for synthetic actors. In Communicating with Virtual Worlds (1993), N. Magnenat-Thalmann and D. Thalmann, Eds., Springer-Verlag.

but I haven't read this one to be honest.

- the curves are triangle-shaped (the root is thick, the tip is thin) and become more transparent towards the tip (opacity = (1 - sqrt(v)) * root_opacity, where v goes linearly to 1 towards the tip), and they have a rounded root to avoid artifacts (computed in the color shader too)

but it's too "sharp".
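The tip fade described above, written out as a tiny helper (purely illustrative):

```cpp
#include <cassert>
#include <cmath>

// Strand opacity: root_opacity at the root (v = 0), fading to zero at the
// tip (v = 1); the sqrt keeps the strand solid longer before the fade.
double strandOpacity(double v, double root_opacity) {
    return (1.0 - std::sqrt(v)) * root_opacity;
}
```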

- more general is Banks' extension, which gives better results (illustration 2 in the original paper), but I haven't implemented the clamping, which I suppose could have made the result more realistic... :(

Banks, D. C. 1994. Illumination in diverse codimensions. In Proceedings of SIGGRAPH 94.

- originally I used the description from another paper, which was much simpler

Lengyel, J., Praun, E., Finkelstein, A., and Hoppe, H. 2001. Real-time fur over arbitrary surfaces. In Proceedings of the 2001 Symposium on Interactive 3D Graphics.

- the new model is based on my nightmares and Marschner's paper: the actual analytical solution to ray tracing inside a filament, with Gaussians used to approximate inaccuracies in the hair shape. The most important component, TRT (transmission, reflection inside the hair, transmission), contains "glitches" with infinite intensity, and the Gaussians make the numerical solution look more realistic.

Marschner, S. R., Jensen, H. W., Cammarano, M., Worley, S., and Hanrahan, P. 2003. Light scattering from human hair fibers.

I had to adjust quite a few things that were not described exactly in the paper. I don't think it's as interesting or important, but if you want, I can make a list and give you quite a few "scientific" graphs from when I was trying to figure out what's wrong in the model. It's kind of strange, because these graphs were not published to my knowledge, and a direct implementation of the paper is quite tricky and almost impossible to verify.

Lighting of the scene:

- when the half-sphere mapping is used, the deep shadow algorithm is used to compute shadows. This is only possible with multiple light sources all around the scene with fairly similar intensities; otherwise the shadows would reveal the underlying shape too much. It's also necessary to offset the shadow map to avoid self-shadowing of every hair, but that can cause problems when shading the underlying surface the same way, because it introduces artifacts. It's nicely described here:

Ahokas, T. 2002. Shadow Maps. Helsinki University of Technology

and the original deep shadow maps paper is:

Lokovic, T. and Veach, E. 2000. Deep shadow maps. In Proceedings of SIGGRAPH 2000.
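The self-shadowing offset mentioned above is the usual depth-bias trick; roughly (an illustrative sketch, not the actual shader):

```cpp
#include <cassert>

// Shadow-map lookup with a depth bias: a fragment is lit unless its depth
// (as seen from the light) is meaningfully behind the stored occluder depth.
// Without the bias, numerical noise makes every hair shadow itself ("acne");
// too large a bias detaches shadows from the surface instead.
bool isLit(double storedDepth, double fragmentDepth, double bias) {
    return fragmentDepth <= storedDepth + bias;
}
```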

Rendering of the whole thing:

- the main idea was to use different shaders to get stuff done. So...

- imager shader "background_texture" can map the texture back to the original image (not used anymore)

- shader "fur" is a surface shader with Kajiya / Kay / Banks lighting model

- shader "fur2" is Marschner's model

- shader "plastic_background" is the shader used for the underlying polygon, which serves as the underlying shape and also recomposites the whole thing into the background as a texture of the polygon

- "shadowdistant2" is the distant-light-with-shadows shader as shown in one of the RenderMan tutorials from SIGGRAPH (I can look it up exactly, if you want to add it to the references, but it's a standard way to do it)

- shader "shape" creates the underlying shape from the polygon using displacement mapping based on the estimated depth (the 32-bit grayscale texture)

So, I hope that's all... If something is missing, please send me an email. There is not much innovation in here, except a few tiny fixes to things from the papers.

Martin

Erik Reinhard wrote:

Hey Martin,

Do you think you could possibly send me the input image

that you used for furry_hat? I'd like to put them side-by-side

in the paper... (I'll continue with this tomorrow, so no

immediate hurry)

Cheers,

Erik

_______________________________________

Erik Reinhard reinhard@cs.ucf.edu

reinhard@cs.bris.ac.uk

_______________________________________

Quoting Martin Prazak <martin.prazak@googlemail.com>:

Hi Erik,

Thank you for the emails! Wow, how can you write this thing so fast? :)

The accents are right, thanks! Just the university name is "Brno University of Technology" for both me and Pavel. Also, I am not sure, but the "googlemail" email address is not very nice; maybe I'll try to ask for an address in Dublin to make it look better.

Maybe it would be good to mention that the "dark-is-deep" assumption works only under diffuse or not-so-obvious direct lighting, because otherwise the shadows would ruin the resulting shape... Otherwise it's really great! :)

About the techniques, I'll try to make a list of them and send it to

you today.

The exploring of the parameter space... Well, I'll try at least to some extent, but it's a painful process on my poor old laptop...

And also, how do you like the last images of "hat"?

Martin

Erik Reinhard wrote:

Hi Guys,

Here is version 2 of our fur rendering paper. The updates

are only minor, but include Diego's.

I did correct Pavel's e-mail address, so that hopefully he

now receives my e-mail as well. It's late, sorry for being

dim.

Please all check that I have typed your name correctly!

Martin, did I get the accents right?

Tania, I think it is decision time for you. You have an

official name and a call name. This is the right time to

choose under which name you would like to publish your

papers. For career management, it would be best to choose

now, and then always stick with your choice. Otherwise

you will end up with a CV that might raise questions for

future employers...

Cheers,

Erik
