Original picture - tonemapped using a very simple exponential tonemapping operator (I'll implement something better in time).
Bilateral filter - applied to an LDR image.
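The bilateral filter itself can be sketched as below - a naive, unoptimized version for a grayscale image (parameter names are my own):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter for a 2-D grayscale image: each output
    pixel is a weighted average where the weight combines spatial
    distance (sigma_s) and intensity difference (sigma_r), so strong
    edges are preserved while flat regions are smoothed."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # spatial kernel
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range kernel: penalize pixels whose intensity differs from the center
            rng = np.exp(-((patch - img[y, x]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```

With a small `sigma_r` a hard edge passes through almost untouched, which is exactly why it works well on the depth maps below.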
Depth map - processed using a sigmoid, a bilateral filter, an inverse sigmoid, and three applications of the "magic" function.
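The sigmoid / inverse-sigmoid pair of that pipeline can be sketched as below, assuming the standard logistic sigmoid (the "magic" function is left out, since its definition is not given here):

```python
import numpy as np

def sigmoid(x):
    """Compress an unbounded depth range into (0, 1) before filtering."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=np.float64)))

def inverse_sigmoid(y):
    """Map the filtered (0, 1) values back to the original depth range."""
    y = np.asarray(y, dtype=np.float64)
    return np.log(y / (1.0 - y))
```

Filtering in the compressed domain keeps extreme depth values from dominating the bilateral weights; the inverse then restores the original range.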
Depth map - detail in 3D - shows the "coarseness" problem of the technique used in the new implementation. Although it looks quite horrible here, it is not actually visible in the result at all, and this implementation is faster than the original one by two orders of magnitude.
Background of the rendering - filled using the very simple technique from the paper.
Glass rendering - rendered using 3Delight and three custom shaders; a really simple implementation. The result needs some parameter tweaking, but it works.
Fur cover - implementation of fur covering a model reconstructed from a depth map.
Fur ball - fur on a ball, with a correct lighting model.
Fur ball with shadows - fur on a ball, with a correct lighting model, shadows and self-shadowing.
Fur cover on statue - without a color model; in the second image the normals are randomly perturbed by approx. 33%.
Fur on statue without lighting model - every hair represented as a Bézier curve; no lighting model; 207,299 hairs.
Fur on statue with linear lighting model - the same as above, except the color of every hair runs from black at the root to red at the tip. Notice the hair-direction problems in the highlight areas.
Fur on statue with (hopefully) correct lighting model - just without any shadowing (!! incorrect hair randomness: random only in the X and Y axes, straight in Z !!).
Encapsulated RenderMan shadows via shadow maps - 3 days of work... :(
Fur with shadowing - the effect of shadows - shadow cast from the top front (light vector [0 1 1]).
Different light sources - shown separately and combined, with different colors.
Different hair lengths - left => approx. 200k hairs, length 0.1; right => approx. 400k hairs, length 0.03.
Resolution of shadow map - left => 512 x 512; right => 2048 x 2048 (the "simulation" of deep shadows). The actual effect on the result is really minimal; deep shadows would only make a difference for partially transparent hair, which is not used here.
Sampling of HDR image for image-based lighting using Ostromoukhov's implementation of his sampler.
Errors in Ostromoukhov's sampling technique, caused by the "tiling" algorithm, which does not examine values in all possible directions, just in the direction of another "tile" of the Penrose tiling. Although it gives nice results for real-world images, the whole method fails for artificial or otherwise extreme images. Also, the "randomness" added by the table-based relaxation sometimes misses the proper surface, placing samples in "unimportant" areas.
Image-based lighting - a sphere lit by a set of lights generated from HDR images (left - the star image as above, right - iv).
Background compositing - the polygon and displacement map, with a little bit of fur on it. The previous renders were INCORRECT - the fur was reversed along the Z axis (left-handed coordinate system, again). The first two images were rendered with 9 and 26 lights respectively; the third is the second composited into the background.
Coarse fur - a demonstration of the actual fur rendering and its lighting model - each hair is a "flat ribbon" curve; the lighting model is an integration of a Phong-shaded cylinder along the hair's width. Shadows of individual hairs are also visible in the left image.
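Integrating Phong shading around a thin cylinder gives, in closed form, essentially the classic Kajiya-Kay hair model, which depends only on the hair tangent rather than a surface normal. A minimal sketch under that assumption (parameter names are my own, not from the original shaders):

```python
import numpy as np

def kajiya_kay(tangent, light, view, kd=0.6, ks=0.3, shininess=20.0):
    """Kajiya-Kay-style hair shading.
    Diffuse  ~ sin of the angle between tangent T and light L.
    Specular ~ how close the view direction lies to the cone of
    mirror reflections around the hair tangent."""
    t = np.asarray(tangent, dtype=np.float64); t /= np.linalg.norm(t)
    l = np.asarray(light, dtype=np.float64);   l /= np.linalg.norm(l)
    v = np.asarray(view, dtype=np.float64);    v /= np.linalg.norm(v)
    t_dot_l = float(np.dot(t, l))
    t_dot_v = float(np.dot(t, v))
    sin_tl = np.sqrt(max(0.0, 1.0 - t_dot_l**2))
    sin_tv = np.sqrt(max(0.0, 1.0 - t_dot_v**2))
    diffuse = kd * sin_tl
    # sin(TL)sin(TV) - (T.L)(T.V) peaks at 1 exactly on the reflection cone
    specular = ks * max(0.0, sin_tl * sin_tv - t_dot_l * t_dot_v) ** shininess
    return diffuse + specular
```

Because there is no normal term, a hair lit and viewed perpendicular to its tangent gets both maximum diffuse and maximum specular, which matches the bright band visible across the ribbons.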
Fur density - from the top left, approximate counts: 1,000 hairs, 10,000 hairs, 100,000 hairs, 200,000 hairs, and 200,000 hairs 3x longer than in the previous images.
The problematic part of the new model.
New lighting model - on the left is a rendering of a ball made of hairs running from the top of the sphere to the bottom, lit by a single distant light source pointing along the camera direction, showing the properties of the new model - the very strong secondary highlight caused by "glints" and the uncolored primary highlight. The whole model is more complex, and it's not possible to demonstrate all its properties this way.

On the right side is a rendering of the statue with the new model and the same parameters as for the ball on the left.
Fur tests - tests to make the furball as furry as possible. Left - the Kajiya & Kay model; right - Marschner's. There is still an error in the TRT component of Marschner's lighting model. But where...

Update: there is NO error in Marschner's model. Adding the last piece for non-circular cross-sections and a noise-based smooth random rotation along each hair led to the third image. The "sparkles" in the image are caused by one particular part of the model, and are actually correct.
Simple particle system and transparency for hairs towards the tip. Rendered WITHOUT shadowing, with a SINGLE light source. 1st - Kajiya & Kay, light source at the camera position; 2nd - the same with Marschner's model; 3rd - Kajiya & Kay, light source from the "top right"; 4th - the same with Marschner's model. Please click to zoom, otherwise it doesn't look so good... :) The red parts show the areas not covered by fur from the particle system.
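The two ingredients of that caption - random root placement and opacity fading towards the tip - can be sketched as below. Both function names and the power-falloff form are hypothetical illustrations, not the original implementation:

```python
import random

def tip_opacity(s, power=2.0):
    """Opacity along a single hair; s in [0, 1] runs from root to tip.
    Fully opaque at the root, fully transparent at the tip; 'power'
    controls how quickly the strand fades out."""
    return (1.0 - s) ** power

def scatter_hair_roots(n, seed=0):
    """Hypothetical particle emitter: n uniform random (u, v) root
    positions in a surface patch's parameter space."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]
```

Uncovered (red) regions then simply correspond to parts of the surface where the emitter placed no roots.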
No comment... :D
Source files of the furry hat rendering. From left - the tonemapped LDR background image (just the background, not actually used for depth estimation), the mask used for shape recovery, the mask of the applied fur, and the mask defining the hair lengths.
IMPLEMENTATION NOTES (the lost email to Erik)
Particle models
Color changes of the TT and TRT components (the R component is uncolored). With the side lighting [1 1 -1] the color change is either not obvious enough or too intense, because of the influence of the uncolored R reflection; after changing the lighting to [0.1 0.1 -1], the TRT component of the lighting model intensifies, giving the third result.
Mouse - with "gravity" pulling in a user-specified direction. (Bottom row - the highlight has been retouched.)
More mice :)
First attempt to destroy the bunny... :)