Here's what I could get from SSLT using Blend2Pov (this applies to the skin,
not the hair, which uses single-sampled media):
http://maurice.raybaud.eu/images/stories/POVhumanShaders.mov
The lighting conditions are just a single static spotlight with a value of 1
and no radiosity; only the camera is moving.
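
Concretely, the light setup amounts to something like this (position, target,
and cone angles are made-up placeholders, not the scene's actual values):

light_source {
  <2, 4, -3>           // placeholder position
  color rgb 1          // plain white spot at intensity 1
  spotlight
  point_at <0, 1, 0>   // placeholder target
  radius 20
  falloff 25
}
// no radiosity block in global_settings; only the camera animates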
I found it quite hard to get the right scale: small enough for good
translucency, yet with realistic absorption for skin at glancing angles (as
just before the last frames of the file). I settled on mm_per_unit = 1000
while keeping the scattering and absorption coefficients from a research paper
mentioned by clipka. Those give an overall satisfying color at most scales if
there are enough subsurface samples. Finally, the samples: due to rendering
speed on my laptop, I could afford only subsurface { samples 6, 4 }, which
accounts for all the noise.
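
For anyone wanting to reproduce this, here is a minimal sketch of where those
settings go in the scene file. The pigment and translucency values below are
placeholders of my own, not the paper's actual coefficients, and MyHead stands
in for the mesh exported by Blend2Pov:

global_settings {
  mm_per_unit 1000              // 1 POV-Ray unit = 1000 mm, as settled on above
  subsurface { samples 6, 4 }   // diffuse / single-scattering sample counts
}

object {
  MyHead                        // hypothetical mesh identifier from Blend2Pov
  texture {
    pigment { rgb <0.85, 0.64, 0.52> }        // placeholder skin tone
    finish {
      subsurface {
        translucency rgb <0.95, 0.60, 0.40>   // placeholder, not the paper's values
      }
    }
  }
}

If I read mm_per_unit correctly, 1000 mm per unit means one POV-Ray unit
corresponds to a metre, so the mesh would need to be modelled at roughly
real-world size in metres for the coefficients to behave as published.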
To sum it all up:
a- a fast way to interpolate samples, or to calculate more in the same time,
would be great. (In fact, I can't go on with the testing without that :P)
b- I'm not sure the absorption coefficient is computed accurately; or
c- maybe the scale allowed in the parameters doesn't cover enough range; or
d- maybe I got the scale right for my scene, and the effect will take on the
desired look once I can use much higher sample counts.
Mr wrote:
> Here's what I could get from SSLT using Blend2Pov (this applies to the skin,
> not the hair, which uses single-sampled media):
> http://maurice.raybaud.eu/images/stories/POVhumanShaders.mov
Could you provide a non-QuickTime version of the animation as well?
clipka <ano### [at] anonymousorg> wrote:
> Mr wrote:
> > Here's what I could get from SSLT using Blend2Pov (this applies to the
> > skin, not the hair, which uses single-sampled media):
> > http://maurice.raybaud.eu/images/stories/POVhumanShaders.mov
>
> Could you provide a non-QuickTime version of the animation as well?
Would AVI do?