Subject: Re: Skin Deep
From: clipka
Date: 25 Mar 2009 20:25:00
Message: <web.49cacb2e6a616ed1c1f399840@news.povray.org>
"grammophone" <eml### [at] ingr> wrote:
> There's my own effort at pseudo-SSS from a few years ago. It isn't a volumetric
> implementation either, but it takes into account internal and external
> obstacles. It should be slower than yours though, as it shoots many rays per
> surface intersection.

So does my current implementation: It picks a point just a little bit below the
surface and shoots rays randomly in all directions to find sample points on the
object's surface (+), from which in turn I do classic shadow ray tests to
measure the incoming light; the incoming light intensities are then shoved
through the BSSRDF formula (*), weighted to compensate for my choice of samples
(**), and then summed up.

(+ at the moment, no intersection tests with other objects are performed; I
guess it would be better to do this, but in some cases it may be just as wrong;
it'll never realistically capture the effect of bones in a finger, for instance)

(* the formula from the 2001 Jensen et al. paper - the same one the Tariq &
Ibarria patch was based on; as a matter of fact, my code is a heavily modified
version of theirs)

(** weighting is done based on distance, and the angle at which the probe ray
intersects the surface, to get an estimate of the area the sample "represents")
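
To make that a bit more concrete, here's a very rough sketch of what such a
sampling loop could look like. Everything in it is made up for this post - the
helpers (RandomDirection, TraceToSurface, IncidentLight, DipoleRd) just stand
in for the corresponding pieces of the renderer, and it's not the actual patch
code:

#include <cmath>
#include <algorithm>

struct Vector3 { double x, y, z; };
struct Colour  { double r, g, b; };

// placeholder vector helpers
double  Dot(const Vector3& a, const Vector3& b);
double  Length(const Vector3& v);
Vector3 Sub(const Vector3& a, const Vector3& b);
Vector3 MulAdd(const Vector3& p, const Vector3& d, double t);    // p + d*t

// placeholders for renderer functionality
Vector3 RandomDirection();                                       // uniform on the unit sphere
bool    TraceToSurface(const Vector3& from, const Vector3& dir,
                       Vector3& hitPoint, Vector3& hitNormal);   // sample point on the object's surface
Colour  IncidentLight(const Vector3& p, const Vector3& n);       // classic shadow-ray test
Colour  DipoleRd(double r);                                      // diffusion profile, Jensen et al. 2001 (*)

Colour SubsurfaceAt(const Vector3& p, const Vector3& n)
{
    const int    samples = 50;     // probe rays per surface intersection
    const double depth   = 0.01;   // "a little bit below the surface"
    Vector3 origin = MulAdd(p, n, -depth);

    Colour total = {0.0, 0.0, 0.0};
    for (int i = 0; i < samples; ++i)
    {
        Vector3 dir = RandomDirection();
        Vector3 q, nq;
        if (!TraceToSurface(origin, dir, q, nq))
            continue;

        double r = Length(Sub(q, p));
        // Each probe direction "represents" a surface patch of roughly
        // r^2 / |cos(theta)| - the distance/angle weighting from (**);
        // constant factors are folded into the averaging below.
        double w = (r * r) / std::max(std::fabs(Dot(dir, nq)), 1e-6);

        Colour Li = IncidentLight(q, nq);    // incoming light at the sample point
        Colour Rd = DipoleRd(r);             // BSSRDF falloff with distance
        total.r += Li.r * Rd.r * w;
        total.g += Li.g * Rd.g * w;
        total.b += Li.b * Rd.b * w;
    }
    total.r /= samples; total.g /= samples; total.b /= samples;
    return total;
}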


Of course there's a lot of potential for improvement:

- For sample points far enough away, I might just skip the test and assume that
they don't contribute (some adc_bailout-like mechanism).

- For sample points moderately far away, I might re-use sample data collected
earlier at sufficiently close locations (similar to the radiosity sample cache);
or, as you already mention, I might shoot photons at the object in advance, so I
wouldn't have to bother with shadow tests and could instead just "collect" the
photons close enough to my sample points.

- I might actually skip the step of shooting rays to choose sample points, and
instead just "sum up" all sample points already collected on the same object's
surface, unless I find I don't have enough of them yet; using some spatial
subdivision structure with pre-calculated sums would help in this process (see
the sketch after this list).

- Maybe I could even cache "final result data"; for instance, I could cache
already computed sample points, and when I need to test another one, I could
check whether I have any sample points in the cache "sufficiently" close. In a
follow-up paper Jensen actually suggested such an approach, although the
proposed algorithm isn't suited too well to POV-Ray's way of doing things.
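
For illustration, here's a rough sketch of how such a pre-summed spatial
subdivision might look, combined with the adc_bailout-style cutoff from the
first point above. The node layout and all names are invented for this post -
it's a sketch of the idea, not code from any existing patch:

#include <cmath>
#include <vector>

struct Sample { double pos[3]; double irradiance[3]; double area; };

struct Node
{
    double centre[3];
    double radius;                        // bounding-sphere radius of the node
    double sumIrradiance[3];              // pre-calculated sum of irradiance * area over the subtree
    double sumArea;
    std::vector<Sample>      samples;     // only filled in leaf nodes
    std::vector<const Node*> children;
};

double DipoleRd(double r);                // diffusion profile (one channel shown, for brevity)

double Distance(const double a[3], const double b[3])
{
    double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Accumulate the subsurface contribution at point p (one colour channel shown).
double Gather(const Node& node, const double p[3], double bailoutDist)
{
    double d = Distance(node.centre, p);

    // adc_bailout-style cutoff: nodes entirely beyond the cutoff are skipped.
    if (d - node.radius > bailoutDist)
        return 0.0;

    // Far enough away: use the pre-calculated sum as if the whole node were
    // one big sample sitting at its centre, instead of visiting every point.
    if (d > 4.0 * node.radius && node.sumArea > 0.0)
        return node.sumIrradiance[0] * DipoleRd(d);

    // Otherwise descend and sum up the individual sample points.
    double total = 0.0;
    for (const Node* c : node.children)
        total += Gather(*c, p, bailoutDist);
    for (const Sample& s : node.samples)
        total += s.irradiance[0] * s.area * DipoleRd(Distance(s.pos, p));
    return total;
}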

> An implementation idea for faster and more realistic SSS could be to piggyback on
> the caustics photon mechanism.

I don't think it would provide any additional realism. I could use it to speed
up the computation of incident illumination at my sample points, but I can't
shoot photons efficiently through the scattering material to any good effect.
There would simply be too many scattering events (the paper speaks of typically
several hundred in skin for a single photon).

