Out of curiosity, I've rendered some images to visualize the distribution of
samples for focal blur and radiosity in POV-Ray 3.5. I've also tried a
low-discrepancy sequence (the Halton sequence; it's really easy to generate):
http://195.221.122.126/samples/samples.html
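The Halton sequence really is simple to generate; a minimal sketch of the
standard radical-inverse construction (the base choices 2 and 3 are just the
usual ones for a 2-D point set, not anything taken from POV-Ray):

```python
def halton(index, base):
    """Radical-inverse of `index` in the given base (van der Corput)."""
    f = 1.0 / base
    result = 0.0
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# 2-D low-discrepancy points: pair two sequences with coprime bases
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```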
M
1.
Should the focal blur samples be distributed over a circular area instead of
a rectangular one?
2.
I like the radiosity distribution, but maybe it would be nice to have more
than 1600 samples. What about 32768? If it's just a matter of precomputing
the distribution, why not?
"Apache" <apa### [at] yahoocom> wrote...
> 1.
> Should the focal blur distribution be divided in a circular area instead of
> a rectangular one?
>
> 2.
> I like the radiosity distribution, but maybe it would be nice to have some
> more samples than 1600. What about 32768 samples? If it's just a matter of
> precomputing the distribution, why not?
Does that really look like your everyday probability distribution function
to you? To me it looks like the points are much more evenly spaced than any
distribution function I've ever seen. Notice how at samples=50, the samples
are in almost perfect rings, almost like a geodesic dome. The points are
also considerably more evenly spaced than in the Halton sequence, which
itself is intended to be low-discrepancy.
Also, as far as I know, we don't have the source code that generated the
existing 1600 samples. The file "rad_data.cpp" containing the data is
credited to Jim McElhiney.
-Nathan
Nathan Kopp wrote:
>
> Does that really look like your everyday probability distribution function
> to you? To me it looks like the points are much more evenly spaced than any
> distribution function I've ever seen. Notice how at samples=50, the samples
> are in almost perfect rings, almost like a geodesic dome. The points are
> also considerably more evenly spaced than in the Halton sequence, which
> itself is intended to be low-discrepancy.
>
> Also, as far as I know, we don't have the source code that generated the
> existing 1600 samples. The file "rad_data.cpp" containing the data is
> credited to Jim McElhiney.
>
As discussed in the thread:
Subject: Radiosity flouroescent lighting troubles
Date: Tue, 19 Nov 2002 18:43:42 EST
From: "Rohan Bernett" <rox### [at] yahoocom>
Newsgroups: povray.advanced-users
it seems to be generated by projecting a distribution on a disc onto the
hemisphere. I have made first tests with using Sim-POV to generate a
better distribution, some first results:
http://www.schunter.etc.tu-bs.de/~chris/files/samples_internal.png
http://www.schunter.etc.tu-bs.de/~chris/files/samples_simpov.png
Note that the new distribution has slightly more samples (1850), and I did
not test how well it follows the cos(theta) distribution.
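For reference, the disc-to-hemisphere projection described in that thread is
the textbook cosine-weighted construction (the actual code behind
rad_data.cpp is unknown, so this is only the standard mapping): a uniform
point on the unit disc, projected vertically onto the hemisphere, has exactly
a cos(theta) density.

```python
import math

def cosine_hemisphere_sample(u1, u2):
    # Uniform point on the unit disc via the polar map (r = sqrt(u1))...
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    # ...projected vertically onto the unit hemisphere (z up).
    # Since x^2 + y^2 = u1, the height is sqrt(1 - u1).
    z = math.sqrt(max(0.0, 1.0 - u1))
    return (x, y, z)
```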
Another thing I have been thinking about is whether it would not be better
to use a uniform distribution but weight the sample rays by cos(theta). It
could at least be worth trying whether that diminishes radiosity artefacts
in some situations. Although it seems logical that you need fewer samples
where the samples have little importance, the density approaching zero at
the bottom rim of the hemisphere leads to an extremely nonuniform
distribution along the rim.
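The uniform-plus-weighting alternative could be sketched like this
(a hypothetical illustration, not SimPOV or POV-Ray code): sample the
hemisphere with equal solid-angle density, then multiply each traced ray's
contribution by its cos(theta), which for a z-up hemisphere is just the z
component.

```python
import math

def uniform_hemisphere_sample(u1, u2):
    # Equal solid-angle density: cos(theta) = z is uniform in [0, 1].
    z = u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    direction = (r * math.cos(phi), r * math.sin(phi), z)
    weight = z  # cos(theta) factor applied to this ray's contribution
    return direction, weight
```

Note how the weight goes to zero at the rim (z = 0) while the sample density
there stays constant, which is exactly the trade-off discussed above.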
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 31 Dec. 2002 _____./\/^>_*_<^\/\.______
In article <3E155F55.1A237A45@gmx.de> , Christoph Hormann
<chr### [at] gmxde> wrote:
> Another thing i have been thinking of is if it would not be better to have
> a uniform distribution but weight the sample rays according to cosine
> theta. At least it could be worth trying if that diminishes radiosity
> artefacts in some situations.
Keep in mind that such a method will very likely require more samples to be
taken. Most pseudo-random functions simply don't give you very nice values
for a small set of samples (like the default of 35). My guess (or hope?) is
that the current table has been tweaked accordingly for small sample sets.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
I remember reading that the Mersenne Twister algorithm (and I think there
are various improvements to it now) is supposed to be a very good, fast
pseudorandom generator. Has anyone looked into it yet?
George
"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3e157f92$1@news.povray.org...
> In article <3E155F55.1A237A45@gmx.de> , Christoph Hormann
> <chr### [at] gmxde> wrote:
>
> > Another thing i have been thinking of is if it would not be better to have
> > a uniform distribution but weight the sample rays according to cosine
> > theta. At least it could be worth trying if that diminishes radiosity
> > artefacts in some situations.
>
> Keep in mind that such a method will very likely require more samples to be
> taken. Most pseudo-random functions simply don't give you too nice values
> for a small set of samples (like the 35 default). My guess (or hope?) is
> that the current table has been tweaked accordingly for small sample sets.
>
> Thorsten
>
> ____________________________________________________
> Thorsten Froehlich, Duisburg, Germany
> e-mail: tho### [at] trfde
>
> Visit POV-Ray on the web: http://mac.povray.org
Here is the MT homepage: http://www.math.keio.ac.jp/~matumoto/emt.html
>
> I remember reading that the Mersenne Twister algorithm (and I think there
> are various improvements to it now) is supposed to be a very good, fast
> pseudorandom generator. Has anyone looked into it yet?
>
> George
>
>
> "Thorsten Froehlich" <tho### [at] trfde> wrote in message
> news:3e157f92$1@news.povray.org...
> > In article <3E155F55.1A237A45@gmx.de> , Christoph Hormann
> > <chr### [at] gmxde> wrote:
> >
> > > Another thing i have been thinking of is if it would not be better to have
> > > a uniform distribution but weight the sample rays according to cosine
> > > theta. At least it could be worth trying if that diminishes radiosity
> > > artefacts in some situations.
> >
> > Keep in mind that such a method will very likely require more samples to
> be
> > taken. Most pseudo-random functions simply don't give you too nice values
> > for a small set of samples (like the 35 default). My guess (or hope?) is
> > that the current table has been tweaked accordingly for small sample sets.
> >
> > Thorsten
> >
> > ____________________________________________________
> > Thorsten Froehlich, Duisburg, Germany
> > e-mail: tho### [at] trfde
> >
> > Visit POV-Ray on the web: http://mac.povray.org
> http://www.schunter.etc.tu-bs.de/~chris/files/samples_simpov.png
It looks good. Does it take a long time to create this distribution?
Anyway, as far as radiosity speed is concerned, it would be more interesting
to find a way to make better use of the existing directions rather than
adding more calls to trace() :) For example, would it be possible to use an
adaptive number of samples? It seems that when we specify "count 1600",
POV-Ray will always shoot 1600 rays at the sample locations, although there
are probably some places where fewer rays are needed... Focal blur uses
confidence/variance to decide when to stop sending rays. Do you think we
could do something like this for radiosity?
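A confidence/variance cutoff along those lines could look roughly like this.
This is only a hypothetical sketch: the parameter names mirror POV-Ray's
confidence/variance keywords, but the code is illustrative, not the actual
focal blur implementation. The idea is to stop once the confidence interval
around the running mean gets narrow enough.

```python
import math

def adaptive_mean(sample_fn, min_samples=8, max_samples=1600,
                  confidence=0.95, variance=0.01):
    # z-score for a two-sided confidence interval (coarse two-entry table)
    z = 1.96 if confidence >= 0.95 else 1.64
    total = total_sq = 0.0
    n = 0
    for n in range(1, max_samples + 1):
        v = sample_fn()
        total += v
        total_sq += v * v
        if n >= min_samples:
            mean = total / n
            var = max(0.0, total_sq / n - mean * mean)
            # stop when the interval half-width drops below the threshold
            if z * math.sqrt(var / n) <= variance:
                break
    return total / n, n
```

With a smooth region the loop stops after `min_samples`; a noisy region runs
up to the full count, which is exactly the adaptive behaviour asked about.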
M
In article <3e15944c$1@news.povray.org> , "George Pantazopoulos"
<the### [at] attbicom*KILLSPAM*> wrote:
> I remember reading that the Mersenne Twister algorithm (and I think there
> are various improvements to it now) is supposed to be a very good, fast
> pseudorandom generator. Has anyone looked into it yet?
Well, finding a good random number algorithm isn't the main problem if one
ignores performance. But performance is the main catch here! And the
implementations I have seen clearly lead to terrible performance, completely
unsuitable for the expected use:
After all, the algorithms have to compete against a single plain memory
access, and a Mersenne Twister implementation needs more than five times 623
(the dimensional equidistribution property) memory accesses. In short, it is
roughly more than 3000 times(!!!) slower than the current implementation! :-(
So in the time it takes to compute one Mersenne Twister random number, a
whole ray can be traced for most scenes*...
Thorsten
* "most" because intersections with certain object types - an isosurface,
for example - can take almost infinitely long. And note that really only one
intersection needs to be computed per ray when proper bounding is used and
all objects are bound.
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
Mael wrote:
>
> > http://www.schunter.etc.tu-bs.de/~chris/files/samples_simpov.png
>
> it looks good, does it take a long time to create this distribution ?
I did not write down the time for that one, but I later did some tests with
4000 points, and it needed at least an hour to reach a usable state.
> Anyway, as far as radiosity speed is concerned it would be more interesting to
> find a way to make better use of the existing directions rather than adding
> more call to trace() :) For example would it be possible to use an adaptive
> number of samples ? it seems that when we specify "count 1600" povray will
> always shoot 1600 rays at samples locations, although there are probably some
> places where less rays are needed... The focal blur uses confidence/variance
> to decide when to stop sending rays. Do you think we can do something like
> this for radiosity ?
I have made some tests with adapting the count dynamically before:
http://www-public.tu-bs.de:8080/~y0013390/simpov/docu04.html
but the problem is to find a fast criterion for the adaptation. As
explained there, concentrating the sample rays in a certain area might seem
logical but would be really tricky and possibly slow. The method I used
just checks the distances of the intersection points, which have to be
calculated anyway.
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 31 Dec. 2002 _____./\/^>_*_<^\/\.______