Hi folks.
I am currently in the process of rendering my first animation using
radiosity. Actually it's not working too well, but the experts will probably
tell me I'm rendering the wrong sort of scene. Anyway, will hopefully post
the results if they can be compressed to a reasonable size without too much
loss... (Since most of the image is black, this seems possible.)
But anyway, my question...
Does POV-Ray do *real* radiosity?
According to my understanding (which may or may not be correct) "radiosity"
algorithms start from the light source(s) and follow the paths of the rays
forwards into the scene, eventually reaching the camera. However, POV-Ray's
"radiosity mode" seems to still follow light backwards, only this time
attempting to take diffuse interreflections into account. (Actually I am
*extremely* fuzzy on exactly what POV-Ray does do... the documentation seems
awfully vague.) Does this really count as radiosity?
I'm interested in the difference between POV-Ray's radiosity mode and photon
mapping... IIRC, the latter is designed only to forward-trace specular
reflections and refraction. Photon maps appear to be totally independent of
the camera angle, whereas the animation I'm currently rendering seems to
indicate that radiosity IS camera dependent (or just uses a lot of random
sampling), further implying that it traces backwards not forwards. Could the
photon mapping algorithm be changed to include diffuse elements also? Would
this be worth doing?
Radiosity is still supposed to be an "experimental feature"... is it likely
to change in the next version of POV-Ray?... is there likely to BE a next
version of POV-Ray?
Thoughts? Opinions? (No doubt flames too...)
Thanks.
Andrew.
Andrew Coppin <orp### [at] btinternetcom> wrote:
> Does POV-Ray do *real* radiosity?
What is "real radiosity"?
Firstly, in its exact meaning "radiosity" is a specific algorithm where
lightmaps are calculated by projecting (usually with scanline rendering) the
scene onto each surface of the scene. This algorithm has nothing to do
with raytracing, nor with light sources per se.
I suppose that you are using the term "radiosity" to mean any
global illumination algorithm. Even so, there's no such thing as
a "real" algorithm. There are several different algorithms, but we
can't say any one of them is more "real" than the others. They are all
approximations, different ways of calculating a similar thing.
One algorithm can be "better" than another (by whatever criteria
you happen to choose), but quality does not make it the "real" algorithm
(and the other ones "fake").
> According to my understanding (which may or may not be correct) "radiosity"
> algorithms start from the light source(s) and follow the paths of the rays
> forwards into the scene, eventually reaching the camera.
That's just *one* way of doing it. It is by no means the only way, nor
necessarily the "best" way. And as I already said, there's nothing in
this approach which would make it "real".
> However, POV-Ray's
> "radiosity mode" seems to still follow light backwards, only this time
> attempting to take diffuse interreflections into account.
Right. There are good advantages (and perhaps some disadvantages) to
doing it that way.
> Does this really count as radiosity?
It's not the algorithm called "radiosity", which is a scanline-rendering
algorithm. However, there's nothing in POV-Ray's approach which would make it
any less of a global illumination algorithm than any other.
> Could the
> photon mapping algorithm be changed to include diffuse elements also? Would
> this be worth doing?
This is a FAQ.
Nathan Kopp tried to implement global illumination using photon mapping,
but AFAIK the results were not very promising.
If you think about the amount of samples needed to calculate global
illumination for a huge scene, you will understand why this is so. It's
very difficult to calculate more samples where they are needed and fewer
samples where they aren't. This is an advantage of the stochastic algorithm
used by POV-Ray.
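To give a rough idea of what I mean by calculating more samples where they
are needed: in a 3.5 scene the relevant knobs live in the radiosity block of
global_settings. The values below are placeholders just to show the controls,
not recommendations for any particular scene:

  global_settings {
    radiosity {
      count 200           // rays shot to estimate each new sample point
      error_bound 0.5     // lower values force samples to be taken closer together
      recursion_limit 2   // how many diffuse bounces are followed
      pretrace_start 0.08 // low-resolution passes used to place samples
      pretrace_end 0.01
    }
  }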
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
squidian <squ### [at] localhostlocaldomain> wrote:
> Tried that with 3.1. 3.1 uses noise as an input. Each frame uses a
> different noise, making a mess like leaving jitter on. Doesn't work.
You really don't have the slightest clue about what you are talking about,
do you?
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
Warp wrote:
> This is a FAQ.
> Nathan Kopp tried to implement global illumination using photon mapping,
> but AFAIK the results were not very promising.
> If you think about the amount of samples needed to calculate global
> illumination for a huge scene, you will understand why this is so. It's
> very difficult to calculate more samples where they are needed and less
> samples where they aren't. This is an advantage of the stochastic
> algorithm used by POV-Ray.
Interesting! Well, I tested VirtuaLight and some other raytracers which use
photon mapping, and they're all rather slow. Given that some raytracers use
photon mapping anyway, what are the benefits of the method in general?
Regards,
Andreas
--
http://www.render-zone.com
I have utterly utterly no idea why, but I had to do strange things to OE to
get this thread to appear! Anyway...
> > Does POV-Ray do *real* radiosity?
>
> What is "real radiosity"?
>
> Firstly, the exact meaning of "radiosity" is a specific algorithm where
> lightmaps are calculated projecting (usually with scanline rendering) the
> scene onto each surface of the scene. This algorithm has nothing to do
> with raytracing nor with light sources per se.
...which would render my initial assumption invalid. I clearly don't know
what I'm on about.
> I suppose that you are using the term "radiosity" as meaning any
> global illumination algorithm. Even so, there's no such a thing as
> a "real" algorithm. There are several different algorithms, but we
> can't say any of them is more "real" than the other. They are all
> approximations, different ways of calculating a similar thing.
> One algorithm can be "better" than another (by whatever criteria
> you happen to choose), but quality does not make it the "real" algorithm
> (and the other ones "fake").
Quite right.
> > According to my understanding (which may or may not be correct) "radiosity"
> > algorithms start from the light source(s) and follow the paths of the rays
> > forwards into the scene, eventually reaching the camera.
>
> That's just *one* way of doing it. In no way it's the only way, nor
> necessarily the "best" way. And as I already said, there's nothing in
> this algorithm which would make it "real".
As established, my initial assumption was not correct. I assumed there was a
class of programs which do what raytracers do, but in the other direction,
yielding higher quality images with much longer render times. Apparently
that's not the case at all...
> > However, POV-Ray's
> > "radiosity mode" seems to still follow light backwards, only this time
> > attempting to take diffuse interreflections into account.
>
> Right. There are good advantages (and perhaps some disadvantages) of
> doing it that way.
OK. That makes sense.
> > Does this really count as radiosity?
Dumb-ass question.
> > Could the photon mapping algorithm be changed to include diffuse elements
> > also? Would this be worth doing?
>
> This is a FAQ.
> Nathan Kopp tried to implement global illumination using photon mapping,
> but AFAIK the results were not very promising.
That's a shame.
> If you think about the amount of samples needed to calculate global
> illumination for a huge scene, you will understand why this is so.
Yeah, actually... caustics are high-detail, but they don't take up much
surface area, so the photon maps don't cover very much. Global lighting
would take too much memory, I suppose.
> It's very difficult to calculate more samples where they are needed and
> fewer samples where they aren't. This is an advantage of the stochastic
> algorithm used by POV-Ray.
Ah well - I guess I'm just trying to get it to solve the wrong sort of
problems.
(Actually, I tried it with an ambient white sky over a matt white floor, with
a green ball in the middle. Didn't take long to find parameters that worked
perfectly. Looked good too.)
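From memory it was something along these lines - I no longer have the exact
file to hand, so treat the numbers as placeholders rather than the values I
actually used:

  global_settings {
    ambient_light 1
    radiosity { count 100 error_bound 0.5 recursion_limit 1 }
  }

  camera { location <0, 2, -6> look_at <0, 1, 0> }

  // the "ambient white sky": a big self-lit sphere, no light_source at all
  sphere { 0, 1000 hollow pigment { rgb 1 } finish { ambient 1 diffuse 0 } }

  // matt white floor
  plane { y, 0 pigment { rgb 1 } finish { ambient 0 diffuse 1 } }

  // the green ball
  sphere { <0, 1, 0>, 1 pigment { rgb <0, 1, 0> } finish { ambient 0 } }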
Thanks.
Andrew.
Andrew Coppin <orp### [at] btinternetcom> wrote:
>> If you think about the amount of samples needed to calculate global
>> illumination for a huge scene, you will understand why this is so.
> Yeah, actually... caustics are high-detail, but they don't take up much
> surface area, so the photon maps don't cover very much. Global lighting
> would take too much memory, I suppose.
Photon mapping for caustics works well because the photons can be concentrated
on very specific areas of the scene (you specify which objects photons should
be shot at).
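To make that concrete, in the scene file you mark the targets explicitly,
roughly like this (GlassSphere is just a stand-in for whatever refractive
object you have declared, and the photon count is purely illustrative):

  global_settings {
    photons { count 20000 }  // photons are shot only at objects marked as targets
  }

  light_source {
    <10, 20, -10>, rgb 1
    photons { refraction on reflection on }  // this light shoots photons
  }

  object {
    GlassSphere
    photons { target refraction on reflection on collect off }
  }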
Global illumination would require shooting photons at *everything*. And
lots of them.
Even caustics can sometimes show odd artifacts when the light spreads
too much, in which case you need to increase the photon mapping parameters
to shoot more photons. This slows things down, but not unacceptably, because
the total number of photons is still quite limited. However, the cost of
doing this for the entire scene is just enormous (I'm not saying it's not
possible for simple scenes, but just imagine a huge one).
The advantage of the global illumination method used in POV-Ray is that
it can concentrate on the parts which really need it. It only calculates
the illumination of the parts of the scene which are visible on the final
image or have some indirect effect on visible areas. Areas which are neither
visible nor have any effect are simply skipped. It also calculates more
samples where they are needed.
The algorithm often shows some artifacts. It's still quite unclear to me
whether this is a problem with the algorithm itself or whether its
implementation in POV-Ray has some problems (bugs, misunderstandings or bad
designs). AFAIK the same algorithm is used in the Radiance renderer, which
is used professionally, and its results are great (at least based on the
few images I have seen).
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
squidian <squ### [at] localhostlocaldomain> wrote:
> I know I tried radiosity which looked ok on a single frame. I know the
> "radios" contribution changed with each frame. What else is there to
> know? That it is not noise which causes the problem? That it does not use
> the same random number each time?
Perhaps you should study how the global illumination algorithm used
in POV-Ray works before you talk about it.
--
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}// - Warp -
In article <3ef81d3f$1@news.povray.org> , squidian
<squ### [at] localhostlocaldomain> wrote:
>> Perhaps you should study how the global illumination algorithm used
>> in POV-Ray works before you talk about it.
>
> As the algorithm sucks for animations what more is there to know and why
> should I stroke your all-knowing, all-seeing ego when I am right?
>
> I have retested it without panning. Shall I upload it to show what I am
> talking about?
>
> OTOH: You could have simply posted how to make radiosity work with
> animation and have been done with it. You did not. Therefore you do not
> know. QED
I strongly recommend that you do not continue the discussion in this way,
because you really do not appear to know anything helpful about the topic
being discussed. A prerequisite would be to "know" at least as much as the
documentation explains. Reading the previous replies to this thread, like
the one from Warp, could also have told you that your short comment might
not be a suitable answer.
This is povray.advanced-users, and there is no shame in not understanding
something discussed here. However, disrupting a discussion with remarks of the
kind you posted - which offer no help in solving the problem a user is having,
and which assume something the user never said was his problem - is strongly
discouraged!
Thorsten Froehlich, POV-Team
____________________________________________________
Thorsten Froehlich
e-mail: mac### [at] povrayorg
I am a member of the POV-Ray Team.
Visit POV-Ray on the web: http://mac.povray.org
Warp wrote:
>squidian <squ### [at] localhostlocaldomain> wrote:
>> Tried that with 3.1. 3.1 uses noise as an input. Each frame uses a
>> different noise making a mess like leaving jitter on. Doesn't work.
>
> You really don't have the slightest clue about what you are talking about,
>do you?
>
Sorry Warp, I believe he does have a valid point, even if he is not
expressing himself very well.
In render.cpp, in the function Start_Tracing_Radiosity_Preview(), at about line 1022:
  while ((skip >= 2) && (skip >= EndPixelSize))
  {
    /* for each pass */
    jitter_range = 3;
    jitter_offset = skip / 2 - 1;  /* add a very small amount of jitter */

    if (skip <= 8) Smooth_Preview = 1;

    for (Current_Line_Number = opts.First_Line;
         Current_Line_Number < opts.Last_Line;
         Current_Line_Number += skip)
    {
      check_stats(Current_Line_Number, 1, skip);
      Do_Cooperate(0);

      for (x = opts.First_Column; x < opts.Last_Column; x += skip)
      {
        Check_User_Abort(false);

        offset_x = jitter_offset + (POV_RAND() % jitter_range);
        offset_y = jitter_offset + (POV_RAND() % jitter_range);

        /* don't use focal blur for radiosity preview! */
        save_use_blur = Focal_Blur_Is_Used;
        Focal_Blur_Is_Used = false;

        trace_pixel(x + offset_x, Current_Line_Number + offset_y, Colour);
Thus random noise is introduced into the radiosity trace. And it is
different for each frame of an animation. Some scenes show it more than
others, but it is there.
When I compile custom versions I always remove this jitter ...
Mike Andrews.
In article <web.3ef884b34485a7259735c0c0@news.povray.org> , "Mike Andrews"
<nomail@nomail> wrote:
> Sorry Warp, I believe he does have a valid point, even if he is not
> expressing himself very well.
>
> In render.cpp, function Start_Tracing_Radiosity_Preview(), line about 1022:
No, this is something different, and not directly related to the observed
effect. If you see excessive changes of colors from one frame to the other,
it implies that the number of samples you take is too small.
> Thus random noise is introduced into the radiosity trace. And it is
> different for each frame of an animation. Some scenes show it more than
> others, but it is there.
>
> When I compile custom versions I always remove this jitter ...
If you like grid artifacts ... if you really have a problem with the
"randomness" of the sample distribution, you should increase the number of
samples taken. And you cannot expect pixel-to-pixel equivalent images for
_different_ camera positions. Nevertheless, the effect you will see when
rendering the same scene twice will be identical, because the "random
distribution" will be identical as well. Even if you move the camera, the
"randomness" will be absolutely identical; however, it then applies to pixels
that represent different objects in the scene, and that will create
artifacts when the sample count is too low. Thus, the "randomness" is not
different for each frame of the animation. Further, this has nothing to do
with noise.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org