POV-Ray : Newsgroups : povray.off-topic : ANN: New, open-source, free software rendering system for physically correct (Messages 61 to 70 of 82)
From: Darren New
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 17:05:37
Message: <472507b1$1@news.povray.org>
Vincent Le Chevalier wrote:
> Well Scott did, and it's not all that difficult to perfect it to obtain 
> the result you seek, i.e. to have different colors for diffusion and 
> reflection.

As long as we're simplifying, can anyone describe what the benefit of 
this technique is, compared to biased ray tracing? What does biased 
ray-tracing miss that this one catches?

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?


From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 17:56:54
Message: <472513b5@news.povray.org>
Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
> color = 0
> for ray=1 to 1000
> r = random number between 0 and 1
> a = specular_amount
> b = specular_amount+diffuse_amount
> c = specular_amount+diffuse_amount+refraction_amount

> if 0 < r < a
>   color += reflection_color * fire_reflection_ray

> if a < r < b
>   color += diffuse_color * fire_diffuse_ray_in_random_direction

> if b < r < c
>   color += refraction_color * fire_refraction_ray

> if c<r<1.0 //absorption
>   color += 0

> //Of course you could have emitting surfaces as well
> if emission
>   color += emitted_color

> next

> pixel_color = color / 1000

  As far as I can see this still has the problem that light reflected
from the surface of the object by specular reflection is not *added*
to the rest of the light leaving the object (by diffuse reflection
and/or refraction), but instead it's *averaged* with the rest.

  This will effectively make the reflection dimmer and the surface
possibly more opaque (if it was defined to be semitransparent) than
originally defined.

  The problem is that rendering an object with this formula will most
probably produce a very different image than the regular method does.
And this different result might not be better or more realistic.
While it could still be feasible, I'm not completely convinced the end
result will be "correct".
  (Besides, the basic idea I had was that you could optionally switch to
this alternative rendering method and get basically the same image in
average coloration, give or take some graininess. However, as presented
above, it will most probably not give the same image.)

  Basically what POV-Ray uses is the phong lighting model. While it's not
a 100% physically accurate lighting model, it's often close enough to
reality that quite realistic images can be created using it.

  In the phong lighting model the diffuse and specular reflection components
are added. While in real life a pure addition probably never happens (it's
more of a weighted average, with the weighting factors depending on angles
of incidence, etc.), it's not far off for many materials, which is why this
simple lighting model often gives good-enough results.

  The algorithm presented above calculates the non-weighted average of
the diffuse and specular components. This means that, for example, a
black object cannot have a completely white highlight (because the averaging
will make it gray), even though in real life it's perfectly possible for
this to happen.

  Another problematic case I can see is an object which has been defined
to have a completely clear surface pigment, a strong reflection and
refraction. The algorithm given above will make the surface visibly look
only semi-transparent instead of completely clear. This is because the
reflection is not added to the refraction, but instead averaged with it.

  Once again, the simple phong lighting model might not be 100% accurate
in this case, but it's close nevertheless. Reflected and refracted light
is indeed added in reality, not averaged. A light ray cannot make another
light ray dimmer, it can only make it brighter. Averaging would mean that
e.g. a refracted ray could make a reflected ray dimmer, which I think is not
physically correct.
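Whether the one-sample loop averages or adds can be checked numerically. A toy Python sketch of the two shading models (the 0.4/0.6 split, the pure-white highlight and the function names are made-up illustration values, not POV-Ray defaults):

```python
import random

# Toy material: a black diffuse surface with a strong white specular
# highlight. All numbers are made up for illustration.
SPEC_AMOUNT, SPEC_VALUE = 0.4, 1.0
DIFF_AMOUNT, DIFF_VALUE = 0.6, 0.0

def additive():
    """Phong-style shading: the components are summed with their own weights."""
    return SPEC_AMOUNT * SPEC_VALUE + DIFF_AMOUNT * DIFF_VALUE

def one_sample_estimate(n=200_000):
    """The quoted loop: pick ONE event per sample, with probability equal
    to its amount, without dividing the contribution by that probability."""
    total = 0.0
    for _ in range(n):
        r = random.random()
        if r < SPEC_AMOUNT:
            total += SPEC_VALUE          # specular event
        elif r < SPEC_AMOUNT + DIFF_AMOUNT:
            total += DIFF_VALUE          # diffuse event
        # else: absorbed, contributes nothing
    return total / n
```

Note that as long as the amounts sum to at most 1, both converge to the same value (0.4 here), because picking event i with probability amount_i has expectation sum_i(amount_i * value_i); the dimming really appears when weights that add up to more than 1 are forced to sum to 1, as discussed below for reflection 1 plus transmit 1.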

  The following definitions are a bit problematic:

> a = specular_amount
> b = specular_amount+diffuse_amount
> c = specular_amount+diffuse_amount+refraction_amount

  If a texture has been defined to have "reflection 1" and "transmit 1"
(which I suppose would be 100% refraction_amount) and a standard diffuse
finish value of 0.6, what would be the values of a, b and c?

  If you want the three amounts to sum to 1, then you would actually have
to lower those values. You would have to make reflection about 0.38, transmit
about 0.38 and diffuse about 0.23. Effectively you are making the
object less transparent, less reflective and less diffuse, which is not
how the original texture was defined.
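The 0.38/0.38/0.23 figures come from dividing each weight by the total 1 + 1 + 0.6 = 2.6. A quick check in Python:

```python
# Texture weights as defined: reflection 1, transmit 1, diffuse 0.6.
amounts = {"reflection": 1.0, "transmit": 1.0, "diffuse": 0.6}

# Forcing the event probabilities to sum to 1 rescales every weight:
total = sum(amounts.values())  # 2.6
probabilities = {name: value / total for name, value in amounts.items()}
# reflection and transmit drop to ~0.385, diffuse to ~0.231 -- the object
# is now less reflective, less transparent and less diffuse than defined.
```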

> The algorithm should check that c<1, obviously, otherwise the surface 
> transmits more light than it receives.

  A surface can receive light from more than one source, and the lighting
is added. Most obviously, the lighting at a certain point may be caused
by reflection *and* refraction. This means that two different rays of light
from different sources are arriving at the same point on the surface of
the object, and from there to the same point in the projection plane.

  Thus, obviously, the lighting of the surface at that point can be brighter
than if it was illuminated by only one of the original sources.

  A surface is not emitting more light than it receives simply because it's
emitting more light than *one* light source can emit.

> The real problem with that approach, that you should have pointed out, 
> is that the lights are missed most of the time if you don't fire rays to 
> them specifically. If all your lights are point lights, they will always 
> be missed.

  This is not a problem because light rays can be shot towards point light
sources (and area light sources as well).

> The other problem is deciding when you stop firing rays. If many 
> surfaces have no absorption, it's possible to end up with very long 
> paths...

  That's what max_trace_level is for.

-- 
                                                          - Warp


From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 18:00:45
Message: <4725149d@news.povray.org>
Tom York <alp### [at] zubenelgenubi34spcom> wrote:
> Certainly I think it's pointless to add this to
> POV 3.7, it would need a huge amount of work to get the sampling right.

  I can't understand why.

  The only difference is that instead of spawning multiple rays at each
intersection point, only one ray is spawned. The only problem is how the
results are gathered in order to produce the correct color.

> I've no idea what will and what will not slow the POV team down. How could I?

  I can develop new features for pov3.7 independently, and if they work and
I get the green light from the pov-team I can add them to the codebase.
Basically no effort from the pov-team itself is required.

  Basically this kind of testing is free. It doesn't cost anyone anything
(except my own free time, of course).

-- 
                                                          - Warp


From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 18:02:46
Message: <47251516$1@news.povray.org>

> Vincent Le Chevalier wrote:
>> Well Scott did, and it's not all that difficult to perfect it to 
>> obtain the result you seek, i.e. to have different colors for 
>> diffusion and reflection.
> 
> As long as we're simplifying, can anyone describe what the benefit of
>  this technique is, compared to biased ray tracing? What does biased
>  ray-tracing miss that this one catches?
> 

Having had a look in the books again...

Basically, all the methods that create images by firing rays into the scene
can be thought of as attempts to evaluate a very hairy integral (the
rendering equation) using a Monte Carlo approach.

To do this, paths of light inside the scene must be sampled, in a way
that makes the Monte Carlo method converge as the number of samples rises.

Classical raytracing, where no diffuse rays are fired, is obviously
biased, which means that no matter how many rays you fire,
you'll never reach the physically correct image. That's because there is
an entire category of paths that never gets sampled, so we have no idea
of what their contribution is. For example, no path such as Light
source<->diffuse surface<->diffuse surface<->camera is ever created.

On the other hand, the picture is not noisy (statistically speaking, the
estimator has a low variance).

There are other ways for a bias to creep in, for example if you
systematically limit the length of sampled paths to a fixed value. Russian
roulette is one of the methods used to avoid such problems.
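The idea behind Russian roulette is to terminate paths at random and reweight the survivors so that the expected value is unchanged. A minimal Python illustration: estimating the series 1 + a + a² + ... (what an endlessly bouncing path with albedo a contributes) without truncating at a fixed depth. The albedo 0.7, the continue-probability 0.8 and the function names are arbitrary illustration choices:

```python
import random

def bounce_sum_rr(albedo=0.7, p_continue=0.8):
    """One Russian-roulette sample of sum_{k>=0} albedo**k = 1/(1-albedo).
    A fixed depth cutoff would always underestimate the tail (a bias);
    instead, continue with probability p_continue and divide the
    surviving weight by it, which keeps the expected value exact."""
    value, weight = 1.0, 1.0
    while random.random() < p_continue:
        weight *= albedo / p_continue   # survivor compensation
        value += weight
    return value

def estimate(n=100_000):
    """Average n independent roulette samples."""
    return sum(bounce_sum_rr() for _ in range(n)) / n
```

With a hard cutoff at depth d the estimate would converge to the partial sum up to d, systematically below 1/(1-a); roulette trades that bias for a little extra variance.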

Methods such as irradiance caching, in my limited understanding, also
introduce a bias. Simply speaking, every time you try to minimize the
variance (the high-frequency noise in the picture), you are in danger of
introducing a bias unless great care is taken.

The bias is not in itself something that prevents beautiful pictures; it
prevents accurate pictures. The benefit would be more of a practical
nature: since you know that the picture will eventually be physically
correct, you can use real-life parameters and trust the physics, instead
of having to adjust the algorithms themselves and being forced to include
non-physical "hacks" everywhere in your scene. In a way, you can be more
straightforward and coherent. But it's slower...

That being said, I never used an unbiased renderer, so who am I to talk :-)

-- 
Vincent


From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 18:40:42
Message: <47251dfa$1@news.povray.org>
I will not try to answer point by point as it is late, maybe tomorrow... 
Nevertheless, I wanted to point out several things.

First, I don't understand completely what you are trying to do. If you 
implement an unbiased method, of course the pictures will be radically 
different. There is no way around that. If the pictures, with the exact 
same parameters, were only more grainy, but eventually converged to the 
exact same image on average, then the method would be biased...

The other problem is indeed that many material models within POV-Ray are 
physically inaccurate, so you'll be unable to even use them and still 
make sense. For example, reflection 1 transmit 1 is not realistic. 
Simply follow the light through the surface. 100% of the light should be 
transmitted, and 100% reflected? It's clearly not conservative. I 
suspect the case of your black reflective object is a bit similar.

And of course you cannot use max_trace_level because it introduces a bias...

All of this is why I was thinking that adding such capability into the 
current POV would not be a trivial matter...

-- 
Vincent


From: Nicolas Alvarez
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 19:10:01
Message: <472524d9$1@news.povray.org>

> The other problem is indeed that many material models within POV-Ray are 
> physically inaccurate, so you'll be unable to even use them and still 
> make sense. For example, reflection 1 transmit 1 is not realistic. 
> Simply follow the light through the surface. 100% of the light should be 
> transmitted, and 100% reflected? It's clearly not conservative. I 
> suspect the case of your black reflective object is a bit similar.

How does POV-Ray behave with reflection 1, transmit 1, and conserve_energy?


From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 28 Oct 2007 21:12:24
Message: <47254187@news.povray.org>
Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
> First, I don't understand completely what you are trying to do. If you 
> implement an unbiased method, of course the pictures will be radically 
> different.

  Unless I have understood something completely incorrectly, I think that
"unbiased rendering" simply means that light is not assumed to be coming
from a specific direction, but the entire space is sampled for possible
incoming light.

  Surface properties are described as BRDFs. (The advantage of unbiased
rendering is that it allows, due to its unbiased sampling nature, much
richer and complex BRDFs to be defined than traditional rendering methods.)

  Well, the phong lighting model is a perfectly valid BRDF, so I see
absolutely no reason why unbiased rendering would exclude the possibility
of using it. I don't believe that unbiased rendering would somehow limit
what kind of BRDFs you can use.

  If the phong lighting model is used as the BRDF for all surfaces, the
end result should be pretty much the same with unbiased rendering as with
the traditional raytracing method (except for the possible graininess).

  The advantage, in this case, is that much more complex scenes,
traditionally requiring enormous amounts of rays (because of ray
bifurcation and things like area lights), may render faster using single
ray paths which do not bifurcate. (This requires *much* higher antialiasing
settings, but in certain situations the overall amount of rays traced may
in fact be smaller than with the traditional raytracing method.)
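The ray-count argument can be made concrete: a tree that splits into, say, reflection plus refraction at every bounce grows exponentially with depth, while non-bifurcating paths grow only linearly in the sample count. The 2-way split, depth 10 and 64 samples below are illustrative numbers only:

```python
def tree_rays(splits_per_hit=2, depth=10):
    """Rays traced for ONE pixel sample when every hit spawns two children:
    1 + 2 + 4 + ... down to the given depth."""
    return sum(splits_per_hit ** k for k in range(depth + 1))

def path_rays(samples_per_pixel=64, depth=10):
    """Rays traced for 64 single-ray paths of the same depth."""
    return samples_per_pixel * depth
```

Even at 64 samples per pixel the non-branching paths trace fewer rays (640) than one fully branching tree of the same depth (2047), which is the situation described above.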

> There is no way around that. If the pictures, with the exact 
> same parameters, were only more grainy, but eventually converged to the 
> exact same image on average, then the method would be biased...

  I'm more interested in the single-path-tracing than in the unbiasing.

  (Besides, I think the only difference pure unbiased rendering would make
is to give the image global illumination, ie. what povray calls
"radiosity", especially if big area lights are used instead of point
lights.)

> The other problem is indeed that many material models within POV-Ray are 
> physically inaccurate

  I bet all BRDFs are physically inaccurate to some extent, and only
*approximate* the real thing. The phong lighting model is one approximation
among others (it might not be the best one, but it's a simple and fast one,
and often gives good results).

  I also have a hard time believing that unbiased rendering would somehow
exclude "physically inaccurate" BRDFs.

> Simply follow the light through the surface. 100% of the light should be 
> transmitted, and 100% reflected? It's clearly not conservative.

  It doesn't have to be 100% physically accurate. The only thing that
matters is that it can be described as a BRDF.

> I suspect the case of your black reflective object is a bit similar.

  No, that phenomenon happens in real life. Even surfaces which reflect
almost no light diffusely can have strong specular reflection
properties. That's the reason why pitch-black plastic can have bright
highlights. It has something to do with quantum mechanics or something
similar, I don't remember the details.

> And of course you cannot use max_trace_level because it introduces a bias...

  Using a high enough max_trace_level will probably not have much
influence on the resulting image (except in cases where it would have
a significant influence in the traditional raytracing method as well).

-- 
                                                          - Warp


From: Darren New
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 29 Oct 2007 01:21:04
Message: <47257bd0$1@news.povray.org>
Vincent Le Chevalier wrote:
> of what their contribution is. For example, no path such as Light
> source<->diffuse surface<->diffuse surface<->camera is ever created.

I see!  Thank you.

> since you know that the picture will eventually be physically
> correct, 

"Physically correct."  You keep using that word.  I do not think it 
means what you think it means.  :-)

Seriously, "physically correct" would mean it accounts for polarized 
surfaces and handles interference fringes and diffraction, and traces 
the rays of light from the light source to the camera, rather than 
starting at the camera. :-)

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?


From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 29 Oct 2007 04:06:36
Message: <4725a29c$1@news.povray.org>

> Seriously, "physically correct" would mean it accounts for polarized 
> surfaces and handles interference fringes and diffraction, and traces 
> the rays of light from the light source to the camera, rather than 
> starting at the camera. :-)
> 

OK, let's say more physically correct then ;-) I don't know, something 
that conserves energy, for a start :-)

Starting at the camera and not at the light source is no big deal to me. 
What matters is finding a path joining the light to the camera. Starting 
from the camera or the light is purely a matter of what's more practical...

-- 
Vincent


From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct
Date: 29 Oct 2007 04:48:58
Message: <4725ac8a$1@news.povray.org>

> Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
>> First, I don't understand completely what you are trying to do. If you 
>> implement an unbiased method, of course the pictures will be radically 
>> different.
> 
>   Unless I have understood something completely incorrectly, I think that
> "unbiased rendering" simply means that light is not assumed to be coming
> from a specific direction, but the entire space is sampled for possible
> incoming light.
> 

But that means that the images should be different, no? And I mean, more 
than just noisy...

>   Well, the phong lighting model is a perfectly valid BRDF, so I see
> absolutely no reason why unbiased rendering would exclude the possibility
> of using it. I don't believe that unbiased rendering would somehow limit
> what kind of BRDFs you can use.
> 

I'm not sure about the phong model being a valid BRDF. But I don't 
remember the details so let's say it is...

>   If the phong lighting model is used as the BRDF for all surfaces, the
> end result should be pretty much the same with unbiased rendering as with
> the traditional raytracing method (except for the possible graininess).
> 

Except that traditional raytracing neglects plenty of light transfers, 
which is what makes it biased in the first place. It's not even a 
question of which BRDFs are used...

>> There is no way around that. If the pictures, with the exact 
>> same parameters, were only more grainy, but eventually converged to the 
>> exact same image on average, then the method would be biased...
> 
>   I'm more interested in the single-path-tracing than in the unbiasing.
> 
>   (Besides, I think the only difference pure unbiased rendering would make
> is to give the image global illumination, ie. what povray calls
> "radiosity", especially if big area lights are used instead of point
> lights.)
> 

Well that's a big difference, and I don't know if it's really possible 
or even desirable to make a single path if you're not looking for an 
unbiased result.

>> The other problem is indeed that many material models within POV-Ray are 
>> physically inaccurate
> 
>   I bet all BRDFs are physically inaccurate to some extent, and only
> *approximate* the real thing. The phong lighting model is one approximation
> among others (it might not be the best one, but it's a simple and fast one,
> and often gives good results).
> 

I was thinking more of your reflection 1 transmit 1 example. Energy 
conservation is still one of the basic properties of BRDFs; there is no 
way you can represent that material with a BRDF. Of course all BRDFs are 
only approximations, but that does not mean that they do not have 
constraints. I guess my use of "physically accurate" is the problem here...


>> I suspect the case of your black reflective object is a bit similar.
> 
>   No, that phenomenon happens in real life. Even surfaces which reflect
> almost no light diffusely can have strong specular reflection
> properties. That's the reason why pitch-black plastic can have bright
> highlights. It has something to do with quantum mechanics or something
> similar, I don't remember the details.
> 

In your previous post, you said:
>   The algorithm presented above calculates the non-weighted average of
> the diffuse and specular components. This means that, for example, a
> black object cannot have a completely white highlight (because the averaging
> will make it gray), even though in real life it's perfectly possible for
> this to happen.

A black object can have a bright highlight, of course. Or rather, a 
bright reflection. In that case, you would lower diffuse_amount, to be 
able to set a higher reflection_amount.

What you cannot have is a white object with a 100% reflection on top of 
that. Because if you define that 100% of the incoming light goes into the 
specular reflection, there is nothing left to be diffused. So making the 
object white effectively means you have a dimmer highlight.

The problem with the algorithm is that the colors should all be normalized. 
That way the only possibility of making a black object is lowering its 
diffuse_amount, not setting its color to black. In current POV, you can 
do one or the other...

-- 
Vincent



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.