ANN: New, open-source, free software rendering system for physically correct ... (Messages 53 to 62 of 82)

From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 13:32:18
Message: <47237621@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
>   I'm not trolling here. I have access to the pov3.7 source and I'm helping
> the pov-team develop it (eg. the new string comparison operators were added
> by me). I would be interested in testing this kind of rendering to see if
> it's feasible. Although unlikely, it's theoretically *possible* that I could
> try adding some support for this to pov3.7, if I could just figure out the
> algorithm how it's supposed to be done.

  Well, I suppose I will not be testing this on pov3.7 then, and any chance
of having anything like this is out.

-- 
                                                          - Warp



From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 19:52:59
Message: <4723cf5b$1@news.povray.org>

> Warp <war### [at] tagpovrayorg> wrote:
>>   I'm not trolling here. I have access to the pov3.7 source and I'm helping
>> the pov-team develop it (eg. the new string comparison operators were added
>> by me). I would be interested in testing this kind of rendering to see if
>> it's feasible. Although unlikely, it's theoretically *possible* that I could
>> try adding some support for this to pov3.7, if I could just figure out the
>> algorithm how it's supposed to be done.
> 
>   Well, I suppose I will not be testing this on pov3.7 and any chance of
> having anything like this is out.
> 

I'd like to help but I have entire books on the subject here, and 
retyping them is beyond my patience ;)

This one already popped up in the discussion: http://www.pbrt.org/
I also have the first edition of this one: 
http://www.advancedglobalillumination.com/

I don't think it would be feasible to fit this kind of algorithm into 
POV-Ray easily. It's not just a matter of shooting more rays; they use 
different definitions for materials that wouldn't be compatible with the 
POV-Ray way.

On the other hand, I have never looked at the source code of POV-Ray, so 
perhaps you should judge for yourself.

-- 
Vincent



From: Tom York
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 20:20:00
Message: <web.4723d5a2f73ddf437d55e4a40@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
>   Well, I suppose I will not be testing this on pov3.7 and any chance of
> having anything like this is out.
>
> --
>                                                           - Warp

I can't see how that's a bad thing; there must be plenty left to do on 3.7
anyway without adding more features.

Tom



From: Nicolas Alvarez
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 20:42:46
Message: <4723db06@news.povray.org>

> I don't think it would be feasible to pack this kind of algorithms in 
> POV-Ray easily. It's not just a matter of shooting more rays, they use 
> different definitions for materials that wouldn't be compatible with the 
> POV-Ray way.

The existing definition of pigments should work for any algorithm. Normal 
perturbations would work with some of them; not all algorithms actually use 
the normal vector. The whole finish and interior blocks may need to be 
different for each rendering algorithm. Objects wouldn't be a problem either, 
except for those that have built-in normal perturbation 
(smooth_triangle, bicubic patch, and smooth heightfield; any others?).

> On the other hand, I never looked at the source code of POV-Ray, so 
> maybe judge for yourself.
> 
It will be rewritten for 4.0 anyway, so we might as well...



From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 22:50:25
Message: <4723f8f1@news.povray.org>
Tom York <alp### [at] zubenelgenubi34spcom> wrote:
> I can't see how that's a bad thing, there must be plenty left to do on 3.7
> anyway without adding more features.

  Could people please make up their minds already? People want new features
and people don't want new features at the same time.

  It's not like me testing some new features would slow down the pov-team
in any way.

-- 
                                                          - Warp



From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 27 Oct 2007 22:56:53
Message: <4723fa75@news.povray.org>
Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
> I'd like to help but I have entire books on the subject here, and 
> retyping them is beyond my patience ;)

  I simply wanted to try the notion of raytracing where no more than one ray
is ever traced from a given point (in other words, a ray is never split into
two or more rays), and where the end result is produced by sending many rays
through the same pixel and averaging the results (which is basically what
antialiasing does).
  In theory, if the scene has many objects which would split rays, as well
as other features which would require multiple rays (such as area lights),
doing it this way would reduce the overall number of traced rays per pixel
while still giving an acceptable result, thus speeding up the rendering.
(This is, AFAIK, how Pixar raytraces their images.)
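
  In outline it would be nothing more than the loop below (just a sketch of
the outer loop in C++; trace_single_path() is a made-up placeholder, and how
it chooses the single ray to follow at each surface is exactly the part I'd
need spelled out):

// Sketch only: many independent, non-branching samples per pixel,
// averaged together.
struct Color { double r = 0, g = 0, b = 0; };

// Placeholder: would follow exactly one ray per bounce, never splitting it.
Color trace_single_path(int x, int y)
{
    return Color{};   // dummy result; the real work would go here
}

Color render_pixel(int x, int y, int samples = 1000)
{
    Color sum;
    for (int i = 0; i < samples; ++i) {
        Color c = trace_single_path(x, y);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= samples; sum.g /= samples; sum.b /= samples;
    return sum;
}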

  However, I don't want to read lengthy books filled with material and
lighting theory just to try this. If someone could write me a simple
algorithm then I could try it.

-- 
                                                          - Warp



From: Vincent Le Chevalier
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 28 Oct 2007 07:00:35
Message: <472479e3$1@news.povray.org>

>   However, I don't want to read lengthy books filled with material and
> lighting theory just to try this. If someone could write me a simple
> algorithm then I could try it.
> 

Well, Scott did, and it's not all that difficult to refine it to obtain 
the result you seek, i.e. to have different colors for diffusion and 
reflection.

color = 0
for ray = 1 to 1000
    r = random number between 0 and 1
    a = specular_amount
    b = specular_amount + diffuse_amount
    c = specular_amount + diffuse_amount + refraction_amount

    if 0 < r < a
        color += reflection_color * fire_reflection_ray

    if a < r < b
        color += diffuse_color * fire_diffuse_ray_in_random_direction

    if b < r < c
        color += refraction_color * fire_refraction_ray

    if c < r < 1.0  // absorption
        color += 0

    // Of course you could have emitting surfaces as well
    if emission
        color += emitted_color
next

pixel_color = color / 1000

The algorithm should check that c <= 1, obviously; otherwise the surface 
reflects and transmits more light than it receives.

The real problem with that approach, which you should have pointed out, 
is that the lights are missed most of the time if you don't fire rays at 
them specifically. If all your lights are point lights, they will always 
be missed.

With very simple BRDFs such as these (specular + diffuse), you would only 
have to fire shadow rays in the diffuse case above, and possibly sample 
all visible emitting surfaces. I don't remember the details...
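
Roughly like this for a single point light (a sketch only; occluded() and 
the little vector helpers are invented names, not from either of those books):

// At a diffuse bounce, estimate direct lighting by firing one shadow ray
// toward the point light. Every helper here is a stand-in.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-in: a real renderer would trace the shadow ray against the scene.
static bool occluded(Vec3 /*from*/, Vec3 /*to*/) { return false; }

// Unshadowed Lambertian contribution of one point light at a surface point.
static double direct_light(Vec3 point, Vec3 normal, Vec3 light_pos, double intensity)
{
    Vec3   to_light = sub(light_pos, point);
    double dist2    = dot(to_light, to_light);
    double dist     = std::sqrt(dist2);
    Vec3   dir      = { to_light.x / dist, to_light.y / dist, to_light.z / dist };
    double cosine   = dot(normal, dir);
    if (cosine <= 0.0 || occluded(point, light_pos))
        return 0.0;
    return intensity * cosine / dist2;   // inverse-square falloff
}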

The other problem is deciding when to stop firing rays. If many 
surfaces have no absorption, it's possible to end up with very long 
paths... I think one of the books uses a form of Russian roulette to 
stop ray spawning and still keep the picture unbiased.
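
The roulette itself is simple enough, something along these lines (only an 
illustration; the depth cutoff and survival probability are arbitrary):

// Russian roulette: after a few guaranteed bounces, kill the path with
// probability (1 - p) and divide surviving contributions by p, which keeps
// the estimate unbiased while keeping path lengths finite.
#include <random>

double roulette_weight(int depth, std::mt19937& rng)
{
    if (depth < 3)
        return 1.0;                  // always follow the first few bounces
    const double p = 0.8;            // survival probability
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) >= p)
        return 0.0;                  // terminate this path
    return 1.0 / p;                  // compensate the survivors
}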

-- 
Vincent



From: Tom York
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 28 Oct 2007 09:10:00
Message: <web.472497bef73ddf437d55e4a40@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Tom York <alp### [at] zubenelgenubi34spcom> wrote:
> > I can't see how that's a bad thing, there must be plenty left to do on 3.7
> > anyway without adding more features.
>
>   Could people please make up their minds already? People want new features
> and people don't want new features at the same time.
>
>   It's not like me testing some new features would slow down the pov-team
> in any way.
>
> --
>                                                           - Warp

I don't understand. Looking back through the thread, the only person discussing
this for 3.7 appears to be you; as far as I can see, the rest of the discussion
was in the context of POV 4. Certainly I think it's pointless to add this to
POV 3.7; it would need a huge amount of work to get the sampling right.

I've no idea what will and what will not slow the POV team down. How could I?

Tom



From: Darren New
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 28 Oct 2007 17:05:37
Message: <472507b1$1@news.povray.org>
Vincent Le Chevalier wrote:
> Well Scott did, and it's not all that difficult to perfect it to obtain 
> the result you seek, i.e. to have different colors for diffusion and 
> reflection.

As long as we're simplifying, can anyone describe what the benefit of 
this technique is compared to biased ray tracing? What does biased 
ray tracing miss that this one catches?

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?



From: Warp
Subject: Re: ANN: New, open-source, free software rendering system for physically correct ...
Date: 28 Oct 2007 17:56:54
Message: <472513b5@news.povray.org>
Vincent Le Chevalier <gal### [at] libertyallsurfspamfr> wrote:
> color = 0
> for ray=1 to 1000
> r = random number between 0 and 1
> a = specular_amount
> b = specular_amount+diffuse_amount
> c = specular_amount+diffuse_amount+refraction_amount

> if 0 < r < a
>   color += reflection_color * fire_reflection_ray

> if a < r < b
>   color += diffuse_color * fire_diffuse_ray_in_random_direction

> if b < r < c
>   color += refraction_color * fire_refraction_ray

> if c<r<1.0 //absorption
>   color += 0

> //Of course you could have emitting surfaces as well
> if emission
>   color += emitted_color

> next

> pixel_color = color / 1000

  As far as I can see, this still has the problem that light reflected
from the surface of the object by specular reflection is not *added*
to the rest of the light leaving the object (by diffuse reflection
and/or refraction), but instead is *averaged* with it.

  This will effectively make the reflection dimmer and the surface
possibly more opaque (if it was defined to be semitransparent) than
originally defined.

  The problem is that using this formula to render an object will most
probably give a very different result from the regular way it's done now,
and this different result might not be better or more realistic.
While it could still be feasible, I'm not completely convinced the end
result will be "correct".
  (Besides, the basic idea I had was that you could optionally switch to
this alternative rendering method and get basically the same image in
average coloration, give or take some graininess. However, as presented
above, it will most probably not give the same image.)

  Basically what POV-Ray uses is the Phong lighting model. While it's not
a 100% physically accurate lighting model, it's often close enough to
reality that quite realistic images can be created with it.

  In the Phong lighting model the diffuse and specular reflection components
are added. While in real life a pure addition probably never happens (it's
more of a weighted average, with the weighting factors depending on angles
of incidence, etc.), it's not far off for many materials, which is why this
simple lighting model often gives good-enough results.
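
  In its usual textbook form, for a single light source, that is roughly

$$ I \;=\; k_a\,i_a \;+\; k_d\,(\hat N \cdot \hat L)\,i_d \;+\; k_s\,(\hat R \cdot \hat V)^{\alpha}\,i_s $$

with the ambient, diffuse and specular terms simply summed.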

  The algorithm presented above calculates the non-weighted average of
the diffuse and specular components. This means that, for example, a
black object cannot have a completely white highlight (because the averaging
will make it gray), even though in real life it's perfectly possible for
this to happen.

  Another problematic case I can see is an object which has been defined
to have a completely clear surface pigment, strong reflection and
refraction. The algorithm given above will make the surface look only
semi-transparent instead of completely clear, because the reflection is
not added to the refraction but averaged with it.

  Once again, the simple Phong lighting model might not be 100% accurate
in this case, but it's close nevertheless. Reflected and refracted light
is indeed added in reality, not averaged. A light ray cannot make another
light ray dimmer; it can only make it brighter. Averaging would mean that
e.g. a refracted ray could make a reflected ray dimmer, which I think is not
physically correct.

  The following definitions are a bit problematic:

> a = specular_amount
> b = specular_amount+diffuse_amount
> c = specular_amount+diffuse_amount+refraction_amount

  If a texture has been defined to have "reflection 1" and "transmit 1"
(which I suppose would be 100% refraction_amount) and a standard diffuse
finish value of 0.6, what would be the values of a, b and c?

  If you want to make the amounts sum to 1 (i.e. c = 1), then you would have
to actually lower those values. You would have to make reflection be about
0.38, transmit be about 0.38 and diffuse be about 0.23. Effectively you are
making the object less transparent, less reflective and less diffuse, which
is not how the original texture was defined.
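
  (Written out, the scaling is just

$$ \frac{1}{1+1+0.6} \approx 0.38, \qquad \frac{1}{2.6} \approx 0.38, \qquad \frac{0.6}{2.6} \approx 0.23 $$

for reflection, transmit and diffuse respectively.)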

> The algorithm should check that c<1, obviously, otherwise the surface 
> transmits more light than it receives.

  A surface can receive light from more than one source, and the lighting
is added. Most obviously, the lighting at a certain point may be caused
by reflection *and* refraction. This means that two different rays of light
from different sources are arriving at the same point on the surface of
the object, and continuing from there to the same point in the projection plane.

  Thus, obviously, the lighting of the surface at that point can be brighter
than if it was illuminated by only one of the original sources.

  A surface is not emitting more light than it receives simply because it's
emitting more light than *one* light source can emit.

> The real problem with that approach, that you should have pointed out, 
> is that the lights are missed most of the time if you don't fire rays to 
> them specifically. If all your lights are point lights, they will always 
> be missed.

  This is not a problem because light rays can be shot towards point light
sources (and area light sources as well).

> The other problem is deciding when you stop firing rays. If many 
> surfaces have no absorption, it's possible to end up with very long 
> paths...

  That's what max_trace_level is for.

-- 
                                                          - Warp



