POV-Ray : Newsgroups : povray.general : contemporary photorealism
  Re: contemporary photorealism  
From: m1j
Date: 16 Dec 2004 10:10:00
Message: <web.41c1a48de72f59079a1e5e670@news.povray.org>
"Paris" <par### [at] lycoscom> wrote:
> Pov-Ray is lagging farther and farther behind commercial rendering software
> in terms of photo-realism.   There are many reasons why this is the case,
> but some reasons are more evident and more easily solved than others.
>
> 1.  The Phong model is outdated now.  The next release of POV-Ray should
> use physically-based BRDFs, and keep the Phong model around only for
> compatibility.  Phong makes everything look like plastic, including what we
> have been calling "glass".  The difference between POV-Ray glass and
> physically-based glass in other packages is STRIKING.  Real glass shows a
> Fresnel effect: light reflected from the surface at a shallow angle tends
> toward a perfect mirror.  You can observe this on any window pane.  My
> suggestion is to allow BRDFs for those who need them, and base the
> documentation around the Phong model as usual.
>
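For what it's worth, the grazing-angle behaviour described above is usually approximated with Schlick's formula. A minimal sketch in plain Python (not POV-Ray SDL):

```python
# Schlick's approximation to the Fresnel reflectance.
# f0 is the reflectance at normal incidence (~0.04 for ordinary glass);
# cos_theta is the cosine of the angle between the view ray and the normal.
def schlick_fresnel(cos_theta, f0=0.04):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Head-on (cos_theta = 1) gives just f0; at grazing angles (cos_theta -> 0)
# the reflectance approaches 1, i.e. the surface acts like a perfect mirror.
```

This is why glass looks dull in a pure Phong renderer: Phong's constant reflection term has no such angular dependence.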
> 2.  POV-Ray does not have hair, fuzz, fur, or suede textures.  Brushed metal
> would be nice also, and car paint too.  I won't ask for
> subsurface-scattered flesh just yet; it tends to be very time-consuming to
> implement.  There are many materials out there that can only be rendered
> using path-tracing techniques, such as very shiny, partially-reflective
> gold.  A few others are glossy reflections (blurred reflections) and
> frosted glass.
>
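The glossy-reflection case is the easiest of these to sketch: instead of one perfect mirror ray, you average many rays jittered around it. A toy Monte-Carlo version in Python, where `trace` is a hypothetical stand-in for whatever function returns the colour seen along a direction:

```python
import random

# Glossy (blurred) reflection by distributed sampling: average the colour
# returned by `trace` over several directions jittered around the perfect
# mirror direction.  `roughness` controls the jitter radius (0 = mirror).
def glossy_reflection(mirror_dir, roughness, trace, samples=16):
    total = 0.0
    for _ in range(samples):
        jittered = tuple(d + random.uniform(-roughness, roughness)
                         for d in mirror_dir)
        total += trace(jittered)
    return total / samples
```

Frosted glass works the same way, with the jitter applied to the refracted ray instead of the reflected one.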
> 3.  POV-Ray uses distributed ray-tracing to simulate global illumination.
> Reflected parts of the scene do not have the radiosity calculation
> performed on them.  I have found this highly frustrating.  (Simply
> create a radiosity room and put a large reflecting ball in the middle to
> see what I mean.)  The speed-ups to this method leave even the most
> advanced users scratching their heads.  I've been using POV-Ray since it was
> named DKB-Trace, and honestly, I'm still not sure what "minimum_reuse"
> means among the radiosity settings.  If you think about what distributed
> ray-tracing does, you will notice it works by tracing rays into DARK PARTS
> of the scene, hoping to hit a swath of light.  It doesn't take a professor to
> realize that this is a wasted calculation: a ray traced into a dark part
> of the scene will mathematically never make a difference in the shaded
> pixel.  There are methods that make an uneasy marriage between photon
> shooting and "gathering" from those shots, based on importance
> sampling.  By making the algorithm more complex, they tend to avoid WASTED
> CALCULATIONS.  Also, these bi-directional algorithms (as they are called)
> make the settings for the end user very simple, so we need not worry that
> more complex algorithms will "confuse the end user even more".
>
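The simplest instance of the importance-sampling idea mentioned above is cosine-weighted hemisphere sampling: rather than firing gather rays uniformly (many of which graze the surface and contribute almost nothing through the cosine term), you draw directions with probability proportional to cos(theta). A standard sketch, using the concentric-disc trick in the local frame where the surface normal is +z:

```python
import math, random

# Cosine-weighted hemisphere sampling (pdf = cos(theta)/pi): sample a point
# uniformly on the unit disc, then project it up onto the hemisphere.
# Directions cluster around the normal, where the cosine factor is largest.
def cosine_weighted_direction():
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                   # radius on the unit disc
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))   # cos(theta), always >= 0
    return (x, y, z)
```

This alone does not fix the "rays into dark parts" problem — that needs light-side information, which is where the bidirectional methods come in — but it shows the principle: spend samples where the integrand is large.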
> 4.  There are other physically-based methods out there that turn ray-tracing
> on its head.  I fully expect future versions of POV-Ray to move away
> from the Phong model (point 1) and implement a few BRDFs for popular
> surfaces, but other methods would be nice to see also, though I have less
> faith they will appear.  There are ways to calculate light in rendering that do
> not even use RGB color space.  These algorithms use spectral integration,
> and build the picture out of pixels that are colored with a
> SPECTRUM rather than an RGB triple.  This kind of rendering lets you
> specify the wattage of incandescent light bulbs, or even simulate the flash
> bulbs of particular cameras.  REALISTIC SUNLIGHT, of course, is the
> biggest pay-off of this method.  Other freeware packages on the web that
> attempt this are usually written by a single busy person, and they
> are hopelessly buggy or just plain do not work.  That is why I have
> come to this board to make this suggestion: POV-Ray is the most robust and
> stable free rendering software in the world.
>
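The physical basis for "specify the wattage" is that an incandescent filament is close to a blackbody, so its emission spectrum follows Planck's law and is fixed by its temperature. A minimal sketch:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

# Spectral radiance of a blackbody (Planck's law), in W * sr^-1 * m^-3.
# A spectral renderer evaluates this per wavelength sample instead of
# storing an RGB triple for the light source.
def planck_radiance(wavelength_m, temp_k):
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

# A tungsten filament at ~2800 K emits much more at 700 nm (red) than at
# 450 nm (blue), which is why incandescent light looks warm; sunlight at
# ~5800 K peaks near the middle of the visible range.
```

The final image is then obtained by integrating the per-pixel spectra against the CIE colour-matching functions to get displayable RGB.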
> 5.  Even without spectral integration, you can render in RGB space and still
> do EXPOSURE simulations.  (Usually exposure simulation is
> not used unless a certain amount of "energy" is calculated to be passing
> through the camera's aperture, but it can be done ad hoc in RGB space too,
> with some finagling.)  This basically works by storing floating-point triples in
> each pixel, none of which are CLIPPED or "tuned down" to fit into 0.0 -->
> 1.0.  The idea is that even with the wide amplitude of real light, you
> always want your display adapter to use its contrast ratio to the maximum.
> A final pass is performed over this image, the workings of which are
> controlled by the user.  The plugin that someone made to simulate this is
> not robust enough.  The user must be allowed to map directly to floating-point
> triples, and then "slide" this window around on an image whose
> contrast ratio is much larger than 0.0 to 1.0.  Automatically
> "stretching" the mapping to fit the entire contrast of the triples is not
> always what you want.
>
> --
> Paris
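The "sliding window" pass Paris describes can be sketched in a few lines. The curve below is one common choice, not the plugin he mentions: an exposure scale followed by a smooth compression of the unbounded HDR range into [0, 1), so nothing is hard-clipped.

```python
import math

# Exposure tone mapping for an unclipped HDR pixel value (a plain float,
# one per channel).  1 - exp(-k*x) maps [0, inf) smoothly into [0, 1),
# so highlights roll off instead of being clipped at 1.0.
def expose(value, exposure=1.0):
    return 1.0 - math.exp(-exposure * value)

# Raising `exposure` slides the visible window toward the dim parts of the
# image; lowering it recovers highlight detail -- the user-controlled
# "slide", as opposed to automatically stretching to the full range.
```

The key point stands either way: the renderer must keep the raw floating-point triples and let the user choose the mapping afterwards.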


With all of these ideas, I can hardly wait to see your patch so we can compare it to
the official POV-Ray. I am just assuming that you will back up this post
with real code, applied as a patch, for all of us to see your ideas in action.


