POV-Ray : Newsgroups : povray.beta-test : Re: Gamma tutorial for the 3.7 documentation?
  Re: Gamma tutorial for the 3.7 documentation?  
From: clipka
Date: 4 Nov 2009 18:10:49
Message: <4af209f9@news.povray.org>
Sven Geier schrieb:

> At this point, it might actually be nice to have a clear "mission statement"
> that actually tells us what purpose all the futzing with gamma in POV is
> actually supposed to have. Input? Output? Both? Some high-level declaration
> against which we can measure any one change that is occurring: "does *this*
> change really serve *that* purpose?"

It's actually about in- /and/ output.

For output, it is about converting from linear light intensity values to 
whatever it takes to make the CRT or LCD display generate just exactly 
that light intensity.

For input, it is about converting e.g. texture images that /already/ 
have undergone such a conversion process (or individual colors "picked" 
from such an image), back to linear light intensity values as needed to 
properly perform the raytracing computations.

Voila.
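The two conversions described above can be sketched with the standard sRGB transfer curve (a common choice for display encoding; POV-Ray's actual behavior depends on settings such as assumed_gamma and the output file gamma, so this is just the underlying math, not POV-Ray's code):

```python
def srgb_encode(linear):
    """Output side: linear light intensity (0..1) -> sRGB-encoded
    value, so the display emits just that intensity."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Input side: sRGB-encoded value (e.g. a pixel from a texture
    image) -> linear light intensity for the raytracing math."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

Note that the two functions are exact inverses: decoding a texture, rendering, and encoding the result for display round-trips a value unchanged.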


>>  If you specify "50% gray" in your scene file, and the end result is not
>> 50% gray on your screen, is that how you wanted it to work? Because that's
>> how POV-Ray 3.6 works.
> ...
> On the one side, I think of it in terms of physics. Photons interacting with
> surfaces, watts per square meter per steradian emitted in certain given spectral
> bands and then absorbed somewhere else. Diffraction, illumination, scattering.
> All the wonderful bits of math involved. Conservation of energy. That kind of
> thing.
> 
> From this perspective, your question up there doesn't make much sense: a
> specification of "rgb 0.5" in my scene file is about the reality of a given
> object, namely that it emits half as many watts per area into some solid angle
> (given by things like specularity) as it received. This specification is
> completely unrelated to any kind of output device - an object specified as "50%
> grey" can appear on any one monitor as any one color, depending on the
> illumination of that surface by light sources and other surfaces and by the
> angle to and distance from the camera.

Let me rephrase Warp's question a bit:

If you specify an object with a checkered pattern, colored as "50% grey" 
and "100% grey" (i.e. white) - how do you expect the "50% grey" squares 
of the object to appear /in relation to/ the adjacent white squares?

If you specify two spotlights, with brightness of "rgb 1.0" and "rgb 
0.5", illuminating a uniformly colored object - how do you expect the 
illuminated spots to appear /in relation to/ each other?

If you specify two spotlights with brightness of "rgb 0.5" each, shining 
at the /same/ spot on an object - how do you expect the illuminated spot 
to appear /as compared to/ a single light source with brightness of "rgb 
1.0" instead?

If you specify a light with brightness of "rgb 0.5" illuminating an 
object with a color of "red 1.0", how would you expect that object to 
appear /as compared to/ a similar scene having a light brightness of 
"rgb 1.0" and a color of "red 0.5"?


Of course you cannot name an /absolute/ value for how a color should 
appear in an image, unless you know exactly how bright "1.0" really is 
physically; but you can always specify /relationships/ between multiple 
color or brightness levels you have specified in your scene. Gamma 
affects not only the /absolute/ values, but also the /relative/ ones.
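The two-spotlight question above makes this concrete. A quick calculation, assuming a plain power-law display gamma of 2.2 (a typical value, not anything POV-Ray-specific), shows what happens if light addition is performed on gamma-encoded values instead of linear ones:

```python
GAMMA = 2.2

def to_display(linear):
    """Linear light intensity -> gamma-encoded pixel value."""
    return linear ** (1 / GAMMA)

def to_linear(encoded):
    """Gamma-encoded pixel value -> actually emitted intensity."""
    return encoded ** GAMMA

# Correct: the two "rgb 0.5" lights add in linear space, and the sum
# is encoded once for display -- identical to a single "rgb 1.0" light.
correct = to_display(0.5 + 0.5)          # -> 1.0

# Wrong: if "0.5" is treated as an already-encoded pixel value, each
# light emits only 0.5**2.2 of full intensity, and the pair falls well
# short of one full-brightness light.
wrong = to_linear(0.5) + to_linear(0.5)  # ~ 0.435
```

So with the wrong interpretation, the /relationship/ "two half-lights equal one full light" silently breaks, which is exactly the kind of relative error gamma handling is meant to prevent.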


> And from an arts perspective, the recent betas of 3.7 have produced results that
> looked like foot. But looked a lot better if I take them individually into a
> paint program after the fact - and reduce gamma by about ~50%.
> 
> Am I the only one who looks at things this way? Who is the target audience here,
> really? And what do they want and think? I am honestly curious.

I guess that, since this is an open source project and the developers 
get nothing out of it except the fun of developing an interesting piece 
of software and/or an improved version of their favorite ray tracing 
software, the target audience of POV-Ray is people having the same 
mindset about 3D rendering as the developers themselves.

I can't speak for the dev team, but I can speak for myself as a 
contributing developer: As a hobbyist I don't have access to 
professional physical light simulations; but having always been 
interested in natural science, my approach to achieving effects 
is a very physics-oriented one; I'm not asking "how does /this/ software 
package happen to allow me to create that effect?", but "how does /real 
life/ create it?" And therefore that's how I want POV-Ray to operate, 
too, so I can more easily wrap my head around how to get the software to 
produce what I want.

I also think that it makes sense to follow this approach even for people 
with the other mindset of "how does /POV-Ray/ allow me to create that 
effect?" - because for those folks it wouldn't matter whether the rules 
by which POV-Ray operates are close to real-world physics, or some magic 
fantasy science.


Now as for your problems with POV-Ray 3.7, I suppose that they result 
not from this mindset per se, but rather from a change in the way 
POV-Ray does things (especially gamma handling) that you just haven't 
gotten accustomed to yet - or from the known issues with input file 
gamma, which will be addressed in the next beta.


