From: Warp
Subject: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 11 Sep 2011 09:00:43
Message: <4e6cb0f9@news.povray.org>
This topic has been discussed at great length already, but perhaps I could
try a different approach to explaining why assumed_gamma 1.0 ought to produce
a physically more accurate result (as well as the practical complications that
using it causes when designing scenes).
The main reason why assumed_gamma 1.0 ought to produce more accurate
results has to do with a physical concept called irradiance. Despite the
fancy name, irradiance is simply the amount of light power arriving at a
surface. It is measured in watts per square meter (in other words, how many
watts of radiant power the light delivers to each square meter of the
surface).
Now, when light hits a surface, some of it is reflected. How much is
reflected, and in which directions, is an extremely complicated function,
but for the sake of simplicity let's assume the following scenario:
A fully white (rgb 1.0) light source, and a fully white (rgb 1.0) diffuse
surface, oriented so that its normal is at 60 degrees from the direction of
the incoming light. Due to this orientation the surface will reflect exactly
50% of the incoming light in all directions (including towards the camera,
of course).
(The reason why it's exactly 50% at 60 degrees is Lambert's cosine law,
which says that the amount of light received, and hence diffusely reflected,
is proportional to the cosine of the angle between the incoming light and the
surface normal, and cos(60) = 0.5.)
In other words, suppose the light delivers 10 watts/m^2 to a surface facing
it head-on. Our tilted surface then receives, and re-emits, 5 watts/m^2.
Hence the brightness of this surface, in irradiance terms, is 5 watts/m^2
(ie. exactly half of what a fully-lit white surface would reflect).
Now, how do we *draw* this surface? The problem is that the relationship
between irradiance and the brightness perceived by the human eye is far from
linear. In other words, the surface might be reflecting 50% of the incoming
light, but it will not *look* half as bright as a fully-lit white surface.
In fact, rather than looking 50% gray, it will look approximately 73% gray
(0.5^(1/2.2) is about 0.73), because that's how the human eye perceives it.
In other words, when we draw this surface, it has to *look* like a
73% gray rather than a 50% gray, because that's the perceived brightness
of half of the full irradiance.
That is what assumed_gamma 1.0 does. It's the reason why an "rgb 0.5"
will look about 73% gray with that setting (rather than 50% gray).
Hence if you render for example a diffuse sphere like this, it ought to
be more accurate in terms of brightness than with assumed_gamma 2.2. (The
parts of the sphere that are facing at 60 degrees from the light source
should look about 73% gray, rather than 50% gray, if the physics are correct.)
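Just to make this concrete, a minimal test scene could look something like
the sketch below. The camera and light placement are arbitrary values I made
up for illustration; the only things that matter here are the assumed_gamma
setting, the white light and the purely diffuse white surface:

  global_settings { assumed_gamma 1.0 }

  camera { location <0, 1, -4> look_at <0, 0, 0> }

  // a "fully white" point light; the position is arbitrary
  light_source { <20, 20, -20> color rgb 1.0 }

  // a fully white, purely diffuse sphere
  sphere {
    <0, 0, 0>, 1
    pigment { color rgb 1.0 }
    finish { diffuse 1.0 ambient 0 specular 0 }
  }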
However, this causes a practical problem when specifying colors. Namely,
do you want the color definition "rgb 0.5" to mean "half of full irradiance",
or do you want it to mean "50% gray"? With assumed_gamma 1.0 it will mean
the former (while with assumed_gamma 2.2 it will mean the latter).
As said, however, "half of full irradiance" corresponds roughly to about
73% perceived brightness (compared to the brightness of "rgb 1.0"). In other
words, "rgb 0.5" will *look* significantly brighter than half-gray.
The relationship between irradiance (the absolute amount of power that is
carried by the light, measured in watts/m^2) and the brightness that is
*perceived* by the human eye is roughly a power law with an exponent of
approximately 2.2: perceived brightness goes roughly as irradiance raised to
the power 1/2.2, or equivalently, irradiance goes as perceived brightness
raised to the power 2.2. (The approximation is quite rough, though.)
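If you want to check the numbers, you can let povray itself do the
arithmetic. This is just a throwaway sketch using the rough 2.2 exponent
mentioned above:

  #declare Gamma     = 2.2;                     // rough exponent
  #declare Reflected = 0.5;                     // half of full irradiance
  #declare Perceived = pow(Reflected, 1/Gamma); // about 0.73
  #debug concat("perceived brightness: ", str(Perceived, 0, 3), "\n")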
Now, displays also have a non-linear relationship between raw pixel values
and the amount of light emitted by those pixels. In other words, a pixel with
values (128,128,128) will not emit 50% of the light of a fully-white
pixel (255,255,255). Instead, this relationship is also a power law, with
an exponent of, curiously (although I don't know if coincidentally), 2.2
in most systems: the emitted light goes roughly as (pixel value / 255)^2.2.
What this means is that a pixel with value (128,128,128) will *look*
approximately 50% gray (even though the monitor is only sending about 22%
of the light, as measured in watts/m^2). This is actually extremely
convenient when dealing with bitmaps: There's an almost linear relationship
between pixel values and perceived brightness.
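The same kind of throwaway calculation shows where that 22% figure comes
from. The 2.2 here is just the typical display exponent; your actual monitor
may differ slightly:

  #declare DisplayGamma = 2.2;
  #declare PixelValue   = 128/255;                       // "50% looking" gray
  #declare Emitted      = pow(PixelValue, DisplayGamma); // about 0.218
  #debug concat("fraction of light emitted: ", str(Emitted, 0, 3), "\n")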
This is the reason why image manipulation programs will use (128,128,128)
for half gray (because it certainly looks half gray, which is convenient).
This is also the explanation of the "assumed_gamma" keyword: It is assumed
that the color was specified in an environment with gamma 2.2 (which is the
most common). In such an environment (128,128,128) does look 50% gray, and
hence if you use "assumed_gamma 2.2" in povray, the equivalent "rgb 0.5"
will also look 50% gray.
The problem is that if you try to do this when assumed_gamma 1.0 has been
specified in povray, the result will be completely different. In that case
the colors are assumed to be linear (in terms of irradiance).
As said, with assumed_gamma 1.0 "rgb 0.5" does not mean "50% gray", and
instead it means "half of the full irradiance" (which is approximately the
same as 73% gray).
If you want "50% gray", you need to pre-gamma-correct the color. Basically,
you need to calculate pow(0.5, 2.2), getting you about "rgb 0.218", which
would be about 50% gray.
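In scene code that would look something like this sketch (the plane is just
an arbitrary object to put the color on, and a plain diffuse texture is
assumed):

  // under assumed_gamma 1.0: a color that should *look* about 50% gray
  #declare LooksHalfGray = color rgb pow(0.5, 2.2);  // about rgb 0.218
  plane {
    y, 0
    pigment { color LooksHalfGray }
  }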
Of course there are still some problems left. Most prominently, if you
want a gradient from one color to another that *looks* linear (rather than
being linear with respect to irradiance), there's currently no easy way to
achieve that, when using assumed_gamma 1.0. There are many other situations
as well, related to color maps and other such maps. This can make designing
textures a bit difficult.
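One clumsy workaround, and it really is clumsy, which is exactly my point,
is to pre-correct every control point of a color map by hand, something like
this sketch:

  // a gray ramp meant to *look* roughly linear under assumed_gamma 1.0;
  // every control value is pushed through pow(v, 2.2) by hand
  pigment {
    gradient x
    color_map {
      [ 0.00 color rgb 0.0 ]
      [ 0.25 color rgb pow(0.25, 2.2) ]  // about 0.047
      [ 0.50 color rgb pow(0.50, 2.2) ]  // about 0.218
      [ 0.75 color rgb pow(0.75, 2.2) ]  // about 0.531
      [ 1.00 color rgb 1.0 ]
    }
  }

The segments between the control points are of course still interpolated
linearly in irradiance, so this only approximates a perceptually linear ramp;
you would need more control points to get closer.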
An easy way around this problem is to simply use assumed_gamma 2.2 and
accept that the end result might not be physically as accurate. After all,
the human brain is quite forgiving of such small inaccuracies, and nobody
will notice in practice.
--
- Warp
From: Ive
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 11 Sep 2011 09:42:18
Message: <4e6cbaba@news.povray.org>
On 11.09.2011 15:00, Warp wrote:
> The main reason why assumed_gamma 1.0 ought to produce more accurate
> results has to do with a physical concept called irradiance. [...]
While what you write about irradiance is true, I completely disagree with
all the conclusions you draw from it.
The main misconception seems to be the assumption that there is something
like a color that is an inherent property of an object. What the color
within a pigment statement actually describes is the way light is (diffusely,
simplified I know) reflected. And BTW you always seem to assume white light,
while e.g. the color from a light bulb is far from white.
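Purely as an illustration (these numbers are not measured values, just the
kind of warm tint I mean, and the position is arbitrary):

  // a warmish "light bulb" color instead of pure white
  light_source { <2, 3, -2> color rgb <1.0, 0.85, 0.65> }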
And all I have to say about "small inaccuracies" and "nobody will notice
in practice" is that my experience simply shows the opposite.
But I believe you, and I know it is possible to create great and
realistic-looking scenes with POV-Ray following your advice. But I am
also quite sure there is a lot more fiddling with colors and settings
involved to make it *look* right, instead of (as I propose) trying to feed
POV-Ray with "real-world" values and simply letting POV-Ray *calculate* it
right. This also has the great advantage (I am very lazy) that I can
reuse my own objects in different scenes and completely different
lighting setups, and they always look right and as expected.
-Ive
From: Thomas de Groot
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 12 Sep 2011 03:14:31
Message: <4e6db157$1@news.povray.org>
On 11-9-2011 15:00, Warp wrote:
> This topic has been discussed at great length already, but perhaps I could
> try a different approach to explaining why assumed_gamma 1.0 ought to produce
> a physically more accurate result (as well as the practical complications that
> using it causes when designing scenes).
>
Yes, I am aware of this discussion, and I apologize for bringing this up
again :-) It is a complex matter not immediately understood by the lay
person.
Your explanation is very clear and answers a number of puzzles I
obviously had about my scene setup. Thanks indeed.
Thomas
From: Warp
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 12 Sep 2011 14:32:00
Message: <4e6e5020@news.povray.org>
Ive <ive### [at] lilysoftorg> wrote:
> While what you write about irradiance is true I completely disagree with
> all conclusions you draw from this.
All conclusions? Like what?
As far as I can see, these are the conclusions I drew:
When half of the incoming light is reflected from a surface, it looks
to the human eye approximately 73% of the full brightness. Do you
"completely disagree" with this? Can you explain?
assumed_gamma 1.0 better simulates that perception than assumed_gamma 2.2
(because in the latter case the brightness of the surface looks 50% of
full brightness, rather than 73%, which would mean that significantly less
light is being reflected). You disagree with this? Why?
Displays with a gamma of 2.2 happen to approximately coincide with the
brightness perception of the human eye, which means that pixel values
scale almost linearly to perceived brightness (which means that eg. a
pixel value of (128,128,128) will look like about 50% gray). You disagree
with this? Please explain.
If you use assumed_gamma 1.0 in povray, linear gradients will not look
linear (instead they will look skewed, brightening too quickly at the dark
end). That's because they will be linear in terms of irradiance, not in
terms of perceived brightness. You disagree with this?
Because of the previous, designing many textures becomes more complicated,
at least currently. (If you want, for example, a gradient that looks linear,
you would have to somehow compensate for the non-linear, power-law way in
which a linear irradiance gradient is perceived. This can be quite
difficult to do with complex color maps.) Do you disagree with this, and
why?
If you are using assumed_gamma 1.0 and you want, for example, a color
that looks 50% gray, you will have to "gamma-uncorrect" rgb 0.5 in order
to achieve that (giving you "rgb .218"). In other words, you need to convert
perceived brightness values into linear irradiance values. Please explain
your disagreement.
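(A trivial helper can hide that conversion. This is just a sketch, and the
macro name is made up:)

  // hypothetical helper: perceived gray level -> linear irradiance value
  #macro PerceivedGray(V)
    color rgb pow(V, 2.2)
  #end

  // a pigment that should *look* about 50% gray under assumed_gamma 1.0
  pigment { PerceivedGray(0.5) }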
Using assumed_gamma 2.2 makes it easier to map color values to perceived
colors because it corresponds roughly to a linear perceptual scale. On the
other hand, the rendering is not technically accurate because surface
lighting will be, technically speaking, scaled in the wrong way (for example
a surface that should reflect 50% of the incoming light ends up displayed as
if it reflected only about 22%). You could disagree with this, but you'll
have to explain your technical reasoning.
The technically "wrong" illumination calculations do not produce images
that are obviously wrong. There are literally millions of images out there
made by different renderers (which use this same "wrong" gamma handling),
and over 10 years' worth of povray renderings made by thousands of people
out there, that attest to this. Hence using assumed_gamma 2.2 is not such
a big deal in practice. Feel free to disagree.
> The main misconception seems to be that you assume there is something
> like a color that is the inherent property of an object.
I don't understand what that has to do with what I wrote.
I also don't understand what it is that you are trying to say.
> And all I have to say about "small inaccuracies" and "nobody will notice
> in practice" is that my experience simply shows the opposite.
Feel free to point out a few examples out of the millions of images out
there which have been, technically speaking, rendered with the wrong
gamma settings, and which obviously look wrong. You can start with the
POV-Ray hall of fame.
--
- Warp
From: Tim Cook
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 17 Sep 2011 09:03:22
Message: <4e749a9a@news.povray.org>
On 2011-09-11 08:00, Warp wrote:
> Now, how do we *draw* this surface? The problem is that the relationship
> between irradiance and the brightness perceived by the human eye is far from
> linear. In other words, the surface might be emitting 50% of the incoming
> light, but it will not *look* half as bright as a fully-lit white surface.
> In fact, rather than looking 50% gray, it will look approximately 73% gray,
> because that's how the human eye perceives it.
Query: is this a matter of how the human eye sends data on to the
brain, how the brain processes the raw eye-data, or a combination of the
two? Are the values the same for everyone? If they're not, what's the
range that different people see?
Obviously, there's probably the average perception that's being
targeted, but...it does make me wonder.
From: Darren New
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 17 Sep 2011 09:47:13
Message: <4e74a4e1$1@news.povray.org>
On 9/17/2011 6:03, Tim Cook wrote:
> Query: is this a matter of how the human eye sends data on to the brain, how
> the brain processes the raw eye-data, or a combination of the two?
Visual data is tremendously processed before it even gets out of your eye.
Rods and cones are analog devices, meaning that if you stare at a large red
circle on a large green surface, your eyes aren't even seeing the middle of
the circle after about a fifth of a second. That's the purpose of saccades
and the reason you can't see that your blind spot is there. The second layer
detects small features, the third layer detects edges, and then it's on the
way to your brain. Pretty soon, it's split out into "objects" vs
"locations", so people with brain damage in the "objects" part of their
brain can't see what you throw them, but they can catch it because they know
it's there, for example.
I think asking whether it's the eyes or the brain doing the interpretation
is an over-simplified question. :-)
--
Darren New, San Diego CA, USA (PST)
How come I never get only one kudo?
From: Tim Cook
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 17 Sep 2011 21:21:43
Message: <4e7547a7$1@news.povray.org>
On 2011-09-17 08:47, Darren New wrote:
> I think asking whether it's the eyes or the brain doing the
> interpretation is an over-simplified question. :-)
Well, I was thinking more about the reception of the direct input from the
rods and cones, separate from any other processing. Sort of a..."is the
colour I see as 'blue' the same colour you see as 'blue'?"
From: Alain
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 17 Sep 2011 22:12:28
Message: <4e75538c@news.povray.org>
> On 2011-09-11 08:00, Warp wrote:
>> Now, how do we *draw* this surface? The problem is that the relationship
>> between irradiance and the brightness perceived by the human eye is far
>> from linear. [...]
>
> Query: is this a matter of how the human eye sends data on to the brain,
> how the brain processes the raw eye-data, or a combination of the two?
> Are the values the same for everyone? If they're not, what's the range
> that different people see?
Your retina does a good amount of preprocessing of what you see,
including some pattern optimisation, interpolation and differentiation.
Then the optic nerve applies still more intermediate processing.
Finally, your brain does the main processing: lots of cross-referencing,
pattern analysis and recognition.
And finally, you see the image.
This allows you to instantly recognize a 95%-degraded image of something,
but it also causes all those optical illusions.
So no, not all people see the same thing the same way. The colour
response of your eye is almost certainly different from mine by at least
a minute amount.
That difference is extremely difficult to evaluate, as two persons who
don't see the same thing the same way will probably still describe it the
same way.
>
> Obviously, there's probably the average perception that's being
> targeted, but...it does make me wonder.
Not average perception, but consensually termed perception.
From: Patrick Elliott
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 18 Sep 2011 03:51:54
Message: <4e75a31a$1@news.povray.org>
On 9/17/2011 6:03 AM, Tim Cook wrote:
> Query: is this a matter of how the human eye sends data on to the brain,
> how the brain processes the raw eye-data, or a combination of the two?
> Are the values the same for everyone? If they're not, what's the range
> that different people see?
>
Almost impossible to say. None of us have a "name" for colors that
contain both red and green in them, because, except for some situations
where you cause over-saturation and some people "briefly" see a
confusing color that they normally don't, the processing basically robs
us of that range of colors. Some people have four types of receptors,
so they can, sort of, see more colors than we can, but without the "language"
to go with it there is no way to process that into something tangible,
unless, by sheer chance, a situation arose where someone "needed" to see
the differences, which is bloody unlikely. Otherwise, short of testing
it, there is no way to say precisely, save that it ranges from "not able
to see that color" to "everything is shifted slightly, so they don't see
some slice of the color range as clearly". I have no idea if certain
genetic forms produce a wider, or narrower, range, but that is likely,
so it could be shifted, or missing things on one end of the spectrum, or
the other, or both, etc.
In short, it's a damn mess. lol
From: Tim Cook
Subject: Re: Why assumed_gamma 1.0 should be used (and the drawbacks)
Date: 18 Sep 2011 10:08:17
Message: <4e75fb51$1@news.povray.org>
On 2011-09-18 02:51, Patrick Elliott wrote:
> Almost impossible to say. None of us have a "name" for colors that
> contain both red and green in them [...]
It occurs to me, however, that there is a potential way to quantify the data.
We have the ability to emit very specific wavelengths of light. We can,
therefore, use a definite reference 'red', 'green', and 'blue', and
calibrate a filtered sensor to each. This being done, we can use a very
fine checkerboard pattern of the colours plus white, alternating with a
pigment made by /mixing/ the colour plus white, and then other
combinations. This would produce the baseline.
From here, it's a matter of detecting at the optic nerve what data gets
sent on to the brain.
*whips out some nano-wires and a scalpel*
Who's game? XD