Sorry, I just noticed that with method 3, max samples didn't matter... <_<
---
On 27.06.2010 08:59, Gyscos wrote:
> On 3.6, I would get the result I was looking for. In 3.7, with the now-real
> gamma computation, the light rays are too bright. So I tried to set a lower
> color for the media (0.01 instead of 0.1), but I can't get as much contrast as I
> did in 3.6...
> The workaround I use is to use type 3 instead of type 2, which is more directive,
> and then gives more contrast...
Yes, having had a look at the scene I guess that's the way to go indeed.
(Apparently, type 2 was the wrong model for your purposes.)
> When you're talking about the computed file, is it the colors of the pixels
> stored? Or after the gamma correction?
Neither. I mean the brightness levels computed; the "idea" of the image,
rather than its actual binary representation.
When an output file is actually written, there is a mapping applied from
the (simulated) physical brightness levels to the binary representation
values. With most file types, this mapping includes some gamma-encoding,
and in POV-Ray is governed by the FILE_GAMMA parameter.
Likewise, when a file is read and displayed by a viewing software, there
is a mapping applied from the binary representation values back to
(real) physical brightness levels, via the viewing software, operating
system API, graphics drivers, graphics card, display controller, and
(typically) LCD panel. With most file types, this mapping is fixed and
out of POV-Ray's control.
Last but not least, when POV-Ray outputs to the preview window, there is a
mapping applied directly from the /simulated/ physical brightness levels
to /real/ physical brightness levels, via the OS API, graphics drivers,
graphics card, display controller, and (typically) LCD panel. This
mapping, in all cases, is governed by the DISPLAY_GAMMA parameter.
> Because if the gamma correction is set to linear, then a linear preview in POV-Ray
> wouldn't be wrong, would it?...
I think you're approaching the problem from the wrong angle, and it's
difficult (though not impossible) to argue against that directly, so I'd
rather try to lead you to the conclusion via a different path. Forget for a
moment how you /think/ FILE_GAMMA and DISPLAY_GAMMA are related - because
as a matter of fact they're not: they serve independent purposes.
Let's have a look at DISPLAY_GAMMA:
This is the parameter that controls how /simulated/ physical brightness
levels (as computed by the POV-Ray rendering engine) are mapped to
/real/ physical brightness levels emitted by your display.
I think it should be obvious that you want a direct 1:1 mapping between
these two: You want your display to show what POV-Ray calculated.
However, the pipeline from the operating system to the emitted light has
some non-linearities in it: Rather than a 1:1 mapping, it is
approximately a f(x)=x^GAMMA mapping, with some system-specific fixed
value for GAMMA; so in order to achieve the net 1:1 mapping, POV-Ray
must compensate for that before handing the data over to the OS, by
applying another g(x)=x^(1/GAMMA) mapping.
Evidently, in order to do so, POV-Ray needs to know the value of GAMMA,
which is exactly what you specify with DISPLAY_GAMMA. If you get this
wrong, you'll retain non-linearities in the mapping between simulated
and real light intensities, and the preview will /not/ show what was
computed.
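The net 1:1 mapping described above can be sketched in a few lines of Python; this is a minimal illustration, assuming a display gamma of 2.2 (the value and the function names are mine, purely for demonstration):

```python
# Sketch of the DISPLAY_GAMMA compensation described above.
# GAMMA = 2.2 is an assumed, illustrative system-specific value.
GAMMA = 2.2

def display_nonlinearity(x):
    # Approximate transfer of the OS/driver/display pipeline: f(x) = x^GAMMA
    return x ** GAMMA

def precorrect(x, display_gamma):
    # What the renderer applies before handing data over: g(x) = x^(1/GAMMA)
    return x ** (1.0 / display_gamma)

# A simulated linear brightness level from the render engine:
simulated = 0.5
emitted = display_nonlinearity(precorrect(simulated, GAMMA))
# emitted matches simulated (up to floating-point rounding): the net mapping is 1:1
```

If the value passed to `precorrect()` were wrong, the two curves would no longer cancel, which is exactly the residual non-linearity described above.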
---
Thanks, I think I understand it better now...
But isn't it possible to query the right value for Display_Gamma from the OS? I
mean, how does other viewer software do it?
Also, about File_Gamma, it is used for the gamma-encoding that is applied when
creating the file. When reading the file, it is decoded by the software, and
then sent to the OS.
I understand that, between the file being sent to the OS and the pixel being
illuminated on the screen, there can be a gamma transformation, caused by the
graphics card, system, screen, ... because of some non-linearity.
Now, before that, how does the software decode the file?
Does it do a reverse-encoding with the gamma value stored in the file? Or does
it just send it linearly to the system?
I understood that Display_Gamma was mainly there to compensate for the
system+GC+screen non-linearity.
The question is: what should the File_Gamma value compensate for? The software
decoding? The system+GC+screen non-linearity? Or the eye's perception?
Thanks for your patience :)
---
On 28.06.2010 12:48, Gyscos wrote:
> But isn't it possible to query the right value for Display_Gamma from the OS? I
> mean, how does other viewer software do it?
That would be pretty cool. However, on a typical end user's system, the
OS doesn't know either, or even holds wrong information about it.
On more professional systems, where the user (or admin) cares about
display nonlinearity, you'll typically have calibrated displays with a
corresponding ICC profile; but that doesn't help either, because an ICC
profile is much more complex than a simple gamma curve, and POV-Ray isn't
color-profile-aware. Not yet.
> Also, about File_Gamma, it is used for the gamma-encoding that is applied when
> creating the file. When reading the file, it is decoded by the software, and
> then sent to the OS.
> I understand that, between the file being sent to the OS and the pixel being
> illuminated on the screen, there can be a gamma transformation, caused by the
> graphics card, system, screen, ... because of some non-linearity.
> Now, before that, how does the software decode the file?
> Does it do a reverse-encoding with the gamma value stored in the file? Or does
> it just send it linearly to the system?
For JPEG, BMP or the like, the software will essentially send the data
right to the display subsystem unchanged, expecting the display
subsystem's inherent non-linearity to take care of the gamma-decoding.
For PNG, the data will theoretically be gamma-decoded by the software,
then gamma pre-corrected by the same software to fit the display
subsystem's inherent gamma (having the same effect as gamma-encoding the
data again, though possibly with a different gamma value), and finally
the data is passed to the display subsystem.
In practice, the initial gamma-decoding and subsequent gamma
pre-correction might be performed in one single step, taking advantage
of the fact that (x^A)^B = x^(A*B).
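As a sketch of that single-step shortcut (the two gamma values here are illustrative assumptions, not anyone's actual settings):

```python
FILE_GAMMA = 2.2     # assumed gamma the file was encoded with
DISPLAY_GAMMA = 1.8  # assumed inherent gamma of the display subsystem

def two_step(v):
    linear = v ** FILE_GAMMA                 # gamma-decode the stored value
    return linear ** (1.0 / DISPLAY_GAMMA)   # then pre-correct for the display

def one_step(v):
    # (v^A)^B == v^(A*B), so decode and pre-correction collapse
    # into a single exponent of FILE_GAMMA / DISPLAY_GAMMA
    return v ** (FILE_GAMMA / DISPLAY_GAMMA)

# When FILE_GAMMA happens to equal DISPLAY_GAMMA, the combined
# exponent is 1 - the "nothing to do" special case.
```

Both functions give the same result for any stored value, up to floating-point rounding.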
> I understood that Display_Gamma was mainly there to compensate for the
> system+GC+screen non-linearity.
>
> The question is: what should the File_Gamma value compensate for? The software
> decoding? The system+GC+screen non-linearity? Or the eye's perception?
In a sense, File_Gamma is intended to compensate for the nonlinearity in
the eye's dynamic range, i.e. the fact that the eye can tell two dark
colors apart more easily than two bright ones.
In the PNG file format, that is its only role.
In JPEG, BMP or the like, File_Gamma doubles as gamma pre-correction
for the intended output display's inherent nonlinearity, because for
those file formats this is customary.
---
> Yeah, I just noticed that too...
> The thing is, I saw in the doc that the default value for intervals was 10:
> http://www.povray.org/documentation/view/3.6.1/421/
That's the default for method 1.
> So when I first saw the results weren't good, I changed it to 20. But now if I
> set it to 10, it works fine too, so I doubt the default value really is 10...
> Maybe 1 indeed.
>
> Also, still unrelated: when I use method 3 with a very low variance
> (0.000000001), a very high confidence (0.99999999), and "samples 10, 100",
> shouldn't it use more than 10 if the result isn't perfect? I don't see why
> "samples 30, 100" works better there...
>
>
The defaults for method 3 are:
intervals 1
samples 10
There is no second samples value used; if one is provided, it's just
ignored.
Alain
---
About media scattering, thanks... I just saw the default method was 3 and not
1... >_<
About gamma: File_Gamma is supposed to compensate for the eye's non-linear
perception. But why do we need to do this?... Shouldn't the eye do it itself?
I mean, if POV-Ray computes the values as they would be in the real world, then
the eye should perceive them the same way, right? The eye perceives the
screen non-linearly just as it perceives the outside world non-linearly,
doesn't it?
The brightness POV-Ray computes is linearly proportional to the amount of
photonic energy the surface receives, right? So a gamma transformation is
needed between this value and the brain for it to look like what we would see
in real life. But if the eye already makes this transformation, why do we also
do it on the computer side?
I mean, for real photos, do we apply a gamma correction?
I don't see why we should bother with how the eye perceives things, since it
will apply whatever gamma correction it wants to both the screen and the real
world, given the same brightness values... Am I wrong? (Since we do use gamma,
I probably am, I just don't know where...)
---
On 28.06.2010 23:17, Gyscos wrote:
> About gamma: File_Gamma is supposed to compensate for the eye's non-linear
> perception. But why do we need to do this?... Shouldn't the eye do it itself?
It's not about the non-linear perception /per se/, but about the eye's
non-linearity in how well it can tell apart different brightness levels:
for the same absolute brightness difference, the human eye can tell two
dark colors apart more easily than two bright colors.
As a consequence, color banding due to limited bit depth is much more of
a problem in dark areas than in bright ones; with a straightforward
linear encoding, a bit depth of 8 would be more than enough for the
bright colors, while still being insufficient for the dark colors.
This is circumvented by using a non-linear encoding ("gamma encoding")
in the image files, so that the discrete brightness levels that can be
encoded are /perceptually/ distributed evenly among both dark and bright
colors. For instance, the brightness levels encoded with values 16 and
17 can be told apart about as easily as the brightness levels encoded
with values 239 and 240, or those encoded with values 127 and 128.
This gamma encoding is what File_Gamma controls.
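A quick sketch of why that matters for 8-bit storage; the gamma value of 2.2 and the two sample brightness levels below are purely illustrative assumptions:

```python
GAMMA = 2.2  # illustrative encoding gamma

def quantize_linear(x):
    # naive linear 8-bit encoding of a [0, 1] brightness level
    return round(x * 255)

def quantize_gamma(x):
    # gamma-encode first, then quantize to 8 bits
    return round((x ** (1.0 / GAMMA)) * 255)

a, b = 0.001, 0.0015  # two distinct dark linear brightness levels
# Linear encoding collapses both to the same 8-bit code (banding),
# while gamma encoding assigns them distinct codes, preserving dark detail.
```

The gamma curve "spends" more of the 256 codes on the dark end, which is exactly the perceptually even distribution described above.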
Of course, if a file is gamma-encoded this way, that very same encoding
must be reversed one way or another. The traditional way was to use
a gamma encoding that matched the intended target system's display
gamma, so that essentially the gamma decoding would be performed by the
hardware; or, seen from a different perspective, the gamma encoding
would serve as gamma pre-correction for the target display.
The modern way, on the other hand - as used e.g. in the PNG file format
- is to (theoretically) de-couple the two concepts of gamma encoding and
gamma pre-correction: in a first step, the viewing software will
gamma-decode the image to reconstruct the linear brightness values, and
in a second step gamma pre-correct these linear brightness values to
compensate for the display's non-linearity. (In practice both steps may
be combined, and this /may/ even result in a "nothing to do" operation,
but that's just a special case now, and software can no longer take
that for granted.)
---
So the whole gamma thing is just a transitional state to store the
important values efficiently? Like gramophones, where some frequencies are
amplified on the track and then de-amplified during playback to compensate
for a loss of precision... :D Yay, I finally get why people invented it in
the first place...
Thank you very much, and sorry again for being so slow! :)
---
But then, in theory, this gamma value only affects the transition; it
shouldn't change the final image (except for these precision improvements),
right?
So whatever File_Gamma I set, the correctly decoded image should have pretty
much the same brightness?
---
On 29.06.2010 11:39, Gyscos wrote:
> So the whole gamma thing is just a transitional state to store the
> important values efficiently? Like gramophones, where some frequencies are
> amplified on the track and then de-amplified during playback to compensate
> for a loss of precision... :D Yay, I finally get why people invented it in
> the first place...
Yes, as far as File_Gamma goes (i.e. gamma /encoding/), that's exactly
what's happening. Nice analogy.
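For what it's worth, the round trip can be sketched in a few lines; ignoring quantization (which is exactly where the precision benefit lives), the decoded brightness is independent of the gamma chosen - the function names are mine, for illustration only:

```python
def encode(linear, file_gamma):
    # gamma-encode a linear brightness value for storage
    return linear ** (1.0 / file_gamma)

def decode(stored, file_gamma):
    # reverse the encoding on readback
    return stored ** file_gamma

x = 0.25  # arbitrary linear brightness level
for fg in (1.0, 1.8, 2.2):
    # whatever gamma was used, decoding recovers the same value
    assert abs(decode(encode(x, fg), fg) - x) < 1e-12
```

Only once the encoded values are rounded to a limited bit depth does the choice of gamma matter, by deciding where the rounding error ends up.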