TC wrote:
> Why is gamma-correction correction applied to images in povray, anyway? And
> in 3.7 by default?
>
> As I understand it, gamma correction is correction that has to be applied to
> make pictures look like they "should" - to correct luminance.
Yes.
> However, in order to get the appearance right, you have to gamma-correct
> your monitor and printer, too. So, if we apply gamma-correction in povray,
> we still need additional correction on printer and monitor. So why have a
> correction in povray in the first place, if it is not really neccessary. We
> still need to correct gamma later. Besides, what does look "right" is a very
> subjective matter - it even changes with age.
No, what looks "right" can be quantified quite well: For instance, a
50%-intensity-grey should look exactly the same (when you squint your
eyes a bit, or otherwise "physically blur" what your display shows) as a
pattern of alternating black (0%) and white (100%) stripes.
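Assuming an idealized display with a pure 2.2 power-law response (real sRGB displays use a slightly different piecewise curve), this squint test can be put into numbers; a minimal sketch, not anything from POV-Ray's source:

```python
# Sketch: why a "50% intensity" grey is NOT pixel value 128 on a
# gamma-2.2 display (illustrative; assumes a pure power-law response).

GAMMA = 2.2

def encode(linear):
    """Gamma-encode a linear light intensity in [0, 1]."""
    return linear ** (1.0 / GAMMA)

def decode(encoded):
    """Light the display emits for an encoded value in [0, 1]."""
    return encoded ** GAMMA

# Blurred black/white stripes average to 50% *linear* intensity:
target = (0.0 + 1.0) / 2.0

# Pixel value that produces that intensity on a 2.2-gamma display:
pixel = round(encode(target) * 255)
print(pixel)                            # 186, not 128

# The "naive" mid-point value 128 emits far less light:
print(round(decode(128 / 255) * 100))   # ~22% intensity
```

So on a correctly set-up system, the grey that matches the blurred stripes is stored as roughly 186, not 128.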
> Is the "raw" povray output correct (in the sense of: does the output follow
> the laws of physics), or isn't it?
Sort of, yes. There are some features in it that don't /exactly/ follow
physics, but even those are designed to come as close as possible in the
/linear/ world.
> Does it >have< to be corrected?
Yes.
The 2.2 gamma curve is much closer to the dynamics of human perception
than linear color space; e.g. the difference between a 10%-intensity
grey and an 11%-intensity grey is perceived as far greater than between
a 90%-intensity grey and a 91%-intensity grey. Thus, for efficient
storage of image data, it makes sense to "compress" brighter colors and
instead "expand" darker colors, to make the most out of the bit depth.
This is called "gamma encoding".
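As a rough illustration of that compression (again assuming a pure 2.2 power law, not exact sRGB), one can count how many of an 8-bit channel's 256 code values land in the darkest tenth of the linear intensity range:

```python
# Sketch: how 2.2 gamma encoding redistributes 8-bit code values
# toward dark tones (pure power law; exact sRGB differs slightly).

GAMMA = 2.2

# Codes whose decoded intensity lies in the darkest 10% of the
# linear range:
dark_codes = sum(1 for v in range(256) if (v / 255) ** GAMMA <= 0.10)
print(dark_codes)   # 90 of 256 codes, roughly a third

# A linear encoding would spend only 26 of its 256 codes
# (values 0..25) on that same darkest tenth.
```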
For gamma-encoded images, a color depth of 8 bits per color channel is
just about enough to avoid visible color banding. With linear encoding,
on the other hand, a color depth of 12 bits per channel or more would be
needed to avoid banding artifacts in dark areas, while for brighter
regions some 6 bits per channel would suffice.
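Those bit-depth figures can be sanity-checked with a little arithmetic: compare the relative brightness jump between adjacent code values at a dark intensity (a rough proxy for banding visibility; the pure 2.2 power law and the specific numbers are my illustration, not from the post):

```python
# Sketch: relative brightness jump between adjacent code values at
# 1% intensity, a rough proxy for banding visibility (pure 2.2 power
# law assumed; the numbers are illustrative).

GAMMA = 2.2

def rel_step_linear(bits, intensity):
    """Relative jump to the next code in a linear encoding."""
    code = intensity * (2 ** bits - 1)
    return 1.0 / code

def rel_step_gamma(bits, intensity):
    """Relative jump to the next code in a 2.2-gamma encoding."""
    maxc = 2 ** bits - 1
    code = round(intensity ** (1.0 / GAMMA) * maxc)
    lo = (code / maxc) ** GAMMA
    hi = ((code + 1) / maxc) ** GAMMA
    return (hi - lo) / lo

print(rel_step_linear(8, 0.01))   # ~0.39: severe banding
print(rel_step_gamma(8, 0.01))    # ~0.07: far finer dark steps
print(rel_step_linear(12, 0.01))  # ~0.024: 12 linear bits catch up
```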
Obviously, this is not very efficient, and since high-performance RAM
suitable for graphics card frame buffers used to be very expensive (and
still is to some extent), while a few additional lines of software code
are cheap, graphics hardware almost invariably used gamma encoding for
the frame buffers. Not to mention that it also fit so well with the
inherent gamma of CRTs, allowing both the graphics adaptor and CRT
display electronics to be pretty simple.
Therefore, even today, output devices are /not/ calibrated to achieve
/linear/ behavior, but to achieve some /standard/ gamma (typically 2.2
on PC systems, though the sRGB output curve is becoming more frequent),
which is then made known to (or silently assumed by) the image
processing software running on the system. Most operating systems have
either an official or at least a de-facto standard for the gamma
encoding to be used for data submitted to the display subsystem.
Of course, the same storage-efficiency argument that applies to graphics
hardware benefits image file formats as well; for this reason, and
because image viewing software has a much easier job if all it needs to
do is shove the image file data into the display frame buffer, gamma
encoding (doubling as pre-correction, and often exclusively interpreted
as such) has been the de-facto standard for virtually all 8-bit image
file formats out in the wild, and is explicitly recommended for the most
common formats nowadays.
Therefore, if POV-Ray did /not/ perform gamma pre-correction for the
preview display, it would go against operating system conventions; and
if it did /not/ perform gamma encoding / pre-correction for the output
file, it would go against the de-facto and recommended standards of the
respective file format.
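The whole chain can be sketched in a few lines, assuming an idealized pure 2.2 display gamma (real sRGB uses a piecewise curve, and the function names below are mine, not POV-Ray's): the renderer pre-corrects, the viewer copies bytes verbatim, the display decodes, and the net result is linear.

```python
# Sketch of the display chain, assuming an idealized pure 2.2
# display gamma (function names are illustrative, not POV-Ray's).

GAMMA = 2.2

def file_output(linear):
    """Gamma pre-correction applied when writing the image file."""
    return round(linear ** (1.0 / GAMMA) * 255)

def display_emits(byte):
    """The monitor's fixed transfer curve, applied on display."""
    return (byte / 255) ** GAMMA

# The viewer just shoves file bytes into the frame buffer, so:
rendered = 0.18                       # linear intensity from the render
shown = display_emits(file_output(rendered))
print(round(shown, 3))                # 0.18: the chain cancels out
```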
Note that for some file formats, POV-Ray does /not/ apply gamma
correction, namely Radiance HDR and OpenEXR, as both formats are defined
to encode linear light intensity data (which they can do efficiently
because they use floating-point-like data formats).
> Or, better put, is the left or right side of your image "correct"?
As you can see (or /should/ see, unless your display and/or your image
viewing software is /totally/ screwed up), the left side shows a
checkered plane, but the reflection on the sphere does not. Can this
possibly be correct?
The right side may or may not look /perfectly/ correct, depending on
whether your display is perfectly calibrated or not, but if it is
/anywhere/ close to sanity, it /will/ look "much more correct" than the
left side (that's a physical fact, not just an assumption about your
personal visual perception).
The only thing that could mess up this "experiment" would be a system
that ignores both the color profile information and the gAMA chunk
present in the image file, /and/, instead of exhibiting some typical
random display gamma, happens to use linear color space. *Very*
unlikely, unless the person in charge of the system has no clue about
color management but thinks he/she does.
That, or you may be unable to squint your eyes; in that case, some
peculiarities about your visual perception might kick in and spoil it
all, but even that seems rather unlikely to me. As soon as you somehow
physically blur the image projected by the display (which is what
squinting your eyes does), the experiment /will/ work on every system
that is able to display PNG files at least /roughly/ right.