POV-Ray now supports dithering for the produced image. Since the output
image will always be full-color, the effect of dithering will be
extremely subtle, and the exact dithering method isn't really all that
important; still, I was wondering whether gamma is being taken into
account in the dithering process.
As we all know by now (from the lengthy discussions about the subject
of gamma correction), a raw RGB color of (128,128,128) does *not* look
the same in terms of overall brightness as half of the pixels being black
and the other half white. In fact, there's usually a very pronounced
difference in brightness: most monitors have a gamma of about 2.2, so a
raw value of 128 displays at roughly (128/255)^2.2, i.e. about 22% of
full brightness, while the 50/50 black/white mix averages 50%. (This is
exactly why gamma correction has to be applied.)
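As a quick back-of-the-envelope check (a minimal sketch; the 2.2
exponent is just the typical monitor gamma, nothing POV-Ray-specific):

    #include <cstdio>
    #include <cmath>

    int main()
    {
        const double gamma = 2.2; // assumed typical monitor gamma

        // Display brightness of a raw pixel value of 128:
        double gray = std::pow(128.0 / 255.0, gamma);   // ~0.218

        // Average display brightness of a 50/50 black/white mix:
        double mix = 0.5 * 0.0 + 0.5 * 1.0;             // 0.5

        std::printf("gray: %.3f  mix: %.3f\n", gray, mix);
    }

So the half-black, half-white mix looks more than twice as bright as
the solid gray it's supposed to represent.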
Assume we are dithering a full-color (or full-grayscale) image into
a two-color image using a palette of pure black and pure white, and
assume that the image consists solely of the (8-bit) pixel values
(128,128,128). Most dithering algorithms out there will naively produce
a result where about 50% of the pixels will be black and the others white.
As we know, this will *not* produce a result that looks like the original
in terms of brightness.
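For concreteness, here's a minimal sketch of what I mean by "naive":
a simple 1-D error-diffusion dither working directly on the raw 8-bit
values, with no gamma handling anywhere (this is just an illustration,
not any particular program's algorithm):

    #include <cstdio>

    int main()
    {
        const int width = 1000;
        double error = 0.0;
        int white = 0;

        for (int x = 0; x < width; ++x)
        {
            double value = 128.0 + error;   // raw value plus diffused error
            double out = (value >= 128.0) ? 255.0 : 0.0;
            error = value - out;            // diffuse the full error forward
            if (out > 0.0) ++white;
        }

        std::printf("white pixels: %d of %d\n", white, width);  // ~50%
    }

It ends up with about 50% white pixels, which (per the arithmetic above)
displays considerably brighter than the original gray.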
A more competent dithering algorithm would allow the user to specify
a gamma value. Taking it into account is rather simple: first apply the
gamma correction to both the input image data and the palette (i.e.
convert them to linear light), then perform the dithering as usual, and
then apply the inverse gamma correction to the result. (Basically, if I
understand correctly, the dithering is performed in linear light rather
than in the gamma-encoded space the raw values live in.) This results
in correct dithering that takes the specified gamma into account.
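And a matching sketch of that gamma-aware variant, for the same
black/white palette (the palette endpoints 0 and 1 are fixed points of
the gamma curve, so only the input actually needs converting here):

    #include <cstdio>
    #include <cmath>

    int main()
    {
        const double gamma = 2.2;   // assumed user-specified gamma
        const int width = 1000;

        // Linearize the input value once; the black/white palette
        // entries stay at 0 and 1 after linearization:
        double target = std::pow(128.0 / 255.0, gamma);   // ~0.218

        double error = 0.0;
        int white = 0;

        for (int x = 0; x < width; ++x)
        {
            double value = target + error;
            double out = (value >= 0.5) ? 1.0 : 0.0;  // nearest palette entry
            error = value - out;                      // diffuse in linear light
            if (out > 0.0) ++white;
        }

        // ~22% white pixels, matching the display brightness of raw 128:
        std::printf("white pixels: %d of %d\n", white, width);
    }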
Of course a two-color palette is going to show the difference very
radically, while with a full-color image the difference will probably
be so subtle as to be indistinguishable. Still, I was wondering how
POV-Ray is doing it, and whether it takes gamma into account or not.
--
- Warp