> IMO the problem should be solved in the exact opposite way: All pixel
> information should be universally the same (e.g. a value of 128 in an
> 8-bits-per-channel image means exactly half the brightness,
> not more, not less) and the OSes then correct it so that it will look
> like that in the target monitor. If the image wants an exactly-half-bright
> gray color, then it specifies 128,128,128 for that pixel and the OS then
> makes sure when showing that image that it will look half-bright on the
> monitor by whatever corrections are necessary to achieve that.
In theory this is a good idea, but the problem is that with only 8 bits and
a linear scale, dark areas will look rubbish. The eye is much more sensitive
to differences between dark shades than between bright ones, so unless you
have a 16+ bit or floating-point image, it makes a lot of sense to use a
non-linear scale.
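To make that concrete, here is a minimal Python sketch (mine, not anything
from the post) counting how many of the 256 8-bit codes land in the dark
range under a linear scale versus a non-linear one. It assumes the standard
sRGB transfer curve as the representative non-linear encoding, and takes
"dark" to mean the bottom 10% of linear brightness:

```python
# A minimal sketch, assuming the sRGB curve stands in for "a
# non-linear scale" and "dark" means the bottom 10% of linear light.

def srgb_decode(signal):
    """Map a non-linear sRGB signal in [0, 1] back to linear light."""
    if signal <= 0.04045:
        return signal / 12.92
    return ((signal + 0.055) / 1.055) ** 2.4

# Count the 8-bit codes that fall in the darkest 10% of linear light.
linear_codes = sum(1 for v in range(256) if v / 255 < 0.1)
srgb_codes = sum(1 for v in range(256) if srgb_decode(v / 255) < 0.1)

print(f"codes in darkest 10%: linear={linear_codes}, sRGB={srgb_codes}")
# Prints: codes in darkest 10%: linear=26, sRGB=90
```

Ninety of the 256 codes cover the darkest tenth under the sRGB curve,
versus only twenty-six on a linear scale, which is why a non-linear
encoding holds up at 8 bits where a linear one posterises the shadows.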
IMO it would be better to split "images" into two categories. In the first,
you want the exact pixel value shown on the monitor, so 50% in the image is
50% brightness, and so on. This would be used for web PNGs that have to
match CSS colours, and for diagrams that want to exploit "pure red", "pure
white" etc. on a monitor. The other type would be "photo"-style images,
where you want all viewers to see exactly the same colour. For these you
should use a proper physical colour space, such as Yuv, to specify pixel
colours. It is then up to the OS/application to translate those values into
the RGB sent to the monitor/printer/projector.
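As a rough illustration of that translation step, here is a sketch (again
mine, not from the post): it uses CIE XYZ as the device-independent space
standing in for Yuv, assumes the target is a standard sRGB monitor, and the
helper name `xyz_to_srgb` is hypothetical. A real OS or application would
use the monitor's actual colour profile instead of the fixed sRGB matrix:

```python
# A sketch under the assumptions above: CIE XYZ in, 8-bit sRGB out.

def xyz_to_srgb(x, y, z):
    """Convert a CIE XYZ colour (D65 white point) to 8-bit sRGB."""
    # Linear RGB via the standard XYZ-to-sRGB matrix.
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z

    def encode(c):
        c = min(max(c, 0.0), 1.0)  # clamp out-of-gamut values
        # Apply the sRGB transfer curve: the monitor expects a
        # non-linear signal, per the 8-bit argument above.
        if c <= 0.0031308:
            c = 12.92 * c
        else:
            c = 1.055 * c ** (1 / 2.4) - 0.055
        return round(c * 255)

    return encode(r), encode(g), encode(b)

# A neutral grey at exactly half the display's luminance (Y = 0.5):
print(xyz_to_srgb(0.475235, 0.5, 0.544415))
# about (188, 188, 188), not (128, 128, 128)
```

Note that a neutral colour at half the display's luminance comes out near
code 188, not 128: exactly the gap between "half brightness" and "half
signal value" that the quoted proposal glosses over.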