> My theory is that the set of aesthetically pleasing images which look good on
> properly calibrated monitors will all have a similar histogram.
It surely depends on the content of the image (skiing scene, sunny
beach, overcast fields, moon at night)? Did you pick a few random
images (e.g. from Google or Flickr) that look good and compare their
histograms? My expectation is that they would be quite different.
> I would guess that when you write an algorithm for a digital camera to chose
> exposure time, you'd just leave the shutter open long enough to hit some target
> RGB value.
Indeed, usually the camera tries to get the average pixel value to
something like 17%. On more expensive cameras you can select the region
over which this average is done (entire frame, center weighted, spot,
etc). This works well enough most of the time, but a serious
photographer will use manual exposure and look at the histogram. A good
example is snow: auto exposure usually gives pictures that look way too
dark (I assume because the camera's "17% rule" expects the sky to be
way brighter than the ground, and in snow it often isn't).
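To illustrate the idea, here is a minimal sketch of average metering,
not any particular camera's algorithm: assuming a linear sensor
response, the exposure-time multiplier needed to bring the frame's
mean to the target is just target/mean. The function name and the 17%
target are illustrative:

```python
import numpy as np

def exposure_multiplier(frame, target=0.17):
    """Factor to multiply the exposure time by so that the mean
    pixel value of a normalized (0-1) frame hits the target level.
    Assumes a linear sensor response."""
    return target / frame.mean()

# a frame averaging 8.5% gray needs roughly double the exposure
frame = np.full((4, 4), 0.085)
print(exposure_multiplier(frame))
```

Center-weighted or spot metering would just compute the same mean
over a weighted region instead of the whole frame.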
Usually you can't just take the range of pixel brightnesses and map it
to 0-255: a small number of very dark and very bright pixels will
stretch the scale so that most of the image lies between 100 and 140
or so and looks very washed out overall.
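The usual workaround is to clip a small fraction of pixels at each end
before stretching, so a handful of outliers can't compress the rest of
the range. A sketch of such a percentile-based stretch (the 1%/99%
thresholds are illustrative, not a standard):

```python
import numpy as np

def percentile_stretch(image, lo=1, hi=99):
    """Contrast-stretch to 0-255 using the lo-th and hi-th
    percentiles as black and white points, so that a few extreme
    pixels don't dominate the mapping; values outside that range
    are clipped."""
    low, high = np.percentile(image, [lo, hi])
    out = (image.astype(float) - low) / (high - low)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

Mapping the raw min/max instead would leave most of the image in a
narrow middle band whenever a few pixels sit near 0 or 255.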