Dithering is used by imaging software to make a 256-color image look
like it has more color depth than it actually has; in some images, the
dithered result can look nearly identical to the 24-bit original.
The software does this by determining an optimal palette of 256 colors,
then using a "fuzz" algorithm to map each 24-bit color value to a
nearby palette entry.
The effect is visible under close inspection, but invisible to the
casual viewer.
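The "fuzz" step is usually error diffusion: the rounding error at each
pixel is pushed onto its not-yet-visited neighbours. A minimal sketch
(illustrative Python, single grey channel for simplicity; the function
name and parameters are made up, not any particular program's code):

```python
# A rough sketch of Floyd-Steinberg error diffusion on one grey channel;
# the 256-colour palette case is the same idea with the nearest-level
# step replaced by a nearest-palette-entry lookup.
def dither_grey(pixels, levels=4):
    """pixels: list of rows of floats in [0, 1]; returns quantized rows."""
    h, w = len(pixels), len(pixels[0])
    px = [row[:] for row in pixels]      # working copy that accumulates error
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = round(old * (levels - 1)) / (levels - 1)  # nearest level
            out[y][x] = new
            err = old - new
            # distribute the error over not-yet-visited neighbours
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out
```

With levels=2 the output is pure black and white, yet a flat 50% grey
input comes out as a near-even mix of the two, so the local average
stays close to 0.5.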
So, my question is this: has anyone done or seen any research on
dithering a 48-bit image to 24-bit?
My own efforts haven't been successful, but I seriously want to pursue
this because I think it could substantially improve the perceived
quality of 24-bit images, particularly those with a lot of gray shadowy
areas.
The ultimate effect is that a 24-bit image could look nearly as good as
a 48-bit one, and most importantly, the improvement would be visible on
commodity hardware.
-Ryan
> Dithering is used by imaging software to make a 256-color image look
> like it has more color depth than it actually has. In some images,
> the 256-color image can look exactly the same as a 24-bit original.
Not exactly. Dithering trades spatial resolution for colour resolution.
This is why it's popular in printing (where spatial resolution far
exceeds what the eye needs) but less so on displays (where spatial
resolution is scarce).
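The trade can be made concrete with a toy ordered dither (a made-up
Python illustration, not anyone's actual code): a 2x2 Bayer matrix turns
every pixel into pure black or white, but each 2x2 block of output can
average to one of five grey levels, i.e. colour resolution bought with
spatial resolution.

```python
# 2x2 Bayer threshold matrix: each output pixel is 1-bit, but a block
# of four thresholds lets the block average take 5 values (0/4 .. 4/4).
BAYER2 = [[0.0, 0.5],
          [0.75, 0.25]]

def ordered_dither(pixels):
    """pixels: rows of floats in [0, 1]; returns rows of 0/1."""
    return [[1 if pixels[y][x] > BAYER2[y % 2][x % 2] else 0
             for x in range(len(pixels[0]))]
            for y in range(len(pixels))]
```

A flat 0.5 input yields exactly two white pixels per 2x2 block, 0.9
yields four, 0.1 yields one.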
> So, my question is this: has anyone done or seen any research on
> dithering a 48-bit image to 24-bit?
Who uses 48-bit images? I was under the impression that the next step up
from 24-bit colour was 24 bits per channel, as used for film.
> The ultimate effect is that a 24-bit image could look nearly as good
> as a 48-bit one, and most importantly, the improvement would be
> visible on commodity hardware.
I think 24-bit images have more colour resolution than you or I do, and
that commodity hardware doesn't have enough spatial resolution to spare.
If you want more contrast in your pictures, you'd be better off IMO
starting from YUV images and dithering the Y channel to get more
luminance resolution.
Daniel
--
Now as he walked by the sea of Galilee, he saw Simon and Andrew his
brother casting a spam into the net: for they were phishers. And Jesus
said unto them, Come ye after me, and I will make you to become phishers
of men. And straightway they forsook their nets, and followed him.
Daniel Hulme <pho### [at] isticorg> wrote:
> I think 24bit images have more colour resolution than you or I do
Have you tried making grayscale images with a 24-bit resolution?
Have you counted how many shades of grey you have available at
that bit depth? Answer: 256.
256 shades of grey may be enough for most applications, but not
for all. You start seeing the limitation in some cases.
--
- Warp
> > I think 24bit images have more colour resolution than you or I do
> 256 shades of grey may be enough for most applications, but not
> for all. You start seeing the limitation in some cases.
Human eyes can see far fewer than 2^24 colours. It is just that the
representation we use is inefficient. We have more luminance resolution
than chrominance resolution, but as you say, RGB does not reflect this.
That is why I suggested using a different colour basis like YUV rather
than simply pumping up the bits.
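For reference, the standard-definition (BT.601-style) RGB/YUV
relationship looks roughly like this; the exact scale factors vary
between YUV variants, so treat this as a sketch:

```python
# BT.601-style RGB <-> YUV conversion on values in [0, 1]; U and V are
# scaled colour-difference signals centred on 0. The luma weights
# 0.299/0.587/0.114 sum to 1, so grey inputs map straight to Y.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # scaled B-Y difference
    v = 0.877 * (r - y)   # scaled R-Y difference
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

Grey inputs map to U = V = 0, so any extra bits spent on Y go entirely
into luminance, where the eye is most sensitive.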
Daniel
--
Now as he walked by the sea of Galilee, he saw Simon and Andrew his
brother casting a spam into the net: for they were phishers. And Jesus
said unto them, Come ye after me, and I will make you to become phishers
of men. And straightway they forsook their nets, and followed him.
Daniel Hulme wrote:
> Human eyes can see far fewer than 2^24 colours. It is just that the
> representation we use is inefficient. We have more luminance resolution
> than chrominance resolution, but as you say, RGB does not reflect this.
> That is why I suggested using a different colour basis like YUV rather
> than simply pumping up the bits.
YUV would be great, but commodity hardware always uses 24-bit RGB for
the final output.
So... the idea is to make 24-bit RGB images look as good as possible.
The 256-shades-of-gray-even-at-24-bit-RGB problem is the most obvious
symptom. A dithering algorithm working from a higher bit depth, or
perhaps directly from POV-Ray's internal floating-point color, could
compensate very effectively.
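The simplest version of that compensation is to add up to one
quantization step of noise to the float channel before rounding, so a
value sitting between two 8-bit levels comes out as a mixture of both
rather than banding. A hypothetical sketch (illustrative, not POV-Ray
source):

```python
import random

# Quantize a floating-point channel value in [0, 1] to 8 bits, adding a
# uniform dither of one quantization step before truncating. Averaged
# over many pixels this preserves sub-8-bit detail that a plain round()
# would flatten into visible banding.
def quantize_dithered(value, rng=random.random):
    scaled = value * 255.0
    d = int(scaled + rng())   # floor(x + uniform[0,1)) is unbiased
    return max(0, min(255, d))
```

A value exactly between levels 100 and 101 then quantizes to each about
half the time, so the average over an area reconstructs the in-between
shade.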
-Ryan
Ryan Lamansky wrote:
> Daniel Hulme wrote:
>> Human eyes can see far fewer than 2^24 colours. It is just that the
>> representation we use is inefficient. We have more luminance resolution
>> than chrominance resolution, but as you say, RGB does not reflect this.
>> That is why I suggested using a different colour basis like YUV rather
>> than simply pumping up the bits.
>
> YUV would be great, but commodity hardware always uses 24-bit RGB for
> the final output.
>
> So... the idea is to make 24-bit RGB images look as good as possible.
> The 256-shades-of-gray-even-at-24-bit-RGB problem is the most obvious
> way the issue can be seen. Some dithering algorithm from a higher
> BPP-level or perhaps directly from POV's internal floating point color
> could very effectively compensate.
>
> -Ryan
So why not calculate YUV within POV and convert to RGB for the image?
--------------
David Wallace
TenArbor Consulting
"Just In Time Cash"
www.tenarbor.com
1-866-572-CASH
David Wallace wrote:
> So why not calculate YUV within POV and convert to RGB for the image?
POV-Ray already uses RGB internally; switching to YUV would require a
rewrite of the entire coloring and lighting parts of the engine, and the
gains would be questionable.
Internally, POV-Ray uses 32-bit floating point for each color channel,
which goes a long way toward making up for the shortcomings of RGB.
The meaty issue is how to perform dithering while downsampling the raw
image data to something that looks optimal on 24-bit hardware. As
discussed earlier, this is where big improvements in perceived final
image quality can be gained for minimal effort.
-Ryan
Warp <war### [at] tagpovrayorg> wrote:
> Have you tried making grayscale images with a 24-bit resolution?
> Have you counted how many shades of grey you have available at
> that bit depth? Answer: 256.
>
> 256 shades of grey may be enough for most applications, but not
> for all. You start seeing the limitation in some cases.
Especially when there is a little bit of color: instead of stepping one
grey level at a time, the image makes separate 1-bit R, G, and B steps.
This is very visible, but easy to solve: add a little noise to your
texture, making sure the noise features are smaller than a pixel in the
final image.
(It may be a bit slower, because of the more complex texture and the
extra work done in anti-aliasing, but the result is very good.)
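The separate per-channel stepping is easy to reproduce: quantizing a
slightly tinted grey ramp to 8 bits per channel produces far more than
256 small colour jumps, because R, G and B cross their rounding
thresholds at different points. A small illustration (made-up Python,
not a POV-Ray scene; the tint values are arbitrary):

```python
# Quantize a near-grey ramp per channel and collect the distinct output
# colours in order. A pure grey ramp gives exactly 256 steps; a tinted
# one gives many more, smaller, colour-shifting steps.
def quantized_ramp(tint=(1.00, 0.98, 0.96), n=1024):
    steps = []
    prev = None
    for i in range(n):
        t = i / (n - 1)
        q = tuple(round(t * c * 255) for c in tint)  # per-channel quantize
        if q != prev:
            steps.append(q)
            prev = q
    return steps
```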
jaap.