Wasn't it Jan Walzer who wrote:
>> However, one algorithm can compress in *average* better than another.
>define "average" ... as you probably know, there are different kinds of
>measurements ...
>
>If you think about the standard [ (x1+x2+x3)/3 ] average, then I wouldn't
>swear on this for an infinite set of sources to compress ...
>
>Maybe it _CAN_ be true for a countable, closed set ...
>(as for all possible images, where it could be computable) but I'm still not
>sure 'bout this ...
For lossless compression, there are good reasons for believing that the
*overall* average compression should be none whatsoever.
Suppose we compressed all possible images of a given size and colour
depth losslessly. Each image is height*width pixels of colour-depth bits,
so the number of different images is 2^(height*width*colour-depth). If
our compression is lossless, it must give a different compressed result
for each of the input image files, i.e. 2^(height*width*colour-depth)
different compressed files. But there are fewer than
2^(height*width*colour-depth) distinct bit strings shorter than
height*width*colour-depth bits, so for the output files to all be
different, their average length must be at least (very nearly)
height*width*colour-depth bits - which is what we started with.
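The counting above can be sketched numerically. This is a minimal
illustration (not from the original post): for n-bit inputs, sum up how
many distinct strings of each shorter length exist, and note that they
cannot cover all 2**n inputs, so a lossless map cannot shrink every
input.

```python
def counting_argument(n):
    """Return (number of n-bit inputs, number of strings shorter than n bits)."""
    inputs = 2 ** n                                   # distinct n-bit files
    shorter_outputs = sum(2 ** k for k in range(n))   # = 2**n - 1, one fewer
    return inputs, shorter_outputs

inputs, shorter = counting_argument(8)
# 256 distinct 8-bit inputs, but only 255 strings of length 0..7 bits,
# so at least one input must map to an output of 8 bits or more.
```

Since the shortfall (2**n vs. 2**n - 1) holds at every size, averaged
over all possible inputs a lossless compressor gains nothing.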
--
Mike Williams
Gentleman of Leisure