  Re: Largest POV image?  
From: Warp
Date: 22 Oct 2009 11:21:12
Message: <4ae07867@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> > In theory, POV-Ray needs to store (width*2) pixels.

  Is that so even if you don't use antialiasing?

> > libpng may need to store
> > a bunch more rows at once while compressing.

> libpng only needs the current and previous row to run the pixel filter. 
> I have no idea what the DEFLATE compressor needs.

  Compression algorithms almost invariably work through a buffer (a
sliding window) of the data, and this buffer is often surprisingly small,
typically some tens of kilobytes. (This means that most compression
algorithms will *not* find and compress repetitions that are gigabytes
apart, for the simple reason that the compression routine has to use only
a limited amount of memory and stay reasonably fast.)
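
  For example, Python's zlib module wraps the same DEFLATE code that
libpng uses, and DEFLATE's sliding window is at most 32 kB. A quick test
(nothing to do with POV-Ray itself, just an illustration) shows the
effect of that window directly:

    import os
    import zlib

    near = os.urandom(16 * 1024)      # 16 kB of random (incompressible) data
    far  = os.urandom(1024 * 1024)    # 1 MB of random data

    # The second copy starts 16 kB after the first, inside the 32 kB
    # window, so DEFLATE finds the repetition; output stays around 16 kB.
    print(len(zlib.compress(near + near)))

    # The second copy starts 1 MB after the first, far outside the
    # window, so the repetition is missed; output is around 2 MB.
    print(len(zlib.compress(far + far)))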

  Some modern algorithms use larger buffers (several megabytes), but even
they don't achieve enormously better compression ratios. There seems to be
a point of diminishing returns beyond which enlarging the compression
buffer no longer improves the compression of typical data significantly,
while slowing the compression down more than it's worth.

  The problem with compressing exabytes of data is not the compression
algorithm itself, but the file format into which the compressed data is
stored. (If the data is compressed as a stream, there is in principle no
limit to how much data you can compress. However, if the file format has
headers or other fields expressing the amount of compressed or
uncompressed data, you may run into their limits.)
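
  As a concrete example of such a limit (again using Python only for
illustration): the gzip container stores the uncompressed length in a
32-bit trailer field, so for anything past 4 GB only the size modulo
2^32 survives:

    import gzip
    import struct

    blob = gzip.compress(b"x" * 100000)

    # The last 4 bytes of a gzip member are ISIZE: the uncompressed
    # length stored modulo 2**32, i.e. a 32-bit field.
    isize = struct.unpack("<I", blob[-4:])[0]
    print(isize)   # 100000 here, but anything past 4 GB would wrap around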

> I note that to store (2^31 - 1) x (2^31 - 1) pixels, where each pixel 
> requires exactly 4 bytes, requires about 18 exabytes of storage. (!!)

  Depends on what those pixels contain, and how smart your compression
algorithm is. If the image is completely filled with black pixels, a smart
compression algorithm could in principle squeeze it down to some tens of
bytes. (Real compression algorithms, however, usually fail to do this
because of their limited buffers: even when the input consists of a single
repeated byte value, the most naive algorithms achieve surprisingly modest
compression ratios, while smarter ones take better advantage of the
redundancy.)
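
  A quick test with zlib illustrates this: DEFLATE can only emit a
back-reference covering at most 258 bytes at a time, so even a solid
block of identical bytes shrinks by a factor of roughly a thousand rather
than collapsing to a few bytes:

    import zlib

    black = b"\x00" * 1000000        # one million identical bytes

    # One back-reference per (at most) 258 repeated bytes, so even this
    # trivially redundant input only shrinks to something on the order
    # of a kilobyte: a ratio of about 1000:1, not "tens of bytes".
    print(len(zlib.compress(black, 9)))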

  Of course, if your image actually contains something meaningful, the
best you can hope for (losslessly) is compressing it to something like
1/10th of the original size.

  I don't think many file systems even support files which are exabytes
large.
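
  For what it's worth, the quoted 18-exabyte figure checks out; a quick
back-of-the-envelope calculation (Python again):

    pixels = (2**31 - 1) ** 2      # about 4.6 * 10**18 pixels
    size = pixels * 4              # 4 bytes per pixel
    print(size)                    # 18446744056529682436 bytes
    print(size / 10**18)           # about 18.4 exabytes (~16 binary exabytes)

That is just a hair under 2^64 bytes, i.e. right at the limit of what a
64-bit file offset can address.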

-- 
                                                          - Warp

