Am 30.09.2014 13:49, schrieb Doctor John:
> On 29/09/14 18:45, Doctor John wrote:
>>
>> I strongly suspect that calculating render time would take as long as
>> the render itself - see Alan Turing's work for further details
>>
>> I'm sorry if that seems glib but I can't think of a better authority atm
>
> I've just remembered an even better authority: Knuth, The Art Of
> Computer Programming. IIRC Volume 1 deals with run-time.
I guess that in this case Turing is actually the one to turn to: I
suspect that the problem of figuring out the time it takes to render a
POV-Ray scene is in the same ballpark as the halting problem.

If we take parsing time into consideration, then this should be obvious,
as the POV-Ray scene description language is Turing-complete.
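To see why Turing-completeness bites here: an SDL #while loop can encode arbitrary computation at parse time, so predicting parse time means predicting when (or whether) such a loop halts. A minimal sketch in Python standing in for the SDL loop (the Collatz iteration is just a convenient example of a loop with no known running-time bound):

```python
# Python stand-in for an SDL parse-time loop such as
#   #while (n != 1) ... #end
# driven by the Collatz iteration; nobody knows how to bound its
# running time in general, so "how long will parsing take?" is
# already a hard question.

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 (conjectured, not proven, to halt)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 iterations for a seed as small as 27
```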

As for the rendering itself, with enough mirrored surfaces you can
surely set up scenes in which light rays follow chaotic paths (i.e.
minor changes in the ray's origin and direction can cause major changes
in the path travelled), and one of chaotic systems' inherent
characteristics is that they're not predictable.
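That "minor changes in, major changes out" behaviour can be illustrated without any actual ray tracing. A hedged sketch, using the chaotic logistic map purely as a stand-in for a ray's path: nudge the starting point by 10^-12 and the trajectories end up nowhere near each other.

```python
# Sensitive dependence on initial conditions, illustrated with the
# chaotic logistic map x -> 4x(1-x). This is a stand-in for a chaotic
# ray path, not actual ray tracing.

def iterate(x: float, n: int) -> float:
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

a = iterate(0.4, 50)
b = iterate(0.4 + 1e-12, 50)  # nudge the "ray origin" by 1e-12
print(abs(a - b))  # the tiny perturbation has blown up by many orders of magnitude
```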

If that's not enough, you can add a Mandelbrot or Julia fractal pattern
to the scene; these patterns have the inherent property that their
computing time fluctuates chaotically between different points on the
pattern, and also between different input parameters. If a
guesstimation of the render time doesn't break down elsewhere, it does here.
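The cost fluctuation is visible in the escape-time iteration itself. A small sketch (the sample points and iteration budget are arbitrary picks for illustration, not anything POV-Ray uses internally):

```python
# Escape-time iteration for the Mandelbrot set: the per-point cost
# ranges from a single iteration (far outside the set) to the full
# budget (inside it), and jumps erratically near the boundary.

def escape_time(c: complex, max_iter: int = 1000) -> int:
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter  # never escaped within the budget: treated as "inside"

print(escape_time(-2.5 + 0j))      # 0    -- escapes immediately
print(escape_time(0j))             # 1000 -- costs the full budget
print(escape_time(-0.75 + 0.05j))  # somewhere in between, near the boundary
```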

What Knuth might(*) tell you is the efficiency of the various
optimizations implemented in POV-Ray, and also guesstimates for
comparatively simple scenes; but the more complex a scene gets, the more
complex the formula for estimating the runtime will get, and the more
the error margins will add up, to the point where you have an answer
that consists of error margins only. (An answer like "1 hour, give or
take 5 days" doesn't really help, does it ;-))

(*) Actually, the old-school analysis of execution time is pretty
outdated anyway, as it doesn't account for modern hardware-based
optimizations, most notably caching. (Remember when /expensive/
mainboards had sockets to add /one/ level of cache? Now most CPUs have
three hierarchical levels of /inbuilt/ cache. Not to speak of virtual
memory - when viewed the other way round the concept might be considered
as using the hard disk as main memory, with the DRAM providing an
additional level of caching.) Besides, it was never meant to be a tool
to give an absolute guesstimate of execution time, but to compare the
relative execution time (or, more precisely, the "cost" in both
execution time and memory consumption) of different algorithms.
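To make that last point concrete, here's a sketch of the kind of question that analysis does answer well: relative operation counts for two algorithms on the same input (linear vs. binary search on sorted data), as opposed to wall-clock seconds on cached, pipelined hardware.

```python
# Knuth-style analysis compares algorithms by counting operations,
# not by predicting wall-clock time: comparisons made by linear
# vs. binary search when looking up the last element of a sorted list.

def linear_search(xs, target):
    """Return (index or -1, number of comparisons made)."""
    for i, x in enumerate(xs):
        if x == target:
            return i, i + 1
    return -1, len(xs)

def binary_search(xs, target):
    """Return (index or -1, number of comparisons made)."""
    lo, hi, comps = 0, len(xs) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comps += 1
        if xs[mid] == target:
            return mid, comps
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comps

xs = list(range(1_000_000))
print(linear_search(xs, 999_999)[1])  # 1000000 comparisons: O(n)
print(binary_search(xs, 999_999)[1])  # about 20 comparisons: O(log n)
```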