I was wondering if anyone (up to and including developers) knew any way to work
out how much CPU work/time would be needed to render an image based on its size
and render options. It seems like it must be possible to find out.
Thanks,
Ross
cloudyroo <nomail@nomail> wrote:
> I was wondering if anyone (up to and including developers) knew any way to work
> out how much CPU work/time would be needed to render an image based on its size
> and render options. It seems like it must be possible to find out.
It's impossible to find out in advance because it depends on a ton of
things.
--
- Warp
> It's impossible to find out in advance because it depends on a ton of
> things.
>  - Warp
But surely those things are just parameters YOU enter (error bounds,
radiosity, etc.) that can be factored into an equation or something. Are
there other factors I'm not aware of?
On 29/09/14 16:31, cloudyroo wrote:
> I was wondering if anyone (up to and including developers) knew any way to work
> out how much CPU work/time would be needed to render an image based on its size
> and render options. It seems like it must be possible to find out.
> Thanks,
> Ross
>
>
I strongly suspect that calculating render time would take as long as
the render itself - see Alan Turing's work for further details.
I'm sorry if that seems glib, but I can't think of a better authority atm.
John
--
Protect the Earth
It was not given to you by your parents
You hold it in trust for your children
On 29.09.2014 19:45, Doctor John wrote:
> On 29/09/14 16:31, cloudyroo wrote:
>> I was wondering if anyone (up to and including developers) knew any way to work
>> out how much CPU work/time would be needed to render an image based on its size
>> and render options. It seems like it must be possible to find out.
>> Thanks,
>> Ross
>
> I strongly suspect that calculating render time would take as long as
> the render itself - see Alan Turing's work for further details.
Absolutely.
A guesstimate can be made based on a smaller render of the image, but
that's about it.
On 29/09/2014 17:31, cloudyroo wrote:
> I was wondering if anyone (up to and including developers) knew any way to work
> out how much CPU work/time would be needed to render an image based on its size
> and render options. It seems like it must be possible to find out.
> Thanks,
> Ross
>
>
For the same options on the same scene, the render time should scale
linearly with each dimension (as long as the view remains the same)...
well, unless you hit a dark spot that was missed by the small-scale
picture, or you use something like radiosity and/or photons.
Of course, that's not taking into account the parse time, which is fixed
for a given scene.
So you already have an equation like:
cpu = parse + height*width*mystery
But mystery can be influenced by the content, placement and settings, if
not other things. And it might hide a bit of additional complexity too
(such as its own constant for photons, and an exploding factor for
anti-aliasing as details get bigger).
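For illustration, here is a minimal sketch of such a fit in Python (a
guess at how one might go about it, nothing official: the scene file
name, the sample resolutions and the assumption that 'povray' is on the
PATH are all made up; only the +I/+W/+H/-D options are real POV-Ray
switches):

import subprocess
import time

SCENE = "scene.pov"   # hypothetical scene file

def timed_render(width, height):
    # Render SCENE at the given resolution; return wall-clock seconds.
    start = time.time()
    subprocess.check_call(["povray", "+I" + SCENE,
                           "+W%d" % width, "+H%d" % height, "-D"])
    return time.time() - start

# Time a few small renders at the same aspect ratio.
points = [(w * h, timed_render(w, h))
          for (w, h) in [(80, 60), (160, 120), (320, 240)]]

# Least-squares fit of: cpu = parse + pixels * mystery
n = len(points)
sx = sum(p for p, t in points)
sy = sum(t for p, t in points)
sxx = sum(p * p for p, t in points)
sxy = sum(p * t for p, t in points)
mystery = (n * sxy - sx * sy) / (n * sxx - sx * sx)
parse = (sy - mystery * sx) / n   # fixed overhead, parse included

# Extrapolate to full resolution -- with all the caveats above.
print("estimated 1920x1080 render: %.0f s"
      % (parse + mystery * 1920 * 1080))

Whether the extrapolation survives radiosity, photons or anti-aliasing
is, as said above, another matter.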
Short of that, the best answer so far is the one from Doctor John.
Of course, if you have a specific scene, with some specific settings (or
a set of them), you could use a profiling tool (such as
valgrind/callgrind) to get the cost of the rendering at the various
resolutions.
But such data would be useless for predicting a different scene with
accuracy.
Just collecting the valgrind/callgrind data will slow down the rendering
by about x50 to x200, and getting enough points to build a predictive
model is going to be... a test of your patience. And that's just
generating the data; you still have to collect it and find the
relations.
If you are lucky enough to have an Intel processor, Intel has a faster
profiling tool (which requires a kernel driver to be installed, if you
are on Linux) in its wonderful (but expensive) compiler/debugger suites.
Yet, it won't reduce the number of renders to run.
POV-Ray is not triangle-based; you cannot infer the render time of a
torus from that of a sphere.
--
IQ of crossposters with FU: 100 / (number of groups)
IQ of crossposters without FU: 100 / (1 + number of groups)
IQ of multiposters: 100 / ( (number of groups) * (number of groups))
On 2014-09-29 12:59, clipka wrote:
> A guesstimate can be made based on a smaller render of the image, but
> that's about it.
'You have a few objects with transparency and IOR, photons, media, focal
blur, micronormals...yeah, this is going to take a while.'
You could do a ballpark figure based on what percentage of the image the
IOR'ed object occupies, as to how fast it WON'T be going, at least... ;)
--
Tim Cook
http://empyrean.sjcook.com
On 29/09/14 18:45, Doctor John wrote:
>
> I strongly suspect that calculating render time would take as long as
> the render itself - see Alan Turing's work for further details.
>
> I'm sorry if that seems glib, but I can't think of a better authority atm.
I've just remembered an even better authority: Knuth, The Art of
Computer Programming. IIRC Volume 1 deals with run-time.
John
--
Protect the Earth
It was not given to you by your parents
You hold it in trust for your children
Doctor John <j.g### [at] gmailcom> wrote:
> I strongly suspect that calculating render time would take as long as
> the render itself - see Alan Turing's work for further details.
A rough estimate could be done by rendering a smaller version of the
image and multiplying the time accordingly.
(It's still possible the estimate could end up being way off, depending
on what exactly will appear in the full-size image.)
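For instance (with numbers made up for illustration): if a 160x120 test
render takes 15 seconds, the full 1600x1200 image has 100 times as many
pixels, so the estimate would be 100 * 15 s = 25 minutes, plus the parse
time, which doesn't scale with resolution.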
--
- Warp
On 30.09.2014 13:49, Doctor John wrote:
> On 29/09/14 18:45, Doctor John wrote:
>>
>> I strongly suspect that calculating render time would take as long as
>> the render itself - see Alan Turing's work for further details.
>>
>> I'm sorry if that seems glib, but I can't think of a better authority atm.
>
> I've just remembered an even better authority: Knuth, The Art Of
> Computer Programming. IIRC Volume 1 deals with run-time.
I guess that in this case Turing is actually the one to turn to: I
suspect that the problem of figuring out the time it takes to render a
POV-Ray scene is in the same ballpark as the halting problem.
If we take parsing time into consideration, then this should be obvious,
as the POV-Ray scene description language is Turing-complete.
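For instance, nothing stops you from writing a scene whose #while loop
iterates until some open-ended computation terminates - say, until a
Collatz sequence reaches 1, which nobody has proven must always happen.
The parse time of such a scene cannot be bounded in advance even in
principle.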
As for the rendering itself, with enough mirrored surfaces you can
surely set up scenes in which light rays follow chaotic paths (i.e.
minor changes in a ray's origin and direction can cause major changes
in the path travelled), and one of chaotic systems' inherent
characteristics is that they're not predictable.
If that's not enough, you can add a Mandelbrot or Julia fractal pattern
to the scene; these patterns have the inherent property that their
computing time fluctuates in a chaotic manner between different points
on the pattern, and also between different input parameters. If a
guesstimate of the render time doesn't break down elsewhere, it does here.
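To illustrate how wildly that can fluctuate, here is a quick
escape-time experiment in plain Python (nothing POV-Ray-specific; the
sample points are arbitrary):

def escape_time(c, max_iter=100000):
    # Classic Mandelbrot iteration: count steps until |z| exceeds 2.
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# Three points on the same vertical line, ever closer to the set:
# the iteration counts differ by orders of magnitude.
for c in (-0.75 + 0.1j, -0.75 + 0.01j, -0.75 + 0.001j):
    print(c, escape_time(c))

The closer a point sits to the set's boundary, the longer the loop
runs, and there is no cheap way of knowing that beforehand.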
What Knuth might(*) tell you is the efficiency of the various
optimizations implemented in POV-Ray, and also guesstimates for
comparatively simple scenes; but the more complex a scene gets, the more
complex the formula for estimating the runtime will get, and the more
the error margins will add up, to the point where you have an answer
that consists of error margins only. (An answer like "1 hour, give or
take 5 days" doesn't really help, does it ;-))
(*) Actually, the old-school analysis of execution time is pretty
outdated anyway, as it doesn't account for modern hardware-based
optimizations, most notably caching. (Remember when /expensive/
mainboards had sockets to add /one/ level of cache? Now most CPUs have
three hierarchical levels of /inbuilt/ cache. Not to speak of virtual
memory - when viewed the other way round the concept might be considered
as using the hard disk as main memory, with the DRAM providing an
additional level of caching.) Besides, it was never meant to be a tool
to give an absolute guesstimate of execution time, but to compare the
relative execution time (or, more precisely, the "cost" in both
execution time and memory consumption) of different algorithms.