POV-Ray : Newsgroups : povray.programming : Speeding render times
From: Bald Eagle
Date: 28 Aug 2013 10:15:00
Message: <web.521e05a715e88f6edd2ebc560@news.povray.org>
OK, I had an idea (it happens frequently, and they're all worth what you pay for
them) [and I do realize that those in the know may look at this and say, 'well
that _sounds_ great, posterior-haberdash, but coding that would be a nightmare,
besides the fact that it just doesn't WORK that way...].

It seems like there are a lot of trivial, easy-to-do things that significantly
slow render times, and there are a few 'standard' ways to avoid these pitfalls.

If a light source is inside a light-tight object, it seems that there might be
some way to recognize this fact with something along the lines of a bounding
box.  Turned around, perhaps the same could be done with the camera, for
instance, if the camera were looking at the inside of a hollow sphere, and there
were multiple light sources outside said sphere...
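Something in that spirit can already be approximated at parse time in the scene language, since the camera location is known when the file is parsed. A minimal sketch, assuming a made-up `Room` object and camera position purely for illustration:

```pov
// Sketch: skip an interior light when the camera is outside the
// light-tight room, using the existing inside() function at parse time.
#declare Room = box { <-5, 0, -5>, <5, 4, 5> }
#declare CamPos = <0, 2, -20>;   // camera location, outside the room

camera { location CamPos look_at <0, 2, 0> }

#if (inside(Room, CamPos))
  // only parsed when the camera could actually see the interior
  light_source { <0, 3.5, 0> color rgb 1 }
#end

object { Room hollow pigment { color rgb 0.9 } }
```

Of course this only handles the simple case where the camera's position alone decides visibility; reflections or transparency would still defeat it, which is presumably why it would be a user-invoked option.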
Sometimes there are multiple light sources in a scene, yet the user knows that
any light coming from them would not significantly impact the rest of the
scene, either because of their distance from the camera, or because the visible
scene lies outside the fade distance of the light source, so there seems to be
no good reason to take those light sources into account when rendering.
Perhaps the same approach could be applied to such lights. Or one could have
"light-camera" blobs, where a light source only gets recognized when the
defined radii of the camera and the light source intersect.  Or just a
simple "proximity light", where the light source is only taken into account when
the vector length between the camera and the light is less than a certain
distance.  Or the option to user-define a bounding box for a light source.
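The "proximity light" variant, at least, can be sketched with existing directives; the positions and cutoff below are invented for illustration:

```pov
// Sketch of a "proximity light": only emit the light_source when the
// camera-to-light distance is under a chosen cutoff.
#declare CamPos   = <0, 2, -10>;
#declare LightPos = <60, 30, 80>;
#declare Cutoff   = 50;          // ignore lights farther away than this

camera { location CamPos look_at <0, 1, 0> }

#if (vlength(LightPos - CamPos) < Cutoff)
  light_source { LightPos color rgb 1 }
#end
```

The appeal of a built-in version would be doing this automatically, without the user having to duplicate the camera position in a variable.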

I often see folks with scenes where only a few components actually change (the
parts "actively" being worked on) while the rest stays fixed between renders.
Would it be possible to define "layers", some of which would render and others
wouldn't?
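A crude user-side version of layers is possible today with boolean flags and `#if`; everything here (flag names, geometry) is made up as a sketch:

```pov
// Sketch of user-managed "layers": flags decide which groups of
// objects get parsed at all.
#declare ShowBackground = off;   // layer that is already finished
#declare ShowActive     = on;    // layer currently being worked on

#if (ShowBackground)
  plane { y, 0 pigment { checker rgb 0, rgb 1 } }
#end
#if (ShowActive)
  sphere { <0, 1, 0>, 1 pigment { color rgb <1, 0, 0> } }
#end
```

What this can't do, and what a real layer feature presumably would, is preserve the finished layer's pixels rather than simply leaving them out.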

That depends on, and leads me to, the next fantasy feature: an inter-frame
storage buffer of sorts.  I've seen people discuss this whole "saving radiosity
data" business (which, I'll admit, I have no idea how to do, since I haven't
progressed to using that feature), and a built-in function, or an include file,
that saves certain data between frames seems like it would greatly benefit the
heavy users of such a tool.
Perhaps a scheme where POV-Ray could pre-render a scene, and then objects could
be tagged with no_render (like no_object, no_reflection, no_shadow) and thus be
excluded from future renders.

Coupling that with the rendering-in-layers idea, perhaps it would be possible to
render the initial "background" layers, store the information, and then simply
reload it before rendering the "active" layer...

In a WIP, I was alerted to the effect of multiple CSG operations on render time.
Would it be possible to display a warning, something like "Object has an
'excessive' number of CSG tests"?  Or a way to break down the contribution of
each object to the render time by separately displaying how many tests each one
contributes to the overall render?  #(;)display_object_tests ?
Perhaps a "reflective object, line XXX" warning.
Anything that would help the user identify and analyze time-eating elements of
their scene.
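In the meantime, the closest a user can get is emitting their own parse-time markers with `#debug`, so at least the expensive sections of a scene are identifiable in the message stream. A sketch, with an invented loop count and label (actual per-object render-time test counts would need support from the engine itself):

```pov
// Sketch: manual parse-time markers via #debug. The name and count
// are invented for illustration.
#declare BladeCount = 5000;
#debug concat("Parsing ", str(BladeCount, 0, 0), " CSG grass blades\n")
```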

Clearly some of these features would be user-invoked rather than default
behaviour of the rendering engine.

Just throwing out grist for the mind-mill, hoping some of it may benefit somebody.

