Having been away from POV since version 2-ish, I came back to it recently and
have been tinkering with an image. The image consists of a multi-floored
building with lots of intricate balconies that I was trying to replicate down
both sides of a street. The entire lot was constructed with nested #while loops,
and pretty quickly I'd blown the 3GB memory limit on my Windows box.
Now I can (and will) reduce the primitive count of the main object that is being
replicated, but that is just delaying the inevitable. If I increase the length
of the street by one or two more copies I'll be right back up against the memory
limit.
I thought, perhaps somewhat naively, that when you #declare an object and then
use it multiple times, you don't actually recreate the entire set of objects
within it for each instance. Surely you'd just apply the inverse of the
transformation applied to the instance of the object to the ray and then use the
object declaration for the intersection tests. If that were the case, then
cloning a large object hierarchy should only cost the memory involved in
storing the inverse transformation, the top-level bounding volume in local
coordinates and whatever pointers link the object into the list of objects; in
all, significantly less than reproducing a huge object hierarchy repeatedly.
To investigate this I drew up a quick test file:
// ---------------------------------------------------------------
// Persistence Of Vision raytracer version 3.5 sample file.
#include "colors.inc"

#declare UseDeclaration = 1;
#declare BallDiameter = 1.0;
#declare BlockSize = 10;
#declare RepeatSize = 5;

// Camera is reasonably irrelevant; we're interested in total memory
// storage for the scene.
camera { location <-10,10,-10> direction z look_at <0,0,0> }

// Create a cube of spheres
#macro SphereBlock(SB_BallDiameter,SB_BlockSize)
  union
  {
    #local lx = 0;
    #while (lx < SB_BlockSize)
      #local ly = 0;
      #while (ly < SB_BlockSize)
        #local lz = 0;
        #while (lz < SB_BlockSize)
          sphere {
            <SB_BallDiameter*lx, SB_BallDiameter*ly, SB_BallDiameter*lz>,
            SB_BallDiameter/2
          }
          #local lz = lz + 1;
        #end // lz
        #local ly = ly + 1;
      #end // ly
      #local lx = lx + 1;
    #end // lx
  }
#end // SphereBlock

// Declare a cube of spheres for replication
#declare SphereBlockDec = object { SphereBlock(BallDiameter,BlockSize) };

// Instantiate 25 of the cubes of spheres
object
{
  union
  {
    #declare rx = 0;
    #while (rx < RepeatSize)
      #declare ry = 0;
      #while (ry < RepeatSize)
        #ifdef (UseDeclaration)
          object {
            SphereBlockDec
            translate <BlockSize*BallDiameter*rx, BlockSize*BallDiameter*ry, 0>
            texture { pigment { colour Red } finish { ambient 0.5 } }
          }
        #else
          object {
            SphereBlock(BallDiameter,BlockSize)
            translate <BlockSize*BallDiameter*rx, BlockSize*BallDiameter*ry, 0>
            texture { pigment { colour Green } finish { ambient 0.5 } }
          }
        #end
        #declare ry = ry + 1;
      #end // ry
      #declare rx = rx + 1;
    #end // rx
  }
}
// ---------------------------------------------------------------
The UseDeclaration declaration near the top of the file is intended to control
the behaviour of the test. If it is defined, the scene reuses the #declared
object, which should instantiate 25 groups of 1000 spheres; otherwise it invokes
the macro directly, creating all 25,000 spheres individually.
In both cases the number of finite objects is given as 25,000 and the peak
memory as 11,898,018 bytes (about 476 bytes per sphere either way). Both are
fairly coarse measures, but I read this as saying that the system has done
exactly the same thing in both cases, i.e. created 25,000 spheres.
My question is: is it possible to "clone" a complex object hierarchy so that
multiple copies can be utilised without significant memory overheads? If so,
how? Trawling through the documentation didn't show up anything obvious.
Thanks
Brian
On 08/09/2010 10:59, Techdir wrote:
>
> My question is: is it possible to "clone" a complex object hierarchy so that
> multiple copies can be utilised without significant memory overheads? If so,
> how? Trawling through the documentation didn't show up anything obvious.
>
Current answer: no.
Duplication at low memory cost is done so far only for meshes.
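For example, here is a minimal sketch (the mesh, the values and the camera are
made up, not taken from the scene above): a #declared mesh2 instanced many
times. Copies of a declared mesh reference the same vertex and face data, so
each extra instance costs little more than a transform and a bounding box.
// ---------------------------------------------------------------
// "Panel" stands in for a detailed model that has been converted to a mesh.
#declare Panel =
  mesh2 {
    vertex_vectors { 4, <0,0,0>, <1,0,0>, <1,1,0>, <0,1,0> }
    face_indices { 2, <0,1,2>, <0,2,3> }
  };

camera { location <100,30,-150> look_at <100,0,0> }
light_source { <0,200,-200>, colour rgb 1 }

// 100 instances; the bulk mesh data exists only once in memory.
#declare i = 0;
#while (i < 100)
  object { Panel pigment { colour rgb <1,0,0> } translate <2*i, 0, 0> }
  #declare i = i + 1;
#end
// ---------------------------------------------------------------
Only the mesh's bulk geometry is shared; each instance still carries its own
transform and texture.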
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.<br/>
Q: What is the most annoying thing on usenet and in e-mail?
Le_Forgeron <lef### [at] freefr> wrote:
> Duplication at low memory cost is done so far only for meshes.
Actually there are a few other primitives where reference counting (rather
than deep-copying) is also used, such as blob and bicubic_patch, but with
those it's usually not such a big deal as with meshes.
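By way of illustration, a parallel sketch (a fragment to drop into a scene, with
made-up values): a #declared blob reused a few times, which per the above does
not deep-copy its components.
// ---------------------------------------------------------------
#declare Droplet =
  blob {
    threshold 0.6
    sphere { <0,0,0>, 1.0, 1.0 }
    sphere { <0.8,0,0>, 1.0, 1.0 }
  };

object { Droplet pigment { colour rgb <0,1,0> } }
object { Droplet pigment { colour rgb <0,1,0> } rotate 30*y translate <3,0,0> }
// ---------------------------------------------------------------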
A "reference object" feature has been suggested, but probably won't be
implemented in the near future. http://bugs.povray.org/task/87
--
- Warp
On 08.09.2010 12:55, Le_Forgeron wrote:
>>
>> My question is: is it possible to "clone" a complex object hierarchy so that
>> multiple copies can be utilised without significant memory overheads? If so,
>> how? Trawling through the documentation didn't show up anything obvious.
>>
>
> Current answer: no.
> Duplication at low memory cost is done so far only for meshes.
... and blobs. (And IIRC one or two other such "bulk" objects.)
However, even for those objects there is some duplicated overhead, and
only the actual "bulk" data is shared.
Another thing these have in common is a special internal bounding
mechanism. CSG objects don't have that - their children are currently
hooked up into the global bounding hierarchy instead, which requires
absolute coordinates and a one-to-one mapping of bounding boxes to
objects, and therefore doesn't allow for sharing of member data between
two CSG objects.
> A "reference object" feature has been suggested, but probably won't be
> implemented in the near future. http://bugs.povray.org/task/87
>
> --
> - Warp
Is that a matter of available time with the current developers, or technical
complexity?
I'm a coder by trade, I've got some free time, and I rather want this feature,
so I should get off my posterior and write a patch. On the complexity front I'm
fine with the manipulation of rays, bounding volumes and simple primitives, but
I'm probably not mathematically adept enough to come up with polynomial root
solvers and more involved stuff.
The description you linked to regarding the object_ref keyword mirrors what I'd
been thinking about over the last few days. I'll download the source, have a
look through and resume this thread in the programming group in a day or two's
time once I've got some sensible questions.
Brian
> Is that a matter of available time with the current developers, or technical
> complexity?
I tried to do this feature once, probably with the 3.5 source code. I
hit a wall. I don't remember what it was, exactly... I think it was
related to objects storing temporary intersection related data within
themselves, causing problems if a ray hit an object, and then reflected
and hit a different version of that same object. It wasn't an unsolvable
problem, but it was enough to make me give up.
This was a long time ago, though. I may have simply misunderstood the
code, or not been good enough at C at the time. I also don't know how
the code has changed since then - it's likely that changes to make it
thread safe in 3.7 would fix any issue like the one I described above.
- Slime
On 10.09.2010 06:05, Slime wrote:
> > Is that a matter of available time with the current developers, or technical
> > complexity?
>
> I tried to do this feature once, probably with the 3.5 source code. I
> hit a wall. I don't remember what it was, exactly... I think it was
> related to objects storing temporary intersection related data within
> themselves, causing problems if a ray hit an object, and then reflected
> and hit a different version of that same object. It wasn't an unsolvable
> problem, but it was enough to make me give up.
>
> This was a long time ago, though. I may have simply misunderstood the
> code, or not been good enough at C at the time. I also don't know how
> the code has changed since then - it's likely that changes to make it
> thread safe in 3.7 would fix any issue like the one I described above.
AFAIK all such data has been moved to the "Intersection" class by now.
On 09.09.2010 08:57, Techdir wrote:
>> A "reference object" feature has been suggested, but probably won't be
>> implemented in the near future. http://bugs.povray.org/task/87
>>
>> --
>> - Warp
>
> Is that a matter of available time with the current developers, or technical
> complexity?
Pretty much both actually, and then some.
Top priority ATM is to complete v3.70 for a release proper, which has
two consequences:
- Most of the developers dedicate their available time to fixing bugs
and re-implementing stuff that had been temporarily disabled for the
transition to multi-threaded rendering.
- The developers are quite reluctant to include features with a high
risk of breaking stuff in a way that would be difficult to track down or
even notice.
A "reference object" feature is guesstimated to take considerable
effort, and also to dig deep into the bowels of POV-Ray, where unexpected
side effects may be plentiful.
Of course, the sooner we can get v3.70 out of the door, the sooner we
might be heading for a v3.71 or v4.0 that might be aiming for new
features again - so maybe you'd like to invest a bit of time and effort
to help with v3.70 ;-).