POV-Ray : Newsgroups : povray.off-topic : Euclidean infinite detail
  Euclidean infinite detail (Message 15 to 24 of 24)
From: clipka
Subject: Re: Euclidean infinite detail
Date: 29 Oct 2016 02:08:16
Message: <58143cd0$1@news.povray.org>
Am 28.10.2016 um 18:44 schrieb Bald Eagle:

> I know we've touched on this topic, and you've even trimmed down the memory
> footprint of the spheres - can you give me a better idea of the memory
> difference?  I thought you had previously said there wasn't much of one, but
> perhaps I misunderstood exactly what you were saying.
> 
> I guess I just have trouble seeing how this
> 
>   blob {
>     threshold 0.6
>       sphere { <.75, 0, 0>, 1, 1 }
>       sphere { <-.375, .64952, 0>, 1, 1 }
>       sphere { <-.375, -.64952, 0>, 1, 1 }
>     }
> 
> uses less memory than
> 
>       sphere { <.75, 0, 0>, 1 }
>       sphere { <-.375, .64952, 0>, 1 }
>       sphere { <-.375, -.64952, 0>, 1 }

Each primitive has the following fields (data sizes given for 64-bit
Windows; data consumption may vary depending on operating system and/or
CPU architecture):

    - a pointer to the VMT(*) (8 bytes)
    - a type field (4 bytes)
    - a pointer to a texture (8 bytes overhead)
    - a pointer to an interior texture (8 bytes overhead)
    - a shared pointer to an interior (16 bytes overhead)
    - a list of bounded_by objects (32 bytes overhead)
    - a list of clipped_by objects (32 bytes overhead)
    - a list of light_group objects (32 bytes overhead)
    - a bounding box (24 bytes)
    - a pointer to a transformation matrix (8 bytes overhead)
    - a photons density field (4 bytes)
    - a radiosity importance setting (16 bytes)
    - a set of various flags (4 bytes)

(*VMT = Virtual Method Table, an artifact of many object-oriented
language implementations, serving to support a feature called polymorphism.)

Including a few more bytes of padding to satisfy data type alignment
constraints, this currently adds up to 208 bytes on a Windows machine.

Most of the pointers and lists will be left empty in your examples, so
will "only" occupy the aforementioned overhead; a notable exception is
the texture: Each and every primitive will reference a texture, and for
technical reasons it will have its own personal copy thereof (I'm still
searching for a sane way to avoid this memory hog). Even if you do not
explicitly specify a texture, it will be a copy of the default texture,
and weigh in at 112 bytes (minimum) on a Windows machine.

Another notable exception is the interior, which will be forced to a
"neutral" interior data block if not explicitly specified. Such a
neutral interior weighs in at another 96 bytes.

Add to that the sphere-specific data:

    - a center (24 bytes)
    - a radius (8 bytes)
    - an additional flag (1 byte)

With some more padding for data type alignment, this weighs in at
another 40 bytes of data.

Another 8 bytes are required for a pointer to actually hook the sphere
up into the scene, and another 40 bytes worth of data required to
represent it in the scene-level bounding hierarchy.

So for each and every full-fledged sphere, we start with a baseline of a
whopping 504 bytes worth of data.
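The totals above can be double-checked with a quick back-of-the-envelope sum (field sizes as listed above for 64-bit Windows; this is a sketch of the arithmetic only, not of POV-Ray's actual C++ data structures):

```python
# Per-primitive base fields, in bytes, in the order listed above:
# VMT ptr, type, texture ptr, interior texture ptr, shared interior ptr,
# bounded_by, clipped_by, light_group lists, bbox, transform ptr,
# photons density, radiosity importance, flags.
common_fields = [8, 4, 8, 8, 16, 32, 32, 32, 24, 8, 4, 16, 4]
assert sum(common_fields) == 196   # padded up to 208
base = 208

# Sphere-specific fields: center, radius, extra flag.
sphere_fields = [24, 8, 1]
assert sum(sphere_fields) == 33    # padded up to 40

default_texture = 112              # private copy, even if never specified
neutral_interior = 96              # forced if not explicitly specified
scene_hook_pointer = 8
bounding_hierarchy_entry = 40

full_sphere = (base + default_texture + neutral_interior
               + 40 + scene_hook_pointer + bounding_hierarchy_entry)
print(full_sphere)  # 504
```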


Compare that to the requirements for a blob element(*):

    - a type field (2 bytes)
    - an index field (4 bytes)
    - an origin vector (24 bytes)
    - a cylinder length field (8 bytes)
    - a squared-radius field (8 bytes)
    - a triplet of coefficients (24 bytes)
    - a pointer to a texture (8 bytes overhead)
    - a pointer to a transformation matrix (8 bytes overhead)

Again including a few more bytes of alignment padding, this adds up to
just 88 bytes.

It is important to note that in contrast to primitives, no texture data
block is created for blob elements unless a texture is explicitly
specified, so we're only stuck with the pointer overhead.

Another 8 bytes are required for a pointer to actually hook the element
up into the blob, and another 48 bytes worth of data required to
represent it in the blob's internal bounding hierarchy.

Additional memory is required at run-time for temporary data, but this
is difficult to quantify since on one hand the corresponding data
structure is shared among all blobs, while on the other hand a separate
copy of the data structure is required for each thread. Effectively, the
largest blob in the scene requires another 104 bytes per blob element
per thread.

So if you have just one single blob in your scene, each of that blob's
elements weighs in at 144 bytes, plus 104 bytes per thread.
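The same bookkeeping for one spherical blob element (again only a sketch of the sums, not actual source) also shows where the 4-thread break-even point mentioned below comes from:

```python
# Per-blob-element fields, in bytes, in the order listed above:
# type, index, origin, cylinder length, radius^2, coefficients,
# texture ptr, transform ptr.
element_fields = [2, 4, 24, 8, 8, 24, 8, 8]
assert sum(element_fields) == 86   # padded up to 88

blob_hook_pointer = 8
blob_bounding_entry = 48
per_element = 88 + blob_hook_pointer + blob_bounding_entry
print(per_element)                 # 144 (vs. 504 for a genuine sphere)

# Per-thread temporary data, sized by the largest blob in the scene:
per_thread_temp = 104
threads = 4
print(per_element + per_thread_temp * threads)  # 560 -- already past 504
```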

(*NB these are internal elements; each spherical element corresponds to
one of these, while each cylindrical element actually requires three of
these: One for the cylindrical portion, and two more for the
hemispherical end portions.)


If it wasn't for the temporary data structure, this would boil down to:

    (A) 504 bytes per genuine sphere, vs.
    (B) 144 bytes per spherical blob element.

If you use the naive approach of shoving all your spherical blob
elements into one large blob primitive, that advantage is quickly eaten
up on multi-core systems, and completely lost at 4 or more threads (an
effect I hadn't considered previously); however, presuming the goal is
indeed to get spheres rather than blobby things, the total cost can be
minimized by splitting up your N blob elements among C*sqrt(N) blob
primitives (with C being a constant depending on the number of threads),
in which case the per-thread cost becomes negligible for large N.
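A rough cost model makes the sqrt(N) splitting rule plausible (a sketch under the assumption that each blob primitive carries roughly the same fixed cost as any full primitive, and that elements are split evenly; the constants are the ones derived above):

```python
import math

def blob_memory(N, B, threads, per_blob_fixed=504, per_elem=144, temp=104):
    """Rough total memory (bytes) for N spherical blob elements split
    evenly across B blob primitives. The per-thread temporary store is
    sized by the largest blob, i.e. ceil(N/B) elements, per thread."""
    largest = math.ceil(N / B)
    return B * per_blob_fixed + N * per_elem + threads * temp * largest

N, threads = 100_000, 8

# One huge blob: the per-thread temporary data dominates.
one_blob = blob_memory(N, 1, threads)

# Minimizing B*fixed + threads*temp*N/B over B gives B ~ sqrt(N):
B_opt = round(math.sqrt(threads * 104 * N / 504))
split = blob_memory(N, B_opt, threads)

print(one_blob, split, 504 * N)  # split beats both alternatives
```

With these numbers the split version needs roughly 15 MB, versus roughly 98 MB for one giant blob and roughly 50 MB for genuine spheres.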


From: Bald Eagle
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 12:20:00
Message: <web.58176e398df1bb22b488d9aa0@news.povray.org>
Well *** GOLL-LY *** !!!
Thanks for that 'little' look under the hood  :O

It's always interesting to see things from a different perspective and be
reminded that what one sees in a high-level language like SDL has no relation to
what takes place on the lower levels.

> the total cost can be
> minimized by splitting up your N blob elements among C*sqrt(N) blob
> primitives (with C being a constant depending on the number of threads),
> in which case the per-thread cost becomes negligible for large N.

This would be a nice bit of advanced user info to add to the documentation, and
perhaps even develop into a macro or something in the source / parser / message
stream.   Something along the lines of "You have >n spheres, consider .... blobs
..... to reduce memory usage...."


It's always educational here at news.povray.org   :)


From: clipka
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 12:42:53
Message: <5817748d@news.povray.org>
Am 31.10.2016 um 17:15 schrieb Bald Eagle:
> Well *** GOLL-LY *** !!!
> Thanks for that 'little' look under the hood  :O
> 
> It's always interesting to see things from a different perspective and be
> reminded that what one sees in a high-level language like SDL has no relation to
> what takes place on the lower levels.

As a matter of fact it's also interesting for me, because I rarely
examine such matters in such excessive detail. (My perfectionism tends
to reach its peak when communicating with other people.)

For example, as I had already hinted at, I had never before paid much
attention to the per-thread temporary data.

I'm sure there must be a way to significantly trim down that temporary
data store, at least for normal cases.

>> the total cost can be
>> minimized by splitting up your N blob elements among C*sqrt(N) blob
>> primitives (with C being a constant depending on the number of threads),
>> in which case the per-thread cost becomes negligible for large N.
> 
> This would be a nice bit of advanced user info to add to the documentation, and
> perhaps even develop into a macro or something in the source / parser / message
> stream.   Something along the lines of "You have >n spheres, consider .... blobs
> ..... to reduce memory usage...."

I don't think such a warning message would be a good idea; while using
blobs as a bulk sphere surrogate does reduce the memory footprint (if
done right), it certainly will be at the cost of significantly reduced
render speed.

Adding a corresponding section to the docs is another matter.


From: Mike Horvath
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 14:29:35
Message: <58178d8f@news.povray.org>
On 10/27/2016 9:54 PM, clipka wrote:
> Am 27.10.2016 um 20:51 schrieb Mike Horvath:
>
>> Also, maybe POV-Ray should support point clouds.
>
> You can always read point cloud data at parse time and convert it to a
> bunch of spheres (or a set of blob elements if you're low on memory).
>
> If you want something more complicated, then you'd first have to clarify
> what that something is; only then will it be possible to even discuss
> how that could be achieved, let alone whether it would be reasonable to
> hard-code into POV-Ray.
>

I was joking, mainly. I prefer the clean edges of primitives and meshes.

Mike


From: Nekar Xenos
Subject: Re: Euclidean infinite detail
Date: 1 Nov 2016 15:32:37
Message: <5818edd5$1@news.povray.org>
On 2016/10/27 10:58 AM, scott wrote:
> So do you remember Euclidean and their "infinite detail" engine? If not
> there are plenty of videos on YouTube like this one:
>
> https://www.youtube.com/watch?v=00gAbgBu8R4
>
> Anyway, it seems they recently released a few more videos and in one of
> the comments I found a link to their patent:
>
> http://tinyurl.com/eucli
>
> It's quite interesting actually. They store their point cloud in an
> oct-tree structure and walk down the tree until you get to a single
> point in the data or a single pixel on the screen. But, the clever bit
> is that once they've done the perspective transform on an oct-tree node,
> they look at the "w" coordinate and use that to decide if an
> orthographic projection would be less than 1 pixel different from a full
> perspective projection. If yes, then all the child nodes'
> screen-positions can be computed very fast by just taking mid-points of
> the parent's 2D coordinates, rather than doing full perspective
> projections.
>
> Still their claim of "unlimited detail" is misleading, but you can see
> how with the above system it would be straightforward to
> page-in/download nodes as you move around the scene. And with oct-trees,
> you only need a few levels to get huge numbers of points, eg 10 deep
> gets you a billion points.
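The projection shortcut described in the quoted post might be sketched roughly as follows (a speculative reading of the post only, not code from the patent; all names here are invented):

```python
# After perspective-projecting an oct-tree node's corners into clip
# space, check how much the w coordinates vary across the node. If
# treating w as constant would move every projected point by less than
# one pixel, the children's screen positions can be taken as cheap 2D
# midpoints of the parent's corners instead of doing full perspective
# divides for each child.

def can_use_midpoints(corners_clip, screen_w, screen_h):
    """corners_clip: list of (x, y, w) clip-space corner coordinates."""
    w_min = min(w for _, _, w in corners_clip)
    w_max = max(w for _, _, w in corners_clip)
    # Worst-case screen-space error from assuming a constant w:
    err_x = max(abs(x / w_min - x / w_max) for x, _, _ in corners_clip)
    err_y = max(abs(y / w_min - y / w_max) for _, y, _ in corners_clip)
    return err_x * screen_w / 2 < 1 and err_y * screen_h / 2 < 1

def midpoint(p, q):
    """Child screen position on the fast (orthographic) path."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# A distant node (nearly uniform w) qualifies; a nearby one does not.
print(can_use_midpoints([(1.0, 1.0, 100.0), (1.1, 1.1, 100.5)], 1920, 1080))
print(can_use_midpoints([(1.0, 1.0, 1.0), (1.1, 1.1, 2.0)], 1920, 1080))
```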

I saw this one recently:
https://youtu.be/F6MaZE9cU_c

I came to the conclusion that there is no real-time lighting. Maybe I'm 
wrong about that, but the overexposed look reminds me of games from the '90s.

-- 
________________________________________

-Nekar Xenos-


From: clipka
Subject: Re: Euclidean infinite detail
Date: 1 Nov 2016 18:12:33
Message: <58191351$1@news.povray.org>
Am 01.11.2016 um 20:32 schrieb Nekar Xenos:

> I saw this one recently:
> https://youtu.be/F6MaZE9cU_c
> 
> I came to the conclusion that there is no real-time lighting. Maybe I'm
> wrong about that, but the overexposed look reminds me of games from the
> '90s.

Not surprising, given that their approach is so different from the
well-trodden path of mesh-based rendering that they have to re-invent
each and every wheel virtually from scratch.


From: Nekar Xenos
Subject: Re: Euclidean infinite detail
Date: 1 Nov 2016 22:26:57
Message: <58194ef1$1@news.povray.org>
On 2016/11/02 12:12 AM, clipka wrote:
> Am 01.11.2016 um 20:32 schrieb Nekar Xenos:
>
>> I saw this one recently:
>> https://youtu.be/F6MaZE9cU_c
>>
>> I came to the conclusion that there is no real-time lighting. Maybe I'm
>> wrong about that, but the overexposed look reminds me of games from the
>> '90s.
>
> Not surprising, given that their approach is so different from the
> well-trodden path of mesh-based rendering that they have to re-invent
> each and every wheel virtually from scratch.
>

It would be nice if the reinvention of realtime lighting for their 
system led to a faster, more realistic realtime raytracing system. But I 
doubt it, because you don't have face directions any more, if I understand 
it correctly.

-- 
________________________________________

-Nekar Xenos-


From: Mike Horvath
Subject: Re: Euclidean infinite detail
Date: 2 Nov 2016 00:47:56
Message: <58196ffc$1@news.povray.org>
On 11/1/2016 3:32 PM, Nekar Xenos wrote:
> I saw this one recently:
> https://youtu.be/F6MaZE9cU_c

It's interesting that those "holograms" work for everyone in the room, 
including the cameraman videotaping the experience.

Mike


From: scott
Subject: Re: Euclidean infinite detail
Date: 2 Nov 2016 04:01:39
Message: <58199d63$1@news.povray.org>
> It would be nice if the reinvention of realtime lighting for their
> system led to a faster, more realistic realtime raytracing system. But I
> doubt it, because you don't have face directions any more, if I understand
> it correctly.

I think it mentioned normals somewhere in the patent. Maybe they are 
pre-computed by sampling surrounding points when they build the oct-tree.


From: scott
Subject: Re: Euclidean infinite detail
Date: 2 Nov 2016 04:04:43
Message: <58199e1b$1@news.povray.org>
>> I saw this one recently:
>> https://youtu.be/F6MaZE9cU_c
>
> It's interesting that those "holograms" work for everyone in the room,
> including the cameraman videotaping the experience.

Well we can't be sure it works for anyone other than the cameraman :-)



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.