POV-Ray : Newsgroups : povray.off-topic : Euclidean infinite detail
Euclidean infinite detail (Message 11 to 20 of 24)
From: scott
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 10:21:24
Message: <58135ee4$1@news.povray.org>
> If I understand what you said. Voxel data can be or is a subset of point
> cloud data.

Yes, voxel data could just be thought of as point cloud data where all 
the points are on a finite uniform grid (with no empty cells) within 
known limits. In which case it becomes more efficient to store the data 
as a list of values (colour, transparency, whatever) in some agreed 
order, rather than listing the coordinates of every point.

Point cloud data is totally arbitrary: you could have points 1mm apart 
in one area, but points metres apart in another. This is typical if you 
got the data from a laser scanner. The coordinates of each point could 
be anything (within the resolution of the number format you are using), 
so they are not on a grid or equally spaced at all.

Think in 2D, voxel data is like a bitmap image, whereas point cloud data 
is like a list of 2D coordinates (perhaps with associated information 
like colour as clipka mentioned).
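
To make the 2D analogy concrete, here is a minimal C++ sketch of the
two storage schemes (type and field names are purely illustrative):

  #include <cstdint>
  #include <vector>

  // Voxel data: the grid shape is known up front, so only the values
  // are stored, in an agreed order (here x varies fastest, then y,
  // then z).
  struct VoxelGrid {
      int nx, ny, nz;
      std::vector<std::uint8_t> density;  // nx*ny*nz values, no coords
      std::uint8_t at(int x, int y, int z) const {
          return density[(z * ny + y) * nx + x];  // position implied
      }                                           // by the index
  };

  // Point cloud data: every sample carries its own coordinates,
  // because the positions follow no grid and no fixed spacing.
  struct CloudPoint {
      double x, y, z;        // arbitrary position
      std::uint8_t r, g, b;  // optional associated data (colour etc.)
  };
  using PointCloud = std::vector<CloudPoint>;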

> So what is the problem with redefining coordinates in a df3 to points or
> small spheres instead of a cubical volume?

My guess is that all the algorithms used to render DF3s won't work with 
just an arbitrary list of point coordinates.



From: Stephen
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 11:09:56
Message: <58136a44$1@news.povray.org>
On 10/28/2016 3:21 PM, scott wrote:
>> If I understand what you said. Voxel data can be or is a subset of point
>> cloud data.
>
> Yes, voxel data could just be thought of as point cloud data where all
> the points are on a finite uniform grid (with no empty cells) within
> known limits. In which case it becomes more efficient to store the data
> as a list of values (colour, transparency, whatever) in some agreed
> order, rather than listing the coordinates of every point.
>
> Point cloud data is totally arbitrary: you could have points 1mm apart
> in one area, but points metres apart in another. This is typical if you
> got the data from a laser scanner. The coordinates of each point could
> be anything (within the resolution of the number format you are using),
> so they are not on a grid or equally spaced at all.
>
> Think in 2D, voxel data is like a bitmap image, whereas point cloud data
> is like a list of 2D coordinates (perhaps with associated information
> like colour as clipka mentioned).


Yes, but how is the list ordered if at all, at all?
>
>> So what is the problem with redefining coordinates in a df3 to points or
>> small spheres instead of a cubical volume?
>
> My guess is that all the algorithms used to render DF3s won't work with
> just an arbitrary list of point coordinates.
>


I never thought they would. But...
You could turn point cloud data into a DF3 by defining a resolution and 
using that to divide the cloud (magically, by hard sums) into averaged 
voxels.

Not that I am asking anyone to implement this.
'Just exploring.
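
For fellow explorers, the "hard sums" are mostly just binning and
averaging. A rough C++ sketch (the point format and the 8-bit output
are my assumptions; the DF3 layout, a 6-byte big-endian header
followed by x-fastest cell data, follows the POV-Ray docs):

  #include <cmath>
  #include <cstdint>
  #include <fstream>
  #include <vector>

  struct Point { double x, y, z; double value; };  // value in [0,1]

  void cloudToDf3(const std::vector<Point>& cloud,
                  double minX, double minY, double minZ,
                  double cellSize, int nx, int ny, int nz,
                  const char* path)
  {
      std::vector<double> sum(nx * ny * nz, 0.0);
      std::vector<int>    count(nx * ny * nz, 0);

      // Bin every point into its cell and accumulate for averaging.
      for (const Point& p : cloud) {
          int i = int(std::floor((p.x - minX) / cellSize));
          int j = int(std::floor((p.y - minY) / cellSize));
          int k = int(std::floor((p.z - minZ) / cellSize));
          if (i < 0 || i >= nx || j < 0 || j >= ny || k < 0 || k >= nz)
              continue;                         // outside the grid
          int cell = (k * ny + j) * nx + i;
          sum[cell] += p.value;
          ++count[cell];
      }

      std::ofstream out(path, std::ios::binary);
      auto put16 = [&](int v) {                 // 16-bit big-endian
          out.put(char((v >> 8) & 0xFF));
          out.put(char(v & 0xFF));
      };
      put16(nx); put16(ny); put16(nz);          // DF3 header

      // Empty cells stay at zero; occupied cells get the average.
      for (int cell = 0; cell < nx * ny * nz; ++cell) {
          double avg = count[cell] ? sum[cell] / count[cell] : 0.0;
          out.put(char(std::uint8_t(avg * 255.0 + 0.5)));
      }
  }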

-- 

Regards
     Stephen



From: Bald Eagle
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 12:45:01
Message: <web.581380818df1bb22b488d9aa0@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> On 27.10.2016 at 20:51, Mike Horvath wrote:
>
> > Also, maybe POV-Ray should support point clouds.
>
> You can always read point cloud data at parse time and convert it to a
> bunch of spheres (or a set of blob elements if you're low on memory).

I know we've touched on this topic, and you've even trimmed down the memory
footprint of the spheres - can you give me a better idea of the memory
difference?  I thought you had previously said there wasn't much of one, but
perhaps I misunderstood exactly what you were saying.

I guess I just have trouble seeing how this

  blob {
    threshold 0.6
      sphere { <.75, 0, 0>, 1, 1 }
      sphere { <-.375, .64952, 0>, 1, 1 }
      sphere { <-.375, -.64952, 0>, 1, 1 }
    }

uses less memory than

      sphere { <.75, 0, 0>, 1 }
      sphere { <-.375, .64952, 0>, 1 }
      sphere { <-.375, -.64952, 0>, 1 }



From: clipka
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 22:47:52
Message: <58140dd8$1@news.povray.org>
On 28.10.2016 at 17:09, Stephen wrote:

>> Think in 2D, voxel data is like a bitmap image, whereas point cloud data
>> is like a list of 2D coordinates
> 
> Yes, but how is the list ordered if at all, at all?

Exactly that: Not at all.

(There /may/ happen to be /some/ inherent ordering due to the way the
data is acquired, but you can't rely on that.)


> You could turn point cloud data into a DF3 by defining a resolution and
> using that to divide the cloud (magically, by hard sums) into averaged
> voxels.

Depending on what the point cloud actually represents, you /may/ be able
to use that approach.

If it is just a bare point cloud representing a surface, then there's
obviously nothing to be averaged.



From: clipka
Subject: Re: Euclidean infinite detail
Date: 29 Oct 2016 02:08:16
Message: <58143cd0$1@news.povray.org>
On 28.10.2016 at 18:44, Bald Eagle wrote:

> I know we've touched on this topic, and you've even trimmed down the memory
> footprint of the spheres - can you give me a better idea of the memory
> difference?  I thought you had previously said there wasn't much of one, but
> perhaps I misunderstood exactly what you were saying.
> 
> I guess I just have trouble seeing how this
> 
>   blob {
>     threshold 0.6
>       sphere { <.75, 0, 0>, 1, 1 }
>       sphere { <-.375, .64952, 0>, 1, 1 }
>       sphere { <-.375, -.64952, 0>, 1, 1 }
>     }
> 
> uses less memory than
> 
>       sphere { <.75, 0, 0>, 1 }
>       sphere { <-.375, .64952, 0>, 1 }
>       sphere { <-.375, -.64952, 0>, 1 }

Each primitive has the following fields (data sizes given for 64-bit
Windows; data consumption may vary depending on operating system and/or
CPU architecture):

    - a pointer to the VMT(*) (8 bytes)
    - a type field (4 bytes)
    - a pointer to a texture (8 bytes overhead)
    - a pointer to an interior texture (8 bytes overhead)
    - a shared pointer to an interior (16 bytes overhead)
    - a list of bounded_by objects (32 bytes overhead)
    - a list of clipped_by objects (32 bytes overhead)
    - a list of light_group objects (32 bytes overhead)
    - a bounding box (24 bytes)
    - a pointer to a transformation matrix (8 bytes overhead)
    - a photons density field (4 bytes)
    - a radiosity importance setting (16 bytes)
    - a set of various flags (4 bytes)

(*VMT = Virtual Method Table, an artifact of many object-oriented
language implementations, serving to support a feature called polymorphism.)

Including a few more bytes of padding to satisfy data type alignment
constraints, this currently adds up to 208 bytes on a Windows machine.

Most of the pointers and lists will be left empty in your examples, so
they will "only" occupy the aforementioned overhead; a notable exception
is the texture: each and every primitive will reference a texture, and
for technical reasons it will have its own personal copy thereof (I'm
still searching for a sane way to avoid this memory hog). Even if you do
not explicitly specify a texture, it will be a copy of the default
texture, and weigh in at 112 bytes (minimum) on a Windows machine.

Another notable exception is the interior, which will be forced to a
"neutral" interior data block if not explicitly specified. Such a
neutral interior weighs in at another 96 bytes.

Add to that the sphere-specific data:

    - a center (24 bytes)
    - a radius (8 bytes)
    - an additional flag (1 byte)

With some more padding for data type alignment, this weighs in at
another 40 bytes of data.

Another 8 bytes are required for a pointer to actually hook the sphere
up into the scene, and another 40 bytes worth of data required to
represent it in the scene-level bounding hierarchy.

So for each and every full-fledged sphere, we start with a baseline of a
whopping 504 bytes worth of data.
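
Summing the figures above:

    208 (primitive baseline) + 112 (default texture copy)
    + 96 (neutral interior) + 40 (sphere data) + 8 (scene pointer)
    + 40 (bounding hierarchy entry) = 504 bytes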


Compare that to the requirements for a blob element(*):

    - a type field (2 bytes)
    - an index field (4 bytes)
    - an origin vector (24 bytes)
    - a cylinder length field (8 bytes)
    - a squared-radius field (8 bytes)
    - a triplet of coefficients (24 bytes)
    - a pointer to a texture (8 bytes overhead)
    - a pointer to a transformation matrix (8 bytes overhead)

Again including a few more bytes of alignment padding, this adds up to
just 88 bytes.

It is important to note that in contrast to primitives, no texture data
block is created for blob elements unless a texture is explicitly
specified, so we're only stuck with the pointer overhead.

Another 8 bytes are required for a pointer to actually hook the element
up into the blob, and another 48 bytes worth of data required to
represent it in the blob's internal bounding hierarchy.

Additional memory is required at run-time for temporary data, but this
is difficult to quantify since on one hand the corresponding data
structure is shared among all blobs, while on the other hand a separate
copy of the data structure is required for each thread. Effectively, the
largest blob in the scene requires another 104 bytes per blob element
per thread.

So if you have just one single blob in your scene, each of that blob's
elements weighs in at 144 bytes, plus 104 bytes per thread.
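
That is:

    88 (element data) + 8 (blob pointer) + 48 (bounding hierarchy entry)
    = 144 bytes, plus 104 bytes of temporary data per thread.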

(*NB these are internal elements; each spherical element corresponds to
one of these, while each cylindrical element actually requires three of
these: One for the cylindrical portion, and two more for the
hemispherical end portions.)


If it weren't for the temporary data structure, this would boil down to:

    (A) 504 bytes per genuine sphere, vs.
    (B) 144 bytes per spherical blob element.

If you use the naive approach of shoving all your spherical blob
elements into one large blob primitive, that advantage is quickly eaten
up on multi-core systems, and completely lost at 4 or more threads (an
effect I hadn't considered previously); however, presuming the goal is
indeed to get spheres rather than blobby things, the total cost can be
minimized by splitting up your N blob elements among C*sqrt(N) blob
primitives (with C being a constant depending on the number of threads),
in which case the per-thread cost becomes negligible for large N.
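
For the curious, a back-of-the-envelope derivation of that square-root
rule (B stands in for the fixed overhead of one blob primitive, in the
ballpark of the ~500-byte primitive baseline; T is the thread count):
splitting N elements evenly among K blobs costs roughly

    mem(K) = B*K + 144*N + 104*T*N/K

bytes, which is minimal where the derivative B - 104*T*N/K^2 vanishes,
i.e. at

    K = sqrt(104*T/B) * sqrt(N) = C*sqrt(N), with C = sqrt(104*T/B)

so the "constant" C grows with the square root of the thread count.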



From: Bald Eagle
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 12:20:00
Message: <web.58176e398df1bb22b488d9aa0@news.povray.org>
Well *** GOLL-LY *** !!!
Thanks for that 'little' look under the hood  :O

It's always interesting to see things from a different perspective and be
reminded that what one sees in a high-level language like SDL has no relation to
what takes place on the lower levels.

> the total cost can be
> minimized by splitting up your N blob elements among C*sqrt(N) blob
> primitives (with C being a constant depending on the number of threads),
> in which case the per-thread cost becomes negligible for large N.

This would be a nice bit of advanced user info to add to the documentation, and
perhaps even develop into a macro or something in the source / parser / message
stream.   Something along the lines of "You have >n spheres, consider .... blobs
..... to reduce memory usage...."


It's always educational here at news.povray.org   :)



From: clipka
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 12:42:53
Message: <5817748d@news.povray.org>
On 31.10.2016 at 17:15, Bald Eagle wrote:
> Well *** GOLL-LY *** !!!
> Thanks for that 'little' look under the hood  :O
> 
> It's always interesting to see things from a different perspective and be
> reminded that what one sees in a high-level language like SDL has no relation to
> what takes place on the lower levels.

As a matter of fact it's also interesting for me, because I rarely
examine such matters in such excessive detail. (My perfectionism tends
to reach its peak when communicating with other people.)

For example, as I had already hinted at, I had never before paid much
attention to the per-thread temporary data.

I'm sure there must be a way to significantly trim down that temporary
data store, at least for normal cases.

>> the total cost can be
>> minimized by splitting up your N blob elements among C*sqrt(N) blob
>> primitives (with C being a constant depending on the number of threads),
>> in which case the per-thread cost becomes negligible for large N.
> 
> This would be a nice bit of advanced user info to add to the documentation, and
> perhaps even develop into a macro or something in the source / parser / message
> stream.   Something along the lines of "You have >n spheres, consider .... blobs
> ..... to reduce memory usage...."

I don't think such a warning message would be a good idea; while using
blobs as a bulk sphere surrogate does reduce the memory footprint (if
done right), it certainly will be at the cost of significantly reduced
render speed.

Adding a corresponding section to the docs is another matter.



From: Mike Horvath
Subject: Re: Euclidean infinite detail
Date: 31 Oct 2016 14:29:35
Message: <58178d8f@news.povray.org>
On 10/27/2016 9:54 PM, clipka wrote:
> On 27.10.2016 at 20:51, Mike Horvath wrote:
>
>> Also, maybe POV-Ray should support point clouds.
>
> You can always read point cloud data at parse time and convert it to a
> bunch of spheres (or a set of blob elements if you're low on memory).
>
> If you want something more complicated, then you'd first have to clarify
> what that something is; only then will it be possible to even discuss
> how that could be achieved, let alone whether it would be reasonable to
> hard-code into POV-Ray.
>

I was joking, mainly. I prefer the clean edges of primitives and meshes.

Mike



From: Nekar Xenos
Subject: Re: Euclidean infinite detail
Date: 1 Nov 2016 15:32:37
Message: <5818edd5$1@news.povray.org>
On 2016/10/27 10:58 AM, scott wrote:
> So do you remember Euclidean and their "infinite detail" engine? If not
> there are plenty of videos on YouTube like this one:
>
> https://www.youtube.com/watch?v=00gAbgBu8R4
>
> Anyway, it seems they recently released a few more videos and in one of
> the comments I found a link to their patent:
>
> http://tinyurl.com/eucli
>
> It's quite interesting actually. They store their point cloud in an
> oct-tree structure and walk down the tree until you get to a single
> point in the data or a single pixel on the screen. But, the clever bit
> is that once they've done the perspective transform on an oct-tree node,
> they look at the "w" coordinate and use that to decide if an
> orthographic projection would be less than 1 pixel different from a full
> perspective projection. If yes, then all the child nodes'
> screen-positions can be computed very fast by just taking mid-points of
> the parent's 2D coordinates, rather than doing full perspective
> projections.
>
> Still their claim of "unlimited detail" is misleading, but you can see
> how with the above system it would be straightforward to
> page-in/download nodes as you move around the scene. And with oct-trees,
> you only need a few levels to get huge numbers of points, eg 10 deep
> gets you a billion points.
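
That midpoint shortcut is easy to sketch in C++ (illustrative only;
the exact 1-pixel criterion in the patent isn't public, so the error
bound below is a guess):

  #include <algorithm>
  #include <array>
  #include <cmath>

  struct Vec2 { double x, y; };
  struct Vec4 { double x, y, z, w; };  // clip-space corner of a node

  // If w varies little across the node, x/w is nearly affine in x, so
  // an orthographic approximation (and hence plain 2D midpoints) is
  // safe. This bound is conservative, not the patent's actual test.
  bool orthoCloseEnough(const std::array<Vec4, 8>& corners,
                        double pixelsPerClipUnit)
  {
      double wMin = corners[0].w, wMax = corners[0].w, xyMax = 0.0;
      for (const Vec4& c : corners) {
          wMin  = std::min(wMin, c.w);
          wMax  = std::max(wMax, c.w);
          xyMax = std::max(xyMax,
                           std::max(std::fabs(c.x), std::fabs(c.y)));
      }
      // |x/w - x/w0| <= |x| * (wMax - wMin) / wMin^2 within the node.
      double maxError = xyMax * (wMax - wMin) / (wMin * wMin);
      return wMin > 0.0 && maxError * pixelsPerClipUnit < 1.0;
  }

  // Under that approximation, a child corner that bisects two parent
  // corners in 3D also bisects their projections in 2D, so a whole
  // subtree can be positioned without further perspective divides.
  Vec2 midpoint(const Vec2& a, const Vec2& b)
  {
      return { 0.5 * (a.x + b.x), 0.5 * (a.y + b.y) };
  }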

I saw this one recently:
https://youtu.be/F6MaZE9cU_c

I came to the conclusion that there is no real-time lighting. Maybe I'm 
wrong about that, but the over-exposed look reminds me of games from the 
'90s.

-- 
________________________________________

-Nekar Xenos-



From: clipka
Subject: Re: Euclidean infinite detail
Date: 1 Nov 2016 18:12:33
Message: <58191351$1@news.povray.org>
On 01.11.2016 at 20:32, Nekar Xenos wrote:

> I saw this one recently:
> https://youtu.be/F6MaZE9cU_c
> 
> I came to the conclusion that there is no real-time lighting. Maybe I'm
> wrong about that, but the over-exposed look reminds me of games from the
> '90s.

Not surprising, given that their approach is so different from the
well-trodden path of mesh-based rendering that they have to re-invent
each and every wheel virtually from scratch.



