POV-Ray : Newsgroups : povray.off-topic : Euclidean infinite detail
  Euclidean infinite detail (Message 1 to 10 of 24)
From: scott
Subject: Euclidean infinite detail
Date: 27 Oct 2016 04:58:54
Message: <5811c1ce$1@news.povray.org>
So do you remember Euclidean and their "infinite detail" engine? If not, 
there are plenty of videos on YouTube like this one:

https://www.youtube.com/watch?v=00gAbgBu8R4

Anyway, it seems they recently released a few more videos and in one of 
the comments I found a link to their patent:

http://tinyurl.com/eucli

It's quite interesting actually. They store their point cloud in an 
oct-tree structure and walk down the tree until they get to a single 
point in the data or a single pixel on the screen. But the clever bit 
is that once they've done the perspective transform on an oct-tree node, 
they look at the "w" coordinate and use that to decide whether an 
orthographic projection would be less than 1 pixel different from a full 
perspective projection. If so, then all the child nodes' 
screen positions can be computed very fast by just taking mid-points of 
the parent's 2D coordinates, rather than doing full perspective 
projections.
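The shortcut described above can be sketched in a few lines. This is only an illustration of the maths, not code from the patent: a single perspective divide stands in for the full matrix pipeline, and the node coordinates, focal length and 1-pixel threshold are made up.

```python
# Sketch of the mid-point shortcut: once a node is small/far enough,
# the mid-point of the parent's projected 2D corners is within a pixel
# of a full perspective projection of the 3D mid-point.

def project(p, f=500.0):
    """Full perspective projection: divide by depth (the 'w' here)."""
    x, y, z = p
    return (f * x / z, f * y / z)

def midpoint2d(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def midpoint3d(a, b):
    return tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))

# Two opposite corners of a small, distant oct-tree node.
near_corner = (10.0, 10.0, 1000.0)
far_corner  = (11.0, 11.0, 1001.0)

# Exact: project the 3D mid-point (a child node's centre).
exact = project(midpoint3d(near_corner, far_corner))

# Shortcut: mid-point of the parent's already-projected 2D corners.
approx = midpoint2d(project(near_corner), project(far_corner))

err = max(abs(exact[0] - approx[0]), abs(exact[1] - approx[1]))
# For a node this small relative to its depth, the error is far below
# one pixel, so the test passes and the shortcut is safe to use.
print(err < 1.0)
```

The payoff is that the trick applies recursively: once a node passes the test, its entire subtree can be splatted with 2D mid-point arithmetic alone, with no further perspective divides.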

Still, their claim of "unlimited detail" is misleading, but you can see 
how, with the above system, it would be straightforward to 
page-in/download nodes as you move around the scene. And with oct-trees 
you only need a few levels to get huge numbers of points, e.g. 10 levels 
deep gets you 8^10, i.e. about a billion, points.



From: Mike Horvath
Subject: Re: Euclidean infinite detail
Date: 27 Oct 2016 14:50:16
Message: <58124c68$1@news.povray.org>
On 10/27/2016 4:58 AM, scott wrote:
> So do you remember Euclidean and their "infinite detail" engine? If not
> there are plenty of videos on YouTube like this one:
> [...]


Does hardware acceleration work with this stuff?



From: Mike Horvath
Subject: Re: Euclidean infinite detail
Date: 27 Oct 2016 14:50:54
Message: <58124c8e$1@news.povray.org>
On 10/27/2016 4:58 AM, scott wrote:
> So do you remember Euclidean and their "infinite detail" engine? If not
> there are plenty of videos on YouTube like this one:
> [...]


Also, maybe POV-Ray should support point clouds.



From: clipka
Subject: Re: Euclidean infinite detail
Date: 27 Oct 2016 21:49:42
Message: <5812aeb6$1@news.povray.org>
On 27.10.2016 at 20:50, Mike Horvath wrote:

> Does hardware acceleration work with this stuff?

I'd guess so (presuming by hardware acceleration you mean employing a GPU).

Modern GPUs have come a long way since the dedicated mesh 3D
acceleration cards of old, and have essentially evolved into
extreme-SIMD (Single Instruction, Multiple Data) generic
number-crunching CPUs that happen to include a display interface. So
whatever the actual algorithm and data structure is, as long as its
critical portions can be written as essentially branchless code applied
to lots of data items of uniform structure, GPU hardware acceleration
is certainly a feasible option.
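As a toy illustration of what "essentially branchless" means here (nothing to do with the engine's actual kernels; the clamp operation is just a stand-in):

```python
# Branchy vs. branchless formulation of the same per-element operation.
# SIMD/GPU code prefers the second form: every lane executes the same
# instruction stream, with no divergent if/else paths.

def clamp_branchy(xs, lo, hi):
    out = []
    for x in xs:
        if x < lo:          # data-dependent branch per element
            out.append(lo)
        elif x > hi:
            out.append(hi)
        else:
            out.append(x)
    return out

def clamp_branchless(xs, lo, hi):
    # min/max map to single instructions on SIMD hardware, so the
    # whole batch runs without any per-element branching.
    return [max(lo, min(hi, x)) for x in xs]

data = [-3.0, 0.5, 7.0, 2.0]
print(clamp_branchy(data, 0.0, 1.0) == clamp_branchless(data, 0.0, 1.0))
```

Both produce identical results; the point is that the branchless form is the shape that maps well onto GPU hardware.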



From: clipka
Subject: Re: Euclidean infinite detail
Date: 27 Oct 2016 21:54:47
Message: <5812afe7$1@news.povray.org>
On 27.10.2016 at 20:51, Mike Horvath wrote:

> Also, maybe POV-Ray should support point clouds.

You can always read point cloud data at parse time and convert it to a
bunch of spheres (or a set of blob elements if you're low on memory).

If you want something more complicated, then you'd first have to clarify
what that something is; only then will it be possible to even discuss
how that could be achieved, let alone whether it would be reasonable to
hard-code into POV-Ray.
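The parse-time route clipka describes would use SDL's #fopen/#read loop; the same conversion can equally be done by a small preprocessing script that writes an include file. A minimal sketch (the point list, radius and union wrapper are arbitrary illustration, not a fixed format):

```python
# Turn a list of point-cloud samples into POV-Ray sphere{} statements,
# ready to be pasted into or #included from a scene file.

points = [
    (0.0, 0.0, 0.0),
    (0.5, 1.0, 0.2),
    (1.0, 0.3, 0.8),
]

radius = 0.05  # one fixed radius for every point, for simplicity
lines = ["union {"]
for x, y, z in points:
    lines.append("  sphere {{ <{}, {}, {}>, {} }}".format(x, y, z, radius))
lines.append("}")

sdl = "\n".join(lines)
print(sdl)
```

For large clouds, swapping sphere{} for blob components (as suggested above) trades parse-time memory for a smoother, cheaper surface.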



From: Stephen
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 02:47:24
Message: <5812f47c$1@news.povray.org>
On 10/28/2016 2:54 AM, clipka wrote:
> Am 27.10.2016 um 20:51 schrieb Mike Horvath:
>
>> Also, maybe POV-Ray should support point clouds.
>
> You can always read point cloud data at parse time and convert it to a
> bunch of spheres (or a set of blob elements if you're low on memory).
>
> If you want something more complicated, then you'd first have to clarify
> what that something is; only then will it be possible to even discuss
> how that could be achieved, let alone whether it would be reasonable to
> hard-code into POV-Ray.
>

Aren't DF3s created by tda2df3 part of the way there?
tda2df3.exe encodes the location and colour components of each point.
With a bit of work with "DF3 Viewer Oosawa", the colour channels can be 
separated and used by POV-Ray.


-- 

Regards
     Stephen



From: clipka
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 03:04:17
Message: <5812f871$1@news.povray.org>
On 28.10.2016 at 08:47, Stephen wrote:

>> You can always read point cloud data at parse time and convert it to a
>> bunch of spheres (or a set of blob elements if you're low on memory).
>>
>> If you want something more complicated, then you'd first have to clarify
>> what that something is; only then will it be possible to even discuss
>> how that could be achieved, let alone whether it would be reasonable to
>> hard-code into POV-Ray.
> 
> Aren't DF3s created by tda2df3 part of the way there?
> tda2df3.exe encodes the location and colour components of each point.

Not sure what you're referring to there; presumably some external tool,
but certainly not part of the official POV-Ray package ;)

So this probably doesn't qualify as whatever Mike had in mind when
suggesting that "POV-Ray should support point clouds".

> With a bit of work with "DF3 Viewer Oosawa", the colour channels can be
> separated and used by POV-Ray.

Same with this one (though in this case I did manage to figure out what
that thing actually is ;))



From: Stephen
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 05:35:18
Message: <58131bd6$1@news.povray.org>
On 10/28/2016 8:04 AM, clipka wrote:
> On 28.10.2016 at 08:47, Stephen wrote:
>
>> Aren't DF3s created by tda2df3 part of the way there?
>> tda2df3.exe encodes the location and colour components of each point.
>
> Not sure what you're referring to there; presumably some external tool,
> but certainly not part of the official POV-Ray package ;)
>

Well spotted. But then, you have the source code memorised. ;)

> So this probably doesn't qualify as whatever Mike had in mind when
> suggesting that "POV-Ray should support point clouds".
>

Are you sure we are talking about the same thing?
It seems to me that DF3s are point clouds.

>> With a bit of work with "DF3 Viewer Oosawa", the colour channels can be
>> separated and used by POV-Ray.
>
> Same with this one (though in this case I did manage to figure out what
> that thing actually is ;))
>
Oo! Tell me then, please.
I am lost.

-- 

Regards
     Stephen



From: clipka
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 08:03:40
Message: <58133e9c$1@news.povray.org>
On 28.10.2016 at 11:35, Stephen wrote:

>> So this probably doesn't qualify as whatever Mike had in mind when
>> suggesting that "POV-Ray should support point clouds".
> 
> Are you sure we are talking about the same thing?

Pretty much so.

> It seems to me that DF3s are point clouds.

Nope. DF3 is a voxel format, which is an entirely different beast.

You can think of a point cloud as just the bare vertices of a mesh, with
no triangles to connect them up. Each point is explicitly specified by
its set of coordinates, and the points are the primary data (although
additional attributes may be associated with them, such as colours).
Point clouds are often used to represent an object's volume by providing
sample points on that object's surface (though there are also measuring
processes that generate data points all across a given volume), and this
type of point cloud data is often the raw output format of
professional-grade 3D scanning devices.

Voxel data, on the other hand, is the 3D equivalent of a pixel image: an
implicit regular grid of "volume pixels" (hence the term) covering a
(typically box-shaped) region in 3D space, with explicit attributes
associated with each and every voxel; the attribute values constitute
the primary data. That data may represent pretty much anything: colour,
pressure, heat, wind speed and direction, or whatever. Representing an
object's shape is just one possible application, in which case each
voxel's data will represent what portion of that voxel falls inside the
object's volume. Voxel data is almost always derived from other input
data (such as point clouds).
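That derivation step can be sketched as simple binning: quantise each point's coordinates to identify a grid cell, then accumulate an attribute per cell. (The grid resolution and the choice of a point-count attribute are arbitrary here.)

```python
# Derive voxel data from a point cloud over the unit cube.

GRID = 4      # a 4 x 4 x 4 voxel grid
voxels = {}   # (i, j, k) -> number of sample points in that cell

cloud = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.14), (0.9, 0.9, 0.9)]

for x, y, z in cloud:
    # The grid is implicit: a voxel is identified purely by its
    # quantised coordinates, not by any stored position.
    key = (int(x * GRID), int(y * GRID), int(z * GRID))
    voxels[key] = voxels.get(key, 0) + 1

# The first two nearby points land in the same voxel, the third in
# another; the counts, not the positions, are now the primary data.
print(voxels)
```

This also shows the information loss involved: the two points sharing a voxel can no longer be distinguished, which is why the point cloud, not the voxel grid, is the raw format.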


Post a reply to this message

From: Stephen
Subject: Re: Euclidean infinite detail
Date: 28 Oct 2016 09:12:41
Message: <58134ec9$1@news.povray.org>
On 10/28/2016 1:03 PM, clipka wrote:
> On 28.10.2016 at 11:35, Stephen wrote:
>
>>> So this probably doesn't qualify as whatever Mike had in mind when
>>> suggesting that "POV-Ray should support point clouds".
>>
>> Are you sure we are talking about the same thing?
>
> Pretty much so.
>
>> It seems to me that DF3s are point clouds.
>
> Nope. DF3 is a voxel format, which is an entirely different beast.
>
> [...]
>

If I understand what you said, voxel data can be, or is, a subset of 
point cloud data.

So what is the problem with redefining the coordinates in a DF3 as 
points or small spheres instead of cubical volumes?

To all intents and purposes, the DF3s I have made in POV-Ray act like 
point clouds when using emitting media.


-- 

Regards
     Stephen




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.