On 5/13/2014 10:20 PM, Le_Forgeron wrote:
> On 13/05/2014 23:06, Tim Cook wrote:
>> What's interesting about it, from what I understand, is that it's
>> basically raytracing the point-cloud's data, without loading the entire
>> dataset into memory first. It's only reading the pieces of information
>> that correspond to the pixels being rendered.
>
> And compared to a mesh, it's only a cloud of points, without the myriad
> of links (in a mesh, a point can be part of many edges / faces). The
> point is just a bit more complex than bare coordinates. But only a
> single set of data to scan, nice.
>
I think the fatal flaw, from the standpoint of, say, producing immersion
in a game, is that their method of generating the data set (i.e. scanning
the real world) produces holes, so no matter how good the renderer is,
from some viewpoints, or with specific objects, it falls apart. Now, I
would love to see what you could do if you took CG and, instead of the
normal trace system, kind of "swept through" the scene layer by layer,
recording all possible points, so that *if* you ever found yourself
looking at the scene from "any" position, you would get no holes at all.
Forget mesh; just use that for your "physics", then this thing to
generate the "world". But imagine what you would end up with if you "did"
produce that level of detail in a single pass, like you were making
movie-level CG effects, and then just dropped the dataset onto disk, to
be pulled in "as needed"... Yikes!
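The "pulled in as needed" idea — touching only the slices of the point
data a ray actually passes through, instead of loading the whole set —
could be sketched roughly like this. This is a toy illustration, not how
any real engine does it: the chunking scheme, the names (ChunkStore,
march_ray), and the parameters are all my own assumptions, with an
in-memory dict standing in for the disk.

```python
import math
from collections import defaultdict

CHUNK = 4.0  # assumed chunk edge length in world units


class ChunkStore:
    """Stands in for a disk-backed point store: chunks are 'read' on demand."""

    def __init__(self, points):
        self.on_disk = defaultdict(list)  # chunk key -> list of points
        for p in points:
            self.on_disk[self.key(p)].append(p)
        self.cache = {}   # only the chunks a ray actually touched
        self.loads = 0    # how many chunk 'reads' we paid for

    @staticmethod
    def key(p):
        # Quantize a position into an integer chunk coordinate.
        return tuple(int(math.floor(c / CHUNK)) for c in p)

    def chunk(self, key):
        if key not in self.cache:
            self.loads += 1  # a real system would hit the disk here
            self.cache[key] = self.on_disk.get(key, [])
        return self.cache[key]


def march_ray(store, origin, direction, max_t=32.0, step=0.5, hit_radius=0.6):
    """Walk a ray in small steps, fetching only the chunks it enters,
    and return the first stored point near enough to a ray sample."""
    t = 0.0
    seen = set()
    while t < max_t:
        pos = tuple(o + d * t for o, d in zip(origin, direction))
        k = ChunkStore.key(pos)
        if k not in seen:
            seen.add(k)
        for p in store.chunk(k):
            if math.dist(p, pos) <= hit_radius:
                return p
        t += step
    return None


points = [(10.0, 0.0, 0.0), (50.0, 50.0, 50.0)]
store = ChunkStore(points)
hit = march_ray(store, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

The ray along +x finds the point at (10, 0, 0) after loading only the
three chunks it crossed; the far-away point's chunk is never read. The
hole problem discussed above shows up here too: if no scanned point lies
within `hit_radius` of the ray, the march just returns nothing.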
--
Commander Vimes: "You take a bunch of people who don't seem any
different from you and me, but when you add them all together you get
this sort of huge raving maniac with national borders and an anthem."