On 5/19/2014 1:28 AM, scott wrote:
>> I think the fatal flaw is, from the standpoint of, say.. if you wanted
>> to produce immersion, like in a game, is actually that their method of
>> generating the data set (i.e. getting the real world data) produces
>> holes, so no matter how good it is, from some points, or with specific
>> objects, it falls apart. Now, I would love to see what you could do if
>> you took CG, then, instead of the normal trace system, kind of "swept
>> through", layer by layer, recording all possible points, so that *if*
>> you ever found yourself looking at the scene from "any" position, you
>> would get no holes at all. Forget mesh, just use that for your
>> "physics", then this thing to generate the "world".
>
> FWIW this is exactly what the car racing sim iRacing does, they laser
> scan in the tracks and use this data for the physics. They then build up
> a traditional triangle-based mesh and textures for the graphics. Of
> course there is the risk the two are disjointed, and it sometimes
> happens that you get release notes like "fixed issue where a car appears
> to hover above the track at XXX".
>
Well, I was thinking of the exact opposite, actually. You could build
clean, gap-free physics first, then use the point data to generate the
actual scene. Since that data would be derived by mapping even the
"hidden" points a laser scan can't see, it wouldn't care what angle you
actually viewed it from, once "on world".
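To illustrate the difference, here's a toy sketch (my own, not from any
actual engine): a unit sphere stands in for a scene surface. A simulated
"laser scan" from one direction only records the points facing the
scanner, so viewed from the far side the data has a hole; a full "sweep"
that records every surface point has no hole from any viewpoint.

```python
import math
import random

def sphere_points(n, seed=0):
    # Uniform random points on a unit sphere -- the toy "scene" surface.
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)
        t = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        pts.append((r * math.cos(t), r * math.sin(t), z))
    return pts

def laser_scan(points, scan_dir):
    # A scan only captures points whose surface normal faces the scanner;
    # on a unit sphere the outward normal at a point is the point itself.
    return [p for p in points
            if sum(a * b for a, b in zip(p, scan_dir)) > 0.0]

def count_visible(points, view_dir):
    # Count recorded points on the hemisphere facing a given viewpoint.
    return sum(1 for p in points
               if sum(a * b for a, b in zip(p, view_dir)) > 0.0)

scene = sphere_points(10000)
scanned = laser_scan(scene, (0.0, 0.0, 1.0))  # scanner looking along +z
swept = scene                                  # "sweep" keeps every point

# From the opposite side, the scan is empty (a hole); the sweep is not.
print(count_visible(scanned, (0.0, 0.0, -1.0)))  # 0
print(count_visible(swept, (0.0, 0.0, -1.0)))    # roughly half of 10000
```

Of course a real sweep of a CG scene would be far more involved, but the
point stands: data captured exhaustively from the model, rather than
from one or a few scan positions, can't have view-dependent holes.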
--
Commander Vimes: "You take a bunch of people who don't seem any
different from you and me, but when you add them all together you get
this sort of huge raving maniac with national borders and an anthem."