> So how does that affect the end visual result? Are we talking about a big
> difference or a subtle one?
A subtle one that makes materials look more realistic. There's a section in
most 3D rendering books about it; the one I remember has a bronze vase and
repeatedly compares the different algorithms to a photo. It's surprising
how tiny changes in the highlight can make you believe it's really bronze or
plastic or some unrealistic material.
> How about something that can do metallic paint? That would be nice...
Have you seen the "car paint" demo from the ATI developer website? That's
pretty cool.
> LOL! I think POV-Ray probably beats the crap out of my ray tracer with
> just 1 sphere. ;-) But hell yeah, faster == better!
I meant compare the speeds relatively: do 10^N spheres on your tracer, then
on POV, and compare the curves. POV doesn't simply test each ray against
every object during tracing...
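A toy model of why the curves differ (Python, idealised numbers): a naive tracer does an intersection test per ray per object, while a tracer with a bounding hierarchy, which is roughly what POV does, only visits on the order of log(n) nodes per ray. The exact constants here are made up; it's the shape of the curves that matters.

```python
import math

def naive_tests(n_spheres, n_rays):
    # Naive tracer: every ray is tested against every object -> O(n) per ray.
    return n_rays * n_spheres

def bvh_tests(n_spheres, n_rays):
    # Bounding-hierarchy tracer: roughly O(log n) node visits per ray for a
    # well-distributed scene.  This is an idealised model, not POV's real cost.
    return n_rays * max(1, math.ceil(math.log2(n_spheres)))

# Compare the curves for 10^1 .. 10^5 spheres at a million rays:
for exp in range(1, 6):
    n = 10 ** exp
    print(n, naive_tests(n, 1_000_000), bvh_tests(n, 1_000_000))
```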
>> NURBS are not isosurfaces though.
>
> Oh. Wait, you mean they're parametric surfaces then?
NURBS are just the 2D equivalent of splines: basically one way to
mathematically define a surface. An isosurface instead defines a scalar
field in 3D, and the surface is constructed where the field equals zero.
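To make the distinction concrete, here's a sphere both ways (a minimal Python sketch, names my own): the parametric form *generates* points from parameters, while the isosurface form classifies every point in space by a field value, with the surface at zero.

```python
from math import sin, cos, pi

def parametric_sphere(u, v, r=1.0):
    # Parametric surface: a point is *produced* from parameters (u, v).
    return (r * cos(u) * sin(v), r * sin(u) * sin(v), r * cos(v))

def sphere_field(x, y, z, r=1.0):
    # Isosurface: a scalar field over all of 3D space.  The surface is the
    # set of points where the field crosses zero (inside < 0, outside > 0).
    return x * x + y * y + z * z - r * r
```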
> Does it add more triangles to the areas of greatest curvature and fewer to
> the flat areas?
No, it just uses 32x32x32 marching cubes for each "block". The block size
depends on the distance from the camera.
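Something like this sketch (Python, with a made-up sizing rule) captures the scheme: the sampling grid stays at a fixed 32^3 per block, and only the world-space size of a block grows with camera distance, so far-away blocks cover more space at lower effective detail.

```python
import math

def block_params(distance, base_size=1.0):
    # Hypothetical rule: double the block's world-space size for each
    # doubling of camera distance, so each block covers roughly the same
    # screen area.  The marching-cubes grid inside a block is always 32^3.
    level = max(0, int(math.log2(max(distance, 1.0))))
    return base_size * (2 ** level), 32  # (world size, grid resolution)
```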
> Even so, I would think that something like heavily textured rock would
> take an absurd number of triangles to capture every tiny crevice.
But it only needs to cover the ones you can see close up, and modern
graphics cards can render 10 billion vertices per second, so it should be
doable.
> And how do you avoid visible discontinuities as the LoD changes?
Alpha blend the old and new blocks (a very old LOD technique), and apply a
bias to the isosurface function based on block size (this stops the two
surfaces "fighting" during the transition). The transitions are so small in
screen space that you don't notice them.
> I often look at a game like HL and wonder how it's even possible. I mean,
> you walk through the map for, like, 20 minutes before you get to the other
> end.
Try driving at 150mph for an hour before you get to the other end :-)
> The total polygon count must be spine-tinglingly huge. And yet, even on a
> machine with only a few MB of RAM, it works. How can it store so much data
> at once? (Sure, on a more modern game, much of the detail is probably
> generated on the fly. But even so, maps are *big*...)
Mesh instancing. Just as in POV you can draw the same mesh many times with
very little extra memory, so can the GPU. In fact you can even make subtle
changes to each mesh as you draw it (eg colour, vertex displacement,
animation cycles etc). You can quickly draw a field of trees and grass
moving in the wind, with an army of 1000 men running across it, using just a
handful of meshes. The full triangle array is never held in RAM or GPU
memory at any one time.
Since DX10 you can now use the GPU to actually create geometry on the fly,
so you are no longer limited to only modifying existing meshes. For
instance the CPU could provide a simplified mesh of a person, and a list of
10000 points where the person should be drawn. The GPU can then generate
more detailed geometry when needed (if the mesh is near the camera), animate
the mesh based on some walk cycle, change the colour of the clothes or
whatever, and then render it. It gives the impression of billions of
polygons while taking up a tiny amount of RAM.
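The "more detail when near the camera" part can be sketched like this (Python, with a hypothetical distance rule; real engines tune these thresholds per asset): each subdivision step quadruples the triangle count, and only instances close to the camera get the extra steps.

```python
import math

def subdivision_level(distance, near=10.0, max_level=3):
    # Hypothetical rule: full detail inside `near`, then one level less for
    # every doubling of distance beyond it, down to the base mesh.
    if distance <= near:
        return max_level
    return max(0, max_level - int(math.log2(distance / near)))

def triangle_count(base_triangles, level):
    # Each subdivision step splits every triangle into four.
    return base_triangles * 4 ** level
```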
Also, big games tend to load data from the disc in the background as you
get near a different part of the level. They also need to shuffle things
about in GPU memory as they go along.