> Game engines implement an ample variety of techniques to reduce the
> amount
> of polygons sent to the graphics card.
True, but you very rarely need to keep a copy of the actual meshes used for
rendering. For the sort of algorithms you mean, you almost never go down to
the polygon level, because it's faster to let the GPU handle that. You'll
usually be working with the bounding boxes of meshes, not the actual mesh
data.
As for textures, I don't think you need them in CPU RAM at all during the
game.
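For illustration, a minimal sketch of what the CPU typically keeps per mesh - just an axis-aligned bounding box computed once at load time. The types and names here are my own, not from any particular engine:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box: often the only per-mesh geometry
// the CPU needs for visibility work.
struct AABB {
    Vec3 min, max;
};

// Computed once when the mesh is loaded; after that the full
// vertex data can live solely in GPU memory.
AABB computeAABB(const std::vector<Vec3>& verts) {
    AABB box{verts.front(), verts.front()};
    for (const Vec3& v : verts) {
        box.min.x = std::min(box.min.x, v.x);
        box.min.y = std::min(box.min.y, v.y);
        box.min.z = std::min(box.min.z, v.z);
        box.max.x = std::max(box.max.x, v.x);
        box.max.y = std::max(box.max.y, v.y);
        box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}
```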
> The optimal situation is that only
> those polygons which are visible in the current frame are sent to the
> graphics card to be rendered.
Maybe that was optimal 15 years ago, but today the GPU is far faster than
the CPU at culling polygons, especially once you consider the overhead of
transferring all the mesh data to the GPU every frame - that is slow, and
nobody does it. The optimal approach nowadays is to keep as much mesh and
texture data as possible resident on the GPU, and each frame the CPU just
tells the GPU which meshes to render, based on some culling algorithm.
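A sketch of that per-frame CPU side, assuming simple bounding-sphere vs. frustum-plane culling. All the names here are illustrative, not any specific engine's API; the point is that only indices of GPU-resident meshes ever leave this loop:

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Plane stored so that dot(n, p) + d >= 0 on the visible side.
struct Plane { Vec3 n; float d; };

struct BoundingSphere { Vec3 center; float radius; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A mesh is culled if its bounding sphere lies entirely behind
// any one frustum plane. The mesh data itself never touches the
// CPU; we only decide which GPU-resident meshes get draw calls.
bool isVisible(const BoundingSphere& s, const std::vector<Plane>& frustum) {
    for (const Plane& p : frustum) {
        if (dot(p.n, s.center) + p.d < -s.radius)
            return false;  // fully outside this plane
    }
    return true;
}

// Per-frame: build the list of mesh indices to hand to the GPU.
std::vector<int> buildDrawList(const std::vector<BoundingSphere>& spheres,
                               const std::vector<Plane>& frustum) {
    std::vector<int> visible;
    for (int i = 0; i < (int)spheres.size(); ++i)
        if (isVisible(spheres[i], frustum))
            visible.push_back(i);
    return visible;
}
```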
> Also in most modern games the scene to be rendered is seldom static,
> but changes all the time. For example dynamic shadows, at least if
> implemented using shadow volumes, require new shadow polygons to be
> created at each frame and sent to the graphics card.
No, they don't - you can do it that way, but it's slow. You usually build a
version of your mesh where each edge is replaced with a degenerate quad
collapsed onto that edge, and load it to the GPU once. Then, per-frame, a
vertex shader running on the GPU decides whether the edge is a silhouette
edge and, if so, expands the quad out in the direction away from the light.
Character animation and other dynamic effects (e.g. trees moving in the
wind) are usually done in the vertex shader too.
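For concreteness, here is the silhouette test in isolation, written as CPU-side C++ rather than shader code. The names are mine; in the real technique this runs per-vertex in the shader, with the two adjacent face normals stored as vertex attributes on each degenerate quad:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// An edge is a silhouette edge (as seen from the light) when one of
// its two adjacent faces points toward the light and the other points
// away. When the test passes, the quad's vertices on the far side are
// pushed out along the direction away from the light to form one side
// of the shadow volume.
bool isSilhouetteEdge(const Vec3& faceNormalA,
                      const Vec3& faceNormalB,
                      const Vec3& toLight) {
    float a = dot(faceNormalA, toLight);
    float b = dot(faceNormalB, toLight);
    return (a > 0.0f) != (b > 0.0f);
}
```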