  Re: Geometric puzzle  
From: scott
Date: 18 Dec 2009 05:50:47
Message: <4b2b5e87$1@news.povray.org>
> Well, presumably you'd need an alpha channel. Otherwise any further 
> drawing to these partially-covered pixels won't look right...?

I don't understand how having an alpha channel would help.  Can you explain 
how it would solve the background-showing-through-the-joins problem?

> Interesting. I didn't know that. Last time I looked, the GPU takes a 
> polygon, and optionally a texture, optionally does some point-light 
> calculations, and draws a textured polygon according to the current camera 
> position. The textures are usually MIP-mapped - in other words, AA 
> precomputed - so that only leaves the polygon edges to worry about.

That *was* how GPUs worked, but since DirectX 8 there have been programmable 
pipelines, and since DirectX 9 you've been forced to use them.

Your CPU program throws a load of vertices (a "mesh") at the GPU once.  Then 
every frame *your* GPU vertex shader program transforms those vertices into 
screen space using whatever algorithm you want (usually your CPU app would 
have prepared a 4x4 matrix in advance).  This GPU program can also "output" 
any other variables it likes; common ones are normal vectors, texture 
coordinates etc.  The GPU then takes this bunch of "output" data at each 
vertex and interpolates it across the triangle, pixel by pixel.  For each 
pixel it runs *your* pixel shader program with the interpolated data as the 
input.  It's completely up to you what you do in the pixel shader; commonly 
you use the interpolated texture coordinates to look up a colour from a 
texture, combine it with some lighting calculation, and return that. 
Whatever you return, the GPU writes it to the frame buffer.
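
To make that flow concrete, here's a rough CPU-side sketch of it in 
Python/numpy.  It's not real shader code (the real thing would be an 
HLSL/GLSL program your CPU app compiles and uploads), and the function 
names, checkerboard "texture" and identity "camera" matrix are just made up 
for illustration, but the division of labour is the same: the vertex shader 
runs once per vertex with the prepared 4x4 matrix, the fixed part of the GPU 
interpolates the per-vertex outputs across the triangle, and the pixel 
shader runs once per covered pixel with the interpolated values.

import numpy as np

WIDTH, HEIGHT = 64, 64

def vertex_shader(position, mvp):
    # Runs once per vertex: transform by the 4x4 matrix the CPU app
    # prepared, do the perspective divide, then map to pixel coordinates.
    clip = mvp @ np.append(position, 1.0)
    ndc = clip[:3] / clip[3]
    return np.array([(ndc[0] * 0.5 + 0.5) * WIDTH,
                     (ndc[1] * 0.5 + 0.5) * HEIGHT])

def pixel_shader(uv):
    # Runs once per covered pixel with the *interpolated* vertex outputs.
    # Here: "look up" a procedural checkerboard texture and return a colour.
    checker = (int(uv[0] * 8) + int(uv[1] * 8)) % 2
    return np.array([1.0, 0.2, 0.2]) if checker else np.array([0.2, 0.2, 1.0])

def edge(p, q, r):
    # Signed area of triangle p-q-r; used for inside tests and barycentrics.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def draw_triangle(frame, pts, uvs):
    # The fixed-function part the GPU does for you: interpolate the
    # per-vertex outputs (here just UVs) and call the pixel shader.
    a, b, c = pts
    area = edge(a, b, c)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            p = (x + 0.5, y + 0.5)                   # sample the pixel centre
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                w0, w1, w2 = w0 / area, w1 / area, w2 / area
                uv = w0 * uvs[0] + w1 * uvs[1] + w2 * uvs[2]   # interpolated "output"
                frame[y, x] = pixel_shader(uv)

frame = np.zeros((HEIGHT, WIDTH, 3))
mvp = np.eye(4)                                      # identity "camera" for the demo
verts = [np.array(v, float) for v in ([-0.8, -0.8, 0], [0.8, -0.8, 0], [0.0, 0.8, 0])]
uvs = [np.array(t, float) for t in ([0, 0], [1, 0], [0.5, 1])]
draw_triangle(frame, [vertex_shader(v, mvp) for v in verts], uvs)
print("covered pixels:", int((frame.sum(axis=2) > 0).sum()))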

> If I had to take a guess, I'd say pretend that the polygon extends to 
> infinity in all directions, run the shader as usual, and then just adjust 
> the alpha channel according to polygon coverage. (IOW, yes, the center of 
> the screen pixel.) OTOH, I haven't actually tried it to see what it looks 
> like...

What if you have a texture that is thin red/blue stripes (or any other fine 
detail)?  That method would likely pick the wrong colour if only a small 
portion of the pixel was actually visible.  Still, I guess it's a minor 
error, but one that multi-sampling would get right.
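
For example (a toy 1D sketch in Python -- the stripe width, the coverage and 
the sample count are all made up): if a polygon covers only a thin sliver of 
a pixel, shading once at the pixel centre can return the colour of a stripe 
that the visible sliver never touches, while multi-sampling only averages 
the samples that are actually inside the polygon.

RED, BLUE = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

def stripe_texture(u):
    # Thin vertical stripes: the colour alternates every 0.1 in u.
    return RED if int(u * 10) % 2 == 0 else BLUE

def inside_polygon(u):
    # Pretend the polygon edge cuts the pixel so that only u < 0.1 is covered.
    return u < 0.1

# Method 1: shade once at the pixel centre, then scale alpha by coverage.
# The centre (u = 0.5) lands on a blue stripe even though the covered
# sliver of the pixel only ever sees the red stripe.
centre_colour = stripe_texture(0.5)
print("centre sample + alpha:", centre_colour, "alpha =", 0.1)

# Method 2: multi-sampling -- average only the sub-samples that are
# actually inside the polygon, which stays on the red stripe.
samples = [(i + 0.5) / 16 for i in range(16)]
covered = [stripe_texture(u) for u in samples if inside_polygon(u)]
avg = tuple(sum(c[i] for c in covered) / len(covered) for i in range(3))
print("multi-sample average:  ", avg, "alpha =", len(covered) / len(samples))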

> Yeah, if you use particles you need to somehow construct a surface from 
> the particle positions. But saying "all points within K units of a 
> particle are designed as inside" is much easier than trying to determine 
> the curvature of a complex shape and tesselate it with just the right 
> number of triangles...

You could just use marching cubes; you don't even need to pre-calculate 
anything.  When you come to each vertex of each cube, just use the distance 
to the nearest particle minus K as the value.
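
A minimal sketch of that in Python, if it helps -- the particle positions, K 
and the grid resolution are all made up, and scipy/scikit-image just happen 
to have a nearest-neighbour query and a marching cubes routine handy:

import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

K = 0.15                                   # made-up "radius" around each particle
rng = np.random.default_rng(0)
particles = rng.random((50, 3))            # made-up particle positions in [0,1]^3

# Sample "distance to nearest particle minus K" at every vertex of a regular grid.
n = 32
xs = np.linspace(0.0, 1.0, n)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
dist, _ = cKDTree(particles).query(grid)   # nearest-particle distance per grid vertex
field = (dist - K).reshape(n, n, n)        # negative inside the surface, positive outside

# Extract the triangle mesh where the field crosses zero.
dx = xs[1] - xs[0]
verts, faces, normals, values = marching_cubes(field, level=0.0, spacing=(dx, dx, dx))
print(len(verts), "vertices,", len(faces), "triangles")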

BTW that isn't a very good way of making a fluid from particles; it's just 
going to look like a lump of spheres glued together.

