  Re: Geometric puzzle  
From: Invisible
Date: 18 Dec 2009 06:04:32
Message: <4b2b61c0$1@news.povray.org>
>> Well, presumably you'd need an alpha channel. Otherwise any further 
>> drawing to these partially-covered pixels won't look right...?
> 
> I don't understand how having an alpha channel would help.  Can you 
> explain how it would solve the background-showing-through-the-joins 
> problem?

Supposing that polygon A is red, polygon B is green, and the background 
is blue, the single pixel where the two edges intersect should 
presumably be drawn with nonzero values for all three channels.

The way I imagine it working is that when polygon A is drawn, the 
pixel's RGB data is set to pure red, and the alpha value is set 
according to pixel coverage. Then, when polygon B is drawn, the green is 
mixed in according to the pixel's current alpha value. The alpha of the 
new polygon is then added to the existing alpha. When all polygons have 
been drawn, the final alpha value is used to mix in the background.

At least, I presume that's how you'd do it...
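
In code, I'm imagining something like this (a minimal sketch; all the 
names are mine, and it naively assumes each new polygon only covers 
the part of the pixel that earlier polygons didn't claim):

#include <stdio.h>

typedef struct { float r, g, b, a; } Pixel;

/* Mix a polygon of colour (r,g,b) into the pixel according to how much
   of the pixel it covers (cov, 0..1), but only into the part not yet
   claimed by earlier polygons; then add its coverage to the alpha. */
static void add_fragment(Pixel *p, float r, float g, float b, float cov)
{
    float free_cov = 1.0f - p->a;
    float take = (cov < free_cov) ? cov : free_cov;
    p->r += r * take;
    p->g += g * take;
    p->b += b * take;
    p->a += take;
}

/* When all polygons are drawn, whatever coverage remains is background. */
static void resolve(Pixel *p, float br, float bg, float bb)
{
    float rest = 1.0f - p->a;
    p->r += br * rest;
    p->g += bg * rest;
    p->b += bb * rest;
    p->a = 1.0f;
}

int main(void)
{
    Pixel px = {0.0f, 0.0f, 0.0f, 0.0f};
    add_fragment(&px, 1, 0, 0, 0.4f);   /* red polygon covers 40%        */
    add_fragment(&px, 0, 1, 0, 0.4f);   /* green polygon covers 40%      */
    resolve(&px, 0, 0, 1);              /* blue background fills the rest */
    printf("%.2f %.2f %.2f\n", px.r, px.g, px.b);   /* 0.40 0.40 0.20   */
    return 0;
}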

> That *was* how GPUs worked, but since DirectX 8 there have been 
> programmable pipelines, and since DirectX 9 you've been forced to use them.
> 
> Your CPU program throws a load of vertices (a "mesh") at the GPU once.  
> Then every frame *your* GPU vertex shader program transforms those 
> vertices into screen space using whatever algorithm you want (usually 
> your CPU app would have prepared a 4x4 matrix in advance).  This GPU 
> program can also "output" any other variables it likes; common ones are 
> normal vectors, texture coordinates etc.  The GPU then takes this bunch 
> of "output" data at each vertex and interpolates it across the triangle, 
> pixel by pixel.  For each pixel it runs *your* pixel shader program with 
> the interpolated data as the input.  It's completely up to you what you 
> do in the pixel shader; commonly you use the interpolated texture 
> coordinates to look up a colour from a texture, combine it with some 
> lighting calculation, and return that.  Whatever you return, the GPU 
> writes to the frame buffer.

Finally, an explanation of programmable GPUs that ACTUALLY MAKES SENSE. 
Thanks for that.
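
For my own benefit, here's that data flow as a toy C program. All the 
names are made up, a real GPU runs the two shader stages in hardware 
on many pixels in parallel, and I've ignored rasterisation and the 
perspective divide entirely:

#include <stdio.h>

typedef struct { float pos[4]; float uv[2]; } VsOut;

/* "Vertex shader": multiply the object-space position (x,y,z,1) by a
   4x4 matrix the CPU prepared, and pass the texture coords through. */
static VsOut vertex_shader(const float m[16], const float v[3],
                           const float uv[2])
{
    VsOut o;
    for (int r = 0; r < 4; r++)
        o.pos[r] = m[r*4+0]*v[0] + m[r*4+1]*v[1] + m[r*4+2]*v[2] + m[r*4+3];
    o.uv[0] = uv[0];
    o.uv[1] = uv[1];
    return o;
}

/* The GPU interpolates every vertex output across the triangle with
   barycentric weights (w0 + w1 + w2 == 1), one covered pixel at a time. */
static void interpolate(const VsOut *a, const VsOut *b, const VsOut *c,
                        float w0, float w1, float w2, float uv_out[2])
{
    uv_out[0] = w0*a->uv[0] + w1*b->uv[0] + w2*c->uv[0];
    uv_out[1] = w0*a->uv[1] + w1*b->uv[1] + w2*c->uv[1];
}

/* "Pixel shader": entirely up to you.  Here: fake a checker texture
   lookup, multiply by a stand-in lighting factor, return a grey level. */
static float pixel_shader(const float uv[2])
{
    float texel = (((int)(uv[0]*8) + (int)(uv[1]*8)) & 1) ? 1.0f : 0.2f;
    return texel * 0.8f;
}

int main(void)
{
    float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    float v0[3] = {0,0,0}, v1[3] = {1,0,0}, v2[3] = {0,1,0};
    float t0[2] = {0,0},   t1[2] = {1,0},   t2[2] = {0,1};

    VsOut a = vertex_shader(identity, v0, t0);
    VsOut b = vertex_shader(identity, v1, t1);
    VsOut c = vertex_shader(identity, v2, t2);

    float uv[2];
    interpolate(&a, &b, &c, 0.3f, 0.3f, 0.4f, uv);   /* one pixel */
    printf("frame buffer value: %.2f\n", pixel_shader(uv));
    return 0;
}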

>> If I had to take a guess, I'd say pretend that the polygon extends to 
>> infinity in all directions, run the shader as usual, and then just 
>> adjust the alpha channel according to polygon coverage.
> 
> What if you have a texture that is thin red/blue stripes (or any other 
> detail)?  That method would likely pick the wrong colour if only a small 
> portion of the pixel was actually visible.  Still, I guess it's a minor 
> error, but one that multi-sampling would get right.

True I guess...
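
To make that concrete, a quick sketch (hypothetical stripe texture, 
numbers chosen to show the failure): the covered sliver of the pixel 
lies on a red stripe, while the pixel centre lies on a blue one, so a 
single centre sample picks the wrong colour.

#include <stdio.h>
#include <math.h>

/* Hypothetical texture: red/blue stripes of width 0.1 in u. */
static const char *stripe(float u)
{
    return ((((int)floorf(u / 0.1f)) & 1) == 0) ? "red" : "blue";
}

int main(void)
{
    /* Say the polygon only covers u in [0.00, 0.05] of this pixel
       (a red stripe), but the pixel centre sits at u = 0.5 (blue). */
    printf("one sample at the centre: %s\n", stripe(0.5f));

    /* Multi-sampling takes several samples and keeps only the covered
       ones -- here all four land on the red stripe, as they should. */
    int reds = 0, n = 0;
    for (float u = 0.00625f; u < 0.05f; u += 0.0125f, n++)
        reds += (stripe(u)[0] == 'r');
    printf("covered samples red: %d of %d\n", reds, n);
    return 0;
}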

>> Yeah, if you use particles you need to somehow construct a surface 
>> from the particle positions.
> 
> You could just use marching cubes; you don't even need to pre-calculate 
> anything.  When you come to each vertex of each cube, just use the 
> distance to the nearest particle minus K as the value.

Mmm, interesting. I wonder if that actually works?
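
If I've understood the suggestion, the per-corner evaluation would be 
something like this (a sketch; the names are mine):

#include <stdio.h>
#include <math.h>
#include <float.h>

typedef struct { float x, y, z; } Vec3;

/* Value at one marching-cubes cell corner: distance to the nearest
   particle, minus K.  Negative inside the surface, positive outside;
   the isosurface at zero is a union of radius-K spheres. */
static float corner_value(Vec3 p, const Vec3 *particles, int n, float k)
{
    float best = FLT_MAX;
    for (int i = 0; i < n; i++) {
        float dx = p.x - particles[i].x;
        float dy = p.y - particles[i].y;
        float dz = p.z - particles[i].z;
        float d  = sqrtf(dx*dx + dy*dy + dz*dz);
        if (d < best) best = d;
    }
    return best - k;
}

int main(void)
{
    Vec3 particles[2] = { {0,0,0}, {1,0,0} };
    Vec3 corner = { 0.5f, 0.0f, 0.0f };
    /* Nearest distance 0.5, K = 0.3: corner is outside (value +0.2). */
    printf("%+.2f\n", corner_value(corner, particles, 2, 0.3f));
    return 0;
}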

> BTW that isn't a very good way of making a fluid from particles; it's 
> just going to look like a lump of spheres glued together.

Well, no. You'd want to smooth it somehow, without slowing the 
computation to the speed of molasses...
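
If I remember rightly, the standard trick here is the "metaball" / 
blobby approach: instead of distance-to-nearest-minus-K, sum a smooth 
falloff kernel over all the particles and threshold the sum, so that 
nearby blobs merge into one smooth surface. A sketch, with my own 
names and kernel choice:

#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Smooth field: sum a smooth falloff kernel over all particles
   instead of taking the nearest distance.  Blobs within radius r of
   each other then merge smoothly rather than intersecting as spheres. */
static float blobby_value(Vec3 p, const Vec3 *particles, int n, float r)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        float dx = p.x - particles[i].x;
        float dy = p.y - particles[i].y;
        float dz = p.z - particles[i].z;
        float d2 = dx*dx + dy*dy + dz*dz;
        if (d2 < r*r) {
            float t = 1.0f - d2 / (r*r);   /* 1 at the centre, 0 at r */
            sum += t * t * t;
        }
    }
    return 0.5f - sum;   /* surface where summed influence hits 0.5 */
}

int main(void)
{
    Vec3 particles[2] = { {0,0,0}, {0.8f,0,0} };
    Vec3 midpoint = { 0.4f, 0.0f, 0.0f };
    /* Negative: the two blobs have merged across the midpoint. */
    printf("%+.3f\n", blobby_value(midpoint, particles, 2, 1.0f));
    return 0;
}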

