Kevin Wampler wrote:
> I haven't really done any shader coding myself, but IIRC the way to
> implement this would be with a summation tree.
That was my thought. You loop through it log2(width) times, say, each time
doubling the offset, and each time setting this_pixel += pixel[x + offset].
Of course, I'll have to figure out whether it's possible to *actually* access
a pixel at a particular index. It only took me about 3 hours to figure out my
erosion filter was comparing pixel[x], pixel[x-1], and pixel[x+1], where x
runs from 0.0 to 1.0, so the "neighbors" were a whole texture width away
instead of one pixel over. Sheesh.
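
Something like this is what I'm picturing for a single pass, as an untested
GLSL sketch (assuming I ping-pong between two same-size textures by drawing a
full-screen quad, with the starting texture holding 1.0 in the red channel for
each changed pixel and 0.0 otherwise; "size" and "offset" are uniforms the
host would set each pass, with offset going 1, 2, 4, ... up to width/2):

uniform sampler2D tex;   // running counts from the previous pass (red channel)
uniform vec2 size;       // texture size in pixels
uniform float offset;    // doubles every pass: 1, 2, 4, ...

void main()
{
    // gl_FragCoord is already in pixel units (index + 0.5), which sidesteps
    // the 0.0-to-1.0 texture coordinate confusion: do the arithmetic in
    // pixel indices and only divide by size when actually sampling.
    vec2 here  = gl_FragCoord.xy;
    vec2 there = here + vec2(offset, 0.0);

    float sum = texture2D(tex, here / size).r;
    if (there.x < size.x)                      // don't reach past the row end
        sum += texture2D(tex, there / size).r; // this_pixel += pixel[x+offset]
    gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
}

After log2(width) passes, column 0 of each row holds that row's total.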
> standard shader language instead of CUDA or OpenCL I suppose you'd do
> this in about the same way that you'd manually generate mipmaps from a
> base texture, only with a downsampling operation which sums the counts
> of the non-white pixels instead of averaging their colors.
I'll read up on that, thanks.
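
If I go that route, I'm picturing one downsampling pass per mip level,
something like this (again an untested GLSL sketch; "srcSize" is an assumed
uniform giving the size of the level being read, the render target is half
that size, and the base level holds 1.0 in the red channel per non-white
pixel):

uniform sampler2D tex;   // previous level; red channel holds the counts
uniform vec2 srcSize;    // size of the level being read, in pixels

void main()
{
    // Rendering into a half-size target, so each output pixel covers a
    // 2x2 block of the source; sum the four counts instead of averaging.
    vec2 base = floor(gl_FragCoord.xy) * 2.0;
    float sum = 0.0;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
            sum += texture2D(tex,
                (base + vec2(float(dx), float(dy)) + 0.5) / srcSize).r;
    gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
}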
> You'll need
> to either create a surface with float precision or manually pack the
> sub-counts into the color components though.
I think I can live with one byte of precision, really. The image is coming
off a 320x240 camera. If >256 out of 320 pixels changed, chances are they're
moving the camera. :-)
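
If one byte does turn out to be too coarse, I imagine the packing you mention
would look roughly like this (hypothetical helpers, untested): split the count
into high and low bytes across two color channels going into the texture, and
reassemble on the way out.

// Hypothetical pack/unpack for a 16-bit count in two 8-bit channels.
vec2 packCount(float count)      // count in 0..65535
{
    float hi = floor(count / 256.0);
    float lo = count - hi * 256.0;
    return vec2(hi, lo) / 255.0; // each component now fits an 8-bit channel
}

float unpackCount(vec2 rg)       // as read back from the two channels
{
    return dot(rg * 255.0, vec2(256.0, 1.0));
}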
--
Darren New, San Diego CA, USA (PST)
The question in today's corporate environment is not
so much "what color is your parachute?" as it is
"what color is your nose?"