In article <Pine.GSO.4.53.0304230228300.7720@blastwave>,
Dennis Clarke <dcl### [at] blastwave org> wrote:
> well .. not really. If by "blur" we mean the optical result of an image
> being out-of-focus then the data is all there, simply not arranged in a
> fashion that people prefer.
No, the blurred image contains less information: small details are gone,
and geometry and spatial information are lost.
> A digital blur of a digital image by way of
> a gaussian ( or similar algorithm ) approach merely distributes the data.
> There are algorithms that will un-blur an image.
No, there aren't, not in the way you seem to be thinking anyway. You've
watched too many Hollywood movies. Information is irretrievably lost in
the blur process; you cannot recover the exact original.
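The reason no algorithm can undo a blur exactly is that blurring is a
many-to-one mapping. A toy 1-D sketch (not any real image tool) makes the
point: two different signals can blur to exactly the same output, so no
un-blur algorithm could decide which one was the original.

```python
def box_blur(signal):
    """2-tap moving average: each output value is the mean of two neighbors."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(len(signal) - 1)]

a = [1, 3, 1, 3]
b = [2, 2, 2, 2]

# Two distinct inputs, identical blurred results: the mapping is
# many-to-one, so the exact original is unrecoverable from the output.
assert box_blur(a) == box_blur(b) == [2.0, 2.0, 2.0]
```

Any blur kernel that averages neighboring values has this property to some
degree; deconvolution can sharpen, but it cannot restore information the
kernel destroyed.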
> These are the same tools
> that are used to unblur the linear movement of a projectile or the rotation
> of an engine component within a photograph.
This is a bit different...you have information about the motion beyond
what is contained in the photo, and can use that to at least partially
reconstruct the original. Even then, you can't completely recover it.
> ok .. I'm with you on that. We are talking about using multiple sample
> points within a pixel then, probably distributed as a square matrix of
> samples. This would be the same then as simply having a higher resolution
> image and then doing a blur of the pixels on a block by block basis while
> ignoring neighbors. At least that is how I perceive the issue. The
> removal of jagged edges on lines and sharp boundaries can be achieved with
> either a high resolution image blured or a low-resolution image with a
> multi-sample per pixel approach.
Go ahead and blur a high-res image...you get a blurry high-res image with
no sharp edges, not a smooth-edged low-res one. Antialiasing requires
taking several samples as input and producing one pixel as output, so you
have to end up with an image smaller than the sampled data.
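The several-samples-in, one-pixel-out step can be sketched in plain Python
(a hypothetical 3x3 supersampling grid, not POV-Ray's actual code):

```python
def antialias(samples, block=3):
    """Average each block x block group of subpixel samples into one pixel.
    `samples` is a 2-D list whose dimensions are multiples of `block`."""
    h, w = len(samples), len(samples[0])
    return [
        [
            sum(samples[y + dy][x + dx]
                for dy in range(block) for dx in range(block)) / block ** 2
            for x in range(0, w, block)
        ]
        for y in range(0, h, block)
    ]

# A hard vertical edge sampled at 3x resolution: left part 0, right part 9.
hi_res = [[0, 0, 0, 0, 9, 9] for _ in range(3)]
print(antialias(hi_res))  # -> [[0.0, 6.0]]
```

The 3x3 input becomes a single row of two pixels, and the pixel straddling
the edge comes out as a blend (6.0) rather than a jagged 0-or-9 value.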
> well yes, that is clear. It makes no difference whether you sample each
> pixel 9 times ( 3x3 ) within a 100x100 data array or simply blur the 3x3
> pixel blocks of a 300x300 data array to produce a 100x100 result set. I
> think, however, that the result from the latter would be smoother than the
> former.
You don't get a smaller result set with a blur; you get one of the same
size as the source data. What you are describing is downsampling.
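The size difference between the two operations is easy to check with toy
1-D helpers (hypothetical names, assuming a 3-tap blur and a 3:1
downsample for illustration):

```python
def blur(signal):
    """Same-size 3-tap blur (edges clamped): output length == input length."""
    n = len(signal)
    return [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]

def downsample(signal, factor=3):
    """Average non-overlapping groups of `factor` samples: output is smaller."""
    return [
        sum(signal[i:i + factor]) / factor
        for i in range(0, len(signal), factor)
    ]

data = [0, 0, 0, 9, 9, 9, 0, 0, 0]
assert len(blur(data)) == 9        # blur preserves the size
assert len(downsample(data)) == 3  # downsampling shrinks it 3:1
```

Blurring a 300x300 image gives a blurry 300x300 image; averaging its 3x3
blocks down to 100x100 is the downsampling step, and that is the operation
that corresponds to multi-sample antialiasing.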
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/