>> The problem is when you apply a convolution to an image, you get a bigger
>> image (the size of the original image + the size of the convolution
>> window). The extra data around the edges is needed in order to do the
>> deconvolution. Obviously in a photo you don't get this information.
>
> This merely means that you can't deconvolve the edges properly.
I thought about this some more and don't agree with you. Take this 1D
example:
Convolution kernel: {1,2,1}
Source image: {a,b,c,d,e,f,g,h}
Result: {12,12,12,12,12,12,12,12}
Can you figure out *any* of the pixels in the source image?
If you write out the convolution by hand, the first output pixel gives you one
equation in 3 unknowns, which obviously can't be solved. Each additional output
pixel adds one more equation but also one more unknown: after all 8 pixels you
have 8 equations in 10 unknowns. You can never solve this!
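To make the ambiguity concrete, here's a quick NumPy sketch (my own illustration, not part of the original argument): two *different* 8-pixel sources whose {1,2,1} convolutions agree on every output that doesn't touch the edges.

```python
import numpy as np

kernel = np.array([1, 2, 1])

# Two different 8-pixel "source images"
flat = np.full(8, 3)                         # {3,3,3,3,3,3,3,3}
zigzag = np.array([2, 4, 2, 4, 2, 4, 2, 4])  # alternating 2s and 4s

# mode='valid' keeps only the outputs computed entirely from known
# pixels, i.e. the data a photo actually gives you away from the edges.
print(np.convolve(flat, kernel, mode='valid'))    # [12 12 12 12 12 12]
print(np.convolve(zigzag, kernel, mode='valid'))  # [12 12 12 12 12 12]
```

Both sources produce 12 everywhere in the interior, so not a single source pixel can be pinned down from the result alone.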
In the mathematical world though, your source image is actually:
{0,0,...,0,0,a,b,c,d,e,f,g,h,0,0,...,0,0}
and when you apply your convolution you get edge data too.
With this extra information you can solve the deconvolution (because you
know the source pixels are zero outside of the image).
IRL the source data is not zero outside of your photo, so those extra edge
equations are never available to you.
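And the zero-padded case really is solvable. A sketch of my own (assuming NumPy; the source values are arbitrary): the full convolution of 8 pixels with a 3-tap kernel gives 10 outputs, so you get 10 equations in 8 unknowns, and the system can be inverted exactly.

```python
import numpy as np

kernel = np.array([1, 2, 1])
source = np.array([5, 1, 4, 1, 5, 9, 2, 6])  # the unknown {a,...,h}

# "Mathematical world": the source is zero outside the image, so the
# full convolution yields 10 outputs (8 + 3 - 1), including edge data.
result = np.convolve(source, kernel, mode='full')

# Build the 10x8 convolution matrix M with M[i, j] = kernel[i - j]:
# 10 equations in only 8 unknowns, so the system is solvable.
M = np.zeros((10, 8))
for i in range(10):
    for j in range(8):
        if 0 <= i - j < 3:
            M[i, j] = kernel[i - j]

# Least squares recovers the source exactly (M has full column rank).
recovered, *_ = np.linalg.lstsq(M, result, rcond=None)
print(np.round(recovered))
```

Drop the two outputs at either end (the edge data a photo doesn't give you) and you are back to fewer equations than unknowns, which is exactly the problem above.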