scott wrote:
>> Thus, in the absence of noise (which is a *big* assumption) and
>> ignoring image boundaries, you could compute (non-blind) deconvolution
>> by using an FFT on the image and the kernel and dividing, which is a
>> pretty efficient operation.
>
> The problem is that the convolved version you have ("the photo") is not
> made of complex numbers (which it would be if you did the convolution
> yourself). You need this in order to do the deconvolution; otherwise you
> don't have enough information. See my 1D example in the other post.
I'm a bit confused by what you mean here. If both the image and the
kernel are real, then complex numbers would only be used in the
frequency-space versions of the images; once you apply the IFFT,
everything should be real again. I don't see how you'd need a
(non-trivial) complex version of the image anywhere.
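To illustrate what I mean, here's a quick sketch (assuming NumPy; the array
sizes and the box-blur kernel are just made up for the example). The complex
numbers only live in the frequency-domain intermediates, and the inverse FFT
of a real image convolved with a real kernel comes back real up to round-off:

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))           # real-valued test "image"
kernel = np.zeros((64, 64))
kernel[:5, :5] = 1.0 / 25.0            # real-valued 5x5 box blur (cyclic)

# Complex numbers appear only in these frequency-domain intermediates.
blurred_freq = np.fft.fft2(image) * np.fft.fft2(kernel)
blurred = np.fft.ifft2(blurred_freq)

print(np.max(np.abs(blurred.imag)))    # ~1e-16, i.e. floating-point round-off
blurred = blurred.real                 # so it's safe to keep only the real part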
Your 1D example seemed to be about the image boundaries rather than the
image being represented by real/complex numbers. The point about the
image boundaries is true, and will prevent an exact deconvolution as you
pointed out. You can still often get a good approximate solution by
either assuming boundary conditions or by using an image prior. I
believe the latter of these is what's generally used in practice for
image deblurring.
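As a rough illustration of how a simple prior can enter the Fourier-domain
computation (just a Tikhonov/Wiener-style regularised division sketched with
NumPy; real deblurring methods are considerably more sophisticated, and the
function name and eps value here are made up):

import numpy as np

def regularized_deconvolve(y, k, eps=1e-2):
    # y, k: real 2-D arrays of the same shape (blurred image and blur kernel),
    #       with the cyclic boundary conditions that the FFT implies.
    # eps:  regularisation weight; larger values trade sharpness for stability.
    K = np.fft.fft2(k)
    Y = np.fft.fft2(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + eps)   # damped inverse filter
    return np.fft.ifft2(X).real

The eps term acts like a very crude Gaussian-type prior on the image: it damps
the frequencies where the kernel response is weak instead of dividing by a
near-zero value.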
In the case of the most basic FFT method for convolution, the boundary
conditions for the image would be implicitly cyclic. I should point out
that (IIRC) the naive deconvolution method of x=ifft(fft(y)/fft(k))
generally doesn't work well in practice, but the idea of using the
Fourier transform to accelerate the computations still carries across to
more sophisticated approaches.
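For what it's worth, here's a small sketch of that naive method (again
assuming NumPy, with a made-up 7x7 box blur and noise level). Under the
implicit cyclic boundary conditions it recovers the image essentially exactly
when there's no noise, but even a little noise gets amplified wherever fft(k)
is small, which is why it tends not to work well in practice:

import numpy as np

rng = np.random.default_rng(1)
x_true = rng.random((64, 64))          # real "image" to recover
k = np.zeros((64, 64))
k[:7, :7] = 1.0 / 49.0                 # 7x7 box blur, cyclic via the FFT

K = np.fft.fft2(k)
y_clean = np.fft.ifft2(np.fft.fft2(x_true) * K).real
y_noisy = y_clean + 1e-3 * rng.standard_normal(y_clean.shape)

x_clean = np.fft.ifft2(np.fft.fft2(y_clean) / K).real   # essentially exact
x_noisy = np.fft.ifft2(np.fft.fft2(y_noisy) / K).real   # noise gets amplified

print(np.max(np.abs(x_clean - x_true)))   # tiny, round-off level
print(np.max(np.abs(x_noisy - x_true)))   # much larger: the division is unstable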
Unless I'm missing something somewhere?