POV-Ray : Newsgroups : povray.off-topic : Deconvolution
Re: Deconvolution
From: Kevin Wampler
Date: 17 May 2010 15:51:48
Message: <4bf19e54@news.povray.org>
scott wrote:
>> Your 1D example seemed to be about the image boundaries rather than 
>> the image being represented by real/complex numbers.  The point about 
>> the image boundaries is true, and will prevent an exact deconvolution 
>> as you pointed out.  You can still often get a good approximate 
>> solution by either assuming boundary conditions or by using an image 
>> prior.  I believe the latter of these is what's generally used in 
>> practice for image deblurring.
> 
> The example I was referring to was this one:
> 
> Convolution kernel: {1,2,1}
> Source image: {a,b,c,d,e,f,g,h}
> Result: {12,12,12,12,12,12,12,12}
> 
> What is the source image?
> 


I think my last post wasn't clear enough.  I'm agreeing with you that 
you can't exactly recover the source image in your example.  What I'm 
disputing are two claims:

1) That this has anything to do with complex numbers, even if you use 
the FFT method to recover the source image.

2) That this prevents you from getting an approximate answer in practice.

Both of these assertions can be tested pretty easily, so I'll treat them 
in turn.  Also, sorry for the long post, but it seemed useful to go into 
a fair bit of detail.


---------- (1) complex numbers aren't the issue

First of all, let's try the FFT method to solve for the source image. 
This code should run in both MATLAB and Octave:

n = 8; % size of images
k = [1 2 1]; % kernel
k2 = [zeros(1,ceil((n-3)/2)) k zeros(1,floor((n-3)/2))]; % padded kernel
y = 12 + zeros(1,n); % result image

disp('source image (FFT method)');
ifft(fft(y)./(fft(k2)+1e-8)) % +1e-8 is a regularization hack

 >>>>>> result:

{3 3 3 3 3 3 3 3}

So despite there not being any complex numbers in "the photo" (i.e. the 
result image), the method worked just fine and gave us a pretty 
reasonable-looking result.
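For anyone without MATLAB or Octave handy, here's a minimal NumPy 
translation of the same computation (the variable names mirror the 
Octave script; the +1e-8 term is the same regularization hack):

```python
import numpy as np

n = 8                                   # size of images
k = np.array([1.0, 2.0, 1.0])           # kernel
# pad the kernel out to length n, as in the Octave script
k2 = np.concatenate([np.zeros(int(np.ceil((n - 3) / 2))),
                     k,
                     np.zeros(int(np.floor((n - 3) / 2)))])
y = np.full(n, 12.0)                    # result image

# deconvolve by dividing in frequency space; +1e-8 is a regularization hack
x = np.real(np.fft.ifft(np.fft.fft(y) / (np.fft.fft(k2) + 1e-8)))
print(np.round(x, 4))                   # → [3. 3. 3. 3. 3. 3. 3. 3.]
```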

"But there's not enough information to derive a result!" I imagine 
you're thinking.  This is correct, so clearly something must be 
happening behind the scenes that allows the method to compute a source 
image at all.  The answer is that the FFT method makes an implicit assumption 
about what happens beyond the boundaries of the result image by treating 
the image as cyclic.  Thus the source image isn't just 
{a,b,c,d,e,f,g,h}, it's actually treated as 
{...,f,g,h,a,b,c,d,e,f,g,h,a,b,c,...}, which is what allows this method 
to work.
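You can check this cyclic interpretation directly: multiplication in 
frequency space is exactly cyclic convolution in image space, so 
convolving the recovered {3,3,...,3} with the padded kernel wraps 
around the boundary and reproduces the result image (a quick NumPy 
sanity check, using the same padded kernel as above):

```python
import numpy as np

x = np.full(8, 3.0)                             # recovered source image
k2 = np.array([0, 0, 0, 1, 2, 1, 0, 0], float)  # padded kernel
# multiplying in frequency space == cyclic convolution in image space
y = np.real(np.fft.ifft(np.fft.fft(k2) * np.fft.fft(x)))
print(np.round(y, 4))   # → [12. 12. 12. 12. 12. 12. 12. 12.]
```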

This is the point I was making earlier: one way to solve the problem 
is to assume something about what happens beyond the boundaries of the 
image.  A natural next question is what sort of errors such an 
assumption might introduce into the reconstructed source image.


---------- (2) approximate solutions are possible

Let's look at the source images reconstructed under three different 
"reasonable" assumptions about what happens beyond the boundaries of the 
image:

cyclic: {...,f,g,h,a,b,c,d,e,f,g,h,a,b,c,...}

zero:  {...,0,0,0,a,b,c,d,e,f,g,h,0,0,0,...}

full-intensity: {...,12,12,12,a,b,c,d,e,f,g,h,12,12,12,...}

For each case the resulting source images are:

cyclic-source: {3 3 3 3 3 3 3 3}

zero-source: {5.3333, 1.3333, 4.0000, 2.6667, 2.6667, 4.0000, 1.3333, 
5.3333}

full-intensity-source: {4.5556, 1.8889, 3.6667, 2.7778, 2.7778, 3.6667, 
1.8889, 4.5556}

Clearly the different assumptions have given rise to different 
reconstructed images.  Nevertheless, notice that the center pixels in 
each of the three cases take the values of 3.0, 2.67, and 2.78.  Despite 
the fact that the assumptions for what happened beyond the boundary 
range from 0 to 12, the value of the center pixels only ranges from 2.67 
to 3.0 -- a factor of 36 less.
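To reproduce the zero-boundary numbers above, note that convolving with 
{1,2,1} under that assumption is just a tridiagonal linear system, 
which can be solved directly (a small sketch; a dense solve is fine at 
this size):

```python
import numpy as np

n = 8
y = np.full(n, 12.0)   # result image

# Zero boundary assumption: x[i-1] + 2*x[i] + x[i+1] = y[i], with x = 0
# outside the image, i.e. a tridiagonal system.
A = 2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
x = np.linalg.solve(A, y)
print(np.round(x, 4))  # matches the zero-source values listed above
```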

Basically, an incorrect assumption about what happens beyond the edges 
of the image resulted in errors concentrated at the edges of the 
reconstructed image, so in the center of the source image the FFT method 
can actually give a pretty accurate answer.  For a larger image 
(and the same kernel) the center values will be even more accurate.


---------- side notes

I want to emphasize that the deblurring method I gave is only to 
illustrate the basic theoretical principles behind using FFT to 
accelerate the deconvolution process.  In practice inaccurate kernels 
and image noise will result in a pretty poor result if you just use the 
method I gave.  However, people have developed more sophisticated 
deblurring methods that do work well in 
practice.  In many (but not all) of these approaches, using the FFT and 
performing some operations in frequency space is very useful for 
speeding up the algorithm.
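As one concrete example of such a refinement (my addition, not 
something from the post above): replacing the ad-hoc 1/(H + 1e-8) with 
a Wiener-style filter conj(H)/(|H|^2 + lambda) damps exactly the 
frequencies the kernel destroys, with lambda chosen from the noise 
level (the value below is just an illustrative guess):

```python
import numpy as np

n = 8
k2 = np.array([0, 0, 0, 1, 2, 1, 0, 0], float)   # padded kernel
y = np.full(n, 12.0)                             # blurry result image

H = np.fft.fft(k2)
lam = 1e-3  # noise-dependent regularization weight (illustrative guess)
# Wiener-style deconvolution: damp frequencies where |H| is small
x = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + lam)))
print(np.round(x, 4))   # all values close to 3
```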

