John VanSickle wrote:
>
> Now you could make a huge matrix (N x N, where N is the number of
> elements in the image to be sharpened) and solve it. But for an image
> of, say, typical embedded YouTube size, you are looking at a matrix
> that is on the order of 80K x 80K elements. Granted, most of the
> elements are zero, but it's still a headache.
In the particular case of deconvolution there are much more efficient
algorithms available. There are various techniques, but one insight
that is commonly exploited is that convolution in the spatial domain
is equivalent to multiplication in the Fourier domain. Thus, in the
absence of noise (which is a *big* assumption) and ignoring image
boundaries, you could compute a (non-blind) deconvolution by taking
the FFT of both the image and the kernel, dividing one by the other
elementwise, and transforming back, which is a pretty efficient
operation.
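For concreteness, here's a rough Python/NumPy sketch of that idea.
It's a sketch only: the function name and the small eps regularizer
in the division are my additions, there to keep the raw divide
numerically stable where the kernel's spectrum is near zero.

import numpy as np

def fft_deconvolve(image, kernel, eps=1e-3):
    # Naive non-blind deconvolution via the convolution theorem.
    # Assumes a noise-free image and circular (wrap-around)
    # boundaries -- the same caveats as above.
    # Zero-pad the kernel to the image size and shift it so its
    # center sits at the origin, to avoid a spatial shift in the
    # output.
    padded = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    K = np.fft.fft2(padded)
    I = np.fft.fft2(image)
    # Regularized division in the Fourier domain undoes the
    # convolution; eps guards against division by zero.
    restored = np.fft.ifft2(I * np.conj(K) / (np.abs(K) ** 2 + eps))
    return np.real(restored)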
IIRC the approach above isn't used much in practice, largely because of
the noise assumptions, but some of the more sophisticated methods still
perform operations in frequency space for efficiency.