Echo, focal blur and motion blur are all instances of convolution. That
means, hypothetically, that by applying a suitable *deconvolution*, you
should be able to get back the original signal.
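As a rough sketch of the idea (hypothetical 1D example in Python/NumPy, with the
kernel known exactly and no noise):

import numpy as np

# Hypothetical 1D example: blur a signal with a known kernel, then undo
# the blur by dividing in the frequency domain (convolution theorem).
signal = np.array([0.0, 1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 0.0])
kernel = np.array([0.5, 0.3, 0.2])                  # made-up blur kernel

blurred = np.convolve(signal, kernel, mode="full")  # length 8 + 3 - 1 = 10

n = len(blurred)
restored = np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(kernel, n)).real
restored = restored[:len(signal)]                   # drop the zero padding

print(np.allclose(restored, signal))                # True in this ideal case

In that idealised setting the division recovers the original exactly.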
Question: Is it actually feasible to do this in the real world?
If I have an image with camera shake, can you really apply some math to
it and get back the unblurred image? Or is that still the domain of
science fiction?
> Echo, focal blur and motion blur are all instances of convolution.
Are you sure that focal blur is a pure 2D convolution? If it were, wouldn't
you be able to add physically correct focal blur to a perfectly sharp image
(e.g. a ray-traced image) with a simple convolution?
Ditto for motion blur - it's impossible to add motion blur by simply using a
convolution (for example, some detail hidden behind the moving object becomes
visible in real motion blur as the object moves, which a convolution of the
single sharp frame can't reproduce).
> That means, hypothetically, that by applying a suitable *deconvolution*,
> you should be able to get back the original signal.
>
> Question: Is it actually feasible to do this in the real world?
The problem is that when you apply a convolution to an image, you get a bigger
image (the size of the original image plus the size of the convolution window,
minus one). The extra data around the edges is needed in order to do the
deconvolution. Obviously in a photo you don't get this information.
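A quick NumPy check of the sizes (hypothetical 1D example):

import numpy as np

signal = np.random.rand(100)
kernel = np.ones(7) / 7.0                          # 7-tap box blur

full = np.convolve(signal, kernel, mode="full")    # 100 + 7 - 1 = 106 samples
same = np.convolve(signal, kernel, mode="same")    # 100 samples, edges cropped

print(len(full), len(same))                        # 106 100

A photo only ever gives you something like the cropped version.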
And lo On Thu, 06 May 2010 15:19:15 +0100, Invisible <voi### [at] devnull> did
spake thusly:
> Echo, focal blur and motion blur are all instances of convolution. That
> means, hypothetically, that by applying a suitable *deconvolution*, you
> should be able to get back the original signal.
>
> Question: Is it actually feasible to do this in the real world?
>
> If I have an image with camera shake, can you really apply some math to
> it and get back the unblurred image? Or is that still the domain of
> science fiction?
You mean something like Unshake?
http://www.zen147963.zen.co.uk/
--
Phil Cook
--
I once tried to be apathetic, but I just couldn't be bothered
http://flipc.blogspot.com
scott wrote:
>> Echo, focal blur and motion blur are all instances of convolution.
>
> Are you sure that focal blur is a pure 2D convolution?
No. Only if everything in the frame is at approximately the same focal
distance does it approximate a 2D convolution.
> Ditto for motion blur - it's impossible to add motion blur by simply
> using a convolution.
If the motion is "small" relative to the distance of the objects from
the camera, then the blur should be approximately a 2D convolution.
>> That means, hypothetically, that by applying a suitable
>> *deconvolution*, you should be able to get back the original signal.
>>
>> Question: Is it actually feasible to do this in the real world?
>
> The problem is that when you apply a convolution to an image, you get a
> bigger image (the size of the original image plus the size of the
> convolution window, minus one). The extra data around the edges is needed
> in order to do the deconvolution. Obviously in a photo you don't get this
> information.
This merely means that you can't deconvolve the edges properly.
A much bigger problem is figuring out what the hell the convolution
kernel might have been, given only the blurred image...
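Even the "easy" case - deconvolving when the kernel *is* known - takes a bit
of care. Here's a rough Wiener-style sketch in NumPy (hypothetical image and
kernel, periodic boundaries assumed); it says nothing about how you'd find
the kernel in the first place:

import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-4):
    # Frequency-domain Wiener filter: assumes periodic boundaries and a
    # *known* kernel.  The noise_power term keeps us from dividing by
    # values of H that are essentially zero.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(G * W))

# Made-up test image and a horizontal 9-pixel motion-blur kernel.
image = np.random.rand(128, 128)
kernel = np.zeros((1, 9))
kernel[0, :] = 1.0 / 9.0

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                               np.fft.fft2(kernel, s=image.shape)))
restored = wiener_deconvolve(blurred, kernel)

# The restoration error is far smaller than the blur itself - but only
# because the kernel was handed to us.
print(np.abs(blurred - image).mean(), np.abs(restored - image).mean())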
On Thu, 06 May 2010 16:19:15 +0200, Invisible <voi### [at] devnull> wrote:
> Echo, focal blur and motion blur are all instances of convolution. That
> means, hypothetically, that by applying a suitable *deconvolution*, you
> should be able to get back the original signal.
>
> Question: Is it actually feasible to do this in the real world?
>
> If I have an image with camera shake, can you really apply some math to
> it and get back the unblurred image? Or is that still the domain of
> science fiction?
http://www.focusmagic.com/
--
FE
Invisible wrote:
> Echo, focal blur and motion blur are all instances of convolution. That
> means, hypothetically, that by applying a suitable *deconvolution*, you
> should be able to get back the original signal.
>
> Question: Is it actually feasible to do this in the real world?
>
> If I have an image with camera shake, can you really apply some math to
> it and get back the unblurred image? Or is that still the domain of
> science fiction?
The hardest part in practice is determining the blur kernel. Even if
you can determine it exactly, the result still won't be perfect, but it
will be noticeably (potentially significantly) deblurred (at least for
camera shake, defocus works less well).
Kevin Wampler wrote:
> will be noticeably (potentially significantly) deblurred (at least for
> camera shake, defocus works less well).
I would think the kernel for defocus would be different at each pixel,
depending on the original depth from that pixel through the lens to whatever
you're seeing there, yes? Camera shake is probably easier because all the
pixels are going to be blurred with essentially the same convolution (modulo
camera rotation about the axis of the lens).
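Back-of-the-envelope sketch (hypothetical thin-lens numbers) of how the blur
diameter varies with depth:

import numpy as np

# Hypothetical thin-lens numbers: the circle-of-confusion diameter (and
# hence the defocus kernel size) depends on the depth at each pixel.
focal_length = 0.05                        # 50 mm lens
aperture     = 0.025                       # aperture diameter, i.e. f/2
focus_dist   = 2.0                         # focused at 2 m

depth = np.array([1.0, 2.0, 4.0, 10.0])    # depths seen at four pixels (m)
coc = (aperture * focal_length * np.abs(depth - focus_dist)
       / (depth * (focus_dist - focal_length)))
print(coc)   # a different blur diameter at each depth (zero at 2 m)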
--
Darren New, San Diego CA, USA (PST)
Linux: Now bringing the quality and usability of
open source desktop apps to your personal electronics.
Darren New wrote:
> Kevin Wampler wrote:
>> will be noticeably (potentially significantly) deblurred (at least for
>> camera shake, defocus works less well).
>
> I would think the kernel for defocus would be different at each pixel,
> depending on the original depth from that pixel through the lens to
> whatever you're seeing there, yes?
Indeed, but even if everything were at the same depth, the kernel of a
defocus filter tends to be shaped such that reconstruction is less
accurate due to floating-point resolution issues (although, since the
kernel is generally simple, it's much easier to implement).
If things are at different depths then as you say it's a harder problem.
Ditto for motion blur when parallax matters or there are independently
moving or non-rigid objects in the scene.
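To make that concrete, a rough 1D stand-in for a defocus kernel (hypothetical
numbers) shows how close to zero its frequency response gets - and that's
what the deconvolution ends up dividing by:

import numpy as np

n = 256
defocus = np.zeros(n)
defocus[:15] = 1.0 / 15.0     # flat 15-sample "pillbox", a crude 1D
                              # stand-in for an out-of-focus kernel

H = np.abs(np.fft.fft(defocus))
print(H.min())                # dips very close to zero
print(1.0 / H.min())          # any error at those frequencies gets
                              # amplified by roughly this factor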
> No. Only if everything in the frame is at approximately the same focal
> distance does it approximate a 2D convolution.
I doubt that happens very often; usually the camera gets a point in focus
you didn't intend (and the bit you did intend is then out of focus).
> If the motion is "small" relative to the distance of the objects from the
> camera, then the blur should be approximately a 2D convolution.
But is it accurate enough to visibly improve the sharpness of an image?
Judging by all the software I've seen that claims to do this, usually not.
> This merely means that you can't deconvolve the edges properly.
True; more precisely, a region half the size of the convolution kernel along
each edge.
> A much bigger problem is figuring out what the hell the convolution kernel
> might have been, given only the blurred image...
Indeed, and it might not be constant for every pixel.
>> Question: Is it actually feasible to do this in the real world?
>
> http://www.focusmagic.com/
Judging by the demo images... it's feasible, but the results aren't
worth it.