On 14/09/2010 01:04 PM, Mike Raiford wrote:
> A couple demos:
>
> http://math.hws.edu/eck/math371/applets/Haar.html
>
> This is what's used in jpeg compression...
No, I'm fairly sure JPEG uses the Discrete Cosine Transform. Now
JPEG2000 really *does* use some manner of wavelet transform (although I
don't remember which one).
> And is rather clever, when you think about it.
>
> Take a signal, repeatedly break it up into pairs of low and high
> frequencies, subsample those segments, and place them back into the buffer.
I'm not at all clear on exactly how it does that.
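If the description is taken at face value, I'd guess one level of it looks something like this (a rough Python sketch on my part; I make no claim it matches what the applet actually does):

# One level of a Haar-style split: pairwise averages form the
# low-frequency half, pairwise differences the high-frequency half,
# and both halves are packed back into one buffer.
def haar_step(signal):
    lows  = [(signal[i] + signal[i+1]) / 2.0 for i in range(0, len(signal), 2)]
    highs = [(signal[i] - signal[i+1]) / 2.0 for i in range(0, len(signal), 2)]
    return lows + highs

# Repeat the step on the low-frequency half only (length assumed a power of 2).
def haar_decompose(signal):
    out = list(signal)
    n = len(out)
    while n > 1:
        out[:n] = haar_step(out[:n])
        n //= 2
    return out

print(haar_decompose([8, 6, 7, 3]))   # -> [6.0, 1.0, 1.0, 2.0]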
> Now, once that has been completed, discard the samples that have the
> lowest amplitudes.
>
> Rebuild the signal with the remaining samples. You've dropped a ton of
> data, but still have a very close approximation of the original data.
Most transform coding methods work not by *discarding* points, but by
*quantising* them according to how "important" they are deemed to be.
This one seems a little unusual in that respect.
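Roughly, the difference looks like this in sketch form (coefficient and step values made up purely for illustration):

# "Discard" approach: coefficients below a cutoff are simply zeroed.
def threshold(coeffs, cutoff):
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]

# Typical transform-coding approach: every coefficient is kept, but rounded
# to a multiple of its own step size; the "unimportant" ones get a coarser
# step (in JPEG the step sizes come from the quantisation table).
def quantise(coeffs, steps):
    return [round(c / q) * q for c, q in zip(coeffs, steps)]

coeffs = [52.3, -7.8, 1.2, 0.4]
print(threshold(coeffs, 1.0))           # [52.3, -7.8, 1.2, 0.0]
print(quantise(coeffs, [1, 2, 8, 16]))  # [52, -8, 0, 0]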
> Another use of wavelets: Fourier analysis of signals.
It's not Fourier analysis if it's not a Fourier transform. ;-)
In fact, you can apparently decompose any function into the sum of a
suitable set of basis functions. (I believe that the set of basis
functions is required to be "orthogonal" and "complete", but beyond that
they can be anything.) Depending on what basis functions you choose,
discarding or quantising points will distort the reconstructed
function in different ways. The trick is to find a good set of basis
functions for the problem in question.
(E.g., for audio data, sine waves are the obvious candidate. The only
question is whether they should be finite or not. For image data, the
solution is less obvious.)
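In the discrete case the idea boils down to linear algebra; something like this (a toy sketch, using the 2-point Haar basis as an arbitrary example of an orthonormal basis):

import numpy as np

# Analysis: coefficient k = inner product of the signal with basis function k.
# Synthesis: weighted sum of the basis functions. For an orthonormal basis
# (the rows of `basis`), synthesis exactly undoes analysis.
def analyse(signal, basis):
    return basis @ signal

def synthesise(coeffs, basis):
    return basis.T @ coeffs

basis  = np.array([[1.0,  1.0],
                   [1.0, -1.0]]) / np.sqrt(2)   # normalised sum / difference
signal = np.array([3.0, 5.0])
coeffs = analyse(signal, basis)
print(coeffs)                     # approx. [ 5.657 -1.414]
print(synthesise(coeffs, basis))  # [3. 5.] -- the signal is recovered exactly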
> Much more precision in the frequency domain. The time domain, obviously
> will suffer for these lower frequencies. You can't pinpoint as
> accurately where they are in time, due to the slowness of their
> oscillation. There are ways, I think, of getting around this, but I
> haven't gotten that far in my understanding.
Apparently there's a limit to how much precision you can get in time and
frequency. Increasing the resolution of one necessarily decreases the
resolution of the other. Apparently this is the same mathematics as the
Heisenberg uncertainty principle. (Which is interesting, since I thought that
applied only to quantum mechanics; it seems it's really a general property of
Fourier transform pairs, and the quantum-mechanical version is just one
instance of it.)
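If I remember the formal statement correctly (it's usually called the Gabor
limit), for a signal whose energy has standard deviation sigma_t in time and
sigma_f in frequency (in Hz):

    sigma_t * sigma_f >= 1 / (4 * pi)

so squeezing one spread necessarily widens the other. Heisenberg's relation
has exactly the same shape, because position and momentum are a Fourier pair
in the same way that time and frequency are.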