A couple demos:
http://math.hws.edu/eck/math371/applets/Haar.html
This is what's used in JPEG 2000 compression... and it's rather
clever, when you think about it.
Take a signal, repeatedly split it into low- and high-frequency
halves, subsample those segments, and place them back into the buffer.
Once that has been completed, discard the samples (coefficients) with
the lowest amplitudes.
Rebuild the signal from the remaining samples. You've dropped a ton of
data, but you still have a very close approximation of the original.
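
Here's a minimal sketch of that idea in Python/NumPy, using the plain
Haar transform (the one in the demo linked above). The toy signal, the
transform length, and the "keep only the top 10% of coefficients" rule
are just made-up illustration values:

import numpy as np

def haar_forward(x):
    """Repeatedly split into pair-averages (low) and pair-differences (high)."""
    x = x.astype(float)
    n = len(x)
    while n > 1:
        lo = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # low-frequency half
        hi = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # high-frequency half
        x[: n // 2], x[n // 2 : n] = lo, hi          # subsample back into the buffer
        n //= 2
    return x

def haar_inverse(c):
    """Undo the transform one level at a time."""
    c = c.copy()
    n = 1
    while n < len(c):
        lo, hi = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (lo + hi) / np.sqrt(2.0)
        c[1 : 2 * n : 2] = (lo - hi) / np.sqrt(2.0)
        n *= 2
    return c

# Toy signal (length must be a power of two for this simple version).
t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

coeffs = haar_forward(signal)

# Discard (zero out) the 90% of coefficients with the smallest amplitudes.
cutoff = np.quantile(np.abs(coeffs), 0.90)
compressed = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

rebuilt = haar_inverse(compressed)
print("kept", np.count_nonzero(compressed), "of", len(coeffs), "coefficients")
print("max reconstruction error:", np.max(np.abs(rebuilt - signal)))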
Another use of wavelets: Fourier analysis of signals.
Break the signal down using wavelets, then take each section of the
decomposition and run a Fourier transform on it. The lower frequencies
get pushed up into the higher bins of their (now subsampled) band,
which gives you much better frequency resolution at the low end.
(There's a code sketch of this after the aside below.)
(Typically, with a Fourier transform, low frequencies don't have very
good resolution, since the bins progress like this:
1x 2x 3x 4x 5x 6x 7x 8x 9x ...
If the frequency band you're interested in is close to the fundamental
of the transform, you get huge jumps in frequency. Take the first two
bins: the fundamental and double the fundamental, with nothing in between.
Move that to the right:
32x 33x 34x 35x 36x 37x 38x 39x ...
Now the step from bin 32 to bin 33 is only a factor of 1.03, so
relative to the bottom of that range you're looking at a stepping like:
1x 1.03x 1.06x 1.09x 1.12x 1.15x ...
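
A quick way to check that stepping with NumPy (the sample rate and
transform length here are just placeholder values):

import numpy as np

fs, n = 1024.0, 1024                    # placeholder sample rate (Hz) and FFT length
freqs = np.fft.rfftfreq(n, d=1.0 / fs)  # centre frequency of each FFT bin

print(freqs[1:4])             # [1. 2. 3.] -- near the bottom, each step is a huge relative jump
print(freqs[33] / freqs[32])  # 1.03125   -- up at bin 32, the relative step is tiny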
Much more precision in the frequency domain. The time domain,
obviously, will suffer for these lower frequencies: you can't pinpoint
as accurately where they are in time, due to the slowness of their
oscillation. There are ways, I think, of getting around this, but I
haven't gotten that far in my understanding.)
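
Here's a rough sketch of the band-splitting idea itself, again in
Python/NumPy. It assumes you keep a fixed FFT frame size per band, and
it uses plain Haar pair-averaging as the low-pass, so treat it as an
illustration rather than a proper filter bank: two tones at 5.0 and
5.5 Hz are misread as 5 and 6 Hz in a 1 Hz-resolution frame of the raw
signal, but come out correctly once the low band has been split off
and subsampled three times.

import numpy as np

fs = 1024.0                            # assumed sample rate (Hz)
t = np.arange(8192) / fs               # 8 seconds of signal
x = np.sin(2 * np.pi * 5.0 * t) + np.sin(2 * np.pi * 5.5 * t)   # two close low tones

def haar_low_band(sig):
    """One level of the Haar approximation: average pairs, then keep every other sample."""
    sig = sig[: len(sig) // 2 * 2]
    return (sig[0::2] + sig[1::2]) / np.sqrt(2.0)

nfft = 1024                            # fixed analysis frame size for every band

# Raw signal: bin width = fs / nfft = 1 Hz, so the pair shows up as peaks at 5 and 6 Hz.
raw = np.abs(np.fft.rfft(x[:nfft]))
print("raw frame, 1 Hz bins, top bins at:", np.sort(np.argsort(raw)[-2:]) * fs / nfft)

# Split off the low band three times: effective rate fs/8, but the same 1024-point
# frame now spans 8x more time, so bin width = (fs/8) / nfft = 0.125 Hz.
low = x
for _ in range(3):
    low = haar_low_band(low)
lowmag = np.abs(np.fft.rfft(low[:nfft]))
print("low band, 0.125 Hz bins, top bins at:", np.sort(np.argsort(lowmag)[-2:]) * (fs / 8) / nfft)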
--
~Mike