>> A couple demos:
>>
>> http://math.hws.edu/eck/math371/applets/Haar.html
>>
>> This is what's used in JPEG compression...
>
> No, I'm fairly sure JPEG uses the Discrete Cosine Transform.
http://en.wikipedia.org/wiki/Jpeg#JPEG_codec_example
Nice to know I'm right for once. ;-)
> Now
> JPEG2000 really *does* use some manner of wavelet transform (although I
> don't remember which one).
http://en.wikipedia.org/wiki/JPEG_2000#Technical_discussion
It's the Discrete Wavelet Transform (DWT).
On 9/14/2010 8:40 AM, Invisible wrote:
> No, I'm fairly sure JPEG uses the Discrete Cosine Transform. Now
> JPEG2000 really *does* use some manner of wavelet transform (although I
> don't remember which one).
Ah... You're right, it is JPEG2000.
>> And is rather clever, when you think about it.
>>
>> Take a signal, repeatedly break it up into pairs of low and high
>> frequencies, subsample those segments, and place them back into the
>> buffer.
>
> I'm not at all clear on exactly how it does that.
>
Using a pair of filters, low pass and high pass, in a bank:
you filter the low frequencies, and that becomes the first stage, which is
downsampled by 2.
Let's see, this is a nice diagram:
http://en.wikipedia.org/wiki/Discrete_wavelet_transform#Cascading_and_Filter_banks
Apparently, once the filters are run over it, you can downsample each of
the filtered components, and the low pass can then be filtered again.
How reducing the sample rate of the high pass data doesn't discard data,
I'm not sure yet.
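Here's a rough NumPy sketch of one level of that split, as I understand it
(the two-tap Haar kernels and the sample values are just my assumptions;
real codecs use longer filters and different scaling):

import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])

# Two-tap Haar pair: y[n] = (x[n] +/- x[n-1]) / 2.  Keeping only the
# odd-indexed outputs pairs up non-overlapping samples, i.e. it
# downsamples each branch by 2.
low  = np.convolve(x, [0.5,  0.5])[1::2]   # pairwise averages (smooth part)
high = np.convolve(x, [0.5, -0.5])[1::2]   # half-differences within each pair

print("approximation:", low)    # [ 5. 11.  7.  4.]
print("detail:       ", high)   # [ 1.  1. -1. -1.]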
>
> Most transform coding methods work not by *discarding* points, but by
> *quantising* them according to how "important" they are deemed to be.
> This one seems a little unusual in that respect.
Right. *But* you can drop every other sample because of the Nyquist limit!
>
>> Another use of wavelets: Fourier analysis of signals.
>
> It's not Fourier analysis if it's not a Fourier transform. ;-)
>
ERRRR... frequency-domain analysis, then. In the end you're doing Fourier
transforms on each portion you've separated out in the decomposition.
> Apparently there's a limit to how much precision you can get in time and
> frequency. Increasing the resolution of one necessarily decreases the
> resolution of the other. This is apparently due to the Heisenberg
> uncertainty principle. (Which is interesting, since I thought that
> applies only to quantum mechanics, not to general mathematical
> phenomena...)
Interesting. It makes sense, though. Low frequencies change relatively
little in the time domain. You can tell when it occurs, when exactly a
peak is, but not how it changes if delta-t is shorter than the period of
the wave.
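The trade-off is easy to see in FFT terms. A back-of-the-envelope sketch
(the 8 kHz rate is just an example I picked):

fs = 8000.0                      # assumed sample rate, Hz
for n in (256, 1024, 4096):      # analysis window lengths, in samples
    # Bin spacing is fs/n: finer frequency resolution costs a longer
    # stretch of signal per spectrum, i.e. coarser time resolution.
    print(n, "samples =", n / fs, "s per window,", fs / n, "Hz per bin")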
--
~Mike
>>> Take a signal, repeatedly break it up into pairs of low and high
>>> frequencies, subsample those segments, and place them back into the
>>> buffer.
>>
>> I'm not at all clear on exactly how it does that.
>
> Using a pair of filters, low pass and high pass, in a bank:
Doesn't look like any kind of lowpass or highpass filter response to me.
Looks more like it's rearranging the order of the samples or something.
> you filter the low frequencies, and that becomes the first stage, which is
> downsampled by 2.
>
> Apparently, once the filters are run over it, you can downsample each of
> the filtered components, and the low pass can then be filtered again.
> How reducing the sample rate of the high pass data doesn't discard data,
> I'm not sure yet.
When downsampling, frequencies above the Nyquist limit get reflected to
the other side of the Nyquist limit. Usually this means that the high
frequencies collide with the low frequencies - but if you've already
filtered out the low frequencies, all that happens is that the spectrum
gets flipped upside-down. You can completely reverse this process by
upsampling and then flipping the spectrum back the right way round
again. QED.
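You can check that numerically with the Haar pair (a quick sketch, NumPy
assumed): split into half-rate sums and differences, then rebuild the
signal exactly.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

pairs  = x.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / 2    # lowpass branch, downsampled
detail = (pairs[:, 0] - pairs[:, 1]) / 2    # highpass branch, downsampled

# Synthesis: a + d and a - d recover the original pair.
rebuilt = np.empty_like(x)
rebuilt[0::2] = approx + detail
rebuilt[1::2] = approx - detail

print(np.allclose(rebuilt, x))   # True -- the two half-rate branches
                                 # together still determine the input exactly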
>> Most transform coding methods work not by *discarding* points, but by
>> *quantising* them according to how "important" they are deemed to be.
>> This one seems a little unusual in that respect.
>
> Right. *But* you can drop every other sample because of the Nyquist limit!
Yes. I was just pointing out that other transform codings work in a
rather different way.
>> Apparently there's a limit to how much precision you can get in time and
>> frequency. Increasing the resolution of one necessarily decreases the
>> resolution of the other. This is apparently due to the Heisenberg
>> uncertainty principle. (Which is interesting, since I thought that
>> applies only to quantum mechanics, not to general mathematical
>> phenomena...)
>
> Interesting. It makes sense, though. Low frequencies change relatively
> little in the time domain. You can tell when it occurs, when exactly a
> peak is, but not how it changes if delta-t is shorter than the period of
> the wave.
Weird, but true. And since quantum particles ARE ALSO WAVES, you start
to understand why this might be true...
On 9/14/2010 9:44 AM, Invisible wrote:
>>>> Take a signal, repeatedly break it up into pairs of low and high
>>>> frequencies, subsample those segments, and place them back into the
>>>> buffer.
>>>
>>> I'm not at all clear on exactly how it does that.
>>
>> Using a pair of filters, low pass and high pass, in a bank:
>
> Doesn't look like any kind of lowpass or highpass filter response to me.
> Looks more like it's rearranging the order of the samples or something.
>
What it appears to be doing is, for every 2 samples:
The first of the pair gets the same positive value on its respective
sample in the first and second half. The second of the pair is then
added to the sample in the first half, and subtracted from the sample in
the second half.
Subsample and repeat the desired number of times.
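In code, that buffer shuffle might look something like this (my own sketch,
NumPy assumed, sample values made up):

import numpy as np

def haar_pass(buf):
    # Sums of each pair go into the first half of the buffer,
    # differences into the second half.
    pairs = buf.reshape(-1, 2)
    return np.concatenate([pairs[:, 0] + pairs[:, 1],
                           pairs[:, 0] - pairs[:, 1]])

print(haar_pass(np.array([4.0, 6.0, 10.0, 12.0])))   # [10. 22. -2. -2.]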
> When downsampling, frequencies above the Nyquist limit get reflected to
> the other side of the Nyquist limit. Usually this means that the high
> frequencies collide with the low frequencies - but if you've already
> filtered out the low frequencies, all that happens is that the spectrum
> gets flipped upside-down. You can completely reverse this process by
> upsampling and then flipping the spectrum back the right way round
> again. QED.
OK, makes sense.
> Weird, but true. And since quantum particles ARE ALSO WAVES, you start
> to understand why this might be true...
Right.
--
~Mike
>> Doesn't look like any kind of lowpass or highpass filter response to me.
>> Looks more like it's rearranging the order of the samples or something.
>>
>
> What it appears to be doing is, for every 2 samples:
>
> The first of the pair gets the same positive value on its respective
> sample in the first and second half. The second of the pair is then
> added to the sample in the first half, and subtracted from the sample in
> the second half.
>
> Subsample and repeat the desired number of times.
So for every pair of samples, it simply computes the sum and the difference?
On 9/15/2010 9:56 AM, Invisible wrote:
>>> Doesn't look like any kind of lowpass or highpass filter response to me.
>>> Looks more like it's rearranging the order of the samples or something.
>>>
>>
>> What it appears to be doing is, for every 2 samples:
>>
>> The first of the pair gets the same positive value on its respective
>> sample in the first and second half. The second of the pair is then
>> added to the sample in the first half, and subtracted from the sample in
>> the second half.
>>
>> Subsample and repeat the desired number of times.
>
> So for every pair of samples, it simply computes the sum and the
> difference?
Recursively, yes...
--
~Mike
>> So for every pair of samples, it simply computes the sum and the
>> difference?
>
> Recursively, yes...
Right. Well, if you say "it calculates the sum and the difference of each
pair of samples", suddenly it becomes pretty obvious what it's doing,
and why it's reversible.
It also explains why the partially reconstructed wave is the shape it is.
And it suggests that having fewer terms available is going to futz with
the high frequencies first.
(Then again, this sort of transform is probably more useful for
time-domain signals like image data rather than frequency-domain data
like sound.)
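For instance (a sketch, NumPy assumed, values made up): zero the difference
terms of one Haar pass and reconstruct, and it's the fine structure that
goes first:

import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
pairs  = x.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / 2
detail = np.zeros_like(approx)           # discard the high-frequency terms

rebuilt = np.empty_like(x)
rebuilt[0::2] = approx + detail
rebuilt[1::2] = approx - detail

print(rebuilt)   # [ 5.  5. 11. 11.  7.  7.  4.  4.] -- each pair collapses
                 # to its average; only the fine detail got futzed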
On 9/15/2010 10:44 AM, Invisible wrote:
>>> So for every pair of samples, it simply computes the sum and the
>>> difference?
>>
>> Recursively, yes...
>
> Right. Well, if you say "it calculates the sum and the difference of each
> pair of samples", suddenly it becomes pretty obvious what it's doing,
> and why it's reversible.
>
> It also explains why the partially reconstructed wave is the shape it is.
> And it suggests that having fewer terms available is going to futz with
> the high frequencies first.
>
> (Then again, this sort of transform is probably more useful for
> time-domain signals like image data rather than frequency-domain data
> like sound.)
Right! But the trick also gives better frequency resolution at lower
frequencies, simply because of the subsampling that takes place.
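Something like this (my sketch, NumPy assumed): each pass re-transforms only
the low half, so each successive detail band covers half the bandwidth of
the one before it.

import numpy as np

def haar_decompose(x, levels):
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    for _ in range(levels):
        pairs = out[:n].reshape(-1, 2)
        sums  = pairs[:, 0] + pairs[:, 1]
        diffs = pairs[:, 0] - pairs[:, 1]
        out[:n // 2]  = sums    # low half: transformed again next pass
        out[n // 2:n] = diffs   # this level's detail band is now fixed
        n //= 2
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])
print(haar_decompose(x, 3))   # [54. 10. -12.  6. -2. -2.  2.  2.]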
--
~Mike
Invisible wrote:
> Most transform coding methods work not by *discarding* points, but by
> *quantising* them according to how "important" they are deemed to be.
IIRC, normal JPEG actually not only quantizes the samples but can also
set some of them strictly to zero. I.e., all you need to do is quantize
with sufficiently coarse resolution such that every possible value falls
into the same "zero" bucket and Bob's your uncle.
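Something like this, I'd guess (sketch only, NumPy assumed, coefficients
made up; real JPEG uses a per-frequency quantization table rather than one
step size):

import numpy as np

coeffs = np.array([54.0, 10.0, -12.0, 6.0, -2.0, -2.0, 2.0, 2.0])
step = 5.0                               # one coarse step size for the sketch

# Anything smaller than half a step lands in the zero bucket.
quantized = np.round(coeffs / step) * step
print(quantized)   # [ 55.  10. -10.   5.  -0.  -0.   0.   0.]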
--
Darren New, San Diego CA, USA (PST)
Quoth the raven:
Need S'Mores!
>> Most transform coding methods work not by *discarding* points, but by
>> *quantising* them according to how "important" they are deemed to be.
>
> IIRC, normal JPEG actually not only quantizes the samples but can also
> set some of them strictly to zero. I.e., all you need to do is quantize
> with sufficiently coarse resolution such that every possible value falls
> into the same "zero" bucket and Bob's your uncle.
Yes, that's true. But quantising is strictly more general than just
discarding points.