POV-Ray : Newsgroups : povray.off-topic : Wavelet
  Wavelet (Message 1 to 10 of 12)
From: Mike Raiford
Subject: Wavelet
Date: 14 Sep 2010 08:07:49
Message: <4c8f6595$1@news.povray.org>
A couple demos:

http://math.hws.edu/eck/math371/applets/Haar.html

This is what's used in jpeg compression...

And is rather clever, when you think about it.

Take a signal, repeatedly break it up into pairs of low and high 
frequencies, subsample those segments, and place them back into the buffer.

Now, once that has been completed, discard the samples that have the 
lowest amplitudes.

Rebuild the signal with the remaining samples. You've dropped a ton of 
data, but still have a very close approximation of the original data.
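That pipeline can be sketched in a few lines of Python (a minimal sketch, assuming a Haar-style average/difference transform and a power-of-two signal length; the function names are my own, not the applet's):

```python
def haar_forward(x):
    """Split into pairwise averages (low) and differences (high),
    recursing on the low half. Returns the overall average followed
    by the detail coefficients, coarse to fine."""
    x = list(x)
    details = []
    while len(x) > 1:
        low = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
        high = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
        details = high + details
        x = low
    return x + details

def haar_inverse(c):
    """Undo haar_forward: average + difference recovers each pair."""
    x = c[:1]
    pos = 1
    while pos < len(c):
        high = c[pos:pos + len(x)]
        x = [v for a, d in zip(x, high) for v in (a + d, a - d)]
        pos += len(high)
    return x

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    c = haar_forward(x)
    thresh = sorted(map(abs, c), reverse=True)[keep - 1]
    return [v if abs(v) >= thresh else 0.0 for v in c]
```

Keeping only the 2 largest of the 4 coefficients of [4, 2, 5, 7] reconstructs to [3, 3, 6, 6]: the coarse shape survives, and it's the fine detail that gets dropped first.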

Another use of wavelets: Fourier analysis of signals.

Break the signals down using wavelets, then take each section of the 
decomposition, and run a Fourier transform on each section. The lower 
frequencies will be pushed into the higher bands, allowing for higher 
frequency resolution of the lower frequencies.

(Typically, with a Fourier transform low frequencies don't have very 
good resolution, since the progression of frequencies is as follows:

1x 2x 3x 4x 5x 6x 7x 8x 9x ...

If the frequency band you're interested in is close to the fundamental 
of the transform, you have huge jumps in frequency. Take the first two 
samples: the fundamental and double the fundamental, with nothing in 
between. Move that to the right:

32x 33x 34x 35x 36x 37x 38x 39x ...

Now the step between the fundamental at 32x and the next sample up is 
only a factor of 1.03, so you're looking at samples with this sort of 
relative stepping:

1x 1.03x 1.06x 1.09x 1.12x 1.15x ...
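A quick arithmetic check of that stepping (`bin_step` is a hypothetical helper, just computing the frequency ratio of adjacent bins):

```python
# Adjacent FFT bins k and k+1 differ in frequency by a factor of (k+1)/k.
def bin_step(k):
    return (k + 1) / k

print(bin_step(1))   # 2.0     -> a full octave between the first two bins
print(bin_step(32))  # 1.03125 -> roughly the 1.03x step quoted above
```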

Much more precision in the frequency domain. The time domain, 
obviously, will suffer for these lower frequencies. You can't pinpoint as 
accurately where they are in time, due to the slowness of their 
oscillation. There are ways, I think, of getting around this, but I 
haven't gotten that far in my understanding.

-- 
~Mike



From: Invisible
Subject: Re: Wavelet
Date: 14 Sep 2010 09:40:13
Message: <4c8f7b3d$1@news.povray.org>
On 14/09/2010 01:04 PM, Mike Raiford wrote:
> A couple demos:
>
> http://math.hws.edu/eck/math371/applets/Haar.html
>
> This is what's used in jpeg compression...

No, I'm fairly sure JPEG uses the Discrete Cosine Transform. Now 
JPEG2000 really *does* use some manner of wavelet transform (although I 
don't remember which one).

> And is rather clever, when you think about it.
>
> Take a signal, repeatedly break it up into pairs of low and high
> frequencies, subsample those segments, and place them back into the buffer.

I'm not at all clear on exactly how it does that.

> Now, once that has been completed, discard the samples that have the
> lowest amplitudes.
>
> Rebuild the signal with the remaining samples. You've dropped a ton of
> data, but still have a very close approximation of the original data.

Most transform coding methods work not by *discarding* points, but by 
*quantising* them according to how "important" they are deemed to be. 
This one seems a little unusual in that respect.

> Another use of wavelets: Fourier analysis of signals.

It's not Fourier analysis if it's not a Fourier transform. ;-)

In fact, you can apparently decompose any function into the sum of a 
suitable set of basis functions. (I believe that the set of basis 
functions is required to be "orthogonal" and "complete", but beyond that 
they can be anything.) Depending on what basis functions you choose, 
discarding or quantising points will distort the reconstructed 
function in different ways. The trick is to find a good set of basis 
functions for the problem in question.

(E.g., for audio data, sine waves are the obvious candidate. The only 
question is whether they should be finite or not. For image data, the 
solution is less obvious.)

> Much more precision in the frequency domain. The time domain, obviously
> will suffer for these lower frequencies. You can't pinpoint as
> accurately where they are in time, due to the slowness of their
> oscillation. There are ways, I think, of getting around this, but I
> haven't gotten that far in my understanding.

Apparently there's a limit to how much precision you can get in time and 
frequency. Increasing the resolution of one necessarily decreases the 
resolution of the other. This is apparently due to the Heisenberg 
uncertainty principle. (Which is interesting, since I thought that 
applies only to quantum mechanics, not to general mathematical phenomena...)



From: Invisible
Subject: Re: Wavelet
Date: 14 Sep 2010 10:00:59
Message: <4c8f801b$1@news.povray.org>
>> A couple demos:
>>
>> http://math.hws.edu/eck/math371/applets/Haar.html
>>
>> This is what's used in jpeg compression...
>
> No, I'm fairly sure JPEG uses the Discrete Cosine Transform.

http://en.wikipedia.org/wiki/Jpeg#JPEG_codec_example

Nice to know I'm right for once. ;-)

> Now
> JPEG2000 really *does* use some manner of wavelet transform (although I
> don't remember which one).

http://en.wikipedia.org/wiki/JPEG_2000#Technical_discussion

It's the Discrete Wavelet Transform (DWT).



From: Mike Raiford
Subject: Re: Wavelet
Date: 14 Sep 2010 10:27:12
Message: <4c8f8640$1@news.povray.org>
On 9/14/2010 8:40 AM, Invisible wrote:

> No, I'm fairly sure JPEG uses the Discrete Cosine Transform. Now
> JPEG2000 really *does* use some manner of wavelet transform (although I
> don't remember which one).

Ah.. You're right, it is JPEG2000.

>> And is rather clever, when you think about it.
>>
>> Take a signal, repeatedly break it up into pairs of low and high
>> frequencies, subsample those segments, and place them back into the
>> buffer.
>
> I'm not at all clear on exactly how it does that.
>

Using a pair of filters, low pass and high pass, in a bank:

You filter out the low frequencies; that becomes the first stage, which 
is downsampled by 2.

Let's see, this is a nice diagram:

http://en.wikipedia.org/wiki/Discrete_wavelet_transform#Cascading_and_Filter_banks

Apparently, once the filters are run over it, you can downsample each of 
the filtered components. The low pass can then be filtered again. How 
reducing the sample rate of the high-pass data doesn't discard data, I'm 
not sure yet.
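One analysis stage of that cascade might look like this (a sketch; it assumes the Haar pair (0.5, 0.5) and (0.5, -0.5) as the simplest possible lowpass/highpass kernels, and real wavelets just use longer ones):

```python
# One stage of the analysis filter bank: convolve the input with a
# lowpass and a highpass kernel, then downsample each output by 2.
def analysis_stage(x, lo=(0.5, 0.5), hi=(0.5, -0.5)):
    def filter_and_downsample(h):
        # FIR filter y[n] = h[0]*x[n] + h[1]*x[n-1], keeping odd n only
        return [h[0] * x[n] + h[1] * x[n - 1] for n in range(1, len(x), 2)]
    return filter_and_downsample(lo), filter_and_downsample(hi)

low, high = analysis_stage([4.0, 2.0, 5.0, 7.0])
print(low, high)   # [3.0, 6.0] [-1.0, 1.0]
# Cascade: feed `low` back into analysis_stage() for the next level.
```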

>
> Most transform coding methods work not by *discarding* points, but by
> *quantising* them according to how "important" they are deemed to be.
> This one seems a little unusual in that respect.

Right. *But* you can drop every other sample because of the Nyquist limit!

>
>> Another use of wavelets: Fourier analysis of signals.
>
> It's not Fourier analysis if it's not a Fourier transform. ;-)
>

ERRRR... Frequency domain analysis. In the end you're doing Fourier 
transforms on each portion you've separated out in the decomposition.

> Apparently there's a limit to how much precision you can get in time and
> frequency. Increasing the resolution of one necessarily decreases the
> resolution of the other. This is apparently due to the Heisenberg
> uncertainty principle. (Which is interesting, since I thought that
> applies only to quantum mechanics, not to general mathematical
> phenomena...)

Interesting. It makes sense, though. Low frequencies change relatively 
little in the time domain. You can tell when it occurs, when exactly a 
peak is, but not how it changes if delta-t is less than the period of 
the wave.

-- 
~Mike



From: Invisible
Subject: Re: Wavelet
Date: 14 Sep 2010 10:44:29
Message: <4c8f8a4d$1@news.povray.org>
>>> Take a signal, repeatedly break it up into pairs of low and high
>>> frequencies, subsample those segments, and place them back into the
>>> buffer.
>>
>> I'm not at all clear on exactly how it does that.
>
> Using a pair of filters, low pass and high pass in a bank,

Doesn't look like any kind of lowpass or highpass filter response to me. 
Looks more like it's rearranging the order of the samples or something.

> so you filter low frequencies, this becomes the first stage, which is
> downsampled by 2.
>
> Apparently, once the filters are run over it, you can downsample each of
> the filtered components. The low pass can then be filtered again. How
> reducing the sample rate of the high-pass data doesn't discard data, I'm
> not sure yet.

When downsampling, frequencies above the Nyquist limit get reflected to 
the other side of the Nyquist limit. Usually this means that the high 
frequencies collide with the low frequencies - but if you've already 
filtered out the low frequencies, all that happens is that the spectrum 
gets flipped upside-down. You can completely reverse this process by 
upsampling and then flipping the spectrum back the right way round 
again. QED.
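A quick numeric illustration of that reflection (pure Python; frequencies here are in cycles per sample, so the Nyquist limit sits at 0.5):

```python
import math

# A high-band tone at normalized frequency f (0.25 < f < 0.5) survives
# downsampling by 2, reappearing at the reflected frequency 1 - 2*f
# (in cycles per new sample): the whole band comes out mirror-imaged.
f = 0.4
tone = [math.cos(2 * math.pi * f * n) for n in range(64)]
downsampled = tone[::2]
reflected = [math.cos(2 * math.pi * (1 - 2 * f) * m) for m in range(32)]
# downsampled and reflected are the same signal, up to rounding error.
```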

>> Most transform coding methods work not by *discarding* points, but by
>> *quantising* them according to how "important" they are deemed to be.
>> This one seems a little unusual in that respect.
>
> Right. *But* you can drop every other sample because of the Nyquist limit!

Yes. I was just pointing out that other transform codings work in a 
rather different way.

>> Apparently there's a limit to how much precision you can get in time and
>> frequency. Increasing the resolution of one necessarily decreases the
>> resolution of the other. This is apparently due to the Heisenberg
>> uncertainty principle. (Which is interesting, since I thought that
>> applies only to quantum mechanics, not to general mathematical
>> phenomena...)
>
> Interesting. It makes sense, though. Low frequencies change relatively
> little in the time domain. You can tell when it occurs, when exactly a
> peak is, but not how it changes if delta-t is less than the period of
> the wave.

Weird, but true. And since quantum particles ARE ALSO WAVES, you start 
to understand why this might be true...



From: Mike Raiford
Subject: Re: Wavelet
Date: 15 Sep 2010 10:43:42
Message: <4c90db9e@news.povray.org>
On 9/14/2010 9:44 AM, Invisible wrote:
>>>> Take a signal, repeatedly break it up into pairs of low and high
>>>> frequencies, subsample those segments, and place them back into the
>>>> buffer.
>>>
>>> I'm not at all clear on exactly how it does that.
>>
>> Using a pair of filters, low pass and high pass in a bank,
>
> Doesn't look like any kind of lowpass or highpass filter response to me.
> Looks more like it's rearranging the order of the samples or something.
>

What it appears to be doing, for every 2 samples, is this:

The first of the pair contributes the same positive value to its 
respective sample in both the first and second half. The second of the 
pair is then added to the sample in the first half, and subtracted from 
the sample in the second half.

Subsample and repeat the desired number of times.
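In code, that per-pair step would be something like this (a sketch of the description above; `haar_step` is my name for it, not the applet's):

```python
# Sums of each input pair go into the first half of the buffer,
# differences into the second half; the transform then recurses on
# the first half only.
def haar_step(x):
    pairs = list(zip(x[0::2], x[1::2]))
    return [a + b for a, b in pairs] + [a - b for a, b in pairs]

print(haar_step([4, 2, 5, 7]))   # [6, 12, 2, -2]
```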

> When downsampling, frequencies above the Nyquist limit get reflected to
> the other side of the Nyquist limit. Usually this means that the high
> frequencies collide with the low frequencies - but if you've already
> filtered out the low frequencies, all that happens is that the spectrum
> gets flipped upside-down. You can completely reverse this process by
> upsampling and then flipping the spectrum back the right way round
> again. QED.

OK, makes sense.


> Weird, but true. And since quantum particles ARE ALSO WAVES, you start
> to understand why this might be true...

Right.

-- 
~Mike



From: Invisible
Subject: Re: Wavelet
Date: 15 Sep 2010 10:56:00
Message: <4c90de80$1@news.povray.org>
>> Doesn't look like any kind of lowpass or highpass filter response to me.
>> Looks more like it's rearranging the order of the samples or something.
>>
>
> What it appears to be doing is for every 2 samples:
>
> The first of the pair gets the same positive value on its respective
> sample in the first and second half. The second of the pair is then
> added to the sample on the first half, and subtracted from the sample on
> the second half.
>
> Subsample and repeat the desired number of times.

So for every pair of samples, it simply computes the sum and the difference?



From: Mike Raiford
Subject: Re: Wavelet
Date: 15 Sep 2010 11:19:50
Message: <4c90e416$1@news.povray.org>
On 9/15/2010 9:56 AM, Invisible wrote:
>>> Doesn't look like any kind of lowpass or highpass filter response to me.
>>> Looks more like it's rearranging the order of the samples or something.
>>>
>>
>> What it appears to be doing is for every 2 samples:
>>
>> The first of the pair gets the same positive value on its respective
>> sample in the first and second half. The second of the pair is then
>> added to the sample on the first half, and subtracted from the sample on
>> the second half.
>>
>> Subsample and repeat the desired number of times.
>
> So for every pair of samples, it simply computes the sum and the
> difference?

Recursively, yes...

-- 
~Mike



From: Invisible
Subject: Re: Wavelet
Date: 15 Sep 2010 11:44:24
Message: <4c90e9d8@news.povray.org>
>> So for every pair of samples, it simply computes the sum and the
>> difference?
>
> Recursively, yes...

Right. Well if you say "it calculates the sum and the difference of each 
pair of samples", suddenly it becomes pretty obvious what it's doing, 
and why it's reversible.

It also explains why the partial reconstructed wave is the shape it is. 
And it suggests that having fewer terms available is going to futz with 
the high frequencies first.

(Then again, this sort of transform is probably more useful for 
time-domain signals like image data rather than frequency-domain data 
like sound.)



From: Mike Raiford
Subject: Re: Wavelet
Date: 15 Sep 2010 15:02:45
Message: <4c911855$1@news.povray.org>
On 9/15/2010 10:44 AM, Invisible wrote:
>>> So for every pair of samples, it simply computes the sum and the
>>> difference?
>>
>> Recursively, yes...
>
> Right. Well if you say "it calculates the sum and the difference of each
> pair of samples", suddenly it becomes pretty obvious what it's doing,
> and why it's reversible.
>
> It also explains why the partial reconstructed wave is the shape it is.
> And it suggests that having fewer terms available is going to futz with
> the high frequencies first.
>
> (Then again, this sort of transform is probably more useful for
> time-domain signals like image data rather than frequency-domain data
> like sound.)

Right! But the trick also gives better frequency resolution at lower 
frequencies, simply because of the subsampling that takes place.

-- 
~Mike




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.