POV-Ray : Newsgroups : povray.off-topic : DFT and FFT
From: Invisible
Subject: Re: ax^2 + bx + c = 0
Date: 19 Jan 2009 09:01:15
Message: <497487ab$1@news.povray.org>
scott wrote:
>> Unless my eyes deceive me, this is equivalent:
>>
>> J = 2a^3 - 9ab + 26c
>> K = Sqrt(J^2 + 4(-a^2 + 3b)^3)
>> M = (1 + i Sqrt(3))/2
>> N = (-1 + i Sqrt(3))/2
>>
>> r1 = -(a/3) +   Cbrt((-J + K)/54) +   Cbrt((-J - K)/54)
>> r2 = -(a/3) + M Cbrt((-J + K)/54) + N Cbrt((-J - K)/54)
>> r3 = -(a/3) + N Cbrt((-J + K)/54) + M Cbrt((-J - K)/54)
>>
>> While hardly on the same level as the quadratic, it's not "that" hard.
> 
> Did you try to derive those equations from ax^3+bx^2+cx+d=0 ?? :-)

No - because it solves the *monic* equation x^3 + ax^2 + bx + c = 0. :-P 
(Notice the absence of a "d" in any of the equations above.)

I'm surprised nobody has yet pointed out that it's "+ 27c" and not "+ 26c".
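With that correction, the formulas can be sketched in Python. (A hedged sketch, not anyone's posted code: it uses the corrected 27c, takes M and N as the standard complex cube roots of unity, and picks the branch of the second cube root so that the product of the two Cbrt terms equals (a^2 - 3b)/9, as Cardano's method requires.)

```python
import cmath

def cubic_roots(a, b, c):
    # Roots of the monic cubic x^3 + a*x^2 + b*x + c = 0 (Cardano).
    J = 2*a**3 - 9*a*b + 27*c                    # note: 27c
    K = cmath.sqrt(complex(J*J + 4*(3*b - a*a)**3))
    u = ((-J + K) / 54) ** (1/3)                 # first Cbrt term
    p = b - a*a/3
    # Choose the second cube root so that u*v == -p/3; taking two
    # independent principal cube roots can pair the branches wrongly.
    v = -p / (3*u) if abs(u) > 1e-12 else ((-J - K) / 54) ** (1/3)
    M = (-1 + cmath.sqrt(-3)) / 2                # cube roots of unity
    N = (-1 - cmath.sqrt(-3)) / 2
    return [u + v - a/3, M*u + N*v - a/3, N*u + M*v - a/3]
```

For x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) this returns the roots 1, 2 and 3 (up to rounding, and in no particular order).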



From: Invisible
Subject: Re: DFT and FFT
Date: 19 Jan 2009 09:37:19
Message: <4974901f$1@news.povray.org>
triple_r wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>> I still can't figure out what a "tensor" is. :-P
> 
> scalar = tensor of rank 0
> vector = tensor of rank 1

So... a rank-0 tensor is just a number, a rank-1 tensor is a 1D grid of 
numbers, a rank-2 tensor is a 2D grid of numbers, and a rank-3 tensor 
is... what? A 3D grid of numbers?

I guess that's simple enough. But how do you do arithmetic with them? ;-)
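If it helps to make the "grid of numbers" picture concrete, here's a sketch with numpy (with the caveat that a numpy array is only the grid of components - a true tensor also carries rules for how those components transform under a change of coordinates):

```python
import numpy as np

scalar = np.array(5.0)                  # rank 0: just a number
vector = np.array([1.0, 2.0, 3.0])      # rank 1: a 1D grid
matrix = np.eye(3)                      # rank 2: a 2D grid
cube   = np.zeros((3, 3, 3))            # rank 3: a 3D grid

print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)   # 0 1 2 3

# One basic piece of tensor arithmetic is contraction (summing over a
# shared index); matrix-vector multiplication is the rank-2/rank-1 case.
print(np.tensordot(matrix, vector, axes=1))               # [1. 2. 3.]
```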



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 19 Jan 2009 11:42:20
Message: <4974ad6c$1@news.povray.org>
Orchid XP v8 wrote:

> Recall that the convolution of a signal by some kernel is *longer* than 
> the original signal. This is the effect you have to repeat when doing 
> FFT convolution. (The details escape me at this exact moment... Suffice 
> it to say that your inverse-FFT results have to be overlapped to get a 
> smooth result.)
> 

Looking at dspguide.com (the first site that actually got me to make a 
bit of sense out of FFT in the first place) it seems the overlap is 
quite simple. What to overlap seems to be the length of the filter 
kernel... But, how do you determine that if, say ... you present the 
user with an interface that allows them to create a graph of which 
frequencies to pass...?

I suppose this is where you'd want to use a Blackman window to clean up 
the edges of the kernel before applying it to the input data.


-- 
~Mike



From: Invisible
Subject: Re: DFT and FFT
Date: 19 Jan 2009 12:01:47
Message: <4974b1fb$1@news.povray.org>
Mike Raiford wrote:

> Looking at dspguide.com (the first site that actually got me to make a 
> bit of sense out of FFT in the first place) it seems the overlap is 
> quite simple.

dspguide.com is the resource I used too - and yes, it's really very good, 
IMHO. The stuff about IIR filter design is quite fun. ;-)

> What to overlap seems to be the length of the filter 
> kernel... But, how do you determine that if, say ... you present the 
> user with an interface that allows them to create a graph of which 
> frequencies to pass...?

If you have an N-point filter kernel, you can take X samples of input, 
add on N-1 zeros at the end, FFT, multiply, inverse-FFT, overlap by N-1 
samples with the previous window. This should produce the same results 
as doing a normal convolution.

Using a larger N gives you more precise control over the frequency 
response (because there are more points in it).
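That recipe is easy to verify numerically. (A sketch - the 64-sample block and 9-point kernel here are arbitrary choices of mine:)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)     # one block of input signal
h = rng.standard_normal(9)      # an N = 9 point filter kernel

n = len(x) + len(h) - 1         # pad both out to X + N - 1 samples
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

# Identical to direct convolution; the output is N - 1 samples longer
# than the input block, and that tail is what overlaps the next block.
assert np.allclose(y_fft, np.convolve(x, h))
```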

> I suppose this is where you'd want to use a Blackman window to clean up 
> the edges of the kernel before applying it to the input data.

You know how if you JPEG-compress something too much, you get ghosting? 
Well if you aren't careful with your filter kernel, your frequency 
response ends up with ghosting. The Blackman window is a way to try to 
get rid of that. (By blurring the whole thing, unfortunately.)



From: Darren New
Subject: Re: DFT and FFT
Date: 19 Jan 2009 12:54:39
Message: <4974be5f$1@news.povray.org>
Invisible wrote:
> So... a rank-0 tensor is just a number, a rank-1 tensor is a 1D grid of 
> numbers, a rank-2 tensor is a 2D grid of numbers, and a rank-3 tensor 
> is... what? A 3D grid of numbers?
> 
> I guess that's simple enough. How how do you do arithmetic with them? ;-)

I think they actually have to be vectors, don't they? Not just pairs of numbers?

-- 
   Darren New, San Diego CA, USA (PST)
   Why is there a chainsaw in DOOM?
   There aren't any trees on Mars.



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 19 Jan 2009 14:13:28
Message: <4974d0d8@news.povray.org>
Invisible wrote:

> You know how if you JPEG-compress something too much, you get ghosting? 
> Well if you aren't careful with your filter kernel, your frequency 
> response ends up with ghosting. The Blackman window is a way to try to 
> get rid of that. (By blurring the whole thing, unfortunately.)

According to the book, my thinking was sound. You take an iFFT of the 
arbitrary filter, apply a Hamming or Blackman window to the shifted and 
zero-padded filter kernel. Then you execute the filter by convolution 
or FFT convolution, depending on the size of the filter kernel you've 
generated. Bigger means FFT is more advantageous.

That's for FIR filters.

Now I'm reading about IIR filters and ... they're simple to code, but a 
bear to design...

-- 
~Mike



From: Orchid XP v8
Subject: Re: DFT and FFT
Date: 19 Jan 2009 14:21:29
Message: <4974d2b9$1@news.povray.org>
Mike Raiford wrote:

> Now I'm reading about IIR filters and ... they're simple to code, but a 
> bear to design...

Exactly. But great for simulating old analogue subtractive synthesizers. ;-)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Invisible
Subject: Re: DFT and FFT
Date: 20 Jan 2009 08:16:45
Message: <4975cebd$1@news.povray.org>
Mike Raiford wrote:

> According to the book, my thinking was sound. You take an iFFT of the 
> arbitrary filter, apply a Hamming or Blackman window to the shifted and 
> zero-padded filter kernel. Then you execute the filter by convolution 
> or FFT convolution, depending on the size of the filter kernel you've 
> generated. Bigger means FFT is more advantageous.
> 
> That's for FIR filters.

Not quite...

Let's say you want to build a band-pass filter. It passes 100 Hz 
unchanged, but it filters out everything else. A "perfect" version of 
this filter is very simple - and very impossible. Its impulse response 
would simply be a 100 Hz sinewave extending an infinite distance into 
the past and into the future.

It should be completely obvious that no real-world filter can 
possibly react to inputs that haven't happened yet - but that is what 
this design calls for. (It is a "non-causal filter".)

Anyway, if you build an FIR by simply taking that infinite sinewave and 
chopping the ends off to make it finite, you've just truncated the 
impulse response. And that distorts the frequency response. The longer 
the IR is, the less the distortion - and the more arithmetic your 
digital filter is going to have to do when you run it.

If, instead of just chopping the ends off, you use a Blackman window... 
Well, you're multiplying the IR by the window. Multiplication in the 
time domain is convolution in the frequency domain. The frequency 
spectrum of a Blackman window is such that this convolution is 
essentially a blurring operation. It blurs out the small ripples, 
without destroying the peak at 100 Hz which we'd like to keep. (It does 
widen it slightly though.)
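You can watch this happen numerically. (A sketch; the 16.5-cycle frequency is deliberately off bin-centre, which is the worst case for plain truncation:)

```python
import numpy as np

n = 256
t = np.arange(n)
sine = np.sin(2 * np.pi * 16.5 * t / n)    # truncated = rectangular window

spec_rect  = np.abs(np.fft.rfft(sine))                    # chopped ends
spec_black = np.abs(np.fft.rfft(sine * np.blackman(n)))   # tapered ends

# Relative leakage far away from the 16.5-bin peak:
leak_rect  = spec_rect[60:].max()  / spec_rect.max()
leak_black = spec_black[60:].max() / spec_black.max()
# leak_black comes out orders of magnitude below leak_rect - at the cost
# of the Blackman-windowed peak being a few bins wider.
```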

Anyway, coming back to general filter design...

- You figure out what frequency response you want, and how many points 
you're going to use. (Note that before and after the inverse-FFT, the 
number of "points" remains the same.)

- Obviously more points give you more control over the frequency 
response; you've got more control points to move. It also means less 
rippling between the centers of those control points.

- Take your desired frequency response. Take the inverse-FFT. Now 
multiply that by your chosen windowing function.

- You now have your filter kernel.
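Those steps, sketched in numpy (the function name `design_fir` and the default Blackman window are my choices; the desired response is sampled as n/2 + 1 magnitude points from 0 Hz to Nyquist):

```python
import numpy as np

def design_fir(desired_mag, window=np.blackman):
    # desired_mag: desired magnitude response at n//2 + 1 frequency
    # points (0 Hz .. Nyquist), giving an n-point filter kernel.
    n = 2 * (len(desired_mag) - 1)
    h = np.fft.irfft(np.asarray(desired_mag, dtype=float), n)
    h = np.roll(h, n // 2)          # centre the impulse response...
    return h * window(n)            # ...then taper it with the window

# Example: a crude lowpass - pass the first 17 of 65 frequency points.
desired = np.zeros(65)
desired[:17] = 1.0
kernel = design_fir(desired)        # a 128-point filter kernel
```

Taking `np.abs(np.fft.rfft(kernel))` then shows a response near 1 deep in the passband and tiny deep in the stopband, with the transition blurred by the window, as described above.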

Notice there's no zero-padding here. (I think you're possibly confused 
by the guide's suggestion that you can zero-pad the IR before taking an 
FFT in order to get better frequency resolution so you can see what your 
filter's "real" frequency response is - this does not exactly match what 
you originally requested because the IR is finite.)

Anyway, you can now "run" the filter in two ways: Take the convolution 
of the kernel and your signal, or do an FFT-convolution.

To do an FFT-convolution, you want the filter's frequency response (not 
its impulse response). Chop your input signal into blocks, zero-pad each 
block, FFT, multiply, inverse-FFT, overlap consecutive blocks to build 
an output signal. You're done.

(It strikes me that if you were trying to do this "real time", the 
blocking would necessarily add some latency to the system that a 
convolution implementation wouldn't have...)



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 20 Jan 2009 09:12:43
Message: <4975dbdb$1@news.povray.org>
Invisible wrote:
> Mike Raiford wrote:
> 
>> According to the book, my thinking was sound. You take an iFFT of the 
>> arbitrary filter, apply a Hamming or Blackman window to the shifted 
>> and zero-padded filter kernel. Then you execute the filter by 
>> convolution or FFT convolution, depending on the size of the filter 
>> kernel you've generated. Bigger means FFT is more advantageous.
>>
>> That's for FIR filters.
> 
> Not quite...
> 
> Let's say you want to build a band-pass filter. It passes 100 Hz 
> unchanged, but it filters out everything else. A "perfect" version of 
> this filter is very simple - and very impossible. Its impulse response 
> would simply be a 100 Hz sinewave extending an infinite distance into 
> the past and into the future.

Right, no way that could ever work.

> It should be completely obvious that no real-world filter can 
> possibly react to inputs that haven't happened yet - but that is what 
> this design calls for. (It is a "non-causal filter".)
> 
> Anyway, if you build an FIR by simply taking that infinite sinewave and 
> chopping the ends off to make it finite, you've just truncated the 
> impulse response. And that distorts the frequency response. The longer 
> the IR is, the less the distortion - and the more arithmetic your 
> digital filter is going to have to do when you run it.
> 
> If, instead of just chopping the ends off, you use a Blackman window... 
> Well, you're multiplying the IR by the window. Multiplication in the 
> time domain is convolution in the frequency domain. The frequency 
> spectrum of a Blackman window is such that this convolution is 
> essentially a blurring operation. It blurs out the small ripples, 
> without destroying the peak at 100 Hz which we'd like to keep. (It does 
> widen it slightly though.)
> 

Right... once you've chosen the frequencies to pass, you can then obtain 
the impulse response by taking an iFFT and applying the window. Depending 
on the number of points, the filter's frequency response will be an 
approximation of the desired frequency response.

> Anyway, coming back to general filter design...
> 
> - You figure out what frequency response you want, and how many points 
> you're going to use. (Note that before and after the inverse-FFT, the 
> number of "points" remains the same.)
> 
> - Obviously more points give you more control over the frequency 
> response; you've got more control points to move. It also means less 
> rippling between the centers of those control points.
> 
> - Take your desired frequency response. Take the inverse-FFT. Now 
> multiply that by your chosen windowing function.
> 
> - You now have your filter kernel.
> 
> Notice there's no zero-padding here. (I think you're possibly confused 
> by the guide's suggestion that you can zero-pad the IR before taking an 
> FFT in order to get better frequency resolution so you can see what your 
> filter's "real" frequency response is - this does not exactly match what 
> you originally requested because the IR is finite.)

You'd want to zero pad the kernel for FFT convolution though, right?

> Anyway, you can now "run" the filter in two ways: Take the convolution 
> of the kernel and your signal, or do an FFT-convolution.
> 
> To do an FFT-convolution, you want the filter's frequency response (not 
> its impulse response). Chop your input signal into blocks, zero-pad each 
> block, FFT, multiply, inverse-FFT, overlap consecutive blocks to build 
> an output signal. You're done.
> 
> (It strikes me that if you were trying to do this "real time", the 
> blocking would necessarily add some latency to the system that a 
> convolution implementation wouldn't have...)

Yeah, there would be a bit of latency I would expect.

FWIW, a long time ago I fooled around with an emulation project for the 
MT-32 sound module, which has a resonant low-pass filter.  With a few 
multiplies and a few adds per cycle, it would filter in realtime pretty 
easily. I didn't understand the theory behind it, but the code looked 
deceptively simple... Until I found the code for calculating the 
coefficients. I found the same filter code on Gamedev.net. From the 
looks of it, it's simply a 6-pole Chebyshev filter. The coefficients 
were generated by what appeared to be the z-transform you were 
discussing earlier. The resonance parameter is apparently how much 
ripple was allowed in the passband. More ripple = bigger spike at the 
cut-off frequency.

Other interesting things about the module itself: It's essentially 
subtractive synthesis combined with PCM samples. Each patch could 
contain up to 4 partials: 2 pairs. Each pair had a ring modulator. If 
one side of the modulator was unconnected (no partial) it would simply 
generate white noise. IIRC, it had a few different waveforms, including 
the PCM "waveform."

-- 
~Mike



From: Invisible
Subject: Re: DFT and FFT
Date: 20 Jan 2009 09:50:25
Message: <4975e4b1@news.povray.org>
>> Let's say you want to build a band-pass filter. It passes 100 Hz 
>> unchanged, but it filters out everything else. A "perfect" version of 
>> this filter is very simple - and very impossible. Its impulse response 
>> would simply be a 100 Hz sinewave extending an infinite distance into 
>> the past and into the future.
> 
> Right, no way that could ever work.

Indeed.

(Although you *can* apply that kind of thing in the weird world of 
mathematics. If you have a mathematical formula that describes your 
entire signal - past, present and future - then you really *can* pass it 
through a "perfect" filter like this. That's the magic of integral 
calculus. But I digress...)

> Right... once you've chosen the frequencies to pass, you can then obtain 
> the impulse response by taking an iFFT and applying the window.

Yes.

> Depending on 
> the number of points, the filter's frequency response will be an 
> approximation of the desired frequency response.

Yes.

In general, if you ask for a frequency response that changes smoothly, 
you'll get a close approximation. If you ask for a frequency response 
with sharp discontinuities in it, you'll get ripples. (The inverse-DFT 
is almost identical to the forward-DFT, remember.)

A lowpass filter consists of a straight line from 0 Hz to the cutoff 
frequency, a sharp discontinuity from 100% to 0%, and then a straight 
line from there onwards. Hence the impossibility of implementing it 
"perfectly". Most other "standard" filter types have abrupt changes like 
this too.

>> Notice there's no zero-padding here. (I think you're possibly confused 
>> by the guide's suggestion that you can zero-pad the IR before taking 
>> an FFT in order to get better frequency resolution so you can see what 
>> your filter's "real" frequency response is - this does not exactly 
>> match what you originally requested because the IR is finite.)
> 
> You'd want to zero pad the kernel for FFT convolution though, right?

Checking the documentation carefully, it appears that the procedure is thus:

   To FFT-convolve a kernel containing X samples with a signal block 
containing Y samples, pad *both* the kernel and the signal block to 
X+Y-1 samples long, FFT to the frequency domain, multiply, inverse-FFT 
back to the time domain again. Your new output signal block is larger 
than the block you started with, so overlap it with its siblings.

So yes, you do pad the kernel as well as the input signal blocks. (The 
amount of padding depends on the size of block you chop the signal 
into.) I hypothesize that small blocks = FFT isn't much faster, large 
blocks = you have to do lots of multiplies, so there's some sweet spot 
somewhere in the middle for best performance.
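That procedure, as a sketch (the function name, the block size, and the use of `np.convolve` as the reference answer are mine):

```python
import numpy as np

def overlap_add(signal, kernel, block=256):
    # Chop the signal into blocks; pad each block *and* the kernel to
    # block + len(kernel) - 1 samples; FFT, multiply, inverse-FFT; then
    # overlap each output block's tail with the start of the next one.
    n = block + len(kernel) - 1
    H = np.fft.rfft(kernel, n)             # kernel spectrum (padded once)
    out = np.zeros(len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), block):
        y = np.fft.irfft(np.fft.rfft(signal[start:start + block], n) * H, n)
        end = min(start + n, len(out))
        out[start:end] += y[:end - start]  # the overlapping tails add up
    return out

rng = np.random.default_rng(1)
sig = rng.standard_normal(1000)
ker = rng.standard_normal(31)
assert np.allclose(overlap_add(sig, ker), np.convolve(sig, ker))
```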

>> (It strikes me that if you were trying to do this "real time", the 
>> blocking would necessarily add some latency to the system that a 
>> convolution implementation wouldn't have...)
> 
> Yeah, there would be a bit of latency I would expect.

...whereas if you just do a 500-point convolution, there will be 500 
samples worth of latency. No more, no less.

> FWIW, a long time ago I fooled around with an emulation project for the 
> MT-32 sound module, which has a resonant low-pass filter.  With a few 
> multiplies and a few adds per cycle, it would filter in realtime pretty 
> easily.

That's the magic of IIR. It's *very* compute-efficient.

> I didn't understand the theory behind it, but the code looked 
> deceptively simple... Until I found the code for calculating the 
> coefficients.

Uh, yes. ;-)

> I found the same filter code on Gamedev.net. From the 
> looks of it, it's simply a 6-pole Chebyshev filter. The coefficients 
> were generated by what appeared to be the z-transform you were 
> discussing earlier.

Precisely.

The math isn't actually "hard" - again it's a few trig functions and a 
couple of multiplies - it's just very "fiddly" to implement correctly. 
There are lots of opportunities to screw it up.
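To make "a few trig functions and a couple of multiplies" concrete, here's a sketch of a 2-pole resonant lowpass in the common "biquad" formulation (my choice of recipe - not necessarily the book's or the MT-32's exact coefficients; raising q above 0.707 puts a resonant peak at the cutoff):

```python
import math

def lowpass_coeffs(fc, fs, q=0.707):
    # A few trig functions to turn (cutoff, sample rate, resonance)
    # into feed-forward (b) and feedback (a) coefficients.
    w = 2 * math.pi * fc / fs
    alpha = math.sin(w) / (2 * q)
    cw = math.cos(w)
    a0 = 1 + alpha
    b = [(1 - cw) / 2 / a0, (1 - cw) / a0, (1 - cw) / 2 / a0]
    a = [1.0, -2 * cw / a0, (1 - alpha) / a0]
    return b, a

def iir_filter(x, b, a):
    # Running it: five multiplies and four adds per output sample.
    y = []
    for n in range(len(x)):
        s = sum(b[k] * x[n - k] for k in range(3) if n >= k)
        s -= sum(a[k] * y[n - k] for k in (1, 2) if n >= k)
        y.append(s)
    return y
```

Feed it DC and it settles at unity gain; feed it a signal at Nyquist and the output dies away to nothing - while all the transient behaviour (and the sensitivity to rounding) lives in the feedback terms.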

> The resonance parameter is apparently how much 
> ripple was allowed in the passband. More ripple = bigger spike at the 
> cut-off frequency.

Indeed.

Actually - when you get that far - these IIR filters are designed by 
looking at the complex plane. You position "poles" and "zeros", and 
these define a surface. The vertical slice through the origin (the 
imaginary axis of the s-plane) is the frequency response of the filter. 
So it's like having a giant 
elastic sheet, and moving these poles and zeros around to bend the part 
of the sheet crossing that line into the shape you want.

A Chebyshev low-pass filter starts from poles arranged in a semicircle 
(which, left undeformed, gives a Butterworth filter). The resonance comes 
from squashing that circle into an ellipse. As you 
do so, the poles get nearer to the line, and the "ripples" you see are 
actually the edges of the peak surrounding each pole. I'll draw you a 
graph if you like... ;-)

The other fun thing is, the mathematical analysis may show you that an 
IIR filter will do one thing, but when you actually implement it 
something totally different happens due to rounding errors! IIR filters 
work using feedback, and feedback magnifies rounding errors. (FIR 
filters by contrast are pretty much unaffected by them, because there's 
no feedback.)

> Other interesting things about the module itself: It's essentially 
> subtractive synthesis combined with PCM samples. Each patch could 
> contain up to 4 partials: 2 pairs. Each pair had a ring modulator. If 
> one side of the modulator was unconnected (no partial) it would simply 
> generate white noise. IIRC, it had a few different waveforms, including 
> the PCM "waveform."

Sounds pretty interesting.

Personally I'd love to implement something like that myself - but I 
don't have access to anything that can talk to the sound hardware. :-(

(Haskell has a binding for libSDL - but it doesn't work on Windoze.)




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.