POV-Ray : Newsgroups : povray.off-topic : DFT and FFT
  DFT and FFT (Messages 31 to 40 of 55)
From: Mike Raiford
Subject: Re: DFT and FFT
Date: 19 Jan 2009 14:13:28
Message: <4974d0d8@news.povray.org>
Invisible wrote:

> You know how if you JPEG-compress something too much, you get ghosting? 
> Well if you aren't careful with your filter kernel, your frequency 
> response ends up with ghosting. The Blackman window is a way to try to 
> get rid of that. (By blurring the whole thing, unfortunately.)

According to the book, my thinking was sound. You take an iFFT of the 
arbitrary filter, apply a Hamming or Blackman window to the shifted and 
zero-padded filter kernel. Then you execute the filter by convolution 
or FFT convolution, depending on the size of the filter kernel you've 
generated. Bigger means FFT is more advantageous.

That's for FIR filters.

Now I'm reading about IIR filters and... they're simple to code, but a 
bear to design.

-- 
~Mike



From: Orchid XP v8
Subject: Re: DFT and FFT
Date: 19 Jan 2009 14:21:29
Message: <4974d2b9$1@news.povray.org>
Mike Raiford wrote:

> Now I'm reading about IIR filters and... they're simple to code, but a 
> bear to design.

Exactly. But great for simulating old analogue subtractive synthesizers. ;-)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Invisible
Subject: Re: DFT and FFT
Date: 20 Jan 2009 08:16:45
Message: <4975cebd$1@news.povray.org>
Mike Raiford wrote:

> According to the book, my thinking was sound. You take an iFFT of the 
> arbitrary filter, apply a Hamming or Blackman window to the shifted and 
> zero-padded filter kernel. Then you execute the filter by convolution 
> or FFT convolution, depending on the size of the filter kernel you've 
> generated. Bigger means FFT is more advantageous.
> 
> That's for FIR filters.

Not quite...

Let's say you want to build a band-pass filter. It passes 100 Hz 
unchanged, but it filters out everything else. A "perfect" version of 
this filter is very simple - and very impossible. Its impulse response 
would simply be a 100 Hz sine wave extending an infinite distance into 
the past and into the future.

It should be completely obvious that no real-world filter can possibly 
react to inputs that haven't happened yet - but that is what this 
design calls for. (It is a "non-causal" filter.)

Anyway, if you build an FIR by simply taking that infinite sinewave and 
chopping the ends off to make it finite, you've just truncated the 
impulse response. And that distorts the frequency response. The longer 
the IR is, the less the distortion - and the more arithmetic your 
digital filter is going to have to do when you run it.

If, instead of just chopping the ends off, you use a Blackman window... 
Well, you're multiplying the IR by the window. Multiplication in the 
time domain is convolution in the frequency domain. The frequency 
spectrum of a Blackman window is such that this convolution is 
essentially a blurring operation. It blurs out the small ripples, 
without destroying the peak at 100 Hz which we'd like to keep. (It does 
widen it slightly though.)

Anyway, coming back to general filter design...

- You figure out what frequency response you want, and how many points 
you're going to use. (Note that before and after the inverse-FFT, the 
number of "points" remains the same.)

- Obviously more points give you more control over the frequency 
response; you've got more control points to move. It also means less 
rippling between the centers of those control points.

- Take your desired frequency response. Take the inverse-FFT. Now 
multiply that by your chosen windowing function.

- You now have your filter kernel.

Notice there's no zero-padding here. (I think you're possibly confused 
by the guide's suggestion that you can zero-pad the IR before taking an 
FFT in order to get better frequency resolution so you can see what your 
filter's "real" frequency response is - this does not exactly match what 
you originally requested because the IR is finite.)
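For what it's worth, the recipe above can be sketched in a few lines of 
numpy. (The point count, sample rate and 100 Hz band edges here are 
invented purely for illustration.)

```python
import numpy as np

N = 64                       # number of points (made up for illustration)
fs = 1000                    # sample rate in Hz (also made up)

# Desired frequency response: pass a band around 100 Hz, kill the rest.
freqs = np.fft.fftfreq(N, d=1/fs)
desired = np.where(np.abs(np.abs(freqs) - 100) < 20, 1.0, 0.0)

# Inverse FFT gives the impulse response; rotate it so the kernel is
# centred, then multiply by a Blackman window.
ir = np.real(np.fft.ifft(desired))
kernel = np.roll(ir, N // 2) * np.blackman(N)
```

Taking the FFT of `kernel` again shows a (slightly widened) peak still 
sitting around 100 Hz, with the ripples smoothed out.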

Anyway, you can now "run" the filter in two ways: Take the convolution 
of the kernel and your signal, or do an FFT-convolution.

To do an FFT-convolution, you want the filter's frequency response (not 
its impulse response). Chop your input signal into blocks, zero-pad each 
block, FFT, multiply, inverse-FFT, overlap consecutive blocks to build 
an output signal. You're done.

(It strikes me that if you were trying to do this "real time", the 
blocking would necessarily add some latency to the system that a 
convolution implementation wouldn't have...)
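A minimal overlap-add sketch of that block-by-block procedure, in numpy 
(block size is arbitrary; this is an illustration, not a tuned 
implementation):

```python
import numpy as np

def fft_convolve(signal, kernel, block=256):
    # Overlap-add: chop the signal into blocks, zero-pad each block,
    # multiply spectra, inverse-FFT, and overlap the output blocks.
    X = len(kernel)
    n = block + X - 1                  # padded length of each block
    H = np.fft.rfft(kernel, n)         # kernel spectrum, computed once
    out = np.zeros(len(signal) + X - 1)
    for start in range(0, len(signal), block):
        seg = signal[start:start + block]
        y = np.fft.irfft(np.fft.rfft(seg, n) * H, n)
        m = len(seg) + X - 1           # valid part of this block's output
        out[start:start + m] += y[:m]  # overlap with the previous block
    return out
```

The result matches a direct convolution of the whole signal with the 
kernel.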



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 20 Jan 2009 09:12:43
Message: <4975dbdb$1@news.povray.org>
Invisible wrote:
> Mike Raiford wrote:
> 
>> According to the book, my thinking was sound. You take an iFFT of the 
>> arbitrary filter, apply a Hamming or Blackman window to the shifted 
>> and zero-padded filter kernel. Then you execute the filter by 
>> convolution or FFT convolution, depending on the size of the filter 
>> kernel you've generated. Bigger means FFT is more advantageous.
>>
>> That's for FIR filters.
> 
> Not quite...
> 
> Let's say you want to build a band-pass filter. It passes 100 Hz 
> unchanged, but it filters out everything else. A "perfect" version of 
> this filter is very simple - and very impossible. Its impulse response 
> would simply be a 100 Hz sine wave extending an infinite distance into 
> the past and into the future.

Right, no way that could ever work.

> It should be completely obvious that no real-world filter can possibly 
> react to inputs that haven't happened yet - but that is what this 
> design calls for. (It is a "non-causal" filter.)
> 
> Anyway, if you build an FIR by simply taking that infinite sinewave and 
> chopping the ends off to make it finite, you've just truncated the 
> impulse response. And that distorts the frequency response. The longer 
> the IR is, the less the distortion - and the more arithmetic your 
> digital filter is going to have to do when you run it.
> 
> If, instead of just chopping the ends off, you use a Blackman window... 
> Well, you're multiplying the IR by the window. Multiplication in the 
> time domain is convolution in the frequency domain. The frequency 
> spectrum of a Blackman window is such that this convolution is 
> essentially a blurring operation. It blurs out the small ripples, 
> without destroying the peak at 100 Hz which we'd like to keep. (It does 
> widen it slightly though.)
> 

Right... once you've chosen the frequencies to pass, you can then obtain 
the impulse response by taking an iFFT and applying the window. 
Depending on the number of points, the filter's frequency response will 
be an approximation of the desired frequency response.

> Anyway, coming back to general filter design...
> 
> - You figure out what frequency response you want, and how many points 
> you're going to use. (Note that before and after the inverse-FFT, the 
> number of "points" remains the same.)
> 
> - Obviously more points give you more control over the frequency 
> response; you've got more control points to move. It also means less 
> rippling between the centers of those control points.
> 
> - Take your desired frequency response. Take the inverse-FFT. Now 
> multiply that by your chosen windowing function.
> 
> - You now have your filter kernel.
> 
> Notice there's no zero-padding here. (I think you're possibly confused 
> by the guide's suggestion that you can zero-pad the IR before taking an 
> FFT in order to get better frequency resolution so you can see what your 
> filter's "real" frequency response is - this does not exactly match what 
> you originally requested because the IR is finite.)

You'd want to zero pad the kernel for FFT convolution though, right?

> Anyway, you can now "run" the filter in two ways: Take the convolution 
> of the kernel and your signal, or do an FFT-convolution.
> 
> To do an FFT-convolution, you want the filter's frequency response (not 
> its impulse response). Chop your input signal into blocks, zero-pad each 
> block, FFT, multiply, inverse-FFT, overlap consecutive blocks to build 
> an output signal. You're done.
> 
> (It strikes me that if you were trying to do this "real time", the 
> blocking would necessarily add some latency to the system that a 
> convolution implementation wouldn't have...)

Yeah, there would be a bit of latency I would expect.

FWIW, a long time ago I fooled around with an emulation project for the 
MT-32 sound module, which has a resonant low-pass filter.  With a few 
multiplies and a few adds per cycle, it would filter in realtime pretty 
easily. I didn't understand the theory behind it, but the code looked 
deceptively simple... Until I found the code for calculating the 
coefficients. I found the same filter code on Gamedev.net. From the 
looks of it, it's simply a 6-pole Chebyshev filter. The coefficients 
were generated by what appeared to be the z-transform you were 
discussing earlier. The resonance parameter is apparently how much 
ripple is allowed in the passband. More ripple = bigger spike at the 
cut-off frequency.
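For illustration, here's a resonant low-pass in that same 
"few multiplies and a few adds per cycle" spirit - a single biquad 
using the widely circulated "Audio EQ Cookbook" coefficient formulas, 
which may well differ from what the MT-32 emulator actually did:

```python
import math

def resonant_lowpass_coeffs(f0, q, fs):
    # Biquad low-pass coefficients ("Audio EQ Cookbook" form -
    # an assumption here, not the MT-32 emulator's actual code).
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)     # higher q = sharper resonance
    c = math.cos(w0)
    b = [(1 - c) / 2, 1 - c, (1 - c) / 2]
    a = [1 + alpha, -2 * c, 1 - alpha]
    return b, a

def filter_sample(x, state, b, a):
    # Direct form I: five multiplies and four adds per sample.
    x1, x2, y1, y2 = state
    y = (b[0]*x + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2) / a[0]
    return y, (x, x1, y, y1)
```

Designing the coefficients takes a little trigonometry; running the 
filter is just that one line per sample, which is why it's so cheap in 
real time.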

Other interesting things about the module itself: It's essentially 
subtractive synthesis combined with PCM samples. Each patch could 
contain up to 4 partials: 2 pairs. Each pair had a ring modulator. If 
one side of the modulator was unconnected (no partial) it would simply 
generate white noise. IIRC, it had a few different waveforms, including 
the PCM "waveform."

-- 
~Mike



From: Invisible
Subject: Re: DFT and FFT
Date: 20 Jan 2009 09:50:25
Message: <4975e4b1@news.povray.org>
>> Let's say you want to build a band-pass filter. It passes 100 Hz 
>> unchanged, but it filters out everything else. A "perfect" version of 
>> this filter is very simple - and very impossible. Its impulse response 
>> would simply be a 100 Hz sine wave extending an infinite distance into 
>> the past and into the future.
> 
> Right, no way that could ever work.

Indeed.

(Although you *can* apply that kind of thing in the weird world of 
mathematics. If you have a mathematical formula that describes your 
entire signal - past, present and future - then you really *can* pass it 
through a "perfect" filter like this. That's the magic of integral 
calculus. But I digress...)

> Right... once you've chosen the frequencies to pass, you can then obtain 
> the impulse response by taking an iFFT and applying the window.

Yes.

> Depending on the number of points, the filter's frequency response 
> will be an approximation of the desired frequency response.

Yes.

In general, if you ask for a frequency response that changes smoothly, 
you'll get a close approximation. If you ask for a frequency response 
with sharp discontinuities in it, you'll get ripples. (The inverse-DFT 
is almost identical to the forward-DFT, remember.)

A lowpass filter's frequency response consists of a straight line from 
0 Hz to the cutoff frequency, a sharp discontinuity from 100% to 0%, 
and then a straight line from there onwards. Hence the impossibility of 
implementing it "perfectly". Most other "standard" filter types have 
abrupt changes like this too.

>> Notice there's no zero-padding here. (I think you're possibly confused 
>> by the guide's suggestion that you can zero-pad the IR before taking 
>> an FFT in order to get better frequency resolution so you can see what 
>> your filter's "real" frequency response is - this does not exactly 
>> match what you originally requested because the IR is finite.)
> 
> You'd want to zero pad the kernel for FFT convolution though, right?

Checking the documentation carefully, it appears that the procedure is thus:

   To FFT-convolve a kernel containing X samples with a signal block 
containing Y samples, pad *both* the kernel and the signal block to 
X+Y-1 samples long, FFT to the frequency domain, multiply, inverse-FFT 
back to the time domain again. Your new output signal block is larger 
than the block you started with, so overlap it with its siblings.

So yes, you do pad the kernel as well as the input signal blocks. (The 
amount of padding depends on the size of block you chop the signal 
into.) I hypothesize that small blocks = FFT isn't much faster, large 
blocks = you have to do lots of multiplies, so there's some sweet spot 
somewhere in the middle for best performance.
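The padding rule is easy to verify numerically: padding *both* the 
kernel and the signal block to X+Y-1 samples makes the circular 
(FFT-based) convolution coincide exactly with the linear one. (Sizes 
here are arbitrary.)

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(100)  # signal block, Y = 100
h = np.random.default_rng(2).standard_normal(17)   # kernel, X = 17
n = len(x) + len(h) - 1                            # pad both to X + Y - 1

# FFT both (zero-padded to n), multiply, inverse-FFT back.
fast = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
assert np.allclose(fast, np.convolve(x, h))        # matches direct convolution
```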

>> (It strikes me that if you were trying to do this "real time", the 
>> blocking would necessarily add some latency to the system that a 
>> convolution implementation wouldn't have...)
> 
> Yeah, there would be a bit of latency I would expect.

...whereas if you just do a 500-point convolution, there will be 500 
samples' worth of latency. No more, no less.

> FWIW, a long time ago I fooled around with an emulation project for the 
> MT-32 sound module, which has a resonant low-pass filter.  With a few 
> multiplies and a few adds per cycle, it would filter in realtime pretty 
> easily.

That's the magic of IIR. It's *very* compute-efficient.

> I didn't understand the theory behind it, but the code looked 
> deceptively simple... Until I found the code for calculating the 
> coefficients.

Uh, yes. ;-)

> I found the same filter code on Gamedev.net. From the 
> looks of it, it's simply a 6-pole Chebyshev filter. The coefficients 
> were generated by what appeared to be the z-transform you were 
> discussing earlier.

Precisely.

The math isn't actually "hard" - again it's a few trig functions and a 
couple of multiplies - it's just very "fiddly" to implement correctly. 
There are lots of opportunities to screw it up.

> The resonance parameter is apparently how much ripple is allowed in 
> the passband. More ripple = bigger spike at the cut-off frequency.

Indeed.

Actually - when you get that far - these IIR filters are designed by 
looking at the complex plane. You position "poles" and "zeros", and 
these define a surface. The vertical slice going through the origin is 
the frequency response of the filter. So it's like having a giant 
elastic sheet, and moving these poles and zeros around to bend the part 
of the sheet crossing that line into the shape you want.

A Chebyshev low-pass filter arranges poles in a semicircle. The 
resonance comes from deforming this circle to make it elliptical. As you 
do so, the poles get nearer to the line, and the "ripples" you see are 
actually the edges of the peak surrounding each pole. I'll draw you a 
graph if you like... ;-)
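In lieu of a graph, here's a toy numpy illustration (*not* an actual 
Chebyshev design - just a few conjugate pole pairs placed near the unit 
circle). Reading the all-pole magnitude response off along the circle 
shows a peak at each pole's angle; the closer the pole sits to the 
circle, the taller its peak.

```python
import numpy as np

# Three conjugate pole pairs just inside the unit circle (angles and
# radius invented for illustration).
angles = np.array([0.15, 0.25, 0.35]) * np.pi
poles = 0.95 * np.exp(1j * angles)
poles = np.concatenate([poles, np.conj(poles)])

# Evaluate |H(z)| = 1 / prod|z - p_k| at points around the unit circle.
w = np.linspace(0, np.pi, 512)
z = np.exp(1j * w)
mag = np.abs(1.0 / np.prod(z[:, None] - poles[None, :], axis=1))
```

Plotting `mag` against `w` shows three bumps near the pole angles - the 
"ripples" - and a response that falls away beyond them.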

The other fun thing is, the mathematical analysis may show you that an 
IIR filter will do one thing, but when you actually implement it 
something totally different happens due to rounding errors! IIR filters 
work using feedback, and feedback magnifies rounding errors. (FIR 
filters by contrast are pretty much unaffected by them, because there's 
no feedback.)

> Other interesting things about the module itself: It's essentially 
> subtractive synthesis combined with PCM samples. Each patch could 
> contain up to 4 partials: 2 pairs. Each pair had a ring modulator. If 
> one side of the modulator was unconnected (no partial) it would simply 
> generate white noise. IIRC, it had a few different waveforms, including 
> the PCM "waveform."

Sounds pretty interesting.

Personally I'd love to implement something like that myself - but I 
don't have access to anything that can talk to the sound hardware. :-(

(Haskell has a binding for libSDL - but it doesn't work on Windoze.)



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 20 Jan 2009 10:04:32
Message: <4975e800$1@news.povray.org>
Invisible wrote:
> 
> Sounds pretty interesting.
> 
> Personally I'd love to implement something like that myself - but I 
> don't have access to anything that can talk to the sound hardware. :-(
> 
> (Haskell has a binding for libSDL - but it doesn't work on Windoze.)

I know this is probably painful, but... under the Windows platform, does 
Haskell have access to COM libraries? DirectX uses COM; you may be able 
to use DirectSound (or is it DirectShow, now?) via its COM interfaces...

Either that, or write a simple C DLL and import the functions from that 
DLL into the Haskell application.

You'll of course need to download the DirectX SDK for that to work.

-- 
~Mike



From: Invisible
Subject: Re: DFT and FFT
Date: 20 Jan 2009 10:11:21
Message: <4975e999$1@news.povray.org>
Mike Raiford wrote:

>> (Haskell has a binding for libSDL - but it doesn't work on Windoze.)
> 
> I know this is probably painful, but ... under the windows platform does 
> Haskell have access to COM libraries?

It has access to most of the Win32 API. If I can figure out what calls 
to execute, it should be doable.

> DirectX uses COM, you may be able 
> to use DirectSound (or is it DirectShow, now?) via its COM interfaces..

Seems plausible.

> Either than or write a simple C Dll, and import the functions from that 
> DLL into the Haskell application.

Heh. As if I could *ever* do that... I don't even know how DLLs work!

> You'll of course need to download the DirectX SDK for that to work.

That's the easy part. ;-)



From: scott
Subject: Re: DFT and FFT
Date: 20 Jan 2009 10:24:07
Message: <4975ec97$1@news.povray.org>
> Personally I'd love to implement something like that myself - but I don't 
> have access to anything that can talk to the sound hardware. :-(

You could always just output a raw .wav file and then send it somewhere 
to play it (e.g. Media Player or Winamp; surely there's also a smaller 
command-line player out there).
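As a sketch of the file-writing side (shown in Python for brevity; the 
filename and test tone are arbitrary), the standard wave module will 
produce a playable file:

```python
import math
import struct
import wave

# One second of a 440 Hz sine as 16-bit mono PCM frames.
rate = 44100
frames = b"".join(
    struct.pack("<h", int(32000 * math.sin(2 * math.pi * 440 * n / rate)))
    for n in range(rate))

# Write it out as a WAV file ("out.wav" is just a placeholder name).
with wave.open("out.wav", "wb") as f:
    f.setnchannels(1)        # mono
    f.setsampwidth(2)        # 16-bit samples
    f.setframerate(rate)
    f.writeframes(frames)
```

Handing "out.wav" to any player then closes the loop.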



From: Mike Raiford
Subject: Re: DFT and FFT
Date: 20 Jan 2009 10:24:30
Message: <4975ecae$1@news.povray.org>
Invisible wrote:

> 
> Heh. As if I could *ever* do that... I don't even know how DLLs work!
> 

Oh, now. DLLs aren't big scary things. They're just libraries. Creating 
a DLL project in Visual C is a snap. Then you just create the functions 
you want to export, using the extern "C" __declspec(dllexport) 
decoration on each function.

>> You'll of course need to download the DirectX SDK for that to work.
> 
> That's the easy part. ;-)

Depends... ;) If you're impatient like me, it's excruciating.
-- 
~Mike



From: scott
Subject: Re: DFT and FFT
Date: 20 Jan 2009 10:34:44
Message: <4975ef14@news.povray.org>
>> I know this is probably painful, but ... under the windows platform does 
>> Haskell have access to COM libraries?
>
> It has access to most of the Win32 API. If I can figure out what calls to 
> execute, it should be doable.

http://msdn.microsoft.com/en-us/library/ms712879.aspx

You could start off by checking that it plays a pre-existing WAV file, 
then write the WAV file and play it immediately afterwards, then try 
out the SND_MEMORY flag to play it directly from memory...




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.