>> Let's say you want to build a band-pass filter. It passes 100 Hz
>> unchanged, but it filters out everything else. A "perfect" version of
>> this filter is very simple - and very impossible. Its impulse response
>> would simply be a 100 Hz sinewave extending an infinite distance into
>> the past and into the future.
>
> Right, no way that could ever work.
Indeed.
(Although you *can* apply that kind of thing in the weird world of
mathematics. If you have a mathematical formula that describes your
entire signal - past, present and future - then you really *can* pass it
through a "perfect" filter like this. That's the magic of integral
calculus. But I digress...)
> Right... once you've chosen the frequencies to pass, you can then obtain
> the impulse response by an iFFT, and then applying a window.
Yes.
> Depending on
> the number of points, the filter's frequency response will be an
> approximation of the desired frequency response.
Yes.
In general, if you ask for a frequency response that changes smoothly,
you'll get a close approximation. If you ask for a frequency response
with sharp discontinuities in it, you'll get ripples. (The inverse-DFT
is almost identical to the forward-DFT, remember.)
A lowpass filter's frequency response is flat at 100% from 0 Hz to the
cutoff frequency, drops discontinuously from 100% to 0%, and then stays
flat at 0% from there onwards. Hence the impossibility of implementing
it "perfectly". Most other "standard" filter types have abrupt changes
like this too.
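The iFFT-then-window recipe above can be sketched in a few lines of
Python. This is a rough illustration, not any particular library's API;
the function name and the choice of a Hann window are mine. The desired
response has to be conjugate-symmetric (here: real and mirrored about
the midpoint) for the impulse response to come out real.

```python
import cmath
import math

def design_fir(desired, window=True):
    """Sample a desired (conjugate-symmetric) frequency response,
    inverse-DFT it to get an impulse response, rotate it so the peak
    is centred, then optionally apply a Hann window."""
    n = len(desired)
    # Naive O(n^2) inverse DFT -- fine for illustration purposes
    h = [sum(desired[k] * cmath.exp(2j * math.pi * k * t / n)
             for k in range(n)).real / n
         for t in range(n)]
    # Rotate so the impulse response is centred (linear phase)
    h = h[n // 2:] + h[:n // 2]
    if window:
        # Hann window: tapers the truncated ends toward zero
        h = [h[t] * 0.5 * (1 - math.cos(2 * math.pi * t / (n - 1)))
             for t in range(n)]
    return h
```

A smooth `desired` gives an impulse response that dies away quickly; a
brick-wall `desired` gives slowly decaying sinc-style tails, which is
exactly where the ripples come from once you truncate and window.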
>> Notice there's no zero-padding here. (I think you're possibly confused
>> by the guide's suggestion that you can zero-pad the IR before taking
>> an FFT in order to get better frequency resolution so you can see what
>> your filter's "real" frequency response is - this does not exactly
>> match what you originally requested because the IR is finite.)
>
> You'd want to zero pad the kernel for FFT convolution though, right?
Checking the documentation carefully, the procedure appears to be this:
To FFT-convolve a kernel containing X samples with a signal block
containing Y samples, pad *both* the kernel and the signal block to
X+Y-1 samples long, FFT to the frequency domain, multiply, inverse-FFT
back to the time domain again. Your new output signal block is larger
than the block you started with, so overlap it with its siblings.
So yes, you do pad the kernel as well as the input signal blocks. (The
amount of padding depends on the size of block you chop the signal
into.) I hypothesize that small blocks = FFT isn't much faster, large
blocks = you have to do lots of multiplies, so there's some sweet spot
somewhere in the middle for best performance.
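The pad-both-to-X+Y-1 procedure, plus the overlap step, looks roughly
like this in Python. This is a sketch with names of my own choosing (a
textbook radix-2 FFT plus overlap-add), not any specific library's
implementation; real libraries pad up to a convenient FFT size, as done
here with the next power of two.

```python
import cmath

def fft(x, inverse=False):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1 if inverse else -1
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def fft_convolve_block(kernel, block):
    """Pad BOTH the kernel and the block to at least X+Y-1 samples
    (rounded up to a power of two), multiply in the frequency
    domain, and inverse-FFT back to the time domain."""
    m = len(kernel) + len(block) - 1
    n = 1
    while n < m:
        n *= 2
    K = fft(list(kernel) + [0.0] * (n - len(kernel)))
    B = fft(list(block) + [0.0] * (n - len(block)))
    y = fft([a * b for a, b in zip(K, B)], inverse=True)
    return [v.real / n for v in y[:m]]  # rescale, keep the m valid samples

def overlap_add(kernel, signal, block_size):
    """Chop the signal into blocks, convolve each, and overlap the
    enlarged output blocks with their siblings by summing."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), block_size):
        piece = fft_convolve_block(kernel, signal[start:start + block_size])
        for i, v in enumerate(piece):
            out[start + i] += v
    return out
```

The result matches direct convolution exactly (up to rounding), which
is a handy sanity check when implementing this.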
>> (It strikes me that if you were trying to do this "real time", the
>> blocking would necessarily add some latency to the system that a
>> convolution implementation wouldn't have...)
>
> Yeah, there would be a bit of latency I would expect.
...whereas if you just do a 500-point convolution, there will be 500
samples' worth of latency. No more, no less.
> FWIW, a long time ago I fooled around with an emulation project for the
> MT-32 sound module, which has a resonant low-pass filter. With a few
> multiplies and a few adds per cycle, it would filter in realtime pretty
> easily.
That's the magic of IIR. It's *very* compute-efficient.
> I didn't understand the theory behind it, but the code looked
> deceptively simple... Until I found the code for calculating the
> coefficients.
Uh, yes. ;-)
> I found the same filter code on Gamedev.net. From the
> looks of it, it's simply a 6-pole Chebyshev filter. The coefficients
> were generated by what appeared to be the z-transform you were
> discussing earlier.
Precisely.
The math isn't actually "hard" - again it's a few trig functions and a
couple of multiplies - it's just very "fiddly" to implement correctly.
There are lots of opportunities to screw it up.
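To make "a few trig functions and a couple of multiplies" concrete,
here's a sketch of a resonant low-pass biquad in Python. The
coefficient formulas are the widely circulated bilinear-transform
recipe from R. Bristow-Johnson's Audio EQ Cookbook, not the MT-32
emulator's actual code, and the function names are mine.

```python
import math

def lowpass_coeffs(sample_rate, cutoff, q):
    """Biquad low-pass coefficients via the bilinear transform
    (the Audio EQ Cookbook recipe). Higher q = more resonance."""
    w0 = 2 * math.pi * cutoff / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0 = (1 - cosw) / 2
    b1 = 1 - cosw
    b2 = (1 - cosw) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw
    a2 = 1 - alpha
    # Normalise so a0 == 1 and the inner loop needs no division
    return (b0 / a0, b1 / a0, b2 / a0), (a1 / a0, a2 / a0)

def biquad(coeffs, signal):
    """The whole per-sample cost: five multiplies and four adds."""
    (b0, b1, b2), (a1, a2) = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x  # shift the delay lines
        y2, y1 = y1, y
        out.append(y)
    return out
```

The inner loop really is that simple; all the fiddliness lives in
`lowpass_coeffs`, which is exactly the split described above.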
> The resonance parameter is apparently how much
> ripple was allowed in the passband. More ripple = bigger spike at the
> cut-off frequency.
Indeed.
Actually - when you get that far - these IIR filters are designed by
looking at the complex plane. You position "poles" and "zeros", and
these define a surface. The vertical slice through the origin (the
imaginary axis of the s-plane) gives the frequency response of the
filter. So it's like having a giant
elastic sheet, and moving these poles and zeros around to bend the part
of the sheet crossing that line into the shape you want.
A Butterworth low-pass filter arranges its poles in a semicircle; the
Chebyshev design deforms this circle to make it elliptical. As you do
so, the poles get nearer to the line, and the "ripples" you see are
actually the edges of the peak surrounding each pole. I'll draw you a
graph if you like... ;-)
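In lieu of a graph, the "elastic sheet" picture can be computed
directly: for an all-pole filter, |H(jw)| is just the reciprocal of the
product of distances from the point jw to each pole. A toy Python
sketch (function names are mine), using poles spaced on the left half
of the unit circle:

```python
import cmath
import math

def butterworth_poles(order):
    """Poles equally spaced on the left half of the unit circle in
    the s-plane -- the semicircle arrangement mentioned above."""
    return [cmath.exp(1j * math.pi * (2 * k + order + 1) / (2 * order))
            for k in range(order)]

def magnitude(poles, w):
    """|H(jw)| for an all-pole filter: reciprocal of the product of
    distances from jw to each pole, normalised so |H(0)| == 1."""
    s = 1j * w
    den = 1.0
    dc = 1.0
    for p in poles:
        den *= abs(s - p)
        dc *= abs(p)
    return dc / den
```

Scaling the poles' real parts squashes the semicircle into an ellipse,
which is one way to see where the Chebyshev ripple comes from: each
pole moves closer to the imaginary axis and raises a bump in the slice.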
The other fun thing is, the mathematical analysis may show you that an
IIR filter will do one thing, but when you actually implement it
something totally different happens due to rounding errors! IIR filters
work using feedback, and feedback magnifies rounding errors. (FIR
filters by contrast are pretty much unaffected by them, because there's
no feedback.)
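The error-magnification effect is easy to demonstrate. A contrived but
deterministic Python sketch (names and numbers are mine): a one-pole
"leaky integrator" whose fed-back value is rounded to a fixed step, as
a crude stand-in for finite-precision arithmetic.

```python
import math

def leaky_integrator(xs, quantize=None):
    """One-pole IIR y[n] = x[n] + 0.999*y[n-1]. If `quantize` is
    given, the fed-back value is rounded down to that step, mimicking
    limited precision in the feedback path."""
    y = 0.0
    for x in xs:
        y = x + 0.999 * y
        if quantize is not None:
            y = math.floor(y / quantize) * quantize
    return y

# A tiny constant input: the exact output converges toward 0.5,
# but with a 0.001 rounding step the fed-back value gets stuck at
# zero -- the feedback loop turns a 0.001 rounding error into an
# output error hundreds of times larger.
xs = [0.0005] * 5000
exact = leaky_integrator(xs)
coarse = leaky_integrator(xs, quantize=0.001)
```

An FIR filter rounding its output the same way would be off by at most
a rounding step or so per sample, because nothing gets recirculated.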
> Other interesting things about the module itself: It's essentially
> subtractive synthesis combined with PCM samples. Each patch could
> contain up to 4 partials: 2 pairs. Each pair had a ring modulator. If
> one side of the modulator was unconnected (no partial) it would simply
> generate white noise. IIRC, it had a few different waveforms, including
> the PCM "waveform."
Sounds pretty interesting.
Personally I'd love to implement something like that myself - but I
don't have access to anything that can talk to the sound hardware. :-(
(Haskell has a binding for libSDL - but it doesn't work on Windoze.)