On 09/09/2010 05:13 PM, Mike Raiford wrote:
> So,
>
> Andrew prodded me a bit with the whole "You know, you could use a linear
> filter rather than an fft" bit. It got me thinking about IIR (infinite
> impulse response) filters.
>
> I made some offhand remark about IIR filters being exceedingly difficult
> to design.
And I remember thinking "yeah, it's harder than FIR, but not as hard as
you might imagine".
> So. I first read a primer on how IIR works. OK, seems simple enough.
> Multiply some of the input stream by a few coefficients and multiply
> some of the output stream by a few coefficients add it all together, and
> you get a filter.
It's like two FIR filters in a feedback loop. Instead of using a
bazillion coefficients to generate the impulse response you want, you
generate it by feedback.
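That "two FIR filters in a feedback loop" picture can be sketched directly as a difference equation. This is my own illustration, not anything from a library; the function name and coefficient layout are made up for the example:

```python
# A direct-form IIR filter sketch: one FIR stage on the input plus one
# FIR stage fed back from past outputs.
def iir_filter(x, ff, fb):
    """ff: feedforward taps on x[n], x[n-1], ...
    fb: feedback taps on y[n-1], y[n-2], ..."""
    y = []
    for n in range(len(x)):
        acc = sum(c * x[n - k] for k, c in enumerate(ff) if n - k >= 0)
        acc += sum(c * y[n - 1 - k] for k, c in enumerate(fb) if n - 1 - k >= 0)
        y.append(acc)
    return y

# Feeding in an impulse exposes the "infinite" part: with a single
# feedback tap of 0.5, the output keeps ringing long after the input
# has gone back to zero.
print(iir_filter([1, 0, 0, 0, 0], [1.0], [0.5]))
# -> [1.0, 0.5, 0.25, 0.125, 0.0625]
```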
Now, a perfect lowpass filter has sinc(t) as its impulse response. The
sinc function decays as 1/t. A feedback loop can produce a signal which
decays as exp(-t), which is nearly the same (but not quite). Because of
this coincidence, you can build a very good IIR lowpass filter with very
few coefficients. If you wanted to build some completely arbitrary kind
of filter, you'd likely have much more of a problem. But all four common
filter types (i.e., lowpass, highpass, bandpass and bandreject) just
happen to have sinusoidal impulse responses that decay in a nearly
exponential way.
The net result is a fairly good filter with very little computer power.
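The classic example of "very good filter, very few coefficients" is the one-pole lowpass, whose impulse response is exactly the exponential decay described above. The cutoff-to-coefficient mapping below is an assumption of this sketch (one common convention), not a universal formula:

```python
import math

# One-pole lowpass: y[n] = (1 - a) * x[n] + a * y[n-1].
# Its impulse response is (1 - a) * a^n -- a pure exponential decay --
# which is why a single feedback coefficient gets this close to a
# lowpass response for so little work.
def one_pole_lowpass(x, fc, fs):
    # Assumed cutoff mapping for this sketch: a = exp(-2*pi*fc/fs)
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, prev = [], 0.0
    for sample in x:
        prev = (1.0 - a) * sample + a * prev
        y.append(prev)
    return y
```

One multiply-accumulate of feedback per sample, versus the hundreds of taps an FIR would need to approximate the same slowly-decaying response.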
> Good. Remarkably simple and very easy to calculate. But how does one
> design the coefficients? Poles and Zeros and something called the z
> transform, the discrete cousin of the Laplace transform. Laplace
> transform.... ooh, isn't that complicated?
You don't need to actually *perform* the Laplace transform. Fortunately...
> I was surprised when I realized just how simple it really is.
> Essentially a Fourier transform, but adding exponential curves to the
> mix. So you have increasing and decaying waveforms.
And, just as you have the continuous Fourier transform and the discrete
Fourier transform, you have the Laplace transform (continuous) and the
Z-transform (discrete).
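For reference, the (one-sided) definitions line up term for term, with z = e^{sT} (T the sample period) connecting the two domains:

```latex
X(s) = \int_0^{\infty} x(t)\, e^{-st}\, dt
\qquad\longleftrightarrow\qquad
X(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n}
```

Same idea as Fourier, but with e^{-st} (a decaying or growing oscillation) in place of a pure oscillation.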
> Now, my understanding of how poles and zeros relate to the way a filter
> will respond is this:
>
> The poles and zeros in the s-domain are arranged along the imaginary
> axis, and their position on this axis determines the frequency they will
> affect. The higher the imaginary portion, the higher the frequency, the
> real portion affects the exponential attack or decay. Obviously, the
> magnitude of the effect is related to their distance from the axis.
> Poles cannot be on the positive side of this axis. (The exponent
> increases on the positive side, therefore any filter built on that would
> be unstable and oscillate)
> What's the frequency response? In the s-domain, it's the slice of the
> complex plane along the imaginary axis. In the z-domain, it's the
> cylinder of the complex plane defined by the unit circle, and only
> really the top half of that.
Indeed. If you imagine the s-domain as a giant sheet of rubber, the
poles pull it upwards, and the zeros pull it downwards. (Except that
that analogy still doesn't explain why arranging a few poles in a
semicircle makes the sheet approach zero at the edges...) If you look at
the mathematics of a polynomial, it all becomes clear(er).
> While I understand what the Z transform does, I'm not quite sure exactly
> what steps were taken to arrive at the form presented above.
Let's back up here a moment.
You can take the Fourier transform of any signal. Likewise, you can take
the Laplace transform (or Z-transform) of any signal. I can draw some
random waveform and Z-transform it, and I'll get some sort of result in
the Z-domain.
However, if you take the Laplace transform of a waveform which just
happens to be the solution to a (linear) differential equation,
something very special happens: the resulting function is always the
ratio of two polynomials. Something even more impressive happens, in
fact: the coefficients of this polynomial are *related* to the
coefficients of the differential equation!
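A one-line illustration of that special case (a made-up first-order example): take y'(t) + a y(t) = x(t) with zero initial conditions and transform term by term:

```latex
\mathcal{L}\{y' + a\,y\} = \mathcal{L}\{x\}
\;\Longrightarrow\;
(s + a)\, Y(s) = X(s)
\;\Longrightarrow\;
\frac{Y(s)}{X(s)} = \frac{1}{s + a}
```

The ratio of polynomials appears on its own, and the polynomial's coefficients (1 and a) are exactly the differential equation's coefficients.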
When you move to the Z-transform, a similar effect occurs.
If I'm remembering this right (from off the top of my head), you end up with

T(z) = (a[0] + a[1] z^-1 + a[2] z^-2 + ...) / (1 - b[1] z^-1 - b[2] z^-2 - ...)

where a[n] and b[n] are the coefficients from the difference equation
(i.e., the filter coefficients). Note the denominator's constant term is
just the 1; the feedback taps start at z^-1, since the feedback acts on
*past* outputs.
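You can check that relationship numerically: evaluate the rational function on the unit circle and compare it against the measured steady-state gain of the difference equation itself. The coefficient values below are arbitrary, chosen just for the demonstration:

```python
import cmath

# Filter: y[n] = a0*x[n] + a1*x[n-1] + b1*y[n-1]
a0, a1, b1 = 0.3, 0.2, 0.5
w = 0.4  # test frequency, radians per sample

# Gain predicted by T(z) = (a0 + a1 z^-1) / (1 - b1 z^-1) at z = e^{jw}
z = cmath.exp(1j * w)
predicted = abs((a0 + a1 / z) / (1 - b1 / z))

# Gain measured by actually running a complex sinusoid through the
# difference equation and waiting for the transient to die out
x = [cmath.exp(1j * w * n) for n in range(500)]
y, prev_x, prev_y = [], 0.0, 0.0
for xn in x:
    yn = a0 * xn + a1 * prev_x + b1 * prev_y
    y.append(yn)
    prev_x, prev_y = xn, yn
measured = abs(y[-1])

print(predicted, measured)  # the two gains agree
```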
Of course, to build a function with specific poles and zeros, you write
T(z) = (z - r0)(z - r1)(z - r2)... / (z - p0)(z - p1)(z - p2)...
where r0, r1, r2... are the zeros, and p0, p1, p2... are the poles. But
for the previous equation, you want the function's coefficients, not its
roots.
Fortunately, computing the coefficients from the roots is a trivial
matter of opening some brackets.
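"Opening the brackets" mechanically: multiply out one (z - r) factor at a time, carrying the coefficient list along. A small sketch (the function name is mine; coefficients come out highest power first):

```python
# Expand (z - r0)(z - r1)... into polynomial coefficients,
# listed from the highest power of z down to the constant term.
def poly_from_roots(roots):
    coeffs = [1.0]  # the polynomial "1"
    for r in roots:
        shifted = coeffs + [0.0]        # current polynomial times z
        for i, c in enumerate(coeffs):
            shifted[i + 1] -= r * c     # ... minus r times it
        coeffs = shifted
    return coeffs

# (z - 1)(z - 2) = z^2 - 3z + 2
print(poly_from_roots([1.0, 2.0]))  # -> [1.0, -3.0, 2.0]
```

Complex-conjugate pole pairs go through the same loop and the imaginary parts cancel, leaving real coefficients.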
> Going the other way, however is a different story....
Unfortunately, computing the roots from the coefficients means finding
the roots of a (possibly very high degree) polynomial - a highly
non-trivial operation.
But, fortunately, you never need to go in this direction to design a
filter, only to analyse one. (And in that case, you don't need exact
solutions anyway.)
Notice that you do not, at any point, need to *perform* a Z-transform.
It's just that if you take an arbitrary difference equation and
Z-transform it, you see that a rational function plops out, and its
coefficients are related to the original difference equation. Once you
know that relationship exists, you can use it to design filters.
And that's why the Z-transform matters, even though you don't actually
use it yourself.
Next fun thing: Transforming from rectangular to polar coordinates. It's
not as simple as you think it is. If you just transform all the poles
and zeros, the resulting filter doesn't have the same frequency response
as the original. You need to be more clever for that...