  Poles and zeroes (Or Invisible is Evil...) (Message 1 to 5 of 5)  
From: Mike Raiford
Subject: Poles and zeroes (Or Invisible is Evil...)
Date: 9 Sep 2010 12:16:28
Message: <4c89085c$1@news.povray.org>
So,

Andrew prodded me a bit with the whole "You know, you could use a linear 
filter rather than an fft" bit. It got me thinking about IIR (infinite 
impulse response) filters.

I made some offhand remark about IIR filters being exceedingly difficult 
to design.

So. I first read a primer on how IIR works. OK, seems simple enough. 
Multiply some of the input stream by a few coefficients and multiply 
some of the output stream by a few coefficients, add it all together, and 
you get a filter.
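
In code, that recipe is literally a couple of multiply-accumulate loops 
per output sample. A minimal sketch in C (direct form; the names and the 
whole-buffer interface are my own invention, just for illustration):

/* y[n] = a[0]*x[n] + a[1]*x[n-1] + ... + b[1]*y[n-1] + b[2]*y[n-2] + ...
   Samples before the start of the buffers are treated as zero. */
void iir_filter(const double *a, int na,    /* feed-forward coefficients  */
                const double *b, int nb,    /* feedback coefficients b1.. */
                const double *x, double *y, int len)
{
    for (int n = 0; n < len; n++) {
        double acc = 0.0;
        for (int k = 0; k < na && k <= n; k++)   /* input taps  */
            acc += a[k] * x[n - k];
        for (int k = 1; k <= nb && k <= n; k++)  /* output taps */
            acc += b[k - 1] * y[n - k];
        y[n] = acc;
    }
}

A real implementation would keep a short circular history of recent 
samples instead of whole buffers, but the arithmetic is exactly this.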

Good. Remarkably simple and very easy to calculate. But how does one 
design the coefficients? Poles and Zeros and something called the z 
transform, the discrete cousin of the Laplace transform. Laplace 
transform.... ooh, isn't that complicated?

I was surprised when I realized just how simple it really is. 
Essentially a Fourier transform, but adding exponential curves to the 
mix. So you have increasing and decaying waveforms. So, after reading 
about that and gaining an understanding on that, I turned to the Z 
transform. Which is a bit different. Foremost, it's discrete, of course, 
but also uses polar coordinates rather than rectangular.

Now, my understanding of how poles and zeros relate to the way a filter 
will respond is this:

The poles and zeros in the s domain are arranged along the imaginary 
axis, and their position on this axis determines the frequency they will 
affect. The higher the imaginary portion, the higher the frequency; the 
real portion affects the exponential attack or decay. Obviously, the 
magnitude of the effect is related to their distance from the axis. 
Poles cannot be on the positive side of this axis. (The exponent 
increases on the positive side, therefore any filter built on that would 
be unstable and oscillate)

Now, that's all fine and great for doing nice manipulations to equations 
and designing analog filters, but it doesn't quite fit into the discrete 
world. Enter the z domain.

The z domain has similar properties to the s domain, only polar. So the 
magnitude relates to how the exponent will attack or decay, and the 
phase relates to the frequency. The polar orientation of the z domain 
actually makes sense when you think about it. There is only a finite 
range of frequencies available, and, like the DFT, they reflect at the 
halfway point. This is also why poles and zeros are symmetrical about 
the Real axis.
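
A tiny numerical sketch of that (one complex pole, values picked out of 
thin air): the pole's magnitude sets how fast the impulse response dies 
away, and its angle sets how many radians it rotates per sample.

#include <complex.h>
#include <stdio.h>

int main(void)
{
    double r = 0.95, theta = 0.2;        /* pole magnitude and angle    */
    double complex p = r * cexp(I * theta);
    double complex y = 1.0;              /* impulse response y[n] = p^n */

    for (int n = 0; n < 10; n++) {
        printf("n=%d  |y| = %.4f  phase = %.4f\n", n, cabs(y), carg(y));
        y *= p;                          /* one feedback step: y[n] = p*y[n-1] */
    }
    return 0;
}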

So, the polar coordinates will need to be translated to rectangular 
before they can be plugged into an equation as complex numbers.

What's the frequency response? In the s domain, it's the slice of the 
complex plane along the imaginary axis. In the z domain, it's the 
cylinder of the complex plane defined by the unit circle, and only 
really the top half of that.
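
In fact, up to an overall gain constant, the magnitude response at a 
frequency w is just the product of the distances from the point e^(i*w) 
on the unit circle to the zeros, divided by the product of the distances 
to the poles. A quick sketch (the pole/zero placement here is made up):

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double complex zeros[] = { -1.0 };    /* one zero at z = -1          */
    double complex poles[] = { 0.9*cexp(I*0.5), 0.9*cexp(-I*0.5) };
    int nz = 1, np = 2;
    double pi = acos(-1.0);

    for (double w = 0.0; w <= pi; w += pi/8) {
        double complex z = cexp(I*w);     /* walk along the unit circle  */
        double mag = 1.0;
        for (int k = 0; k < nz; k++) mag *= cabs(z - zeros[k]);
        for (int k = 0; k < np; k++) mag /= cabs(z - poles[k]);
        printf("w = %5.3f   |H| = %8.4f\n", w, mag);
    }
    return 0;
}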

So once you have the coordinates, they need to be plugged into something 
to get the coefficients for the filter. That's where the z transform 
comes in, which for each sample of the sequence contributes one term:

X[z] = x[0]*z^0 + x[1]*z^-1 + ... + x[n]*z^-n
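
(The example that ties this back to the poles: take x[n] = a^n for 
n >= 0. Then

X[z] = 1 + a*z^-1 + a^2*z^-2 + ... = 1/(1 - a*z^-1)

by the geometric series, and that expression blows up exactly at z = a. 
So a pole at z = a is just another way of saying "an exponential with 
ratio a lives in this filter's feedback".)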

So then,

The equation for an IIR filter:

y[n] = a0*x[n] + a1*x[n-1] + ... + b1*y[n-1] + ...

becomes, through the Z transform:

H[z] = (a0 + a1*z^-1 + a2*z^-2 + a3*z^-3 ...)/(1 - b1*z^-1 - b2*z^-2 - 
b3*z^-3 ...)

While I understand what the Z transform does, I'm not quite sure exactly 
what steps were taken to arrive at the form presented above.

But with that, it becomes rather easy to translate the locations of 
poles and zeros to the coefficients:

/* Expand (1 - p[0]*z^-1)(1 - p[1]*z^-1)... into coefficients c[0..n_poles], */
/* where c[k] multiplies z^-k. Start with c[0] = 1 and the rest zero.        */
/* (c[] and p[] are arrays of complex numbers.)                              */
for (n = 0; n < n_poles; n++)
{
    for (i = n + 1; i > 0; i--)
    {
        c[i] = c[i] - p[n]*c[i-1];
    }
}

The code may be a bit buggy.... I grabbed that from my notes from the 
other day.

(Keeping in mind, of course, complex numbers are used in the code above)
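
For what it's worth, here is a self-contained version of that idea which 
you can actually compile and check (C99 with <complex.h>; the test poles 
are an arbitrary conjugate pair, which should expand to 
1 - 2*r*cos(theta)*z^-1 + r^2*z^-2):

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double r = 0.9, theta = acos(-1.0)/4;    /* poles at r*e^(+/- i*theta)   */
    double complex p[2] = { r*cexp(I*theta), r*cexp(-I*theta) };
    double complex c[3] = { 1.0, 0.0, 0.0 }; /* start with the polynomial "1" */
    int n_poles = 2;

    for (int n = 0; n < n_poles; n++)
        for (int i = n + 1; i > 0; i--)
            c[i] = c[i] - p[n]*c[i-1];

    /* Expect 1, -2*r*cos(theta), r*r, with imaginary parts ~ 0. */
    for (int i = 0; i <= n_poles; i++)
        printf("c[%d] = %f %+fi\n", i, creal(c[i]), cimag(c[i]));
    return 0;
}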

Going the other way, however, is a different story....
-- 
~Mike



From: Invisible
Subject: Re: Poles and zeroes (Invisible IS Evil...)
Date: 10 Sep 2010 04:38:13
Message: <4c89ee75$1@news.povray.org>
On 09/09/2010 05:13 PM, Mike Raiford wrote:
> So,
>
> Andrew prodded me a bit with the whole "You know, you could use a linear
> filter rather than an fft" bit. It got me thinking about IIR (infinite
> impulse response) filters.
>
> I made some offhand remark about IIR filters being exceedingly difficult
> to design.

And I remember thinking "yeah, it's harder than FIR, but not as hard as 
you might imagine".

> So. I first read a primer on how IIR works. OK, seems simple enough.
> Multiply some of the input stream by a few coefficients and multiply
> some of the output stream by a few coefficients, add it all together, and
> you get a filter.

It's like two FIR filters in a feedback loop. Instead of using a 
bazillion coefficients to generate the impulse response you want, you 
generate it by feedback.

Now, a perfect lowpass filter has sinc(t) as its impulse response. The 
sinc function decays as 1/t. A feedback loop can produce a signal which 
decays as exp(-t), which is nearly the same (but not quite). Because of 
this coincidence, you can build a very good IIR lowpass filter with very 
few coefficients. If you wanted to build some completely arbitrary kind 
of filter, you'd likely have much more of a problem. But all four common 
filter types (i.e., lowpass, highpass, bandpass and bandreject) just 
happen to have sinusoidal impulse responses that decay in a nearly 
exponential way.

The net result is a fairly good filter with very little computer power.
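
To make "very little computer power" concrete: the classic one-pole 
lowpass needs exactly two multiplies per sample. A sketch (the exp() 
mapping from corner frequency to coefficient is one common rule of 
thumb, not anything specific to a particular design):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* One-pole lowpass: y[n] = (1 - k)*x[n] + k*y[n-1].              */
    /* k = exp(-2*pi*fc/fs) places the pole for a corner frequency fc */
    /* at sample rate fs.                                             */
    double fs = 44100.0, fc = 1000.0;
    double k = exp(-2.0 * acos(-1.0) * fc / fs);
    double y = 0.0;

    for (int n = 0; n < 8; n++) {
        double x = (n == 0) ? 1.0 : 0.0;    /* unit impulse in */
        y = (1.0 - k) * x + k * y;
        printf("h[%d] = %f\n", n, y);       /* decays as k^n   */
    }
    return 0;
}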

> Good. Remarkably simple and very easy to calculate. But how does one
> design the coefficients? Poles and Zeros and something called the z
> transform, the discrete cousin of the Laplace transform. Laplace
> transform.... ooh, isn't that complicated?

You don't need to actually *perform* the Laplace transform. Fortunately...

> I was surprised when I realized just how simple it really is.
> Essentially a Fourier transform, but adding exponential curves to the
> mix. So you have increasing and decaying waveforms.

And, just as you have the continuous Fourier transform and the discrete 
Fourier transform, you have the Laplace transform (continuous) and the 
Z-transform (discrete).

> Now, my understanding of how poles and zeros relate to the way a filter
> will respond is this:
>
> The poles and zeros in the s-domain are arranged along the imaginary
> axis, and their position on this axis determines the frequency they will
> affect. The higher the imaginary portion, the higher the frequency; the
> real portion affects the exponential attack or decay. Obviously, the
> magnitude of the effect is related to their distance from the axis.
> Poles cannot be on the positive side of this axis. (The exponent
> increases on the positive side, therefore any filter built on that would
> be unstable and oscillate)

> What's the frequency response? In the s-domain, it's the slice of the
> complex plane along the imaginary axis. In the z-domain, it's the
> cylinder of the complex plane defined by the unit circle, and only
> really the top half of that.

Indeed. If you imagine the s-domain as a giant sheet of rubber, the 
poles pull it upwards, and the zeros pull it downwards. (Except that 
that analogy still doesn't explain why arranging a few poles in a 
semicircle makes the sheet approach zero at the edges...) If you look at 
the mathematics of a polynomial, it all becomes clear(er).

> While I understand what the Z transform does, I'm not quite sure exactly
> what steps were taken to arrive at the form presented above.

Let's back up here a moment.

You can take the Fourier transform of any signal. Likewise, you can take 
the Laplace transform (or Z-transform) of any signal. I can draw some 
random waveform and Z-transform it, and I'll get some sort of result in 
the Z-domain.

However, if you take the Laplace transform of a waveform which just 
happens to be the solution to a (linear) differential equation, 
something very special happens: the resulting function is always the 
ratio of two polynomials. Something even more impressive happens, in 
fact: the coefficients of these polynomials are *related* to the 
coefficients of the differential equation!

When you move to the Z-transform, a similar effect occurs.

If I'm remembering this right (from off the top of my head), you end up with

   T(z) = (a[0] z^0 + a[1] z^-1 + a[2] z^-2 + ...) / (1 - b[1] z^-1 - 
b[2] z^-2 - ...)

where a[n] and b[n] are the coefficients from the difference equation 
(i.e., the filter coefficients).
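
(The step that makes this work is the shift property: delaying a signal 
by one sample multiplies its Z-transform by z^-1. Transform the 
difference equation

   y[n] = a[0] x[n] + a[1] x[n-1] + ... + b[1] y[n-1] + b[2] y[n-2] + ...

term by term and you get

   Y(z) = (a[0] + a[1] z^-1 + ...) X(z) + (b[1] z^-1 + b[2] z^-2 + ...) Y(z)

Collect the Y(z) terms on the left and divide by X(z), and the ratio 
above pops out; the lone 1 in the denominator is just the Y(z) you 
started with on the left-hand side.)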

Of course, to build a function with specific poles and zeros, you write

   T(z) = (z - r0)(z - r1)(z - r2)... / (z - p0)(z - p1)(z - p2)...

where r0, r1, r2... are the zeros, and p0, p1, p2... are the poles. But 
for the previous equation, you want the function's coefficients, not its 
roots.

Fortunately, computing the coefficients from the roots is a trivial 
matter of opening some brackets.
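
(Simplest cases: a single real pole at z = p contributes a factor 
1/(1 - p z^-1), i.e. a feedback term p*y[n-1]; a conjugate pair at 
r e^(+/- i theta) multiplies out to 1 - 2 r cos(theta) z^-1 + r^2 z^-2 
in the denominator, i.e. two real feedback coefficients.)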

> Going the other way, however, is a different story....

Unfortunately, computing the roots from the coefficients means solving a 
(possibly very high degree) polynomial - a highly non-trivial 
operation.

But, fortunately, you never need to go in this direction to design a 
filter, only to analyse one. (And in that case, you don't need exact 
solutions anyway.)

Notice that you do not, at any point, need to *perform* a Z-transform. 
It's just that if you take an arbitrary difference equation and 
Z-transform it, you see that a rational function plops out, and its 
coefficients are related to the original difference equation. Once you 
know that relationship exists, you can use it to design filters.

And that's why the Z-transform matters, even though you don't actually 
use it yourself.

Next fun thing: Transforming from rectangular to polar coordinates. It's 
not as simple as you think it is. If you just transform all the poles 
and zeros, the resulting filter doesn't have the same frequency response 
as the original. You need to be more clever for that...



From: Invisible
Subject: Re: Poles and zeroes (Invisible IS Evil...)
Date: 10 Sep 2010 04:42:40
Message: <4c89ef80$1@news.povray.org>
On 10/09/2010 09:38 AM, Invisible wrote:

> Next fun thing: Transforming from rectangular to polar coordinates. It's
> not as simple as you think it is.

Other fun things:

* If you place a pole and a zero on top of each other, mathematics says 
they should cancel each other out. In the real world, this doesn't work 
at all! (It's called a pole-zero cancellation, and you want to stay the 
hell away from it.)

* An IIR filter is a feedback loop. Feedback magnifies rounding errors. 
So after you design your filter, you may find that it doesn't actually 
have the frequency response it's supposed to have. (Depending on whether 
you use fixed-point or floating-point arithmetic, and at what precision.)

* The more poles a lowpass filter has, the better its frequency response 
theoretically becomes. However, more poles increases the passband gain, 
so you have to turn down the A-coefficients, while the extra poles keep 
increasing the number and magnitude of the B-coefficients. Gradually you 
end up with a filter that's less and less sensitive to its input 
(A-coefficients), and has higher and higher gain in the feedback path 
(B-coefficients). Can you spell "unstable"?
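
A quick way to watch the rounding/precision point in action: take a 
two-pole resonator with its poles pushed right up against the unit 
circle, run the identical recurrence once in double and once in float, 
and watch the two outputs drift apart. (The numbers are made up; how 
fast they diverge depends on your compiler and FPU.)

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* y[n] = 2*r*cos(th)*y[n-1] - r*r*y[n-2] + x[n], poles at r*e^(+/- i*th) */
    double r = 0.9999, th = 0.1;
    double b1 = 2.0 * r * cos(th), b2 = -r * r;
    float  fb1 = (float)b1, fb2 = (float)b2;   /* quantised coefficients */

    double yd1 = 0.0, yd2 = 0.0;               /* double-precision state */
    float  yf1 = 0.0f, yf2 = 0.0f;             /* single-precision state */

    for (int n = 0; n <= 100000; n++) {
        double x = (n == 0) ? 1.0 : 0.0;       /* unit impulse */
        double yd = b1 * yd1 + b2 * yd2 + x;
        float  yf = fb1 * yf1 + fb2 * yf2 + (float)x;
        yd2 = yd1; yd1 = yd;
        yf2 = yf1; yf1 = yf;
        if (n % 20000 == 0)
            printf("n=%6d  double=% .6e  float-double=% .2e\n",
                   n, yd, (double)yf - yd);
    }
    return 0;
}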



From: Mike Raiford
Subject: Re: Poles and zeroes (Invisible IS Evil...)
Date: 10 Sep 2010 08:38:34
Message: <4c8a26ca@news.povray.org>
On 9/10/2010 3:42 AM, Invisible wrote:
> On 10/09/2010 09:38 AM, Invisible wrote:
>
>> Next fun thing: Transforming from rectangular to polar coordinates. It's
>> not as simple as you think it is.
>

What, the whole arctangent thing? Oh, yeah... Thankfully most programming 
languages have a function called atan2 ;)

Magnitude is simple to calculate, though. It's just the distance formula 
after all.

I only wish I had this enthusiasm for mathematics when I was in school. 
It would have made my algebra, geometry and trig classes much more 
enjoyable.

> Other fun things:
>
> * If you place a pole and a zero on top of each other, mathematics says
> they should cancel each other out. In the real world, this doesn't work
> at all! (It's called a pole-zero cancellation, and you want to stay the
> hell away from it.)

Ooh, hmm... let's see: 0/0 .... would that be a big fat "NaN"? I know 
that as a denominator approaches 0 the quotient approaches infinity. 0 
in the numerator is always 0, but I don't know that 0 divided by itself 
is defined.

And yes, it appears that for IEEE floats (at least on the Intel 
architecture) 0 divided by itself is indeed NaN.
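
Something like this shows it on any compiler with IEEE 754 doubles (the 
volatile is just to stop the compiler folding the division away at 
compile time):

#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;
    printf("0/0 = %f  (isnan = %d)\n", zero / zero, isnan(zero / zero));
    printf("1/0 = %f\n", 1.0 / zero);   /* +inf, by contrast */
    return 0;
}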

> * An IIR filter is a feedback loop. Feedback magnifies rounding errors.
> So after you design your filter, you may find that it doesn't actually
> have the frequency response it's supposed to have. (Depending on whether
> you use fixed-point or floating-point arithmetic, and at what precision.)

Right. I was having a bit of fun with this exact fact the other day with 
Falstad's filter applet. I pushed the filter to the point where rounding 
error slowly crept in. It was interesting to listen to: it started as 
white noise, but occasional harmonics would sneak in and dance around. 
Eventually the program would give up, claiming instability. I'm not sure 
when the applet decides the filter is unstable; my guess is that once a 
result is infinite or NaN, that's it. You can't adjust gain on that, and 
it doesn't translate to a finite integer very well. I suppose you could 
clip....

> * The more poles a lowpass filter has, the better its frequency response
> theoretically becomes. However, more poles increases the passband gain,
> so you have to turn down the A-coefficients, while the extra poles keep
> increasing the number and magnitude of the B-coefficients. Gradually you
> end up with a filter that's less and less sensitive to its input
> (A-coefficients), and has higher and higher gain in the feedback path
> (B-coefficients). Can you spell "unstable"?

Yep. With all filter types there's a limit to the number of pole/zero 
pairs you can use before the filter loses stability.

What's also extremely interesting is the fact that the poles/zeros of 
the IIR filter can be used to design that filter as an electronic circuit.



-- 
~Mike



From: Invisible
Subject: Re: Poles and zeroes (Invisible IS Evil...)
Date: 10 Sep 2010 10:54:35
Message: <4c8a46ab@news.povray.org>
>>> Next fun thing: Transforming from rectangular to polar coordinates. It's
>>> not as simple as you think it is.
>>
>
> What, the whole arctangent thing? Oh, yeah... Thankfully most programming
> languages have a function called atan2 ;)
>
> Magnitude is simple to calculate, though. It's just the distance formula
> after all.

Actually, a complex log() will do it for you quite nicely. But that's 
not my point; my point is that if you just transform all the poles and 
zeros, the filter's response changes. The tricky part is transforming 
from the s-domain to the z-domain while still keeping approximately the 
same frequency response.

> I only wish I had this enthusiasm for mathematics when I was in school.
> It would have made my algebra, geometry and trig classes much more
> enjoyable.

Join the friggin' club! All we did in math classes at school was long 
division. Pages and pages and pages of it. No wonder kids grow up 
thinking that that's all there is to mathematics...

(Also, "hey, GET OFF MY LAWN!")

>> * If you place a pole and a zero on top of each other, mathematics says
>> they should cancel each other out. In the real world, this doesn't work
>> at all! (It's called a pole-zero cancellation, and you want to stay the
>> hell away from it.)
>
> Ooh, hmm... let's see: 0/0 .... would that be a big fat "NaN"?

If you try to run a filter with a pole and a zero on top of each other, 
what actually happens is that it's wildly unstable. The mathematics says 
the two could cancel out and give you a flat response, but in reality 
that's not what happens. (Regardless of what type of arithmetic you use 
- whether it has a concept of NaN or not.)
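
A sketch of the nastiest case, a pole outside the unit circle that a 
zero is supposed to cancel. On paper H(z) = (1 - p z^-1)/(1 - p z^-1) = 1, 
so the output should simply equal the input; watch what the feedback 
path actually does with the rounding residue (numbers are arbitrary):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* y[n] = x[n] - p*x[n-1] + p*y[n-1] implements (1 - p*z^-1)/(1 - p*z^-1). */
    /* Exactly, that is a plain wire. In floating point, the rounding residue  */
    /* left in the feedback path grows like p^n once |p| > 1.                  */
    double p = 1.05;
    double x1 = 0.0, y1 = 0.0;

    for (int n = 0; n <= 2000; n++) {
        double x = cos(0.3 * n);             /* any old bounded input */
        double y = x - p * x1 + p * y1;
        x1 = x; y1 = y;
        if (n % 400 == 0)
            printf("n=%4d  x=% .4f  y=% g\n", n, x, y);
    }
    return 0;
}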

>> * An IIR filter is a feedback loop. Feedback magnifies rounding errors.
>> So after you design your filter, you may find that it doesn't actually
>> have the frequency response it's supposed to have. (Depending on whether
>> you use fixed-point or floating-point arithmetic, and at what precision.)
>
> Right. I was having a bit of fun with this exact fact the other day with
> Falstad's filter applet. I pushed the filter to the point where rounding
> error slowly crept in. It was interesting to listen to: it started as
> white noise, but occasional harmonics would sneak in and dance around.
> Eventually the program would give up, claiming instability. I'm not sure
> when the applet decides the filter is unstable; my guess is that once a
> result is infinite or NaN, that's it. You can't adjust gain on that, and
> it doesn't translate to a finite integer very well. I suppose you could
> clip....

When a filter goes unstable, the output magnitude increases 
exponentially. It can't be hard to detect; if the filter operates on 
numbers +/-1 and you start getting numbers > 1,000, it's probably gone 
wrong. ;-)

>> * The more poles a lowpass filter has, the better its frequency response
>> theoretically becomes. However, more poles increases the passband gain,
>> so you have to turn down the A-coefficients, while the extra poles keep
>> increasing the number and magnitude of the B-coefficients. Gradually you
>> end up with a filter that's less and less sensitive to its input
>> (A-coefficients), and has higher and higher gain in the feedback path
>> (B-coefficients). Can you spell "unstable"?
>
> Yep. With all filter types there's a limit to the number of pole/zero
> pairs you can use before the filter loses stability.

Only IIR filters have poles. Technically, if you take the Laplace or 
Z-transform of an FIR filter, you'll find it has lots of zeros (one for 
every point in the kernel) and no poles. But that's not a very useful 
way to analyse an FIR filter...

> What's also extremely interesting is the fact that the poles/zeros of
> the IIR filter can be used to design that filter as an electronic circuit.

Filter design *began* with using the Laplace transform to design 
analogue IIR filters. FIR filters are more or less only feasible with 
digital electronics. They're much easier to design, they have far 
superior precision, and they require a crapload more compute power.

If you want to make precision alterations to a signal, you want FIR. If 
you want to make stuff that sounds cool, you want IIR. ;-)

