>
> 166 lines isn't bad at all so go ahead and post it. I've had a weak
> spot for fractals since I discovered Fractint many moons ago.
>
> /Erkki
Hi Erkki,
I was prepared for something like "who needs more of those useless fractals
in POV?", so this was quite a positive surprise ...
Anyway: you asked for it - you'll get it :-)
I was quite surprised when I read in POV's manual that the "... hypercomplex
numbers are more useful for our purposes, since complex valued functions
such as sin, cos, etc. can be generalized to work for hypercomplex numbers
in a uniform way".
This implies that those functions cannot be generalized for quaternions -
but they can! From a purely mathematical point of view, all those functions
can, without regard to their "everyday meaning" (e.g. sin and cos), be
DEFINED by means of their Taylor series. Since quaternion multiplication is
well defined, higher powers of quaternions are well defined too, and so an
expression like (h^n)/n! makes sense for every quaternion h. Sum that up
over all n, and you have the Taylor series of the exp function.
Two questions come to mind:
1) Does the series converge?
2) Can the result be expressed in terms of known functions of the four
coordinates of the quaternion h, so that we can use the compiler's math
library and don't have to write code for general Taylor series?
As you can probably imagine, both answers are YES, or else I wouldn't be
writing this ;-)
Please DON'T panic - I won't give you rigorous mathematical proofs of
convergence and the like. I'll just try to give you a feeling (!) for why
everything works out all right, but I fear that a little bit of math can't
be avoided ...
First, let's have a look at quaternion multiplication. The description in
POV's manual is correct, of course, but it's not very helpful if you want to
express general results. From now on, try to imagine a quaternion NOT as a
vector of four real numbers, but as a "strange pair" of a real number and a
3d-vector. The real number belongs to the basis vector "1 = <1,0,0,0>"; the
three components of the 3d-vector belong to the other three basis vectors
I, J and K.
More explicitly, if the quaternion were <a,b,c,d>, let's write it as <a,A>,
"a" being the same number as before, and A.x=b, A.y=c, A.z=d.
From now on I will refer to "a" as the "real part" of h and to "A" as the
"vector part" of h.
It's a rather tedious task to verify that, in this notation, the product of
two quaternions can be expressed as follows:
<a,A>*<b,B> = < a*b-vdot(A,B), a*B+b*A+vcross(A,B) >
With this equation it is relatively simple to prove (but I won't do it here,
as promised ;-) ) that, just as with complex numbers, the multiplication is
well behaved with regard to absolute values, meaning:
vlength4D( <a,A>*<b,B> ) = vlength4D(<a,A>) * vlength4D(<b,B>)
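Both the product formula and the norm identity are easy to check numerically. Here is a small Python sketch for the curious; qmul and qnorm are my own illustration names, not POV syntax, and qnorm plays the role of vlength4D:

```python
import math

def qmul(p, q):
    # <a,A>*<b,B> = < a*b - vdot(A,B), a*B + b*A + vcross(A,B) >
    a, A = p[0], p[1:]
    b, B = q[0], q[1:]
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1],
             A[2]*B[0] - A[0]*B[2],
             A[0]*B[1] - A[1]*B[0])
    return (a*b - dot,
            a*B[0] + b*A[0] + cross[0],
            a*B[1] + b*A[1] + cross[1],
            a*B[2] + b*A[2] + cross[2])

def qnorm(h):
    # vlength4D: the ordinary Euclidean length of the 4-tuple
    return math.sqrt(sum(x*x for x in h))

p = (1.0, 2.0, -1.0, 0.5)
q = (-0.3, 0.7, 4.0, 1.2)
# |p*q| - |p|*|q| is zero up to floating-point rounding
print(qnorm(qmul(p, q)) - qnorm(p) * qnorm(q))
```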
This directly leads to the answer to the first question, concerning
convergence of the series. Easy ;-) induction yields
vlength4D( <a,A>^n ) = ( vlength4D(<a,A>) )^n for every natural number n
and this is enough to prove that the series converges, since the series of
the absolute values converges and ... ah well, no details, I promised!
I suppose the more mathematically inclined get my point, and the others
won't care at all.
Anyway, we now have the right to write exp(h) (h a quaternion); it's
well-defined since the series IS converging. Since the Taylor series of
sinh, cosh, sin and cos are just the odd and even parts of the exp series,
give or take a few "-" signs, they converge as well - but WHERE to??
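To make the convergence claim concrete, here is a sketch (again, the helper names are my own) that sums the quaternion Taylor series of exp directly; once |h|^n/n! has died off, adding more terms no longer changes the partial sums:

```python
import math

def qmul(p, q):
    # <a,A>*<b,B> = < a*b - vdot(A,B), a*B + b*A + vcross(A,B) >
    a, A = p[0], p[1:]
    b, B = q[0], q[1:]
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1],
             A[2]*B[0] - A[0]*B[2],
             A[0]*B[1] - A[1]*B[0])
    return (a*b - dot,
            a*B[0] + b*A[0] + cross[0],
            a*B[1] + b*A[1] + cross[1],
            a*B[2] + b*A[2] + cross[2])

def qexp_series(h, terms=40):
    # partial sum of  1 + h + h^2/2! + h^3/3! + ...
    total = (0.0, 0.0, 0.0, 0.0)
    term = (1.0, 0.0, 0.0, 0.0)          # h^0 / 0!
    for n in range(1, terms + 1):
        total = tuple(s + t for s, t in zip(total, term))
        term = tuple(x / n for x in qmul(term, h))   # h^n / n!
    return total

h = (0.5, 1.0, -2.0, 0.3)
p20 = qexp_series(h, 20)
p40 = qexp_series(h, 40)
# the 20-term and 40-term partial sums already agree very closely
print(max(abs(x - y) for x, y in zip(p20, p40)))
```

For a quaternion with vanishing vector part the same series reproduces the ordinary real exp, as it must.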
As a first step to explore this, let's take any 3d-vector V and multiply
<0,V> by itself. With a=b=0 and A=B=V, the multiplication formula gives
<0,V>*<0,V> = < 0*0-vdot(V,V), 0*V+0*V+vcross(V,V) >
Now, for EVERY 3d-vector V, vdot(V,V)=vlength(V)^2 and vcross(V,V)=<0,0,0>,
so
<0,V>*<0,V> = < -(vlength(V)^2), <0,0,0> >
If we choose a vector U such that vlength(U)=1.0 ( a "U"nitVector ), this
gives
<0,U>*<0,U> = < -1, <0,0,0> >
which shows that not only the basis vector I ( as mentioned in the manual ),
but EVERY vector of unit length, taken as the vector part of a quaternion
with vanishing real part, behaves like the complex number i. As a sidetrack,
we have shown that the equation h^2+1=0 has infinitely many solutions h in
quaternion space, and the same might be true for other polynomials; some
people probably won't like that, but for the issue at hand it's irrelevant
;-).
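Here is that fact checked for a randomly chosen unit vector (qmul is my own helper, mirroring the product formula above):

```python
import math, random

def qmul(p, q):
    # <a,A>*<b,B> = < a*b - vdot(A,B), a*B + b*A + vcross(A,B) >
    a, A = p[0], p[1:]
    b, B = q[0], q[1:]
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1],
             A[2]*B[0] - A[0]*B[2],
             A[0]*B[1] - A[1]*B[0])
    return (a*b - dot,
            a*B[0] + b*A[0] + cross[0],
            a*B[1] + b*A[1] + cross[1],
            a*B[2] + b*A[2] + cross[2])

random.seed(1)
V = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
L = math.sqrt(sum(x*x for x in V))
U = tuple(x / L for x in V)       # vnormalize(V): an arbitrary unit vector
h = (0.0,) + U                    # vanishing real part
print(qmul(h, h))                 # (-1, 0, 0, 0) up to rounding
```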
Let's go on with powers of pure vector-quaternions. We had
<0,V>^2 = < -(vlength(V)^2), <0,0,0> >
Multiplying again by <0,V> gives
<0,V>^3 = < 0, -(vlength(V)^2)*V >
Again:
<0,V>^4 = < vlength(V)^4, <0,0,0> >
Or, in general:
even powers: <0,V>^(2k) = < (-1)^k*(vlength(V)^(2k)), <0,0,0> >
odd powers: <0,V>^(2k+1) = < 0, (-1)^k*(vlength(V)^(2k))*V >
The even powers of <0,V> only have a real part, and the odd powers only
have a vector part, which is proportional to V. If we throw that into the
Taylor series for the exp function, sort by even and odd powers, remember
the series expansions of sin and cos, and rewrite the result a bit, we get
exp( <0,V> ) = < cos( vlength(V) ), sin( vlength(V) )*vnormalize(V) >
or, if we take U=vnormalize(V), t=vlength(V) and write <a,A> sloppily as
a+A:
exp( <0,t*U> ) = cos(t) + U*sin(t)
which strengthens the notion that every unit vector behaves like the complex
number i; remember exp(iy) = cos(y)+i*sin(y) ?!
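The closed form can be checked against the raw Taylor series; both helpers below are mine, written purely for illustration:

```python
import math

def qmul(p, q):
    # <a,A>*<b,B> = < a*b - vdot(A,B), a*B + b*A + vcross(A,B) >
    a, A = p[0], p[1:]
    b, B = q[0], q[1:]
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1],
             A[2]*B[0] - A[0]*B[2],
             A[0]*B[1] - A[1]*B[0])
    return (a*b - dot,
            a*B[0] + b*A[0] + cross[0],
            a*B[1] + b*A[1] + cross[1],
            a*B[2] + b*A[2] + cross[2])

def qexp_series(h, terms=40):
    # brute-force partial sum of  1 + h + h^2/2! + ...
    total = (0.0, 0.0, 0.0, 0.0)
    term = (1.0, 0.0, 0.0, 0.0)
    for n in range(1, terms + 1):
        total = tuple(s + t for s, t in zip(total, term))
        term = tuple(x / n for x in qmul(term, h))
    return total

def qexp_pure(V):
    # exp(<0,V>) = < cos(vlength(V)), sin(vlength(V)) * vnormalize(V) >
    t = math.sqrt(sum(x*x for x in V))
    s = math.sin(t) / t              # sin(t)*vnormalize(V), done componentwise
    return (math.cos(t), s*V[0], s*V[1], s*V[2])

V = (0.4, -1.1, 0.25)
closed = qexp_pure(V)
series = qexp_series((0.0,) + V)
print(max(abs(x - y) for x, y in zip(closed, series)))  # essentially zero
```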
Now, for full quaternions with non-vanishing real part, we have to be a
little careful at first. The method for complex numbers is
exp( x+iy ) = exp(x)*exp(iy), and then everything falls into place ...
With complex numbers z and w, exp(z+w)=exp(z)*exp(w) holds for all z and w.
This can be proven from the Taylor series, but the proof relies heavily on
the binomial formulas. For quaternions those aren't generally true, because
( h1 + h2 )^2 = h1^2 + h1*h2 + h2*h1 + h2^2
and the famous "2*h1*h2" in the middle relies on h1*h2 and h2*h1 being
equal. This is not the case for quaternions in general, but it IS true if
one of them has vanishing vector part; prove that for yourself, please.
Thus we get:
exp( <a,A> ) = exp( <a,<0,0,0>> + <0,A> )
= exp(a) * exp( <0,A> )
= exp(a) * ( cos( vlength(A) ) + sin( vlength(A) )*vnormalize(A) )
Compare that again to
exp( x+iy ) = exp(x) * ( cos(y) + i*sin(y) )
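With the vector part lying along the basis vector I, the quaternion formula must reproduce the ordinary complex exponential, which Python's cmath can confirm (qexp below is my illustration-only helper):

```python
import math, cmath

def qexp(a, V):
    # exp(<a,A>) = exp(a) * < cos(vlength(A)), sin(vlength(A))*vnormalize(A) >
    t = math.sqrt(sum(x*x for x in V))
    s = math.sin(t) / t
    e = math.exp(a)
    return (e*math.cos(t), e*s*V[0], e*s*V[1], e*s*V[2])

a, y = 0.8, 2.3
q = qexp(a, (y, 0.0, 0.0))     # vector part along I
z = cmath.exp(complex(a, y))   # ordinary complex exp( x+iy )
print(q[0] - z.real, q[1] - z.imag)   # both essentially zero
```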
In general the following rule applies:
For any function F, find in some mathematical textbook the formula for that
function for complex numbers:
F( x+iy ) = G( x,y ) + i*H( x,y ) ( G,H real-valued )
Then replace "x" by "a", "i" by vnormalize(A) and "y" by vlength(A), and get
F( <a,A> ) = G( a, vlength(A) ) + H( a, vlength(A) )*vnormalize(A)
or, if we take U=vnormalize(A), t=vlength(A) as before:
F( <a,tU> ) = G( a,t ) + U*H( a,t )
where again, this notation is a sloppy version of
F( <a,tU> ) = < G(a,t), H( a,t )*U >
Without further ado we get
sin( <a,tU> ) = < sin(a)cosh(t), cos(a)sinh(t)*U >
cos( <a,tU> ) = < cos(a)cosh(t), -sin(a)sinh(t)*U >
sinh( <a,tU> ) = < sinh(a)cos(t), cosh(a)sin(t)*U >
cosh( <a,tU> ) = < cosh(a)cos(t), sinh(a)sin(t)*U >
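As a sanity check, the familiar identity sin^2 + cos^2 = 1 survives, because a and U commute, so <a,tU> behaves exactly like the complex number a+it. The sketch below (helper names are mine) builds sin and cos from the formulas above and squares them with the quaternion product:

```python
import math

def qmul(p, q):
    # <a,A>*<b,B> = < a*b - vdot(A,B), a*B + b*A + vcross(A,B) >
    a, A = p[0], p[1:]
    b, B = q[0], q[1:]
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1],
             A[2]*B[0] - A[0]*B[2],
             A[0]*B[1] - A[1]*B[0])
    return (a*b - dot,
            a*B[0] + b*A[0] + cross[0],
            a*B[1] + b*A[1] + cross[1],
            a*B[2] + b*A[2] + cross[2])

def unit(V):
    t = math.sqrt(sum(x*x for x in V))
    return t, tuple(x / t for x in V)    # t = vlength(V), U = vnormalize(V)

def qsin(a, V):
    # sin(<a,tU>) = < sin(a)cosh(t), cos(a)sinh(t)*U >
    t, U = unit(V)
    return (math.sin(a)*math.cosh(t),) + \
           tuple(math.cos(a)*math.sinh(t)*u for u in U)

def qcos(a, V):
    # cos(<a,tU>) = < cos(a)cosh(t), -sin(a)sinh(t)*U >
    t, U = unit(V)
    return (math.cos(a)*math.cosh(t),) + \
           tuple(-math.sin(a)*math.sinh(t)*u for u in U)

a, V = 0.7, (1.0, -0.5, 2.0)
s, c = qsin(a, V), qcos(a, V)
total = tuple(x + y for x, y in zip(qmul(s, s), qmul(c, c)))
print(total)   # ( 1, 0, 0, 0 ) up to rounding
```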
Now, DO we find any volunteers for the rest of the "interesting" functions,
and, more importantly, to CODE that into POV's julia_fractal?!
Thanks for reading and for your patience.
Comments appreciated, post here or mail to karl[dot]anders[at]web[dot]de
Karl Anders
--
The two most common things in the universe are hydrogen and stupidity.
(H.Ellison)