POV-Ray : Newsgroups : povray.off-topic : A question of pure mathematics
  A question of pure mathematics (Message 1 to 10 of 25)
From: Invisible
Subject: A question of pure mathematics
Date: 19 Nov 2007 08:40:02
Message: <47419232@news.povray.org>
Can somebody who knows what they're talking about confirm this?

If I'm understanding this right, if I can find a complete set of 
orthogonal functions, I should be able to construct any possible 
function as a linear combination of them.

For example, the Fourier transform allows you to construct any function 
from sine and cosine functions. (On the other hand, except in the 
discrete case, you might need an infinite set of these functions to make 
an exact reconstruction... but the discrete case is the one that really 
interests me.)

If I'm not mistaken, "orthogonal" means that one function can't be 
constructed from the others (so there's no duplication of 
"information"), and "complete" just means you've got all the functions 
you need.

So, like, how do you tell if two functions are orthogonal? And how do 
you tell when a set of them is complete?



From: Vincent Le Chevalier
Subject: Re: A question of pure mathematics
Date: 19 Nov 2007 09:21:01
Message: <47419bcd$1@news.povray.org>

> Can somebody who knows what they're talking about confirm this?
> 
> If I'm understanding this right, if I can find a complete set of 
> orthogonal functions, I should be able to construct any possible 
> function as a linear combination of them.
> 

That's just what being complete means. IIRC, orthogonality guarantees 
linear independence, so the linear combination is also unique, i.e. you 
have a true basis and not just a spanning set, as it seems to be called 
in English.

> If I'm not mistaken, "orthogonal" means that one function can't be 
> constructed from the others (so there's no duplication of 
> "information"), and "complete" just means you've got all the functions 
> you need.
> 
> So, like, how do you tell if two functions are orthogonal? And how do 
> you tell when a set of them is complete?

Orthogonal means only one precise thing: their scalar product is 0. 
Whether two functions are orthogonal really depends on the scalar 
product you choose, and in turn on the set of functions you consider.

The common choice, for sufficiently well-behaved functions, is:

f.g = \int f(x) g(x) dx (or \sum_n f[n] g[n] in the discrete case), 
where . denotes the scalar product.

That's how you prove that the sine functions used in Fourier transform 
are orthogonal.
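
A minimal numerical sketch of that check, assuming NumPy (the sampling grid and tolerance are illustrative): approximate the scalar product on [0, 2π] by a Riemann sum and verify that sines of different frequencies are orthogonal while a sine against itself is not.

```python
import numpy as np

# Sample [0, 2*pi] finely and approximate <f, g> = integral of f*g
# by a Riemann sum (illustrative discretisation, not the only choice).
x = np.linspace(0.0, 2.0 * np.pi, 100001)
dx = x[1] - x[0]

def inner(f, g):
    """Approximate scalar product of two sampled functions."""
    return float(np.sum(f * g) * dx)

s2, s3 = np.sin(2 * x), np.sin(3 * x)
print(inner(s2, s3))   # ~0: sin(2x) and sin(3x) are orthogonal
print(inner(s2, s2))   # ~pi: a function is never orthogonal to itself
```

The same computation with cos against sin, or cos(mx) against cos(nx) for m != n, also gives (approximately) zero, which is exactly why the Fourier coefficients can be extracted one at a time.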

Proving that a set is complete ordinarily means showing a way to build 
the decomposition of any given element of the space you consider. In a 
finite-dimensional space it is easy enough: check that you have as many 
vectors as the dimension of the space, and if they are linearly 
independent (orthogonal to each other, for example) you have a complete 
set...

-- 
Vincent



From: Kevin Wampler
Subject: Re: A question of pure mathematics
Date: 19 Nov 2007 23:02:54
Message: <47425c6e$1@news.povray.org>
Invisible wrote:
> 
> For example, the Fourier transform allows you to construct any function 
> from sine and cosine functions.
> 

This isn't quite true over the reals, even assuming you're only looking 
for functions with a given period.  For example the function which is 
zero everywhere except being 1 at a single point will generate the same 
Fourier representation as the constant zero function since it will have 
the same integrals.

I think (no proof) that you can reconstruct any function up to the 
addition of a function which is nonzero only on a set of zero `volume' 
(measure zero), assuming you don't count things like delta functions.  Not 
that it matters for what you're doing of course, but you seem like the 
sort of chap who might find it interesting.

> 
> So, like, how do you tell if two functions are orthogonal? And how do 
> you tell when a set of them is complete?

What are you using these functions for?  There may be better or worse 
ways to do things depending on what you want.



From: Mueen Nawaz
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 01:26:24
Message: <47427e10$1@news.povray.org>
Invisible wrote:
> Can somebody who knows what they're talking about confirm this?

	That probably shouldn't include me, but I'll try.

> If I'm understanding this right, if I can find a complete set of
> orthogonal functions, I should be able to construct any possible
> function as a linear combination of them.

	By definition, you can construct any possible piecewise continuous
function with them, on the domain where they are complete (which may be
only an interval of the real line).

See http://mathworld.wolfram.com/CompleteOrthogonalSystem.html

	Now when they say that the limit of the error goes to 0, they mean in a
sort of "average" sense (see the Lebesgue integral?).

	More concretely, this means that if you have your (usually infinite)
set of complete orthogonal functions, and a given function f, you can
construct something that's "almost" identical (if not identical) to f
using a linear combination of the orthogonal functions - BUT it may
disagree with f on a number of points (which constitutes a set of
measure 0).

	For a concrete example, take the sines and cosines that you mentioned.
Any finite linear combination of them is continuous, and an infinite sum
of them can converge uniformly only to a continuous function. Hence, you
cannot use them to construct a discontinuous function (like a square
wave) *exactly*. However, you can construct something that looks a lot
like it but differs at only a "few" points: the square wave and your
constructed function will disagree at the points of discontinuity.

	See the Gibbs phenomenon:

http://en.wikipedia.org/wiki/Gibbs_phenomenon

	The "overshoot" at the point of discontinuity for a square wave is
about 9% of the jump even when you take the infinite limit: the
overshoot narrows but its height never goes away.

	The reason sin and cos are labeled as "complete" even though they can't
replicate a simple square wave is that in the infinite limit, the sum is
identical to the square wave everywhere except at a finite number of
points.
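
A small numerical sketch of that overshoot, assuming NumPy (grid resolution and term counts are illustrative): sum the Fourier series of a square wave to more and more terms and watch the peak value near the jump stay around 1.18 instead of settling at 1.

```python
import numpy as np

# The square wave sign(sin x) has Fourier series (4/pi) * sum over odd k
# of sin(k x)/k. Its partial sums overshoot near the discontinuity at 0,
# and the overshoot height does not shrink as terms are added.
x = np.linspace(1e-4, np.pi - 1e-4, 20000)

def square_partial(n_terms):
    """Partial Fourier sum of the square wave with n_terms odd harmonics."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

for n in (10, 100, 1000):
    # The true square wave is 1 on (0, pi); the peak stays near 1.18.
    print(n, square_partial(n).max())
```

The peak keeps its height but moves closer to the discontinuity, which is consistent with the series converging in the mean-square sense rather than uniformly.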

> If I'm not mistaken, "orthogonal" means that one function can't be
> constructed from the others (so there's no duplication of
	
	Not quite. You can have a function that cannot be obtained from the
others, but not be orthogonal to them.

	In function spaces, a scalar (or inner) product is defined, akin to the
one that you get with vectors in space. You take two functions,
calculate their scalar product, and get a scalar. Two functions are
orthogonal if their scalar product is 0.

	The analogy with vectors is a good one. Think in 3-D space: You can
easily get unit vectors from which you can construct any 3 dimensional
vector, but with none of these basis vectors being orthogonal. An
example would be:

A=[1 1 0], B=[0 1 1], C = [1 0 1]

(all divided by sqrt(2) for normalization). One wants orthogonality in
functions for the same reason you do with vectors: Computational
convenience.
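
A quick sketch of that example, assuming NumPy (the vector v and the names are illustrative): A, B, C span 3-D space but are not orthogonal, so decomposing a vector requires solving a linear system instead of taking three simple dot products.

```python
import numpy as np

# The non-orthogonal (but linearly independent) set from the example above.
A = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
B = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
C = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)

M = np.column_stack([A, B, C])
print(np.dot(A, B))        # 0.5: not orthogonal
print(np.linalg.det(M))    # nonzero: linearly independent, hence a basis

# With an orthogonal basis the coefficients would just be dot products
# with each basis vector; here we must solve M @ coeffs = v instead.
v = np.array([3.0, 1.0, 2.0])
coeffs = np.linalg.solve(M, v)
print(np.allclose(M @ coeffs, v))   # True: v reconstructed exactly
```

This is the "computational convenience" in a nutshell: orthogonality turns decomposition into independent one-dimensional projections.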

	There are a number of ways to define a scalar product of 2 functions,
but they usually have some properties:

http://mathworld.wolfram.com/InnerProduct.html

(For function spaces, things are often complex (vs real), and so you use
the "alternate" rule given further down the page in place of rule 3).
For systems I occasionally deal with, the inner product of 2 functions
is given by:

(f, g) = Integral(f g*), where g* denotes the complex conjugate of g.
Occasionally a weighting function is included as well.

> So, like, how do you tell if two functions are orthogonal? And how do
> you tell when a set of them is complete?

	To test for orthogonality, compute the scalar product. Or decompose
each function into its components with respect to a known basis (e.g.
sines and cosines) and then take the inner product of those (hard to
describe here, but if you can do the decomposition, the inner product is
trivial, as you know that the sines and cosines are orthogonal).

	Forgot how to test for completeness...

	I may not have described things well here, but once you "get" it, it's
quite illuminating. Basically an extension of how you dealt with vectors
in space. All concepts there have analogs in function spaces.

-- 
ASCII stupid question... get a stupid ANSI!


                    /\  /\               /\  /
                   /  \/  \ u e e n     /  \/  a w a z
                       >>>>>>mue### [at] nawazorg<<<<<<
                                   anl



From: Invisible
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 04:20:39
Message: <4742a6e7$1@news.povray.org>
>> For example, the Fourier transform allows you to construct any 
>> function from sine and cosine functions.
>>
> 
> This isn't quite true over the reals, even assuming you're only looking 
> for functions with a given period.  For example the function which is 
> zero everywhere except being 1 at a single point will generate the same 
> Fourier representation as the constant zero function since it will have 
> the same integrals.

O RLY?

My DSP textbook says the Fourier transform of the delta function yields 
an amplitude of 1 for all frequencies. (Whereas the Fourier transform of 
a zero signal would be a zero signal.)

> I think (no proof) that you can reconstruct any function up to the 
> addition of a function which is nonzero over an area of zero `volume' 
> though (assuming you don't count things like a delta functions).  Not 
> that it matters for what you're doing of course, but you seem like the 
> sort of chap who might find it interesting.
> 
>>
>> So, like, how do you tell if two functions are orthogonal? And how do 
>> you tell when a set of them is complete?
> 
> What are you using these function for?  There may be better or worse 
> ways to do things depending on what you want.

JPEG compression works by decomposing an image into a set of (2D) cosine 
waves, and then quantising the data. In my opinion, cosines are not 
actually a particularly good choice for this. (The Gibbs phenomenon is 
very ugly to look at.) So I'd like to try it with other functions - but 
first I need to work out how...

(A similar thing could be said about most [lossy] audio codecs known to 
man. Of course, Gibbs isn't a problem there at all; the problem becomes 
time invariance...)
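
A minimal sketch of that JPEG-style step, assuming NumPy (the 8x8 block size matches JPEG, but the random test block and the flat quantisation step are illustrative stand-ins for the real quantisation tables): build the orthonormal 2D DCT-II basis, transform a block, coarsely quantise the coefficients, and reconstruct.

```python
import numpy as np

# Orthonormal DCT-II matrix: each row is one cosine basis vector.
N = 8
n = np.arange(N)
D = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2.0 / N)
D[0, :] /= np.sqrt(2.0)   # DC row gets the smaller scale factor

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(N, N)).astype(float)

coeffs = D @ block @ D.T                      # forward 2D DCT
step = 16.0
quantised = np.round(coeffs / step) * step    # crude flat quantisation
rebuilt = D.T @ quantised @ D                 # inverse 2D DCT

print(np.abs(block - rebuilt).max())          # small reconstruction error
```

Because D is orthogonal (the 2D cosine basis functions are orthonormal), the transform is exactly invertible without the quantisation step; swapping in a different orthogonal basis means replacing D while keeping the rest of the pipeline unchanged, which is the experiment being proposed here.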



From: Warp
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 04:51:30
Message: <4742ae22@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> JPEG compression works by decomposing an image into a set of (2D) cosine 
> waves, and then quantising the data. In my opinion, cosines are not 
> actually a particularly good choice for this. (The Gibbs phenomenon is 
> very ugly to look at.) So I'd like to try it with other functions - but 
> first I need to work out how...

  So you are going to reinvent JPEG2000 then?-)

-- 
                                                          - Warp



From: scott
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 04:56:02
Message: <4742af32$1@news.povray.org>
>> This isn't quite true over the reals, even assuming you're only looking 
>> for functions with a given period.  For example the function which is 
>> zero everywhere except being 1 at a single point will generate the same 
>> Fourier representation as the constant zero function since it will have 
>> the same integrals.
>
> O RLY?
>
> My DSP textbook says the Fourier transform of the delta function yields an 
> amplitude of 1 for all frequencies. (Whereas the Fourier transform of a 
> zero signal would be a zero signal.)

A function that is 0 everywhere except for f(0)=1 is not a delta function. 
A delta function has f(0)=infinity and when integrated it gives a non-zero 
value (i.e. it has area, unlike the function Kevin described).

> JPEG compression works by decomposing an image into a set of (2D) cosine 
> waves, and then quantising the data. In my opinion, cosines are not 
> actually a particularly good choice for this. (The Gibbs phenomenon is 
> very ugly to look at.) So I'd like to try it with other functions - but 
> first I need to work out how...

Have you read:

http://en.wikipedia.org/wiki/Wavelet



From: Invisible
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 04:57:29
Message: <4742af89$1@news.povray.org>
Warp wrote:
> Invisible <voi### [at] devnull> wrote:
>> JPEG compression works by decomposing an image into a set of (2D) cosine 
>> waves, and then quantising the data. In my opinion, cosines are not 
>> actually a particularly good choice for this. (The Gibbs phenomenon is 
>> very ugly to look at.) So I'd like to try it with other functions - but 
>> first I need to work out how...
> 
>   So you are going to reinvent JPEG2000 then?-)

Nah. JPEG2000 uses wavelets - a technology I have repeatedly failed to 
comprehend. I'm just using a decomposition with a different set of basis 
functions. ;-)

[We casually overlook the fact that JPEG doesn't work in the RGB colour 
space, it works in some weird custom space to better accommodate the 
peculiarities of the human eye. There are also several layers of plain 
ordinary lossless compression happening in addition to the lossy DCT 
quantisation step. And... basically JPEG is complicated. I'm only 
trying to frobnicate one tiny step in the process.]



From: Invisible
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 05:01:12
Message: <4742b068$1@news.povray.org>
scott wrote:
>>> This isn't quite true over the reals, even assuming you're only 
>>> looking for functions with a given period.  For example the function 
>>> which is zero everywhere except being 1 at a single point will 
>>> generate the same Fourier representation as the constant zero 
>>> function since it will have the same integrals.
>>
>> O RLY?
>>
>> My DSP textbook says the Fourier transform of the delta function 
>> yields an amplitude of 1 for all frequencies. (Whereas the Fourier 
>> transform of a zero signal would be a zero signal.)
> 
> A function that is 0 everywhere except for f(0)=1 is not a delta 
> funciton. A delta function has f(0)=infinity and when integrated it 
> gives a non-zero value (ie it has area, unlike the function Kevin 
> described).

http://en.wikipedia.org/wiki/Kronecker_delta
http://en.wikipedia.org/wiki/Dirac_delta_function

So you see, we are in fact both right, for a suitable definition of 
"delta function". (I'm going by a DSP textbook.)

The Dirac delta doesn't interest me - my signals won't ever contain 
infinity.

> Have you read:
> 
> http://en.wikipedia.org/wiki/Wavelet

I have. Multiple times. I still don't understand.



From: Warp
Subject: Re: A question of pure mathematics
Date: 20 Nov 2007 05:05:32
Message: <4742b16c@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> [We casually overlook the fact that JPEG doesn't work in the RGB colour 
> space, it works in some weird custom space to better accommodate the 
> peculiarities of the human eye.

  It's not a "custom space". It's the YCbCr color space, which has been
used in television and video for like 50 years.

-- 
                                                          - Warp




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.