> Can somebody who knows what they're talking about confirm this?
>
> If I'm understanding this right, if I can find a complete set of
> orthogonal functions, I should be able to construct any possible
> function as a linear combination of them.
>
That's just what being complete means. IIRC, orthogonality adds the
property that the linear combination is unique, i.e. you have a true
basis and not merely a spanning set, as it seems to be called in English.
> If I'm not mistaken, "orthogonal" means that one function can't be
> constructed from the others (so there's no duplication of
> "information"), and "complete" just means you've got all the functions
> you need.
>
> So, like, how do you tell if two functions are orthogonal? And how do
> you tell when a set of them is complete?
Orthogonal means only one precise thing: their scalar product is 0.
Whether two functions are orthogonal really depends on the scalar
product you choose, and in turn on the set of functions you consider.
The common choice, for sufficiently well-behaved functions on an
interval [a, b], is:
f.g = \int_a^b f(x) g(x) dx, where . is the scalar product.
That's how you prove that the sine functions used in Fourier series
are orthogonal.
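As a quick numerical sanity check (a sketch, not a proof: the function names are mine, and the integral is approximated with a simple midpoint rule):

```python
import math

def inner(f, g, a=0.0, b=2*math.pi, n=10000):
    # Approximate the scalar product <f, g> = integral of f(x)*g(x)
    # over [a, b] using the midpoint rule with n subintervals.
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) * g(a + (i + 0.5)*h) for i in range(n)) * h

s2 = lambda x: math.sin(2*x)
s3 = lambda x: math.sin(3*x)

print(abs(inner(s2, s3)))       # close to 0: sin(2x) and sin(3x) are orthogonal
print(inner(s2, s2))            # close to pi: a sine is not orthogonal to itself
```

Distinct sines give a scalar product near 0, while a sine against itself gives a nonzero value (its squared "length"), which is exactly what the Fourier coefficients divide by.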
Proving that a set is complete ordinarily means showing how to build
the decomposition of any given element of the space you consider. In a
finite-dimensional space it is easy enough: check that you have as many
vectors as the dimension of the space, and if they are linearly
independent (orthogonal to each other, for example) you have a complete
set...
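Here is the finite-dimensional case sketched in Python (the basis vectors are my own example): three pairwise-orthogonal vectors in a 3-dimensional space, so any vector decomposes on them, and orthogonality makes the coefficients easy to compute by projection.

```python
def dot(u, v):
    # Ordinary scalar product of two tuples.
    return sum(a*b for a, b in zip(u, v))

# Three pairwise-orthogonal vectors in R^3: check dot(b1, b2) == 0, etc.
# Orthogonal => linearly independent; 3 independent vectors in a
# 3-dimensional space => the set is complete (a basis).
basis = [(1, 1, 0), (1, -1, 0), (0, 0, 1)]

v = (2.0, 3.0, 4.0)

# Coefficient along each basis vector b: (v.b) / (b.b)
coeffs = [dot(v, b) / dot(b, b) for b in basis]

# Rebuild v as the linear combination to confirm the decomposition works.
rebuilt = tuple(sum(c*b[i] for c, b in zip(coeffs, basis)) for i in range(3))
print(rebuilt)   # recovers (2.0, 3.0, 4.0)
```

The same projection formula, with the integral scalar product above, is how Fourier coefficients are computed.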
--
Vincent