Suppose you have a weight attached to a spring. Each time you move the
weight, the spring pushes or pulls it in the opposite direction, trying
to return it to its starting point.
If we assume the spring follows Hooke's law, then the force increases
linearly as the displacement increases. If we assume that the spring
constant is unity, then we have
force = -displacement
Now force = mass * acceleration, or (more usefully) acceleration = force
/ mass. If we assume that the mass of our weight is unity, then we have
acceleration = force
Putting these together, if we assume that f(t) computes the displacement
at time t, then we have
f''(t) = -f(t)
In other words, for any function describing the displacement of the
weight over time, we must have that the second derivative of this
function is equal to the additive inverse of itself. Here we have ended
up with a differential equation describing a property that the weight's
motion must obey. But that still doesn't actually /tell/ us how the
weight moves. We just know what property to look for.
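As a quick sanity check, here is a minimal numerical sketch of this setup in Python (the step size, run length and starting displacement are arbitrary choices of mine, not anything the physics dictates):

dt = 0.001
x, v = 1.0, 0.0              # start displaced by 1 unit, at rest
for step in range(10001):
    if step % 2000 == 0:
        print("t = %5.2f   displacement = %+.3f" % (step * dt, x))
    a = -x                   # force = -displacement, and mass = 1
    v += a * dt
    x += v * dt

The printed displacement swings back and forth between positive and negative values, which is about all we can say before we know an actual closed-form solution.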
It is not immediately obvious what functions might have this property. A
moment's reflection reveals that
f(t) = 0
is a perfectly correct solution, since then f''(t) = 0 also, and clearly
0 = -0. This corresponds to the weight remaining stationary for all
eternity - a physically valid outcome, but not a very interesting one.
A few moments of further contemplation reveal that the derivative of sin
is cos, and the derivative of cos is -sin. Thus, if we have
f(t) = sin t
then it would follow that
f''(t) = -sin t
which satisfies the required differential equation. By a nearly
identical chain of reasoning, f(t) = cos t works just as well.
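If you don't fancy doing the differentiation by hand, here is a small sympy sketch that checks both candidates (sympy is just one convenient choice of tool here):

import sympy as sp

t = sp.symbols('t')
for f in (sp.sin(t), sp.cos(t)):
    # f''(t) + f(t) should simplify to zero if f solves f'' = -f
    print(f, sp.simplify(sp.diff(f, t, 2) + f) == 0)

It prints True for both.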
Both of these solutions correspond to the fact that if you pull the
weight back and then let go, it oscillates back and forth. (We're
ignoring friction, which IRL would eventually slow the system to a halt.)
Now suppose that by some bizarre mechanism, the spring actually pushes
the weight in the direction it's already going, rather than back against
it. Then our differential equation becomes
f''(t) = f(t)
Again, f(t) = 0 is one valid solution. But supposing the weight /does/
ever move, it seems obvious that it would accelerate forever, without
limit. And indeed, any elementary calculus textbook will reveal that
there is essentially /one/ function (up to a constant factor) whose
derivative equals the original
function. This function is exp. So if we write
f(t) = exp t
then it follows that /all/ derivatives of f (including f'') would equal
f. In other words, this solves our differential equation.
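The same kind of sympy spot-check works here too, and it also confirms the "up to a constant factor" caveat, since c*exp(t) passes as well:

import sympy as sp

t, c = sp.symbols('t c')
for f in (sp.exp(t), c * sp.exp(t)):
    # f''(t) - f(t) should simplify to zero if f solves f'' = f
    print(f, sp.simplify(sp.diff(f, t, 2) - f) == 0)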
So here we have two differential equations. One represents a system with
negative feedback, and produces oscillations in terms of sin and/or cos.
The other represents a system with positive feedback, which grows
exponentially. As you'd expect, the two systems behave very differently,
and hence their solutions look very different too.
...and then we recall Euler's relation: If you take the Taylor series
expansion of exp x
1 + x + x^2/2! + x^3/3! + x^4/4! + ...
and replace x with xi
1 + xi - x^2/2! - x^3i/3! + x^4/4! + ...
it splits into two series, one real, and one imaginary. And it /just so
happens/ that these series exactly match the Taylor series for sin and cos:
exp xi = cos x + i sin x
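You can watch this happen numerically by summing a truncated version of the series and comparing it against cos x + i sin x (the test value and the number of terms are arbitrary choices):

import cmath
from math import factorial

x = 0.7                       # arbitrary test value
z = 1j * x
series = sum(z**n / factorial(n) for n in range(20))   # first 20 terms of exp(z)
print(series)
print(cmath.cos(x) + 1j * cmath.sin(x))                # agrees to many decimal places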
In fact, sin can be expressed in terms of exp, and exp can be expressed
in terms of sin:
sin(x) = (exp(xi) - exp(-xi)) / (2i)
exp(x) = sin(xi)/i + cos(xi)
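A quick numerical spot-check of both identities at an arbitrary point:

import cmath

x = 1.3
print(cmath.sin(x), (cmath.exp(1j*x) - cmath.exp(-1j*x)) / 2j)
print(cmath.exp(x), cmath.sin(1j*x)/1j + cmath.cos(1j*x))

Each line prints two matching values (up to floating-point noise).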
Now I can't help but wonder... Is this a coincidence? Or some deep
fundamental result? Is there some way of writing a solution to an
equation like
f''(t) = k f(t)
such that you get exp() or sin() depending on the sign of k?
On 26/07/2012 12:51 PM, Invisible wrote:
> Now suppose that by some bizarre mechanism, the spring actually pushes
> the weight in the direction it's already going, rather than back against
> it. Then our differential equation becomes
>
> f''(t) = f(t)
>
> Again, f(t) = 0 is one valid solution. But supposing the weight /does/
> ever move, it seems obvious that it would accelerate forever, without
> limit. And indeed, any elementary calculus textbook will reveal that
> there is essentially /one/ function (up to a constant factor) whose
> derivative equals the original
> function. This function is exp. So if we write
>
> f(t) = exp t
>
> then it follows that /all/ derivatives of f (including f'') would equal
> f. In other words, this solves our differential equation.
It gets better. I just looked it up, and apparently the derivative of
sinh is cosh, and the derivative of cosh is sinh again. So we have
f(t) = sinh(t)
f''(t) = sinh(t)
Or, equally well,
f(t) = cosh(t)
f''(t) = cosh(t)
So the solutions to one equation are sin and cos, and to the other are
sinh and cosh. That's really pretty...
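A sympy spot-check against f'' = f, for anyone who wants to see it verified mechanically:

import sympy as sp

t = sp.symbols('t')
for f in (sp.sinh(t), sp.cosh(t)):
    # f''(t) - f(t) should simplify to zero if f solves f'' = f
    print(f, sp.simplify(sp.diff(f, t, 2) - f) == 0)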
On 26/07/2012 13:51, Invisible wrote:
> Now I can't help but wonder... Is this a coincidence? Or some deep
> fundamental result? Is there some way of writing a solution to an
> equation like
>
> f''(t) = k f(t)
>
> such that you get exp() or sin() depending on the sign of k?
what about f''(t) = exp(abs(sqrt(k)).t) ?
when k = -1, abs(sqrt(k)) is i
when k = 1, it's 1.
On 26/07/2012 05:22 PM, Le_Forgeron wrote:
> On 26/07/2012 13:51, Invisible wrote:
>> Now I can't help but wonder... Is this a coincidence? Or some deep
>> fundamental result? Is there some way of writing a solution to an
>> equation like
>>
>> f''(t) = k f(t)
>>
>> such that you get exp() or sin() depending on the sign of k?
>
> what about f''(t) = exp(abs(sqrt(k)).t) ?
>
> when k = -1, abs(sqrt(k)) is i
> when k = 1, it's 1.
What do you need abs() for?
On 26/07/2012 18:27, Orchid Win7 v1 wrote:
> On 26/07/2012 05:22 PM, Le_Forgeron wrote:
>> On 26/07/2012 13:51, Invisible wrote:
>>> Now I can't help but wonder... Is this a coincidence? Or some deep
>>> fundamental result? Is there some way of writing a solution to an
>>> equation like
>>>
>>> f''(t) = k f(t)
>>>
>>> such that you get exp() or sin() depending on the sign of k?
>>
>> what about f''(t) = exp(abs(sqrt(k)).t) ?
>>
>> when k = -1, abs(sqrt(k)) is i
>> when k = 1, it's 1.
>
> What do you need abs() for?
well, there are two values for the square root of a number from R (with
only one exception: 0)
On 7/26/2012 4:51 AM, Invisible wrote:
>
> Now I can't help but wonder... Is this a coincidence? Or some deep
> fundamental result? Is there some way of writing a solution to an
> equation like
Everything you've mentioned, including the sinh and cosh stuff, is a
pretty immediate consequence of how e^(a*x) differentiates and the fact
that solutions to the ODEs you're looking at are closed under linear
combinations. Think in terms of roots of unity and you should see how
it can easily be extended to nth derivatives.
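Here is a rough sketch of that remark (the choice of n = 3 and k = 1 below is just an arbitrary example): exp(a*t) has n-th derivative a**n * exp(a*t), so any a with a**n = k gives a solution of f^(n) = k*f, and the n roots give you n of them.

import sympy as sp

t, z = sp.symbols('t z')
n, k = 3, 1                       # arbitrary example: third derivative, k = 1
for a in sp.roots(z**n - k, z):   # the three cube roots of 1
    f = sp.exp(a * t)
    # the n-th derivative of exp(a*t) is a**n * exp(a*t), so this vanishes when a**n = k
    print(a, sp.simplify(sp.diff(f, t, n) - k * f) == 0)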
On 26/07/2012 02:58 PM, Invisible wrote:
> So the solutions to one equation are sin and cos, and to the other are
> sinh and cosh. That's really pretty...
Of course, sin and sinh look totally unrelated on the real line. But in
the complex plane, one is a trivially rotated version of the other. (The
same goes for cos versus cosh.)
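Concretely, the rotation amounts to sin(ix) = i sinh(x) and cos(ix) = cosh(x), which is easy to spot-check numerically (arbitrary test value):

import cmath, math

x = 0.9
print(cmath.sin(1j*x), 1j*math.sinh(x))   # sin(ix) = i*sinh(x)
print(cmath.cos(1j*x), math.cosh(x))      # cos(ix) = cosh(x)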
Wolfram|Alpha claims that f''=f actually has "the" solution
f(t) = A exp(t) + B exp(-t)
I notice the following:
* If A=B=0 then we have f(t)=0.
* With A=1 and B=0, we have f(t)=exp(t).
* Setting A=B=1/2 gives us f(t)=cosh(t).
* Finally, A=1/2 and B=-1/2 gives us f(t)=sinh(t).
More tantalisingly, sinh and cosh are defined as
sinh(x) = 1/2 [exp(x) - exp(-x)]
cosh(x) = 1/2 [exp(x) + exp(-x)]
and sin and cos can be similarly defined as
sin(x) = 1/(2i) [exp(ix) - exp(-ix)]
cos(x) = 1/2 [exp(ix) + exp(-ix)]
This is the "trivial rotation in the complex plane" which I mentioned.
So perhaps the most general solution to f''=-f is given by
f(t) = A exp(it) + B exp(-it)
It's certainly an interesting idea.
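A symbolic check of that guess, leaving A and B as free symbols (again just a sympy sketch):

import sympy as sp

t, A, B = sp.symbols('t A B')
f = A * sp.exp(sp.I * t) + B * sp.exp(-sp.I * t)
# f'' + f should vanish for any constants A and B
print(sp.simplify(sp.diff(f, t, 2) + f) == 0)
# the particular A, B from the definitions above recover sin and cos
print(sp.simplify(f.subs({A: 1/(2*sp.I), B: -1/(2*sp.I)}) - sp.sin(t).rewrite(sp.exp)) == 0)
print(sp.simplify(f.subs({A: sp.S(1)/2, B: sp.S(1)/2}) - sp.cos(t).rewrite(sp.exp)) == 0)

All three lines print True.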
Getting back to physics for a moment, we know that in the case of
f''=-f, one solution is f(t) = sin(t). But would, say, f(t) = sin(2t) work?
Looking up my table of derivatives, it appears that we have
f(t) = sin(2t)
f'(t) = 2cos(2t)
f''(t) = -4sin(2t)
So that doesn't work at all. It seems our system can oscillate at
exactly one frequency only. But how about different amplitudes?
f(t) = 2sin(t)
f'(t) = 2cos(t)
f''(t) = -2sin(t)
OK, so that works. It seems our system can oscillate at different
amplitudes, but only one frequency. So how about different phases? We
know that both sin and cos work, but how about a combination of them?
f(t) = sin(t) + cos(t)
f'(t) = cos(t) - sin(t)
f''(t) = -sin(t) - cos(t)
OK, so that seems to work. So it appears that we have two basic
solutions, sin(t) and cos(t), and /any/ linear combination of them is
also a solution. That's interesting.
Does the same hold for the f''=f case? Let's look.
f(t) = sinh(2t)
f'(t) = 2cosh(2t)
f''(t) = 4sinh(2t)
That doesn't work, as before.
f(t) = 2sinh(t)
f'(t) = 2cosh(t)
f''(t) = 2sinh(t)
That still works.
f(t) = sinh(t) + cosh(t)
f'(t) = cosh(t) + sinh(t)
f''(t) = sinh(t) + cosh(t)
So again, it appears that any linear combination of sinh(t) and cosh(t)
should be a solution.
Alternatively, we can say that in one case the solutions are exp(x) and
exp(-x) and in the other they are exp(ix) and exp(-ix), and that any
linear combination of these works. This is slightly more fiddly, since
in the second case we need to keep the solutions purely real (so the
multiplication factors sometimes become imaginary).
Either way, it's a fascinating result...