Invisible wrote:
> Wouldn't that be incredibly unstable, numerically?
How could it be? If you're taking the limit of something that's going to a
finite value in a continuous curve, how could your calculation of the value
closer to the limit have more error than the error calculated farther from
the limit?
--
Darren New, San Diego CA, USA (PST)
Forget "focus follows mouse." When do
I get "focus follows gaze"?
Post a reply to this message
|
Invisible wrote:
>> If it's a limit, just compute it for numbers as close as possible to
>> the limiting value (or really really big numbers if the limiting value
>> is infinite). The closer (or the bigger) the number you compute it
>> for the closer your answer will be.
>
> Wouldn't that be incredibly unstable, numerically?
How much of a problem this is depends on the limit. For most limits that
I've ever numerically computed it's worked fine. If you're taking the
limit of a series (as opposed to the limit of a function) then it'll
almost certainly be fine.
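For example, here's a quick sketch of what I mean in Python (my own toy example, not anything rigorous): estimating the limit of (1 + 1/n)^n as n goes to infinity, which is e, just by plugging in bigger and bigger n.

```python
import math

# Estimate lim_{n -> inf} (1 + 1/n)^n, which is e, by plugging in
# progressively larger n and watching the values settle down.
for n in [10, 1000, 100000]:
    print(n, (1 + 1 / n) ** n)

approx = (1 + 1 / 1_000_000) ** 1_000_000
print(abs(approx - math.e))  # error is on the order of 1e-6
```

Nothing blows up here; the approximation just gets steadily better as n grows.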
>> If it's an integral do something like this:
>> http://en.wikipedia.org/wiki/Riemann_sum
>
> I see. (Although I'm still not sure how you compute an infinite integral
> this way...)
Combine this definition with how you compute the limit of an infinite
sum. So you'll have two approximations: one from approximating the
integral with an infinite sum, and another from approximating the
infinite sum with a finite one. If you're doing this, it'd be wise to
prove that these approximations both converge to the correct value.
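To make that concrete, here's a rough Python sketch (my own example, with the cutoff L and step count chosen arbitrarily): approximating the improper integral of e^(-x) over [0, infinity), whose exact value is 1, by first truncating the domain to [0, L] and then taking a Riemann sum on that interval.

```python
import math

def riemann(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n equal steps."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Approximation 1: cut the infinite domain [0, inf) down to [0, L].
# Approximation 2: replace the integral over [0, L] with a finite sum.
approx = riemann(lambda x: math.exp(-x), 0.0, 30.0, 100_000)
print(approx)  # close to the exact value, 1
```

Both steps introduce error (the truncation at L, and the step size h), which is why you want to know that each one converges to the right value.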
>>> But you can't compute an infinite product.
>>
>> If the product converges (which it does, otherwise you couldn't define
>> a number/function with it) then you can by definition get an
>> arbitrarily good approximation by computing the product of the first n
>> terms for a large enough n.
>
> I don't see how that is the case.
>
> If you have an infinite *sum*, then as long as the terms get
> progressively more tiny and never get larger again, you can disregard
> all the terms after a certain point. But if you're taking a *product*
> then any term, anywhere in the series could radically alter the final
> result.
The same is true for a sum, actually. The point of proving that an
infinite sum or product *converges* is to show that this doesn't happen.
Basically you show that the partial sums/products form a convergent
sequence (see the "Limit of a sequence" Wikipedia article I linked).
Then you know by definition that you can get a good approximation by
computing a large enough partial sum/product.
In the case of a sum this requires that the terms progressively get
closer to zero, for a product it'll require they get closer to one.
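As an illustration (again my own Python sketch), the Wallis product for pi/2 has factors that tend to one, and its partial products form a convergent (if slowly convergent) sequence:

```python
import math

# Partial products of the Wallis product:
#   pi/2 = (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)...
# Each paired factor (2k/(2k-1)) * (2k/(2k+1)) tends to 1, so the
# partial products settle down instead of being thrown around by
# late terms.
def wallis(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k) / (2 * k - 1) * ((2 * k) / (2 * k + 1))
    return p

for n in [10, 100, 10000]:
    print(n, wallis(n))  # creeps up toward pi/2 = 1.5707...
```

A late term far from one *would* radically alter the result, which is exactly why convergence requires that there aren't any.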
>> Out of curiosity have you ever had a calculus class?
>
> I've never had *any* maths class!
>
> (Unless you count what we did at school. This simply involved filling
> out hundreds of thousands of pages of long-division problems over a
> 7-year period...)
>
> Hypothetically I shouldn't be able to do algebra at all...
That's too bad; it seems like you would have greatly enjoyed being able
to do some math that was more advanced than long division.
Post a reply to this message
Darren New wrote:
> Invisible wrote:
>> Wouldn't that be incredibly unstable, numerically?
>
> How could it be? If you're taking the limit of something that's going
> to a finite value in a continuous curve, how could your calculation of
> the value closer to the limit have more error than the error calculated
> farther from the limit?
According to Google calculator, sin(1e-14)/1e-14 = 1, but
sin(1e-15)/1e-15 = 0.
Post a reply to this message
Kevin Wampler wrote:
>
> In the case of a sum this requires that the terms progressively get
> closer to zero, for a product it'll require they get closer to one.
>
*caveat: unless the product converges to zero.
Post a reply to this message
>>> Wouldn't that be incredibly unstable, numerically?
>>
>> How could it be? If you're taking the limit of something that's going
>> to a finite value in a continuous curve, how could your calculation of
>> the value closer to the limit have more error than the error
>> calculated farther from the limit?
>
> According to Google calculator, sin(1e-14)/1e-14 = 1, but
> sin(1e-15)/1e-15 = 0.
Indeed, this was going to be my example. When x is small, sin(x) is
approximately equal to x. But whatever, sin(x)/x when x is small
involves division by a tiny number - which is equivalent to
multiplication by a huge number. It's numerically unstable.
(Of course, the limit of sin(x)/x as x approaches zero is just 1, which
you don't need to "estimate" in the first place. But if you had a
similar formula which approximates some irrational quantity...)
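For what it's worth, here's a Python sketch of the same kind of breakdown (my example, using the related limit (1 - cos x)/x^2 -> 1/2 rather than sin(x)/x): for small x the subtraction 1 - cos(x) cancels catastrophically in double precision.

```python
import math

# (1 - cos x)/x^2 -> 1/2 as x -> 0, but the subtraction 1 - cos(x)
# loses nearly all its significant digits when x is small.
for x in [1e-2, 1e-4, 1e-8]:
    print(x, (1 - math.cos(x)) / x**2)
# By x = 1e-8, cos(x) rounds to exactly 1.0 in IEEE doubles, so the
# computed ratio collapses to 0.0 instead of approaching 0.5.
```

So pushing x closer to the limit point can make the computed value *worse*, not better.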
Post a reply to this message
>>> If it's a limit, just compute it for numbers as close as possible to
>>> the limiting value (or really really big numbers if the limiting
>>> value is infinite). The closer (or the bigger) the number you
>>> compute it for the closer your answer will be.
>>
>> Wouldn't that be incredibly unstable, numerically?
>
> How much of a problem this is depends on the limit.
True enough.
>>>> But you can't compute an infinite product.
>>>
>>> If the product converges then you can by definition get an
>>> arbitrarily good approximation by computing the product of the first
>>> n terms for a large enough n.
>>
>> I don't see how that is the case.
>>
>> If you have an infinite *sum*, then as long as the terms get
>> progressively more tiny and never get larger again, you can disregard
>> all the terms after a certain point. But if you're taking a *product*
>> then any term, anywhere in the series could radically alter the final
>> result.
>
> The same is true for a sum, actually. The point of proving that an
> infinite sum or product *converges* is to show that this doesn't happen.
> Basically you show that the partial sums/products form a convergent
> sequence (see the "Limit of a sequence" Wikipedia article I linked).
> Then you know by definition that you can get a good approximation by
> computing a large enough partial sum/product.
>
> In the case of a sum this requires that the terms progressively get
> closer to zero, for a product it'll require they get closer to one.
I guess it's just the case that an infinite sum or product can be
convergent, and yet converge really, *really* slowly...
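A classic example of that (a quick Python check of my own): the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... converges, but you gain only about one digit of accuracy for every tenfold increase in the number of terms.

```python
import math

def leibniz(n):
    """Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

for n in [10, 1000, 100000]:
    print(n, abs(4 * leibniz(n) - math.pi))  # error shrinks like 1/n
```

Convergent, yes - but useless in practice compared to faster formulas.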
>>> Out of curiosity have you ever had a calculus class?
>>
>> I've never had *any* maths class!
>>
>> (Unless you count what we did at school. This simply involved filling
>> out hundreds of thousands of pages of long-division problems over a
>> 7-year period...)
>>
>> Hypothetically I shouldn't be able to do algebra at all...
>
> That's too bad, it seems like you would have greatly enjoyed being able
> to do some math that was more advanced than long division.
Well, I *did* go to a school for stupid people, after all...
Post a reply to this message
Orchid XP v8 wrote:
> Nicolas Alvarez wrote:
>> Invisible wrote:
>>> http://office.microsoft.com/en-us/excel/HP052090051033.aspx
>>
>> Nice, that link crashed my browser :) Clearly Microsoft and KDE don't
>> like each other.
>
> Crashed it? Or just upset it?
Crashed. A khtml::BidiContext object supposedly at memory address
0xffffffffffffffff had one of its member methods called. *Boom*.
I filed a bug: http://bugs.kde.org/show_bug.cgi?id=225954
Post a reply to this message
>>>> http://office.microsoft.com/en-us/excel/HP052090051033.aspx
>>> Nice, that link crashed my browser :) Clearly Microsoft and KDE don't
>>> like each other.
>> Crashed it? Or just upset it?
>
> Crashed. A khtml::BidiContext object supposedly at memory address
> 0xffffffffffffffff had one of its member methods called. *Boom*.
>
> I filed a bug: http://bugs.kde.org/show_bug.cgi?id=225954
OK, that's not good...
Post a reply to this message