Warp wrote:
> I was curious to see how much of a difference sqrt() really makes compared
> to avoiding it, so I made this C++ file:
> I compiled with "-O3 -march=native" and got this result:
>
> circlesCollide1: 200000000 iterations took 16.23 seconds.
> circlesCollide2: 200000000 iterations took 5.42 seconds.
I would imagine this sort of thing varies by processor, but yeah... I'm
often curious to know how "expensive" various math operations are in
relation to each other. (Of course, real programs are also affected by
cache issues and which combinations of math ops you perform together and
branch prediction issues and so forth... But typically you can't do much
about that.)
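For reference, the comparison is presumably between two functions along these lines (my own reconstruction; Warp's actual file isn't quoted above): the first takes the real distance with sqrt(), the second compares the squared distance against the squared sum of the radii.

#include <cmath>

struct Circle { double x, y, r; };

// With sqrt(): compute the actual distance between the centres.
bool circlesCollide1(const Circle& a, const Circle& b)
{
    double dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy) <= a.r + b.r;
}

// Without sqrt(): compare the squared distance with the squared radius sum.
bool circlesCollide2(const Circle& a, const Circle& b)
{
    double dx = a.x - b.x, dy = a.y - b.y;
    double rsum = a.r + b.r;
    return dx * dx + dy * dy <= rsum * rsum;
}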
Invisible wrote:
> If you're that hell-bent on avoiding the square root, it can be
> approximated as follows:
>
> Suppose that the most significant non-zero digit of x is in the 2^n
> place. Let y = 2^(n/2). Now Sqrt(x) is approximately x / y.
>
> Usually this estimated value is greater than the true square root (but
> always by less than 45%). However, exact powers of 2 seem to come out
> slightly below the true value. (By about 10^-16.) I don't know if that's
> a glitch in my test program or what... It shouldn't be insurmountable
> though.
One would have to be very hell-bent indeed to employ, in lieu of sqrt(),
functions that are probably just as computationally expensive, like ln()
and exp(). Unless you have a quick way of getting n and 2^(n/2) without
ln() and exp(), you'll need them.
Now if you know the internal representation of your floats (or doubles),
you can slap together a union like
union MajorKludge {
    double dvalue;
    char carray[sizeof(double)];
};
and get both n and 2^(n/2) using some very simple bitwise logical
operations (as well as code whose portability will be highly suspect).
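For illustration, a minimal sketch of that idea, assuming IEEE 754 binary64 doubles and punning through a 64-bit integer instead of a char array (the helper names are mine; reading the inactive union member is fine in C but formally undefined behaviour in C++, so the portability caveat stands):

#include <stdint.h>

// Assumes IEEE 754 binary64: 1 sign bit, 11 exponent bits (bias 1023),
// 52 mantissa bits.  Also assumes x is positive and normalised.
union ExponentKludge {
    double   dvalue;
    uint64_t bits;
};

// n such that the most significant bit of x sits in the 2^n place.
int msb_place(double x)
{
    union ExponentKludge k;
    k.dvalue = x;
    return (int)((k.bits >> 52) & 0x7FF) - 1023;
}

// Build 2^e exactly by writing the biased exponent field directly
// (zero mantissa gives an exact power of two).
double pow2(int e)
{
    union ExponentKludge k;
    k.bits = (uint64_t)(e + 1023) << 52;
    return k.dvalue;
}

With those two, the estimate is just x / pow2(msb_place(x) / 2); note that halving an odd n with integer division widens the error, to roughly a factor of two in the worst case.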
Regards,
John
John VanSickle wrote:
> Invisible wrote:
>
>> If you're that hell-bent on avoiding the square root, it can be
>> approximated as follows:
>>
>> Suppose that the most significant non-zero digit of x is in the 2^n
>> place. Let y = 2^(n/2). Now Sqrt(x) is approximately x / y.
>>
>> Usually this estimated value is greater than the true square root (but
>> always by less than 45%). However, exact powers of 2 seem to come out
>> slightly below the true value. (By about 10^-16.) I don't know if
>> that's a glitch in my test program or what... It shouldn't be
>> insurmountable though.
>
> One would have to be very hell-bent indeed to employ, in lieu of sqrt(),
> functions that are probably just as computationally expensive, like ln()
> and exp(). Unless you have a quick way of getting n and 2^(n/2) without
> ln() and exp(), you'll need them.
>
> Now if you know the internal representation of your floats (or doubles),
Indeed, the *only* reason you would ever use a construction like this is
that it's very fast and efficient to get at these numbers using the
binary representation of a float.
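For what it's worth, the same numbers can also be reached portably, with no ln()/exp() and no type punning, through std::frexp and std::ldexp, which only read and adjust the exponent field. Here is a sketch of the whole estimate built that way (the code and the odd-exponent fix-up are mine, not anything posted upthread; assumes x > 0):

#include <cmath>
#include <cstdio>

// Rough square-root estimate using only the exponent of x.
// frexp() gives x = m * 2^e with m in [0.5, 1); ldexp() scales by 2^k.
double approx_sqrt(double x)
{
    int e;
    std::frexp(x, &e);
    int n = e - 1;                          // MSB of x sits in the 2^n place
    bool odd = (n % 2 != 0);
    int  h   = odd ? (n - 1) / 2 : n / 2;   // floor(n / 2)
    double est = std::ldexp(x, -h);         // x / 2^floor(n/2)
    if (odd)
        est *= 0.70710678118654752440;      // remaining factor of 1/sqrt(2)
    return est;                             // >= sqrt(x), high by under 45%
}

int main()
{
    const double samples[] = { 2.0, 10.0, 100.0, 12345.0 };
    for (double x : samples)
        std::printf("x = %-8g estimate = %-10g sqrt = %g\n",
                    x, approx_sqrt(x), std::sqrt(x));
    return 0;
}

Both calls only manipulate the exponent, so this sidesteps the transcendental functions John objects to while staying within the language.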
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*