> No need to make AB unital for the cross product method (saves a square
> root and divide).
Actually, thinking about it, you wouldn't need to for the dot product
either. I should have seen that...
>> X = (1,0)
>> X . V - k = (1,0) . (-1,0) - 0 = -1
>
> X cross AB = (1,0) x (0,1) = (1)(1) - (0)(0) = 1
> (saves one subtraction compared to your method)
Oh, I see. You're only computing the Z ordinate of the cross product?
Well, I guess that could be quicker, yes...
> Oh, I see. You're only computing the Z ordinate of the cross product?
Well the other 2 ordinates are always zero, so there seems little point :-)
scott wrote:
>> Oh, I see. You're only computing the Z ordinate of the cross product?
>
> Well the other 2 ordinates are always zero, so there seems little point :-)
...that's true, of course. I had it in my head that you would only
compute the X and Y ordinates (which would take *more* effort than a
simple dot product). But, of course, that would be pointless.
I should go sit in the corner now. :'{
On 12/11/2009 02:43 AM, Slime wrote:
>> Have you tried this condition?
>>
>> ((distSq< (radius1Sq+radius2Sq))
>> or (4*radius1Sq*radius2Sq> (distSq - (radius1Sq+radius2Sq))^2))
>
> Smart. I feel stupid now! =) I see the equivalence, but I'm curious how long
> you took to come up with that. Was it obvious?
>
> - Slime
> [ http://www.slimeland.com/ ]
>
>
Well, it comes naturally from
distSq < radius1Sq + radius2Sq + 2 * radius1 * radius2
=>
distSq - (radius1Sq + radius2Sq ) < 2 * radius1 * radius2
=>
(distSq - (radius1Sq + radius2Sq ))^2 < 4 * radius1Sq * radius2Sq,
but the downside is that the squaring introduces an overflow risk you have to watch for.
bart <bart@home> wrote:
> Have you tried this condition?
> ((distSq < (radius1Sq+radius2Sq))
> or (4*radius1Sq*radius2Sq > (distSq - (radius1Sq+radius2Sq))^2))
I was curious to see how much of a difference sqrt() really makes compared
to avoiding it, so I made this C++ file:
//-------------------------------------------------------------
#include <cmath>
#include <algorithm>
bool circlesCollide1(double cx1, double cy1, double rs1,
double cx2, double cy2, double rs2)
{
const double vx = cx2 - cx1, vy = cy2 - cy1;
return std::sqrt(vx*vx + vy*vy) < std::sqrt(rs1) + std::sqrt(rs2);
}
bool circlesCollide2(double cx1, double cy1, double rs1,
double cx2, double cy2, double rs2)
{
const double vx = cx2 - cx1, vy = cy2 - cy1;
const double dist2 = vx*vx + vy*vy;
const double tmp = dist2 - (rs1 + rs2);
return dist2 < rs1 + rs2 || 4.0*rs1*rs2 > tmp*tmp;
}
//-------------------------------------------------------------
and then a main program to test the speed:
//-------------------------------------------------------------
#include <iostream>
#include <ctime>
bool circlesCollide1(double cx1, double cy1, double rs1,
double cx2, double cy2, double rs2);
bool circlesCollide2(double cx1, double cy1, double rs1,
double cx2, double cy2, double rs2);
int main()
{
double cx1 = -10.0, cy1 = -5.0, rs1_1 = 2.0*2.0, rs1_2 = 16.0*16.0;
double cx2 = 5.0, cy2 = 10.0, rs2_1 = 2.5*2.5, rs2_2 = 18.0*18.0;
const int iterations = 200000000;
std::clock_t iClock = std::clock();
for(int i = 0; i < iterations; ++i)
{
circlesCollide1(cx1, cy1, rs1_1, cx2, cy2, rs2_1);
circlesCollide1(cx1, cy1, rs1_2, cx2, cy2, rs2_2);
}
std::clock_t eClock = std::clock();
std::cout << "circlesCollide1: " << iterations << " iterations took "
<< double(eClock-iClock) / CLOCKS_PER_SEC << " seconds."
<< std::endl;
iClock = std::clock();
for(int i = 0; i < iterations; ++i)
{
circlesCollide2(cx1, cy1, rs1_1, cx2, cy2, rs2_1);
circlesCollide2(cx1, cy1, rs1_2, cx2, cy2, rs2_2);
}
eClock = std::clock();
std::cout << "circlesCollide2: " << iterations << " iterations took "
<< double(eClock-iClock) / CLOCKS_PER_SEC << " seconds."
<< std::endl;
}
//-------------------------------------------------------------
I compiled with "-O3 -march=native" and got this result:
circlesCollide1: 200000000 iterations took 16.23 seconds.
circlesCollide2: 200000000 iterations took 5.42 seconds.
(I didn't test that the functions give the correct result, though.)
--
- Warp
On 12/11/2009 02:43 AM, Slime wrote:
>> Have you tried this condition?
>>
>> ((distSq< (radius1Sq+radius2Sq))
>> or (4*radius1Sq*radius2Sq> (distSq - (radius1Sq+radius2Sq))^2))
>
> Smart. I feel stupid now! =) I see the equivalence, but I'm curious how long
> you took to come up with that. Was it obvious?
>
Hmm, it looks like this sqrt-free test will work equally well in 3D as a sphere-sphere intersection test.
> Suppose you have a line from point A to point B. Compute the vector AB,
> dot product V . A and call it k.
If you do this in 2D, minus the unnecessary normalization, the math is
equivalent to doing the 3D cross product while ignoring the components you
know to be 0.
inline vec_t Vec2Cross( const vec2_t v1, const vec2_t v2 )
{
return v1[0] * v2[1] - v1[1] * v2[0];
}
If the result is positive, v2 points to the "left" of v1, otherwise it
points to the "right".
- Slime
[ http://www.slimeland.com/ ]
Warp wrote:
> I was curious to see how much of a difference sqrt() really makes compared
> to avoiding it, so I made this C++ file:
> I compiled with "-O3 -march=native" and got this result:
>
> circlesCollide1: 200000000 iterations took 16.23 seconds.
> circlesCollide2: 200000000 iterations took 5.42 seconds.
I would imagine this sort of thing varies by processor, but yeah... I'm
often curious to know how "expensive" various math operations are in
relation to each other. (Of course, real programs are also affected by
cache issues and which combinations of math ops you perform together and
branch prediction issues and so forth... But typically you can't do much
about that.)
Invisible wrote:
> If you're that hell-bent on avoiding the square root, it can be
> approximated as follows:
>
> Suppose that the most significant non-zero digit of x is in the 2^n
> place. Let y = x / 2^(n/2). Now Sqrt(x) is approximately x / y.
>
> Usually this estimated value is greater than the true square root (but
> always by less than 45%). However, exact powers of 2 seem to come out
> slightly below the true value. (By about 10^-16.) I don't know if that's
> a glitch in my test program or what... It shouldn't be insurmountable
> though.
One would have to be very hell-bent indeed to employ, in lieu of sqrt(),
functions that are probably just as computationally expensive, like ln()
and exp(). Unless you have a quick way of getting n and 2^(n/2) without
ln() and exp(), you'll need them.
Now if you know the internal representation of your floats (or doubles),
you can slap together a union like
union MajorKludge {
double dvalue;
char carray[sizeof(double)];
};
and get both n and 2^(n/2) using some very simple bitwise logical
operations (as well as code whose portability will be highly suspect).
Regards,
John
John VanSickle wrote:
> Invisible wrote:
>
>> If you're that hell-bent on avoiding the square root, it can be
>> approximated as follows:
>>
>> Suppose that the most significant non-zero digit of x is in the 2^n
>> place. Let y = x / 2^(n/2). Now Sqrt(x) is approximately x / y.
>>
>> Usually this estimated value is greater than the true square root (but
>> always by less than 45%). However, exact powers of 2 seem to come out
>> slightly below the true value. (By about 10^-16.) I don't know if
>> that's a glitch in my test program or what... It shouldn't be
>> insurmountable though.
>
> One would have to be very hell-bent indeed to employ, in lieu of sqrt(),
> functions that are probably just as computationally expensive, like ln()
> and exp(). Unless you have a quick way of getting n and 2^(n/2) without
> ln() and exp(), you'll need them.
>
> Now if you know the internal representation of your floats (or doubles),
Indeed, the *only* reason you would ever use a construction like this is
because it's very fast and efficient to get at these numbers using the
binary representation of a float.
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*