From: Le Forgeron
Subject: what is quicker ?
Date: 14 Oct 2007 11:26:36
Message: <4712352c$1@news.povray.org>
A basic question, but I doubt there is any universal answer.

This is not about SDL, but C/C++ (and still povray related, later!)

Let's assume you have an array of N positive values (some might be
zero, but that happens rarely, and it is impossible for all of the
values to be zero at the same time).
Let's call it d[]. It is proven that sum(d[]) > 0.

We have N > 2. (three or more)

You want to end up with an array of N values, let's call it z[], so
that:

z[i] = product of all d[] but d[i];

In fact, at the end of the computation, z[] will be scaled
so that sum(z[]) = 1.

And now the question about hardware computation: would it be quicker
to...

1/ compute the N*(N-2) multiplications blindly.

2/ compute a global product, so that's about N multiplications,
followed by N divisions if the product is not zero, or followed by N
simple 1-or-0 choices based on whether d[i] is zero or not.

With a current processor, what is the N at which the N divisions
(of double-precision floating-point numbers) come out quicker than the
N*(N-2) multiplications (of the same double-precision numbers)?
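
To fix ideas, here is a rough C++ sketch of the two options (the
function names and the use of std::vector are just placeholders of
mine; option 2 implements the 1-or-0 choice mentioned above when the
global product is zero):

#include <vector>

// Option 1: for each i, multiply all the other d[j] together
// (about N*(N-2) multiplications in total).
std::vector<double> compute_z_naive(const std::vector<double>& d)
{
    const std::size_t n = d.size();
    std::vector<double> z(n, 1.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            if (j != i)
                z[i] *= d[j];
    return z;
}

// Option 2: one global product (N multiplications), then N divisions,
// or a simple 1-or-0 choice per element when the product is zero.
std::vector<double> compute_z_product(const std::vector<double>& d)
{
    const std::size_t n = d.size();
    double p = 1.0;
    for (std::size_t i = 0; i < n; ++i)
        p *= d[i];
    std::vector<double> z(n);
    for (std::size_t i = 0; i < n; ++i)
        z[i] = (p != 0.0) ? p / d[i]
                          : (d[i] == 0.0 ? 1.0 : 0.0);
    return z;
}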


Extra bonus question: to enhance scalability and precision with
large N, shouldn't d[] be scaled beforehand so that the max value of d[]
becomes limited to 1 (or any other sensible value)? This would cost
an additional scan of d[] to get the max value (let's call it m).
Then would it be more effective to loop over d[] with:
 A/ d[i] /= m;
 B/ i_m = 1/m (computed once), then d[i] *= i_m;
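
A quick sketch of the two variants (the function names are mine):

#include <vector>

// Variant A: N divisions by the maximum m.
void scale_by_division(std::vector<double>& d, double m)
{
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] /= m;
}

// Variant B: one division up front, then N multiplications.
void scale_by_reciprocal(std::vector<double>& d, double m)
{
    const double i_m = 1.0 / m;
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] *= i_m;
}

Note that B may differ from A by one ulp per element, since
d[i] * (1/m) is not always rounded the same way as d[i] / m; that
should not matter for this purpose.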

I'm worried about numbers getting too big, but should I also worry
about losing very small numbers?

-- 
The superior man understands what is right;
the inferior man understands what will sell.
-- Confucius



From: Christian Froeschlin
Subject: Re: what is quicker ?
Date: 14 Oct 2007 15:48:20
Message: <47127284$1@news.povray.org>
> 2/ compute a global product, so that's about N multiplications,
> followed by N divisions if the product is not zero, or followed by N
> simple 1-or-0 choices based on whether d[i] is zero or not.

If the product is 0 and d[i] is 0, you still need to compute z[i]
using option 1/? But otherwise, if N is large, you will be
much better off doing N divisions than N*(N-2) multiplications.
The capabilities of the processor only determine the value
of N at which option 2/ becomes faster than option 1/.

> Extra bonus question: to enhance scalability and precision with
> large N, shouldn't d[] be scaled beforehand so that the max value of d[]
> becomes limited to 1

That would be a sensible course of action if the
actual numbers can be very large or very small.



From: Warp
Subject: Re: what is quicker ?
Date: 14 Oct 2007 20:13:12
Message: <4712b098@news.povray.org>
Perhaps instead of guessing you could test it?
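
For instance, a rough timing skeleton along these lines (purely
illustrative; a real test should repeat the work, try several values
of N, and use the results so the compiler cannot drop the loops):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 1000;
    std::vector<double> d(n, 1.000001);

    // Option 1: N*(N-2) multiplications.
    std::vector<double> z1(n, 1.0);
    const auto a0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            if (j != i)
                z1[i] *= d[j];
    const auto a1 = std::chrono::steady_clock::now();

    // Option 2: N multiplications for the global product, then N divisions.
    std::vector<double> z2(n);
    const auto b0 = std::chrono::steady_clock::now();
    double p = 1.0;
    for (std::size_t i = 0; i < n; ++i)
        p *= d[i];
    for (std::size_t i = 0; i < n; ++i)
        z2[i] = p / d[i];
    const auto b1 = std::chrono::steady_clock::now();

    std::printf("option 1: %g s   option 2: %g s   (check: %g %g)\n",
                std::chrono::duration<double>(a1 - a0).count(),
                std::chrono::duration<double>(b1 - b0).count(),
                z1[0], z2[0]);
    return 0;
}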

-- 
                                                          - Warp



From: Le Forgeron
Subject: Re: what is quicker ?
Date: 15 Oct 2007 12:21:45
Message: <47139399$1@news.povray.org>
On 14.10.2007 21:49, Christian Froeschlin wrote:
> 
> If the product is 0 and d[i] is 0, you still need to compute z[i]
> using option 1/?

No, in that case z[i] = 1/(count of zeros in d[]) for each i where d[i] = 0,
and z[j] = 0 for everyone else.
It does not really matter anyway, since at the end the sum of z[] is made
equal to 1. So, setting z[i] = 1 is just fine here.
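
In code, the zero case could look something like this (a sketch only,
the names are mine); the individual values are arbitrary because only
the relative weights matter after normalisation:

#include <vector>

// When the global product is zero, spread the weight evenly over the
// entries where d[i] == 0 and give everything else a weight of 0.
// The result already sums to 1, so the final normalisation is a no-op.
std::vector<double> zero_case_z(const std::vector<double>& d)
{
    std::size_t zeros = 0;
    for (std::size_t i = 0; i < d.size(); ++i)
        if (d[i] == 0.0)
            ++zeros;

    std::vector<double> z(d.size());
    for (std::size_t i = 0; i < d.size(); ++i)
        z[i] = (d[i] == 0.0) ? 1.0 / zeros : 0.0;
    return z;
}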

> 
> That would be a sensible course of action if the
> actual numbers can be very large or very small.
They do.
Thank you.

-- 
The superior man understands what is right;
the inferior man understands what will sell.
-- Confucius


