POV-Ray : Newsgroups : povray.programming : what is quicker ?
  what is quicker ?  
From: Le Forgeron
Date: 14 Oct 2007 11:26:36
Message: <4712352c$1@news.povray.org>
A basic question, but I doubt there is a universal answer.

This is not about SDL but about C/C++ (and still POV-Ray related, later!)

Let's assume you have an array of N non-negative values (some might be
zero, but that happens rarely, and it is impossible for all of the
values to be zero at the same time).
Let's call it d[]. It is guaranteed that sum(d[]) > 0.

We have N > 2. (three or more)

You want to end up with an array of N values, let's call it z[], such
that:

z[i] = product of all d[] but d[i];

In fact, at the end of the computation, z[] will be normalized so
that sum(z[]) = 1.
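A minimal sketch of that definition, computed the direct way (the function name `leave_one_out_naive` is mine, and it assumes the leave-one-out products don't all vanish, i.e. at most one zero in d[]):

```cpp
#include <cstddef>
#include <vector>

// For each i, multiply all d[j] with j != i, then normalize so the
// z[] entries sum to 1. Assumes sum of the products is > 0.
std::vector<double> leave_one_out_naive(const std::vector<double>& d) {
    const std::size_t n = d.size();
    std::vector<double> z(n);
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double p = 1.0;
        for (std::size_t j = 0; j < n; ++j)
            if (j != i) p *= d[j];   // N-2 multiplications per entry
        z[i] = p;
        total += p;
    }
    for (std::size_t i = 0; i < n; ++i)
        z[i] /= total;               // normalize: sum(z[]) == 1
    return z;
}
```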

And now the question about hardware computation: would it be quicker
to...

1/ compute the N*(N-2) multiplications, blindly (one leave-one-out
product of N-1 factors per entry).

2/ compute a global product (about N multiplications), followed by N
divisions if the product is not zero, or by N simple 1-or-0 choices
based on whether d[i] is zero or not.
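A minimal sketch of option 2 (the function name `leave_one_out_global` is mine), including the 1-or-0 handling when a single d[i] is zero:

```cpp
#include <cstddef>
#include <vector>

// One global product, then one division per entry. If exactly one
// d[k] is zero, only z[k]'s leave-one-out product is nonzero, so
// after normalization z[k] = 1 and everything else is 0.
std::vector<double> leave_one_out_global(const std::vector<double>& d) {
    const std::size_t n = d.size();
    std::vector<double> z(n, 0.0);
    double product = 1.0;            // product of the nonzero entries
    std::size_t zeros = 0, zero_at = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (d[i] == 0.0) { ++zeros; zero_at = i; }
        else product *= d[i];
    }
    if (zeros == 0) {
        double total = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            z[i] = product / d[i];   // N divisions
            total += z[i];
        }
        for (std::size_t i = 0; i < n; ++i) z[i] /= total;
    } else if (zeros == 1) {
        z[zero_at] = 1.0;            // the only nonzero product, normalized
    }
    // two or more zeros: every leave-one-out product is zero
    return z;
}
```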

With current processors, what is the N beyond which the divisions
(of double-precision floating-point numbers) come out quicker than
the N*(N-2) multiplications (of the same doubles)?
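I can't answer that in general, since the crossover depends on the CPU's divide-vs-multiply throughput, but a rough timing sketch can find it empirically on a given machine (function name and the choice of test data are mine; this is not a rigorous benchmark):

```cpp
#include <chrono>
#include <cstddef>
#include <utility>
#include <vector>

// Returns {ns per iteration for the N*(N-2)-multiplication scheme,
//          ns per iteration for the product-then-divide scheme}.
std::pair<double, double> compare_schemes(std::size_t n, int reps) {
    std::vector<double> d(n), z(n);
    for (std::size_t i = 0; i < n; ++i)
        d[i] = 0.5 + 0.001 * static_cast<double>(i);
    volatile double sink = 0.0;      // keep the optimizer honest

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r) {
        for (std::size_t i = 0; i < n; ++i) {
            double p = 1.0;
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) p *= d[j];   // N*(N-2) multiplications total
            z[i] = p;
        }
        sink = sink + z[n - 1];
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r) {
        double p = 1.0;
        for (std::size_t i = 0; i < n; ++i) p *= d[i];
        for (std::size_t i = 0; i < n; ++i) z[i] = p / d[i]; // N divisions
        sink = sink + z[n - 1];
    }
    auto t2 = std::chrono::steady_clock::now();

    using ns = std::chrono::duration<double, std::nano>;
    return { ns(t1 - t0).count() / reps, ns(t2 - t1).count() / reps };
}
```

Running it over a range of N would bracket the crossover point for a particular processor.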


Extra bonus question: to improve scalability and precision with
large N, shouldn't d[] be scaled beforehand so that its maximum
value becomes 1 (or some other sensible value)? This would cost an
additional scan of d[] to find the maximum (let's call it m).
Then would it be more effective to loop over d[] with:
 A/ d[i] /= m;
 B/ i_m = 1/m (computed once), then d[i] *= i_m;
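A minimal sketch of option B (function name mine). Since one floating-point division typically costs several multiplications on current FPUs, B is the usual choice, at the price of a tiny extra rounding because 1/m is itself rounded:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scale d[] so its maximum becomes 1, using one division and N
// multiplications instead of N divisions.
void scale_by_max(std::vector<double>& d) {
    const double m = *std::max_element(d.begin(), d.end());
    assert(m > 0.0);                 // guaranteed: not all values are zero
    const double inv_m = 1.0 / m;    // one division...
    for (double& x : d) x *= inv_m;  // ...then N multiplications
}
```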

I'm afraid of overly large intermediate products, but should I also
worry about losing very small numbers to underflow?

-- 
The superior man understands what is right;
the inferior man understands what will sell.
-- Confucius


