  Re: Three guesses why  
From: Warp
Date: 16 Jun 2011 17:40:19
Message: <4dfa7843@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> http://www.economist.com/node/18750706?story_id=18750706

  One thing I thought of:

  A programming language/compiler cannot automatically parallelize every
possible program given to it in an optimal way (in other words, so that
it maximizes, or at least comes very close to maximizing, the efficiency
of the program on a multiprocessor architecture). Some problems lend
themselves to automatic parallelization, while others are much harder
(they can be really hard to parallelize even for an expert programmer,
let alone automatically). This is just a fact (and I wouldn't be
surprised if such an optimization problem were at the very least NP-hard,
if not outright undecidable in the general case; in fact, if you think
about it, distributing code optimally among processors sounds a lot like
the knapsack problem, only probably even harder).
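
  To see why the knapsack comparison is apt, consider just about the
simplest subproblem: splitting independent tasks with known running
times between two cores so that the slower core finishes as early as
possible. That is exactly the partition problem, which is NP-hard. A
minimal brute-force C++ sketch (the task costs are made up for
illustration; the point is that the search space doubles with every
added task):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical per-task running times in milliseconds.
    std::vector<int> cost = {7, 3, 9, 4, 8, 2, 6, 5};
    int total = 0;
    for (int c : cost) total += c;

    // Try every subset: bit i of 'mask' sends task i to core A,
    // the rest go to core B. Minimize the later finishing time.
    int best = total;
    for (std::uint32_t mask = 0; mask < (1u << cost.size()); ++mask) {
        int a = 0;
        for (std::size_t i = 0; i < cost.size(); ++i)
            if (mask & (1u << i)) a += cost[i];
        best = std::min(best, std::max(a, total - a));
    }
    std::cout << "optimal makespan: " << best << " ms\n";
}

Eight tasks means only 256 subsets, but forty tasks already means about
a trillion, which is why compilers have to fall back on heuristics.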

  So if we have a programming language and/or compiler that automatically
parallelizes a program, in many cases it will do a pretty lousy job
compared to an expert programmer doing it by hand.

  One might hastily think "well, better *some* speedup from distributing
the task among processors than just running it on one core the
old-fashioned way, with no speedup at all".

  That may be so if that program is the only thing you are running.
However, in many situations it will not be the only program running on
the system. The problem is that this "lousily" parallelized program will
hog all the cores for itself, and rather inefficiently at that, so other
processes will get a smaller share.

  In some situations you might even prefer the program to use just one
core if it can't use the others efficiently, so that you can run
something else on the other cores, rather than have the one program hog
everything for itself and make the other programs run more slowly than
they would have to.

  This automatic parallelization may also give an illusion of efficiency
when in fact it's far from it. After all, you will see the program using
100% of all the cores. That's pretty efficient, isn't it? Of course this
is just an illusion: CPU usage percentage doesn't necessarily translate
to efficiency. One program using two cores at 100% each might be running
only, say, 20% faster than when running on a single core. (In fact, there
are programs which actually run *slower* on multiple cores than on one,
even though they use 100% of the CPU time on each, as contradictory as
that might sound.)
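
  As a side note, that 20% figure is what Amdahl's law predicts when
only about a third of the program can run in parallel: the two-core
speedup is 1/((1-p) + p/2), and setting that to 1.2 gives p = 1/3. The
slower-on-more-cores case is also easy to reproduce. Here is a minimal
C++ sketch of my own (not from the article): two threads keep two cores
at 100%, yet on typical hardware they lose to one thread doing the same
total amount of work, because the two counters share a cache line and
every increment bounces that line between the cores (false sharing):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

constexpr long N = 50'000'000;

// Two atomic counters packed into the same cache line: a write by
// either thread invalidates the line in the other core's cache.
struct Counters { std::atomic<long> a{0}, b{0}; } shared;

static long ms_since(std::chrono::steady_clock::time_point t0) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - t0).count();
}

int main() {
    // Two threads, two cores at 100%, but the cache line ping-pongs.
    auto t0 = std::chrono::steady_clock::now();
    std::thread t1([] { for (long i = 0; i < N; ++i) shared.a.fetch_add(1); });
    std::thread t2([] { for (long i = 0; i < N; ++i) shared.b.fetch_add(1); });
    t1.join(); t2.join();
    std::cout << "two threads: " << ms_since(t0) << " ms\n";

    // The same total number of increments done by a single thread.
    t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < 2 * N; ++i) shared.a.fetch_add(1);
    std::cout << "one thread:  " << ms_since(t0) << " ms\n";
}

On typical hardware the two-thread run tends to take longer even though
the CPU meter shows 200% usage; padding the counters onto separate cache
lines (e.g. alignas(64) on each member) makes the contention, and the
"paradox", disappear.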

-- 
                                                          - Warp

