  Re: Parallel processing  
From: Invisible
Date: 18 Jan 2011 11:54:58
Message: <4d35c5e2$1@news.povray.org>
On 18/01/2011 04:20 PM, Warp wrote:
> Invisible <voi### [at] devnull> wrote:
>> As far as computer programming is concerned, writing programs which
>> aren't single-threaded is a "hard problem". Oh, it depends on the task
>> of course. But many programs are just really awkward to write in a way
>> that utilises multiple cores.
>
>    The problems with concurrent and parallel programming are quite fundamental
> (and not all that dependent, e.g., on the programming language used).

The more you look at it, the more problems you find.

> There are
> many severe problems, mutual exclusion, deadlocks and livelocks being
> just a few examples of them.

These are problems of "making the program work correctly". There are 
also grave problems of "making the program actually go faster when you 
add more computer power". (Amdahl's law alone sees to that: if even 5% 
of the work is inherently serial, no number of cores will ever get you 
past a 20x speedup.)
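
To make the deadlock problem concrete, here is a minimal C++ sketch 
(my own illustration, nothing from a real codebase): two threads take 
the same two locks in opposite orders, so with unlucky timing each one 
ends up waiting forever for the lock the other one holds.

#include <mutex>
#include <thread>

std::mutex a, b;

void worker1() {
    std::lock_guard<std::mutex> lock_a(a);  // takes a first...
    std::lock_guard<std::mutex> lock_b(b);  // ...then waits for b
}

void worker2() {
    std::lock_guard<std::mutex> lock_b(b);  // takes b first...
    std::lock_guard<std::mutex> lock_a(a);  // ...then waits for a
}

int main() {
    std::thread t1(worker1), t2(worker2);
    t1.join();
    t2.join();  // with the wrong interleaving, neither join returns
}

The usual cure is to agree on a global lock order (or to use something 
like C++17's std::scoped_lock, which acquires a group of mutexes with a 
deadlock-avoidance algorithm). Note that neither function is wrong on 
its own; the bug only exists in the interleaving, which is exactly why 
these things are so hard to test for.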

> Curiously the majority of these problems
> appear even in single-core systems which run multiple threads.

> The problems that may
> arise are sometimes quite surprising and not at all evident from the program
> itself, even if it's competently implemented. These problems have been known
> for a really long time and there are tons and tons of entire books dedicated
> to them. Also lots of theory and software has been developed in order to
> try to model such parallel systems and try to find flaws in them, and to
> verify that a proposed implementation works.

In other words, "correctness is really, really hard".

>    I don't think that switching from current serial design CPUs to something
> that is more parallel is going to solve any of those problems (if anything,
> they could only aggravate them). AFAIK the problems are more fundamental
> than the precise CPU design used.

Well, certainly a program as we write it tends to be a very linear 
thing: do this, then this, then this. But with multiple processors, the 
system's actions are no longer linear at all. Indeed, maybe the very 
metaphor of a program as a linear sequence of steps is no longer 
appropriate.

There are other problems. All the processors communicate through a 
single memory link, which becomes a bottleneck. If your program is 
limited by RAM bandwidth, adding more cores is hardly going to help. 
And the caches that were added to hide RAM latency tend to trip each 
other up as soon as threads want to communicate: every write to a 
shared cache line invalidates the copies held by the other cores, 
generating a stream of coherence traffic.
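
One concrete way the caches trip each other up is "false sharing". 
Here is a sketch of it in C++; the 64-byte line size and the iteration 
counts are just assumptions about typical hardware, so treat the exact 
numbers as illustrative:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Two counters that will most likely sit in the same cache line.
struct Unpadded {
    std::atomic<long> x{0};
    std::atomic<long> y{0};
};

// The same counters forced onto separate cache lines (64 bytes is a
// common line size, but the standard does not guarantee it).
struct Padded {
    alignas(64) std::atomic<long> x{0};
    alignas(64) std::atomic<long> y{0};
};

template <typename Counters>
long long run() {
    Counters c;
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < 50000000; ++i) c.x++; });
    std::thread t2([&] { for (long i = 0; i < 50000000; ++i) c.y++; });
    t1.join();
    t2.join();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        stop - start).count();
}

int main() {
    std::cout << "same cache line: " << run<Unpadded>() << " ms\n";
    std::cout << "padded:          " << run<Padded>()   << " ms\n";
}

The two threads never touch each other's data, yet on most machines the 
unpadded version is several times slower, because every increment 
bounces the shared line back and forth between the two cores' caches.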

The usual solution is something like NUMA, where the different access 
speeds of different regions of memory are explicit rather than hidden. 
Distributed computing is in a way a more severe version of NUMA, but 
with lots more protocol overhead, and usually no guarantees of message 
delivery. It all spirals into complexity.
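
To see what "explicit rather than hidden" means in practice, here is a 
small sketch using Linux's libnuma (an assumption on my part: you need 
that library and a multi-node machine, and you link with -lnuma):

#include <numa.h>
#include <cstddef>
#include <cstdio>

int main() {
    if (numa_available() < 0) {
        std::printf("no NUMA support on this system\n");
        return 1;
    }
    std::printf("%d NUMA node(s)\n", numa_num_configured_nodes());

    // Explicitly place a 1 MB buffer on node 0. Threads running on
    // cores attached to node 0 get the fast path; threads on any other
    // node pay the remote-access penalty. The cost is in plain sight.
    std::size_t size = 1 << 20;
    void* buf = numa_alloc_onnode(size, 0);
    if (buf != nullptr)
        numa_free(buf, size);
    return 0;
}

The point is that the allocation call names the node the memory should 
live on, so the programmer, not the cache hierarchy, decides who is 
near the data.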

The more I think about it, the more I think we need to look at the 
problem in some sort of totally different way.

