Subject: Re: Three guesses why
From: Darren New
Date: 17 Jun 2011 16:50:07
Message: <4dfbbdff$1@news.povray.org>
On 6/17/2011 1:12, Invisible wrote:
> On the other hand, how do you make your word processor go faster?

Is your word processor really not fast enough?

> *can* you make your word processor go faster?

Spell check in one thread, layout of the pages you haven't looked at yet in 
another thread, etc.  I think they're already making it faster this way; 
actually watch Word or Reader open a big document, for example.
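
Something like this, purely as a sketch of the idea (checkSpelling() here is 
a toy stand-in, not anything Word actually exposes):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundSpellCheck {
    // Toy stand-in for a real spell checker.
    static int checkSpelling(String text) {
        int errors = 0;
        for (String word : text.split("\\s+")) {
            if (word.equalsIgnoreCase("teh")) errors++;  // one-word "dictionary"
        }
        return errors;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // The spell check runs on a worker thread...
        Future<Integer> errors = pool.submit(() -> checkSpelling("Teh quick brown fox"));

        // ...while this ("UI") thread stays free to handle the user.
        System.out.println("UI thread still responsive");

        System.out.println("errors found: " + errors.get());
        pool.shutdown();
    }
}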

> From what I've seen, the thing
> that usually slows these down is I/O latency

I find that's almost all of the slow-down, actually. Have you seen the video 
of the SSD RAID array? Where they open all four office programs with 
documents in about half a second?  Where they open every program installed 
on the computer in about 10 seconds?

>> This is just a fact
>> (and I wouldn't be surprised if such an optimization problem would be
>> at the very least in the NP category
>
> It took me a moment to think about this one. My initial reaction was "isn't
> the NP category only defined for *sequential* algorithms?"

Sort of. An NDTM (nondeterministic Turing machine) isn't exactly a sequential 
machine.

If it's NP, it's going to be NP on any fixed number of processors. Only if 
you let the number of processors scale with the size of the problem does it 
come into question. (Granted, that's how a lot of SIMD theory is defined.)
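
To spell out why: a fixed number of processors only buys you a constant 
factor, and these complexity classes are closed under constant factors. If p 
processors finish in time T_par(n), one processor can simulate them step for 
step in

\[ T_{\text{seq}}(n) \;\le\; c \cdot p \cdot T_{\text{par}}(n) \]

which is polynomial whenever T_par(n) is, so membership in P or NP doesn't 
move.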

> OOC, how many problems can you name for which there is no known optimal
> algorithm? How many problems provably cannot be solved optimally?

Certainly all the problems that can't be solved at all can't be solved 
optimally. Otherwise, you can always brute-force all possible answers and 
pick out the optimal one, assuming you have a way of generating candidate 
answers and evaluating their relative merits.  I think your question is too 
ill-specified to be answered clearly, though.
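
The brute-force argument, sketched (score() is a made-up merit function, and 
it assumes the answer space is finite and enumerable):

import java.util.Comparator;
import java.util.List;

public class BruteForce {
    // Made-up merit function: longer is "better".
    static int score(String candidate) {
        return candidate.length();
    }

    public static void main(String[] args) {
        // Assumes every possible answer can be enumerated.
        List<String> candidates = List.of("a", "ab", "abc");

        // Brute force: evaluate every answer, keep the best one.
        String best = candidates.stream()
                .max(Comparator.comparingInt(BruteForce::score))
                .orElseThrow();

        System.out.println("optimal answer: " + best);
    }
}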

> Even for programs which do run faster in parallel, due to Amdahl's law,
> there's usually a point at which adding more cores does not improve
> performance.

Certainly having more cores than the number of instructions the program 
executes is going to be pointless, so mathematically speaking, the law 
*always* applies, even if it's silly in practice to take it to that extreme.
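
For reference, Amdahl's law in its usual form, with P the parallelizable 
fraction of the work and N the number of cores:

\[ S(N) = \frac{1}{(1 - P) + P/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P} \]

Even with 90% of the program parallelizable, no number of cores gets you past 
a 10x speedup.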

> Except now each CPU core continuously invalidates the other core's L1 cache
> lines, causing constant memory traffic and hence very high latency. It's
> probably vastly slower than just doing all the work in one thread.

And for people who don't have a technical background:

http://blogs.msdn.com/b/ericlippert/archive/2011/06/16/atomicity-volatility-and-immutability-are-different-part-three.aspx
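
And if you want to see the effect directly, here's a toy demonstration of 
that cache-line ping-pong (usually called "false sharing"). Exact timings 
vary by machine, but the gap is usually dramatic:

import java.util.concurrent.atomic.AtomicLongArray;

public class FalseSharing {
    static final AtomicLongArray counters = new AtomicLongArray(64);
    static final int ITERS = 50_000_000;

    static long timedRun(int a, int b) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < ITERS; i++) counters.incrementAndGet(a); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < ITERS; i++) counters.incrementAndGet(b); });
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join(); t2.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // Slots 0 and 1 sit on the same 64-byte cache line, so each
        // increment invalidates the other core's copy of the line.
        System.out.println("same cache line: " + timedRun(0, 1) + " ms");
        // Slots 0 and 32 are 256 bytes apart: separate lines, no ping-pong.
        System.out.println("separate lines:  " + timedRun(0, 32) + " ms");
    }
}

Java's LongAdder and the @Contended annotation exist precisely to pad hot 
counters onto their own cache lines.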

> (The solution they eventually implemented was that each core has its own
> heap, and garbage-collects it in parallel. At least, for the first
> generation. The second generation has a shared heap, which is still split
> into "chunks" and collected in parallel.)

GC is definitely one of the harder problems out there. :-) One of those "in 
theory it's easy; in practice, getting good performance is really hard" 
problems.
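
Just to caricature that per-core design (purely a sketch of the shape of it, 
not how any real collector is implemented):

import java.util.ArrayList;
import java.util.List;

public class PerThreadHeap {
    // Each thread allocates into its own private "nursery": no locking.
    static final ThreadLocal<List<Object>> nursery =
            ThreadLocal.withInitial(ArrayList::new);

    // A shared second generation, kept as separate chunks so collector
    // threads could scan different chunks in parallel.
    static final List<List<Object>> oldGenChunks = new ArrayList<>();

    static void allocate(Object obj) {
        nursery.get().add(obj);  // thread-private, so no contention
    }

    // "Minor collection": promote the nursery's survivors into the
    // shared generation as one chunk (only this step needs a lock).
    static void promoteSurvivors() {
        synchronized (oldGenChunks) {
            oldGenChunks.add(new ArrayList<>(nursery.get()));
        }
        nursery.get().clear();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) allocate(new long[4]);
            promoteSurvivors();
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("old-gen chunks: " + oldGenChunks.size());
    }
}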

-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."

