Subject: Re: Three guesses why
From: Darren New
Date: 21 Jun 2011 13:55:45
Message: <4e00db21$1@news.povray.org>
On 6/21/2011 1:31, Invisible wrote:
> I wonder if anybody has fixed that thing where closing the CD drive causes
> the entire Windows OS to lock up for 30 seconds yet?

Yes. They call it "NT".  Your flaw is thinking the entire OS is locked up, 
when it's really just Explorer waiting to read the disk.

> WAN is not fast. 5 Mbit/sec bandwidth and 105 ms latency is not fun with
> such a chatty protocol.

Yeah. Try starting up SunOS and X Windows on a 4 MB SPARCstation with no 
internal disk, paging over the 10 Mbps Ethernet.  Ninety seconds to switch 
window focus?

> I do remember configuring my Amiga 1200 to boot from a RAM disk. Jesus, that
> was fast...

Amazing how when you get rid of the moving parts and the simulation of 
moving parts it goes really fast.

>>> OOC, how many problems can you name for which there is no known optimal
>>> algorithm? How many problems provably cannot be solved optimally?
>>
>> Certainly all the problems that can't be solved can't be solved
>> optimally. Otherwise, you can always brute force all possible answers
>> and pick out the optimal one, assuming you have a way of generating
>> answers and evaluating their relative merits. I think your question is
>> too ill-specified to be answered clearly, tho.
>
> OK, so a problem would be provably impossible to solve if the search space
> is infinite or if there's no way to enumerate all possible solutions?

It would be provably impossible to solve via brute force, sure. That's 
exactly what keeps the halting problem from being solved, in at least some 
sense.
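To make the brute-force caveat concrete, here's a minimal Python sketch 
(the candidate set and scoring function are made up for illustration). It 
terminates only because the space is finite and every evaluation halts; 
drop either assumption and the loop never finishes:

    from itertools import product

    def brute_force_optimum(candidates, score):
        # Enumerate every candidate, score it, keep the best. This is
        # guaranteed to finish only if `candidates` is finite and
        # `score` halts on every input.
        best, best_score = None, float("-inf")
        for c in candidates:
            s = score(c)
            if s > best_score:
                best, best_score = c, s
        return best

    # Toy example: all 8-bit strings, merit = number of 1 bits.
    print(brute_force_optimum(product([0, 1], repeat=8), sum))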

Beyond that, you'll have to say what you mean by "solving optimally". We 
know the optimal Big-O for a number of problems: comparison sorting, say, 
where n log n is provably as good as it gets, and mergesort achieves it. Is 
that what you're talking about?

> Sure. My point was that for some programs, adding more cores yields more
> speed, at least until you reach ridiculous limits. For other programs,
> adding just one or two cores means you already reach that point.

Right. My comment was more of an aside.
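The standard way to put numbers on that is Amdahl's law: if only a fraction 
p of the work parallelizes, the speedup on n cores is 1/((1-p) + p/n). A 
quick back-of-envelope in Python, with the parallel fractions purely 
illustrative:

    def amdahl_speedup(p, n):
        # p = fraction of the work that parallelizes, n = core count.
        return 1.0 / ((1.0 - p) + p / n)

    # A 95%-parallel program keeps gaining for quite a while; a
    # 50%-parallel one can never beat 2x no matter how many cores.
    for p in (0.95, 0.50):
        print(p, [round(amdahl_speedup(p, n), 2)
                  for n in (1, 2, 4, 16, 1024)])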

I also realized yesterday, driving past the building I'd tried to model for 
my game (the one that slowed the frame rate to a crawl when I actually drew 
it), that the reason the universe doesn't have trouble keeping up with the 
frame rate is that it really does have N processors for a problem of size N.

How do you calculate in real time exactly what it would look like to have 
light reflecting off that building built out of all those atoms?  Give each 
atom the ability to process its contribution to the reflected light in 
parallel.  How do you do the clipping and shadow casting in real time? Have 
each atom doing the clipping and shadow casting in parallel for all the 
pixels it affects. Etc.

(Yes, maybe obvious, but it kind of twinged with this conversation.)
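In code, that "one processor per element" idea is just an embarrassingly 
parallel map. A toy Python version (the light model is made up, and real 
hardware gives you a Pool of a few cores rather than a processor per atom):

    from multiprocessing import Pool

    def contribution(atom):
        # Each "atom" computes its own share of the reflected light,
        # independently of every other atom (toy falloff model).
        x, y, z = atom
        return 1.0 / (1.0 + z * z)

    if __name__ == "__main__":
        atoms = [(x, y, z) for x in range(10)
                           for y in range(10)
                           for z in range(10)]
        with Pool() as pool:
            # No atom needs any other atom's result, so with N workers
            # for N atoms this would run in (roughly) constant time.
            total = sum(pool.map(contribution, atoms))
        print(total)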

> I think the fundamental thing is that GC wants to traverse pointers
> backwards, but the heap structure only allows you to traverse them forwards.

I don't think GC wants to traverse backwards, except perhaps in purely 
functional languages/heaps. GC works by starting at the roots and seeing 
what's still reachable, so that's all forward pointers.
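Here's the marking half of a tracing collector boiled down to a few lines 
of Python (the heap is just a dict of outgoing-pointer lists, purely for 
illustration). Everything is forward traversal from the roots:

    def mark(roots, heap):
        # Follow pointers forward from the roots; whatever we never
        # reach is garbage. No backward traversal needed.
        reachable, stack = set(), list(roots)
        while stack:
            obj = stack.pop()
            if obj in reachable:
                continue
            reachable.add(obj)
            stack.extend(heap.get(obj, ()))  # outgoing pointers only
        return reachable

    heap = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}
    print(mark(["a"], heap))  # {'a', 'b', 'c'} -- "d" is collectable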

> Other than that, we keep getting processors with more and more cores, but
> exactly the same amount of memory bandwidth. This seems very unsustainable.

Yep.  That's why NUMA is getting more popular and such.

-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."

