POV-Ray : Newsgroups : povray.off-topic : Three guesses why : Re: Three guesses why
  Re: Three guesses why  
From: Invisible
Date: 22 Jun 2011 04:35:48
Message: <4e01a964$1@news.povray.org>
On 21/06/2011 06:55 PM, Darren New wrote:
> On 6/21/2011 1:31, Invisible wrote:
>> I wonder if anybody has fixed that thing where closing the CD drive
>> causes
>> the entire Windows OS to lock up for 30 seconds yet?
>
> Yes. They call it "NT". Your flaw is thinking the entire OS is locked
> up, when it's really just explorer waiting to read the disk.

Sure. It's not the whole OS. It's just the entire GUI. Subtle difference 
there.

Same thing happens if the OS is expecting to configure a NIC via DHCP 
but there's no server. It waits quite a long time - which is fine - but 
the entire GUI again locks up while it does so (which isn't fine). No 
reason, just poor design. (I think in XP it at least doesn't sit around 
waiting if there's no cable plugged in.)
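The underlying mistake is the same in both cases: a slow, blockable operation (disk spin-up, DHCP probe) running on the thread that pumps the GUI. A minimal sketch of the fix in Python (all names and timings invented; the sleep stands in for the 30-second wait):

```python
import threading
import time

def probe_network():
    """Stand-in for a slow DHCP probe; hypothetical timing."""
    time.sleep(0.1)  # pretend this is the long wait for a server that isn't there
    return "no DHCP server found"

results = []

# Bad: calling probe_network() directly in the event loop freezes the
# GUI for the whole wait. Better: hand it to a worker thread and keep
# processing events in the meantime.
worker = threading.Thread(target=lambda: results.append(probe_network()))
worker.start()

events_handled = 0
while worker.is_alive():
    events_handled += 1   # the "GUI" stays responsive while the probe runs
    time.sleep(0.01)
worker.join()

print(results[0], "after handling", events_handled, "events")
```

Nothing clever: the probe takes exactly as long either way, but the event loop never stops.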

>> WAN is not fast. 5 Mbit/sec bandwidth and 105 ms latency is not fun with
>> such a chatty protocol.
>
> Yeah. Try starting up SunOS and X Windows on a 4-meg sparcstation with
> no internal disk paging over the 10Mbps ethernet. Ninety seconds to
> switch window focus?

Running X Windows on an Amiga 1200 is almost as slow as that. Which is 
daft really; the native OS is trippy-fast. More responsive than some of 
the [massively more powerful] PCs I look after at work. And here I was 
thinking Linux was fast...

>> I do remember configuring my Amiga 1200 to boot from a RAM disk.
>> Jesus, that was fast...
>
> Amazing how when you get rid of the moving parts and the simulation of
> moving parts it goes really fast.

Back in those days, people were talking about PCMCIA RAM cards and how 
they were going to replace HDs. Never happened, did it?

Then again, SSDs are here now, and gradually becoming genuinely popular. 
I guess history repeats itself, eh?

>> OK, so a problem would be provably impossible to solve if the search
>> space
>> is infinite or if there's no way to enumerate all possible solutions?
>
> It would be provably impossible to solve via brute force, sure. That's
> exactly what keeps the halting problem from being solved, in at least
> some sense.
>
> Beyond that, you'll have to say what you mean by "solving optimally". We
> know the optimal Big-O for a number of problems. Is that what you're
> talking about?

Warp's original comment was that partitioning an arbitrary problem such 
that you gain parallel speedup is probably impossible "in practice". 
Intuitively, I was wondering how many real-world problems are 
"impossible" to solve in the real world. (Here "solve" doesn't 
necessarily mean deriving an /optimal/ solution, just a solution that's 
"useable".)

I guess that *is* way too vague to answer.

> I also realized yesterday, when driving past the building I tried to
> model for my game that slowed the frame rate to a crawl when I actually
> drew it, that the reason the universe doesn't have trouble keeping up
> with the frame-rate is that it really does have N processors for a
> problem of size N.
>
> How do you calculate in real time exactly what it would look like to
> have light reflecting off that building built out of all those atoms?
> Give each atom the ability to process its contribution to the reflected
> light in parallel. How do you do the clipping and shadow casting in real
> time? Have each atom doing the clipping and shadow casting in parallel
> for all the pixels it affects. Etc.
>
> (Yes, maybe obvious, but it kind of twinged with this conversation.)

Heheh, yeah. Didn't somebody say a few months back "we need advanced 
quantum computers *now*!", to which somebody else replied "we already 
have that; we call it 'the real world'". ;-)

>> I think the fundamental thing is that GC wants to traverse pointers
>> backwards, but the heap structure only allows you to traverse them
>> forwards.
>
> I don't think GC wants to traverse backwards, except perhaps in purely
> functional languages/heaps. GC works by starting at the roots and seeing
> what's still reachable, so that's all forward pointers.

Ideally you want to find out whether a given object is pointed to by 
anything - which is a reverse pointer traversal. But of course, that's 
impossible, so we do forward traversal to compute the same information.

Except it's not quite the same information, of course. It's possible for 
two dead objects to point to each other, so "pointed to by something" 
and "reachable from the roots" can differ. (Which is why plain reference 
counting can never collect cycles.) 
The other option is to put the liveness testing into pointer 
manipulation operations (since this is the only place where the 
information can change), but that's going to heap tonnes of overhead 
onto very common operations.
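To make the cycle point concrete, here's a toy mark phase in Python (all names invented): b and c keep each other's reference counts non-zero forever, but a forward trace from the roots never reaches them, so a tracing collector reclaims them anyway.

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []   # outgoing (forward) pointers only

def reachable(roots):
    """Mark phase: everything findable by forward traversal from the roots."""
    seen = set()
    stack = list(roots)
    while stack:
        o = stack.pop()
        if id(o) not in seen:
            seen.add(id(o))
            stack.extend(o.refs)
    return seen

# Heap: the roots reach a; b and c point at each other, but nothing points at them.
a, b, c = Obj("a"), Obj("b"), Obj("c")
b.refs.append(c)
c.refs.append(b)        # dead cycle: refcounts would never drop to zero
roots = [a]

live = reachable(roots)
heap = [a, b, c]
garbage = [o.name for o in heap if id(o) not in live]
print(garbage)  # → ['b', 'c']: the dead cycle is collected
```

The forward traversal computes "reachable", which is what we actually want, rather than "pointed to by something", which the cycle shows is the wrong question.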

The other thing I've thought about is having multiple heap areas and 
tracking pointers between them. If you could arrange it so that all the 
garbage is in one heap chunk, you can just drop the whole chunk rather 
than doing complex processing over it. However, that's not easy to 
achieve in general.
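That idea - per-chunk bookkeeping so a whole region can be freed at once - is roughly what remembered sets buy generational collectors. A toy sketch (all names invented) that tracks only the inter-region pointers and drops any region the roots can't reach:

```python
# Toy inter-region pointer tracking: each region records which *other*
# regions its objects point into, instead of tracking individual objects.
regions = {
    "A": {"B"},       # region A holds pointers into region B
    "B": set(),       # B points nowhere else
    "C": {"C"},       # C is a self-referential clump of garbage
}
root_regions = {"A"}  # regions referenced directly from the roots

def live_regions(regions, roots):
    """Transitively follow inter-region pointers from the root regions."""
    live = set()
    stack = list(roots)
    while stack:
        r = stack.pop()
        if r not in live:
            live.add(r)
            stack.extend(regions[r])
    return live

live = live_regions(regions, root_regions)
droppable = sorted(set(regions) - live)
print(droppable)  # → ['C']: the whole region is freed without tracing its insides
```

The hard part the paragraph above points at - arranging for all the garbage to land in one region in the first place - is exactly what this sketch assumes away.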

>> Other than that, we keep getting processors with more and more cores, but
>> exactly the same amount of memory bandwidth. This seems very
>> unsustainable.
>
> Yep. That's why NUMA is getting more popular and such.

Only for supercomputers.

I wonder how long it will take for desktops to catch up?

Speaking of which, take a look at this:

http://tinyurl.com/64bfdmv

The interesting bit is slides #3 and #4. I've often wondered why all the 
books about supercomputers talk a lot about parallel processing, yet 
[until recently] it was never seen in normal computers. Now I know, I guess...
