POV-Ray : Newsgroups : povray.off-topic : Three guesses why
  Three guesses why (Message 11 to 20 of 22)  
From: Darren New
Subject: Re: Three guesses why
Date: 21 Jun 2011 13:55:45
Message: <4e00db21$1@news.povray.org>
On 6/21/2011 1:31, Invisible wrote:
> I wonder if anybody has fixed that thing where closing the CD drive causes
> the entire Windows OS to lock up for 30 seconds yet?

Yes. They call it "NT".  Your flaw is thinking the entire OS is locked up, 
when it's really just explorer waiting to read the disk.

> WAN is not fast. 5 Mbit/sec bandwidth and 105 ms latency is not fun with
> such a chatty protocol.

Yeah. Try starting up SunOS and X Windows on a 4-meg sparcstation with no 
internal disk paging over the 10Mbps ethernet.  Ninety seconds to switch 
window focus?

> I do remember configuring my Amiga 1200 to boot from a RAM disk. Jesus, that
> was fast...

Amazing how when you get rid of the moving parts and the simulation of 
moving parts it goes really fast.

>>> OOC, how many problems can you name for which there is no known optimal
>>> algorithm? How many problems provably cannot be solved optimally?
>>
>> Certainly all the problems that can't be solved can't be solved
>> optimally. Otherwise, you can always brute force all possible answers
>> and pick out the optimal one, assuming you have a way of generating
>> answers and evaluating their relative merits. I think your question is
>> too ill-specified to be answered clearly, tho.
>
> OK, so a problem would be provably impossible to solve if the search space
> is infinite or if there's no way to enumerate all possible solutions?

It would be provably impossible to solve via brute force, sure. That's 
exactly what keeps the halting problem from being solved, in at least some 
sense.

Beyond that, you'll have to say what you mean by "solving optimally". We 
know the optimal Big-O for a number of problems. Is that what you're talking 
about?

> Sure. My point was that for some programs, adding more cores yields more
> speed, at least until you reach ridiculous limits. For other programs,
> adding just one or two cores means you already reach that point.

Right. My comment was more of an aside.

I also realized yesterday, when driving past the building I tried to model 
for my game that slowed the frame rate to a crawl when I actually drew it, 
that the reason the universe doesn't have trouble keeping up with the 
frame-rate is that it really does have N processors for a problem of size N.

How do you calculate in real time exactly what it would look like to have 
light reflecting off that building built out of all those atoms?  Give each 
atom the ability to process its contribution to the reflected light in 
parallel.  How do you do the clipping and shadow casting in real time? Have 
each atom doing the clipping and shadow casting in parallel for all the 
pixels it affects. Etc.

(Yes, maybe obvious, but it kind of twinged with this conversation.)

> I think the fundamental thing is that GC wants to traverse pointers
> backwards, but the heap structure only allows you to traverse them forwards.

I don't think GC wants to traverse backwards, except perhaps in purely 
functional languages/heaps. GC works by starting at the roots and seeing 
what's still reachable, so that's all forward pointers.
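To make that concrete, here's a toy mark phase in Python (the object layout and names are mine, not any particular collector's): starting from the roots, it follows only forward pointers and ends up with exactly the live set.

```python
# Hypothetical toy heap: each object holds a list of outgoing references.
class Obj:
    def __init__(self, name, refs=None):
        self.name = name
        self.refs = refs or []  # forward pointers only

def mark_reachable(roots):
    """Mark phase of a tracing GC: follow forward pointers from the roots."""
    live = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if id(obj) in live:
            continue
        live.add(id(obj))
        stack.extend(obj.refs)  # never needs to know who points *at* obj
    return live

# a -> b is reachable from the root; c is garbage
b = Obj("b")
a = Obj("a", [b])
c = Obj("c")
live = mark_reachable([a])
print(id(b) in live, id(c) in live)  # True False
```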

> Other than that, we keep getting processors with more and more cores, but
> exactly the same amount of memory bandwidth. This seems very unsustainable.

Yep.  That's why NUMA is getting more popular and such.

-- 
Darren New, San Diego CA, USA (PST)
   "Coding without comments is like
    driving without turn signals."



From: Invisible
Subject: Re: Three guesses why
Date: 22 Jun 2011 04:07:13
Message: <4e01a2b1$1@news.povray.org>
On 21/06/2011 06:41 PM, Warp wrote:
> Invisible<voi### [at] devnull>  wrote:
>> OK. So the early ones were entirely hard-wired. They're still using
>> parallel processing to solve a real-world problem. My point was that
>> some kinds of algorithms are very easy to solve with parallel processing.
>
>    Scanline rendering is easy to parallelize because all the pixels can be
> rendered independently of each other (not unlike raytracing).

Indeed, a great many graphical tasks parallelise very well. For example, 
image scaling (whether bilinear or trilinear or anisotropic or 
whatever). Or DCT / inverse DCT operations [on independent pixel blocks].

Other tasks don't parallelise so well.
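The per-pixel independence is easy to see in code. A toy sketch (nearest-neighbour rather than bilinear, for brevity, and all the names here are mine): every output row depends only on the source image, so the rows can be farmed out to a worker pool.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_row(args):
    """Produce one output row; it reads the source but no other output row."""
    src, y_out, sw, sh, dw, dh = args
    y_src = y_out * sh // dh
    return [src[y_src][x * sw // dw] for x in range(dw)]

def scale_image(src, dw, dh):
    sh, sw = len(src), len(src[0])
    jobs = [(src, y, sw, sh, dw, dh) for y in range(dh)]
    with ThreadPoolExecutor() as pool:  # each row is an independent task
        return list(pool.map(scale_row, jobs))

src = [[0, 1],
       [2, 3]]
out = scale_image(src, 4, 4)  # 2x2 scaled up to 4x4
print(out[0])  # [0, 0, 1, 1]
```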



From: Invisible
Subject: Re: Three guesses why
Date: 22 Jun 2011 04:35:48
Message: <4e01a964$1@news.povray.org>
On 21/06/2011 06:55 PM, Darren New wrote:
> On 6/21/2011 1:31, Invisible wrote:
>> I wonder if anybody has fixed that thing where closing the CD drive
>> causes
>> the entire Windows OS to lock up for 30 seconds yet?
>
> Yes. They call it "NT". Your flaw is thinking the entire OS is locked
> up, when it's really just explorer waiting to read the disk.

Sure. It's not the whole OS. It's just the entire GUI. Subtle difference 
there.

Same thing happens if the OS is expecting to configure a NIC via DHCP 
but there's no server. It waits quite a long time - which is fine - but 
the entire GUI again locks up while it does so (which isn't fine). No 
reason, just poor design. (I think in XP it at least doesn't sit around 
waiting if there's no cable plugged in.)

>> WAN is not fast. 5 Mbit/sec bandwidth and 105 ms latency is not fun with
>> such a chatty protocol.
>
> Yeah. Try starting up SunOS and X Windows on a 4-meg sparcstation with
> no internal disk paging over the 10Mbps ethernet. Ninety seconds to
> switch window focus?

Running X Windows on an Amiga 1200 is almost as slow as that. Which is 
daft really; the native OS is trippy-fast. More responsive than some of 
the [massively more powerful] PCs I look after at work. And here I was 
thinking Linux was fast...

>> I do remember configuring my Amiga 1200 to boot from a RAM disk.
>> Jesus, that was fast...
>
> Amazing how when you get rid of the moving parts and the simulation of
> moving parts it goes really fast.

Back in those days, people were talking about PCMCIA RAM cards and how 
they were going to replace HDs. Never happened, did it?

Then again, SSDs are here now, and gradually becoming actually popular. 
I guess history repeats itself, eh?

>> OK, so a problem would be provably impossible to solve if the search
>> space
>> is infinite or if there's no way to enumerate all possible solutions?
>
> It would be provably impossible to solve via brute force, sure. That's
> exactly what keeps the halting problem from being solved, in at least
> some sense.
>
> Beyond that, you'll have to say what you mean by "solving optimally". We
> know the optimal Big-O for a number of problems. Is that what you're
> talking about?

Warp's original comment was that partitioning an arbitrary problem such 
that you gain parallel speedup is probably impossible "in practice". 
Intuitively, I was wondering how many real-world problems are 
"impossible" to solve in the real world. (Here "solve" doesn't 
necessarily mean deriving an /optimal/ solution, just a solution that's 
"usable".)

I guess that *is* way too vague to answer.

> I also realized yesterday, when driving past the building I tried to
> model for my game that slowed the frame rate to a crawl when I actually
> drew it, that the reason the universe doesn't have trouble keeping up
> with the frame-rate is that it really does have N processors for a
> problem of size N.
>
> How do you calculate in real time exactly what it would look like to
> have light reflecting off that building built out of all those atoms?
> Give each atom the ability to process its contribution to the reflected
> light in parallel. How do you do the clipping and shadow casting in real
> time? Have each atom doing the clipping and shadow casting in parallel
> for all the pixels it affects. Etc.
>
> (Yes, maybe obvious, but it kind of twinged with this conversation.)

Heheh, yeah. Didn't somebody say a few months back "we need advanced 
quantum computers *now*!", to which somebody else replied "we already 
have that; we call it 'the real world'". ;-)

>> I think the fundamental thing is that GC wants to traverse pointers
>> backwards, but the heap structure only allows you to traverse them
>> forwards.
>
> I don't think GC wants to traverse backwards, except perhaps in purely
> functional languages/heaps. GC works by starting at the roots and seeing
> what's still reachable, so that's all forward pointers.

Ideally you want to find out whether a given object is pointed to by 
anything - which is a reverse pointer traversal. But of course, that's 
impossible, so we do forward traversal to compute the same information.

Except that's not true of course. It's possible for two dead objects to 
point to each other. (Which is why reference counting will never work.) 
The other option is to put the liveness testing into pointer 
manipulation operations (since this is the only place where the 
information can change), but that's going to heap tonnes of overhead 
onto very common operations.
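The cycle problem is easy to demonstrate. A toy sketch of manual reference counting (the `RC` class is mine, purely for illustration): two objects that point at each other keep each other's counts above zero even after every outside reference is gone.

```python
class RC:
    """Hypothetical manually reference-counted object."""
    def __init__(self):
        self.count = 0
        self.ref = None

a, b = RC(), RC()
a.count += 1          # one external root holds a
b.count += 1          # one external root holds b

a.ref = b; b.count += 1   # cycle: a -> b
b.ref = a; a.count += 1   # cycle: b -> a

a.count -= 1          # the roots drop their references:
b.count -= 1          # both objects are now dead...

# ...but each still holds the other's last reference, so neither
# count ever reaches zero and neither is ever freed.
print(a.count, b.count)  # 1 1
```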

The other thing I've thought about is having multiple heap areas and 
tracking pointers between them. If you could arrange it so that all the 
garbage is in one heap chunk, you can just drop the whole chunk rather 
than doing complex processing over it. However, that's not easy to 
achieve in general.

>> Other than that, we keep getting processors with more and more cores, but
>> exactly the same amount of memory bandwidth. This seems very
>> unsustainable.
>
> Yep. That's why NUMA is getting more popular and such.

Only for supercomputers.

I wonder how long it will take for desktops to catch up?

Speaking of which, take a look at this:

http://tinyurl.com/64bfdmv

The interesting bit is slides #3 and #4. I've often wondered why all the 
books about supercomputers talk a lot about parallel processing, but 
[until recently] it's never seen in normal computers. Now I know, I guess...



From: Darren New
Subject: Re: Three guesses why
Date: 22 Jun 2011 11:44:45
Message: <4e020ded$1@news.povray.org>
On 6/22/2011 1:35, Invisible wrote:
> Sure. It's not the whole OS. It's just the entire GUI. Subtle difference there.

Not the entire GUI. One program running the file manager.

Your email client still works. Your background processes still work. 
Programs trying to open files might get stuck.

> Same thing happens if the OS is expecting to configure a NIC via DHCP but
> there's no server. It waits quite a long time - which is fine - but the
> entire GUI again locks up while it does so (which isn't fine).

Never had that problem that I remember. On the other hand, yes, since Vista 
or so, network I/O requests can be canceled, which is what you're 
experiencing there.

> No reason, just poor design.

More like "insufficient engineering time given to edge cases to make them 
efficient."

> Then again, SSDs are here now, and gradually becoming actually popular. I
> guess history repeats itself, eh?

Yeah. Sadly, it has to actually *look* like a disk, so you still get a lot 
of the overhead.

> Warp's original comment was that partitioning an arbitrary problem such that
> you gain parallel speedup is probably impossible "in practice".

I wouldn't be surprised if it was mathematically impossible in theory, either.

> I guess that *is* way too vague to answer.

For the general case, yeah.

> Heheh, yeah. Didn't somebody say a few months back "we need advanced quantum
> computers *now*!", to which somebody else replied "we already have that; we
> call it 'the real world'". ;-)

I think that was me, yes. :-)

> Ideally you want to find out whether a given object is pointed to by
> anything

Um, no, not really. You want the collection of live objects. Or the 
collection of dead objects. You don't do GC one object at a time, unless 
you're doing reference counting.

http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tri-colour_marking

> Except that's not true of course. It's possible for two dead objects to
> point to each other. (Which is why reference counting will never work.) The
> other option is to put the liveness testing into pointer manipulation
> operations (since this is the only place where the information can change),
> but that's going to heap tonnes of overhead onto very common operations.

Yes, that's reference counting, and that's why people avoid that in GC when 
they can.

> The other thing I've thought about is having multiple heap areas and
> tracking pointers between them. If you could arrange it so that all the
> garbage is in one heap chunk, you can just drop the whole chunk rather than
> doing complex processing over it. However, that's not easy to achieve in
> general.

That's called semi-space garbage collection.
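A toy sketch of that idea (my own names, not a real collector): live objects reachable from the roots get copied into a fresh "to-space", with forwarding pointers so shared objects are copied once; everything left behind in the old space is dropped in one go, with no per-object processing of the garbage.

```python
class Node:
    def __init__(self, val, ref=None):
        self.val, self.ref = val, ref

def collect(roots):
    """Semi-space copy: evacuate live objects, abandon from-space whole."""
    to_space, forwarded = [], {}
    def copy(obj):
        if obj is None:
            return None
        if id(obj) in forwarded:       # already evacuated: reuse the copy
            return forwarded[id(obj)]
        clone = Node(obj.val)
        forwarded[id(obj)] = clone
        to_space.append(clone)
        clone.ref = copy(obj.ref)      # scavenge outgoing references
        return clone
    new_roots = [copy(r) for r in roots]
    return new_roots, to_space         # old from-space can now be freed whole

live = Node(1, Node(2))
garbage = Node(99)                     # unreachable: never copied, never visited
roots, heap = collect([live])
print([n.val for n in heap])  # [1, 2]
```

Note the collector's cost is proportional to the *live* data only; the garbage is never even looked at.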

>>> Other than that, we keep getting processors with more and more cores, but
>>> exactly the same amount of memory bandwidth. This seems very
>>> unsustainable.
>>
>> Yep. That's why NUMA is getting more popular and such.
>
> Only for supercomputers.

The Opteron is a supercomputer?

> I wonder how long it will take for desktops to catch up?

You're probably already running it.

> Speaking of which, take a look at this:
>
> http://tinyurl.com/64bfdmv

Comic Saaaaaaaaaaans!


> The interesting bit is slides #3 and #4. I've often wondered why all the
> books about supercomputers talk a lot about parallel processing, but [until
> recently] it's never seen in normal computers. Now I know, I guess...

Because until recently (like, the last 10 years), putting multiple CPUs in 
something was very expensive.




From: Darren New
Subject: Re: Three guesses why
Date: 22 Jun 2011 11:45:51
Message: <4e020e2f@news.povray.org>
On 6/22/2011 1:07, Invisible wrote:
> Indeed, a great many graphical tasks parallelise very well.

Pretty much anything computed by atoms parallelizes well. Sound algorithms, 
video algorithms, image algorithms, mapping algorithms, etc.




From: Darren New
Subject: Re: Three guesses why
Date: 22 Jun 2011 12:07:11
Message: <4e02132f$1@news.povray.org>
On 6/22/2011 8:44, Darren New wrote:
> Your email client still works. Your background processes still work.
> Programs trying to open files might get stuck.

Actually, I suspect what's happening is that you put the CD in, and Windows 
immediately generates a message saying "hey, your list of valid disks may 
have changed." At which point Explorer (and maybe others) goes out over the 
list of disks, but Windows is still spinning up the CD and all that, not yet 
having determined which file system it is and mounted it, so processes that 
do that hang waiting for the disk to mount.

You *could* make a separate thread to handle this one case, but it's 
probably not worth it, esp. since you're the only person in my entire career 
I've heard complain about it.




From: Orchid XP v8
Subject: Re: Three guesses why
Date: 22 Jun 2011 16:31:48
Message: <4e025134$1@news.povray.org>
On 22/06/2011 05:07 PM, Darren New wrote:

> You *could* make a separate thread to handle this one case, but it's
> probably not worth it, esp. since you're the only person in my entire
> career I've heard complain about it.

The other fun one is clicking on A: by mistake. It takes an absurdly 
long time to figure out that there's no disk there. (That's possibly a 
limitation of the hardware, but some sort of status indication wouldn't 
hurt.)

Still, apparently only people like me actually *care* that the design is 
flawed. Normal people just assume "it's a computer, it's not supposed to 
work properly". So why does M$ have any incentive to fix it? It's not 
like it affects their profits...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Three guesses why
Date: 22 Jun 2011 16:51:36
Message: <4e0255d8$1@news.povray.org>
On 22/06/2011 04:45 PM, Darren New wrote:
> On 6/22/2011 1:07, Invisible wrote:
>> Indeed, a great many graphical tasks parallelise very well.
>
> Pretty much anything computed by atoms parallelizes well. Sound
> algorithms, video algorithms, image algorithms, mapping algorithms, etc.

Not sure what the difference between "video algorithms" and "image 
algorithms" is, but you can add "kinetic simulations" to the list...




From: Darren New
Subject: Re: Three guesses why
Date: 22 Jun 2011 17:05:40
Message: <4e025924$1@news.povray.org>
On 6/22/2011 13:31, Orchid XP v8 wrote:
> Still, apparently only people like me actually *care* that the design is
> flawed.

I'd say confusing "flaw" with "intentional design trade-off" is your problem 
there.




From: Darren New
Subject: Re: Three guesses why
Date: 22 Jun 2011 17:06:42
Message: <4e025962$1@news.povray.org>
On 6/22/2011 13:51, Orchid XP v8 wrote:
> Not sure what the difference between "video algorithms" and "image
> algorithms" is,

If you have an image operation that convolves the entire image in a way that 
can't be parallelized, you can still parallelize it between different frames.
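For instance (a toy sketch, with a trivial stand-in for the whole-frame operation): even if the operation is serial within one frame, independent frames can run on a worker pool side by side.

```python
from concurrent.futures import ThreadPoolExecutor

def convolve_frame(frame):
    """Stand-in for a whole-frame operation that is serial *within* a frame."""
    return sum(frame)

frames = [[i, i + 1] for i in range(8)]       # 8 independent frames
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(convolve_frame, frames))  # frame-level parallelism
print(results)  # [1, 3, 5, 7, 9, 11, 13, 15]
```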

> but you can add "kinetic simulations" to the list...

Yeah, probably true.





Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.