Since someone (Andrew?) recently mentioned something about caches.
http://duartes.org/gustavo/blog/post/what-your-computer-does-while-you-wait
--
Darren New, San Diego CA, USA (PST)
The NFL should go international. I'd pay to
see the Detroit Lions vs the Roman Catholics.
Darren New <dne### [at] sanrrcom> wrote:
> Since someone (Andrew?) recently mentioned something about caches.
> http://duartes.org/gustavo/blog/post/what-your-computer-does-while-you-wait
In the good old days the CPU and the RAM had the same speed. In other
words, the RAM controller could supply the CPU with data at the exact
speed at which the CPU could read it. This made both the CPU and the
RAM controllers very simple and straightforward to implement.
Then at some point CPU speeds started growing faster than RAM speeds.
At some point it got so ridiculous that the CPU could theoretically
read hundreds of times faster than the RAM controller could feed it.
Of course, since basically everything the CPU works with is in RAM (including
the very code the CPU is trying to execute), something had to be done,
or else the CPU would simply be idle most of the time while waiting for
the data to arrive from RAM.
I suppose a huge percentage of development resources have been put
into solving the problem of how to feed the CPU with data as fast as
it can eat it.
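One way to see what all that work buys you is access locality: the hardware rewards code that walks memory in order. A rough sketch (my example, not from the thread) comparing cache-friendly and cache-hostile traversal of the same data; note that in CPython the interpreter overhead masks much of the effect, whereas in C the difference is dramatic:

```python
import timeit

N = 1000
matrix = [[1] * N for _ in range(N)]  # an N x N table of ones

def row_major():
    # Visits elements in the order the rows are laid out: good locality.
    return sum(matrix[i][j] for i in range(N) for j in range(N))

def col_major():
    # Jumps to a different row on every access: poor locality.
    return sum(matrix[i][j] for j in range(N) for i in range(N))

# Both orders produce the same answer; only the access pattern differs.
assert row_major() == col_major() == N * N
print("row-major:", timeit.timeit(row_major, number=3))
print("col-major:", timeit.timeit(col_major, number=3))
```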
--
- Warp
Warp wrote:
> In the good old days the CPU and the RAM had the same speed.
Some even clocked the memory faster than the CPU, which is where you got
things like "I/O Channel Processors" (aka "IOPs") on the mainframes. You'd
give the IOP a list of sectors to read and where to put them, and it would
interleave the access with what the processor was accessing. (The Amiga did
things like this too, with the "blitter" and other chips.)
> I suppose a huge percentage of development resources have been put
> into solving the problem of how to feed the CPU with data as fast as
> it can eat it.
It's going to be difficult, if the CPU is running up against speed-of-light
delays. About the only thing you can do is make things smaller or 3D. :-)
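To put rough numbers on that (my figures, not Darren's): even a signal moving at the vacuum speed of light covers only a few centimeters per clock cycle at modern frequencies, and real on-chip signals are slower still.

```python
# Back-of-envelope sketch: distance light travels in one clock period,
# assuming vacuum speed of light (an optimistic upper bound for signals).
C = 299_792_458  # speed of light, m/s

def cm_per_cycle(ghz):
    """Distance light covers in one clock period, in centimeters."""
    return C / (ghz * 1e9) * 100

for ghz in (1, 3, 10):
    print(f"{ghz} GHz: ~{cm_per_cycle(ghz):.1f} cm per cycle")
```

At 3 GHz that is only about 10 cm per cycle, round trip half of that, which is why physical distance to RAM matters at all.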
--
Darren New, San Diego CA, USA (PST)
The NFL should go international. I'd pay to
see the Detroit Lions vs the Roman Catholics.
>> I suppose a huge percentage of development resources have been put
>> into solving the problem of how to feed the CPU with data as fast as
>> it can eat it.
>
> It's going to be difficult, if the CPU is running up against
> speed-of-light delays. About the only thing you can do is make things
> smaller or 3D. :-)
Remind me: Why do they not make 3D chips already?
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
From: Nicolas Alvarez
Subject: Re: Interesting look at CPU performance
Date: 1 Dec 2008 16:41:29
Message: <49345a08@news.povray.org>
Warp wrote:
> In the good old days the CPU and the RAM had the same speed. In other
> words, the RAM controller could supply the CPU with data at the exact
> speed at which the CPU could read it. This made both the CPU and the
> RAM controllers very simple and straightforward to implement.
>
> Then at some point CPU speeds started growing faster than RAM speeds.
> At some point it got so ridiculous that the CPU could theoretically
> read hundreds of times faster than the RAM controller could feed it.
> Of course, since basically everything the CPU works with is in RAM (including
> the very code the CPU is trying to execute), something had to be done,
> or else the CPU would simply be idle most of the time while waiting for
> the data to arrive from RAM.
On the PrimeGrid distributed computing project, the LLR app (to test if a
number is prime) is a *lot* heavier on the CPU than the sieve app (finding
lots of composite numbers to filter them out so LLR doesn't have to test
them). Users report hotter CPUs when running LLR.
And indeed, sieve uses quite a bit more memory. I guess LLR can fit most of
its data in the CPU cache, letting the CPU spend more time computing and less
time waiting, which makes it run hotter.
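The working-set effect Nicolas describes can be sketched directly (my example with made-up sizes, not PrimeGrid's code; in CPython, interpreter overhead can swamp the cache effect, so treat the numbers as illustrative):

```python
import array
import time

def per_element_time(n_elems, reps):
    """Average seconds per element when repeatedly summing an int64 array."""
    data = array.array('q', range(n_elems))
    total = 0
    start = time.perf_counter()
    for _ in range(reps):
        total += sum(data)
    elapsed = time.perf_counter() - start
    return elapsed / (n_elems * reps), total

# ~32 KB working set: small enough to stay in L1/L2 cache.
small_t, small_sum = per_element_time(4_000, 1_000)
# ~32 MB working set: spills out to RAM on most CPUs.
large_t, large_sum = per_element_time(4_000_000, 1)

print(f"small working set: {small_t * 1e9:.2f} ns/element")
print(f"large working set: {large_t * 1e9:.2f} ns/element")
```

When the per-element cost rises with the working set, the extra time is the CPU waiting on RAM rather than computing, which matches the cooler-CPU observation for the sieve app.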
Orchid XP v8 wrote:
> Remind me: Why do they not make 3D chips already?
I would guess heat, interference, and the fact that they make 2D chips by
successively scraping layers off the top (essentially)? First you have to
invent a way of growing a transistor in the middle of a chip.
They do have "piggyback" type packaging, which helps.
This is one of the things that "optical" chips are supposed to help with.
--
Darren New, San Diego CA, USA (PST)
The NFL should go international. I'd pay to
see the Detroit Lions vs the Roman Catholics.
> Remind me: Why do they not make 3D chips already?
They use lithography to put transistors onto the surface
of silicon wafers, and conventionally this requires wiring from the
edges of the chips.
3D chips require interconnections between layers of
circuits. IBM has been producing some chips with
TSVs (through-silicon vias) to connect stacks of wafers.
There are some production issues, but the technology is almost
ready for the PC market.
http://www.semiconductor.net/article/CA6535050.html