Patrick Elliott <sel### [at] npgcable com> wrote:
> CPU = IPU (an integer processor, which does what most processors did
> before built-in FPUs: execute code, but do *no* floating-point math).
> Multiple IPUs. FPU - <none>. GPU - integrated into the CPU, replacing
> all functions of the FPU in the process.
> They figure that this, even without changing die sizes, will garner an
> 80% increase in speed, since all FPU work gets dumped, not to a
> separate FPU per core, but to however many of the 128+ "pipes" are in
> the GPU. Since CUDA is supposed to be a way to "program" a GPU... what
> happens if you don't need to do that any more, but can just drop your
> "code" into an IPU and have it use the GPU pipes to do its math? Seems
> to me it makes "programming" the GPU at all kind of redundant. :p
Why is it even called a GPU? How is it related to graphics? It sounds more
like a parallelized math coprocessor, not unlike SSE.
--
- Warp