andrel wrote:
> On 29-12-2009 14:31, Warp wrote:
>> Patrick Elliott <sel### [at] npgcable com> wrote:
>>> CPU = IPU (an integer processor, which does what most processors
>>> did prior to adding in-built FPUs, i.e. execute code, but *not* do
>>> any floating-point math). Multiple IPUs. FPU - <none>. GPU -
>>> integrated into the CPU, replacing all functions of the FPU in the
>>> process.
>>
>>> They figure that this, even without changing die sizes, will garner
>>> an 80% increase in speed, since all FPU functions get dumped, not to
>>> a separate FPU for each core, but to however many of the 128+
>>> "pipes" in the GPU. Since CUDA is supposed to be a way to "program"
>>> a GPU... what happens if you don't need to do that any more, but can
>>> just drop your "code" into an IPU and have it use the GPU pipes to
>>> do its math? Seems to me it makes "programming" the GPU at all kind
>>> of redundant. :p
>>
>> Why is it even called a GPU? How is it related to graphics? It sounds
>> more like a parallelized math coprocessor, not unlike SSE.
>>
> Or like a GPU with some added clue, whichever way you want to look at it.
> BTW, I don't seem to be able to find any references for this claim; there
> are some indications that AMD was thinking along these lines in 2007 or
> even 2006. So if Patrick (or anybody else) could come up with a relevant
> reference, we might be able to judge what is going on and whether it is
> really an innovation.
Hmm. I can't find the original I was reading; I didn't bookmark it. But
this one about covers it:
http://arstechnica.com/hardware/news/2009/11/amd-avoiding-larrabee-route-on-road-to-cpugpu-fusion.ars
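To make the contrast concrete: today the FP-heavy part has to be packaged
and launched as an explicit GPU kernel, whereas in the design described
above you would (hypothetically) just write ordinary code and the hardware
would farm the FP instructions out to the GPU pipes. A rough sketch of the
two styles, with made-up function names:

/* Today: CUDA makes you write and launch the FP work as a kernel. */
__global__ void scale(float *v, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= k;              /* runs on the GPU's "pipes" */
}
/* ...plus the usual cudaMalloc / cudaMemcpy / scale<<<blocks, threads>>>
   boilerplate on the host side... */

/* The fused IPU+GPU idea: just write plain code; each FP instruction
   would be dispatched by the hardware to a free GPU pipe, with no
   separate kernel, launch, or copy step. (Hypothetical, obviously.) */
void scale_fused(float *v, float k, int n) {
    for (int i = 0; i < n; i++)
        v[i] *= k;
}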
--
void main () {
    if Schrödingers_cat is alive or version > 98 {
        if version = "Vista" {
            call slow_by_half();
            call DRM_everything();
        }
        call functional_code();
    }
    else
        call crash_windows();
}