From: Patrick Elliott
Subject: AMD killing Cuda?
Date: 29 Dec 2009 02:43:34
Message: <4b39b326$1@news.povray.org>
Got to wonder. See, the current system (and Intel's newest design, which is 
more of the same) is:

CPU = Core (IPU + FPU), multiple cores.
GPU = separate component.

AMD is looking at making it:

CPU = IPU (an integer processor, which does what most processors did before 
in-built FPUs: execute code, but *not* do any floating-point math). Multiple 
IPUs. FPU - <none>. GPU - integrated into the CPU, and replacing all 
functions of the FPU in the process.

They figure that this, even without changing die sizes, will garner an 
80% increase in speed, since all FPU work gets dumped, not to a separate 
FPU for each core, but to however many of the 128+ "pipes" the GPU has. 
Since CUDA is supposed to be a way to "program" a GPU... what happens if 
you don't need to do that any more, but can just drop your "code" onto an 
IPU and have it use the GPU pipes to do its math? Seems to me it makes 
"programming" the GPU at all kind of redundant. :p
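
For anyone who hasn't used CUDA: "programming the GPU" today means writing 
and launching a kernel explicitly, roughly like the toy sketch below (CUDA C; 
the kernel name, sizes and 256-thread blocks are made up for illustration, 
and error checking is omitted). All of that ceremony exists only to hand the 
math to the GPU by hand, which is exactly the step this design would make 
unnecessary.

// Toy sketch of CUDA-style GPU programming (assumes the standard CUDA toolkit).
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// The kernel: code written specifically for the GPU.
__global__ void scale(float *x, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= k;                  // the FP work that would go to the "GPU pipes"
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);                     // explicit device memory
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);    // explicit copy across the bus

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);        // explicit kernel launch
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // explicit copy back

    printf("h[0] = %g\n", h[0]);                        // prints 2
    cudaFree(d);
    free(h);
    return 0;
}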

-- 
void main () {

     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
       call functional_code();
     }
     else
       call crash_windows();
}




From: Warp
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 08:31:52
Message: <4b3a04c8@news.povray.org>
Patrick Elliott <sel### [at] npgcablecom> wrote:
> CPU = IPU (Integer processor, which does the stuff that most processors, 
> prior to adding in-built FPUs did. I.e., execute code, but *not* do any 
> math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and 
> replacing all functions of the FPU in the process.

> They figure that this, even without changing die sizes, will garner an 
> 80% increase in speed, since all FPU functions get dumped, not to a 
> separate FPU for each core, but to how ever many of the 128+ "pipes" in 
> the GPU. Since Cuda is supposed to be a way to "program" a GPU... What 
> happens if you don't need to do that any more, but can just drop your 
> "code" into an IPU, and have it use the GPU pipes to do its math? Seems 
> to me it makes "programming" the GPU at all kind of redundant. :p

  Why is it even called a GPU? How is it related to graphics? It sounds more
like a parallelized math coprocessor, not unlike SSE.
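
For the comparison: SSE already lets one CPU instruction do the same float 
operation on four values at once, just on a much smaller scale than a GPU's 
pipes. A toy illustration (plain C with SSE intrinsics; the values are made 
up, and it needs an x86 CPU with SSE):

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = {  1.0f,  2.0f,  3.0f,  4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float c[4];

    __m128 va = _mm_loadu_ps(a);     /* 4 floats into one 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  /* one instruction, four additions */
    _mm_storeu_ps(c, vc);

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);   /* 11 22 33 44 */
    return 0;
}

The design being described is essentially this idea scaled up to the 128+ 
lanes of a GPU and moved onto the same die as the integer cores.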

-- 
                                                          - Warp



From: andrel
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 09:45:46
Message: <4B3A1618.1040308@hotmail.com>
On 29-12-2009 14:31, Warp wrote:
> Patrick Elliott <sel### [at] npgcablecom> wrote:
>> CPU = IPU (Integer processor, which does the stuff that most processors, 
>> prior to adding in-built FPUs did. I.e., execute code, but *not* do any 
>> math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and 
>> replacing all functions of the FPU in the process.
> 
>> They figure that this, even without changing die sizes, will garner an 
>> 80% increase in speed, since all FPU functions get dumped, not to a 
>> separate FPU for each core, but to how ever many of the 128+ "pipes" in 
>> the GPU. Since Cuda is supposed to be a way to "program" a GPU... What 
>> happens if you don't need to do that any more, but can just drop your 
>> "code" into an IPU, and have it use the GPU pipes to do its math? Seems 
>> to me it makes "programming" the GPU at all kind of redundant. :p
> 
>   Why is it even called a GPU? How is it related to graphics? It sounds more
> like a parallelized math coprocessor, not unlike SSE.
> 
Or like a GPU with some added clue, whichever way you want to look at it.
BTW, I can't seem to find any references for this claim. There are some 
indications that AMD was thinking along these lines in 2007 or even 2006. So 
if Patrick (or anybody else) could come up with a relevant reference, we 
might be able to judge what is going on and whether it is really an innovation.



From: somebody
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 11:57:15
Message: <4b3a34eb@news.povray.org>
"andrel" <a_l### [at] hotmailcom> wrote in message
news:4B3### [at] hotmailcom...

> indications that AMD was thinking in this way in 2007 or even 2006. So
> if Patrick (or anybody else) could come up with a relevant reference we
> might be able to judge what is going on and if it is really an innovation.

Maybe this:

http://arstechnica.com/hardware/news/2009/11/amd-avoiding-larrabee-route-on-road-to-cpugpu-fusion.ars



From: nemesis
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 12:30:00
Message: <web.4b3a3b9accd6e3f412fad2f0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Patrick Elliott <sel### [at] npgcablecom> wrote:
> > CPU = IPU (Integer processor, which does the stuff that most processors,
> > prior to adding in-built FPUs did. I.e., execute code, but *not* do any
> > math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and
> > replacing all functions of the FPU in the process.
>
> > They figure that this, even without changing die sizes, will garner an
> > 80% increase in speed, since all FPU functions get dumped, not to a
> > separate FPU for each core, but to how ever many of the 128+ "pipes" in
> > the GPU. Since Cuda is supposed to be a way to "program" a GPU... What
> > happens if you don't need to do that any more, but can just drop your
> > "code" into an IPU, and have it use the GPU pipes to do its math? Seems
> > to me it makes "programming" the GPU at all kind of redundant. :p
>
>   Why is it even called a GPU? How is it related to graphics?

Because it first got its hype from the games industry, with the explicit goal
of doing graphics.  I'm sure you're aware of that.

AMD bought ATI, and ATI cards are not innovating anymore.  They realize that
since they can't compete with NVIDIA, it's better to just try to crap all over
their plans for worldwide domination.

In any case, CUDA may be a bad proprietary solution, but OpenCL is already
proving that there's some potential for fast, homogeneous parallel
computing in hybrid CPU/GPU environments:

http://vimeo.com/8141489
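
For reference, the appeal of OpenCL is that a single kernel source is 
compiled at run time for whatever device is present, CPU or GPU alike. A 
bare-bones host sketch (C, OpenCL 1.x API; the kernel name and the 
GPU-then-CPU fallback are made up for illustration, and error checking is 
omitted):

#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,"
    "                   __global const float *b,"
    "                   __global float       *c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat;
    cl_device_id   dev;
    clGetPlatformIDs(1, &plat, NULL);

    /* Prefer a GPU, fall back to the CPU: the kernel source is identical. */
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context       ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q   = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel vadd = clCreateKernel(prog, "vadd", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(vadd, 0, sizeof da, &da);
    clSetKernelArg(vadd, 1, sizeof db, &db);
    clSetKernelArg(vadd, 2, sizeof dc, &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, vadd, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %g (expected 30)\n", c[10]);
    return 0;
}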



From: Warp
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 12:58:19
Message: <4b3a433b@news.povray.org>
nemesis <nam### [at] gmailcom> wrote:
> AMD bought ATI and ATI cards are not innovating anymore.  They realize since
> they can't compete with NVidia, better to just try to crap all over their plans
> for worldwide domination.

  I find your opinion a bit contradictory.

  The idea of integrating what are currently two completely separate
microprocessors (separated by a relatively slow bus) into a single
microchip, in such a way that even existing programs which do not explicitly
take advantage of the current design could take advantage of the new one,
sounds rather innovative to me.

  The problem with CUDA is that programs need to support it explicitly,
and it is, and always will be, limited. If the new AMD design allows
currently existing executables to run on a CPU/GPU-like hybrid and
get a considerable speed boost from that, I call that innovative and
desirable. Even POV-Ray could someday benefit from that, without having
to do anything special about it.
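
  To make the contrast concrete: the loop below is ordinary portable C with
no GPU API in sight (a made-up example, not taken from POV-Ray). CUDA or
OpenCL can only speed it up if somebody rewrites it against their APIs; the
premise of the AMD design is that the floating-point instructions it already
compiles to would land on the shared GPU pipes as-is.

#include <stdio.h>

int main(void)
{
    float acc = 0.0f;
    /* Plain scalar FP math: no kernels, no device copies, no special APIs. */
    for (int i = 1; i <= 1000000; ++i)
        acc += 1.0f / (float)i;
    printf("harmonic(1e6) ~= %f\n", acc);
    return 0;
}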

-- 
                                                          - Warp



From: nemesis
Subject: Re: AMD killing Cuda?
Date: 29 Dec 2009 15:55:01
Message: <web.4b3a6c3cccd6e3f412fad2f0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> nemesis <nam### [at] gmailcom> wrote:
> > AMD bought ATI and ATI cards are not innovating anymore.  They realize since
> > they can't compete with NVidia, better to just try to crap all over their plans
> > for worldwide domination.
>
>   I find your opinion a bit contradictory.
>
>   The idea of integrating what are currently two completely separate
> microprocessors (separated by a relatively slow bus) into one single
> microchip in such way that even existing programs which do not explicitly
> take advantage of the current design could take advantage of the new
> design, sound rather innovative to me.

It just sounds to me like a CPU maker, afraid of massively parallel processing
units growing out of its control, buying the second-largest maker of them and
trying to turn them into a mere FPU.

>   The problem with CUDA is that programs need to support it explicitly,
> and it is, and always will be limited. If the new AMD design allows
> currently existing executables to be run on a CPU/GPU-like hybrid and
> get a considerable speed boost from that, I call that innovative and
> desirable. Even POV-Ray could someday benefit from that, without having
> to do anything special about it.

Funny you don't comment on OpenCL.



From: Patrick Elliott
Subject: Re: AMD killing Cuda?
Date: 31 Dec 2009 18:45:11
Message: <4b3d3787$1@news.povray.org>
andrel wrote:
> On 29-12-2009 14:31, Warp wrote:
>> Patrick Elliott <sel### [at] npgcablecom> wrote:
>>> CPU = IPU (Integer processor, which does the stuff that most 
>>> processors, prior to adding in-built FPUs did. I.e., execute code, 
>>> but *not* do any math.). Multiple IPUs. FPU - <none>. GPU - 
>>> Integrated into CPU, and replacing all functions of the FPU in the 
>>> process.
>>
>>> They figure that this, even without changing die sizes, will garner 
>>> an 80% increase in speed, since all FPU functions get dumped, not to 
>>> a separate FPU for each core, but to how ever many of the 128+ 
>>> "pipes" in the GPU. Since Cuda is supposed to be a way to "program" a 
>>> GPU... What happens if you don't need to do that any more, but can 
>>> just drop your "code" into an IPU, and have it use the GPU pipes to 
>>> do its math? Seems to me it makes "programming" the GPU at all kind 
>>> of redundant. :p
>>
>>   Why is it even called a GPU? How is it related to graphics? It 
>> sounds more
>> like a parallelized math coprocessor, not unlike SSE.
>>
> Or like a GPU with some added clue, whichever way you want to look at it.
> BTW I don't seem to be able to find any references for this claim. Some 
> indications that AMD was thinking in this way in 2007 or even 2006. So 
> if Patrick (or anybody else) could come up with a relevant reference we 
> might be able to judge what is going on and if it is really an innovation.

Hmm. I can't find the original article I was reading; I didn't bookmark it. 
But this one about covers it:

http://arstechnica.com/hardware/news/2009/11/amd-avoiding-larrabee-route-on-road-to-cpugpu-fusion.ars

-- 
void main () {

     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
       call functional_code();
     }
     else
       call crash_windows();
}




From: Patrick Elliott
Subject: Re: AMD killing Cuda?
Date: 31 Dec 2009 18:53:48
Message: <4b3d398c$1@news.povray.org>
nemesis wrote:
> Warp <war### [at] tagpovrayorg> wrote:
>> nemesis <nam### [at] gmailcom> wrote:
>>> AMD bought ATI and ATI cards are not innovating anymore.  They realize since
>>> they can't compete with NVidia, better to just try to crap all over their plans
>>> for worldwide domination.
>>   I find your opinion a bit contradictory.
>>
>>   The idea of integrating what are currently two completely separate
>> microprocessors (separated by a relatively slow bus) into one single
>> microchip in such way that even existing programs which do not explicitly
>> take advantage of the current design could take advantage of the new
>> design, sound rather innovative to me.
> 
> It just sounds to me like a CPU-maker afraid of severely multiprocessing units
> growing out of their control buying the second largest of them and trying to
> make them into a mere FPU.
> 
No, actually, what they are doing has the "disadvantage" that if you 
need a GPU upgrade, it means a CPU upgrade, since they are the same chip. 
*But* you lose the bottlenecks that come from *having* separate 
components, which means you don't even have to improve the GPU itself 
to get improved performance. And, yeah, that has some drawbacks as well, 
undoubtedly. In any case, you hear people babbling about CUDA all the 
time, so I figured, "OK, this sounds like it kind of kills the whole idea."

>>   The problem with CUDA is that programs need to support it explicitly,
>> and it is, and always will be limited. If the new AMD design allows
>> currently existing executables to be run on a CPU/GPU-like hybrid and
>> get a considerable speed boost from that, I call that innovative and
>> desirable. Even POV-Ray could someday benefit from that, without having
>> to do anything special about it.
> 
> Funny you don't comment on OpenCl.
> 

Right, because having a single homogeneous processor core, with the GPU 
integrated, would *completely* hose the idea of OpenCL, which depends on 
"being able to handle code execution on *any* machine, without regard to 
the platform." Guess what: anything that boosts performance of the CPU/GPU 
combination by 80% is going to improve the speed of *anything* that involves 
running graphical code, including OpenCL. So, yeah, it's pretty irrelevant 
to the question of whether a "graphics-specific" language might get hosed 
by doing this.

-- 
void main () {

     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
       call functional_code();
     }
     else
       call crash_windows();
}




From: Warp
Subject: Re: AMD killing Cuda?
Date: 31 Dec 2009 19:20:16
Message: <4b3d3fc0@news.povray.org>
Patrick Elliott <sel### [at] npgcablecom> wrote:
> No, actually, what they are doing has the "disadvantage", that if you 
> need a GPU upgrade, it means a CPU upgrade, since they are the same. 

  My guess is that if you need a GPU upgrade, what you do is buy a new
regular GPU card and install it, similarly to what you would do with a PC
that has an integrated GPU.

  I suppose that if the architecture is designed cleverly enough, a program
could benefit from both the CPU/GPU hybrid (for CPU-bound tasks) *and* the
faster separate GPU card (for the actual rendering) at the same time.
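
  In OpenCL terms that would just mean enumerating every device the platform
exposes and giving each its own command queue. A speculative sketch (C,
OpenCL API; nothing here comes from AMD's actual design, and error checking
is omitted):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plat;
    cl_device_id   dev[8];
    cl_uint        ndev = 0;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 8, dev, &ndev);
    if (ndev > 8) ndev = 8;

    /* One shared context, one command queue per device. */
    cl_context ctx = clCreateContext(NULL, ndev, dev, NULL, NULL, NULL);
    cl_command_queue q[8];
    for (cl_uint i = 0; i < ndev; ++i) {
        char name[256];
        clGetDeviceInfo(dev[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        q[i] = clCreateCommandQueue(ctx, dev[i], 0, NULL);
        printf("queue %u -> %s\n", i, name);
        /* CPU-bound kernels could go to the hybrid's queue and rendering
         * kernels to the discrete card's queue, all from the same program. */
    }
    return 0;
}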

-- 
                                                          - Warp


