Got to wonder. See, the current system, and Intel's newest design, which
is more of the same, is:
CPU = Core (IPU + FPU), multiple cores.
GPU = separate component.
AMD is looking at making it:
CPU = IPU (Integer processor, which does the stuff most processors did
prior to adding in-built FPUs, i.e., execute code, but *not* do any
floating-point math). Multiple IPUs. FPU - <none>. GPU - integrated into
the CPU, and replacing all functions of the FPU in the process.
They figure that this, even without changing die sizes, will garner an
80% increase in speed, since all FPU work gets dumped, not to a
separate FPU on each core, but to however many of the 128+ "pipes" in
the GPU. Since CUDA is supposed to be a way to "program" a GPU... what
happens if you don't need to do that any more, but can just drop your
"code" onto an IPU, and have it use the GPU pipes to do its math? Seems
to me it makes "programming" the GPU at all kind of redundant. :p
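To make that concrete, here is roughly the kind of floating-point loop I'm
talking about (just an illustrative sketch in C, not anyone's actual code):

/* Today this runs on each core's FPU/SSE unit, unless someone explicitly
 * rewrites it as a CUDA or OpenCL kernel. The idea described above is that
 * the very same compiled loop would simply have its float math dispatched
 * to the GPU-style pipes, with no rewrite at all. */
float dot(const float *a, const float *b, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];   /* float multiply-add: FPU work today */
    return sum;
}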
--
void main () {
if version = "Vista" {
call slow_by_half();
call DRM_everything();
}
call functional_code();
}
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>
Patrick Elliott <sel### [at] npgcablecom> wrote:
> CPU = IPU (Integer processor, which does the stuff that most processors,
> prior to adding in-built FPUs did. I.e., execute code, but *not* do any
> math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and
> replacing all functions of the FPU in the process.
> They figure that this, even without changing die sizes, will garner an
> 80% increase in speed, since all FPU functions get dumped, not to a
> separate FPU for each core, but to how ever many of the 128+ "pipes" in
> the GPU. Since Cuda is supposed to be a way to "program" a GPU... What
> happens if you don't need to do that any more, but can just drop your
> "code" into an IPU, and have it use the GPU pipes to do its math? Seems
> to me it makes "programming" the GPU at all kind of redundant. :p
Why is it even called a GPU? How is it related to graphics? It sounds more
like a parallelized math coprocessor, not unlike SSE.
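To illustrate the analogy, a minimal sketch (plain C plus SSE intrinsics;
the function names are made up):

#include <xmmintrin.h>

/* Scalar: one float per iteration, handled by the FPU. */
void add_scalar(float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] += b[i];
}

/* SSE: four floats per iteration, handled by the SIMD unit.
 * (Assumes n is a multiple of 4 to keep the sketch short.) */
void add_sse(float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(a + i, _mm_add_ps(va, vb));
    }
}

A GPU generalizes the same idea from 4 lanes to 128+ "pipes".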
--
- Warp
On 29-12-2009 14:31, Warp wrote:
> Patrick Elliott <sel### [at] npgcablecom> wrote:
>> CPU = IPU (Integer processor, which does the stuff that most processors,
>> prior to adding in-built FPUs did. I.e., execute code, but *not* do any
>> math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and
>> replacing all functions of the FPU in the process.
>
>> They figure that this, even without changing die sizes, will garner an
>> 80% increase in speed, since all FPU functions get dumped, not to a
>> separate FPU for each core, but to how ever many of the 128+ "pipes" in
>> the GPU. Since Cuda is supposed to be a way to "program" a GPU... What
>> happens if you don't need to do that any more, but can just drop your
>> "code" into an IPU, and have it use the GPU pipes to do its math? Seems
>> to me it makes "programming" the GPU at all kind of redundant. :p
>
> Why is it even called a GPU? How is it related to graphics? It sounds more
> like a parallelized math coprocessor, not unlike SSE.
>
Or like a GPU with some added clue, whichever way you want to look at it.
BTW, I can't seem to find any references for this claim, only some
indications that AMD was already thinking along these lines in 2007 or
even 2006. So if Patrick (or anybody else) could come up with a relevant
reference, we might be able to judge what is going on and whether it is
really an innovation.
"andrel" <a_l### [at] hotmailcom> wrote in message
news:4B3### [at] hotmailcom...
> indications that AMD was thinking in this way in 2007 or even 2006. So
> if Patrick (or anybody else) could come up with a relevant reference we
> might be able to judge what is going on and if it is really an innovation.
Maybe this:
http://arstechnica.com/hardware/news/2009/11/amd-avoiding-larrabee-route-on-road-to-cpugpu-fusion.ars
Warp <war### [at] tagpovrayorg> wrote:
> Patrick Elliott <sel### [at] npgcablecom> wrote:
> > CPU = IPU (Integer processor, which does the stuff that most processors,
> > prior to adding in-built FPUs did. I.e., execute code, but *not* do any
> > math.). Multiple IPUs. FPU - <none>. GPU - Integrated into CPU, and
> > replacing all functions of the FPU in the process.
>
> > They figure that this, even without changing die sizes, will garner an
> > 80% increase in speed, since all FPU functions get dumped, not to a
> > separate FPU for each core, but to how ever many of the 128+ "pipes" in
> > the GPU. Since Cuda is supposed to be a way to "program" a GPU... What
> > happens if you don't need to do that any more, but can just drop your
> > "code" into an IPU, and have it use the GPU pipes to do its math? Seems
> > to me it makes "programming" the GPU at all kind of redundant. :p
>
> Why is it even called a GPU? How is it related to graphics?
Because it first got its hype from the games industry, where graphics was
the explicit goal. I'm sure you're aware of that.
AMD bought ATI, and ATI cards are not innovating anymore. They realize
that since they can't compete with NVidia, it's better to just try to crap
all over NVidia's plans for worldwide domination.
In any case, CUDA may be a bad proprietary solution, but OpenCL is already
proving that there's some potential there for fast, homogeneous parallel
computing in hybrid CPU/GPU environments:
http://vimeo.com/8141489
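For the curious, here is a minimal (untested) sketch of what that looks
like from the host side in C, with error checking omitted; the point is
that the *same* kernel source gets built for whatever device is present,
GPU or CPU:

#include <stdio.h>
#include <CL/cl.h>

/* Illustrative kernel: scale a vector in place. */
static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    int i = (int)get_global_id(0);"
    "    v[i] = v[i] * k;"
    "}";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU device, fall back to a CPU device: same code either way. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kern, 1, sizeof(float), &factor);

    size_t global = 1024;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */
    return 0;
}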
nemesis <nam### [at] gmailcom> wrote:
> AMD bought ATI and ATI cards are not innovating anymore. They realize since
> they can't compete with NVidia, better to just try to crap all over their plans
> for worldwide domination.
I find your opinion a bit contradictory.
The idea of integrating what are currently two completely separate
microprocessors (separated by a relatively slow bus) into one single
microchip, in such a way that even existing programs which do not
explicitly take advantage of the current design could take advantage of
the new design, sounds rather innovative to me.

The problem with CUDA is that programs need to support it explicitly,
and it is, and always will be, limited. If the new AMD design allows
currently existing executables to be run on a CPU/GPU-like hybrid and
get a considerable speed boost from that, I call that innovative and
desirable. Even POV-Ray could someday benefit from that, without having
to do anything special about it.
--
- Warp
Warp <war### [at] tagpovrayorg> wrote:
> nemesis <nam### [at] gmailcom> wrote:
> > AMD bought ATI and ATI cards are not innovating anymore. They realize since
> > they can't compete with NVidia, better to just try to crap all over their plans
> > for worldwide domination.
>
> I find your opinion a bit contradictory.
>
> The idea of integrating what are currently two completely separate
> microprocessors (separated by a relatively slow bus) into one single
> microchip in such way that even existing programs which do not explicitly
> take advantage of the current design could take advantage of the new
> design, sound rather innovative to me.
It just sounds to me like a CPU maker, afraid of massively parallel
processing units growing out of its control, buying the second-largest
maker of them and trying to turn them into a mere FPU.
> The problem with CUDA is that programs need to support it explicitly,
> and it is, and always will be limited. If the new AMD design allows
> currently existing executables to be run on a CPU/GPU-like hybrid and
> get a considerable speed boost from that, I call that innovative and
> desirable. Even POV-Ray could someday benefit from that, without having
> to do anything special about it.
Funny you don't comment on OpenCL.
andrel wrote:
> On 29-12-2009 14:31, Warp wrote:
>> Patrick Elliott <sel### [at] npgcablecom> wrote:
>>> CPU = IPU (Integer processor, which does the stuff that most
>>> processors, prior to adding in-built FPUs did. I.e., execute code,
>>> but *not* do any math.). Multiple IPUs. FPU - <none>. GPU -
>>> Integrated into CPU, and replacing all functions of the FPU in the
>>> process.
>>
>>> They figure that this, even without changing die sizes, will garner
>>> an 80% increase in speed, since all FPU functions get dumped, not to
>>> a separate FPU for each core, but to how ever many of the 128+
>>> "pipes" in the GPU. Since Cuda is supposed to be a way to "program" a
>>> GPU... What happens if you don't need to do that any more, but can
>>> just drop your "code" into an IPU, and have it use the GPU pipes to
>>> do its math? Seems to me it makes "programming" the GPU at all kind
>>> of redundant. :p
>>
>> Why is it even called a GPU? How is it related to graphics? It
>> sounds more
>> like a parallelized math coprocessor, not unlike SSE.
>>
> Or like a GPU with some added clue, whichever way you want to look at it.
> BTW I don't seem to be able to find any references for this claim. Some
> indications that AMD was thinking in this way in 2007 or even 2006. So
> if Patrick (or anybody else) could come up with a relevant reference we
> might be able to judge what is going on and if it is really an innovation.
Hmm. Can't find the original I was reading; I didn't bookmark it. But
this one about covers it:
http://arstechnica.com/hardware/news/2009/11/amd-avoiding-larrabee-route-on-road-to-cpugpu-fusion.ars
--
void main () {
if version = "Vista" {
call slow_by_half();
call DRM_everything();
}
call functional_code();
}
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>
nemesis wrote:
> Warp <war### [at] tagpovrayorg> wrote:
>> nemesis <nam### [at] gmailcom> wrote:
>>> AMD bought ATI and ATI cards are not innovating anymore. They realize since
>>> they can't compete with NVidia, better to just try to crap all over their plans
>>> for worldwide domination.
>> I find your opinion a bit contradictory.
>>
>> The idea of integrating what are currently two completely separate
>> microprocessors (separated by a relatively slow bus) into one single
>> microchip in such way that even existing programs which do not explicitly
>> take advantage of the current design could take advantage of the new
>> design, sound rather innovative to me.
>
> It just sounds to me like a CPU-maker afraid of severely multiprocessing units
> growing out of their control buying the second largest of them and trying to
> make them into a mere FPU.
>
No, actually, what they are doing has the "disadvantage" that if you
need a GPU upgrade, it means a CPU upgrade, since they are the same chip.
*But* you lose the bottlenecks that come from *having* separate
components, which means you don't even have to improve the GPU itself
to get improved performance. And, yeah, that has some drawbacks as well,
undoubtedly. In any case, you hear people babbling about CUDA all the
time, so I figured, "OK, this sounds like it kind of kills the whole idea."
>> The problem with CUDA is that programs need to support it explicitly,
>> and it is, and always will be limited. If the new AMD design allows
>> currently existing executables to be run on a CPU/GPU-like hybrid and
>> get a considerable speed boost from that, I call that innovative and
>> desirable. Even POV-Ray could someday benefit from that, without having
>> to do anything special about it.
>
> Funny you don't comment on OpenCl.
>
Right, because having a single homogeneous processor core with the GPU
integrated would *completely* hose the idea of OpenCL, which depends
on "being able to handle code execution on *any* machine, without
regard to the platform." Guess what: anything that boosts performance of
the CPU/GPU combination by 80% is going to improve the speed of
*anything* that involves running graphical code, including OpenCL. So,
yeah, it's pretty irrelevant to the issue of whether a "graphics-specific"
language might get hosed by doing this.
--
void main () {
if version = "Vista" {
call slow_by_half();
call DRM_everything();
}
call functional_code();
}
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>
Patrick Elliott <sel### [at] npgcablecom> wrote:
> No, actually, what they are doing has the "disadvantage", that if you
> need a GPU upgrade, it means a CPU upgrade, since they are the same.
My guess is that if you need a GPU upgrade, what you do is buy a new
regular GPU card and install it, similarly to what you would do with a
PC that has an integrated GPU.
I suppose that if the architecture is designed cleverly enough, a program
could benefit from both the CPU/GPU hybrid (for CPU-bound tasks) *and* the
faster separate GPU card (for the actual rendering) at the same time.
--
- Warp