POV-Ray : Newsgroups : povray.programming : CUDA - NVIDIA's massively parallel programming architecture
From: Chambers
Subject: Re: CUDA - NVIDIA's massively parallel programming architecture
Date: 20 May 2007 13:21:22
Message: <46508392$1@news.povray.org>
Warp wrote:
> Chambers <ben### [at] pacificwebguycom> wrote:
>> Perhaps if 
>> Intel surprises everyone, and releases their next graphics chip as a 
>> double precision FP monster, we'd be able to take advantage of that, 
> 
>   Exactly how would it be different from current FPUs?
> 

Because it would be on a graphics chip?

There has been some serious speculation that Intel will simply release a 
chip with something like 256 double-precision FPUs on it as Unified 
Shaders, and rely on shader programs to provide *all* graphics 
functionality (that is, they wouldn't hardcode *any* graphics stuff at all).

While it would be slower than an equivalent unit from nVidia or DAAMIT, 
it would also be much more flexible, and utilizing it for non-graphics 
work (like HPC) would be extremely easy compared to today's cards.

-- 
...Ben Chambers
www.pacificwebguy.com



From: Chambers
Subject: Re: CUDA - NVIDIA's massively parallel programming architecture
Date: 20 May 2007 13:24:05
Message: <46508435@news.povray.org>
Warp wrote:
>   Compilers don't need to simulate anything: Intel FPUs have supported 64-bit
> floating point numbers since probably the 8087. That has nothing to do with
> the register size of the CPU, as the FPU is quite independent of it.

I think it was actually the 287, but I could very easily be wrong about 
that :)

-- 
...Ben Chambers
www.pacificwebguy.com



From: Chambers
Subject: Re: CUDA - NVIDIA's massively parallel programming architecture
Date: 20 May 2007 13:29:53
Message: <46508591@news.povray.org>
Warp wrote:
>   I'm also wondering about what advantages there could be compared to
> current FPUs.

The main difference, Warp, is the sheer number of execution units.  On a 
CPU, you're looking at what, 3-5 FPUs *at most* per core, meaning a 
quad-core chip would have 12-20.

On a GPU, we're now seeing >100 execution units.

>   Besides, there will probably be data transfer overhead. Games can simply
> upload their vertex and pixel shader code into the graphics card and then
> let the graphics card do what they do. Games don't need the results back.

One of the main reasons the switch was made from AGP to PCIe is that 
PCIe is bidirectional, allowing efficient communication in both 
directions.  Although previous generations of cards don't take advantage 
of this, the 8800 series does (a little bit), and future cards are 
likely to as well.

I looked into CUDA to see if it was something I could take advantage 
of for personal projects, and unfortunately it isn't.  There are too 
many restrictions on what data may be accessed, what data has to be 
shared, et cetera, for it to be useful for something as complex as 
POV-Ray at this time.  Perhaps with future revisions (based on future 
cards) it will be more flexible, and thus more usable, but for now it 
won't help.

I tried to find information on DAAMIT's cards and programming them, but 
I couldn't find anything public on the Web.

-- 
...Ben Chambers
www.pacificwebguy.com



From: Warp
Subject: Re: CUDA - NVIDIA's massively parallel programming architecture
Date: 20 May 2007 13:59:31
Message: <46508c82@news.povray.org>
Chambers <ben### [at] pacificwebguycom> wrote:
> One of the main reasons the switch was made from AGP to PCIe is that 
> PCIe is bidirectional, allowing efficient communication in both 
> directions.

  Compare the data transfer speed between the CPU and the FPU with the
data transfer speed between the CPU and the GPU. Now consider the sheer
amount of data which has to be transferred for a raytracer.

  Unless you can implement the raytracer as a shader, I don't think there
can be any advantage.

-- 
                                                          - Warp



From: Chambers
Subject: Re: CUDA - NVIDIA's massively parallel programming architecture
Date: 20 May 2007 16:34:03
Message: <4650b0bb$1@news.povray.org>
Warp wrote:
> Chambers <ben### [at] pacificwebguycom> wrote:
>> One of the main reasons the switch was made from AGP to PCIe is that 
>> PCIe is bidirectional, allowing efficient communication in both 
>> directions.
> 
>   Compare the data transfer speed between the CPU and the FPU with the
> data transfer speed between the CPU and the GPU. Now consider the sheer
> amount of data which has to be transferred for a raytracer.
> 
>   Unless you can implement the raytracer as a shader, I don't think there
> can be any advantage.
> 

Well, the whole point of CUDA and other such ventures is that you 
offload a program (not directly comparable to a shader, in this 
instance) or parts of one to the GPU, and it only returns the result.

In other words, POV-Ray wouldn't say to the GPU "Trace this ray, and now 
get this ray, and now do this texture..."  Rather, it would say "Here's 
this scene, here's the camera, now return the finished picture."

Unfortunately, the GPUs just aren't flexible enough yet.  However, 
within one or two more generations, they probably will be...

-- 
...Ben Chambers
www.pacificwebguy.com




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.