  Re: Suggestion: OpenCL  
From: clipka
Date: 14 Aug 2009 01:57:29
Message: <4a84fcc9$1@news.povray.org>
Saul Luizaga wrote:
> clipka wrote:
>> (*groans*)
> 
> Way to go to start a discussion...

Sure, but you're really not the first one to suggest this, and not even the 
first one recently.

> I think you are wrong: "OpenCL (Open Computing Language) greatly 
> improves speed and responsiveness for a wide spectrum of applications in 
> numerous market categories from gaming and entertainment to scientific 
> and medical software."
> 
> From here:
> http://www.khronos.org/news/press/releases/the_khronos_group_releases_opencl_1.0_specification/


That's a nice statement. Where does it come from?

Ah, a press release from the group that designed OpenCL. What might their 
main goal be with such a paper? Surely they're not primarily trying to draw 
attention to the thing? Right, of course they wouldn't want to hype it.

Also note that...

- "a wide spectrum of applications" is a very vague statement, and may 
exclude some.

- The categories mentioned all have one thing in common: Massive number 
crunching with little decision-making.

POV-Ray does number crunching too, in a sense, but there's a lot of 
decision-making involved.

> Have you appreciated first hand that overhead making it inviable for 
> POV-Ray?

How could I? Do you have an OpenCL implementation available for me so 
that I could test it?

But I have read about some limitations of GPU processing in general, and 
with regard to raytracing in particular, and I believe I have enough 
understanding of computer architecture to be able to say that data 
exchange between CPU and GPU carries a tad more overhead than 
communication between separate threads running on the same CPU.

> "Tony Tamasi, senior vice president of technical marketing at NVIDIA 


> powerful way to harness the enormous processing capabilities of our 
> CUDA-based GPUs on multiple platforms." From the same link.

Another marketing blurb. Of /course/ the vice president of a big player 
in the GPU market is advertising it as the greatest invention since 
sliced bread: It will sell more of their chips.

> Some GPGPU provide 64-bit Floating Point computing which is, I think, the 
> major concern about raytracing.

It used to be one of the major ones, and particularly easy to explain, 
though it's a limitation that is gradually disappearing. I named some 
others in my previous post.

> Granted, this new C standard (C99) is not fully supported in any C++ 
> implementations; Intel C++ supports it for the most part but not fully. 
> But I think a port to C++ probably is in the making since C++ is by
> far more popular than C99 IMHO, so I think, since it has been released 
> about 8 months ago, maybe there is a C++ ported OpenCL spec or maybe 
> more by now. Many  computing intensive apps. would want this for 
> themselves.

I doubt that C++ support will come anytime soon, given that OpenCL is 
even more limited than C99: No function pointers, for instance. How could 
you possibly implement polymorphic objects if you don't even have 
function pointers at your disposal?
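
To illustrate what that costs you: C++ implements virtual functions via 
per-class tables of function pointers, so without them the only portable 
fallback is an explicit type tag plus a switch. A rough sketch in plain C 
(not actual OpenCL C; the shape types and hit routines are made-up 
placeholders, not anything from POV-Ray):

#include <stdio.h>

/* Explicit type tag -- this is what replaces the vtable. */
enum shape_kind { SHAPE_SPHERE, SHAPE_PLANE };

struct shape {
    enum shape_kind kind;
    float params[4];        /* shape-specific data */
};

/* Made-up stand-ins for per-shape intersection tests. */
static float sphere_hit(const struct shape *s) { return s->params[3]; }
static float plane_hit (const struct shape *s) { return s->params[0]; }

/* Dispatch by switching on the tag. Every new shape type means editing
   this function -- the open-ended extensibility of virtual functions
   is gone. */
static float intersect(const struct shape *s)
{
    switch (s->kind) {
    case SHAPE_SPHERE: return sphere_hit(s);
    case SHAPE_PLANE:  return plane_hit(s);
    }
    return -1.0f;
}

int main(void)
{
    struct shape s = { SHAPE_SPHERE, { 0.0f, 0.0f, 0.0f, 1.0f } };
    printf("hit distance: %f\n", intersect(&s));
    return 0;
}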

If a standard imposes limitations which are more rigorous than what 
you'll find on most brain-dead embedded microcontrollers, then there's a 
hardware reason for it.

> OK, maybe is not as suitable for raytracing as it is for protein folding 
> research, maybe the explanation why not is in the discussion about CUDA 
> is where the answer is, but maybe is worth it because it has 64-bit 
> Floating Point computing, which IIRC is the one and only big obstacle to 
> avoid GPU-aided raytracing.

It used to be the Big One, the limitation mentioned first whenever the 
discussion popped up again, and possibly the only thing the POV-Ray 
developers really cared about, historically: Without support for 
double-precision floating point, there was no point in having any closer 
look at GPUs. Fortunately for scientific simulations (like that protein 
folding thing), the precision issue is improving now (probably /because/ 
the GPU developers want to go for that scientific sim market share). 
However, other limitations still apply, which are no issue for such use 
cases, but a problem for POV-Ray.

> Or what I'm missing? don't want any details, only the highlights if you 
> care to answer.

No support for recursion is one I named already.

Another one is that GPUs are highly optimized for massively parallel 
computations where exactly the same program with exactly the same 
control flow is run on a vast number of data sets (which is why they 
/can/ be so fast on this type of problem in the first place), but they 
can /only/ run programs of this type; so if program flow must be 
expected to change from one data set to the next, each data set must be 
run on its own, along with (for instance) 31 "empty" data sets: You lose
97% of your processing power. That does not leave much.
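
The arithmetic behind that figure, assuming a SIMD group width of 32 
lanes (a typical number for current GPUs): each distinct control-flow 
path within a group costs a full pass over the group, with all other 
lanes masked off. A small sketch in plain C:

#include <stdio.h>

int main(void)
{
    const int lanes = 32;   /* assumed SIMD/warp width */

    /* Throughput drops to 1/paths of peak, because each divergent
       path is executed as its own pass over the whole group. */
    for (int paths = 1; paths <= lanes; paths *= 2)
        printf("%2d distinct paths -> %5.1f%% of peak throughput\n",
               paths, 100.0 / paths);

    return 0;
}

With all 32 lanes taking their own path you're down to roughly 3% of 
peak, i.e. the 97% loss mentioned above.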

Massive parallelization /could/ be used for the primary rays in a scene. 
However, those are not the problem anyway: You only have a few million 
of those, and sophisticated bounding and caching typically keep the 
workload per ray low. It's usually the secondary rays (testing for 
shadows, following reflected and refracted rays, and some such) that eat 
most of the time.
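
To make that concrete, here's a heavily stripped-down sketch (plain C, 
everything in it hypothetical, not POV-Ray code) of the shape of a 
raytracer's inner loop: how many secondary rays get spawned, and how 
deep the recursion goes, depends on what each individual ray happens to 
hit -- exactly the data-dependent control flow and recursion this kind 
of hardware can't digest:

#include <stdio.h>

/* Hypothetical surface description; in a real scene these values are
   decided per intersection, which is the whole problem. */
struct surface {
    int    casts_shadows;
    double reflectivity;
};

/* Count the rays spawned by shading one hit point. Both the branch
   count and the recursion depth depend on the data, so two
   neighbouring pixels can follow completely different control flow. */
static int trace(const struct surface *s, int depth, int max_depth)
{
    int rays = 1;                       /* the ray that got us here */
    if (depth >= max_depth)
        return rays;

    if (s->casts_shadows)
        rays += 1;                      /* shadow test ray */

    if (s->reflectivity > 0.0)          /* reflected ray -> recursion */
        rays += trace(s, depth + 1, max_depth);

    return rays;
}

int main(void)
{
    struct surface matte  = { 1, 0.0 };
    struct surface mirror = { 1, 0.8 };
    printf("matte pixel:  %d rays\n", trace(&matte,  0, 5));
    printf("mirror pixel: %d rays\n", trace(&mirror, 0, 5));
    return 0;
}

Two pixels side by side, 2 rays versus 11 rays, with different recursion 
depths on top of it: that's the kind of workload the "same program, same 
control flow" model chokes on.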


> thanks, I think I'll try povray.programming.

I had actually and honestly hoped to discourage you with my initial 
groaning. I guess the POV-Ray dev team is better informed about GPU 
computing than you expect.

