clipka wrote:
> Saul Luizaga schrieb:
>> clipka wrote:
>>> (*groans*)
>>
>> Way to go to start a discussion...
>
> Sure, but you're really not the first one, and haven't been so recently.
>
>> I think you are wrong: "OpenCL (Open Computing Language) greatly
>> improves speed and responsiveness for a wide spectrum of applications
>> in numerous market categories from gaming and entertainment to
>> scientific and medical software."
>>
>> From here:
>>
>> http://www.khronos.org/news/press/releases/the_khronos_group_releases_opencl_1.0_specification/
>
>
> That's a nice statement. Where does it originate from?
>
> Ah, a paper to the press from the group that designed OpenCL. What could
> their major goal with such a paper be? They couldn't possibly be trying
> primarily to get attention for the thing? Right, sure, they wouldn't want
> to hype it.
>
> Also note that...
>
> - "a wide spectrum of applications" is a very vague statement, and may
> exclude some.
>
> - The categories mentioned all have one thing in common: Massive number
> crunching with little decision-making.
>
> POV-Ray does number crunching too, in a sense, but there's a lot of
> decision-making involved.
>
>> Have you experienced first hand the overhead that makes it inviable for
>> POV-Ray?
>
> How could I? Do you have an OpenCL implementation available for me so
> that I could test it?
>
> But I have read about some limitations of GPU processing in general and
> with regard to raytracing in particular, and imagine to have enough
> understanding of computer architecture to be able to say that data
> exchange between CPU and GPU requires a tad more overhead than
> communication between separate threads running on the same CPU.
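
To make that round trip concrete, here is a minimal host-side sketch (an
illustration of the point above, not a benchmark): it assumes an OpenCL
runtime and at least one device are installed, the buffer size is an
arbitrary placeholder, error handling is omitted, and no kernel is even
run; it only copies a buffer to the device and back, a cost that threads
sharing one CPU's address space never pay.

/* Sketch: time a host->device->host copy with the OpenCL 1.x host API.
 * Assumes <CL/cl.h> and a working OpenCL runtime; minimal error checks. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n = 1 << 20;                  /* 1 Mi floats, ~4 MB */
    float *host = malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i)
        host[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                n * sizeof(float), NULL, &err);

    clock_t t0 = clock();                      /* rough timer only */
    /* Host -> device, blocking. */
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), host,
                         0, NULL, NULL);
    /* Device -> host, blocking.  No kernel in between: the point is that
     * the bare transfer already costs real time. */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), host,
                        0, NULL, NULL);
    clFinish(q);
    clock_t t1 = clock();

    printf("round trip: %.3f ms\n",
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(host);
    return 0;
}

(Something like "cc roundtrip.c -lOpenCL" should build it on a machine
with the OpenCL headers installed.)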
>
>> "Tony Tamasi, senior vice president of technical marketing at NVIDIA
>> powerful way to harness the enormous processing capabilities of our
>> CUDA-based GPUs on multiple platforms." From the same link.
>
> Another marketing blurb. Of /course/ the vice president of a big player
> in the GPU market is advertising it as the greatest invention since
> sliced bread: It will sell more of their chips.
>
>> Some GPGPUs provide 64-bit floating-point computing, which is, I think,
>> the major concern about raytracing.
>
> It used to be one of the major ones, and particularly easy to explain,
> though it's a limitation that is gradually disappearing. I named some
> others in my previous post.
>
>> Granted, this new C standard (C99) is not fully supported by any C++
>> implementation; Intel C++ supports it for the most part, but not fully.
>> Still, I think a C++ port is probably in the making, since C++ is by far
>> more popular than C99 IMHO. Given that the spec was released about 8
>> months ago, maybe there is a C++-ported OpenCL spec, or more, by now.
>> Many computation-intensive apps would want this for themselves.
>
> I doubt that C++ support will come anytime soon, given that OpenCL is
> even more limited than C99: no function pointers, for instance. How could
> you possibly implement polymorphic objects if you don't even have
> function pointers at your disposal?
>
> If a standard imposes limitations which are more rigorous than what
> you'll find on most brain-dead embedded microcontrollers, then there's a
> hardware reason for it.
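
To illustrate what that means in practice, here is a toy sketch (plain C
with made-up shape names and placeholder math, not OpenCL C and not
POV-Ray code): without function pointers there are no vtables, so every
"virtual call" has to become a type tag plus a switch at the call site,
and a kernel would be forced into the same shape.

#include <stdio.h>

enum shape_kind { SHAPE_SPHERE, SHAPE_PLANE };

struct shape {
    enum shape_kind kind;     /* explicit type tag instead of a vtable */
    double a, b, c, d;        /* parameters; meaning depends on kind   */
};

/* Every "virtual call" becomes a switch over the tag. */
double intersect(const struct shape *s, double ray_t)
{
    switch (s->kind) {
    case SHAPE_SPHERE: return ray_t * s->a + s->b;  /* placeholder math */
    case SHAPE_PLANE:  return ray_t * s->c + s->d;  /* placeholder math */
    }
    return -1.0;
}

int main(void)
{
    struct shape scene[2] = {
        { SHAPE_SPHERE, 1.0, 2.0, 0.0, 0.0 },
        { SHAPE_PLANE,  0.0, 0.0, 3.0, 4.0 },
    };
    for (int i = 0; i < 2; ++i)
        printf("shape %d -> %f\n", i, intersect(&scene[i], 0.5));
    return 0;
}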
>
>> OK, maybe it's not as suitable for raytracing as it is for protein
>> folding research, and maybe the explanation why not is in the discussion
>> about CUDA, but maybe it's worth it because it has 64-bit floating-point
>> computing, which IIRC is the one and only big obstacle to GPU-aided
>> raytracing.
>
> It used to be the Big One that used to be mentioned first whenever the
> discussion popped up again, and possibly the only thing the POV-Ray
> developers really cared about, historically: Without support for
> double-precision floating point, there was no point in having any closer
> look at GPUs. Fortunately for scientific simulations (like that protein
> folding thing), the precision issue is improving now (probably /because/
> the GPU developers want to go for that scientific sim market share).
> However, other limitations still apply, which are no issue for such use
> cases, but a problem for POV-Ray.
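
For a concrete taste of why single precision used to be the showstopper,
here is a small comparison with numbers chosen purely for effect (not
taken from POV-Ray): a unit sphere 10000 units down the x axis, hit by a
ray from the origin. The true near intersection is at t = 9999; in single
precision the discriminant collapses to zero and the "hit" lands at
t = 10000, a full unit off.

/* Assumes ordinary IEEE single/double evaluation (e.g. an SSE build);
 * extended-precision intermediates could mask the effect. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* float version: sphere centre x = 10000, radius 1, ray along +x */
    float cx = 10000.0f, r = 1.0f;
    float ocx = 0.0f - cx;                 /* ray origin minus centre  */
    float fb = 2.0f * ocx;                 /* b = 2 * dot(oc, dir)     */
    float fc = ocx * ocx - r * r;          /* c = dot(oc, oc) - r^2    */
    float fdisc = fb * fb - 4.0f * fc;     /* a = 1                    */
    float ft = (-fb - sqrtf(fdisc)) / 2.0f;

    /* double version, same geometry */
    double dcx = 10000.0, dr = 1.0;
    double docx = 0.0 - dcx;
    double db = 2.0 * docx;
    double dc = docx * docx - dr * dr;
    double ddisc = db * db - 4.0 * dc;
    double dt = (-db - sqrt(ddisc)) / 2.0;

    printf("float : disc = %g, t = %f\n", fdisc, ft);  /* 0, 10000 */
    printf("double: disc = %g, t = %f\n", ddisc, dt);  /* 4,  9999 */
    return 0;
}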
>
>> Or what am I missing? I don't want any details, only the highlights, if
>> you care to answer.
>
> No support for recursion is one I named already.
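
To make that one concrete: the natural way to write a trace function is
recursive, and on a target without recursion it has to be rewritten as a
loop over an explicit, fixed-size stack. A toy sketch, not POV-Ray code;
shade() and next_reflected_ray() are placeholders standing in for real
intersection and shading work.

#include <stdio.h>

#define MAX_DEPTH 5

struct ray { double weight; };                       /* placeholder */

static double shade(struct ray r)                    /* placeholder */
{ return 0.1 * r.weight; }

static struct ray next_reflected_ray(struct ray r)   /* placeholder */
{ struct ray n = { r.weight * 0.5 }; return n; }

/* Natural recursive version: fine on a CPU, impossible where recursion
 * is forbidden. */
static double trace_recursive(struct ray r, int depth)
{
    double c = shade(r);
    if (depth < MAX_DEPTH)
        c += trace_recursive(next_reflected_ray(r), depth + 1);
    return c;
}

/* Same computation with an explicit stack of pending rays: the usual
 * workaround when there is no call stack to lean on. */
static double trace_iterative(struct ray start)
{
    struct { struct ray r; int depth; } stack[MAX_DEPTH + 1];
    int top = 0;
    double c = 0.0;

    stack[0].r = start;
    stack[0].depth = 0;
    while (top >= 0) {
        struct ray r = stack[top].r;
        int depth = stack[top].depth;
        --top;
        c += shade(r);
        if (depth < MAX_DEPTH) {             /* push the reflected ray */
            ++top;
            stack[top].r = next_reflected_ray(r);
            stack[top].depth = depth + 1;
        }
    }
    return c;
}

int main(void)
{
    struct ray r = { 1.0 };
    printf("recursive: %f\n", trace_recursive(r, 0));  /* both print */
    printf("iterative: %f\n", trace_iterative(r));     /* 0.196875   */
    return 0;
}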
>
> Another one is that GPUs are highly optimized for massively parallel
> computations where exactly the same program with exactly the same
> control flow is run on a vast number of data sets (which is why they
> /can/ be so fast on this type of problem in the first place), but they
> can /only/ run programs of this type; so if program flow must be
> expected to change from one data set to the next, each data set must be
> run on its own, along with (for instance) 31 "empty" data sets: You lose
> 97% of your processing power. That does not leave much.
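
The 97% figure falls straight out of a back-of-the-envelope model,
assuming the 32-wide lockstep grouping mentioned above: if every lane in
a group wants its own code path, the group has to serialize 32 passes
with only one useful lane per pass.

#include <stdio.h>

#define LANES 32

int main(void)
{
    int distinct_paths = LANES;    /* worst case: every data set diverges */

    /* A lockstep group runs each distinct path once, with only the lanes
     * on that path active; all the other lanes are "empty" slots. */
    long slots_spent  = (long)distinct_paths * LANES;
    long slots_useful = LANES;           /* each lane useful exactly once */

    double utilization = 100.0 * slots_useful / slots_spent;
    printf("utilization with %d divergent paths: %.1f%% "
           "(about %.0f%% of the throughput is lost)\n",
           distinct_paths, utilization, 100.0 - utilization);
    return 0;
}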
>
> Massive parallelization /could/ be used for the primary rays in a scene.
> However, those are not the problem anyway: You only have a few million
> of those, and sophisticated bounding and caching typically keep the
> workload per ray low. It's usually the secondary rays (testing for
> shadows, following reflected and refracted rays, and some such) that eat
> most of the time.
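
A rough ray-count sketch of that last point, with made-up scene
parameters (800x600 primary rays, 3 light sources, and a worst case where
every hit spawns shadow, reflected and refracted rays up to depth 5); the
exact numbers do not matter, only that the secondary rays outnumber the
primary ones by a couple of orders of magnitude.

#include <stdio.h>

int main(void)
{
    const long width = 800, height = 600;     /* assumed image size      */
    const int lights = 3;                     /* shadow rays per hit     */
    const int max_depth = 5;                  /* reflection/refraction   */

    long primary = width * height;

    /* Worst case: every ray hits a surface that is both reflective and
     * refractive, so each level spawns `lights` shadow rays plus two
     * follow-up rays per ray in the current wavefront. */
    double secondary = 0.0, wavefront = (double)primary;
    for (int d = 0; d < max_depth; ++d) {
        secondary += wavefront * lights;      /* shadow tests this level */
        wavefront *= 2.0;                     /* reflected + refracted   */
        secondary += wavefront;               /* the new rays themselves */
    }

    printf("primary rays  : %ld\n", primary);
    printf("secondary rays: %.0f (about %.0f times as many)\n",
           secondary, secondary / primary);
    return 0;
}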
>
So you assume that it's just a huge amount of hype and that, even if it
works for other apps, it won't work for POV-Ray. I think there is more to
analyze than just simple generalizations. Anyway, it's a POV/TAG-Team
decision; we can merely speculate.
>> thanks, I think I'll try povray.programming.
>
> I had actually and honestly hoped to discourage you with my initial
> groaning. I guess the POV-Ray dev team is better informed about GPU
> computing than you expect.
And you know what I expect the POV/TAG-Team to know... you assume too
much...
Groaning is an emotional response and, as such, irrational: I haven't
been reading this NG for a long time, and the first line of text I read
was your groaning. Besides the rudeness, it explains nothing and
clarifies just as little. I don't like emotional responses to
intellectual matters; I find them quite out of place, getting in the way
of logical/rational thinking and making a mess of things, which is reason
enough for me to disregard them on sight. Discouraging/intimidating me is
not that easy.