POV-Ray : Newsgroups : povray.pov4.discussion.general : GPU usage
  GPU usage (Message 1 to 10 of 26)  
From: Doom
Subject: GPU usage
Date: 31 Dec 2008 12:50:00
Message: <web.495bb04c6954e3ed37f671630@news.povray.org>
I was curious, with the OpenCL 1.0 spec, and also with ATI/AMD releasing
open-source code now for utilizing the GPU processing power, could it be
possible to utilize GPU acceleration within POV-Ray?  I know a lot of POV-Ray
is CPU intensive and utilizing the threads in the GPU would drastically speed
up render times.

Anyone's thoughts?



From: Patrick Elliott
Subject: Re: GPU usage
Date: 31 Dec 2008 14:26:01
Message: <495bc749$1@news.povray.org>
Doom wrote:
> I was curious, with the OpenCL 1.0 spec, and also with ATI/AMD releasing
> open-source code now for utilizing the GPU processing power, could it be
> possible to utilize GPU acceleration within POV-Ray?  I know a lot of POV-Ray
> is CPU intensive and utilizing the threads in the GPU would drastically speed
> up render times.
> 
> anyones thoughts?
> 

This comes up about once a month. The practical answer is: no, not 
really. GPUs are optimized for the kind of math used by scanline 
systems, not physics-based raytracing, and their floating-point units 
almost always use lower precision than the CPU of the same machine. So 
nearly everything POV-Ray does falls into the category of "the GPU 
can't do that, or if it did, it would produce results poorer than the 
FPU of the main processor."

Now, if you didn't mind getting "poorer" quality, and the *slight* speed 
increase you might get from it was worth it... maybe. But unless it's 
enough to give you real time, it's not worth the effort, since the only 
reason you might want to downgrade the results for better performance 
is as a game engine. And actually, I kind of wish you could do that. I 
am getting seriously tired of things like the layered-alpha BS in 
Second Life, which a raytracer would never have in the first place. lol

-- 
void main () {

     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
       call functional_code();
     }
     else {
       call crash_windows();
     }
}




From: clipka
Subject: Re: GPU usage
Date: 31 Dec 2008 18:50:00
Message: <web.495c0405dc9638ab30acaf600@news.povray.org>
"Doom" <mar### [at] gmailcom> wrote:
> I was curious, with the OpenCL 1.0 spec, and also with ATI/AMD releasing
> open-source code now for utilizing the GPU processing power, could it be
> possible to utilize GPU acceleration within POV-Ray?  I know a lot of POV-Ray
> is CPU intensive and utilizing the threads in the GPU would drastically speed
> up render times.

There has already been some mention of this in the FAQ.

The first question will be: Does OpenCL support double-precision floats? If not,
then forget it, because that's the precision POV needs. I wouldn't be surprised
if games could do fine with single-precision.

The next question would be: How to distribute the workload? Just running another
POV render thread in the GPU will hardly work.

I guess it wouldn't be worth the pain. With SSE2, the compiler will help you to
make some use of it. With OpenCL, the compiler probably cannot, so a lot of
manual changes to the code would be needed, and designing them for portability
(so that the program can still run on non-OpenCL systems) could be quite a
problem.



From: Doom
Subject: Re: GPU usage
Date: 31 Dec 2008 20:40:01
Message: <web.495c1e4cdc9638ab37f671630@news.povray.org>
"clipka" <nomail@nomail> wrote:
> There has already been some mention in the FAQ.
>
> The first question will be: Does OpenCL support double-precision floats? If not,
> then forget it, because that's the precision POV needs. I wouldn't be surprised
> if games could do fine with single-precision.
>
> The next question would be: How to distribute the workload? Just running another
> POV render thread in the GPU will hardly work.
>
> I guess it wouldn't be worth the pain. With SSE2, the compiler will help you to
> make some use of it. With OpenCL, the compiler probably cannot, so a lot of
> manual changes to the code would be needed, and designing them for portability
> (so that the program can still run on non-OpenCL systems) could be quite a
> problem.


Thank you for the input.  All of it makes sense.  From my knowledge, most
current GPUs do support double-precision floats, though nowhere near the
speed of single precision, as there isn't the need for it in graphics today.

I know some cards exist that are designed more for math/calculations than
for graphics, but I do not know much about those workstation cards or what
all they support.

Thanks again for the quick input!  It answered my curiosity of the day.

-Mark



From: Nicolas Alvarez
Subject: Re: GPU usage
Date: 1 Jan 2009 19:21:43
Message: <495d5e16@news.povray.org>
Patrick Elliott wrote:
> This comes up about once a month. The practical answer is, "No, not
> really, since GPUs are optimized for the type of math that is used by
> scanline systems, not physics based raytracing, and their floating point
> systems are almost always using lower bit depth than the CPU of the same
> machine, so nearly everything POVRay does would fall in the category of,
> "Nope, the GPU can't do that, or if it did, it would produce results
> that are poorer than the FPU of the main processor."

Modern GPUs are very programmable; they aren't tied to scanline rendering
anymore. Ever heard of protein folding on GPUs?

Raytracing *can* be done on GPUs.
See http://www.clockworkcoders.com/oglsl/rt/

However it would be a HUGE amount of work to modify POV-Ray to run on CUDA /
OpenCL. Lots of algorithms would need to be rewritten from scratch, and a
few may be impossible.



From: nemesis
Subject: Re: GPU usage
Date: 1 Jan 2009 21:10:00
Message: <web.495d76d8dc9638ab180057960@news.povray.org>
Here is a very interesting summary from one such "GPU raytracer" project:

http://www.ce.chalmers.se/edu/proj/raygpu/index.php?view=projects

In particular:
"So far we have not been able to acquire rendering times that compete with
established ray tracers like mental ray but then again this has never been the
objective of this thesis. Writing fast ray tracers with the help of graphics
hardware is something that we believe is definitely possible though. A good
idea is probably to use the graphics card only for some parts of the rendering
like for example intersection tests and shading and letting the CPU take care
of the traversal of data structures and such."

I thought running through a GPU would automatically give even a naive
raytracer an immense performance boost.  But then, neither mental ray nor
V-Ray seems much interested in running on the GPU either.



From: Chambers
Subject: Re: GPU usage
Date: 7 Jan 2009 03:50:49
Message: <5412C33CFC874C64AD0AFF80C11C2654@HomePC>
(Sorry it took me so long to respond, our internet connection was down
for a while).

While the OpenCL spec is very exciting for the GPGPU effort, it will not
help renderers such as POV-Ray.  Ultimately, GPUs still come down to a
SIMD design: specify one instruction (or one set of instructions), and
run it on multiple sets of data.

In practice, this means that anything with a high degree of branching
(i.e. POV-Ray) won't benefit nearly as much.

Of course, the new GPUs are more capable than ever, and they do offer
double-precision floating point, so it may at some point be possible for
limited aspects of the renderer.  However, bear in mind that even
something as inherently SIMD as video encoding, using a high-end GPU
from NVidia (the GeForce GTX 260), was actually *slower* than a stock
Core 2 Duo from Intel (1).  Something with as much branching as POV-Ray
would perform even worse.

So, there's new hope for the future, but it's still a *distant* future.

(1)
http://xtreview.com/addcomment-id-6872-view-CUDA-video-performance-using-TMPGEnc-4.0-Xpress.html

...Ben Chambers
www.pacificwebguy.com

The plural of anecdote is not data  --Elbows



From: Saul Luizaga
Subject: Re: GPU usage
Date: 3 Mar 2009 02:29:04
Message: <49acdc40$1@news.povray.org>
From what I have read about it, GPUs are great for geometry and some 
other graphics-related calculations, because they have more sophisticated 
hardwired functions (trigonometric ones, say) that the FPU simply doesn't 
have. This leads me to ask the experienced programmers here: how huge an 
effort would it be to make a rough "previewer" for POV-Ray, real-time if 
possible? Correct me if I'm wrong, but at some point that effort should 
get easier, because the GPU has scene elements analogous to POV-Ray's 
(lighting, primitives, colors, phong, diffuse, etc.). Why is it so hard 
to use that resemblance in features to POV-Ray's favor? I know: it's only 
32-bit precision, it's not 100% compatible, and POV-Ray is hyper-complex 
and rich in features. But somehow, can't the GPU, along with the FPU and 
CPU, cut a deal that gives you an estimate of what you'll get if the 
scene is rendered?

I really wish I could answer this myself... Maybe a combo of GPU 
features could approximate some of the most complex POV-Ray features? 
Maybe a rough, rough preview: something that can "guess" (fast 
calculations with GPU+FPU+CPU) just an estimate of the final colors and 
effects on the objects. Mainly this could be used to preview the modeled 
geometry of solids and surfaces and the lighting in the scene, rather 
than an accurately colored scene. Don't you think?

Another suggestion, what about this:
a small floating window (maybe 512x384 or 384x288) with the title "rough 
preview" that shows the GPU+FPU+CPU preview of the scene, as close to 
real-time as possible, "while you type" or when you have finished a 
valid line. This could be configurable, but I think it would be great as 
a default.

Is this idea so incompatible with reality? I hope not; you tell me :)

Regards.



From: Saul Luizaga
Subject: Re: GPU usage
Date: 3 Mar 2009 02:44:19
Message: <49acdfd3@news.povray.org>
In the spirit of co-processing (GPU+FPU+CPU), I would like to suggest 
that, if it isn't too much trouble, it would be great if POV-Ray could 
have a GPU+CPU identifier (like GPU-Z or CPU-Z) to detect the available 
processing hardware and use it to render, with the GPU *of course* 
helping only where it can keep up with POV-Ray's quality specs and/or 
the "rough preview". Makes sense?



From: clipka
Subject: Re: GPU usage
Date: 3 Mar 2009 07:25:00
Message: <web.49ad2031dc9638abbdc576310@news.povray.org>
Saul Luizaga <sau### [at] netscapenet> wrote:
(<GPU preview>)

Interesting idea. Would be a lot of work to implement though, I guess.

> Another suggestion, what about this:
> a small floating window (maybe 512*384 or 384*288) with the title "rough
> preview" that makes the GPU+FPU+CPU preview of the scene as ral-time as
> possible "while you type" or when you have finished a valid line, this
> could be configurable, but I think would be great as a default.

Great idea, but what if your scene file takes 15 minutes to (re-)parse because
of heavy macro usage? And what if you are working on SDL code designed to write
to files (radiosity samples, photons, or just plain #write output)? I wouldn't
want POV to just start running *such* SDL code while it isn't "ready to rock"
yet.

(Or what if all you want to change is a few #defines that govern whether
your SDL file *does* #write or instead #reads some stuff, in order to
switch from #write to #read mode... I don't even want to think about it. Argh!)

No, I think the idea is intriguing, but prohibitive due to the power of POV's
SDL language.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.