POV-Ray : Newsgroups : povray.pov4.discussion.general : GPU Rendering
GPU Rendering (Message 1 to 10 of 41)
From: Allen
Subject: GPU Rendering
Date: 19 Oct 2007 19:40:00
Message: <web.47193f22c64c964747101aad0@news.povray.org>
Would it be possible to use the GPU as well as the CPU to render?  This
could speed up render times so much and would allow a somewhat crappy
machine to generate some pretty spiffy results.



From: Charles C
Subject: Re: GPU Rendering
Date: 19 Oct 2007 20:43:34
Message: <47194f36@news.povray.org>
Allen wrote:
> Would it be possible to use the GPU as well as the CPU to render?  This
> could speed up render times so much and would allow a somewhat crappy
> machine to generate some pretty spiffy results.
> 

I think this is one of those recurring questions, and generally the 
answers I've seen come down to "no, of course not."  I'm no expert - I 
know less about this than many who've answered the question before - 
but I'm not entirely confident that "no, of course not" will always be 
the answer.

Anyway....

While the 'G' in GPUs is moving more towards "General" as opposed to 
"Graphics," it'd be hard, for one thing, because their floating-point 
calculations are generally single precision and in POV-Ray everything's 
double.   I'll add that even with double precision, people trying to 
model things like spacecraft near planets to scale run into practical 
limitations:  POV-Ray won't necessarily render everything - parts of an 
object may vanish.

Another matter is how large a chunk of functionality should be loaded 
into the card at once (after all, you'd be using the GPU as a CPU or some 
sort of co-processor, not for its built-in shading).  Another issue 
would be platform independence when you're talking about graphics 
cards.... That could be interesting.

On a 'POV4' note, I wonder how hard it'd be to make it simple* to choose 
which math library to use at compile time - i.e. not having doubles as 
the only option.  You could compile using single precision, or use one 
of those variable-precision (slow) libraries, or, if somebody wrote 
something that sends floating-point calculations to the graphics card** 
(whether or not that turned out to be effective), they could do so.  
Somebody using 'insane' scale differences might be willing to accept the 
slowness of an added-precision library, and somebody wanting to 
experiment with GPGPU could do so too.
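
As a rough illustration - just a sketch, and the macro and type names 
below are invented for this post rather than taken from the POV-Ray 
source - the compile-time choice could boil down to something like:

    // Hypothetical compile-time selection of the core scalar type.
    // None of these macro names exist in POV-Ray; they're made up here.
    #if defined(POV_USE_SINGLE_PRECISION)
        typedef float SCALAR;        // fast, but limited range/precision
    #elif defined(POV_USE_LONG_DOUBLE)
        typedef long double SCALAR;  // extra precision where the platform offers it
    #else
        typedef double SCALAR;       // the usual default
    #endif

    // All of the core math would then be written against SCALAR:
    struct Vector3 { SCALAR x, y, z; };

    inline SCALAR dot(const Vector3 &a, const Vector3 &b)
    {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

If I remember correctly POV-Ray already funnels most of its math through 
a DBL typedef, so part of the plumbing is there; a GPU-backed or 
multi-precision type would "only" have to provide the same operators.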

I think any use of alternate math libraries would mean non-standard 
compilations of POV-Ray, so the resulting images wouldn't necessarily 
be the same as ones rendered with official POV-Ray.  But who cares? 
They'd know it's non-standard.  Am I right in saying that being able to 
switch to a non-standard or platform-dependent library easily wouldn't 
break portability?

Charles

*It may already be simple for all I know.  I haven't looked into it. I'm 
just talking.

**Maybe that person would choose to promote returned single precision 
floats to doubles at reduced quality.



From: Tim Attwood
Subject: Re: GPU Rendering
Date: 19 Oct 2007 23:50:06
Message: <47197aee$1@news.povray.org>
The problem with using GPUs in this manner is that
the protocols for doing so are proprietary and
often unpublished by the card manufacturers, and
they differ in implementation, feature set, etc.

If the manufacturers start providing drivers for
doing so, then it might catch on, but IMO GPUs
aren't going to remain competitive with newer
multi-core processors anyway.

It might be possible to end up with a real-time
GUI preview via graphics-card pipelines for POV
though, if tessellations of POV objects are eventually
built in.



From: Warp
Subject: Re: GPU Rendering
Date: 20 Oct 2007 07:31:55
Message: <4719e72b@news.povray.org>
Allen <nomail@nomail> wrote:
> Would it be possible to use the GPU as well as the CPU to render?

  The general answer is: No.

  There are currently two (non-exclusive) ways to use a GPU to speed up
rendering (or other calculations), neither of which is very useful in
raytracing (for reasons given below):

  A GPU can, quite obviously, be used for scanline-rendering triangles.
This is almost like raycasting, but not quite.
  Basically what the GPU does is to render one entire triangle at a time.
The Z-buffer is used for hidden surface removal. Unless the triangles are
rendered in a front-to-back order, there's quite a lot of wasted rendering.
(I'm not sure, but it's possible that some games try to make the GPU render in
a front-to-back order, eg. by using a BSP tree, in order to make the
rendering faster, as it minimizes the amount of wasted rendered pixels.)
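  For reference, the per-pixel core of that hidden-surface removal is
roughly the following (a schematic sketch in C++, not how any actual GPU
is implemented):

    // One candidate pixel ("fragment") of a triangle being rasterized.
    struct Fragment { int x, y; float depth; /* plus interpolated attributes */ };

    void shade_if_visible(const Fragment &f, float *zbuffer, int width)
    {
        float &stored = zbuffer[f.y * width + f.x];
        if (f.depth < stored)    // closer than anything drawn at this pixel so far?
        {
            stored = f.depth;    // remember the new nearest depth
            // ...run the (possibly expensive) shading for this pixel...
        }
    }

If triangles arrive roughly back-to-front, the same pixel keeps passing the
test and gets shaded over and over; front-to-back order makes later, hidden
fragments fail the test early, which is exactly the wasted work mentioned
above.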

  The triangle-rendering capabilities of a GPU can sometimes be used for
things other than pure 3D graphics too (such as fast 2D blending, rotating,
and other similar effects), some of them quite ingenious (I think some have
used the GPU triangle rendering for sound mixing...)
  One very typical usage of the triangle-rendering capability is to create
radiosity lightmaps.

  Another, more modern usage is to make use of the mathematical
co-processors in current GPUs to perform other calculations. Shaders,
especially, can be used for much more than simple surface texturing. Some
have used them for quite a varied set of things, many not related to
graphics at all.

  Why doesn't either of these help in raytracing? Couldn't the triangle
rendering be used for the first recursion level of raytracing (iow. for
the raycasting step)? Couldn't shaders be used to raytrace?

  The answer is no, and the reason is that the data transmission overhead
between the GPU and the CPU is far too large. A raytracer needs info
about objects, intersection points, etc. on a per-ray basis. It shoots
a ray, and it needs all the info that ray provided right away, in order
to decide what to do next (eg. for shooting a new reflected or
refracted ray from the intersection point). This means that for each ray
the raytracer would need to read data back from the GPU, and that would be
really slow. It wouldn't help to let the GPU "raycast" the entire scene
first and then read back all the data, because you are still reading the
data for every single pixel, and it doesn't make much of a difference
in speed.

  Besides, the "raycasting" step, ie, the very first recusion level, is
not the slowest step in the whole raytracing process. On the contrary,
as far as I can see, it's usually the fastest and simplest. You can test
this by setting max_trace_level to 1 in POV-Ray. The scene will most
probably always render very fast, especially if it only consists of
triangle meshes (as it would have to, if you were going to use the GPU
to help in the raycasting process). The slowness of raytracing comes
from the subsequent recursion levels (as well as texturing, media, etc),
and the vast majority of time is spent there.

  Data transfer speed is also the reason why trying to use the math
co-processors in the GPU is not feasible. While they could in theory
perform some very fast operations, reading the results of those operations
is so slow that it would completely nullify the speed benefit.
  The only theoretical benefit from the GPU math co-processors could be
achieved if they could perform lengthy calculations with a small (in data
amount) result. For example, being able to fully raytrace a pixel would
probably be such a thing. However, I don't think GPUs will be able to
perform full raytracing anytime soon. Not with all the features in POV-Ray
(such as different types of primitives, procedural texturing, media, etc).
  (One big barrier to this is also that, AFAIK, shaders cannot be
recursive. They might not even support loops, if I remember correctly.)

-- 
                                                          - Warp



From: Fa3ien
Subject: Re: GPU Rendering
Date: 25 Oct 2007 06:23:35
Message: <47206ea7$1@news.povray.org>

> Would it be possible to use the GPU as well as the CPU to render?  This
> could speed up render times so much and would allow a somewhat crappy
> machine to generate some pretty spiffy results.

Even if it were possible, there's the problem of portability: GPUs evolve
very fast, and there's no common standard these days (there are things
like DirectX, but they are platform-specific). Maybe someday.

Fabien.



From: zeroin23
Subject: Re: GPU Rendering
Date: 10 Nov 2007 22:15:00
Message: <web.4736736dbb28b9b8dcba94ae0@news.povray.org>
Has anyone looked at the link below? Maybe it could be used - shaders are
tough, but I think this might help.
Another idea I have: has anyone thought of using VTK for a quick preview
and for rotating models, so that the user can rotate the view and finally
find the camera location, zoom and look_at vector that is required?
(Moray can already do this?)


Last Updated: 10 / 11 / 2007
http://developer.nvidia.com/object/cuda.html

[...] that enables the GPU to solve complex computational problems in consumer,
business, and technical applications. CUDA (Compute Unified Device
Architecture) technology gives computationally intensive applications access to
the tremendous processing power of NVIDIA graphics processing units (GPUs)
through a revolutionary new programming interface. Providing orders of
magnitude more performance and simplifying software development by using the
standard C language, CUDA technology enables developers to create innovative
solutions for data-intensive problems. For advanced research and language
development, CUDA includes a low level assembly language layer and driver
interface.

The CUDA Toolkit is a complete software development solution for programming
CUDA-enabled GPUs. The Toolkit includes standard FFT and BLAS libraries, a
C-compiler for the NVIDIA GPU and a runtime driver. The CUDA runtime driver is [...]





From: scott
Subject: Re: GPU Rendering
Date: 12 Nov 2007 07:22:00
Message: <47384568$1@news.povray.org>
> Would it be possible to use the GPU as well as the CPU to render?  This
> could speed up render times so much and would allow a somewhat crappy
> machine to generate some pretty spiffy results.

Whilst I agree with what others have written here, it seems nobody has 
thought about the fact that by the time POV4 is released 3D cards will have 
progressed several generations.

IMO we shouldn't totally rule out using the GPU to help with high-quality 
renderings just because of some limitation that might not even be there in 5 
years time.  You only have to have a look at the feature-list of the 
different shader versions over the last 5 years to see the way it's going 
(btw loops are fine in both vertex and pixel shaders now, and there is no 
limit on the number of instructions like there used to be).

Also look at nVidia's Gelato, they use the 3D card to help with rendering, 
so it can't be all bad.  Maybe there is some technical paper somewhere about 
what Gelato does that would give some ideas for POV4?



From: Nicolas Alvarez
Subject: Re: GPU Rendering
Date: 12 Nov 2007 08:26:29
Message: <47385485@news.povray.org>

>> Would it be possible to use the GPU as well as the CPU to render?  This
>> could speed up render times so much and would allow a somewhat crappy
>> machine to generate some pretty spiffy results.
> 
> Whilst I agree with what others have written here, it seems nobody has 
> thought about the fact that by the time POV4 is released 3D cards will 
> have progressed several generations.
> 
> IMO we shouldn't totally rule out using the GPU to help with 
> high-quality renderings just because of some limitation that might not 
> even be there in 5 years time.  You only have to have a look at the 
> feature-list of the different shader versions over the last 5 years to 
> see the way it's going (btw loops are fine in both vertex and pixel 
> shaders now, and there is no limit on the number of instructions like 
> there used to be).
> 
> Also look at nVidia's Gelato, they use the 3D card to help with 
> rendering, so it can't be all bad.  Maybe there is some technical paper 
> somewhere about what Gelato does that would give some ideas for POV4?
> 

Agreed. I suggest not adding GPU support until there are GPUs that can 
actually do that, but leaving appropriate hooks and stubs in the code to 
easily add it whenever we're ready :)

For real-time previews, I think the best way is to have the Object class 
(and all the concrete objects that extend it, like Sphere) provide a 
Tesselate method along with Trace_Ray and Point_Inside (well, whatever 
they're called; you get the idea).
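
Just to make the idea concrete (a sketch only - these class and method
names are the made-up ones from above, not anything in the actual POV-Ray
source):

    // Every primitive knows how to trace itself AND how to approximate
    // itself with triangles for a GPU/preview path.
    struct Vector3 { double x, y, z; };
    struct Ray { Vector3 origin, direction; };
    struct Intersection { double depth; Vector3 point, normal; };
    struct TriangleMesh { /* vertices, normals, indices... */ };

    class Object
    {
    public:
        virtual ~Object() {}
        virtual bool Trace_Ray(const Ray &ray, Intersection &isect) const = 0;
        virtual bool Point_Inside(const Vector3 &p) const = 0;
        // New: emit a triangle approximation at a given accuracy, so a
        // live-preview backend never has to touch the tracing code.
        virtual void Tesselate(TriangleMesh &out, double max_error) const = 0;
    };

    class Sphere : public Object
    {
        Vector3 center;
        double radius;
    public:
        virtual bool Trace_Ray(const Ray &ray, Intersection &isect) const { /* ... */ return false; }
        virtual bool Point_Inside(const Vector3 &p) const { /* ... */ return false; }
        virtual void Tesselate(TriangleMesh &out, double max_error) const
        {
            // e.g. generate a latitude/longitude grid of triangles here
        }
    };

That way each primitive's tessellator lives next to the rest of that
primitive's code, and a preview or GPU backend only ever has to consume
TriangleMesh.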



From: scott
Subject: Re: GPU Rendering
Date: 12 Nov 2007 08:56:52
Message: <47385ba4$1@news.povray.org>
> Agreed. I suggest not adding GPU support until there are GPUs that can 
> actually do that, but leaving appropriate hooks and stubs in the code to 
> easily add it whenever we're ready :)

Yep, exactly my thinking.  Not even considering GPU rendering when writing 
POV4 would be a bit of a mistake IMO.  Just like starting to code a big 
project a year or two ago without any thought of future support for 
multi-core...

> For real-time previews, I think the best way is to have the Object class 
> (and all the concrete objects that extend it, like Sphere) provide a 
> Tesselate method along with Trace_Ray and Point_Inside (well, whatever 
> they're called; you get the idea).

Yes, that sounds sensible.  It's not actually very hard to tessellate the 
POV primitives; even the isosurface is pretty straightforward when you use 
a finite volumetric grid.  Of course for infinite objects you would need to 
bound them, but seeing as in GPU rendering you need to specify a "far" plane 
anyway, you would just clip your objects to that.

The materials shouldn't be too hard either, just the effort of porting the 
pigment/finish code over to pixel shaders.

Lights and shadows should be easy enough.  Multi-level reflections would be 
almost impossible, but first-level reflections should be doable for preview 
quality.



From: Nicolas Alvarez
Subject: Re: GPU Rendering
Date: 12 Nov 2007 09:08:13
Message: <47385e4d@news.povray.org>

>> For real-time previews, I think the best way is to have the Object class 
>> (and all the concrete objects that extend it, like Sphere) provide a 
>> Tesselate method along with Trace_Ray and Point_Inside (well, whatever 
>> they're called; you get the idea).
> 
> Yes, that sounds sensible.  It's not actually very hard to tessellate the 
> POV primitives; even the isosurface is pretty straightforward when you 
> use a finite volumetric grid.  Of course for infinite objects you would 
> need to bound them, but seeing as in GPU rendering you need to specify a 
> "far" plane anyway, you would just clip your objects to that.
> 
> The materials shouldn't be too hard either, just the effort of porting 
> the pigment/finish code over to pixel shaders.
> 
> Lights and shadows should be easy enough.  Multi-level reflections would 
> be almost impossible, but first-level reflections should be doable for 
> preview quality.

In this case too, I'm not interested in the maths needed for tessellating, 
just in planning the code from the beginning to, for example, allow each 
object to have a tessellator - so that if somebody wants to write a live 
preview patch, he doesn't have to change half of the POV-Ray code :)



