Subject: Re: GPU rendering
From: nemesis
Date: 13 Jan 2010 21:34:14
Message: <4b4e82a6@news.povray.org>
Sabrina Kilian wrote:
> nemesis wrote:
>> Amazing the amount of self-denial one has to go through in order to cope
>> with excuses for why his favorite rendering engine should not evolve.
>>
> 
> That isn't an argument; it borders on insult.

Yes, I apologise for that.  It was a general over-the-top statement, not 
addressed directly at you but at all GPU deniers.  I'm not an attention 
whore, but I think it's an important subject, and by going a bit trollish 
I think I helped spark some discussion of it and, hopefully, change.

You've already committed to the idea, so I think I did a good job.  But 
sorry for the rudeness anyway.  Military tactics, dear. ;)

> I offered you reasons.
> Not vague reasons, but reasons why there is no man-power to divert to
> this path you, who admit you do not have the skills to contribute, think
> will bring a dramatic speed increase.

I don't "think", I've been following it elsewhere.  There is measured 
data, at least for triangle meshes.  It may not be helpful at all for 
people who want perfect spheres or extremely slow isosurface terrains, 
but it should make POV-Ray a lot more viable for general 3D artists.
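
Just to illustrate the kind of workload those triangle-mesh benchmarks 
measure (my own sketch, nothing from the POV-Ray source): the per-ray 
triangle test is a dozen lines of straight-line math, one independent job 
per ray, which is exactly the shape of work a GPU eats for breakfast.

// Minimal Moller-Trumbore ray/triangle intersection: the tight,
// branch-light inner loop that maps to one GPU thread per ray.
// Illustrative sketch only, not POV-Ray code.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and fills in the ray parameter t on a hit.
bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > EPS;
}

int main()
{
    double t;
    // Unit triangle in the z=1 plane, ray straight down the z axis.
    if (intersect({0.25, 0.25, 0}, {0, 0, 1}, {0, 0, 1}, {1, 0, 1}, {0, 1, 1}, t))
        std::printf("hit at t = %f\n", t);
}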

>> Sabrina Kilian <ski### [at] vtedu> wrote:
> Right, and as I said, the bandwidth of the PCI-E bus may or may not
> allow for those truly complex scenes. Look at these demos, check out the
> rather simplistic geometry being used. Now pick a scene from the POV HOF.

You have a point.  I prefer to be optimistic, though, and think the main 
reason is that they were targeting a "real-time" (a puny 4 fps) display 
to draw attention, and that wouldn't have been possible with too much 
geometry on screen.

Have you seen the whole video?  There are some considerably more detailed 
scenes toward the end, including some well-known 3D benchmark scenes.

> My suspicion right now, is that those complex scenes would choke the bus
> in one setup, or choke the GPU in another. This is based on knowledge of
> the principle and some skill at parallel programming and low level code
> design, but little in either the POV-Ray code base or GPGPU. If one of
> the experts would like to chime in before I get some tests written,
> please do.
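
I'm no GPU expert either, but a back-of-envelope (my numbers, assumed 
rather than measured) suggests the bus only chokes if the scene has to be 
re-sent:

// Back-of-envelope: how long to push a big mesh over PCI-E?
// All numbers below are rough assumptions, not measurements.
#include <cstdio>

int main()
{
    const double triangles     = 10e6;          // a HOF-scale mesh, say
    const double bytes_per_tri = 3 * 3 * 4.0;   // 3 vertices x 3 floats x 4 bytes
    const double bus_bytes_s   = 4.0e9;         // ~4 GB/s: realistic PCI-E 2.0 x16

    double bytes = triangles * bytes_per_tri;
    std::printf("mesh: %.0f MB, transfer: %.0f ms\n",
                bytes / 1e6, bytes / bus_bytes_s * 1e3);
    // ~360 MB and ~90 ms: fine as a one-time upload, painful if geometry
    // has to be streamed back and forth every frame.
}

So a static scene uploaded once should be fine; it's dynamic geometry, or 
scenes bigger than video memory, that would choke.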

It might help to see how they are doing it over at LuxRender, complete with benchmarks:

http://www.luxrender.net/forum/viewtopic.php?f=21&t=2947

It's a very long thread, full of juicy stuff.

>> This is the future:  tapping all that hidden power that has been ignored
>> so far because we insist on using a lame-O chip geared toward word
>> processing to do math operations.
> 
> Yes, it is. But that is not what you are asking for. You are asking for
> it to be the immediate future of POV-Ray.

No, but it's good to be prepared.  POV-Ray develops slowly, and without a 
push it just might take another five years to become aware of GPGPU, let 
alone try to implement it.

> At this point in time, both OpenCL and CUDA have no enforced double
> precision float variables. They are optional in OpenCL, and only
> available on certain cards in CUDA. In a raytracer that does not care
> about this, or that is rewritten to not care, this would not be a
> problem. However, since POV-Ray uses them for a lot of things
> internally, there is a major problem in off-loading just part to the
> GPU. The parts on the GPU will only be using standard floats, while the
> CPU is using doubles, which will result in noticeable precision loss. If
> you think the solar system scale issue is bad, right now, then wait
> until you lose a portion of that and are stuck dealing with it on a much
> smaller scale. And you might as well go back to running the older 16 bit
> versions if you just change the CPU to only use single precision floats.
> Or, if you insist on forcing the GPU to do the extra math to fake double
> precision values, then you have lost all of the speed increase.
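
The scale problem is real and easy to show.  A quick illustration of what 
single precision does at solar-system-ish coordinates (my toy example, 
not POV-Ray code):

// At 1e6 units from the origin, a float's spacing is already ~0.0625,
// so a 0.001 step simply disappears.  Doubles keep it.
#include <cstdio>

int main()
{
    double hit_d = 1.0e6  + 0.001;    // 1000000.001
    float  hit_f = 1.0e6f + 0.001f;   // rounds back to 1000000.0

    std::printf("double: %.6f\nfloat:  %.6f\n", hit_d, (double)hit_f);
}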

David, the author of the demo, says in that thread:

"Yup, it is so easy and so fast to zoom in that you hit the end of the 
32-bit floating point resolution very soon. The only solution would be to 
use 64-bit floating point instead of 32-bit, but there are very few boards 
supporting it at the moment (I think the new ATI HD5xxx series has 
hardware support for doubles).

The other option would be the use of software-implemented floating point 
numbers with user-defined resolution ... this stuff is so fast that it 
could handle it quite well even in software."
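
By "software implemented floating point" he means tricks like the classic 
double-single representation: keep each value as an unevaluated sum of 
two floats, roughly doubling the effective mantissa.  Here's a minimal 
sketch of just the addition, based on Knuth's two-sum (real libraries 
such as the df64/dsfun90 family also cover multiplication and division, 
and all of this needs strict IEEE math, i.e. no -ffast-math):

#include <cstdio>

// A value stored as hi + lo, two single-precision floats.
struct dsfloat { float hi, lo; };

// Double-single addition: Knuth's two-sum captures the rounding error
// of the high-word sum exactly, then the result is renormalized.
dsfloat ds_add(dsfloat a, dsfloat b)
{
    float s = a.hi + b.hi;
    float v = s - a.hi;
    float e = (a.hi - (s - v)) + (b.hi - v);  // exact rounding error of s
    e += a.lo + b.lo;
    float hi = s + e;
    return { hi, e - (hi - s) };              // renormalize into hi/lo
}

int main()
{
    dsfloat pos  = { 1.0e6f, 0.0f };   // the large coordinate from above
    dsfloat step = { 0.001f, 0.0f };   // the step that plain float loses
    dsfloat r = ds_add(pos, step);
    std::printf("hi = %.1f  lo = %.6f  sum = %.6f\n",
                r.hi, r.lo, (double)r.hi + (double)r.lo);
    // prints sum = 1000000.001000: the step survives in the lo word.
}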


In any case, there's no need to worry about doubles: the hardware will be 
there by the time any implementation is complete.  Don't forget the 3.7 
beta has been around for ages, ever since multicore started to become 
feasible.

Cards supporting doubles are already out there, just not yet cheap enough 
to be in every PC.

