  Re: GPU rendering  
From: Sabrina Kilian
Date: 13 Jan 2010 16:44:46
Message: <4b4e3ece$1@news.povray.org>
nemesis wrote:
> Amazing the amount of self-denial one has to go in order to cope with excuses
> for why his favorite rendering engine should not evolve.
> 

That isn't an argument; it borders on insult. I offered you reasons.
Not vague reasons, but reasons why there is no man-power to divert to
this path that you, who admit you do not have the skills to contribute,
think will bring a dramatic speed increase.

> Sabrina Kilian <ski### [at] vtedu> wrote:
>>> Can you imagine what povray could do comparatively without getting
>>> boiled down by unbiased techniques?!
>>>
>> Those unbiased techniques are what allows them to appear so fast. The
>> video appears to be getting 10fps tops, and between 2 and 4 the rest of
>> the time. Yes, unbiased rendering is slower to achieve the same image as
>> a biased renderer, however you can stop it much sooner and get a full
>> resolution picture. That picture just has more noise.
>>
>> Your question comes out as if you were asking "Could you imagine what
>> povray could do by using unbiased techniques for single sample speed
>> increases without being unbiased?" I think the quickest answer would be
>> "Not really, but set the max depth to 1 and lets see what the pictures
>> look like."
> 
> No, my question is merely like:  "hey, I can render scenes in povray in seconds
> rather than dozens of minutes.  And truly complex ones in a few hours rather
> than days."
> 

Right, and as I said, the bandwidth of the PCI-E bus may or may not
allow for those truly complex scenes. Look at these demos and note the
rather simplistic geometry being used. Now pick a scene from the POV HOF
and compare.

My suspicion right now is that those complex scenes would choke the bus
in one setup, or choke the GPU in another. This is based on knowledge of
the principle and some skill at parallel programming and low-level code
design, but little experience with either the POV-Ray code base or
GPGPU. If one of the experts would like to chime in before I get some
tests written, please do.
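
To put some rough numbers on that suspicion, here is a back-of-envelope
sketch. The scene size and the bus figure are assumptions I picked for
illustration, not measurements of any real HOF scene:

#include <cstdio>

int main()
{
    // Assume a HOF-class mesh scene: ~5 million triangles, each storing
    // 3 vertices, 3 normals and 3 UV pairs as single-precision floats.
    const double triangles     = 5.0e6;
    const double bytes_per_tri = (9 + 9 + 6) * 4.0;   // ~96 bytes
    const double scene_bytes   = triangles * bytes_per_tri;

    // PCI-E 2.0 x16 tops out around 8 GB/s in theory; real transfers
    // usually come in well under that.
    const double bus_bytes_per_s = 8.0e9;

    printf("scene size : %.0f MB\n", scene_bytes / 1.0e6);
    printf("upload time: %.0f ms per full transfer\n",
           scene_bytes / bus_bytes_per_s * 1.0e3);
    return 0;
}

With those guesses you get roughly 480 MB of geometry and about 60 ms
just to push it across the bus once, before the GPU has traced a single
ray. Whether that is a one-time cost or something that has to be paid
repeatedly is exactly the kind of question the tests would answer.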

> I'm not impressed with that demo's noisy 4fps display.  I'm impressed to see
> scenes with full light transport phenomena set that usually take 1 hour to
> denoise being noise-free in a couple of minutes.
> 
> This is the future:  tapping all that hidden power that was being ignored so far
> because we insist on using a lame-O chip geared at word processing to do math
> operations.

Yes, it is. But that is not what you are asking for. You are asking for
it to be the immediate future of POV-Ray.

At this point in time, neither OpenCL nor CUDA guarantees double
precision floating point: it is an optional extension in OpenCL, and
only available on certain cards in CUDA. In a raytracer that does not
care about this, or that is rewritten not to care, this would not be a
problem. However, since POV-Ray uses doubles for a lot of things
internally, off-loading just part of the work to the GPU is a major
problem. The parts on the GPU would be using standard single-precision
floats while the CPU is using doubles, which will result in noticeable
precision loss. If you think the solar-system-scale issue is bad right
now, then wait until you lose a chunk of that precision and are stuck
dealing with the same artifacts at a much smaller scale. If you instead
change the CPU side to use only single-precision floats, you might as
well go back to running the older 16-bit versions. Or, if you insist on
forcing the GPU to do the extra math to emulate double precision
values, then you have lost all of the speed increase.
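
Here is a toy example of the kind of loss I am talking about. The 1 AU
distance is just a stand-in for a solar-system-scale scene; the point is
what happens to a millimetre-sized detail when doubles get squeezed into
floats:

#include <cstdio>

int main()
{
    const double au     = 1.496e11;   // ~1 astronomical unit in metres
    const double offset = 0.001;      // a 1 mm detail at that distance

    double d = au + offset;
    float  f = static_cast<float>(au) + static_cast<float>(offset);

    // In double the millimetre survives; in float it is rounded away
    // entirely, and the nearest representable float for the distance
    // itself is already several kilometres off.
    printf("double: %.6f m past 1 AU\n", d - au);
    printf("float : %.6f m past 1 AU\n", static_cast<double>(f) - au);
    return 0;
}

The double keeps the millimetre; the float not only drops it, it cannot
even hold the distance itself, landing several kilometres away. Mix the
two in one render and the GPU half of the pipeline is working at that
coarser resolution whether the CPU half likes it or not.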

Will POV-Ray move to the GPU? Of course, once there is a good assurance
that the code will not have to be rewritten for 8 different APIs, and
will not lose the precision that it is known for now. And once all of
the other problems are dealt with that you seem to think are just
fan-boys' "self-denial . . . in order to cope with excuses for why his
favorite rendering engine should not evolve."

