POV-Ray : Newsgroups : povray.off-topic : GPU rendering
  GPU rendering (Message 11 to 20 of 175)  
From: nemesis
Subject: Re: GPU rendering
Date: 12 Jan 2010 20:08:14
Message: <4b4d1cfe$1@news.povray.org>
Nicolas Alvarez wrote:
> nemesis wrote:
>> Good Lord, it's sad that 3.7 is not out yet
> 
> Are you contributing to the code?

No, like I said, I don't have the talent.  Do you?

What?  Can't we ask or dream that povray can still improve and get out 
of the '90s?



From: Sabrina Kilian
Subject: Re: GPU rendering
Date: 13 Jan 2010 00:46:22
Message: <4b4d5e2e$1@news.povray.org>
nemesis wrote:
> scott wrote:
>>>> Hopefully someone will try its hand at least to speed up povray
>>>> triangle handling.
>>>>
>>> It would be extremely difficult to handle triangles, and e.g. spheres
>>> or isosurfaces consistently if you use different ways of computing them.
>>
>> Yes, unless you do the whole lot on the GPU you're going to get bogged
>> down with CPU<>GPU communications.
>>
>> If you converted POV to run on the GPU then it would take a huge
>> amount of work and would no longer be as portable as it is today.
> 
> But would be much faster!  Take a look here:
> 

No, it would not. Yes, GPUs are fast at some things, but they are slow
at others. Certain techniques work well when moved to the GPU. POV-Ray
uses several that do not fall into that category.

Now, let's say we just move some parts to the graphics card, starting
from the 3.6 code that is stable right now. That would involve not just
updating the code to be highly parallel, but updating how certain
interactions are handled. That parallel part is currently being tested
in 3.7. And as andrel said, once you start handling each different
object type in a different manner, things get complicated.

First off, it is not likely that the parser will be moved to the
graphics card; that is just not something I see moving in that
direction. The most obvious candidate, to me anyway, would be the ray
intersection test. Now, in doing that, you would lose some precision on
some platforms but not others, due to different graphics cards
supporting different float sizes. Ta-da, instant bug reports.
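
(Just to make the idea concrete: a minimal sketch of what a per-ray
intersection kernel could look like in CUDA, assuming single-precision
floats and a made-up Ray/Sphere layout. This is not POV-Ray's actual
code, which does all of this in double precision on the CPU.)

    // Hypothetical data layout, not POV-Ray's real structures.
    struct Ray    { float ox, oy, oz, dx, dy, dz; };   // origin + unit direction
    struct Sphere { float cx, cy, cz, r; };            // centre + radius

    // One thread per ray: store the nearest hit distance, or -1 for a miss.
    __global__ void intersect(const Ray* rays, int nRays,
                              const Sphere* spheres, int nSpheres,
                              float* hitT)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nRays) return;

        Ray ray = rays[i];
        float best = -1.0f;
        for (int s = 0; s < nSpheres; ++s) {
            float lx = spheres[s].cx - ray.ox;   // vector from ray origin to centre
            float ly = spheres[s].cy - ray.oy;
            float lz = spheres[s].cz - ray.oz;
            float b  = lx*ray.dx + ly*ray.dy + lz*ray.dz;
            float c  = lx*lx + ly*ly + lz*lz - spheres[s].r * spheres[s].r;
            float disc = b*b - c;
            if (disc < 0.0f) continue;           // this ray misses this sphere
            float t = b - sqrtf(disc);           // nearer of the two roots
            if (t > 1e-4f && (best < 0.0f || t < best)) best = t;
        }
        hitT[i] = best;
    }

Running that in 32-bit floats on one card and 64-bit doubles on another
is exactly where the "instant bug reports" would come from.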

Let's say we get around those bugs by somehow enforcing the size of a
double-precision float. Compared to certain cards then, even with
multiple cores in parallel, the CPU's FPU is faster. No gain. But let's
say it is enforced to only compile on cards that support
double-precision floats. This is something the GPU excels at, so it
should offer some obvious speed increases, as long as everything works.
The difference is that what the graphics card is really good at is
taking vector math like that and outputting the results to the VGA,
DVI, or HDMI cable, whereas what we would probably need is for that
data to go back across the bus to the CPU.
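
(Again purely illustrative: a sketch of the host side under the same
assumptions, using the CUDA runtime API and the made-up types above.
The point is the last memcpy: every batch of intersection results has
to travel back over the PCIe bus before the CPU can do anything with it.)

    #include <cuda_runtime.h>
    #include <vector>

    // Trace one batch of rays against the scene's spheres on the GPU and
    // bring the hit distances back to the host for shading.
    void traceBatch(const std::vector<Ray>& rays,
                    const std::vector<Sphere>& spheres,
                    std::vector<float>& hitT)
    {
        Ray*    d_rays    = nullptr;
        Sphere* d_spheres = nullptr;
        float*  d_hitT    = nullptr;
        cudaMalloc(&d_rays,    rays.size()    * sizeof(Ray));
        cudaMalloc(&d_spheres, spheres.size() * sizeof(Sphere));
        cudaMalloc(&d_hitT,    rays.size()    * sizeof(float));

        // Upload the rays and (ideally only once per scene) the geometry.
        cudaMemcpy(d_rays,    rays.data(),    rays.size()    * sizeof(Ray),
                   cudaMemcpyHostToDevice);
        cudaMemcpy(d_spheres, spheres.data(), spheres.size() * sizeof(Sphere),
                   cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks  = ((int)rays.size() + threads - 1) / threads;
        intersect<<<blocks, threads>>>(d_rays, (int)rays.size(),
                                       d_spheres, (int)spheres.size(), d_hitT);

        // This is the trip back across the bus discussed above.
        hitT.resize(rays.size());
        cudaMemcpy(hitT.data(), d_hitT, rays.size() * sizeof(float),
                   cudaMemcpyDeviceToHost);

        cudaFree(d_rays); cudaFree(d_spheres); cudaFree(d_hitT);
    }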

My GPU coding skills are lacking, and I haven't read the POV-Ray source
in ages, so I am probably wrong on this. My suspicion is that the
texture code is still going to be handled on the CPU. Too many possible
things to stack up, but what this means is that all of those
intersection tests are going to get dumped back to the CPU. Now, another
decision: do you keep the GPU testing rays that may spawn from those
intersections, without knowing which ray you want? If the texture*
supports bumps, then tracing the reflection from an intersection is
probably a waste of a test. So would be tracing everything else that ray
eventually intersects, and all of its children, and so on. So now you
have to clutter the pipe again by sending all of your requests for ray
tests over to the GPU.

*if the object is deformed by the bumps, and the geometry reflects that,
then tracing the ray is a good option.
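
(If it helps to picture it: the "requests for ray tests" that would have
to be shipped back to the card could be as simple as something like this
hypothetical record, one per secondary ray the CPU-side shading code
decides it actually wants.)

    // Hypothetical request record; nothing like this exists in POV-Ray today.
    struct RayRequest {
        float ox, oy, oz;   // origin of the secondary ray
        float dx, dy, dz;   // direction chosen by the CPU texture/shading code
        int   parentPixel;  // where the eventual colour contribution belongs
    };

Even at 28 bytes apiece, millions of these per frame would add up to
real traffic on the bus.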

This would be a great thing to benchmark: to see how much render time is
spent on the ray intersection tests, and how much data is moved around
in processing all of them. If the amount of data is small enough, and if
the tests would be sped up appreciably*, and if the geometry can all be
loaded into the graphics card beforehand, then it may be worth the time
investment to move that code to the GPU.

*it may happen that the data is small enough, and the tests are sped up,
but the overhead for having to process all of this information outside
the CPU outweighs the benefits.
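
(A rough way to get that benchmark, still under the CUDA assumptions
above and reusing the buffer names from the earlier sketch: CUDA events
can separate the time spent in the intersection kernel from the time
spent moving the results back.)

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    intersect<<<blocks, threads>>>(d_rays, (int)rays.size(),
                                   d_spheres, (int)spheres.size(), d_hitT);
    cudaEventRecord(t1);
    cudaMemcpy(hitT.data(), d_hitT, rays.size() * sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float kernelMs = 0.0f, copyMs = 0.0f;
    cudaEventElapsedTime(&kernelMs, t0, t1);   // time spent testing rays
    cudaEventElapsedTime(&copyMs,   t1, t2);   // time spent on the bus

If copyMs regularly swamps kernelMs, the whole exercise is a loss no
matter how fast the card is.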

But not during the move from 3.6 to 3.7, as there are too many other
things being moved and changed.

> Can you imagine what povray could do comparatively without getting
> boiled down by unbiased techniques?!
> 

Those unbiased techniques are what allow them to appear so fast. The
video appears to be getting 10 fps tops, and between 2 and 4 fps the
rest of the time. Yes, unbiased rendering is slower to achieve the same
image as a biased renderer; however, you can stop it much sooner and
get a full-resolution picture. That picture just has more noise.

Your question comes out as if you were asking "Could you imagine what
povray could do by using unbiased techniques for single sample speed
increases without being unbiased?" I think the quickest answer would be
"Not really, but set the max depth to 1 and lets see what the pictures
look like."



From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 02:00:18
Message: <web.4b4d6f4939d93b1bcd7db9a80@news.povray.org>
Amazing the amount of self-denial one has to go through in order to cope with
excuses for why his favorite rendering engine should not evolve.

Sabrina Kilian <ski### [at] vtedu> wrote:
> > Can you imagine what povray could do comparatively without getting
> > boiled down by unbiased techniques?!
> >
>
> Those unbiased techniques are what allows them to appear so fast. The
> video appears to be getting 10fps tops, and between 2 and 4 the rest of
> the time. Yes, unbiased rendering is slower to achieve the same image as
> a biased renderer, however you can stop it much sooner and get a full
> resolution picture. That picture just has more noise.
>
> Your question comes out as if you were asking "Could you imagine what
> povray could do by using unbiased techniques for single sample speed
> increases without being unbiased?" I think the quickest answer would be
> "Not really, but set the max depth to 1 and lets see what the pictures
> look like."

No, my question is more like:  "hey, I can render scenes in povray in seconds
rather than dozens of minutes, and truly complex ones in a few hours rather
than days."

I'm not impressed with that demo's noisy 4 fps display.  I'm impressed to see
scenes with the full set of light transport phenomena, which usually take an
hour to become noise-free, rendering noise-free in a couple of minutes.

This is the future:  tapping all that hidden power that has been ignored so
far because we insist on using a lame-O chip geared toward word processing to
do math operations.



From: scott
Subject: Re: GPU rendering
Date: 13 Jan 2010 03:15:14
Message: <4b4d8112@news.povray.org>
>> If you converted POV to run on the GPU then it would take a huge amount 
>> of work and would no longer be as portable as it is today.
>
> But would be much faster!  Take a look here:

Sure, but there simply aren't the people willing to do the work for free. 
Look how long it's taken to get from 3.6 to the 3.7 beta; what hope is there 
of a complete rewrite for the GPU before 2020?

> it's an experimental and limited port of the open-source unbiased renderer 
> Luxrender to OpenCL.

From my very limited knowledge of OpenCL it seems like POV would need to be 
rewritten from scratch to use it.



From: Tim Cook
Subject: Re: GPU rendering
Date: 13 Jan 2010 09:10:09
Message: <4b4dd441$1@news.povray.org>
scott wrote:
> Sure, but there simply aren't the people willing to do the work for 
> free. Look how long it's taken from 3.6 to 3.7 beta, what hope is there 
> of a complete rewrite for the GPU before 2020?
> 
>  From my very limited knowledge of OpenCL it seems like POV would need 
> to be rewritten from scratch to use it.

In other words, 'not for 3.7'.  3.7 is still building the things that 
will be needed for eventual support of GPU rendering.  They need to be 
tested, verified and streamlined.

Maybe 4.0 will be able to use a bit of the GPU.  Maybe 5.0.  Maybe GPUs 
will become like what FPUs were, part of the main chip itself instead of 
a separate element.  Who knows?  That's at least ten years off... *wink*

--
Tim Cook
http://empyrean.freesitespace.net



From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 11:10:03
Message: <4b4df05b@news.povray.org>
scott wrote:
>>> If you converted POV to run on the GPU then it would take a huge 
>>> amount of work and would no longer be as portable as it is today.
>>
>> But would be much faster!  Take a look here:
> 
> Sure, but there simply aren't the people willing to do the work for 
> free. Look how long it's taken from 3.6 to 3.7 beta, what hope is there 
> of a complete rewrite for the GPU before 2020?

What hope is there for an old-fashioned, dog-slow, CPU-only raytracer to 
still be alive by 2020?

>> it's an experimental and limited port of the open-source unbiased 
>> renderer Luxrender to OpenCL.
> 
>  From my very limited knowledge of OpenCL it seems like POV would need 
> to be rewritten from scratch to use it.

The guys there ported the ray intersection code and the space 
partitioning, and are now working on refining the load balancing.  Good 
starting points without much pain, if you ask me...  I'd leave the 
trouble with non-triangle surfaces out of it.

-- 
a game sig: http://tinyurl.com/d3rxz9



From: Invisible
Subject: Re: GPU rendering
Date: 13 Jan 2010 11:14:50
Message: <4b4df17a$1@news.povray.org>
nemesis wrote:

> What hope is there for an old-fashioned, dog-slow, CPU-only raytracer to 
> still be alive by 2020?

Depends on whether any other renderers pop up that can do what POV-Ray does.

Scan-line rendering is way, way faster than any conceivable ray-tracing 
algorithm. And yet, people still use ray tracers...



From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 11:48:25
Message: <4b4df959@news.povray.org>
Invisible wrote:
> Scan-line rendering is way, way faster than any conceivable ray-tracing 
> algorithm. And yet, people still use ray tracers...

Yes, and in coming years they will be using GPU-bound ones that are 
several orders of magnitude faster, based on industry-standard triangle 
meshes.

-- 
a game sig: http://tinyurl.com/d3rxz9



From: andrel
Subject: Re: GPU rendering
Date: 13 Jan 2010 12:47:26
Message: <4B4E072E.8030704@hotmail.com>
On 13-1-2010 7:59, nemesis wrote:
> Amazing the amount of self-denial one has to go in order to cope with excuses
> for why his favorite rendering engine should not evolve.
> 
> Sabrina Kilian <ski### [at] vtedu> wrote:
>>> Can you imagine what povray could do comparatively without getting
>>> boiled down by unbiased techniques?!
>>>
>> Those unbiased techniques are what allows them to appear so fast. The
>> video appears to be getting 10fps tops, and between 2 and 4 the rest of
>> the time. Yes, unbiased rendering is slower to achieve the same image as
>> a biased renderer, however you can stop it much sooner and get a full
>> resolution picture. That picture just has more noise.
>>
>> Your question comes out as if you were asking "Could you imagine what
>> povray could do by using unbiased techniques for single sample speed
>> increases without being unbiased?" I think the quickest answer would be
>> "Not really, but set the max depth to 1 and lets see what the pictures
>> look like."
> 
> No, my question is merely like:  "hey, I can render scenes in povray in seconds
> rather than dozens of minutes.  And truly complex ones in a few hours rather
> than days."

You will still have the general increase in computing power, so if you 
need a 30-fold increase, just wait 7.5 years (at the usual doubling 
every 18 months or so, that is 2^5 = 32 over that span).

> I'm not impressed with that demo's noisy 4fps display.  I'm impressed to see
> scenes with full light transport phenomena set that usually take 1 hour to
> denoise being noise-free in a couple of minutes.
> 
> This is the future:  tapping all that hidden power that was being ignored so far
> because we insist on using a lame-O chip geared at word processing to do math
> operations.

That remark merely shows that you don't know anything about the history 
and design of computers, or choose to ignore that.

Anyway, the best we can hope for is a continuation of POV along the 
lines that we are currently following. Hopefully there will be an 
offspring (possibly GPU-based) that will be able to parse POV scenes and 
generate a preview of lower quality but in much less time. When that 
gets to the same quality as the main development line, they might be 
merged. The main problem is that there is not enough manpower to do 
this. You have indicated that you don't have the skills. I might have 
some, but I don't have the time. Most people here fall into one of 
these categories. So we just have to hope that someone comes along with 
a coincidence of time and skills.



From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 14:16:25
Message: <4b4e1c09@news.povray.org>
andrel wrote:
> You will still have the general increase in power, so if you need a 30 
> time increase, just wait 7.5 years.

In 7.5 years, assuming Intel doesn't buy NVIDIA, I'll be using all the 
sheer processing power available rather than just the CPU.  So you may 
have your 30x speedup while your GPU sits idle, but I'll be making it 
sweat to give me 500-1000x speedups.

>> This is the future:  tapping all that hidden power that was being 
>> ignored so far
>> because we insist on using a lame-O chip geared at word processing to 
>> do math
>> operations.
> 
> That remark merely shows that you don't know anything about the history 
> and design of computers, or choose to ignore that.

It was obviously an over-the-top remark, but you get the point.

 > Hopefully there will be an
 > offspring (possibly GPU based) that will be able to parse POV scenes and
 > generate a preview of less quality but in much smaller time. When that

This makes no sense at all:  people aren't getting into GPUs to get 
lame-O lower-quality real-time previews, but to speed up final renders 
by a few orders of magnitude.

If you think GPU = game-like graphic quality, you're dead wrong. 
General-purpose GPU programming is all about using that huge available 
power for general-purpose computations; power that you don't use at all 
if you're not a gamer right now.

-- 
a game sig: http://tinyurl.com/d3rxz9



