POV-Ray : Newsgroups : povray.off-topic : GPU rendering
  GPU rendering (Message 31 to 40 of 175)  
From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 19:15:23
Message: <4b4e621b@news.povray.org>
andrel wrote:
>> I think is that ATM GPU's are useful for trangulations and
>> that the  result is near real life. As far as I have heard the
>> rendering is less physical correct and more is faked. I though they
>> were a bit lacking in multiple reflection and refraction, in media
>> and possibly also in versatility of procedural textures. I am not a
>> gamer, so I don't actually know for sure.
>>
> So, point me to where I talked exclusively about game tech.

"less physical correct", "lacking reflection and refraction", "lacking 
media" and "lacking procedural textures" are all indeed game tech 
limitations (for now) that don't show up at all in the "physically 
correct" path tracer running on GPU I linked to.

> But let me try to explain again: what you pointed at and other things I 
> have seen so far is that GPUs are used for a limited set of primitives 
> only, modelling a subset of physical behaviour. Even if they claim to be 
> physically accurate in practice they aren't

GPU's don't claim anything.  General purpose algorithms making use of 
their sheer math processing power do.

I thought by physically correct you were talking about materials or 
light propagation behavior, not every model being made from polygons.

>  Again what I have seen and understood is that up till now POV is more 
> physical complete (disclaimer: I have not seen everything that is out 
> there.). Hence POV still has a place.

and I hope that place is a GPU.

> In order to get more 'realistic' games the GPUs have been optimized to 
> render textures and fake reflections and shadows. They can be used as 
> FPU replacement for certain tasks, but they are not perfect for general 
> processing (yet). That is as far as I know. There may have been 
> significant advances that I have missed because I am not a gamer and 
> hence do not follow the developments closely.

I'm not talking about games at all.  I only hinted at the fact that 
several other raytracers are beginning to use the GPU for their 
calculations, including full raytraced reflections, refractions, 
mesh-based lighting, etc.  They are not using the GPU as a mere game 
scanline engine; they are just general-purpose programs (actually, 
parts of them) running on the GPU.



From: nemesis
Subject: Re: GPU rendering
Date: 13 Jan 2010 21:34:14
Message: <4b4e82a6@news.povray.org>
Sabrina Kilian wrote:
> nemesis wrote:
>> Amazing the amount of self-denial one has to go in order to cope with excuses
>> for why his favorite rendering engine should not evolve.
>>
> 
> That isn't an arguement; it borders on insult.

yes, I apologise for that.  It was a general over-the-top statement not 
addressed directly at you, but at all GPU deniers.  I'm not an attention 
whore, but I think it's an important subject, and by going a bit 
trollish I think I helped to bring about discussion and, hopefully, change.

You're already committed to the idea, so I think I did a good job.  But 
sorry for the rudeness anyway.  Military tactics, dear. ;)

> I offered you reasons.
> Not vague reasons, but reasons why there is no man-power to divert to
> this path you, who admit you do not have the skills to contribute, think
> will bring a dramatic speed increase.

I don't "think", I've been following it elsewhere.  There are measured 
data, at least for triangle meshes.  May not be helpful at all for 
people who want perfect spheres or extremely slow isosurface terrains, 
but should make povray a lot more viable to general 3D artists.

> Right, and as I said, the bandwidth of the PCI-E bus may or may not
> allow for those truly complex scenes. Look at these demos, check out the
> rather simplistic geometry being used. Now pick a scene from the POV HOF.

You have a point.  I prefer to be optimistic, though, and think the 
main reason is that they were targeting a "real-time" (puny 4 fps) 
display to draw attention, which wouldn't have been possible with too 
much geometry on screen.

Have you seen the whole video?  There are some considerably more 
detailed scenes toward the end, including some well-known 3D benchmark 
scenes.

> My suspicion right now, is that those complex scenes would choke the bus
> in one setup, or choke the GPU in another. This is based on knowledge of
> the principle and some skill at parallel programing and low level code
> design, but little in either the POV-Ray code base or GPGPU. If one of
> the experts would like to chime in before I get some tests written,
> please do.

Might be of help to see how they are doing there, complete with benchmarks:

http://www.luxrender.net/forum/viewtopic.php?f=21&t=2947

very long thread full of juicy stuff.

>> This is the future:  tapping all that hidden power that was being ignored so far
>> because we insist on using a lame-O chip geared at word processing to do math
>> operations.
> 
> Yes, it is. But that is not what you are asking for. You are asking for
> it to be the immediate future of POV-Ray.

No, but it's good to be prepared.  POV-Ray develops slowly, and without 
a push it just might take another 5 years to become aware of GPGPU, let 
alone try to implement it.

> At this point in time, both OpenCL and CUDA have no enforced double
> precision float variables. They are optional in OpenCL, and only
> available on certain cards in CUDA. In a raytracer that does not care
> about this, or that is rewritten to not care, this would not be a
> problem. However, since POV-Ray uses them for a lot of things
> internally, there is a major problem in off-loading just part to the
> GPU. The parts on the GPU will only be using standard floats, while the
> CPU is using doubles, which will result in noticeable precision loss. If
> you think the solar system scale issue is bad, right now, then wait
> until you lose a portion of that and are stuck dealing with it on a much
> smaller scale. And you might as well go back to running the older 16 bit
> versions if you just change the CPU to only use single precision floats.
> Or, if you insist on forcing the GPU to do the extra math to fake double
> precision values, then you have lost all of the speed increase.

David, the author of the demo, says in that thread:

"Yup, it is so easy and so fast to zoom in that you end the 32bit 
floating point resolution very soon. The only solution would be to use 
64bit floating points instead of 32bit but there are very few boards 
supporting them at the moment (I think the new ATI HD5xxx series has the 
hardware support for double).

The other option would be the use software implemented floating point 
numbers with user defined resolution ... this stuff is so fast that it 
could handle it quite well even in software."


In any case, no need to worry about doubles as the hardware will just be 
there once any implementation whatsoever is complete:  don't forget 3.7 
beta has been around for ages ever since multicore started to become 
feasible.

Cards supporting doubles are already there, just not so cheap as to be 
in every PC.
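For a concrete feel of that 32-bit wall David mentions: single precision 
carries only about 7 decimal digits, so a fine detail added to a large 
coordinate simply vanishes.  A tiny C sketch (my own toy code, not from 
the thread; the function names are made up):

```c
/* Returns 1 if a detail of size `delta` survives being added to a
   coordinate of size `base` in 32-bit floats, 0 if rounding absorbs it. */
static int survives_in_float(double base, double delta) {
    float sum = (float)base + (float)delta;   /* forced to 32-bit */
    return sum != (float)base;
}

/* The same test carried out in 64-bit doubles. */
static int survives_in_double(double base, double delta) {
    return base + delta != base;
}
```

With base = 1.0e6 and delta = 0.01 (a 1 cm detail in a million-unit 
scene), the float version reports the detail as absorbed while the 
double version keeps it, which is the same zoom-in breakdown David 
describes.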



From: Patrick Elliott
Subject: Re: GPU rendering
Date: 13 Jan 2010 23:37:13
Message: <4b4e9f79$1@news.povray.org>
nemesis wrote:
> andrel wrote:
>>> I think is that ATM GPU's are useful for trangulations and
>>> that the  result is near real life. As far as I have heard the
>>> rendering is less physical correct and more is faked. I though they
>>> were a bit lacking in multiple reflection and refraction, in media
>>> and possibly also in versatility of procedural textures. I am not a
>>> gamer, so I don't actually know for sure.
>>>
>> So, point me to where I talked exclusively about game tech.
> 
> "less physical correct", "lacking reflection and refraction", "lacking 
> media" and "lacking procedural textures" are all indeed game tech 
> limitations (for now) that don't show up at all in the "physically 
> correct" path tracer running on GPU I linked to.
> 
>> But let me try to explain again: what you pointed at and other things 
>> I have seen so far is that GPUs are used for a limited set of 
>> primitives only, modelling a subset of physical behaviour. Even if 
>> they claim to be physically accurate in practice they aren't
> 
> GPU's don't claim anything.  General purpose algorithms making use of 
> their sheer math processing power do.
> 
> I thought by physically correct you were talking about materials or 
> light propagation behavior, not every model being made from polygons.
> 
>>  Again what I have seen and understood is that up till now POV is more 
>> physical complete (disclaimer: I have not seen everything that is out 
>> there.). Hence POV still has a place.
> 
> and I hope that place is a GPU.
> 
>> In order to get more 'realistic' games the GPUs have been optimized to 
>> render textures and fake reflections and shadows. They can be used as 
>> FPU replacement for certain tasks, but they are not perfect for 
>> general processing (yet). That is as far as I know. There may have 
>> been significant advances that I have missed because I am not a gamer 
>> and hence do not follow the developments closely.
> 
> I'm not talking about games at all.  I only hinted at the fact that 
> several other raytracers are beginning to use the GPU for their 
> calculations, including full raytraced reflections, refractions, 
> mesh-based lighting, etc.  They are not using the GPU as a mere game 
> scanline engine; they are just general-purpose programs (actually, 
> parts of them) running on the GPU.
Let's just say that it may be feasible, using a GPU, to do in an hour 
what takes POV-Ray days now, compared with prior CPUs that didn't have 
the bit depth for floats that modern ones do. But I agree with Andrel's 
general assessment, based on what I have seen and read, that *in 
general* too many things are still done using cheats, because the GPU 
can't handle them, and you *may* find yourself running into precision 
issues when trying to use some GPUs.

Now, that said, you *could* allow for a "minimum" requirement in that 
respect, and possibly get stuff as good as or better than we see on a 
cheap machine now. Though that is a pure guess. You could then "allow 
for" some sort of computable scaling of settings, where if the GPU 
allowed larger floats, you could adjust to get a *better* final image. 
Problem is, most of those parameters are bloody hard to get right when 
you *know* the machine you are using, never mind if you had to have the 
parser "update" them to match a better GPU.

All of which become irrelevant, if AMD's idea takes wing and the GPU 
becomes part of the CPU, making the difference between the "internal" 
floats, and the "external" GPU floats completely bloody meaningless. We 
may see GPUs become a sort of "enhanced process" loaded with stuff the 
inbuilt doesn't handle, instead of the main focus of this whole thing. 
The end result is likely that the difference between an AMD compliant 
POVRay and one for GPU will be zip, zilch, and nada.

-- 
void main () {

     if version = "Vista" {
       call slow_by_half();
       call DRM_everything();
       call functional_code();
     }
     else
       call crash_windows();
}




From: Sabrina Kilian
Subject: Re: GPU rendering
Date: 13 Jan 2010 23:42:18
Message: <4b4ea0aa$1@news.povray.org>
nemesis wrote:
> Sabrina Kilian wrote:
>> nemesis wrote:
>>> Amazing the amount of self-denial one has to go in order to cope with
>>> excuses
>>> for why his favorite rendering engine should not evolve.
>>>
>>
>> That isn't an arguement; it borders on insult.
> 
> yes, I apologise for that.  It was a general over-the-top statement not
> addressed directly at you, but at all GPU deniers.  I'm not an attention
> whore, but I think it's an important subject, and by going a bit trollish
> I think I helped to bring about discussion and, hopefully, change.
> 
> You're already committed to the idea, so I think I did a good job.  But
> sorry for the rudeness anyway.  Military tactics, dear. ;)
> 

My commitment to this started well before this conversation. Some time
about 2 years ago when I started reading about the abilities of CUDA and
the multicore setup on the PS3. Problem was, the PS3's double precision
was handled in hardware, but at something close to 2GFLOPs for the whole
system. Not a substantial increase. Under CUDA, the option was 32bit or
forget about it.

Military tactics only work if you can step up and lead. Otherwise . . .

>> I offered you reasons.
>> Not vague reasons, but reasons why there is no man-power to divert to
>> this path you, who admit you do not have the skills to contribute, think
>> will bring a dramatic speed increase.
> 
> I don't "think", I've been following it elsewhere.  There are measured
> data, at least for triangle meshes.  May not be helpful at all for
> people who want perfect spheres or extremely slow isosurface terrains,
> but should make povray a lot more viable to general 3D artists.
> 

Right, they bring a dramatic speed increase to other systems and
programs, so you THINK they will here as well. Prove it, or wait.


> Have you seen the whole video?  There are some considerably more
> detailed scenes toward the end, including some well-known 3D benchmark scenes.
>

The variety of GPUs is going to be one of the places this all falls 
apart. My laptop has a GeForce 9600M GT with 2.5 gigs of available ram, 
because it can share system resources. But for all that available 
space, it only has 32 cores.

If I start testing, and find an 8 fold decrease in speed, does this
indicate a problem with my set up, or with the code, or with the API?
Unlike with CPUs, where we may have a range of single to 4 cores (with 
only a few people using more), a relatively similar range of ram, and 
clock speeds varying by maybe 25%, with GPUs the clock speed may be 
similar but ram varies from 512 megs to several gigs, and cores from 8 
to 112 just on the 9xxx mobile lines.

If someone wants to donate a 2xx GPU that can actually handle 64bit
floats, I will be glad to put it to use.

>> My suspicion right now, is that those complex scenes would choke the bus
>> in one setup, or choke the GPU in another. This is based on knowledge of
>> the principle and some skill at parallel programing and low level code
>> design, but little in either the POV-Ray code base or GPGPU. If one of
>> the experts would like to chime in before I get some tests written,
>> please do.
> 
> Might be of help to see how they are doing there, complete with benchmarks:
> 
> http://www.luxrender.net/forum/viewtopic.php?f=21&t=2947
> 
> very long thread full of juicy stuff.
> 

30 odd pages, if you have a specific part that you think would be
helpful, a direct link would be better. I will keep trawling it.

> David, the author of the demo, says in that thread:
> 
> "Yup, it is so easy and so fast to zoom in that you end the 32bit
> floating point resolution very soon. The only solution would be to use
> 64bit floating points instead of 32bit but there are very few boards
> supporting them at the moment (I think the new ATI HD5xxx series has the
> hardware support for double).
> 
> The other option would be the use software implemented floating point
> numbers with user defined resolution ... this stuff is so fast that it
> could handle it quite well even in software."

Depends on the card, again. If the card is only offering double or 
triple the FLOPs of the FPU/CPU, then the speed lost to faking it in 
software means you will be no better off. Wide range of hardware, 
remember?

Anyone have a good double precision faking library for CUDA? I guess I
could write that as a first program, but why start from scratch?
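The usual approach, as far as I know, is "double-single" arithmetic: 
store each value as an unevaluated sum of two floats, and recover the 
rounding error of each operation with Knuth's two-sum. A rough sketch 
of the addition step in plain C (my own toy code, not any particular 
CUDA library; the names are invented):

```c
/* A "double-single" value: the unevaluated sum hi + lo, where |lo| is
   at most about half an ulp of hi. */
typedef struct { float hi, lo; } ds;

/* Split a double into two floats (how a host might hand values off). */
static ds ds_from_double(double x) {
    ds r;
    r.hi = (float)x;
    r.lo = (float)(x - (double)r.hi);
    return r;
}

static double ds_to_double(ds a) {
    return (double)a.hi + (double)a.lo;
}

/* Addition via Knuth's branch-free two-sum: the rounding error of the
   float add is recovered exactly and folded into the low word. */
static ds ds_add(ds a, ds b) {
    float s = a.hi + b.hi;
    float v = s - a.hi;
    float e = (a.hi - (s - v)) + (b.hi - v);  /* exact error of s */
    e += a.lo + b.lo;                         /* fold in low words */
    ds r;
    r.hi = s + e;                             /* renormalize */
    r.lo = e - (r.hi - s);
    return r;
}
```

Multiplication needs Dekker-style splitting on top of this, but the 
pattern is the same; whether it beats the CPU depends entirely on the 
card, as you say.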

> In any case, no need to worry about doubles as the hardware will just be
> there once any implementation whatsoever is complete:  don't forget 3.7
> beta has been around for ages ever since multicore started to become
> feasible.

Right, it will be there eventually. But testing it on 32bit hardware is
tough. Bugs may crop up, performance will be vastly different. And then
there is the number of developers available. Convince some new people to
offer code to POV-Ray, and you might convince the dev team otherwise.

> 
> Cards supporting doubles are already there, just not so cheap as to be
> in every PC.

I know, I bought a new laptop at the beginning of this summer and had 
been waiting for 2 years to get a graphics card that would support 
hardware 64-bit floats. But, because of the timing of my main desktop 
dying and the funds available, I just couldn't manage it. The 
university undergrad lab doesn't stock a room full of gaming computers 
for development; the grad student one may, but I am not privy to that.

If you happen to have a computer with a high end GPU that I can run
comparison benchmarks on, great. Otherwise, my development time will be
limited to how often I can ship code off to friends and get benchmarks
and profiles.

Or I could find an AGP GeForce 2xx, and drop it into a really old tower.
I wonder if those even exist.



From: Sabrina Kilian
Subject: Re: GPU rendering
Date: 14 Jan 2010 00:13:02
Message: <4b4ea7de$1@news.povray.org>
nemesis wrote:
> andrel wrote:
>>> I think is that ATM GPU's are useful for trangulations and
>>> that the  result is near real life. As far as I have heard the
>>> rendering is less physical correct and more is faked. I though they
>>> were a bit lacking in multiple reflection and refraction, in media
>>> and possibly also in versatility of procedural textures. I am not a
>>> gamer, so I don't actually know for sure.
>>>
>> So, point me to where I talked exclusively about game tech.
> 
> "less physical correct", "lacking reflection and refraction", "lacking
> media" and "lacking procedural textures" are all indeed game tech
> limitations (for now) that don't show up at all in the "physically
> correct" path tracer running on GPU I linked to.
> 

They are also short cuts that people have used to implement ray-tracing
on the GPU.



From: Sabrina Kilian
Subject: Re: GPU rendering
Date: 14 Jan 2010 00:22:54
Message: <4b4eaa2e$1@news.povray.org>
Patrick Elliott wrote:
> All of which become irrelevant, if AMD's idea takes wing and the GPU
> becomes part of the CPU, making the difference between the "internal"
> floats, and the "external" GPU floats completely bloody meaningless. We
> may see GPUs become a sort of "enhanced process" loaded with stuff the
> inbuilt doesn't handle, instead of the main focus of this whole thing.
> The end result is likely that the difference between an AMD compliant
> POVRay and one for GPU will be zip, zilch, and nada.
> 

This is one thing that will drastically change everything. If that idea
takes flight, and any compiler offers a nice API and syntax for writing
the equivalent of CUDA kernels, then the data bus problems vanish (to
the coder, they are still there in reality), the bit accuracy problem
goes away, and very little needs to be rewritten. Recursion may be a
problem for the GPU on chip, but that will be a problem for any current
GPU implementation.
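For what it's worth, a pure reflection chain doesn't actually need a 
call stack: the recursion can be flattened into a loop that carries an 
accumulated attenuation, which is how GPU tracers usually sidestep it. 
A toy sketch (my own; trace_hit is a made-up stand-in for the real 
intersection and shading code):

```c
#define MAX_DEPTH 8

typedef struct { float r, g, b; } color;

/* Made-up stand-in for the real tracer: "hits" a fixed surface on the
   first three bounces, after which the ray escapes the scene. */
static int trace_hit(int bounce, color *surface, float *reflectivity) {
    if (bounce >= 3) return 0;
    surface->r = 0.6f; surface->g = 0.3f; surface->b = 0.1f;
    *reflectivity = 0.5f;
    return 1;
}

/* A recursive shade(depth) rewritten as a loop: instead of returning
   through nested calls, multiply a running attenuation. */
static color shade_iterative(void) {
    color total = {0.0f, 0.0f, 0.0f};
    float atten = 1.0f;              /* product of reflectivities so far */
    for (int bounce = 0; bounce < MAX_DEPTH; ++bounce) {
        color surface;
        float refl;
        if (!trace_hit(bounce, &surface, &refl))
            break;                   /* ray escaped: stop bouncing */
        total.r += atten * surface.r * (1.0f - refl);
        total.g += atten * surface.g * (1.0f - refl);
        total.b += atten * surface.b * (1.0f - refl);
        atten *= refl;
        if (atten < 1e-3f) break;    /* further bounces negligible */
    }
    return total;
}
```

Refraction, where a ray genuinely splits in two, does need an explicit 
stack in local memory, so it stays the awkward case on any current GPU.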

But, chances are, we would still be stuck with AMD/ATI implementing
their Stream SDK instead of OpenCL. But, we can hold out hope.



From: scott
Subject: Re: GPU rendering
Date: 14 Jan 2010 03:21:06
Message: <4b4ed3f2$1@news.povray.org>
>> Sure, but there simply aren't the people willing to do the work for free. 
>> Look how long it's taken from 3.6 to 3.7 beta, what hope is there of a 
>> complete rewrite for the GPU before 2020?
>
> What hope is there for an old-fashioned, dog-slow, CPU-only raytracer to 
> still be alive by 2020?

Because nothing else will come along with a nice SDL like POV has.  Almost 
all other raytracers *require* you to use an external mesh modeller to 
generate your scene - POV doesn't which IMO is its strongest point.

> The guys there ported the ray intersection code and the space 
> partitioning.

Sure, that's relatively simple for triangles, but that is only a tiny part 
of POV.  The interesting parts are all the other primitives and the 
material options.  I suspect converting those would be orders of 
magnitude more work than just the ray-triangle intersection test.
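For reference, that ray-triangle test really is just a handful of dot 
and cross products; here is the standard Moller-Trumbore formulation 
sketched in plain C (my own sketch, not POV-Ray's actual code):

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) {
    return (vec3){a.x - b.x, a.y - b.y, a.z - b.z};
}
static vec3 cross(vec3 a, vec3 b) {
    return (vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(vec3 a, vec3 b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

/* Moller-Trumbore: returns 1 and writes the hit distance to *t if the
   ray orig + t*dir crosses triangle (v0, v1, v2); returns 0 otherwise. */
static int ray_triangle(vec3 orig, vec3 dir,
                        vec3 v0, vec3 v1, vec3 v2, double *t) {
    const double EPS = 1e-9;
    vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;        /* ray parallel to the plane */
    double inv = 1.0 / det;
    vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;           /* first barycentric coord */
    if (u < 0.0 || u > 1.0) return 0;
    vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;         /* second barycentric coord */
    if (v < 0.0 || u + v > 1.0) return 0;
    *t = dot(e2, q) * inv;                /* distance along the ray */
    return *t > EPS;                      /* hit only if in front */
}
```

A mesh tracer is essentially this test plus an acceleration structure; 
it's everything else in POV, the primitives and materials, that would 
be the real porting effort.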

> And are working now on refining load balance.  Good starting points 
> without much pain if you ask me...  I'd leave troubles with non-triangle 
> surfaces out.

If you only want to render triangle meshes then I would suggest that POV 
isn't the best tool for the job.



From: nemesis
Subject: Re: GPU rendering
Date: 14 Jan 2010 10:54:13
Message: <4b4f3e25@news.povray.org>
Sabrina Kilian escreveu:
> nemesis wrote:
>> You already commit to the idea, so I think I did a good job.  But sorry
>> for the rudeness anyway.  Military tactics, dear. ;)
> 
> Military tactics only work if you can step up and lead. Otherwise . . .

you're right.  I'm only Private Joker playing at Sergeant Hartman.

> Right, they bring a dramatic speed increase to other systems and
> programs, so you THINK they will here as well. Prove it, or wait.

I would think pov-ray's triangle handling to be similar to that of other 
raytracers but if that's not the case, I'll just wait for your word on 
it, sweetie.

> If I start testing, and find an 8 fold decrease in speed, does this
> indicate a problem with my set up, or with the code, or with the API?

In that thread, they run into all of them.  oh teh fun!  It might be of 
much help not to duplicate the same errors.

Notice also that OpenCL drivers are still alpha/beta quality.

>> Might be of help to see how they are doing there, complete with benchmarks:
>>
>> http://www.luxrender.net/forum/viewtopic.php?f=21&t=2947
>>
>> very long thread full of juicy stuff.
>>
> 
> 30 odd pages, if you have a specific part that you think would be
> helpful, a direct link would be better. I will keep trawling it.

No, I didn't read it all either, much less the later pages.  The talk 
about float4 vectors might interest you.

> Depends on the card, again. If the card is only offering double or
> triple the FLOPs of the FPU/CPU, then the speed lost to faking it in
> software means you will be no better off. Wide range of hardware, remember?

A wide range of hardware has never prevented POV-Ray from running on 
low-end machines or super-duper minicomputers.  Users never complained 
about the vastly different speeds.

Shoot for the best, darling.

> Anyone have a good double precision faking library for CUDA? I guess I
> could write that as a first program, but why start from scratch?

So you'll be targeting CUDA?  It's not cross-platform like OpenCL, 
although it's much more mature for now.

> If you happen to have a computer with a high end GPU that I can run
> comparison benchmarks on, great. Otherwise, my development time will be
> limited to how often I can ship code off to friends and get benchmarks
> and profiles.

sadly, I'm already an old wig with a Q6600 and a cheap nvidia card (9400 
GT if I remember correctly).


-- 
a game sig: http://tinyurl.com/d3rxz9



From: nemesis
Subject: Re: GPU rendering
Date: 14 Jan 2010 11:31:24
Message: <4b4f46dc$1@news.povray.org>
scott escreveu:
>> What hope is there for an old-fashioned, dog-slow, CPU-only raytracer 
>> to still be alive by 2020?
> 
> Because nothing else will come along with a nice SDL like POV has.  
> Almost all other raytracers *require* you to use an external mesh 
> modeller to generate your scene - POV doesn't which IMO is its strongest 
> point.

well, you have a point.  FORTRAN is still around too, and it's just as 
niche as POV's SDL, but it's also more useful.

>> The guys there ported the ray intersection code and the space 
>> partitioning.
> 
> Sure, that's relatively simple for triangles, but that is only a tiny 
> part of POV.

which is why I'm suggesting looking only at this tiny part for the 
change.  Tiny or not, it's 90% more useful to general 3D artists than 
all the perfect spheres or math surfaces.

Would you not enjoy seeing POV-Ray become wildly popular?  Do you 
prefer it to be this geek niche?  Popularity would also bring more 
contributing developers, I guess...

> If you only want to render triangle meshes then I would suggest that POV 
> isn't the best tool for the job.

That is kind of obvious as it is now.

-- 
a game sig: http://tinyurl.com/d3rxz9



From: nemesis
Subject: Re: GPU rendering
Date: 14 Jan 2010 11:35:32
Message: <4b4f47d4@news.povray.org>
Sabrina Kilian escreveu:
> nemesis wrote:
>> andrel wrote:
>>> So, point me to where I talked exclusively about game tech.
>> "less physical correct", "lacking reflection and refraction", "lacking
>> media" and "lacking procedural textures" are all indeed game tech
>> limitations (for now) that don't show up at all in the "physically
>> correct" path tracer running on GPU I linked to.
>>
> They are also short cuts that people have used to implement ray-tracing
> on the GPU.

I don't see those limitations in the demo.  Indeed, there is a Cornell 
box in there with reflections and refraction.

-- 
a game sig: http://tinyurl.com/d3rxz9




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.