POV-Ray : Newsgroups : povray.pov4.discussion.general : Parallel GPU processor support for Nvidia CUDA architecture

From: clipka
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 19 Nov 2009 07:45:19
Message: <4b053ddf@news.povray.org>
Louis wrote:

> Ok, so I wasn't the first to think of it :-s

Indeed :-)

> After thinking about it, I guess one should only start supporting such
> architectures once a platform has been established that runs on at least two
> competitors' hardware.

... and when the architecture becomes flexible enough for the tasks at 
hand. The requirements of a raytracer differ a lot from those of a 
rasterizer.



From: arblick spule
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 30 Nov 2009 00:45:00
Message: <web.4b135bb982bfbbd4919c69570@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> Louis wrote:
>
> > Ok, so I wasn't the first to think of it :-s
>
> Indeed :-)
>
> > After thinking about it, I guess one should only start supporting such
> > architectures once a platform has been established that runs on at least two
> > competitors' hardware.
>
> ... and when the architecture becomes flexible enough for the tasks at
> hand. The requirements of a raytracer differ a lot from those of a
> rasterizer.

Is it not the case that the big two (three?) vendors (NVIDIA, ATI, and perhaps
others) are developing a common C-like language which might deal with this? I.e.
code written for NVIDIA's GPUs would translate directly to t'other.

Anyway, it would be great if we could use the power of the big GPUs to do a lot
of our maths while our CPUs sit there and deal with the need for antivirus
software, overbearing operating systems, instant messaging, and all of that
whilst streaming TV over our 't'internet connection, AND rendering 920,332,986
spheres in POV-Ray!


Sorry to mention my 920,332,986 spheres again (fourth time today) but I'm quite
impressed by POV-Ray's ability!  It did it in the time it takes to watch an
episode of "Scrubs" on MegaVideo but without affecting the playback!  Bonza!



From: Nicolas Alvarez
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 30 Nov 2009 01:29:49
Message: <4b13665d@news.povray.org>
"arblick spule" <aspule> wrote:
> Anyway, it would be great if we could use the power of the big GPUs to do
> a lot of our maths while our CPUs sit there and deal with the need for
> antivirus software, overbearing operating systems, instant messaging, and
> all of that whilst streaming TV over our 't'internet connection, AND
> rendering 920,332,986 spheres in POV-Ray!

What will you say if antivirus software and instant messengers also move to 
the GPU "because the CPU is busy with all that other useless software"? ;)



From: Sabrina Kilian
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 30 Nov 2009 06:46:12
Message: <4b13b084$1@news.povray.org>
arblick spule wrote:
> Is it not the case that the big two (three?) vendors (NVIDIA, ATI, and perhaps
> others) are developing a common C-like language which might deal with this? I.e.
> code written for NVIDIA's GPUs would translate directly to t'other.

Not the case, at the moment. They may be working towards using the same
language, but I stopped following along. Right now, it isn't there yet.

> Anyway, it would be great if we could use the power of the big GPUs to do a lot
> of our maths while our CPUs sit there and deal with the need for antivirus
> software, overbearing operating systems, instant messaging, and all of that
> whilst streaming TV over our 't'internet connection, AND rendering 920,332,986
> spheres in POV-Ray!

It would be great, yes. The problem is that what POV-Ray requires of a
processor is not what graphics cards provide. We may see POV-Ray start in
that direction when graphics cards become flexible enough to handle the
requirements of a general raytracer, or someone comes up with an algorithm
that works, or someone invests the time to keep all the branching code on
the CPU with the matrix math on the GPU and finds that it does indeed run
faster. Not before then, and definitely not before 4.0.
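
Purely as an illustration of that split (none of this is POV-Ray code, and
every name in it is made up), a minimal CUDA sketch might push a batch of
ray/sphere intersection tests to the GPU and leave all the branchy shading
and recursion decisions on the CPU:

#include <cuda_runtime.h>
#include <cfloat>

struct Ray    { float ox, oy, oz, dx, dy, dz; };   // direction assumed normalised
struct Sphere { float cx, cy, cz, r; };

// Each thread finds the nearest sphere hit for one ray; no shading, no
// recursion -- just the arithmetic that GPUs are good at.
__global__ void intersect_spheres(const Ray* rays, int nRays,
                                  const Sphere* spheres, int nSpheres,
                                  float* tHit, int* hitIndex)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;

    Ray ray = rays[i];
    float bestT = FLT_MAX;
    int   best  = -1;

    for (int s = 0; s < nSpheres; ++s) {
        Sphere sp = spheres[s];
        float lx = sp.cx - ray.ox, ly = sp.cy - ray.oy, lz = sp.cz - ray.oz;
        float b  = lx*ray.dx + ly*ray.dy + lz*ray.dz;
        float c  = lx*lx + ly*ly + lz*lz - sp.r*sp.r;
        float disc = b*b - c;
        if (disc < 0.0f) continue;                 // ray misses this sphere
        float t = b - sqrtf(disc);                 // nearer root
        if (t > 1e-4f && t < bestT) { bestT = t; best = s; }
    }
    tHit[i]     = bestT;
    hitIndex[i] = best;
}

The CPU would then read back tHit and hitIndex and do the texturing, lighting
and recursion logic there -- which is precisely the part that doesn't map well
to today's GPUs.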

> Sorry to mention my 920,332,986 spheres again (fourth time today) but I'm quite
> impressed by POV-Ray's ability!  It did it in the time it takes to watch an
> episode of "Scrubs" on MegaVideo but without affecting the playback!  Bonza!

It is amazing, isn't it?



From: JAppleyard
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 12 Dec 2009 09:55:00
Message: <web.4b23aa7982bfbbd4eb316e150@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> Louis <nomail@nomail> wrote:
> > It probably would not be possible to let every core process a pixel
>
>   How do you suggest the GPU should process the scene data (which it needs
> to trace rays in the first place), which might take hundreds of megabytes of
> memory?
>
>   And does CUDA already support conditional recursion?
>
> --
>                                                           - Warp

#1: The GPU has a bank of global memory which is fully accessible to every core.
Sizes in new cards vary, though the very bottom of the range is probably 128MB
(top of the range is 4GB; general consumer cards have 256-768MB). Newer GPUs can
also directly access host RAM (a rough zero-copy sketch follows after #2 below).

#2: No, but the next generation (Q1 2010) will. IIRC this was announced in
October. Speed would depend on quite how branchy the code is.
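
To illustrate the host-RAM point from #1, here is a rough zero-copy sketch
(purely hypothetical names; reads over PCIe are of course far slower than
device memory, so the hot data would stay in global memory and only the
overflow would spill to mapped host RAM):

#include <cuda_runtime.h>

__global__ void touch_scene(const float* scene, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = scene[i] * 2.0f;          // each read travels over the PCIe bus
}

int main()
{
    const int n = 1 << 20;
    float *hostScene = 0, *devScene = 0, *devOut = 0;

    cudaSetDeviceFlags(cudaDeviceMapHost);                     // allow mapped host memory
    cudaHostAlloc((void**)&hostScene, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) hostScene[i] = (float)i;       // "scene data" stays in host RAM

    cudaHostGetDevicePointer((void**)&devScene, hostScene, 0); // device-side alias of that RAM
    cudaMalloc((void**)&devOut, n * sizeof(float));

    touch_scene<<<(n + 255) / 256, 256>>>(devScene, devOut, n);
    cudaDeviceSynchronize();

    cudaFree(devOut);
    cudaFreeHost(hostScene);
    return 0;
}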


People are doing raytracing on CUDA already (and I think they're quite speedy --
look up NVIDIA's OptiX), though I imagine they can't quite match the feature set
of POV-Ray... and it's probably those features that don't map to GPUs. I must
admit to being fairly ignorant when it comes to the internals of ray-tracers.



From: Chambers
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 12 Dec 2009 12:58:06
Message: <4b23d9ae$1@news.povray.org>
JAppleyard wrote:
> People are doing raytracing on CUDA already

With fixed recursion levels.  Not good enough.

...Chambers



From: Kevin Wampler
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 12 Dec 2009 13:34:22
Message: <4b23e22e$1@news.povray.org>
Chambers wrote:
> JAppleyard wrote:
>> People are doing raytracing on CUDA already
> 
> With fixed recursion levels.  Not good enough.

Although I don't really think it's worth the POV-team's limited time 
working on it, raytracing with limited recursion levels (and maybe other 
restrictions) on the GPU could be very useful for fast lower-quality 
renders used while modeling a scene -- similar to the current Quality=n 
setting.
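
To make the "limited recursion" idea concrete, here is a toy device-side
fragment (purely illustrative -- every name and constant in it is made up)
that replaces recursive reflection with a bounded loop, the usual trick on
hardware that can't recurse:

#include <cuda_runtime.h>

struct Hit { float t; float3 p, n; bool ok; };

// Toy intersection against a single unit sphere at the origin -- a stand-in
// for real scene traversal; direction is assumed normalised.
__device__ Hit hit_unit_sphere(float3 o, float3 d)
{
    Hit h; h.ok = false;
    float b = -(o.x*d.x + o.y*d.y + o.z*d.z);
    float c = o.x*o.x + o.y*o.y + o.z*o.z - 1.0f;
    float disc = b*b - c;
    if (disc < 0.0f) return h;
    h.t = b - sqrtf(disc);
    if (h.t <= 1e-4f) return h;
    h.p = make_float3(o.x + h.t*d.x, o.y + h.t*d.y, o.z + h.t*d.z);
    h.n = h.p;                                     // unit sphere: normal == hit point
    h.ok = true;
    return h;
}

// Recursion replaced by a bounded loop: maxDepth plays the role of a fixed
// max_trace_level for a fast preview.
__device__ float3 shade_iterative(float3 o, float3 d, int maxDepth)
{
    float3 result = make_float3(0.0f, 0.0f, 0.0f);
    float  weight = 1.0f;
    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit h = hit_unit_sphere(o, d);
        if (!h.ok) break;
        result.x += weight * fabsf(h.n.x);         // fake "local colour" from the normal
        result.y += weight * fabsf(h.n.y);
        result.z += weight * fabsf(h.n.z);
        weight *= 0.5f;                            // pretend 50% reflectivity
        if (weight < 0.01f) break;                 // nothing left to contribute
        float k = 2.0f * (d.x*h.n.x + d.y*h.n.y + d.z*h.n.z);
        d = make_float3(d.x - k*h.n.x, d.y - k*h.n.y, d.z - k*h.n.z);
        o = h.p;
    }
    return result;
}

A preview mode could expose maxDepth to the user in much the same spirit as
max_trace_level and Quality=n cap the full renderer today.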



From: Warp
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 12 Dec 2009 13:51:44
Message: <4b23e640@news.povray.org>
Kevin Wampler <wam### [at] uwashingtonedu> wrote:
> Although I don't really think it's worth the POV-team's limited time 
> working on it, raytracing with limited recursion levels (and maybe other 
> restrictions) on the GPU could be very useful for fast lower-quality 
> renders used while modeling a scene -- similar to the current Quality=n 
> setting.

  It's not only about the quality of the rendering (in other words,
whether you could express all the textures, media and lighting features
of POV-Ray in CUDA), but also whether you can trace all POV-Ray primitives
with CUDA. Can you, for example, trace isosurfaces, the poly object or the
julia object with CUDA (even the next generation one)?
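
(The raw numerics are not really the obstacle: a toy fixed-step ray march over
some hard-coded field function fits in a CUDA thread easily enough, as in the
purely illustrative fragment below, where every name is made up. The sticking
point is evaluating an arbitrary user-defined isosurface function, or a general
polynomial or quaternion iteration, on the device.)

#include <cuda_runtime.h>

// Example field: a gyroid-like surface, f(p) = 0.
__device__ float field(float3 p)
{
    return sinf(p.x)*cosf(p.y) + sinf(p.y)*cosf(p.z) + sinf(p.z)*cosf(p.x);
}

// Fixed-step march along the ray looking for a sign change of f, then a
// short bisection to refine the root. Direction assumed normalised.
__device__ bool march_isosurface(float3 o, float3 d,
                                 float tMax, float step, float* tHit)
{
    float tPrev = 0.0f;
    float fPrev = field(o);
    for (float t = step; t < tMax; t += step) {
        float3 p = make_float3(o.x + t*d.x, o.y + t*d.y, o.z + t*d.z);
        float f = field(p);
        if (fPrev * f < 0.0f) {                    // root bracketed in [tPrev, t]
            float lo = tPrev, hi = t;
            for (int i = 0; i < 16; ++i) {
                float mid = 0.5f * (lo + hi);
                float3 pm = make_float3(o.x + mid*d.x, o.y + mid*d.y, o.z + mid*d.z);
                float fm = field(pm);
                if (fPrev * fm < 0.0f) hi = mid;
                else { lo = mid; fPrev = fm; }
            }
            *tHit = 0.5f * (lo + hi);
            return true;
        }
        tPrev = t;
        fPrev = f;
    }
    return false;                                  // no surface within tMax
}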

-- 
                                                          - Warp



From: Kevin Wampler
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 12 Dec 2009 18:02:03
Message: <4b2420eb$1@news.povray.org>
Warp wrote:
>   It's not only about the quality of the rendering (in other words,
> whether you could express all the textures, media and lighting features
> of POV-Ray in CUDA), but also whether you can trace all POV-Ray primitives
> with CUDA. Can you, for example, trace isosurfaces, the poly object or the
> julia object with CUDA (even the next generation one)?

I definitely agree with this, but I think that even if only an incomplete set
of POV features could be implemented on a GPU, it might still prove very
useful for modeling.  I suppose the question is whether enough of the features
can be implemented to be generally useful.  Unfortunately I haven't ever
messed around with GPU programming, so I can't answer this question.

Again, I'd personally prefer that the POV team spend their time on other 
things, but I still sort of hope that someone else who's interested 
makes a solid attempt at this sometime.



From: Chambers
Subject: Re: Parallel GPU processor support for Nvidia CUDA architecture
Date: 15 Dec 2009 02:27:28
Message: <4b273a60@news.povray.org>
Kevin Wampler wrote:
> Chambers wrote:
>> JAppleyard wrote:
>>> People are doing raytracing on CUDA already
>>
>> With fixed recursion levels.  Not good enough.
> 
> Although I don't really think it's worth the POV-team's limited time 
> working on it, raytracing with limited recursion levels (and maybe other 
> restrictions) on the GPU could be very useful for fast lower-quality 
> renders used while modeling a scene -- similar to the current Quality=n 
> setting.

The problem really is that the entire engine would need to be rewritten
from scratch to work this way.  Considering that it is not yet clear whether
GPGPUs (General Purpose Graphics Processing Units... isn't that an
oxymoron?) will handle such branch-heavy code better than more traditional
multicore processors (Intel doesn't think so, which is why they're working
on Larrabee), the amount of work necessary just isn't worth it for a partial
implementation.

...Chambers



