Tom Melly wrote:
> "scott" <sco### [at] spamcom> wrote in message
> news:416cf59c@news.povray.org...
>
>>> Now, if you could increase your understanding of
>>> http://tag.povray.org/povQandT/miscQandT.html#3dcard , I think I
>>> would be happier :-)
>>
>> I think I would be happier if you could increase your understanding
>> of how pixel shaders work on the latest 3D cards :-)
>>
>
> ... but this misses the point. Irrespective of whether modern 3d card
> acceleration options are compatible with raytracing or not, povray is
> unlikely to support them if doing so requires custom code.
>
> On the other hand, if the 3d card automatically took over the
> calculation from the main CPU when appropriate....
The way you write code for a pixel/vertex shader is so different from the way
you write it for a normal CPU that I think that would be almost impossible.
> That said, given what I understand of POV and 3d cards, unless 3d
> cards were suitable for involvement in *any* intensive CPU activity,
> then I fail to see why they would be of use.
Well, if you write a ray-object intersection algorithm that runs on the GPU,
that would certainly help a lot. Of course POV-ray would need to be
modified (and this will not likely happen, but it could be a patch, or
another raytracer entirely). GPUs are *very* fast at running the same code in
parallel. So POV could give the GPU a batch of rays to calculate
intersections with; the GPU can go away, do this, and return the result
when it's done. During this time the CPU can also be doing the same (and
working out the pixels from the last GPU result). It would certainly speed
things up: look at that link, where the guy was getting 30fps from his
simple raytracer and 1200fps once it was running with the GPU helping.
Don't forget that GPUs can run pixel shaders at stupid speeds, on the order
of millions of texels/second; that's *far* faster than any code could run on
a normal CPU.
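Sketched in plain Python (just a stand-in for the idea; the real thing would be
a pixel shader, and the names here are made up), the batching looks like this.
The loop body is the uniform, branch-light arithmetic that every shader unit
would run in parallel, one ray per unit:

```python
import math

def intersect_batch(rays, center, radius):
    """Closest-hit distance for each (origin, direction) ray against one
    sphere, or None where the ray misses.  On a GPU the body of this loop
    is what every shader unit would execute in parallel, one ray per unit;
    the CPU loop here just stands in for that."""
    cx, cy, cz = center
    results = []
    for (ox, oy, oz), (dx, dy, dz) in rays:
        ocx, ocy, ocz = ox - cx, oy - cy, oz - cz
        b = ocx * dx + ocy * dy + ocz * dz        # dot(origin - center, dir)
        c = ocx * ocx + ocy * ocy + ocz * ocz - radius * radius
        disc = b * b - c                          # dir assumed unit length
        results.append(-b - math.sqrt(disc) if disc >= 0.0 else None)
    return results

# four "parallel" rays fired down -z at a unit sphere on the origin
rays = [((x, y, 5.0), (0.0, 0.0, -1.0)) for x in (-2.0, 0.0) for y in (-2.0, 0.0)]
print(intersect_batch(rays, (0.0, 0.0, 0.0), 1.0))  # -> [None, None, None, 4.0]
```

The CPU hands over the whole `rays` list, gets the whole result list back, and
can shade the previous batch in the meantime.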
> Povray, after all, is about creating a file from a series of
> calculations - the display of the image is not part of povray's job.
True, and the job of a pixel shader in a GPU is to run code and output a
chunk of data. Normally that chunk of data is then displayed on the screen,
but it doesn't have to be!
Post a reply to this message
In article <416d6075$1@news.povray.org>, "scott" <sco### [at] spamcom>
wrote:
> Well, if you write a ray-object intersection algorithm that runs on the GPU,
> that would certainly help a lot. Of course POV-ray would need to be
> modified (and this will not likely happen, but it could be a patch, or
> another raytracer entirely). GPUs are *very* fast at doing the same code in
> parallel. So POV could give the GPU a batch of rays to calculate
> intersections with, the GPU can go away and do this and return the result
> when it's done. During this time the CPU can also be doing the same (and
> working out the pixels from the last GPU result). It would certainly speed
> things up, look at that link where the guy was getting 30fps from his
> simple raytracer and then was getting 1200fps when it was running with the
> GPU helping.
It's still not very useful to POV-Ray. It'd spend too much time copying
data to and from the card, and pulling data off the card is generally
not a fast operation...they're optimized for displaying triangles and
crunching numbers local to the card. What these demos do is
basically hard code a simple, small raytracer and scene designed around
the abilities of the card into the pixel shader. It's fast because it
all fits on the card and it doesn't do anything that needs to move stuff
between the card and the main system. In addition, precision limitations
will be a huge problem...POV uses double precision for most
calculations, and it needs them. From what I've seen, GPUs use half
precision...good enough for a demo, but not enough for general
raytracing.
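You can see the problem without a GPU at all: Python's struct module can round
a value through the IEEE half-precision format (the numbers below are made up,
but the 10-bit mantissa is the real culprit). Two intersection depths one unit
apart collapse to the same half-precision value, so the raytracer can no
longer tell which surface is in front:

```python
import struct

def to_half(x):
    """Round a float through IEEE-754 half precision (a 10-bit mantissa,
    roughly what the pixel shaders in question compute with) and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# two intersection depths one unit apart, as a shadow ray might see them
t_near, t_far = 12345.0, 12346.0

print(t_near < t_far)                    # True: doubles keep the ordering
print(to_half(t_near) < to_half(t_far))  # False: both round to 12344.0
```

At depths around 12000 the half format's spacing between representable values
is already 8 units, which is hopeless for the epsilon tests a general
raytracer relies on.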
It's a neat trick, but it's just not general enough to handle what
POV-Ray needs to do. You might be able to make use of it with a more
limited raytracer (maybe a scientific visualization app, for example),
but it's of no help to POV-Ray.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tagpovrayorg>
http://tag.povray.org/
Christopher James Huff wrote:
> In article <416d6075$1@news.povray.org>, "scott" <sco### [at] spamcom>
> wrote:
>
>> Well, if you write a ray-object intersection algorithm that runs on
>> the GPU, that would certainly help a lot. Of course POV-ray would
>> need to be modified (and this will not likely happen, but it could
>> be a patch, or another raytracer entirely). GPUs are *very* fast at
>> doing the same code in parallel. So POV could give the GPU a batch
>> of rays to calculate intersections with, the GPU can go away and do
>> this and return the result when it's done. During this time the CPU
>> can also be doing the same (and working out the pixels from the last
>> GPU result). It would certainly speed things up, look at that link
>> where the guy was getting 30fps from his simple raytracer and then
>> was getting 1200fps when it was running with the GPU helping.
>
> It's still not very useful to POV-Ray. It'd spend too much time
> copying data to and from the card, and pulling data off the card is
> generally not a fast operation...they're optimized for displaying
> triangles and crunching numbers local to the card. What these demos
> do is basically hard code a simple, small raytracer and scene
> designed around the abilities of the card into the pixel shader. It's
> fast because it all fits on the card and it doesn't do anything that
> needs to move stuff between the card and the main system. In
> addition, precision limitations will be a huge problem...POV uses
> double precision for most calculations, and it needs them. From what
> I've seen, GPUs use half precision...good enough for a demo, but not
> enough for general raytracing.
>
> It's a neat trick, but it's just not general enough to handle what
> POV-Ray needs to do. You might be able to make use of it with a more
> limited raytracer (maybe a scientific visualization app, for example),
> but it's of no help to POV-Ray.
Oh yes, I realise this. It's just worth keeping an eye on in the future; I'm
sure GPUs are going to get more and more complex.
Maybe this is a stupid question (and a waste of your 3d card if I'm right),
but couldn't you draw meshes and flat, simple shapes (i.e. boxes) with the 3d
card and take a bit of the load off the main CPU?
I guess it's not so much of a problem nowadays, but a co-processor would
probably be a godsend too...
"Ghost_Dog" <gho### [at] hotmailcom> wrote:
>
> but couldn't you draw meshes and flat, simple shapes (i.e. boxes) with the 3d
> card and take a bit of the load off the main CPU?
Hi,
see here why this does not work...
http://tag.povray.org/povQandT/miscQandT.html#3dcard
Greets, Mark
Ghost_Dog <gho### [at] hotmailcom> wrote:
> maybe this is a stupid question (and a waste of your 3d card if I'm right),
> but couldn't you draw meshes and flat, simple shapes (i.e. boxes) with the 3d
> card and take a bit of the load off the main CPU?
How do you expect the 3D-card to be able to render POV-Ray's procedural
textures, reflections, refractions, photons, radiosity and other similar
effects? And how do you expect the mesh to be reflected/refracted from
other objects? How do you expect the mesh to cast shadows (including
self-shadowing)? Meshes can also e.g. contain media: how do you expect
that to be rendered? What if the camera has been set up for something the
3D-card is unable to handle, such as a spherical, ultra-wide-angle or a
panoramic camera, or if the camera has a 'normal' block?
--
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -
Now I'm wondering just what algorithms a 3D card does have that makes it
'impossible' to be used to raytrace...
...or is it merely 'utterly impractical'?
After all, you can do calculations to arbitrary precision the hard way,
it just isn't very fast compared to using inbuilt FPU abilities, and
reduces the 'using 3D card to increase rendering speed' to an absurdity,
but not an impossibility. If the 3D card is able to calculate even one
pixel in the time it takes the rest of the scene to be rendered
normally, it technically has sped up the total render speed.
On a related note, if you go the reductio ad absurdum mathematica route
and do number crunching with anything on the computer that can possibly
do so, could you raytrace with a sound card...?
In article <web.41860bcbb4bdbd0d563d27f50@news.povray.org>,
"Ghost_Dog" <gho### [at] hotmailcom> wrote:
> maybe this is a stupid question (and a waste of your 3d card if I'm right),
> but couldn't you draw meshes and flat, simple shapes (i.e. boxes) with the 3d
> card and take a bit of the load off the main CPU?
> I guess it's not so much of a problem nowadays, but a co-processor would
> probably be a godsend too..
You could, but:
1) it would only work for shapes that can easily be tessellated. Shapes
that are typically very fast to raytrace anyway.
2) it would only work for the first-level camera rays. No help on
reflections, transparency, shadows, and the other stuff that accounts
for the main slowdowns with raytracing. It also will only work with
orthographic and perspective cameras.
3) meshes are very fast to raytrace, and get better as they get larger.
A mesh twice as big will take twice as long for a scanline engine to
draw, but may only take a small fraction longer with a raytracer.
4) it would make POV-Ray far more dependent on the hardware, and require
a complete redesign of the core code. You would also likely get
precision problems...depth buffers aren't floating point.
Basically, you're putting lots of work into improving the parts that
least need improvement, and introducing a lot of new problems in the
process.
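To put rough numbers on point 3 (a back-of-the-envelope sketch in Python,
assuming an idealised balanced bounding hierarchy, which real bounding
schemes only approximate):

```python
import math

def visits_per_ray(n):
    """Rough triangle 'visits' needed to resolve one ray against a mesh of
    n triangles: a scanline engine touches every triangle, while a
    raytracer with a balanced bounding hierarchy descends roughly one
    path of depth log2(n)."""
    return n, math.ceil(math.log2(n))

for n in (1000, 2000, 4000):
    scanline, raytrace = visits_per_ray(n)
    print(f"{n:>5} triangles: scanline {scanline:>5}, raytracer ~{raytrace}")
```

Doubling the mesh doubles the scanline work but adds only one level to the
hierarchy, which is why raytracing meshes gets comparatively better as they
get larger.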
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tagpovrayorg>
http://tag.povray.org/
Tim Cook wrote:
> On a related note, if you go the reductio ad absurdum mathmatica route
> and do number crunching with anything on the computer that can
> possibly do so, could you raytrace with a sound card...?
Well, I don't know how advanced the latest sound cards are, but if you
imagine a reverb algorithm that takes some geometry of a room, positions of
speakers, microphones etc., then that is starting to sound scarily like ray
tracing.