  Re: speed up povray by recompiling for graphics card  
From: Patrick Elliott
Date: 21 Jun 2006 16:14:21
Message: <MPG.1f0356624ecdf8a9989f2c@news.povray.org>
In article <web.44985e8dbda360d27d55e4a40@news.povray.org>, 
alp### [at] zubenelgenubi34spcom says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> 
> > Ah.. Yeah, that could be it. Not that one couldn't sacrifice "some"
> > precision to gain speed that way, but it wouldn't be the same result.
> 
> Essentially the GPU would be an alternative CPU. It would execute the same
> raytracing algorithms as a CPU would.
> 
Yeah. For what I want that might be usable, though it adds a layer of 
additional complication, given that I don't yet understand the math, never 
mind the code needed to use the GPU. lol

> > If you want to be picky, then yes. The point though is that between the
> > two approximations, true raytracing is closest to reality, even if you
> > have to round the edges of some things to make them, "not perfect
> > boxes".
> 
> They both capture some aspects of the way light behaves. Neither of them
> is noticeably faithful to "real light". Many raytracers do not support
> anything but the famous triangle soup. The box primitive is just another
> idealised mathematical abstraction that needs work to resemble reality.
> 
True.

> > And the overhead... Like I have told several people, what you
> > can do in about 50 lines of POV-Ray code would take a 1.5MB mesh file,
> > not including textures, on most GPU based systems.
> 
> The original suggestion was that it is possible to use a GPU as a fast
> floating point processor, and a claim was made that POVRay would benefit. So
> the GPU would run POVRay. There are lots of caveats (few of which derive
> from the scanline heritage of GPUs). I personally can't see how it would be
> worth the time, and think that the benefits are minimal compared to running
> on the CPU, but it wasn't being suggested that some sort of super-GPU would
> take POVRay on with scanline techniques, as far as I could tell.
> 
> These days I find myself using mainly meshes in POV (I don't do much/any
> abstract work). Compactness is to be prized, but only if it actually
> resembles what you want. I model planets with sphere primitives, starships
> with meshes.
> 
Well, for a lot of stuff you have no choice. Even isosurfaces have some 
flaws, like not being able to self-bound in a way that would knock off 
bits that stick out where they shouldn't be. Like the one short code 
contest entry, where if you render it at higher resolution it becomes 
obvious that bits of the "rock" are floating in space. Not impossible to 
fix, but it means adding something else that later has to be intersected 
or differenced away to make it right.
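
Roughly what I mean, as a sketch only (the noise function and the radii 
here are made-up placeholders, not taken from any real scene): clip the 
isosurface against a simple bounding shape so the stray lobes never show 
up in the first place.

  // hypothetical example: trim a noisy isosurface with a CSG intersection
  #include "functions.inc"        // provides f_noise3d()

  intersection {
    isosurface {
      function { f_noise3d(x*2, y*2, z*2) - 0.5 }  // placeholder "rock" surface
      contained_by { box { -1, 1 } }
      max_gradient 4
    }
    sphere { 0, 0.9 }             // knocks off anything outside this radius
    pigment { color rgb <0.6, 0.55, 0.5> }
  }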

I am just looking at it from the perspective that you are forced to cut 
corners. If even 20% of the objects in a game could be built from simpler 
primitives, then that is 20% of the objects you don't need to build out of 
triangles. That means more space on the game disc for the "game" and less 
bandwidth for all the stuff you have to feed to a player in an online one. 
Even if all you are producing is still images, you are still looking at a 
"huge" data spike every time they need to transfer a new model, or worse, 
an entirely new room. Real-time lighting changes, etc. have to be done in 
the game engine, not generated on the other end, because if you don't 
cache the files used to produce them, you are looking at that same data 
hit every time you enter the room. Some of that you want to do on your end 
anyway, but I was thinking of treating the "script" for the images the 
same way LPC works on a MUD: the moment any change is made, looking at the 
thing again regenerates it on the player's end, with "minimal" intrusion.

Yeah, if you are making a major motion picture and have lots of money to 
buy a mess of computers, the best available software tools and people who 
know how to use them, and the final project isn't due out for 5 years, 
great. If you want it to update more or less in real time, make changes on 
the fly, and have people getting the content over the internet (possibly 
not all on the "best" high speed connections), then from that perspective 
the current architecture is not practical. If anything it's amazing that 
things like Uru Live, or the shard projects branching off of it, work at 
all over the internet, even with high speed; same with Second Life or 
other "true" 3D worlds. Heck, the isometric ones only really work because 
they require installing patches with the new models and textures to extend 
the game. If you couldn't buy a disc and install it, maybe 50% of the 
players would never get past the first version.

Anyway, that is the standpoint I am coming from. Not "how do I make a 
photo?", but "how do I do this without shoving 4GB of data onto someone's 
computer, then cramming as much stuff as I can down the pipe anyway when 
they play the game?" The latter is why new content is not exactly a staple 
of graphical online games. lol

Now, if using the scanline system "could" approximate the primitives at a 
decent speed and still make things faster... that could help too. But it's 
still not going to do "some" things very well without a lot of kludging, 
like reflections of objects not "in view", etc.

It is an issue made all the more complicated by the fact that, if I 
wanted to really do something, most of the books that provided clear 
information on how the raytracing stuff works were last published... 15-20 
years ago. :( Now it's sort of a "turtles all the way down" world, where 
everyone "assumes" you are just going to use Blender to make a model, then 
throw it at a GPU. Quite annoying, and I am way too lazy to spend weeks 
trying to find the information online (all the while dodging sites that 
refer back to GPUs), or probably even longer trying to figure out how the 
code in POV-Ray, which I can't actually use the way I want anyway, works. 
Oh well...

But yeah, for some things GPUs are practical, if you have a) bandwidth, 
b) storage space, c) money, and d) a lot of time. If a and b are limited 
and your intent is to avoid needing a lot of c and d, especially if d is 
the one thing you want to avoid, you are screwed when using GPUs. ;)

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}

