POV-Ray : Newsgroups : povray.programming : speed up povray by recompiling for graphics card (Messages 1 to 9 of 9)
From: Blake Eggemeyer
Subject: speed up povray by recompiling for graphics card
Date: 18 Jun 2006 01:50:01
Message: <web.4494e8a94ad1379aa6bb2e320@news.povray.org>
gpgpu.com
this site shows how to make a graphics card do all sorts of stuff.
If POV-Ray were recompiled for the graphics card, it could run 4-16
times faster depending on the card. I of course am not going to attempt
anything like this any time soon; I'm no good at C++ yet.
This would also free up the CPU for anything you want while your GPU is
churning away.
have fun with this idea



From: Patrick Elliott
Subject: Re: speed up povray by recompiling for graphics card
Date: 18 Jun 2006 14:29:05
Message: <MPG.1eff4956b259aeb0989f28@news.povray.org>
In article <web.4494e8a94ad1379aa6bb2e320@news.povray.org>, 
i.l### [at] gmailcom says...
> gpgpu.com
> this site shows how to make a graphics card do all sorts of stuff.
> If POV-Ray were recompiled for the graphics card, it could run 4-16
> times faster depending on the card. I of course am not going to attempt
> anything like this any time soon; I'm no good at C++ yet.
> This would also free up the CPU for anything you want while your GPU is
> churning away.
> have fun with this idea
> 
No it couldn't, because graphics cards ***do not*** raytrace; they 
scanline. The basic fundamental architecture of the scene generation, 
the math, and even how some things are tested against each other is 
*completely* different. It would be like me telling someone who uses 
traditional wood-sculpting tools to make wooden chests that he would be 
much better off using a paper-and-resin 3D prototyping machine to make 
thousands of identical parts, then simply banging them together at the 
end to produce thousands of identical wooden chests. Yes, you could, 
with a lot of effort, a complete rethink of how the basic design needs 
to be handled for it to work at all, and a total disregard for the 
qualitative differences, manage to make lots of identical prefab 
chests, but that would make you Wal-Mart, not a professional wood 
sculptor. Using scanline cards to try to duplicate what POV-Ray does 
would make you into something like Cyan, not a raytracer, with the same 
added effort to come close to what you intend, the same complete 
rethinking of how basic things need to be done to work, and a complete 
disregard for "quality".

GPUs are not designed for, capable of, or likely to ever be able 
(without a fundamental redesign of their entire operation) to do what 
POV-Ray does. Yes, new ones can "now" produce very good approximations 
of the results POV-Ray was generating back when the Commodore Amiga was 
a top-of-the-line machine and no one had ever heard of a GPU. But they 
still cannot do the math correctly to exactly match the capabilities of 
full raytracers. And more to the point, while "some" of the stuff done 
with POV-Ray, out of necessity, uses the same triangle-patch systems 
that GPUs "only" know how to do, the vast majority of a complex 
photorealistic scene that we do with POV-Ray, while not real time, can, 
as long as humans, animals or other "patch-based" models are not in it, 
fit in 25% of the space taken up by just ***one*** human model for your 
great GPU.

That is the other thing you don't get. At some point you still have to 
do the math. You can do it the way POV-Ray does and produce "exact" 
mathematical versions, or you can use the math to generate 
"approximations" that the GPU can handle. And it will always be an 
approximation. You can't do true physical models of real-world objects, 
which are "not" made up of bunches of triangles, using a system that 
understands nothing but triangles and can't do recursive anything, 
never mind illumination, physically accurate reflections or refraction. 
Yes, it can approximate reflections, at 1-2 recursion levels, maybe..., 
but since its entire architecture is built on throwing out stuff you 
can't "see", you can't get correct reflections off objects that are not 
"right in front of your 'eyes'" in the scene. Then there is media... 
POV-Ray uses real media; GPUs use, well, I don't know what exactly, 
save that it's more like adding a layer of fuzzy film over the stuff, 
and *not* real media interactions. Again, you end up approximating the 
real world, instead of simply calculating how the real world would 
"actually" look.

Put simply, the two techniques are completely incompatible with each 
other, and GPUs are by far inferior to the real physical models used in 
raytracing. But hey, there is a company working to fit their raytracer 
onto a programmable GPU. If/when they have a viable product, it "might" 
be reasonable to reconsider. Until then, will people who favor GPUs 
please stop telling us it's easier to do the equivalent of building a 
fish out of Legos than to actually go fishing and catch one...

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Tom York
Subject: Re: speed up povray by recompiling for graphics card
Date: 20 Jun 2006 03:25:00
Message: <web.4497a173bda360d27d55e4a40@news.povray.org>
Patrick Elliott <sel### [at] rraznet> wrote:

> No it couldn't, because graphics cards ***do not*** raytrace; they
> scanline.

I think what's actually being referred to is using the GPU (in sufficiently
modern cards) as a fast vector processor that can be programmed to carry
out arbitrary (i.e. not triangular or scanliney) calculations. I would
actually read the headlines on that site (http://www.gpgpu.com) to see what
is being talked about. GPU pipelines are still single-precision though, and
POVRay uses double-precision maths throughout. It would be a massive job in
any case, and easier to buy more CPUs.
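Tom's precision point is easy to demonstrate. The sketch below is my own illustration, not from the thread: it simulates a single-precision pipeline by rounding values through IEEE-754 float32 with Python's `struct` module, and shows the kind of cancellation (as in the `b*b - 4*a*c` discriminant of a near-tangent ray) that survives in double precision but is lost in single.

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Catastrophic cancellation: subtracting two nearly equal large numbers,
# as happens in a ray-sphere intersection discriminant for a grazing ray.
big = 1.0e8

double_result = (big + 1.0) - big                        # 53-bit mantissa
single_result = to_f32(to_f32(big + 1.0) - to_f32(big))  # 24-bit mantissa

print(double_result)  # 1.0 -- the difference survives
print(single_result)  # 0.0 -- float32 cannot even represent 1e8 + 1
```

The same calculation that cleanly resolves a hit in double precision rounds to "no difference at all" in single, which is why simply dropping POV-Ray's math onto a single-precision GPU would not give the same results.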

> That is the other thing you don't get. At some point you still have to
> do the math. You can do it the way POV-Ray does and produce "exact"
> mathematical versions, or you can use the math to generate
> "approximations" that the GPU can handle. And it will always be an
> approximation. You can't do true physical models of real-world objects,
> which are "not" made up of bunches of triangles

I don't agree. Raytracing is absolutely an approximation (e.g. no forward
light without photon maps, ignores wave physics of light, etc). Also, the
use of triangles is a separate issue (you can clearly raytrace triangles).
I would say that real-world objects are not made up of any sort of
primitive that POVRay or other renderers use, triangles included.
Certainly, I've never seen a perfect box{} in the real world (checks
furniture).

Tom



From: Patrick Elliott
Subject: Re: speed up povray by recompiling for graphics card
Date: 20 Jun 2006 15:37:41
Message: <MPG.1f01fc2a9ad23536989f2a@news.povray.org>
In article <web.4497a173bda360d27d55e4a40@news.povray.org>, 
alp### [at] zubenelgenubi34spcom says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> 
> > No it couldn't, because graphics cards ***do not*** raytrace; they
> > scanline.
> 
> I think what's actually being referred to is using the GPU (in sufficiently
> modern cards) as a fast vector processor that can be programmed to carry
> out arbitrary (i.e. not triangular or scanliney) calculations. I would
> actually read the headlines on that site (http://www.gpgpu.com) to see what
> is being talked about. GPU pipelines are still single-precision though, and
> POVRay uses double-precision maths throughout. It would be a massive job in
> any case, and easier to buy more CPUs.
> 
Ah... Yeah, that could be it. Not that one couldn't sacrifice "some" 
precision to gain speed that way, but it wouldn't be the same result. 
Though... it does provide an interesting solution if you want to create 
a game-based renderer that uses POV-Ray-like functions, but not quite 
the same level of accuracy. Not sure how much it would truly improve 
the speed though. If it could let you do refraction and reflection in a 
few seconds on a complex scene, instead of minutes, it might be worth 
it for an idea I had, but I wouldn't expect real time from it. lol

> > That is the other thing you don't get. At some point you still have to
> > do the math. You can do it the way POV-Ray does and produce "exact"
> > mathematical versions, or you can use the math to generate
> > "approximations" that the GPU can handle. And it will always be an
> > approximation. You can't do true physical models of real world objects,
> > which are "not" made up of bunches of triangles
> 
> I don't agree. Raytracing is absolutely an approximation (e.g. no forward
> light without photon maps, ignores wave physics of light, etc). Also, the
> use of triangles is a separate issue (you can clearly raytrace triangles).
> I would say that real-world objects are not made up of any sort of
> primitive that POVRay or other renderers use, triangles included.
> Certainly, I've never seen a perfect box{} in the real world (checks
> furniture).
> 
If you want to be picky, then yes. The point, though, is that between 
the two approximations, true raytracing is closest to reality, even if 
you have to round the edges of some things to make them "not perfect 
boxes". And the overhead... Like I have told several people, what you 
can do in about 50 lines of POV-Ray code would take a 1.5MB mesh file, 
not including textures, on most GPU-based systems. From a purely 
practical standpoint, GPUs are not practical for photorealism, even if 
they are currently faster at producing results.
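Patrick's size comparison can be ballparked. The sketch below is my own back-of-the-envelope arithmetic with illustrative (not measured) parameters: one sphere stored as an analytic primitive (center plus radius, as a raytracer keeps it) versus the lat/long triangle mesh a scanline GPU would need.

```python
# Back-of-the-envelope: analytic primitive vs. triangle mesh for one sphere.
# Byte counts are illustrative assumptions, not measurements of any engine.

def primitive_bytes() -> int:
    # center (x, y, z) + radius, stored as four 8-byte doubles
    return 4 * 8

def mesh_bytes(stacks: int, slices: int) -> int:
    vertices = (stacks + 1) * (slices + 1)
    triangles = 2 * stacks * slices
    per_vertex = 3 * 4 + 3 * 4   # float32 position + float32 normal
    per_triangle = 3 * 4         # three 32-bit vertex indices
    return vertices * per_vertex + triangles * per_triangle

print(primitive_bytes())     # 32 bytes, and the surface is exact
print(mesh_bytes(64, 128))   # 397848 bytes (~0.4 MB), still faceted up close
```

A modest 64x128 tessellation is already four orders of magnitude larger than the primitive, which is the storage overhead Patrick is pointing at, independent of textures.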

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Tom York
Subject: Re: speed up povray by recompiling for graphics card
Date: 20 Jun 2006 16:55:00
Message: <web.44985e8dbda360d27d55e4a40@news.povray.org>
Patrick Elliott <sel### [at] rraznet> wrote:

> Ah.. Yeah, that could be it. Not that one couldn't sacrifice "some"
> precision to gain speed that way, but it wouldn't be the same result.

Essentially the GPU would be an alternative CPU. It would execute the same
raytracing algorithms as a CPU would.

> If you want to be picky, then yes. The point though is that between the
> two approximations, true raytracing is closest to reality, even if you
> have to round the edges of some things to make them, "not perfect
> boxes".

They both capture some aspects of the way light behaves. Neither of them is
noticeably faithful to "real light". Many raytracers do not support anything
but the famous triangle soup. The box primitive is just another idealised
mathematical abstraction that needs work to resemble reality.

> And the overhead... Like I have told several people, what you
> can do in about 50 lines of POV-Ray code would take a 1.5MB mesh file,
> not including textures, on most GPU based systems.

The original suggestion was that it is possible to use a GPU as a fast
floating point processor, and a claim was made that POVRay would benefit. So
the GPU would run POVRay. There are lots of caveats (few of which derive
from the scanline heritage of GPUs). I personally can't see how it would be
worth the time, and think that the benefits are minimal compared to running
on the CPU, but it wasn't being suggested that some sort of super-GPU would
take POVRay on with scanline techniques, as far as I could tell.

These days I find myself using mainly meshes in POV (I don't do much/any
abstract work). Compactness is to be prized, but only if it actually
resembles what you want. I model planets with sphere primitives, starships
with meshes.

> From a purely practical standpoint, GPUs are not practical for photorealism,
> even if they are currently faster at producing results.

I don't know about that - scanline methods (grab-bag that they are) have
been the only commercially viable (extremely practical, no?) route to
photorealism for some time, and GPUs are probably the fastest
implementation of that technique available "by weight". They are only
getting more capable as time goes on.

I think it's clear that scanline will have to lose out to raytracing and its
derivatives in the end, but if it's good enough for ILM I think it's
probably not fair to call it impractical. It's sometimes said that pure
raytracing produces images that are too clean to be photorealistic. That has
some basis in fact, but doesn't reflect (yes) the potential of the
technique any more than, say, Quake does of scanlining.

Tom



From: Patrick Elliott
Subject: Re: speed up povray by recompiling for graphics card
Date: 21 Jun 2006 16:14:21
Message: <MPG.1f0356624ecdf8a9989f2c@news.povray.org>
In article <web.44985e8dbda360d27d55e4a40@news.povray.org>, 
alp### [at] zubenelgenubi34spcom says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> 
> > Ah.. Yeah, that could be it. Not that one couldn't sacrifice "some"
> > precision to gain speed that way, but it wouldn't be the same result.
> 
> Essentially the GPU would be an alternative CPU. It would execute the same
> raytracing algorithms as a CPU would.
> 
Yeah. For what I want that might be usable, though it only adds a layer 
of additional complication, given I don't yet understand the math, never 
mind the code to use the GPU. lol

> > If you want to be picky, then yes. The point though is that between the
> > two approximations, true raytracing is closest to reality, even if you
> > have to round the edges of some things to make them, "not perfect
> > boxes".
> 
> They both capture some aspects of the way light behaves. Neither of them is
> noticeably faithful to "real light". Many raytracers do not support anything
> but the famous triangle soup. The box primitive is just another idealised
> mathematical abstraction that needs work to resemble reality.
> 
True.

> > And the overhead... Like I have told several people, what you
> > can do in about 50 lines of POV-Ray code would take a 1.5MB mesh file,
> > not including textures, on most GPU based systems.
> 
> The original suggestion was that it is possible to use a GPU as a fast
> floating point processor, and a claim was made that POVRay would benefit. So
> the GPU would run POVRay. There are lots of caveats (few of which derive
> from the scanline heritage of GPUs). I personally can't see how it would be
> worth the time, and think that the benefits are minimal compared to running
> on the CPU, but it wasn't being suggested that some sort of super-GPU would
> take POVRay on with scanline techniques, as far as I could tell.
> 
> These days I find myself using mainly meshes in POV (I don't do much/any
> abstract work). Compactness is to be prized, but only if it actually
> resembles what you want. I model planets with sphere primitives, starships
> with meshes.
> 
Well, for a lot of stuff you have no choice. Even isosurfaces have some 
flaws, like not being able to self-bound in a way that would knock off 
bits that stick out where they shouldn't be. Like the one short-code 
contest entry, where if you render it at higher res it becomes obvious 
that bits of the "rock" are floating in space. Not impossible to fix, 
but it adds something else that has to be differenced out later to make 
it right. I am just looking at it from the perspective that you are 
forced to cut corners. If even 20% of something in a game was possible 
using more simplistic primitives, then that is 20% of the objects you 
don't need to build out of triangles. This means more space on the game 
disc for the "game" and less bandwidth for all the stuff you have to 
feed to a player for an online one. Even if all you are producing is 
still images, you are still looking at a "huge" data spike every time 
they need to transfer a new model, or worse, an entirely new room. 
Real-time lighting changes, etc. have to be done in the game engine, 
not generated on the other end, because if you don't cache the files to 
produce it, you are looking at that same data hit every time you enter 
the room. Some of that you want to do on your end anyway, but I was 
looking in terms of treating the "script" for the images the same way 
LPC works on a MUD. The moment any change is made, looking at the thing 
again generates the changes on the player end, with "minimal" 
intrusion.

Yeah, if you are making a major motion picture and have lots of money 
to buy a mess of computers, the best available software tools, and 
people who know how to use them, and you are planning on having the 
final project come out 5 years from now, great. If you want it to 
update more or less in real time, make changes on the fly, and people 
are going to be getting the content over the internet (possibly not all 
on the "best" high-speed connections), then from that perspective the 
current architecture is not practical. If anything it's amazing that 
things like Uru Live, or the shard projects branching off of it, work 
at all over the internet, even with high speed; same with Second Life 
or other "true" 3D worlds. Heck, the isometric ones only really work 
because they require installing patches with the new models and 
textures to extend the game. If you couldn't buy a disk and install it, 
maybe 50% of the players would never go past the first version.

Anyway, that is the standpoint I am coming from. Not "how do I make a 
photo?", but "how do I do this without shoving 4GB of data onto 
someone's computer, then cramming as much stuff as I can down the pipe 
anyway when they play the game?" The latter is why new content is not 
exactly a staple for graphical online games. lol

Now, if using the scanline system "could" allow approximation of the 
primitives at a decent speed and still make things faster... that could 
help too. But it's still not going to do "some" things very well 
without a lot of kludging, like reflecting objects not "in view", etc.

It is an issue made all the more complicated by the fact that, if I 
wanted to really do something, most of the books that provided clear 
information on how most of the raytrace stuff works were last 
published... 15-20 years ago. :( Now it's sort of a "turtles all the 
way down" world, where everyone "assumes" you are just going to use 
Blender to make a model, then throw it at a GPU. Quite annoying, and I 
am way too lazy to spend weeks trying to find the information online 
(all the while dodging sites that refer back to GPUs), or probably even 
longer trying to figure out how the code in POV-Ray, which I can't 
actually use the way I want anyway, works. Oh well...

But yeah, for some things GPUs are practical. If you have a) bandwidth, 
b) storage space, c) money and d) a lot of time. If a and b are limited 
and your intent is to avoid a lot of c and d, especially if d is the one 
thing you want to avoid needing, you are screwed when using GPUs. ;)

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Tom York
Subject: Re: speed up povray by recompiling for graphics card
Date: 21 Jun 2006 19:05:01
Message: <web.4499d00dbda360d27d55e4a40@news.povray.org>
Patrick Elliott <sel### [at] rraznet> wrote:
> In article <web.44985e8dbda360d27d55e4a40@news.povray.org>,
> Yeah. For what I want that might be usable, though it only adds a layer
> of additional complication, given I don't yet understand the math, never
> mind the code to use the GPU. lol

Hm, well, that webpage is probably a place to start :)

> Well, a lot of stuff you have no choice. Even Isosurfaces have some
> flaws, like not being able to self bound in a way that would knock off
> bits that stick out where they shouldn't be.

Isosurfaces are one of those things that I've never had the patience for,
although from some perspectives they're a way to compress enormous amounts
of data.

> If even 20% of something in a game
> was possible using more simplistic primitives, then that is 20% of the
> objects you don't need to build out of triangles. This means more space
> on the game disc for the "game" and less bandwidth for all the stuff you
> have to feed to a player for an online one.

I don't write games, but I hear from friends who have that bottlenecks in a
combined CPU/GPU rendering system can be in surprising places, like in
switching materials, rather more than actually getting those things onto
the card's memory in the first place.

Looking at the sort of models that games involve, about the most likely
substitute as a primitive would be variations on bicubic patches - I can't
imagine most primitives being worth the effort. I think the point is very
weak as far as storage goes (storage is far cheaper than CPU, in my
opinion), but bandwidth is, yes, expensive.

The point about bandwidth has been examined in various interesting ways. I
don't know if you've seen the .kkreiger first-person shoot-em-up (I think
that's its name). It's about 100kb in size and uses clever compression
techniques (sort of procedural, I suppose) to pack in what's alleged to be
several gig of data. It spends about 2 minutes uncompressing chunks of that
when you load it up.

There was also a PC game called Sacrifice which was alleged to store
special-effects meshes as procedures. One use of what I suppose you could
call procedural geometry is that you could perhaps tie it to machine speed -
faster machines take the procedural description and tessellate it into more
renderable primitives (triangles) than slower machines, the appropriate
level being determined by the user at set-up time.
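The machine-speed idea above can be sketched in a few lines. Below, a hypothetical circle "procedure" (just a radius) is expanded into a triangle fan whose density comes from a quality setting; nothing here is taken from Sacrifice or .kkreiger, and the mapping from setting to segment count is an invented example.

```python
import math

def tessellate_circle(radius: float, quality: int):
    """Expand a procedural circle into a triangle fan.

    `quality` stands in for a user's set-up-time speed setting:
    faster machines get more segments, hence smoother silhouettes,
    from the exact same compact procedural description.
    """
    segments = 8 * quality  # hypothetical mapping from setting to density
    verts = [(radius * math.cos(2 * math.pi * i / segments),
              radius * math.sin(2 * math.pi * i / segments))
             for i in range(segments)]
    # fan: (center, v[i], v[i+1]) for each rim edge
    return [((0.0, 0.0), verts[i], verts[(i + 1) % segments])
            for i in range(segments)]

low = tessellate_circle(1.0, quality=1)    # 8 triangles on a slow machine
high = tessellate_circle(1.0, quality=8)   # 64 triangles on a fast one
print(len(low), len(high))
```

The procedural description shipped over the wire is a constant size; only the tessellated output scales with the local machine, which is the bandwidth win being discussed.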

> Even if all you are
> producing is still images, you are still looking at a "huge" data spike
> every time they need to transfer a new model, or worse, an entirely new
> room.

I don't think that's particularly fatal (I assume you're talking about some
sort of still renderer here). Most renderers for that, scanline, raytracers
or whatever, have a big set up hit. POVRay has the parsing phase and image
texture loading, for example. Specifically in regard to the GPU, well, it's
typically got the widest bandwidth of all in a typical PC, IIRC. It might
even be faster to send data to a GPU renderer than it would to fetch it
from main memory to the CPU, I'm not sure.

> Real time lighting changes, etc. have to be done in the game
> engine, not generated on the other end, because if you don't cache the
> files to produce it, you are looking at that same data hit every time
> you enter the room.

Again, I'm not sure that this is as significant as it appears.

> Yeah, if you are making a major motion picture and have lots of money to
> buy a mess of computers, the best available software tools and people
> that know how to use them, and you are planning on having the final
> project come up 5 years from now, great.

Well, let's take the opposite end. Logos for adverts and TV special effects
shots (like in scifi series) are typically done on a relative shoestring
(i.e. less than it would cost to build a physical model) and need to be
done last week. They don't raytrace either.

They will eventually, though.

>  If you want it to update more
> or less real time, make changes on the fly and people are going to be
> getting the content over the internet (possibly not all on the "best"
> high speed connections), then from that perspective the current
> architecture is not practical.

Well, broadband connections are spreading very rapidly, I think. My only
experience of low-bandwidth "over-the-network" 3D is VRML-style interactive
stuff, and to be honest, back when I saw it my impression was that it was
both academic and a bit pointless at the sort of quality you'd get on a
low-bandwidth connection. One major producer of PC games (and many
shareware producers) has moved to distributing their products over the
internet as a first, rather than second choice. The problems the majority
of internet-based games face are more to do with latency and reliability
than raw bandwidth, I think.

> If anything it's amazing some things like
> Uru Live or the shard projects branching off of it, work at all over the
> internet, even with high speed, same with Second Life or other "true" 3D
> worlds. Heck, the isometric ones only really work because they require
> installing patches with the new models and textures to extend the game.
> If you couldn't buy a disk and install it, maybe 50% of the players
> would never go past the first version.

I'm not sure the problem is with geometry. Game geometry, even mesh
geometry, is pretty lightweight. It's all those textures that make the
difference, and I don't think changing the type of primitive you use gets
you away from the need to detail surfaces. Procedurals are one alternative
but they cost in CPU time.

> Anyway, that is the standpoint I am coming from. Not, "how do I make a
> photo?", but, "how do I do this without shoving 4GB of data onto
> someone's computer, then cramming as much stuff as I can down the pipe
> anyway when they play the game?" The later is why new content is not
> exactly a staple for graphical online games. lol

I think it's becoming more important. I seem to be regularly updated with
content for Half Life and derivatives, and I suppose other games will only
follow suit. They've been distributing upgrades and bug-fix patches this
way for years. Also, you could use those procedural compression tricks
again; uncompress the data into meshes on the user's computer after
downloading.

>
> Now, if using the scanline system "could" allow approximation of the
> primitives at a decent speed and still make things faster.. That could
> help too. But its still not going to be doing "some" things very well
> without a lot of cludging, like reflecting objects not "in view", etc.

I don't know how important such effects are. You can render a reflective
sphere primitive with a raytracer, be very memory-efficient, and
incidentally get a perfect reflection, but it may well look less "real"
than a few hundred K of relatively memory-inefficient mesh with a few MB of
displacement map and textures on it. Reflection's an interesting one;
blurred reflection is cheap in scanline, and can look more realistic to the
eye than sharp raytraced reflections, even though the latter is accurate and
the former isn't.
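For reference, the "sharp" reflection a raytracer computes is just the mirror formula r = d - 2(d.n)n. The sketch below is a generic illustration (not POV-Ray's code): the exact mirror direction, plus the jittered-ray averaging a raytracer must do to get the blurred/glossy look that scanline fakes cheaply. The `roughness` jitter scheme is an invented simplification.

```python
import random

def reflect(d, n):
    """Mirror reflection r = d - 2(d.n)n, for incoming direction d and
    unit surface normal n. This is the 'sharp' raytraced reflection."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def glossy_reflect(d, n, roughness, samples=16):
    """Blurred reflection the raytraced way: average many perturbed mirror
    directions -- `samples` extra rays per hit, hence the cost."""
    base = reflect(d, n)
    return [tuple(c + random.uniform(-roughness, roughness) for c in base)
            for _ in range(samples)]

# A ray heading straight down onto a floor bounces straight up.
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```

One sharp reflection costs one secondary ray; a glossy one multiplies that by the sample count, which is why blurred reflections are the expensive case for a raytracer and the cheap case for a scanline approximation.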

> It is an issue made all the more complicated by the fact that, if I
> wanted to really do something, most of the books that provided clear
> information on how most of the raytrace stuff works were last
> published... 15-20 years ago. :( Now it's sort of a "turtles all the
> way down" world, where everyone "assumes" you are just going to use
> Blender to make a model, then throw it at a GPU. Quite annoying, and I
> am way too lazy to spend weeks trying to find the information online
> (all the while dodging sites that refer back to GPUs), or probably even
> longer trying to figure out how the code in POV-Ray, which I can't
> actually use the way I want anyway, works. Oh well...

I'm not sure I understand. Are you talking about writing a graphics engine
for a game using raytracing instead of scanline methods, or a
high-quality/non-game rendering engine?

One thing I'd recommend is to check out the work behind

http://www.openrt.de/

I remember the papers behind it being spectacular, frankly.

> But yeah, for some things GPUs are practical. If you have a) bandwidth,
> b) storage space, c) money and d) a lot of time. If a and b are limited
> and your intent is to avoid a lot of c and d, especially if d is the one
> thing you want to avoid needing, you are screwed when using GPUs. ;)

I think most of these are answered above. Certainly I'd argue that
raytracing is far more time-intensive for game applications. But I don't
understand what application you have in mind.

Tom



From: m1j
Subject: Re: speed up povray by recompiling for graphics card
Date: 21 Jun 2006 19:35:01
Message: <web.4499d6bdbda360d28f25b9020@news.povray.org>
"Blake Eggemeyer" <i.l### [at] gmailcom> wrote:
> gpgpu.com
> this site shows how to make a graphics card do all sorts of stuff.
> If POV-Ray were recompiled for the graphics card, it could run 4-16
> times faster depending on the card. I of course am not going to attempt
> anything like this any time soon; I'm no good at C++ yet.
> This would also free up the CPU for anything you want while your GPU is
> churning away.
> have fun with this idea

http://www.nvidia.com/page/gelato.html

Here you go. And it is free. Limited to their cards, though. And yes, it is
a raytracer with many primitives similar to POV-Ray's. An isosurface model
could be built into the shader for an object as well. I have seen it on
Mental Ray and RenderMan, so it may have already been done for Gelato.

I believe it would take way too much changing to make POV-Ray capable of
using the GPU and still run on so many different OSes.



From: Patrick Elliott
Subject: Re: speed up povray by recompiling for graphics card
Date: 22 Jun 2006 17:31:29
Message: <MPG.1f04ba1a8fd0dd5c989f2d@news.povray.org>
In article <web.4499d00dbda360d27d55e4a40@news.povray.org>, 
alp### [at] zubenelgenubi34spcom says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> > Even if all you are
> > producing is still images, you are still looking at a "huge" data spike
> > every time they need to transfer a new model, or worse, an entirely new
> > room.
> 
> I don't think that's particularly fatal (I assume you're talking about some
> sort of still renderer here). Most renderers for that, scanline, raytracers
> or whatever, have a big set up hit. POVRay has the parsing phase and image
> texture loading, for example. Specifically in regard to the GPU, well, it's
> typically got the widest bandwidth of all in a typical PC, IIRC. It might
> even be faster to send data to a GPU renderer than it would to fetch it
> from main memory to the CPU, I'm not sure.
> 
Well, I specifically meant internet bandwidth. True, the card bandwidth 
is not the bottleneck in such cases; it's sending the data from the 
server "to" the player's machine, especially if the game environment is 
an extended text-based one, where I don't expect everyone to 
necessarily have broadband.

> > Real time lighting changes, etc. have to be done in the game
> > engine, not generated on the other end, because if you don't cache the
> > files to produce it, you are looking at that same data hit every time
> > you enter the room.
> 
> Again, I'm not sure that this is as significant as it appears.
> 
You're thinking in terms of the "wrong" bandwidth here. ;)

> > Yeah, if you are making a major motion picture and have lots of money to
> > buy a mess of computers, the best available software tools and people
> > that know how to use them, and you are planning on having the final
> > project come up 5 years from now, great.
> 
> Well, let's take the opposite end. Logos for adverts and TV special effects
> shots (like in scifi series) are typically done on a relative shoestring
> (i.e. less than it would cost to build a physical model) and need to be
> done last week. They don't raytrace either.
> 
> They will eventually, though.
> 
They don't raytrace because a) there are a lot more Photoshop experts 
than 3D people, and b) for most of that sort of thing you can find 
cheap alternatives that don't raytrace. Sadly, that is one of the 
problems imho: you can find ten billion applications of GPUs, very few 
of raytracing. It's sort of the same situation as Linux vs. Windows. At 
one time Windows was "way" better than all the alternatives, and part 
of that was how complicated using those alternatives was. That didn't 
mean the alternatives were not on some level better, just at the time 
less convenient. Same thing here, only without the anticompetitive BS 
from the technology's manufacturer(s) to help it happen. But yeah, it 
is inevitable that cards will start to do the real thing, even if in 
some cases the result starts out as a hybrid.

> >  If you want it to update more
> > or less real time, make changes on the fly and people are going to be
> > getting the content over the internet (possibly not all on the "best"
> > high speed connections), then from that perspective the current
> > architecture is not practical.
> 
> Well, broadband connections are spreading very rapidly, I think. My only
> experience of low-bandwidth "over-the-network" 3D is VRML-style
> interactive stuff, and to be honest, back when I saw it my impression
> was that it was both academic and a bit pointless at the sort of quality
> you'd get on a low-bandwidth connection. One major producer of PC games
> (and many shareware producers) has moved to distributing their products
> over the internet as a first, rather than second, choice. The problems
> the majority of internet-based games face are more to do with latency
> and reliability than raw bandwidth, I think.
> 
Not an issue with what I plan to do. It will be still images, but each 
image would be generated on the fly as needed, including, presumably if 
possible, fog, media, etc.

> > If anything it's amazing some things, like Uru Live or the shard
> > projects branching off of it, work at all over the internet, even with
> > high speed; same with Second Life or other "true" 3D worlds. Heck, the
> > isometric ones only really work because they require installing
> > patches with the new models and textures to extend the game. If you
> > couldn't buy a disk and install it, maybe 50% of the players would
> > never go past the first version.
> 
> I'm not sure the problem is with geometry. Game geometry, even mesh
> geometry, is pretty lightweight. It's all those textures that make the
> difference, and I don't think changing the type of primitive you use
> gets you away from the need to detail surfaces. Procedurals are one
> alternative but they cost in CPU time.
> 
True. That is why one solution is to avoid "premade" textures. I 
understand newer cards support far better algorithmic textures, but then 
you also have to deal with some "minimum" video hardware. My intent is 
to avoid that if at all possible, so that, like the client itself I 
planned to add it as an extension to, it will hopefully run on the 
widest range of systems possible.
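To make the "no premade textures" idea concrete: a purely algorithmic texture is just a function from coordinates to a color value, so nothing but the formula ever crosses the wire. Here is a minimal sketch in Python (the marble-like formula and parameter names are illustrative, not any particular card's or POV-Ray's implementation):

```python
import math

def procedural_marble(x, y, scale=0.1, turbulence=5.0):
    """Toy procedural 'marble' shade in [0, 1], computed from the
    coordinates alone -- no stored image data has to be shipped."""
    v = math.sin(scale * x + turbulence * math.sin(scale * y))
    return (v + 1.0) / 2.0

# Any pixel can be evaluated independently, at any resolution:
shade = procedural_marble(10, 20)
assert 0.0 <= shade <= 1.0
```

Because the function is deterministic, server and client always agree on what the surface looks like, and resolution is limited only by how finely you sample it.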

> > Anyway, that is the standpoint I am coming from. Not, "how do I make a
> > photo?", but, "how do I do this without shoving 4GB of data onto
> > someone's computer, then cramming as much stuff as I can down the pipe
> > anyway when they play the game?" The latter is why new content is not
> > exactly a staple for graphical online games. lol
> 
> I think it's becoming more important. I seem to be regularly updated
> with content for Half Life and derivatives, and I suppose other games
> will only follow suit. They've been distributing upgrades and bug-fix
> patches this way for years. Also, you could use those procedural
> compression tricks again; uncompress the data into meshes on the user's
> computer after downloading.
> 
True.

> > Now, if using the scanline system "could" allow approximation of the
> > primitives at a decent speed and still make things faster... That
> > could help too. But it's still not going to be doing "some" things
> > very well without a lot of kludging, like reflecting objects not "in
> > view", etc.
> 
> I don't know how important such effects are. You can render a reflective
> sphere primitive with a raytracer and be very memory-efficient and
> incidentally a perfect reflection, but it may well look less "real" than
> a few hundred k of relatively memory-inefficient mesh with a few MB of
> displacement map and textures on it. Reflection's an interesting one;
> blurred reflection is cheap in scanline, and can look more realistic to
> the eye than sharp raytraced reflections, even though the latter is
> accurate and the former isn't.
> 
Hmm. Yeah, but in my case the added texture isn't practical. It needs to 
be as procedural as possible, both to speed up the transfer from the 
server, and to make sure that the scene "can" be changed from the 
purely raw text. And yeah, compression would be used, on the entire SDL 
file.
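Compressing a whole text scene file is cheap to demonstrate. A sketch with Python's standard zlib (the embedded scene text is purely illustrative; real SDL would come from disk or the game server):

```python
import zlib

# A tiny text "scene description" standing in for a real SDL file;
# the repetition mimics the redundancy typical of scene source.
sdl_text = b"""
#declare Well = union { cylinder { <0,0,0>, <0,2,0>, 1 } }
object { Well texture { pigment { color rgb <0.5,0.5,0.5> } } }
""" * 50

compressed = zlib.compress(sdl_text, level=9)

# Highly redundant scene text shrinks dramatically, and the round
# trip is lossless, so the client reconstructs the exact SDL source.
assert len(compressed) < len(sdl_text)
assert zlib.decompress(compressed) == sdl_text
```

Since SDL is plain text with lots of repeated keywords and identifiers, general-purpose compression like this typically cuts transfer size by an order of magnitude without any format changes on either end.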

> > It is an issue made all the more complicated by the fact that if I
> > wanted to really do something, most of the books that provided clear
> > information on how most of the raytrace stuff works were last
> > published... 15-20 years ago. :( Now, it's sort of a "turtles all the
> > way down" world, where everyone "assumes" you are just going to use
> > Blender to make a model, then throw it at a GPU. Quite annoying, and I
> > am way too lazy to spend weeks trying to find the information online
> > (all the while dodging sites that refer back to GPUs) or probably even
> > longer trying to figure out how the code in POV-Ray, which I can't
> > actually use anyway the way I want, works. Oh well...
> 
> I'm not sure I understand. Are you talking about writing a graphics
> engine for a game using raytracing instead of scanline methods, or a
> high-quality/non-game rendering engine?
> 
Umm. Sort of in between. The idea is to make it a plugin for clients 
that can support them. Right now the only methods that exist are sort of 
embedded HTML tags that retrieve and display static images. I would like 
to be able to send across a link to a render file, sort of like:

<render "http://www.myserver.com/images/old_house.sdl">

And maybe even have some limited animation capability, like generating 
the first image, then rendering the rest in the background and either 
running it all once, or looping it. So, no, it's not really a "game 
engine" in the sense of an FPS, but it's not something that can "really" 
be done unobtrusively, or quite the way I want, if I were to simply try 
to use POV-Ray through a loadable ActiveX control.
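The client-side half of that tag idea is straightforward to sketch: scan incoming text for the hypothetical `<render "...">` markup and pull out the scene-file URLs to fetch and render. The tag name and URL follow the example above; everything else here is an assumption about how such a plugin might parse it:

```python
import re

# Matches the hypothetical markup: <render "URL">
RENDER_TAG = re.compile(r'<render\s+"([^"]+)">')

def extract_render_urls(text):
    """Return the scene-file URLs a client plugin would fetch,
    render with a raytracer, and display in place of the tag."""
    return RENDER_TAG.findall(text)

message = 'Look: <render "http://www.myserver.com/images/old_house.sdl">'
urls = extract_render_urls(message)
# urls == ["http://www.myserver.com/images/old_house.sdl"]
```

From there the plugin would download the SDL file, hand it to the renderer in the background, and swap the finished image into the client window, so the text stream itself stays tiny.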

> One thing I'd recommend is to check out the work behind
> 
> http://www.openrt.de/
> 
> I remember the papers behind it being spectacular, frankly.
> 
Those are the same people developing a true raytrace card, I think. I 
will definitely be looking there, but first I need to overcome two 
frustrating problems I am having. The first is figuring out the 
"correct" way to bridge events, so I can convince the developer of the 
client I use to implement something less nuts than the current UDP-based 
system. At the moment, you can instance objects in its various supported 
scripts, but not handle their events. Of course, since 90% of the world 
either a) uses only early-bound events or b) uses some existing 
application that supports late-bound ones already, like IE or WScript, 
just getting people to understand that you are "not" working within that 
environment in the first place is like trying to explain physics using 
Egyptian hieroglyphs... You first have to get past the "assumption" that 
you just don't know about all the wonderfully useless existing 
implementations that won't work for your project. lol

I did finally get a response on it. It amounted to, "Read 'Essential 
COM' by Don Box, that is how I figured it out, but since I developed my 
solution to the problem for a company, I can't tell you what it was." 
Gosh! Thanks. :p

The other issue is that I have been trying to make it possible to use 
design mode to "edit" a layout for a window, so that people can set up 
the layout of forms they want, and place "any" control they want on 
them. The closest I have gotten on that one is finding a truly obscure 
command in ATL that "may" allow it, but even "both" books on OLE and COM 
I now have make only the standard and quite useless mention of 
"reading" the property, and that it should be possible to "set" it for 
containers as well. Umm, ok... That would be fine, except that 90% of 
the existing documentation states time after time that in most cases and 
most languages you "can't" set it at all, so how you do it, and when it 
is possible, are really obscure. Almost intentionally obscure... lol

The rendering part would then, if the above could be done, be either a 
control included with the ActiveX window that lets you set design mode 
and use it like an IDE form designer, or a control that can be 
instanced and added to the same, if the window were built into the 
client. I know... a lot of stuff for what is basically a text client, 
but some of it "needs" to be done if you later want to develop other 
similar applications too, which is entirely possible. ;)

> > But yeah, for some things GPUs are practical. If you have a)
> > bandwidth, b) storage space, c) money and d) a lot of time. If a and b
> > are limited and your intent is to avoid a lot of c and d, especially
> > if d is the one thing you want to avoid needing, you are screwed when
> > using GPUs. ;)
> 
> I think most of these are answered above. Certainly I'd argue that
> raytracing is far more time-intensive for game applications. But I don't
> understand what application you have in mind.
> 
Well, hopefully I have given a bit more of an idea of what I intended 
with it. Most engines are inflexible by nature. It is probably not 
possible to make isometric or full FPS-style systems that are completely 
flexible. For that, text-based systems like MUDs still rule, though 
stuff like SL starts to change that. But SL is "still" more of a chat 
room than a "MUD"; it doesn't have the elements that make for the hack 
and slash that most games supply on some level. So, I am aiming for the 
middle ground. Good graphics for still images or limited animation, 
based on easily edited text files (and thus perhaps not even, for 
purposes of keeping it simple, allowing meshes), that can be altered as 
easily as the LPC code used to build the game itself. Right now, the 
solutions either involve going all the way and using Blender, or just 
using very simple and limited still images that can't adjust to 
differences in lighting, weather, fog, or anything else that might be 
part of the game world. I want the deep well with a glint of gold at the 
bottom to look different if it's morning, midday cloudy, or freaking 
pitch black and in torch light. Existing methods would require something 
like 30 images, each pre-made for those conditions, and not able to 
adjust to how many people are there, how many are using lights, or 
anything else that might alter the "expected" results that were used to 
produce those images. Or, as I said, you go all the way and end up with 
2 GB of textures cached on your system and a mess of meshes... which is 
quite ridiculous if you're not playing something like Morrowind, but 
what is basically "text".
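The "one scene, many conditions" point can be sketched in a few lines: instead of 30 pre-made images, the server re-parameterizes a single text scene from the current game state before handing it to the renderer. The SDL fragment, variable names, and the day/night curve below are all illustrative assumptions, not a real game's logic:

```python
# Text scene template; {daylight} and {torches} are filled in from
# the game state, so the same file covers morning, night, torchlight...
SCENE_TEMPLATE = """
#declare DayLight = {daylight};
#declare TorchCount = {torches};
light_source {{ <0, 100, 0> color rgb DayLight }}
"""

def build_scene(hour, torches):
    """Render-ready scene text for a given in-game hour and torch count."""
    # Crude day/night curve: brightest at noon, fully dark at midnight.
    daylight = max(0.0, 1.0 - abs(hour - 12) / 12.0)
    return SCENE_TEMPLATE.format(daylight=round(daylight, 3),
                                 torches=torches)

noon = build_scene(12, 0)      # contains "DayLight = 1.0"
midnight = build_scene(0, 2)   # contains "DayLight = 0.0"
```

Because the output is plain text, it compresses well for transfer and can be tweaked by anyone who can edit the template, which is exactly the "as maintainable as the game script" property described above.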

Basically, I want something as easy to maintain, sans talent of course, 
as the script used to run the game. And with something like POV-Ray's 
SDL, time can make up for talent, especially if you can make prefab 
includable objects to use, which still take up less room than a mesh and 
the graphics-based textures.

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}


