POV-Ray : Newsgroups : povray.programming : speed up povray by recompiling for graphics card : Re: speed up povray by recompiling for graphics card
From: Patrick Elliott
Date: 22 Jun 2006 17:31:29
Message: <MPG.1f04ba1a8fd0dd5c989f2d@news.povray.org>
In article <web.4499d00dbda360d27d55e4a40@news.povray.org>, 
alp### [at] zubenelgenubi34spcom says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> > Even if all you are
> > producing is still images, you are still looking at a "huge" data spike
> > every time they need to transfer a new model, or worse, an entirely new
> > room.
> 
> I don't think that's particularly fatal (I assume you're talking about some
> sort of still renderer here). Most renderers for that, scanline, raytracers
> or whatever, have a big set up hit. POV-Ray has the parsing phase and image
> texture loading, for example. Specifically in regard to the GPU, well, it's
> typically got the widest bandwidth of all in a typical PC, IIRC. It might
> even be faster to send data to a GPU renderer than it would to fetch it
> from main memory to the CPU, I'm not sure.
> 
Well, I specifically meant internet bandwidth. True, the card bandwidth 
is not the bottleneck in such cases; it's sending the data from the 
server "to" the player's machine. Especially if the game environment is 
an extended text-based one, where I don't expect everyone to necessarily 
have broadband.

> > Real time lighting changes, etc. have to be done in the game
> > engine, not generated on the other end, because if you don't cache the
> > files to produce it, you are looking at that same data hit every time
> > you enter the room.
> 
> Again, I'm not sure that this is as significant as it appears.
> 
You're thinking in terms of the "wrong" bandwidth here. ;)

> > Yeah, if you are making a major motion picture and have lots of money to
> > buy a mess of computers, the best available software tools and people
> > that know how to use them, and you are planning on having the final
> > project come up 5 years from now, great.
> 
> Well, let's take the opposite end. Logos for adverts and TV special effects
> shots (like in scifi series) are typically done on a relative shoestring
> (i.e. less than it would cost to build a physical model) and need to be
> done last week. They don't raytrace either.
> 
> They will eventually, though.
> 
They don't raytrace because a) there are a lot more Photoshop experts 
than 3D people and b) for most of that sort of thing you can find cheap 
alternatives that don't raytrace. Sadly, that is one of the problems, 
imho: you can find ten billion applications of GPUs, very few of them 
raytracing. It's sort of like the Linux vs. Windows situation. At one 
time Windows was "way" better than all the alternatives, and part of 
that was how complicated using those alternatives was. That didn't mean 
the alternatives were not on some level better, just less convenient at 
the time. Same thing here, only without the anticompetitive BS from the 
technology's manufacturer(s) to help it happen. But yeah, it is 
inevitable that cards will start to do the real thing, even if in some 
cases the result starts out as a hybrid.

> >  If you want it to update more
> > or less real time, make changes on the fly and people are going to be
> > getting the content over the internet (possibly not all on the "best"
> > high speed connections), then from that perspective the current
> > architecture is not practical.
> 
> Well, broadband connections are spreading very rapidly, I think. My only
> experience of low-bandwidth "over-the-network" 3D is VRML-style interactive
> stuff, and to be honest, back when I saw it my impression was that it was
> both academic and a bit pointless at the sort of quality you'd get on a
> low-bandwidth connection. One major producer of PC games (and many
> shareware producers) has moved to distributing their products over the
> internet as a first, rather than second choice. The problems the majority
> of internet-based games face are more to do with latency and reliability
> than raw bandwidth, I think.
> 
Not an issue with what I plan to do. It will be still images, but the 
image would be generated on the fly as needed, including, presumably if 
possible, fog, media, etc.

> > If anything it's amazing some things, like
> > Uru Live or the shard projects branching off of it, work at all over the
> > internet, even with high speed; same with Second Life or other "true" 3D
> > worlds. Heck, the isometric ones only really work because they require
> > installing patches with the new models and textures to extend the game.
> > If you couldn't buy a disk and install it, maybe 50% of the players
> > would never go past the first version.
> 
> I'm not sure the problem is with geometry. Game geometry, even mesh
> geometry, is pretty lightweight. It's all those textures that make the
> difference, and I don't think changing the type of primitive you use gets
> you away from the need to detail surfaces. Procedurals are one alternative
> but they cost in CPU time.
> 
True. That is why one solution is to avoid "premade" textures. I 
understand newer cards support far better algorithmic textures, but then 
you also have to deal with some "minimum" video hardware. My intent is 
to avoid that if at all possible, so that, like the client itself I 
planned to add it as an extension to, it will hopefully run on the 
widest range of systems possible.
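To give a feel for the idea: an algorithmic texture is just a small function of surface coordinates, so the "texture" that has to travel over the wire is a handful of parameters instead of an image file. A minimal sketch (everything here is a made-up example, not any card's or POV-Ray's actual algorithm):

```python
def checker_with_noise(u, v, scale=8.0):
    """Toy procedural texture: a checker pattern perturbed by a cheap
    hash-based noise.  Returns a grey level in [0.25, 1.0]."""
    def hash2(x, y):
        # Deterministic pseudo-random value in [0, 1) from integer coords.
        n = (x * 374761393 + y * 668265263) & 0xFFFFFFFF
        n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
        return (n & 0xFFFF) / 65536.0

    su, sv = u * scale, v * scale
    noise = hash2(int(su), int(sv))
    checker = (int(su) + int(sv)) % 2           # alternating 0/1 squares
    return 0.25 + 0.5 * checker + 0.25 * noise  # blend into [0.25, 1.0]

# The entire "texture" is the function plus one scale parameter --
# bytes, not megabytes.
sample = checker_with_noise(0.3, 0.7)
```

The catch the post mentions is exactly this: evaluating such functions per pixel costs CPU (or requires newer GPU hardware), which is the trade against shipping premade images.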

> > Anyway, that is the standpoint I am coming from. Not, "how do I make a
> > photo?", but, "how do I do this without shoving 4GB of data onto
> > someone's computer, then cramming as much stuff as I can down the pipe
> > anyway when they play the game?" The latter is why new content is not
> > exactly a staple for graphical online games. lol
> 
> I think it's becoming more important. I seem to be regularly updated with
> content for Half Life and derivatives, and I suppose other games will only
> follow suit. They've been distributing upgrades and bug-fix patches this
> way for years. Also, you could use those procedural compression tricks
> again; uncompress the data into meshes on the user's computer after
> downloading.
> 
True.

> > Now, if using the scanline system "could" allow approximation of the
> > primitives at a decent speed and still make things faster.. That could
> > help too. But it's still not going to be doing "some" things very well
> > without a lot of kludging, like reflecting objects not "in view", etc.
> 
> I don't know how important such effects are. You can render a reflective
> sphere primitive with a raytracer and be very memory-efficient and
> incidentally a perfect reflection, but it may well look less "real" than a
> few hundred k of relatively memory inefficient mesh with a few MB of
> displacement map and textures on it. Reflection's an interesting one;
> blurred reflection is cheap in scanline, and can look more realistic to the
> eye than sharp raytraced reflections, even though the latter is accurate and
> the former isn't.
> 
Hmm. Yeah, but in my case the added texture isn't practical. It needs to 
be as procedural as possible, both to speed up the transfer from the 
server, and to make sure that the scene "can" be changed from the purely 
raw text. And yeah, compression would be used, on the entire SDL file.
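Compressing the whole SDL file is attractive precisely because scene text is highly repetitive (repeated keywords, braces, includes). A quick sketch with a hypothetical scene fragment, using ordinary gzip:

```python
import gzip

# A hypothetical POV-Ray-style scene fragment; real scenes are longer
# and just as repetitive, so they compress at least this well.
sdl_text = b"""
#include "colors.inc"
camera { location <0, 2, -5> look_at <0, 1, 0> }
light_source { <10, 10, -10> color White }
sphere { <0, 1, 0>, 1 texture { pigment { color Blue } } }
plane  { y, 0 texture { pigment { checker color White color Black } } }
""" * 50  # repeat to simulate a larger scene file

compressed = gzip.compress(sdl_text)
ratio = len(compressed) / len(sdl_text)
# Repetitive keyword-heavy text typically shrinks to a small fraction
# of its original size, which matters on a dial-up connection.
```

The server would send the compressed bytes; the client decompresses and hands the text to the renderer unchanged.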

> > It is an issue made all the more complicated by the fact that if I
> > wanted to really do something, most of the books that provided clear
> > information on how most of the raytrace stuff works were last published...
> > 15-20 years ago. :( Now, it's sort of a "turtles all the way down" world,
> > where everyone "assumes" you are just going to use Blender to make a
> > model, then throw it at a GPU. Quite annoying, and I am way too lazy to
> > spend weeks trying to find the information online (all the while dodging
> > sites that refer back to GPUs) or probably even longer trying to figure
> > out how the code in POV-Ray, which I can't actually use anyway the way
> > I want, works. Oh well...
> 
> I'm not sure I understand. Are you talking about writing a graphics engine
> for a game using raytracing instead of scanline methods, or a
> high-quality/non-game rendering engine?
> 
Umm. Sort of in between. The idea is to make it a plugin for clients 
that can support them. Right now the only methods that exist are sort of 
embedded HTML tags that retrieve and display static images. I would like 
to be able to send across a link to a render file, sort of like:

<render "http://www.myserver.com/images/old_house.sdl">

And maybe even have some limited animation capability, like generating 
the first image, then rendering the rest in the background and either 
running it all once, or looping it. So, no, it's not really a "game 
engine" in the sense of an FPS, but it's not something that can "really" 
be done unobtrusively, or quite the way I want, if I were to simply try 
to use POV-Ray through a loadable ActiveX control.
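On the client side, handling a tag like that is mostly just recognizing it in the incoming text stream. A minimal sketch, assuming the hypothetical `<render "...">` syntax from the post (the tag name and format are the post's own invention, not any real client's):

```python
import re

# Hypothetical tag format from the post: <render "http://server/scene.sdl">
RENDER_TAG = re.compile(r'<render\s+"([^"]+)">')

def extract_render_links(text):
    """Pull scene-file URLs out of a chunk of client text so the
    plugin can fetch and render them; the rest displays as usual."""
    return RENDER_TAG.findall(text)

message = ('You enter the old house. '
           '<render "http://www.myserver.com/images/old_house.sdl">')
links = extract_render_links(message)
# links == ['http://www.myserver.com/images/old_house.sdl']
```

A real plugin would then download the SDL text, hand it to the renderer, and swap the finished image into the client window.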

> One thing I'd recommend is to check out the work behind
> 
> http://www.openrt.de/
> 
> I remember the papers behind it being spectacular, frankly.
> 
Those are the same people developing a true raytrace card, I think. I 
will definitely be looking there, but first I need to overcome two 
frustrating problems I am having. The first is figuring out the 
"correct" way to bridge events, so I can convince the developer of the 
client I use to implement something less nuts than the current UDP-based 
system. At the moment, you can instance objects in its various supported 
scripts, but not handle their events. Of course, since 90% of the world 
either a) uses only early-bound events or b) uses some existing 
application that supports late-bound ones already, like IE or WScript, 
just getting people to understand that you are "not" working within that 
environment in the first place is like trying to explain physics using 
Egyptian hieroglyphs... You first have to get past the "assumption" that 
you just don't know about all the wonderfully useless existing 
implementations that won't work for your project. lol

I did finally get a response on it. It amounted to, "Read 'Essential 
COM' by Don Box, that is how I figured it out, but since I developed my 
solution to the problem for a company, I can't tell you what it was." 
Gosh! Thanks. :p

The other issue is that I have been trying to make it possible to use 
design mode to "edit" a layout for a window, so that people can set up 
the layout of forms they want, and place "any" control they want on 
them. The closest I have gotten on that one is finding a truly obscure 
command in ATL that "may" allow it, but even "both" books on OLE and COM 
I now have make only the standard and quite useless mention of "reading" 
the property, and that it should be possible to "set" it for containers 
as well. Umm, ok... That would be fine, except that 90% of the existing 
documentation states, time after time, that in most cases and most 
languages you "can't" set it at all, so how you do it, and when it is 
possible, are really obscure. Almost intentionally obscure... lol

The rendering part would then, if the above could be done, be either a 
control included with the ActiveX window that lets you set design mode 
and use it like an IDE form designer, or a control that can be instanced 
and added to the same, if the window were built into the client. I 
know... a lot of stuff for what is basically a text client, but some of 
it "needs" to be done if you later want to develop other similar 
applications too, which is entirely possible. ;)

> > But yeah, for some things GPUs are practical. If you have a) bandwidth,
> > b) storage space, c) money and d) a lot of time. If a and b are limited
> > and your intent is to avoid a lot of c and d, especially if d is the one
> > thing you want to avoid needing, you are screwed when using GPUs. ;)
> 
> I think most of these are answered above. Certainly I'd argue that
> raytracing is far more time-intensive for game applications. But I don't
> understand what application you have in mind.
> 
Well, hopefully I have given a bit more of an idea of what I intended 
with it. Most engines are inflexible by nature. It is probably not 
possible to make isometric or full FPS-style systems that are completely 
flexible. For that, text-based systems like MUDs still rule, though 
stuff like SL starts to change that. But SL is "still" more of a chat 
room than a "MUD"; it doesn't have the elements that make for the hack 
and slash that most games supply on some level. So, I am aiming for the 
middle ground: good graphics for still images or limited animation, 
based on easily edited text files (and thus maybe not even, for purposes 
of keeping it simple, allowing meshes), that can be altered as easily as 
the LPC code used to build the game itself. Right now, the solutions 
either involve going all the way and using Blender, or just using very 
simple and limited still images that can't adjust to differences in 
lighting, weather, fog, or anything else that might be part of the game 
world. I want the deep well with a glint of gold at the bottom to look 
different if it's morning, midday and cloudy, or freaking pitch black 
and in torch light. Existing methods would require like 30 images, each 
pre-made for those conditions, and not able to adjust to how many people 
are there, how many are using lights, or anything else that might alter 
the "expected" results that were used to produce those images. Or, as I 
said, you go all the way and end up with 2 GB of textures cached on your 
system and a mess of meshes... which is quite ridiculous if you're not 
playing something like Morrowind, but what is basically "text".

Basically, I want something as easy to maintain, sans talent of course, 
as the script used to run the game. And for something like POV-Ray's 
SDL, time can make up for talent, especially if you can make prefab 
includable objects to use, which still take up less room than a mesh and 
the graphics-based textures.
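As a toy illustration of the prefab idea (every file name and parameter here is hypothetical): a room description could be assembled server-side as a few include references plus scene parameters, so the text that travels is tiny and the lighting can be changed by editing one declared value rather than re-rendering 30 premade images:

```python
# Hypothetical prefab objects; each .inc would hold a reusable SDL object.
prefabs = ["well.inc", "torch.inc", "gold_glint.inc"]

def build_scene(prefabs, light_level):
    """Assemble a scene file as text: one tunable parameter plus
    references to prefab includes.  Bytes of text, not megabytes of
    meshes and image textures."""
    lines = [f'#declare LightLevel = {light_level};']
    lines += [f'#include "{name}"' for name in prefabs]
    return "\n".join(lines)

scene = build_scene(prefabs, 0.2)  # e.g. pitch black, torch-lit night
```

The same three includes with `LightLevel = 0.9` would render the midday version of the well, which is exactly the "one description, many conditions" property the paragraph above is after.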

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}


