In article <4721bb6d@news.povray.org>, war### [at] tagpovrayorg says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> > Obviously the *major* disadvantage to doing it that way is that you are
> > either running it uncompiled, or JIT compiled, both of which might slow
> > things down too much
>
> What makes you think that JIT-compiled code is relevantly slower than
> regularly-compiled code?
>
> Besides, transforming objects is not one of the most speed-critical
> things in rendering. It's a pre-rendering step done once (per frame)
> and that's it. Hardly anything that needs extreme speed.
>
Except that in many cases the parsing, and thus script execution, is
already taking far longer than the actual render, so it *does* matter
how it's done. And as Darren points out, JIT optimization is hardly the
same as processor-specific optimization.
Having thought about it, there needs to be something close to the core
to do this anyway. As I see it there are two ways to handle it. The
first: before rendering, fire some sort of "transforms" event, which
would call a function like:
function on_transforms (object) {
    if exists object.autotransforms {
        for each trans in object.autotransforms {
            call transform(trans)
        }
    }
}
This would happen for *every* subobject in a union or other layered
object. But this bit of code is so simple that there is no reason at all
that you can't have it built in some place, since you already **need** a
way to intercept the render, so you can auto-add the transforms. The
difficulty is that you either have to have an array for every object,
or, if you use a table of some sort, object.autotransforms needs to be
smart enough to check that table, see whether the object being tested is
"in" it as something which needs to be transformed, and return the array
for "that" object from it. The other way is to not step through the
table at all, but rather to mark each object with something like a flag,
so the engine knows to look in the table for it. Such a flag does away
with the need to add code at the lower levels to "automatically" feed
every object through the above fragment, which would be faster, but it
also means you would have to "mark" every section of a compound object
that needs to be read from the table(s) instead. I.e., the code would
still be present, but triggered by the presence of the keyword, which
could then be handled in a plugin, instead of in the engine.
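To make the table-plus-flag idea concrete, here's a minimal sketch (in
Python, purely illustrative; all names like auto_transforms and
mark_for_transforms are my own inventions, not anything in POV-Ray):

```python
# Global table mapping object identity -> list of pending transforms.
auto_transforms = {}

class SceneObject:
    """Stand-in for an engine-side scene object."""
    def __init__(self, name):
        self.name = name
        self.has_auto_transforms = False  # the "flag" variant

def mark_for_transforms(obj, transforms):
    """Register transforms for obj in the table and set its flag."""
    auto_transforms[id(obj)] = list(transforms)
    obj.has_auto_transforms = True

def apply_auto_transforms(obj, apply):
    """Called for every (sub)object before rendering.

    The flag check means unmarked objects skip the table lookup
    entirely, which is the whole point of marking them."""
    if obj.has_auto_transforms:
        for trans in auto_transforms.get(id(obj), []):
            apply(obj, trans)
```

The engine (or plugin) would walk each union's children and call
apply_auto_transforms on every one; only flagged objects ever touch the
table.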
The former version "must" be in the engine, or at least the latch needed
to intercept with. The reason is simple: since the latter is triggered
by a keyword, you can add the keyword as a plugin, along with the code
needed to handle its presence. If you do it the other way, you **must**
already have the means to stop the engine *before* it renders anything,
so as to test for and process the changes you want, before allowing it
to continue.
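The keyword-as-plugin idea could look something like this toy sketch
(hypothetical names throughout; no such registry exists in any real
engine I know of): the parser keeps a registry of keyword handlers, and
a plugin installs its keyword there, so the core needs no built-in
knowledge of the feature.

```python
# Registry mapping keyword name -> handler installed by a plugin.
keyword_handlers = {}

def register_keyword(name, handler):
    """A plugin calls this at load time to claim a keyword."""
    keyword_handlers[name] = handler

def parse_keyword(name, obj, args):
    """The parser dispatches any unrecognized keyword to a plugin."""
    if name in keyword_handlers:
        keyword_handlers[name](obj, args)
    else:
        raise SyntaxError("unknown keyword: " + name)
```

A plugin would then register, say, an "autotransform" keyword whose
handler marks the object and fills in the transform table.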
If, however, it was decided that such an intercept prior to the render
step was useful for other things, then it becomes less relevant, *but*
the question is still, "Does stepping through a lot of these calls cost
more time in 'script' than compiled with proper optimizations?" It's
not much code, but it "is" a bottleneck, and one that might be faster as
part of something that also supports multi-core processing, whereas the
script itself... might not do that so well. How many copies of the
script are likely to be "allowed" to run? Or, for that matter, is it
even possible to split parts of a script off to a separate thread, on a
different processor? This is one case where doing that could vastly
speed up application of the transforms, since you could calculate the
matrices for as many objects as you have threads/processors, which is,
in theory, faster than applying the same transforms directly. Right?
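A sketch of that threading idea, assuming each object just needs its
transform list folded into one composite 4x4 matrix (Python here only
to show the shape of it; pure Python threads won't actually speed up
CPU-bound math because of the GIL, but native threads in a C/C++ engine
would):

```python
from concurrent.futures import ThreadPoolExecutor

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def composite(transforms):
    """Fold one object's transform list into a single matrix."""
    m = IDENTITY
    for t in transforms:
        m = mat_mul(m, t)
    return m

def composite_all(per_object_transforms, workers=4):
    """One task per object: each worker computes one composite matrix."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(composite, per_object_transforms))
```

Since each object's composite matrix depends only on that object's own
transform list, the per-object tasks are embarrassingly parallel.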
--
void main () {
call functional_code()
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>