In article <4722dd91@news.povray.org>, war### [at] tagpovrayorg says...
> Patrick Elliott <sel### [at] rraznet> wrote:
> > Except that in many cases the parsing, and thus script execution, is
> > already taking far longer than the actual render, so it *does* matter
> > how its done.
>
> Applying transformations is not the heaviest operation during parsing.
> You will not gain any parsing speed using your suggestion.
>
> > And as Darren points out, JIT optimization its hardly the
> > same as processor optimization.
>
> We are talking about some percents. You wrote as if JIT-compiled code
> was several orders of magnitude slower than compiled code.
>
It probably isn't, but we are not talking about just handling
transformations, but about the overhead of handling the data needed to do
them. You're still using the same commands to do the transforms, but you
are not doing them directly: you are reading them from a table, then
passing each one to the part of the system that handles transforms. That
means it's going to take slightly longer. Mind you, it's possible you
could make a table layout that shortcuts things, like tokenizing the
commands, which would make lookup slightly faster, maybe. But again, it's
still "percentages" as you say.
You seem to have missed part of what I wrote though. On some level you
are still dealing with the choice of whether you parse one command to add
the table, then use an intercept, prior to the final render step, to
process that table, or you **also** parse a separate command for every
part of the object that needs to get its transforms from the table. The
latter adds extra time to detect that command, every time it appears, as
well as the process of retrieving the data itself from the table, so it
knows what transforms to apply. Whether or not the actual computations
for the transforms themselves are trivial delays has no bearing on
whether the added commands, or the method used to retrieve the data to be
applied to the transforms, is going to cause more time to be used. At
minimum, it's going to cost a *slight* amount of additional time, no
matter which method you use, compared to parsing **each command** by
itself. Other than something simple like tokenization, which could be
meaningless on a modern processor, I am not sure how you could speed up
the retrieval of the data from the table(s).
Yes, we are talking about percentages here, but those can matter,
especially if there are bottlenecks. And, as I said: take 4 compound
objects, each made of 20 parts with 3 transforms per part, and say it
took .01 seconds to read and apply the transform for "each" of those
commands. With 4 cores, each reading one object's table, you would take
0.6 seconds to do it, while a non-threaded, script-driven solution would
make 4 separate passes, one for each compound object, with all the same
parameters, or 2.4 seconds. Please tell me again how this **isn't**
significant, especially if you were dealing with the real numbers and you
took into account a scene with, say, 90 compound objects, only two cores,
and even a .01% decrease in speed due to it being a script, instead of a
compiled module.
And don't tell me my numbers are silly or irrelevant. At least I am
trying to come up with some sort of means to compare. If I got them
wrong by some order of magnitude, it still doesn't change the basic fact
that more than one core, each processing the transforms for a different
object, takes **less** time than a mono-threaded script. And most script
systems don't support multiple threads/processors. It can be enough of a
pain getting the "core" code to do that right, never mind letting some
end user muck around with it.
--
void main () {
call functional_code()
else
call crash_windows();
}
<A HREF='http://www.daz3d.com/index.php?refid=16130551'>Get 3D Models,
3D Content, and 3D Software at DAZ3D!</A>