On Wed, 15 Apr 2009 15:35:33 -0300, nemesis wrote:
>
> Now, I'm probably all wrong and messed up here, so anyone who cares may
> correct me. But I do feel a JIT-compiled, bytecode-compiled or even
> native-code-compiled SDL would do nothing for performance, except perhaps
> make it worse, should a more expressive and general language be used for
> the next SDL.
Ideally, parsing would take only a fraction of the time used for scene
creation and actual script execution. I suspect, although I cannot prove
it, that POV 3 spends most of its parsing time in string comparison
routines (which means I could try to prove it with a profiler...) and
things like that. Any implementation that eliminates the need for
repeated string matching (you have to do it once, but not each time a
loop is executed) would be fine.
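As a toy illustration of that difference (a Python sketch with invented names, nothing to do with POV-Ray's actual code): resolving a keyword by repeated string comparison pays the cost on every occurrence, while interning names to integer tokens pays it once.

```python
# Toy sketch: keyword dispatch by string comparison vs. interned tokens.
KEYWORDS = ["sphere", "plane", "box", "declare", "while", "end"]

def dispatch_by_string(word):
    # Every occurrence pays for character-by-character comparisons
    # against the whole keyword list.
    for kw in KEYWORDS:
        if word == kw:
            return kw
    return None

# Interning: map each name to a small integer exactly once, up front...
TOKEN_IDS = {kw: i for i, kw in enumerate(KEYWORDS)}

def dispatch_by_token(word):
    # ...after which each lookup is a single hash-table probe, and code
    # that runs repeatedly (a loop body) can reuse the integer directly.
    return TOKEN_IDS.get(word)
```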
> I've recently been taking a look at Povray beta source, in particular
> parse.hpp and associated. It is the purpose of parse to create a long
> list of povray objects and it does that by analyzing the SDL code,
> breaking it down into tokens and calling as it goes the internal
> functions which will create the objects, like Create_Sphere or
> Create_Plane.
And actually it's quite good at it. If this were all we used SDL for, then
everybody would be happy. But no, we want loops and macros and
identifiers.
>
> Quite frankly, hardly any external general-purpose language would do
> much better at pure speed than using this method. They'll still have to
> parse and create the objects in the same manner, by binding some of its
> particular calls to calls to povray object-creating functions and
> methods. But there's also a compilation step. And I don't believe
> parsing itself would be faster by going with a general purpose language,
> even lightweight ones like Lua, Tcl or Scheme.
That's true for simple scenes where the whole file is processed only once.
I'm sure the simplest scenes would load a bit slower with a bytecode-
compiled language. But all scenes where parse time currently is an issue
would load faster, because parsing of loops and macros is done only once;
the loops are then executed as often as they need to be.
> How can a general-purpose language be faster when it allows for
> creation of unrestricted user-level functions rather than POV's
> textual macros, and possibly brings much more unnecessary stuff to
> the table? How can that be faster than the SDL's straightforward
> parsing by directly calling the creation functions?
>
> I was thinking even if I were to use a native-compiled language, perhaps
> it would not be worth it, because I believe most users of POV-Ray SDL
> enjoy an interactive development style: write a little, parse with
> low-quality settings, go back, loop. Would the time taken on parsing and
> compilation be worth it? The current SDL only takes time on parsing;
> compilation merely means calling the internal creation functions.
Let's take a Perl script. I write it in pretty much the same manner as you
describe, because I'm not too good at Perl. The Perl interpreter compiles
it to bytecode and executes it, but I don't even notice the time it needs
to start up, even though Perl brings loads of features, most of which I
never use.
> I'm not against a better, more expressive SDL that would allow for far
> better and convenient scripting and do away with many name clashes with
> true scoping rules. Just pointing out that that kind of power may also
> mean slower rather than higher performance.
"clipka" <nomail@nomail> wrote:
> nemesis <nam### [at] gmailcom> wrote:
> > Quite frankly, hardly any external general-purpose language would do
> > much better at pure speed than using this method. They'll still have to
> > parse and create the objects in the same manner, by binding some of its
> > particular calls to calls to povray object-creating functions and
> > methods. But there's also a compilation step. And I don't believe
> > parsing itself would be faster by going with a general purpose language,
> > even lightweight ones like Lua, Tcl or Scheme.
>
> There is a difference:
>
> Bytecode is more compact. It has no whitespace and no comments. It doesn't
> refer to variables or keywords (commands, in bytecode) by name, but by indices
> into a table instead. Its command block ends are already identified. Its code -
> and moreover that of include files - is executed from memory instead of through
> a (hopefully buffered) disk access module.
>
> So...
>
> - Disk access is reduced to the absolute minimum, not only because the files are
> likely smaller, but also because macro calls don't cause re-opening of the files
> they're declared in.
>
> - No need to seek over whitespace and comments.
>
> - No need to seek the end of a token.
>
> - No need to re-compute the hash of a variable name each time it is encountered.
>
> - No need to seek the end of an else-block.
>
> - No need to re-load code of macros executed.
>
> All this does not take up *much* time per statement - but if for instance you're
> calling the VRand macro in some loop a million times, then you *will* notice the
> difference.
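The parse-once-run-many payoff above can be sketched using Python's own compiler as a stand-in for a hypothetical byte-compiled SDL (a hedged analogy, not POV-Ray's implementation):

```python
# Sketch: re-parsing source on every execution vs. running bytecode
# that was compiled exactly once.
source = "v = (x * 3 + 1) % 7"

def run_reparsed(n):
    ns = {"x": 5}
    for _ in range(n):
        exec(source, ns)     # parsed + compiled + executed on every pass
    return ns["v"]

code = compile(source, "<sdl>", "exec")  # pay for parsing exactly once

def run_precompiled(n):
    ns = {"x": 5}
    for _ in range(n):
        exec(code, ns)       # only executed; parsing is already paid for
    return ns["v"]
```

Both produce the same result; the second only does less redundant work per iteration, which is exactly where a million macro calls would show the difference.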
I thought an SDL loop and its statements would be parsed only once and then
handled by an internal for loop. I should've gotten better acquainted with
POV's source before trumpeting its amazing performance. :P
Very good answer and solid points, anyway. Those are all indeed real
benefits of well-performing interpreters, no doubt. Thanks for correcting
me. ;)
"nemesis" <nam### [at] gmailcom> wrote:
> Very good answer and solid points, anyway. Those are all indeed very solid
> benefits of nicely performing interpreters, no doubt. Thanks for correcting
> me. ;)
You're welcome ;)
BTW, if you ever feel the need to speed up your scene code, here's one trick
that makes a hell of a difference:
*** Examine your include files for macros that your scene makes heavy use of,
*** and copy them to your scene file!
Consider the parsing times for the following code:
#include "rand.inc"
//#macro VRand(RS) < rand(RS), rand(RS), rand(RS)> #end
#declare R = seed(42);
#declare i = 0;
#while (i < 100000)
#declare V = VRand(R);
#declare i = i + 1;
#end
On my Windows machine:
Second line commented out: 91.907 seconds (76.843 CPU seconds)
Second line "revived": A blazing 3.562 seconds (3.500 CPU seconds)
That's ****** 25 TIMES FASTER ******
just because of that single macro call.
Why? Because POV-Ray re-opens the macro's source file each and every time it is
invoked... >_<
On my Linux machine it's not all that bad - probably because Linux doesn't have
such a high OS overhead for opening and closing a file - but still an
impressive factor of 12.
So, standard macro libs are a good thing, but sometimes you better not #include
but copy & paste them :P
clipka wrote:
> *** Examine your include files for macros that your scene makes heavy use of,
> *** and copy them to your scene file!
>
>
> Consider the parsing times for the following code:
>
> #include "rand.inc"
> //#macro VRand(RS) < rand(RS), rand(RS), rand(RS)> #end
> #declare R = seed(42);
> #declare i = 0;
> #while (i < 100000)
> #declare V = VRand(R);
> #declare i = i + 1;
> #end
>
> On my Windows machine:
> Second line commented out: 91.907 seconds (76.843 CPU seconds)
> Second line "revived": A blazing 3.562 seconds (3.500 CPU seconds)
>
> That's ****** 25 TIMES FASTER ******
> just because of that single macro call.
>
> Why? Because POV-Ray re-opens the macro's source file each and every time it is
> invoked... >_<
Friggin' insane. Without even peeking at the code, I'm guessing the
lack of proper scoping rules has something to do with that, as #locals
are local in the context of files, not of macros... or am I wrong again?...
> So, standard macro libs are a good thing, but sometimes you better not #include
> but copy & paste them :P
Much better than simply having a byte-compiled language with true scoping
rules and modules. ;)
ah, boost 1.38 finally compiled... :)
beta here we go!
nemesis <nam### [at] nospam-gmailcom> wrote:
> Friggin' insane. Without even peeking at the code, I'm guessing the
> lack of proper scoping rules has something to do with that, as #locals
> are local in the context of files, not of macros... or am I wrong again?...
Um... I think you are. #locals are indeed local in the context of include
files as long as they're not inside macros - but when they're inside macros,
they're local to that macro.
> > So, standard macro libs are a good thing, but sometimes you better not #include
> > but copy & paste them :P
>
> Much better than simply having a byte-compiled language with true scoping
> rules and modules. ;)
Yeah. Think of it: If we had pre-compiled bytecode include files, how could we
ever copy & paste the macros into our main scene files for speedup? :P
> ah, boost 1.38 finally compiled... :)
>
> beta here we go!
Have fun!
clipka wrote:
> nemesis <nam### [at] nospam-gmailcom> wrote:
>> Friggin' insane. Without even peeking at the code, I'm guessing the
>> lack of proper scoping rules has something to do with that, as #locals
>> are local in the context of files, not of macros... or am I wrong again?...
>
> Um... I think you are. #locals are indeed local in the context of include
> files as long as they're not inside macros - but when they're inside macros,
> they're local to that macro.
I assumed files were reopened when calling macros exactly to provide
them with some clunky local context. :P If not, I don't see the reason
for the reopening...
I'll dig in now, but since my point is replacing all that, I don't think
I'll waste my time trying to understand it in the first place... :P
> Yeah. Think of it: If we had pre-compiled bytecode include files, how could we
> ever copy & paste the macros into our main scene files for speedup? :P
Oh, almost forgot pre-compiled includes... but, not to scare people off,
it's worth noting it should be something akin to how Python does it:
source and bytecode side by side. It just loads the bytecode if already
compiled; if not, it compiles it and stores it for next time.
nemesis <nam### [at] nospam-gmailcom> wrote:
> I assumed files were reopened when calling macros exactly to provide
> them with some clunky local context. :P If not, I don't see the reason
> for the reopening...
Because in former times having too many files open simultaneously used to be a
problem?
If local variable context was an issue, then macros in the main file wouldn't be
able to have local variables.
> I'll dig in now, but since my point is replacing all that, I don't think
> I'll waste my time trying to understand it in the first place... :P
Right.
> Oh, almost forgot pre-compiled includes... but, not to scare people off,
> it's worth noting it should be something akin to how Python does it:
> source and bytecode side by side. It just loads the bytecode if already
> compiled; if not, it compiles it and stores it for next time.
Yep, I guess so. With some hash of the include file stored in the bytecode
version, of course.
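A minimal sketch of that hash-keyed cache, in the spirit of Python's .pyc files (the function and cache names here are invented for the example):

```python
# Sketch: compiled bytecode stored alongside a hash of its source, and
# reused only while the source is unchanged.
import hashlib

_cache = {}  # filename -> (source_hash, compiled_code)

def load_compiled(filename, source):
    digest = hashlib.sha256(source.encode()).hexdigest()
    cached = _cache.get(filename)
    if cached and cached[0] == digest:
        return cached[1], True            # cache hit: skip recompilation
    code = compile(source, filename, "exec")
    _cache[filename] = (digest, code)     # store for next time
    return code, False                    # cache miss: compiled fresh
```

Editing the source changes the hash, so a stale cached version can never be executed by mistake.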
On Wed, 15 Apr 2009 22:35:19 +0200, C wrote:
> Even if the new SDL is lightning fast, when you're placing 100 000 blades
> of grass dynamically, it's going to take a while.
If no really complicated calculations (like collision detection) are
involved in placing them, loading 100 000 blades of grass is going to take
a while as well.
nemesis <nam### [at] gmailcom> wrote:
> That's unexpected.
> From just the looks of it, the SDL parser seems so straightforward that
> it's difficult to reason where any slowness could be coming from.
It's not difficult at all:
1) SDL is not byte-compiled. It's parsed and interpreted on the fly.
2) SDL loops are implemented by seeking the input file and continuing
the parsing from there. There is no caching of previously-parsed code of
any kind. So not only do you get the overhead of re-parsing the code every
time it loops, you always get the same file I/O overhead.
Another issue is that I wouldn't be surprised if the *parsing* itself is
faster in the Perl interpreter than in POV-Ray. What looks "straightforward"
to you might still not be the fastest way of parsing, tokenizing and
interpreting the code.
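Point 2 can be modeled in a few lines of Python (a hedged toy, not POV-Ray's actual loop code): the "interpreter" seeks the stream back to the loop start and re-reads the body text on every single iteration.

```python
# Toy model of a loop implemented by seeking the input stream and
# re-reading (hence re-parsing) the body on every iteration.
import io

def run_seek_loop(body_line, iterations):
    stream = io.StringIO(body_line + "\n")  # stand-in for the scene file
    total = 0
    reads = 0
    for _ in range(iterations):
        stream.seek(0)                # jump back to the loop start...
        text = stream.readline()      # ...and re-read the body from "disk"
        reads += 1
        total += int(text.strip())    # stand-in for re-parsing the token
    return total, reads
```

The read count equals the iteration count: every pass through the loop repeats the same I/O and parsing work.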
> What you did was just measure SDL's slow macro processing
What "macro processing"? I didn't use any macros.
--
- Warp
C <the### [at] gmailcom> wrote:
> Even if the new SDL is lightning fast, when you're placing 100 000 blades
> of grass dynamically, it's going to take a while.
Like what? A half second?
--
- Warp