POV-Ray : Newsgroups : povray.pov4.discussion.general : Caching parsed code
  Caching parsed code (Message 12 to 21 of 21)  
From: nemesis
Subject: Re: Caching parsed code
Date: 15 Apr 2009 21:00:01
Message: <web.49e6823ce0976a9bfa66eb000@news.povray.org>
"clipka" <nomail@nomail> wrote:
> nemesis <nam### [at] gmailcom> wrote:
> > Quite frankly, hardly any external general-purpose language would do
> > much better at pure speed than using this method.  They'll still have to
> > parse and create the objects in the same manner, by binding some of its
> > particular calls to calls to povray object-creating functions and
> > methods.  But there's also a compilation step.  And I don't believe
> > parsing itself would be faster by going with a general purpose language,
> > even lightweight ones like Lua, Tcl or Scheme.
>
> There is a difference:
>
> Bytecode is more compact. It doesn't have whitespace, nor comments. It doesn't
> refer to variables or keywords (commands in bytecode) by names, but by indices
> in a table instead. Its command block ends are already identified. Its code -
> and moreover that of include files - is executed from memory instead of a
> (hopefully buffered) disk access module.
>
> So...
>
> - Disk access is reduced to the absolute minimum, not only because the files are
> likely smaller, but also because macro calls don't cause re-opening of the files
> they're declared in.
>
> - No need to seek over whitespace and comments.
>
> - No need to seek the end of a token.
>
> - No need to re-compute the hash of a variable name each time it is encountered.
>
> - No need to seek the end of an else-block.
>
> - No need to re-load code of macros executed.
>
> All this does not take up *much* time per statement - but if for instance you're
> calling the VRand macro in some loop a million times, then you *will* notice the
> difference.
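
To make the name-vs-index point concrete, here is a tiny C++ sketch - purely
illustrative, none of it is POV-Ray's actual code.  A text interpreter has to
hash and compare the variable's name string on every reference, whereas a
bytecode interpreter resolved that name to a slot index once, up front:

#include <string>
#include <unordered_map>
#include <vector>

struct Value { double v; };

// Text interpreter: every reference re-hashes and re-compares the name string.
double lookup_by_name(const std::unordered_map<std::string, Value>& symbols,
                      const std::string& name)
{
    return symbols.at(name).v;
}

// Bytecode interpreter: the compiler already turned the name into an index.
double lookup_by_index(const std::vector<Value>& slots, std::size_t index)
{
    return slots[index].v;
}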

I thought an SDL loop and its statements would be parsed only once and then
handled by an internal for loop.  Should've gotten more acquainted with pov's
source before trumpeting its amazing performance. :P

Very good answer and solid points, anyway.  Those are all indeed real
benefits of well-performing interpreters, no doubt.  Thanks for correcting
me. ;)



From: clipka
Subject: Re: Caching parsed code
Date: 16 Apr 2009 00:05:01
Message: <web.49e6ad4fe0976a9b255d1edc0@news.povray.org>
"nemesis" <nam### [at] gmailcom> wrote:
> Very good answer and solid points, anyway.  Those are all indeed very solid
> benefits of nicely performing interpreters, no doubt.  Thanks for correcting
> me. ;)

You're welcome ;)

BTW, if you ever feel the need to speed up your scene code, here's a trick
that makes a hell of a difference:


*** Examine your scene for macros that it makes heavy use of,
*** and copy them into your scene file!


Consider the parsing times for the following code:

#include "rand.inc"
//#macro VRand(RS) < rand(RS), rand(RS), rand(RS)> #end
#declare R = seed(42);
#declare i = 0;
#while (i < 100000)
  #declare V = VRand(R);
  #declare i = i + 1;
#end

On my Windows machine:
Second line commented out: 91.907 seconds (76.843 CPU seconds)
Second line "revived": A blasting 3.562 seconds (3.500 CPU seconds)

That's ****** 25 TIMES FASTER ******
just because of that single macro call.

Why? Because POV-Ray re-opens the macro's source file each and every time it is
invoked... >_<

On my Linux machine it's not quite as bad - probably because Linux has less
OS overhead for opening and closing a file - but the speedup is still an
impressive factor of 12.
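
To picture where that factor comes from, here is a rough C++ sketch - the
function names are made up, this is not POV-Ray's actual code.  A macro
declared in an include file behaves roughly like the first function, a macro
declared in the file that is already being parsed like the second:

#include <fstream>
#include <string>

// Macro lives in an include file: the file gets opened (and later closed)
// again on every single invocation before the body can be read.
std::string fetch_body_from_include(const std::string& inc_path,
                                    std::streampos body_offset)
{
    std::ifstream file(inc_path);      // open/close per macro call
    file.seekg(body_offset);
    std::string body;
    std::getline(file, body);          // toy stand-in for "read until #end"
    return body;
}

// Macro lives in the file that is already open for parsing: no extra
// open/close, the parser just seeks within the existing stream.
std::string fetch_body_from_current_file(std::ifstream& open_file,
                                         std::streampos body_offset)
{
    open_file.seekg(body_offset);
    std::string body;
    std::getline(open_file, body);     // same toy stand-in
    return body;
}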


So, standard macro libs are a good thing, but sometimes you'd better not
#include them - copy & paste them instead :P



From: nemesis
Subject: Re: Caching parsed code
Date: 16 Apr 2009 00:12:13
Message: <49e6b01d@news.povray.org>
clipka wrote:
> *** Examine your scene file for macros that your scene makes heavy uses of,
> *** and copy them to your scene file!
> 
> 
> Consider the parsing times for the following code:
> 
> #include "rand.inc"
> //#macro VRand(RS) < rand(RS), rand(RS), rand(RS)> #end
> #declare R = seed(42);
> #declare i = 0;
> #while (i < 100000)
>   #declare V = VRand(R);
>   #declare i = i + 1;
> #end
> 
> On my Windows machine:
> Second line commented out: 91.907 seconds (76.843 CPU seconds)
> Second line "revived": A blasting 3.562 seconds (3.500 CPU seconds)
> 
> Thats ****** 25 TIMES FASTER ******
> just because of that single macro call.
> 
> Why? Because POV-Ray re-opens the macro's source file each and every time it is
> invoked... >_<

Friggin' insane.  Without even peeking at the code, I'm guessing the 
lack of proper scoping rules has something to do with that, as #locals 
are local in the context of files, not of macros... or am I wrong again?...

> So, standard macro libs are a good thing, but sometimes you better not #include
> but copy & paste them :P

Much better than simply having a byte-compiled language with true scoping 
rules and modules. ;)

ah, boost 1.38 finally compiled... :)

beta here we go!



From: clipka
Subject: Re: Caching parsed code
Date: 16 Apr 2009 00:25:00
Message: <web.49e6b2e1e0976a9b255d1edc0@news.povray.org>
nemesis <nam### [at] nospam-gmailcom> wrote:
> Friggin' insane.  Without even peeking at the code, I'm guessing the
> lack of proper scoping rules has something to do with that as #locals
> are locals in the context of files, not of macros... or am I wrong again?...

Um... I think you are. #locals are indeed local in the context of include
files as long as they're not inside macros - but when they are inside a
macro, they're local to that macro.

> > So, standard macro libs are a good thing, but sometimes you better not #include
> > but copy & paste them :P
>
> Much better than simply having a bytecompiled language with true scoping
> rules and modules. ;)

Yeah. Think of it: If we had pre-compiled bytecode include files, how could we
ever copy & paste the macros into our main scene files for speedup? :P


> ah, boost 1.38 finally compiled... :)
>
> beta here we go!

Have fun!



From: nemesis
Subject: Re: Caching parsed code
Date: 16 Apr 2009 00:35:24
Message: <49e6b58c@news.povray.org>
clipka wrote:
> nemesis <nam### [at] nospam-gmailcom> wrote:
>> Friggin' insane.  Without even peeking at the code, I'm guessing the
>> lack of proper scoping rules has something to do with that as #locals
>> are locals in the context of files, not of macros... or am I wrong again?...
> 
> Um... I think you are. #locals are indeed locals in the context of include files
> as long as they're not inside macros - but when they're inside macros, they're
> local to that one.

I assumed files were reopened when calling macros exactly to provide 
them with some clunky local context. :P  If not, I don't see the reason 
for the reopening...

I'll dig in now, but since my point is replacing all that, I don't think 
I'll waste my time trying to understand it in the first place... :P

> Yeah. Think of it: If we had pre-compiled bytecode include files, how could we
> ever copy & paste the macros into our main scene files for speedup? :P

Oh, I almost forgot pre-compiled includes... but so as not to scare people
off, it's worth pointing out it should work much like Python does it: 
source and bytecode side by side.  It just loads the bytecode if it has 
already been compiled; if not, it compiles and stores it for next time.



From: clipka
Subject: Re: Caching parsed code
Date: 16 Apr 2009 01:05:00
Message: <web.49e6bb9ae0976a9b255d1edc0@news.povray.org>
nemesis <nam### [at] nospam-gmailcom> wrote:
> I assumed files were reopened when calling macros exactly to provide
> them with some clucky local context. :P  If not, I don't see the reason
> for the reopening...

Because in former times having too many files open simultaneously was a
problem?

If local variable context was an issue, then macros in the main file wouldn't be
able to have local variables.

> I'll dig in now, but since my point is replacing all that, I don't think
> I'll lose my time trying to understand it in the first place... :P

Right.

> Oh, almost forgot pre-compiled includes... but not to scare people off
> it's worth reminding it should be something akin to how Python does,
> source and bytecode side-by-side.  It just loads the bytecode if already
> compiled and if not, compiles and stores for next time.

Yep, I guess so. With some hash of the include file stored in the bytecode
version of course.
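
Something along those lines, perhaps - here is a toy C++ sketch of that
Python-style check.  The ".byc" extension and every name in it are invented
for illustration; none of this exists in POV-Ray:

#include <fstream>
#include <functional>
#include <sstream>
#include <string>

struct CompiledInclude {
    std::size_t source_hash;   // hash of the .inc text at compile time
    std::string bytecode;      // opaque blob for the purposes of this sketch
};

static std::string read_whole_file(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::ostringstream buffer;
    buffer << in.rdbuf();
    return buffer.str();
}

// Pretend compiler - a real one would emit actual bytecode here.
static CompiledInclude compile_include(const std::string& source)
{
    return CompiledInclude{ std::hash<std::string>{}(source), "<bytecode>" };
}

CompiledInclude get_include(const std::string& inc_path)
{
    const std::string source = read_whole_file(inc_path);
    const std::size_t current_hash = std::hash<std::string>{}(source);
    const std::string cache_path = inc_path + ".byc";

    std::ifstream cache(cache_path, std::ios::binary);
    if (cache) {
        CompiledInclude cached;
        cache >> cached.source_hash;
        cache.ignore(1);                       // skip the newline after the hash
        std::getline(cache, cached.bytecode);
        if (cached.source_hash == current_hash)
            return cached;                     // source unchanged: reuse the bytecode
    }

    CompiledInclude fresh = compile_include(source);        // (re)compile...
    std::ofstream out(cache_path, std::ios::binary);
    out << fresh.source_hash << '\n' << fresh.bytecode;     // ...and refresh the cache
    return fresh;
}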



From: Lukas Winter
Subject: Re: Caching parsed code
Date: 16 Apr 2009 10:08:58
Message: <49e73bfa@news.povray.org>
On Wed, 15 Apr 2009 22:35:19 +0200, C wrote:
> Even if the new SDL is lightning fast when you're placing 100 000 blades
> of grass dynamically, it's going to take a while.
If no really complicated calculations (like collision detection) are 
involved in placing them, loading 100 000 blades of grass is going to 
take a while as well.



From: Warp
Subject: Re: Caching parsed code
Date: 16 Apr 2009 11:50:54
Message: <49e753de@news.povray.org>
nemesis <nam### [at] gmailcom> wrote:
> That's unexpected.

>  From just the looks of it, the SDL parser seems so straightforward that 
> it's difficult to reason where any slowness could be coming from. 

  It's not difficult at all:

1) SDL is not byte-compiled. It's parsed and interpreted on the fly.

2) SDL loops are implemented by seeking the input file and continuing
the parsing from there. No caching of previously-parsed code of any kind.

  So not only do you get the overhead of re-parsing the code every time
it loops, you also pay the same file I/O overhead on every iteration.
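
  To picture point 2 (a made-up C++ sketch, not the actual parser code): the
parser remembers the file position of the loop body and seeks the *text* file
back to it on every iteration, re-tokenizing the body each time.

#include <fstream>
#include <string>

// Made-up illustration of a seek-based #while: nothing is cached, so every
// iteration re-reads and re-tokenizes the body text from the stream.
void run_seek_based_while(std::ifstream& scene_file, bool (*condition_still_true)())
{
    const std::streampos body_start = scene_file.tellg();  // just after "#while (...)"
    while (condition_still_true()) {
        scene_file.seekg(body_start);                      // jump back in the text file
        std::string token;
        while (scene_file >> token && token != "#end") {
            // ...tokenize and interpret the body, statement by statement...
        }
    }
}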

  Another issue is that I wouldn't be surprised if the *parsing* itself is
faster in the perl interpreter than in povray. What looks "straightforward"
to you might still not be the fastest way of parsing, tokenizing and
interpreting the code.

> What you did was just measure SDL's slow macro processing

  What "macro processing"? I didn't use any macros.

-- 
                                                          - Warp



From: Warp
Subject: Re: Caching parsed code
Date: 16 Apr 2009 11:52:32
Message: <49e7543f@news.povray.org>
C <the### [at] gmailcom> wrote:
> Even if the new SDL is lightning fast when you're placing 100 000 blades 
> of grass dynamically, it's going to take a while.

  Like what? A half second?

-- 
                                                          - Warp



From: clipka
Subject: Re: Caching parsed code
Date: 16 Apr 2009 13:35:00
Message: <web.49e76bcee0976a9bf708085d0@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> C <the### [at] gmailcom> wrote:
> > Even if the new SDL is lightning fast when you're placing 100 000 blades
> > of grass dynamically, it's going to take a while.
>
>   Like what? A half second?

Depends on how complex those blades are, I guess... and whether they're
duplicates or all individual.



