POV-Ray : Newsgroups : povray.beta-test : Optional way to handle #include files?
  Optional way to handle #include files? (Message 1 to 10 of 18)  
From: stbenge
Subject: Optional way to handle #include files?
Date: 29 Jul 2010 16:50:49
Message: <4c51e9a9$1@news.povray.org>
Just thought I'd get some feedback before writing up a feature request. 
Right now, accessing functions and macros from an included file can be 
*slow*. It's not so bad when you're only accessing something once or 
twice; the problem shows up when you call something many times.

As an example, if I trace() an object to place 5000 particles, using 
Point_At_Trans to orient them, the scene takes 6.672 seconds to parse 
when I include transforms.inc. If I copy all the necessary macros into 
the scene and don't include anything, it takes only 1.297 seconds to parse.
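
A rough sketch of that kind of setup, for illustration (not the actual 
scene; the ground object, the cone shape and the exact numbers are 
placeholders):

  #include "transforms.inc"      // provides Point_At_Trans()

  #declare Ground = plane { y, 0 }
  #declare Rnd    = seed(123);

  #declare I = 0;
  #while (I < 5000)
    #declare Norm = <0, 0, 0>;
    #declare Pos  = trace(Ground,
                          <rand(Rnd)*20 - 10, 10, rand(Rnd)*20 - 10>,
                          -y, Norm);
    #if (vlength(Norm) > 0)      // trace() found an intersection
      cone { 0, 0.1, 0.3*y, 0
        Point_At_Trans(Norm)     // macro call into transforms.inc, once per particle
        translate Pos
      }
    #end
    #declare I = I + 1;
  #end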

It would be nice if you could include files, or objects within certain 
files, as if they existed inside the scene itself rather than in external 
files, if that makes any sense. Something like:

  #include  "transforms.inc"  into_scene

or:

  #make_local  Point_At_Trans()

would be nice. What do you think? Would this kind of functionality be 
worth it?

Sam



From: Le Forgeron
Subject: Re: Optional way to handle #include files?
Date: 29 Jul 2010 17:52:28
Message: <4c51f81c$1@news.povray.org>
On 29/07/2010 22:50, stbenge wrote:
> Just thought I'd get some feedback before writing up a feature request.
> Right now, accessing functions and macros from an included file can be
> *slow*. It's not so bad when you're only accessing something once or
> twice; the problem shows up when you call something many times.

> would be nice. What do you think? Would this kind of functionality be
> worth it?

In one word: no. (IMHO)

Your problem is not only the extra macros, but also the opening and
parsing of the included file multiple times: IIRC, macros are stored as a
file and position in memory; when you invoke one, the file gets (re-)opened,
the parser is positioned at the right place, parsing occurs, and on the
final #end the file is closed.

So, for a macro in a separate #include file, each call means:
 open()
 seek()
 ...
 close()
 continue original file

For an inline macro (one defined in the current file), only a:
 seek()
 ...
 continue original file

Now, what is the point of optimising the parser when the actual work of
rendering the picture might take hours, days or weeks?
(And a smart OS should be caching that file automatically anyway, though
open() can still be slow.)

If you really want performance, a single file with a copy of the needed
macros (or even better, no macro calls at all) is all you need.
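
For example, a self-contained sketch of that idea, with a small made-up 
macro (Ring_Place is not a standard macro) defined directly in the scene 
file:

  // Local macro: it lives in the scene file itself, so invoking it only
  // needs a seek within the file already being parsed, not an open/close.
  #macro Ring_Place(Index, Count, Radius)
    translate x * Radius
    rotate    y * (360 * Index / Count)
  #end

  #declare I = 0;
  #while (I < 100)
    sphere { 0, 0.2
      Ring_Place(I, 100, 5)
    }
    #declare I = I + 1;
  #end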



From: Christian Froeschlin
Subject: Re: Optional way to handle #include files?
Date: 29 Jul 2010 17:54:59
Message: <4c51f8b3$1@news.povray.org>
stbenge wrote:

>  #include  "transforms.inc"  into_scene

I'd support the feature request that include handling
should be more efficient but without special syntax.
After all, once you have implemented the functionality
there is no need to have slow includes anymore. There
might be some additional memory use for the include
files but it should be manageable.

Hmm ... a brute-force approach would be to simply create
one huge scene file in a preparsing step that expands only
the #include directives, and then render the resulting file.



From: Thorsten Froehlich
Subject: Re: Optional way to handle #include files?
Date: 29 Jul 2010 18:21:03
Message: <4c51fecf$1@news.povray.org>
On 29.07.10 22:50, stbenge wrote:
> Just thought I'd get some feedback before writing up a feature request.
> Right now, accessing functions and macros from an included file can be
> *slow*. It's not so bad when you're only accessing something once or
> twice; the problem shows up when you call something many times.

The suggestion of caching include files comes up every now and then. The 
issue is that there is no way to determine if an include file is modified by 
some other file. The only truly reliable way to determine this is by
parsing the scene, which defeats any optimization of parsing.

	Thorsten



From: clipka
Subject: Re: Optional way to handle #include files?
Date: 29 Jul 2010 19:37:19
Message: <4c5210af$1@news.povray.org>
On 29.07.2010 23:57, Christian Froeschlin wrote:
> stbenge wrote:
>
>> #include "transforms.inc" into_scene
>
> I'd support the feature request that include handling
> should be more efficient but without special syntax.
> After all, once you have implemented the functionality
> there is no need to have slow includes anymore. There
> might be some additional memory use for the include
> files but it should be manageable.

There is the potential of breaking a macro framework or two: if an 
include file with macros is generated on the fly multiple times with 
the same name but different content, caching the macros would change 
the behaviour.
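
For illustration, a minimal sketch of that kind of pattern (the file name 
generated.inc and the Jitter macro are made up):

  #declare Pass = 0;
  #while (Pass < 3)
    // rewrite the same include file with a different macro body each pass
    #fopen  GEN "generated.inc" write
    #write (GEN, "#macro Jitter(P)\n")
    #write (GEN, "  translate P * ", 0.1 * Pass, "\n")
    #write (GEN, "#end\n")
    #fclose GEN

    #include "generated.inc"     // re-parsing picks up the new definition
    sphere { <Pass, 0, 0>, 0.5 Jitter(x) }

    #declare Pass = Pass + 1;
  #end

Caching Jitter by name on the first pass would silently change what the 
later spheres get.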

I suggest instead leaving such features to a next-generation scene 
language.



From: Darren New
Subject: Re: Optional way to handle #include files?
Date: 29 Jul 2010 22:45:58
Message: <4c523ce6$1@news.povray.org>
clipka wrote:
> On 29.07.2010 23:57, Christian Froeschlin wrote:
>> stbenge wrote:
>>
>>> #include "transforms.inc" into_scene
>>
>> I'd support the feature request that include handling
>> should be more efficient but without special syntax.
> 
> There is the potential of breaking a macro framework or two: if an 
> include file with macros is generated on the fly multiple times 

Of course, if you kept the special syntax (making the caching opt-in), it 
wouldn't break existing macro frameworks.

-- 
Darren New, San Diego CA, USA (PST)
    C# - a language whose greatest drawback
    is that its best implementation comes
    from a company that doesn't hate Microsoft.



From: Warp
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 02:41:35
Message: <4c52741e@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> The suggestion of caching include files comes up every now and then. The 
> issue is that there is no way to determine if an include file is modified by 
> some other file. The only truly reliable way to determine this is by
> parsing the scene, which defeats any optimization of parsing.

  Well, POV-Ray could officially take the stance that "once a macro has
been defined, that's it; it remains like that for the rest of the parsing".
In other words, the possibility of generating a new macro every time the
file containing the macro is included would be removed. However, I doubt
it would affect even 0.01% of users. After all, it's such a level of SDL
hackery that nobody uses it (nor is there really a need for it).

  (Besides, if POV-Ray handles macros as filename-offset pairs, wouldn't
modifying the file in the SDL break that? If the file is modified via SDL,
then the offset of the macro inside the file is subject to change, which
would break the parsing. I haven't tested this in practice, though. If that
is indeed the case, then there is actually no such possibility in practice,
so there's nothing to lose.)
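
For illustration, that scenario might look like this (untested, as said; 
the file name offset_test.inc and the Tag macro are made up):

  // define a macro that lives in a generated external file
  #fopen  GEN "offset_test.inc" write
  #write (GEN, "#macro Tag() sphere { 0, 1 } #end\n")
  #fclose GEN
  #include "offset_test.inc"     // Tag is now recorded as (file name, offset)

  // rewrite the file so the macro starts at a different offset
  #fopen  GEN "offset_test.inc" write
  #write (GEN, "// this extra line shifts everything that follows\n")
  #write (GEN, "#macro Tag() sphere { 0, 1 } #end\n")
  #fclose GEN

  // invoking the macro re-opens the file at the old, now wrong, offset
  Tag()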

-- 
                                                          - Warp



From: Chris Cason
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 02:51:17
Message: <4c527665$1@news.povray.org>
On 30/07/2010 16:41, Warp wrote:
>   (Besides, if POV-Ray handles macros as filename-offset pairs, wouldn't
> modifying the file in the SDL break that? If the file is modified via SDL,

Yes, that's how it's done. Technically, though, as long as the file offset
remained the same, it would still work.

We could I suppose have a "fast_macro" keyword which is explicitly defined
as being cached in memory. But none of us have the time to write this ATM.

Feel up to a challenge? :-)

-- Chris



From: Warp
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 04:04:33
Message: <4c528791@news.povray.org>
Chris Cason <del### [at] deletethistoopovrayorg> wrote:
> On 30/07/2010 16:41, Warp wrote:
> >   (Besides, if POV-Ray handles macros as filename-offset pairs, wouldn't
> > modifying the file in the SDL break that? If the file is modified via SDL,

> Yes, that's how it's done. Technically, though, as long as the file offset
> remained the same, it would still work.

> We could I suppose have a "fast_macro" keyword which is explicitly defined
> as being cached in memory. But none of us have the time to write this ATM.

  I don't see a reason why regular macros cannot be cached in memory.
I doubt anyone has ever used such a contrived way of generating new macros
on the fly as the current method allows (which, as stated, is quite limited,
since the position of the macro's beginning in the file must remain unchanged).

> Feel up to a challenge? :-)

  I seem to remember that a patch has existed which does this. I wonder if
something could be learned from it.

-- 
                                                          - Warp



From: Le Forgeron
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 04:28:16
Message: <4c528d20@news.povray.org>
On 30/07/2010 10:04, Warp wrote:
> Chris Cason <del### [at] deletethistoopovrayorg> wrote:

>   I don't see a reason why regular macros cannot be cached in memory.

Because we all come from an old epoch when memory was scarce. Wasting
memory to cache macros was not worth it when you could just go back to the
file and its position. Moreover, at that time, memory allocation was to be
done once and for all, with a size known at the start... whereas a macro's
length is only really known at the end.

And there is "cheating code" (which I believe still works), like:

================
// file macro_begin.inc
#macro Warp_Is_Great(a,n)
sphere { .... }
#include "macro_end.inc"

//continue with scene description

================
// file macro_end.inc
torus { .... }
#end

================
How would you cache that? (And it can get even trickier.)
Sometimes tricky code is useful.

The sad thing is that, for macro processing, the design of POV-Ray did not
try to reuse the cpp program (directly, not just as a model), but that would
also have had its own issues.

DKBTrace had no fancy macros, nor loops: if you wanted a hundred spheres
along a curve, you wrote a C program (or a shell script, or whatever) to
generate the scene code, then parsed the result with the renderer.

Macros and loops came in handy to avoid that.
If you want to make cached macros, may I suggest using some new keyword
like #inline, with the added constraints that the end of the #inline
must be in the same file as its beginning, and probably that you cannot
redefine an already defined #inline (which might be an issue if you
include a file containing one or more #inline definitions more than once
by mistake). Freeing the #inline definitions as soon as parsing has ended
is of course a nicer use of the memory.
(C++ strings are really nice for today's programmers; it was not so easy
in C, where you also had to remember to handle all possible failures.)

Have a nice day.



