POV-Ray : Newsgroups : povray.beta-test : Optional way to handle #include files?
From: Warp
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 04:04:33
Message: <4c528791@news.povray.org>
Chris Cason <del### [at] deletethistoopovrayorg> wrote:
> On 30/07/2010 16:41, Warp wrote:
> >   (Besides, if POV-Ray handles macros as filename-offset pairs, wouldn't
> > modifying the file in the SDL break that? If the file is modified via SDL,

> Yes, that's how it's done. Technically, though, as long as the file offset
> remained the same, it would still work.

> We could I suppose have a "fast_macro" keyword which is explicitly defined
> as being cached in memory. But none of us have the time to write this ATM.

  I don't see a reason why regular macros cannot be cached in memory.
I doubt anyone has ever exploited the contrived ways of generating new
macros on the fly that the current method allows (which, as stated, are
quite limited anyway, since the beginning of the macro in the file must
remain unchanged).

> Feel up to a challenge? :-)

  I seem to remember that a patch has existed which does this. I wonder if
something could be learned from it.

-- 
                                                          - Warp



From: Le Forgeron
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 04:28:16
Message: <4c528d20@news.povray.org>
On 30/07/2010 10:04, Warp wrote:
> Chris Cason <del### [at] deletethistoopovrayorg> wrote:

>   I don't see a reason why regular macros cannot be cached in memory.

Because we all come from an era when memory was scarce. Spending
memory to cache a macro was not worthwhile when you could simply go
back to the file and its position. Moreover, at that time memory
allocation had to be done once and for all, with a size known up
front... whereas a macro's length is only known once you reach its end.

And there is "cheating" code (which I believe is still OK), like:

================
// file macro_begin.inc
#macro Warp_Is_Great(a,n)
sphere { .... }
#include "macro_end.inc"

//continue with scene description

================
// file macro_end.inc
torus { .... }
#end

================
How would you cache that? (And it can get even trickier.)
Sometimes tricky code is useful.

The sad thing is that for macro processing POV-Ray's design did not
try to reuse the cpp program (directly, not just as a model), though
that would have had its own issues too.

DKBTrace had no fancy macros, nor loops: if you wanted a hundred
spheres along a curve, you wrote a C program (or a shell script, or
whatever) to generate the scene code, then parsed the result with the
renderer.

Macros and loops came in handy to avoid that.
If you want to make cached macros, may I suggest a new keyword such as
#inline, with the added constraints that the end of the #inline must be
in the same file as its beginning, and probably that you cannot
redefine an already defined #inline (which might be an issue if, by
mistake, you include a file containing one or more #inline definitions
more than once). Discarding #inline definitions as soon as parsing has
ended is of course a nicer use of the memory.
(C++ strings are really nice for today's programmer; it was not so easy
in C, where you also had to handle every possible allocation failure.)

Have a nice day.



From: Warp
Subject: Re: Optional way to handle #include files?
Date: 30 Jul 2010 04:40:41
Message: <4c529009@news.povray.org>
Le_Forgeron <jgr### [at] freefr> wrote:
> And there is "cheating code" (but which I believe is still ok) like

> ================
> // file macro_begin.inc
> #macro Warp_Is_Great(a,n)
> sphere { .... }
> #include "macro_end.inc"

> //continue with scene description

> ================
> // file macro_end.inc
> torus { .... }
> #end

> ================
> How would you cache that? (And it can get even trickier.)
> Sometimes tricky code is useful.

  The possibility of having part of the macro's body in a different file
(the macro definition block doesn't even need to be split into two files
as in your example, it's enough to simply have part of the macro's body
in a different file, inserted in the macro with an #include) makes the
caching slightly more complicated because "#include" needs to be handled
in a special way. In other words the "#include" command itself cannot be
cached, but instead it has to be interpreted while caching (so that what
ends up in the cache is not the "#include" command, but what it brings
to the macro definition). I don't see why this would be impossible.

  (Another option is that while transferring the macro to the memory
cache, if an "#include" is found, the macro is then marked as "uncacheable"
and then handled as macros are currently handled. The advantage of this
approach is that if some other "trick" is later discovered which messes up
the caching, the same solution can be used to fix the problem: If the problem
is found while caching, simply discard the caching of that macro and mark it
as "uncacheable".)
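  As a minimal sketch of that fallback strategy (Python, hypothetical
names; the real parser works on tokens rather than source lines), the
idea is to expand #include while caching and abandon the cache whenever
something breaks:

```python
# Minimal sketch (hypothetical names) of the strategy described above:
# while caching a macro body, #include directives are expanded so the
# cache holds what they bring in; if anything goes wrong, caching of
# this macro is simply abandoned (None means "uncacheable", i.e. fall
# back to the current file/offset handling).

def cache_macro_body(lines, read_include):
    """Return the cached body lines, or None if the macro is uncacheable.

    `lines` is the macro body as source lines; `read_include` maps an
    include file name to that file's lines.
    """
    cached = []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith('#include'):
            name = stripped.split('"')[1]
            try:
                # Cache what the #include brings in, not the directive.
                cached.extend(read_include(name))
            except OSError:
                return None  # problem while caching: mark uncacheable
        else:
            cached.append(line)
    return cached
```

Note that this handles the "cheating code" example from earlier in the
thread: the torus and the #end from macro_end.inc simply end up in the
cached body.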

> Macro and loop came handy to avoid that.
> If you want to make cached macro, may I suggest you use some new keyword
> like #inline, with the added constraints that the end of the #inline
> must be in the same file as its beginning, and probably that you cannot
> redefined an already defined #inline.

  It's perfectly possible to detect, while parsing the macro, whether it
has been split across more than one file, so caching/non-caching can be
automated. No need to introduce new keywords and functionality into
the SDL.

-- 
                                                          - Warp



From: stevenvh
Subject: Re: Optional way to handle #include files?
Date: 31 Jul 2010 05:00:01
Message: <web.4c53e5082fe4fe04c0721a1d0@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> The suggestion of caching include files comes up every now and then. The
> issue is that there is no way to determine if an include file is modified by
> some other file. The only always reliable way to determine this is by
> parsing the scene, defeating any optimization on parsing.
>
>  Thorsten

Personally I see no reason why POV-Ray should expect the include file to
change. It could read the file once and subsequently read from the cache.
Alternatively, one could indicate that the file doesn't change between
reads, and that it's safe to read from the cache, by appending "static":

    #include "doesntchange.inc"  static
    #include "maychange.inc"



From: stbenge
Subject: Re: Optional way to handle #include files?
Date: 31 Jul 2010 19:06:48
Message: <4c54ac88@news.povray.org>
Le_Forgeron wrote:
> Now, what is the point of optimising the parser when the actual work of
> rendering the picture might take hours, days or weeks?

It's not the parsing speed of the final render, it's all the previewing 
and tweaking it takes to get there.

> If you really want performance, a single file with a copy of the needed
> macros (or even better, no macro calls at all) is all you need.

I've copied macros and functions from other files into my main include,
and it seems to have sped things up considerably.



From: stbenge
Subject: Re: Optional way to handle #include files?
Date: 31 Jul 2010 19:09:21
Message: <4c54ad21$1@news.povray.org>
Christian Froeschlin wrote:
> stbenge wrote:
> 
>>  #include  "transforms.inc"  into_scene
> 
> I'd support the feature request that include handling
> should be more efficient but without special syntax.
> After all, once you have implemented the functionality
> there is no need to have slow includes anymore. There
> might be some additional memory use for the include
> files but it should be manageable.

But if the implementation increases memory usage, it could become a
problem for some scenes (especially if, like me, you only have a measly
2 GB of RAM).



From: stbenge
Subject: Re: Optional way to handle #include files?
Date: 31 Jul 2010 19:14:03
Message: <4c54ae3b$1@news.povray.org>
Chris Cason wrote:
> We could I suppose have a "fast_macro" keyword which is explicitly defined
> as being cached in memory. But none of us have the time to write this ATM.

Something like that would be great, especially if you could mark *any*
macro as a fast_macro, so that official macros could be used in this
manner without editing the include files they come from.



From: clipka
Subject: Re: Optional way to handle #include files?
Date: 1 Aug 2010 07:58:53
Message: <4c55617d$1@news.povray.org>
On 01.08.2010 01:14, stbenge wrote:
> Chris Cason wrote:
>> We could I suppose have a "fast_macro" keyword which is explicitly
>> defined
>> as being cached in memory. But none of us have the time to write this
>> ATM.
>
> Something like that would be great, especially if you could mark *any*
> macro as a fast_macro, so that official macros could be used in this
> manner without editing the include files they come from.

Good point.

So maybe a suitable syntax would be:

     #macro Foo(...)
       ...
     #end
     #fast_macro Foo

where "#fast_macro IDENTIFIER" would cause the parser to load the
specified macro into memory.

I'd also suggest not to store the content of the macro literally, but 
rather in an already tokenized form, as I guess this would speed up 
things even a bit more.
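A toy illustration in Python (not POV-Ray's actual tokenizer, and the
token grammar here is invented) of why pre-tokenizing helps: the
scanning cost is paid once at definition time, and each invocation
merely replays the stored tokens:

```python
# Toy illustration (hypothetical names, simplified token grammar) of
# storing a macro in tokenized form: tokenize once when #fast_macro is
# seen, then replay the token list on every invocation.

import re

# Very rough SDL-ish tokens: identifiers/directives, numbers, punctuation.
TOKEN_RE = re.compile(r'[A-Za-z_#][A-Za-z_0-9]*|\d+\.?\d*|[{}<>,()]|\S')

def tokenize(source):
    """Split SDL-ish source text into a flat list of tokens, once."""
    return TOKEN_RE.findall(source)

class FastMacro:
    """Sketch of a #fast_macro: tokenized at definition time."""
    def __init__(self, source):
        self.tokens = tokenize(source)   # scanning cost paid exactly once

    def invoke(self):
        return iter(self.tokens)         # parser consumes tokens directly
```

Here `tokenize("sphere { 0, 1 }")` yields
`["sphere", "{", "0", ",", "1", "}"]`, and each `invoke()` just iterates
over that stored list instead of re-scanning the text.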


BTW, when revisiting macro syntax, I think it would be a good idea to 
also introduce "local" macros, in analogy to local variables. But that's 
another story.



From: ingo
Subject: Re: Optional way to handle #include files?
Date: 2 Aug 2010 04:09:13
Message: <Xns9DC86747B136Dseed7@news.povray.org>
in news:4c527665$1@news.povray.org Chris Cason wrote:

> On 30/07/2010 16:41, Warp wrote:
> We could I suppose have a "fast_macro" keyword which is explicitly
> defined as being cached in memory. 

currently we have #include "mannequin.inc"

how about

#from "mannequin.inc" include Hand

or even

#from "mannequin.inc" include Hand #as MyHand

in the last case you could make a fast hardcopy of the macro / function
/ object and still be able to modify the original Hand of mannequin.inc
and (re)use that in your scene with "#from "mannequin.inc" include
Hand".

... to that add namespaces to avoid collisions and do,
#include "mannequin.inc"
mannequin.Hand( , , )

but then we are really into POV 4.0


ingo



From: Warp
Subject: Re: Optional way to handle #include files?
Date: 2 Aug 2010 07:50:03
Message: <4c56b0eb@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> So maybe a suitable syntax would be:

>      #macro Foo(...)
>        ...
>      #end
>      #fast_macro Foo

> where "#fast_macro IDENTIFIER" would cause the parser to load the
> specified macro into memory.

  Maybe I'm being stubborn here, but I still can't understand the need
for such a keyword. What would be the problem in *always* having all
macros as "fast"?

> I'd also suggest not to store the content of the macro literally, but 
> rather in an already tokenized form, as I guess this would speed up 
> things even a bit more.

  This is going towards a byte-compiled VM (much like the user-defined
functions), which has been the suggestion for POV-Ray 4 for a long
time...

-- 
                                                          - Warp




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.