Announcement: Moray acquired by POV-Ray; to be released as Open Source
From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 07:00:13
Message: <45d2f9cd@news.povray.org>
Warp wrote:
>   You know, "requires a C++ compiler" and "is C++" (and not C) are two
> different things. ;)
> 
>   Those gigantic switch-case blocks are the most typical C-style code
> in POV-Ray. 

But those are in the parser and not used for dynamic binding!?!

	Thorsten



From: Warp
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 07:18:48
Message: <45d2fe28@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> But those are in the parser and not used for dynamic binding!?!

  They are not doing dynamic binding because they are written in C.
However, the functionality they are performing is something quite
typical of what dynamic binding is for. In typical OO code you don't
write such gigantic switch-case blocks but instead you inherit from
a base class which has the proper virtual functions to do the same
task.
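
  As a rough illustration of the contrast (the names below are invented and
are not the actual POV-Ray code), the two styles might be sketched like
this:

// C-style dispatch: one central switch over a type code.
#include <iostream>
#include <memory>
#include <vector>

enum ObjectType { OBJ_SPHERE, OBJ_BOX };

void Parse_Object(ObjectType type)
{
    switch (type)
    {
        case OBJ_SPHERE: std::cout << "parse a sphere\n"; break;
        case OBJ_BOX:    std::cout << "parse a box\n";    break;
    }
}

// OO-style alternative: a base class with a virtual function; each element
// type carries its own parsing code, so no central switch is needed.
struct ObjectParser
{
    virtual ~ObjectParser() {}
    virtual void Parse() const = 0;
};

struct SphereParser : ObjectParser
{
    void Parse() const { std::cout << "parse a sphere\n"; }
};

struct BoxParser : ObjectParser
{
    void Parse() const { std::cout << "parse a box\n"; }
};

int main()
{
    Parse_Object(OBJ_SPHERE);

    std::vector<std::unique_ptr<ObjectParser>> parsers;
    parsers.emplace_back(new SphereParser);
    parsers.emplace_back(new BoxParser);
    for (const std::unique_ptr<ObjectParser>& p : parsers)
        p->Parse();    // dynamic binding picks the right Parse()
    return 0;
}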

-- 
                                                          - Warp



From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 08:09:44
Message: <45d30a18@news.povray.org>
Warp wrote:
> Thorsten Froehlich <tho### [at] trfde> wrote:
>> But those are in the parser and not used for dynamic binding!?!
> 
>   They are not doing dynamic binding because they are written in C.
> However, the functionality they are performing is something quite
> typical of what dynamic binding is for. In typical OO code you don't
> write such gigantic switch-case blocks but instead you inherit from
> a base class which has the proper virtual functions to do the same
> task.

I think we are talking about two different things here. The switch-case
statements in the parser neither emulate nor replace, nor in any other way
simulate, anything even remotely like dynamic binding. Remember, the POV-Ray
scanner and parser are a standard recursive descent implementation, and
interpreting a language requires staged conditional code execution. There is
no place for dynamic binding in a parser for those tasks; it is simply a
completely different technique which has no application here.

	Thorsten



From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 08:36:29
Message: <45d3105d@news.povray.org>
Thorsten Froehlich wrote:
> Warp wrote:
>> Thorsten Froehlich <tho### [at] trfde> wrote:
>>> But those are in the parser and not used for dynamic binding!?!
>>   They are not doing dynamic binding because they are written in C.
>> However, the functionality they are performing is something quite
>> typical of what dynamic binding is for. In typical OO code you don't
>> write such gigantic switch-case blocks but instead you inherit from
>> a base class which has the proper virtual functions to do the same
>> task.
> 
> I think we are talking about two different things here.

To elaborate some more:

I guess your view is of the SDL as just that: a static description (in the
layer above preprocessing via macros and so on) of objects that is read
from a file and replicated in memory. While this perspective may appear
correct at first, second and third sight, it does not actually represent
the nature of a scene description.

Pressing an SDL into a completely get/set-driven form (I did this at work
for VRML 97 a few years ago, so I cannot share the code that would
illustrate it), with object descriptions created by binding the get/set
methods as parser rules, may be appealing. Unfortunately, implementing it,
even with the most advanced template structures to support it, is not all
that easy. Apart from the approximately two abstraction layers in the form
of intermediate template functions (which cost nothing, but *are* visible
when debugging), defining an object almost always involves some terminal
rules that depend on other information. In essence, you end up with
interdependencies between data and previously parsed data, or (more
difficult) data that is still to be parsed or may be entirely optional.

Dealing with these cases is certainly possible, but you end up with more
than just get/set rules. Instead, you end up with rules at object creation
to set default values, rules for setting parsed data, rules for
post-processing results after parsing everything else, and rules for setting
data after that. All of those share a lot of dependencies. Factoring each
into separate methods is possible. Factoring each special handling case into
methods is possible as well. However, suddenly what would have been 10 case
statements and some leading and trailing code has turned into 10 methods,
each calling two or three other methods that contain the shared code.

Each such method contains only a handful of lines of code. And conventional
wisdom holds that small methods are easy to maintain, but if you have a
swarm of 10 methods, depending on about 20 shared methods, all working on
the same object, you suddenly have decomposed sequential conditional code
into a complex tree of method calls. The code will be easy to read for sure,
*but* it will no longer be as easily understood.

Does this mean the get/set pattern does not work for an SDL? No, absolutely
not. It just means that a switch-case pattern will still be very useful: the
parser needs to be abstracted such that all possible token values are
sequential, starting at zero, and then you get a very efficient switch-case
statement (the compiler can turn it into a jump table). Even better, you
also retain easy-to-maintain code that others can understand quickly. Call
it a tradeoff between maintainability and elegance if you want; it certainly
is, but it is also a rather pragmatic approach and has no performance
drawbacks whatsoever. It may not be the most elegant,
and has no performance drawbacks whatsoever. It may not be the most elegant,
but it will certainly be fast and easy to maintain. In the end, that is what
counts the most...
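
To make that concrete, here is a minimal sketch (the token names are made
up, not the actual POV-Ray tables): with token codes that are sequential
and start at zero, the compiler can turn the switch into a simple jump
table.

#include <cstdio>

// Hypothetical token codes, sequential and starting at zero.
enum TokenId
{
    TOKEN_SPHERE = 0,
    TOKEN_BOX,
    TOKEN_RADIUS,
    TOKEN_COUNT        // number of tokens, handy for sizing tables
};

void Parse_Token(TokenId token)
{
    // Dense, zero-based values make a jump table possible.
    switch (token)
    {
        case TOKEN_SPHERE: std::printf("begin a sphere\n");   break;
        case TOKEN_BOX:    std::printf("begin a box\n");      break;
        case TOKEN_RADIUS: std::printf("read a radius\n");    break;
        default:           std::printf("unexpected token\n"); break;
    }
}

int main()
{
    Parse_Token(TOKEN_SPHERE);
    Parse_Token(TOKEN_RADIUS);
    return 0;
}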

	Thorsten



From: Warp
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 09:31:25
Message: <45d31d3d@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> However, suddenly what would have been 10 case
> statements and some leading and trailing code has turned into 10 methods,
> each calling two or three other methods that contain the shared code.

> Each such method contains only a handful of lines of code. And conventional
> wisdom holds that small methods are easy to maintain, but if you have a
> swarm of 10 methods, depending on about 20 shared methods, all working on
> the same object, you suddenly have decomposed sequential conditional code
> into a complex tree of method calls. The code will be easy to read for sure,
> *but* it will no longer be as easily understood.

  The question is not how many methods or case branches are needed, but
how the code is organized. The main point of OO modularization is that
everything related to one element of the input is located in the same
place, and each such implementation is similar in structure. Once you know
how such element parser objects are implemented and what their structure
is, it's easy to understand any given one.

  I remember once wanting to add a new keyword to the POV-Ray 3.5 SDL.
I don't remember the exact details any longer, but I had to add at least
two separate 'case' lines in two separate files, as well as at least two
new elements to two arrays in two separate files. Plus the implementation
of that feature in its own separate file. (As I said, I don't remember the
details well anymore, so I may be misremembering something, but it felt
needlessly laborious back then.)
  The information about that new token was scattered across many separate
files, none of which were specific to that feature. This is not very
modular.

  A well-implemented modular parser does not need this. Instead, you
create *one* new class in one new file, and then add the name of that
class into *one* existing array (or whatever). There's no need to modify
existing code anywhere other than in this one place.
The class you created will contain all the necessary info for parsing
the new input element it was created for. The existing parser file
where you add the name of this class will be completely abstract: it only
contains generic code for parsing, and no token-specific code at all
(unlike pov3.6 and earlier, which do have token-specific code there).
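
  A minimal sketch of that kind of scheme (all names are invented, this is
not an actual POV-Ray design) might look something like this:

#include <iostream>
#include <map>
#include <memory>
#include <string>

// Generic interface: the central parser knows nothing token-specific.
struct ElementParser
{
    virtual ~ElementParser() {}
    virtual void Parse() const = 0;
};

// One class per input element; in a real project each lives in its own file.
struct SphereParser : ElementParser
{
    void Parse() const { std::cout << "parsing a sphere\n"; }
};

// Adding a new keyword means writing one new class like this one...
struct TorusParser : ElementParser
{
    void Parse() const { std::cout << "parsing a torus\n"; }
};

// ...and adding one entry to this single registration table.
std::map<std::string, std::shared_ptr<ElementParser>> Make_Registry()
{
    std::map<std::string, std::shared_ptr<ElementParser>> registry;
    registry["sphere"] = std::make_shared<SphereParser>();
    registry["torus"]  = std::make_shared<TorusParser>();  // the only change
    return registry;
}

int main()
{
    const std::map<std::string, std::shared_ptr<ElementParser>> registry =
        Make_Registry();
    registry.at("torus")->Parse();   // generic dispatch, no switch-case
    return 0;
}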

  The advantage of this is, of course, that all the code related to a
specific token/feature is contained in its own module instead of being
scattered among several gigantic files. If you need to, e.g., change that
feature somehow, you don't need to hunt through all the files where it is
mentioned; you change only that one module.

  Will this cause some kind of speed penalty in parsing? Maybe, but I bet
it's nothing radical. Probably negligible compared to the most time-consuming
tasks done during parsing (such as allocating and initializing objects).

> It may not be the most elegant,
> but it will certainly be fast and easy to maintain. In the end, that is what
> counts the most...

  It's easy to maintain only if you know the parser and all the gigantic
switch-case-blocks by heart, and you can cite from memory all the files
which you need to modify if you want to eg. add a new token.

-- 
                                                          - Warp



From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 09:52:15
Message: <45d3221f@news.povray.org>
Warp wrote:
>   A well-implemented modular parser does not need this. Instead, you
> create *one* new class in one new file, and then add the name of that
> class into *one* existing array (or whatever). 

That may or may not be desirable. Usually it actually is not, as it has a
serious negative effect on performance. Not to mention that it is not
actually possible if the parser is part of a public interface, where you of
course always have a header and a source file that need to be modified.

	Thorsten



From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 09:55:42
Message: <45d322ee@news.povray.org>
Warp wrote:
>> It may not be the most elegant,
>> but it will certainly be fast and easy to maintain. In the end, that is what
>> counts the most...
> 
>   It's easy to maintain only if you know the parser and all the gigantic
> switch-case-blocks by heart, and you can cite from memory all the files
> which you need to modify if you want to eg. add a new token.

I made no statement that a parser would or should be implemented exactly
like the current one, did I?

Even the current one does not consist only of "gigantic switch-case-blocks",
because there simply isn't that much to handle in 90% of the cases. There
are exceptions, but they are just that, exceptions (i.e. texture parsing).

	Thorsten



From: Warp
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 11:44:58
Message: <45d33c8a@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> That may or may not be desirable. Usually it actually is not, as it has a
> serious negative effect on performance.

  A serious negative effect on performance? Like, e.g., parsing becoming 1%
slower?

  Switch-case blocks are not that much faster compared to dynamic binding.
Besides, I clearly remember you saying in the past that parsing the input
file itself is by far not the slowest operation done at the parsing stage
(the slowest being allocating and initializing objects).

> Not to mention that it is not
> actually possible if the parser is part of a public interface, where you of
> course always have a header and a source file that need to be modified.

  Why would the public interface of the parser have any info on the
implementation details of the input file?

  A well-implemented parser *abstracts* away these implementation details.
The user of the parser is not interested in knowing whether a sphere is
created with the keyword "sphere", "Sphere" or "ball". The user of the
parser is only interested in getting the objects (in an abstract way).
Putting implementation details in the public interface is only going
to cause problems: code throughout the entire application may start
depending on those implementation details, making it harder to modify.

  I see no reason why adding a new token could not be done by simply
adding the name of a new class in one .cpp file (besides implementing
that new class, of course).
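
  For example (hypothetical names, only a sketch of the idea), the public
part of such a parser could be limited to an abstract result type and one
entry point, while the keyword spellings stay hidden in the implementation
file:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// The part that would live in the public header: no keywords mentioned.
struct SceneObject
{
    virtual ~SceneObject() {}
    virtual std::string Describe() const = 0;
};

std::vector<std::unique_ptr<SceneObject>> Parse_Scene(const std::string& source);

// The part that would live in the .cpp: keyword spellings stay private
// here, so adding or renaming a token never touches the public header.
namespace
{
    struct Sphere : SceneObject
    {
        std::string Describe() const { return "sphere"; }
    };
}

std::vector<std::unique_ptr<SceneObject>> Parse_Scene(const std::string& source)
{
    std::vector<std::unique_ptr<SceneObject>> scene;
    if (source.find("sphere") != std::string::npos)
        scene.push_back(std::unique_ptr<SceneObject>(new Sphere));
    return scene;
}

int main()
{
    for (const std::unique_ptr<SceneObject>& obj : Parse_Scene("sphere { ... }"))
        std::cout << "parsed a " << obj->Describe() << "\n";
    return 0;
}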

-- 
                                                          - Warp



From: Thorsten Froehlich
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 12:18:57
Message: <45d34481@news.povray.org>
Warp wrote:
>   Switch-case blocks are not that much faster compared to dynamic binding.

They are not faster by the code itself, but by design. As I outlined, with
dynamic binding you get an impenetrable mess of dependent methods. That is
slow. How slow? Well, it depends on the object's details. Expect up to twice
the time required for processing. Does that translate to a 20%, 50% or
greater slowdown? Implementing it would tell, but there is no need to
implement something already known to be slower. As I said, I have done it
before. How fast was it to begin with? The VRML 97 parser I worked on was
faster than the equivalent generated by a standard C parser generator. For
some unfortunate VRML 97 object types, abstraction at the method level would
have caused needlessly complicated, detached processing that ate up most of
the parser's performance advantage over the parser-generator version.

>   A well-implemented parser *abstracts* away these implementation details.

That is pure theory and completely inappropriate for maintainable code.

>   I see no reason why adding a new token could not be done by simply
> adding the name of a new class in one .cpp file (besides implementing
> that new class, of course).

Well, from experience I know there are good reasons to make certain choices
for speed and maintainability. So I guess in the end it just comes down to
believing in my experience in building very fast and maintainable parsers,
including all the lessons learnt from problems caused by overdoing the
design abstraction. The best theory simply isn't always the best (in all
dimensions) implementation.

	Thorsten



From: Warp
Subject: Re: Announcement: Moray acquired by POV-Ray; to be released as OpenSource
Date: 14 Feb 2007 17:11:22
Message: <45d38909@news.povray.org>
Thorsten Froehlich <tho### [at] trfde> wrote:
> >   A well-implemented parser *abstracts* away these implementation details.

> That is pure theory and completely inappropriate for maintainable code.

  Are you basically saying that abstract code is unmaintainable?

  Are you next going to say that modules make code hard to reuse? ;)

-- 
                                                          - Warp



