Delete this message; I can't do it myself from Mozilla Thunderbird. I tried
to unsubscribe and resubscribe several times and tried several methods. It
was a moment of insanity, sorry.
"Mr" <nomail@nomail> wrote:
> clipka <ano### [at] anonymousorg> wrote:
> > A simple POV-Ray scene described in this format might look like this:
> [...]
> > {
> > camera: {
> > up: [ 0, 1, 0 ],
> The first thing bothering me is the combination of colons and brace
> characters in some places, or rather that it seems they can't be used the
> same way elsewhere. Would we have to write camera: { ?
> It would seem a little cluttered to have to use a : just for specifying the
> opening of a block. Do we have to?
I would say that some of these things will have to be looked at from the
perspectives of:
1. the developer / parser
2. the end user
I would like it if we could dispense with some of the brackets altogether, and
just have a LF/CR/NL or new keyword signal the end of a statement.
On the other hand, I _would_ like to have explicit endings for code blocks,
such as
#endcamera
#endif
#endfor
#endwhile
as I think that, in the long run, that would make code easier to follow and
debug from both perspectives.
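A purely hypothetical fragment in that style (the #endcamera terminator is my
speculation, just to make the shape of the idea concrete) might read:

camera
    location 0, 2, -5
    up 0, 1, 0
#endcamera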
> > One possible change to the syntax could be the use of regular braces
> > around list items instead of square brackets to specifically denote
> > vectors, if only to make it more pleasant to read.
There has been some discussion about typing and automatic vector promotion.
Getting rid of brackets and braces altogether would mean a little less typing,
but maybe it would be worth losing that benefit if we had to specify things
like vec2, vec3, vec4, if only to keep at the forefront of our minds what
sort of values we're dealing with in any given instance.
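For instance, an explicitly typed variant of the camera snippet quoted above
(the vec3 spelling is just an illustration, nothing settled) might look like:

camera: {
    up: vec3( 0, 1, 0 ),   // unambiguously a 3-component vector
    angle: 54              // a plain scalar - no promotion guesswork
}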
> > I'd also love to add ranges to the set of types, using a syntax akin to
> > this:
> >
> > [ 1 .. 20 ] // range from 1 to 20, both inclusive
> > ( 1 .. 20 ) // range from 1 to 20, both exclusive
> > [ 1 .. 20 ) // range from 1 inclusive to 20 exclusive
> > ( 1 .. 20 ] // range from 1 exclusive to 20 inclusive
Ranges would be very nice, but maybe I would like to get rid of the parentheses
in favor of a leading keyword, so that we could instead have all manner of
parentheses available for use in grouping terms in equations without making
parsing (more of) a complicated ordeal.
So, the keywords for the above examples might be ii, ee, ie, ei.
Then we could write equations like val = [sin (x+3) / pi] + (tau/6) -
abs{cos[(q/360)+(<n/0.5>+0.5)]};
It would also be nice to drop the requirement for #declare and just write
x = 3;
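Spelled out, the four bracketed ranges quoted above might then become (the
range keyword is as hypothetical as the ii/ee/ie/ei suffixes):

r1 = range ii 1 20;   // both ends inclusive, like [ 1 .. 20 ]
r2 = range ee 1 20;   // both ends exclusive, like ( 1 .. 20 )
r3 = range ie 1 20;   // 1 inclusive, 20 exclusive, like [ 1 .. 20 )
r4 = range ei 1 20;   // 1 exclusive, 20 inclusive, like ( 1 .. 20 ]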
> > symbols as (entirely optional) syntactic sugar.
This would be very nice, especially for placing the text/symbol in the render.
The same would go for a mechanism for exposing the text of a parsed line of code
to the SDL.
What I mean by that is a mechanism similar to the one in a spreadsheet,
whereby if cell A3 has (x+1)/10 in it, then formula (A3) returns the string
"(x+1)/10".
> But with the prerequisite that they should have an explicit alternative for
> when we don't know, or have no internet to check, the code to type. Most
> people never enter a special Unicode number in their whole life. But maybe
> the parsing times could really be worth that learning?
We could do that with an include file, like we have with functions.inc -
symbol.inc could have #declare sym_nabla = symbol (U+2207); or however it would
get coded.
On 09.06.2021 at 20:13, Bald Eagle wrote:
>>> I'd also love to add ranges to the set of types, using a syntax akin to
>>> this:
>>>
>>> [ 1 .. 20 ] // range from 1 to 20, both inclusive
>>> ( 1 .. 20 ) // range from 1 to 20, both exclusive
>>> [ 1 .. 20 ) // range from 1 inclusive to 20 exclusive
>>> ( 1 .. 20 ] // range from 1 exclusive to 20 inclusive
>
> Ranges would be very nice, but maybe I would like to get rid of the parentheses
> in favor of a leading keyword, so that we could instead have all manner of
> parentheses available for use in grouping terms in equations without making
> parsing (more of) a complicated ordeal.
>
> So, the keywords for the above examples might be ii, ee, ie, ei.
>
> Then we could write equations like val = [sin (x+3) / pi] + (tau/6) -
> abs{cos[(q/360)+(<n/0.5>+0.5)]};
> It would also be nice to drop the requirement for #declare and just write
> x = 3;
I think we can have both.
The style of range notation is deliberately chosen to make it easy to
determine what is the start of something and what the end; for instance,
I personally prefer the following alternative notation because I find it
more intuitive with respect to which end is "inclusive" and which is
"exclusive", but _that_ would indeed seriously complicate parsing:
[ 1 .. 20 ] // range from 1 to 20, both inclusive
] 1 .. 20 ] // range from 1 exclusive to 20 inclusive
As for the proposed syntax further above, distinguishing an arbitrary
mathematical expression from a range should be easy: All the parser
needs to do is look at the "..", which we can take as an operator that
takes two numeric values and returns what we might call a "naked range".
Parentheses and/or brackets around such a "naked range" would then
convert that into what we might call a "qualified range". (Further
wrapping a "qualified range" in more parentheses or brackets would have
no additional effect.)
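To make the distinction concrete (a sketch only):

1 .. 20             // a "naked range": the .. operator applied to two numbers
[ 1 .. 20 )         // a "qualified range": the brackets add the
                    // inclusive/exclusive information
( ( 1 .. 20 ] )     // extra parentheses around a qualified range add nothing
( (1+2) .. (4*5) ]  // parentheses inside the operands still just group terms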
>> But with the prerequisite that they should have an explicit alternative for
>> when we don't know, or have no internet to check, the code to type. Most
>> people never enter a special Unicode number in their whole life. But maybe
>> the parsing times could really be worth that learning?
>
> We could do that with an include file, like we have with functions.inc -
> symbol.inc could have #declare sym_nabla = symbol (U+2207); or however it would
> get coded.
Explicit ASCII alternatives, hard-baked into the language, would be a
must, IMO. As I mentioned, Unicode symbols would be syntactic sugar. The
ASCII constructs would be the real deal, while the Unicode symbols would
be considered shortcuts.
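For instance (illustrative only; no such function has been agreed on), the
gradient operator might get a canonical ASCII spelling, with the Unicode
symbol accepted as a mere alias:

g = grad (f);   // canonical ASCII form, always available
g = ∇ (f);      // optional Unicode shorthand, identical meaning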
clipka <ano### [at] anonymousorg> wrote:
> Explicit ASCII alternatives, hard-baked into the language, would be a
> must, IMO. As I mentioned, Unicode symbols would be syntactic sugar. The
> ASCII constructs would be the real deal, while the Unicode symbols would
> be considered shortcuts.
Okay, and do you confirm that this kind of thing would have a significant
impact on parse time? Like, linearly: if you divide the character count by
two, do you get half the parse time?
On 10.06.2021 at 13:18, Mr wrote:
> clipka <ano### [at] anonymousorg> wrote:
>
>> Explicit ASCII alternatives, hard-baked into the language, would be a
>> must, IMO. As I mentioned, Unicode symbols would be syntactic sugar. The
>> ASCII constructs would be the real deal, while the Unicode symbols would
>> be considered shortcuts.
>
> Okay, and do you confirm that this kind of thing would have a significant
> impact on parse time? Like, linearly: if you divide the character count by
> two, do you get half the parse time?
No, parser performance is not that simple.
A good parser (which POV-Ray's old one is not by any stretch, and even
the overhauled one is only a step on the way there) will just _scan_ the
whole file once (i.e. identify the start and end of each character sequence
that looks like a token at first glance - e.g. sequences that look like
numbers, sequences that look like keywords or identifiers, sequences that
look like operators, etc.), _tokenize_ it once (i.e. translate those
character sequences into internal numeric IDs, aka tokens), and from
there on just juggle those IDs.
The next steps would be to either...
- walk through those tokens and "execute" them, implementing loops by
processing the corresponding tokens over and over again; in this case
processing the loops again and again would be the bottleneck.
- digest that token sequence even further, "compiling" it into something
that can be executed so efficiently that it might have a chance to
become negligible compared to the time spent scanning and tokenizing; but
to achieve that, the effort to bring it into this efficient form will
itself outweigh the effort of scanning and tokenizing.
In either case, the genuinely time-consuming portions of parsing will
work on a representation in which the number of characters comprising
the keywords or operands will have become entirely irrelevant.
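As a rough illustration (the token names below are invented for this example,
not POV-Ray's actual internal IDs), a source line such as

up: [ 0, 1, 0 ],

would, once scanned and tokenized, be processed as something like

IDENT("up") COLON LBRACKET NUM(0) COMMA NUM(1) COMMA NUM(0) RBRACKET COMMA

and at that point it makes no difference whatsoever whether the identifier
was spelled "up" or "camera_up_vector".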
Bump! (Unless discussion is going on elsewhere; if so, sorry for missing it.)
Not knowing much about JSON yet, I have a remark / question for the experts.
I also like the idea of using line endings to end some statements. To me that
is one cause of the popularity of Python: staying as close to pseudocode as
possible in the core functionality. (Which doesn't prevent using stuff like
brackets for list comprehensions or, more recently, type annotations,
depending on the scope of the snippet or project.) There are even days when
you forget the colon after concentrating on the formulation of an if clause,
and you think: couldn't we do without it?
However, I have to assume that some of the syntax previously mentioned is
organically intertwined with JSON's core syntax... but I don't have the
experience to tell which.
So my question to the experts is: could end-of-line statement termination
still be compatible with the proposed JSON paradigm / standard? Maybe, in
the worst case, breaking away from the standard but by choosing a subset of
it... or whatever shift...?
(I only ask because it would seem that we are at least two in favour of such
a feature, but I for one would gladly give the request up once a reason is
explained, such as a performance gain, etc...)
On 12.07.2021 at 15:01, Mr wrote:
> So my question to the experts is: could end-of-line statement termination
> still be compatible with the proposed JSON paradigm / standard? Maybe, in
> the worst case, breaking away from the standard but by choosing a subset of
> it... or whatever shift...?
One of the key features of the proposed language would be that any valid
JSON file would automatically qualify as a valid POV-Ray SDL file
(provided the "document object model", aka "schema", matches, i.e. the
hierarchy of how and what stuff is nested where).
Allowing for line endings to terminate statements might break that
compatibility, because the JSON standard allows line endings anywhere it
allows for whitespace.
I used JSON as a starting point for the proposed SDL format not only to
start _somewhere_ but specifically because being 100% compatible with
JSON would have the advantage that there are tools and libraries galore
out there to generate that format.
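For example, a file like the following, emitted by any off-the-shelf JSON
library, would then already be a valid scene description, provided the
schema matches (the key names here are purely illustrative):

{
    "camera": { "up": [ 0, 1, 0 ], "location": [ 0, 2, -5 ] },
    "objects": [
        { "type": "sphere", "center": [ 0, 1, 0 ], "radius": 1 }
    ]
}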
JSON is one of just a handful of the most popular formats for storage and
exchange of structured data:
- XML
- JSON
- YAML
XML is painfully verbose, and therefore not an option. YAML is very
concise, but assigns semantics not only to line endings but also to
indentation, and that is something I'm anything but a fan of.
Which leaves us with JSON as the next most obvious choice - which
happens to be similar to POV-Ray's current SDL both in terms of
verbosity and overall look & feel (thanks to both ultimately being
inspired by C).
Also, categorically making any line ending end a statement has the big
drawback that any statement must be written on a single line. To work
around this, the statement-ending semantics of line endings would have
to be weakened depending on context, which in turn would add more
complexity to the parser, and moreover to the language itself,
potentially making it difficult for users to grasp how the line-ending
rules work.
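A small sketch of the problem (hypothetical syntax): with a hard line-ending
rule, the first form below would parse, while the second, equally natural,
layout would be cut off mid-statement:

translate: [ 1, 2, 3 ]

translate: [ 1,
             2, 3 ]    // the line ending after "1," would already
                       // have terminated the statement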
An alternative would be to make some of the commas optional, but at
least as soon as we add certain features of our own, we'd run into
similar problems with avoiding ambiguities.
> (I only ask because it would seem that we are at least two in favour of such
> a feature, but I for one would gladly give the request up once a reason is
> explained, such as a performance gain, etc...)
Compatibility with the base format, keeping the format free from
ambiguities, and keeping the format reasonably easy for users to grasp.
That's pretty much all the reason there is to not assign special
semantics to line endings.
In terms of performance, it probably wouldn't make much of a difference.
clipka <ano### [at] anonymousorg> wrote:
> On 12.07.2021 at 15:01, Mr wrote:
>
> > So my question to the experts is: could end-of-line statement termination
> > still be compatible with the proposed JSON paradigm / standard? Maybe, in
> > the worst case, breaking away from the standard but by choosing a subset of
> > it... or whatever shift...?
>
> One of the key features of the proposed language would be that any valid
> JSON file would automatically qualify as a valid POV-Ray SDL file
> (provided the "document object model", aka "schema", matches, i.e. the
> hierarchy of how and what stuff is nested where).
>
> Allowing for line endings to terminate statements might break that
> compatibility, because the JSON standard allows line endings anywhere it
> allows for whitespace.
>
>
> I used JSON as a starting point for the proposed SDL format not only to
> start _somewhere_ but specifically because being 100% compatible with
> JSON would have the advantage that there are tools and libraries galore
> out there to generate that format.
>
> JSON is one of just a handful of the most popular formats
> for storage and exchange of structured data:
>
> - XML
> - JSON
> - YAML
>
> XML is painfully verbose, and therefore not an option. YAML is very
> concise, but assigns semantics not only to line endings but also to
> indentation, and that is something I'm anything but a fan of.
[...]
Thanks a lot for this answer clarifying the stakes. Now I'll feel more at
peace when POV4 keeps its tolerant, semantically meaningless line-ending
behaviour. I would also hate for it to be like XML compared to the other two
options. So I looked here:
https://levelup.gitconnected.com/json-vs-yaml-6aa0243aefc6
Coming from Python, I know that my a priori not (yet) feeling the same way
about this is probably biased, but could you please also expand on why, as a
user, you would prefer abundant punctuation over a meaningful indentation
system?