POV-Ray : Newsgroups : povray.off-topic : GOTO
  GOTO (Message 27 to 36 of 36)  
From: Darren New
Subject: Re: GOTO
Date: 17 Oct 2010 13:00:00
Message: <4cbb2b90@news.povray.org>
nemesis wrote:
> like forth, it doesn't need to operate on named arguments, which is taken then
> to be the first in the "stack"... :)

Uh, OK. That's, like, the absolute least important bit of FORTH there, but 
OK. :-)  I think that's more shell syntax than FORTH syntax, really.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Darren New
Subject: Re: GOTO
Date: 17 Oct 2010 13:26:03
Message: <4cbb31ab$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> The destructor semantics is one of the things I think C++ really did get 
>> just right.
> 
>   The proper name for that mechanism would be "RAII" (which might be
> somewhat of a misnomer, as it doesn't fully express what it means).

I think the two are separate, but "RAII" is the best (maybe even "intended") 
way to use the C++ destructor semantics. Certainly the semantics don't 
change if you screw up your RAII. :-)

>   (In principle RAII is not incompatible with garbage collection, so
> conceivably you could have both in the same language.)

Indeed. Especially if you use something like reference counting and deal 
with circular references specially or something. Unfortunately, the fastest 
garbage collectors are the ones that never touch the garbage, so it's hard 
to do this well without having basically a separate heap for objects with 
non-memory destructors, which I think is what we'll eventually start seeing 
in some of these run-time implementations, if we don't start seeing GC-smart 
operating systems first.  (And I'd even count Erlang in that latter 
categorization, since all interaction outside the Erlang semantics goes through 
a "port" kind of construct rather than a function call kind of construct.)

>   According to wikipedia, C++ is not the only language using RAII, and
> mentions Ada as another one. I didn't know that.

I think that's ... stretching it a bit. Sounds like Ada folks trying to 
convince C++ folks they should switch or something. :-)

Ada is very non-orthogonal in its data structures (even more so than C++). 
Objects are basically declared as "records with a vtable" or so, and if your 
data type isn't a type of record (in the Pascal/Algol sense of the word, or 
what C would call a struct), then you don't get to make it an object.

Ah. They're called "controlled types."

http://www.adaic.org/docs/95style/html/sec_9/9-2-3.html

Basically, an object type that inherits from Ada.Finalization.Controlled. 
Inheriting from that type gives you three methods: Initialize, Finalize and 
Adjust. Finalize is like the destructor, and Adjust is sort of like a 
copy/assignment constructor, only more limited. (When you assign to a 
controlled type, it copies all the hardware bits, then invokes Adjust on the 
copy. You don't get to change what you assigned *from*, because that would 
be confusing. Use a procedure with two in/out arguments for that. :-)

However, this only applies to objects derived from Controlled. It doesn't 
work with strings, arrays, any of the built-in collections, files, tasks, 
loadable packages, pointers, etc etc etc. Ada is really pretty 
non-orthogonal in that sense. You can't mix and match. (Yes, that sucks. :-)

I.e., much like you can't have constructors and destructors on pointers or 
integers in C++, except that in Ada the restriction extends even to some very 
complicated types.

Note that any type can be declared "limited" too, which basically means the 
assignment operator is private. A record that can do inheritance (i.e., that 
has a vtable) is called a "tagged type". In case you get interested and read 
some of the following pages. :-)  Note that "class type X" means "X and all 
its descendants" as opposed to "type X" which means just type X. Declaring a 
procedure with an argument that is a class type is how you get run-time 
dynamic inheritance-based dispatch as opposed to overloading.  The "with 
private" declaration is like declaring something "struct xyz;" in C or C++. 
A "protected" type is basically a monitor in the multitasking definition of 
the word.

Blah'goop is a way of getting the goop property or type out of the blah 
variable or type. So myarray'length, or mytaggedtype'parent, or 
localvariable'address (&localvariable) or something like that.

Ada has some very unusual yet precise terminology. It's fun to read a 
sentence talking about a controlled atomic volatile protected limited class 
type and have an idea of what that means. ;-)

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: GOTO
Date: 17 Oct 2010 14:34:37
Message: <4cbb41bd@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> >   (In principle RAII is not incompatible with garbage collection, so
> > conceivably you could have both in the same language.)

> Indeed. Especially if you use something like reference counting and deal 
> with circular references specially or something.

  What I meant is that, for example in C++, it's possible to have value
semantics for objects as well as reference (well, pointer) semantics for
objects, and these can be kept pretty much separate. Then you can have
RAII (iow. scope-based lifetime) semantics on the objects and GC on the
references. There are, in fact, GC engines for C++ which work like this
(basically, anything you allocate dynamically with 'new' can be GC'd,
while anything you allocate without it uses regular RAII semantics).

  On a different note, one example where I think the RAII mechanism is
better than the Java-style GC mechanism is that RAII allows you to
implement copy-on-write semantics (ie. "lazy copying") for objects.
In other words, if you have some object with potentially lots of data,
and you want to use copy semantics for it (ie. if you assign the object
to another, the latter gets a copy of the data rather than sharing the
data), copy-on-write makes the copying lazily, only if needed (ie. only
if one of the copies tries to modify the data).

  For instance, assume that std::string used copy-on-write (which it does
in some implementations). If you write this:

    std::vector<std::string> strings(1000, "hello there");

you will have a vector of 1000 strings, each one with the value "hello
there". However, the actual "hello there" data is shared among all the
strings and hence stored in memory only once (which may become a significant
saving if it were kilobytes or megabytes of data instead of just 11
characters). However, if you now do something like:

    strings[250] += ", world";

only the 251st string in the vector will be converted to "hello there, world"
rather than all of them. The rest of the strings will still share that one
and same data; only the 251st string will now have its own copy of the
data, with more data appended.

  What happened there is that the 251st string made a deep-copy of the data
before modifying it, thus preventing any of the other strings from changing.

  Of course if you now modify it again:

    strings[250] += "!";

it will *not* perform a needless deep-copy of the data because the current
data is not shared. In other words, it will deep-copy the data only when
needed.

  And most importantly, it will do that completely transparently. You can't
see that from the outside. From the outside std::string simply has copy
semantics and that's it. You could compile the program with a different
implementation of std::string which does not use CoW, and it would still
work the same (except, obviously, now consuming more memory).

  Of course CoW requires reference counting of the data (or, more precisely,
it needs a way to tell if the data is being shared or not). I don't know
how you would do that in Java (transparently, or at all).

  This is possible transparently in C++ because of RAII: When objects are
created, copied, assigned and destroyed, you can specify what happens.
This allows you to keep a reference count on the data handled by the class.

  (And note that I'm not saying there aren't advantages with a GC system
like the one in Java, including efficiency benefits in some situations.)

-- 
                                                          - Warp



From: Darren New
Subject: Re: GOTO
Date: 17 Oct 2010 15:10:12
Message: <4cbb4a14$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> Indeed. Especially if you use something like reference counting and deal 
>> with circular references specially or something.
> 
>   What I meant is that,

Oh, I see. Yes, I guess that would work. Probably an excellent way to do it, 
especially if you store enough info about things that you can do compacting 
collections.

>   On a different note, one example where I think the RAII mechanism is
> better than the Java-style GC mechanism is that RAII allows you to
> implement copy-on-write semantics (ie. "lazy copying") for objects.

I understand what you're saying, but this is a bad example for Java because 
that's exactly how it would work in Java, because strings are immutable, so 
your "+=" returns a brand new string. :-)  That's why Java has a 
StringBuilder as well as a String.

>   Of course CoW requires reference counting of the data (or, more precisely,
> it needs a way to tell if the data is being shared or not). I don't know
> how you would do that in Java (transparently, or at all).

Very difficult in Java to think of a way, offhand. I don't think you can 
overload pure assignment in C#, so I don't think you could do the same sort 
of thing there easily either.

>   This is possible transparently in C++ because of RAII: When objects are
> created, copied, assigned and destroyed, you can specify what happens.

I think it's more because you can overload the assignment operator, not the 
RAII as such. Maybe you count that as part of RAII.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: GOTO
Date: 17 Oct 2010 15:39:16
Message: <4cbb50e4@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> >   On a different note, one example where I think the RAII mechanism is
> > better than the Java-style GC mechanism is that RAII allows you to
> > implement copy-on-write semantics (ie. "lazy copying") for objects.

> I understand what you're saying, but this is a bad example for Java because 
> that's exactly how it would work in Java, because strings are immutable, so 
> your "+=" returns a brand new string. :-)  That's why Java has a 
> StringBuilder as well as a String.

  The problem is that it will make a copy of the string data every time
it's modified, which is inefficient.

  For example, assume that instead of appending something to the string
you want to modify one of the characters, such as:

    strings[250][0] = 'H';

  If the string data is not shared, this is very efficient to do because
it can be done in-place, without any kind of data copying. Imagine the
string containing 10 megabytes of data, and you wanting to modify just
the first character.

> >   This is possible transparently in C++ because of RAII: When objects are
> > created, copied, assigned and destroyed, you can specify what happens.

> I think it's more because you can overload the assignment operator, not the 
> RAII as such. Maybe you count that as part of RAII.

  Well, construction (including copy construction) and destruction are
integral parts of CoW, because without them you wouldn't be able to
perform reference counting (and this is exactly one of the aspects of
RAII: resource acquisition and release).

-- 
                                                          - Warp



From: Darren New
Subject: Re: GOTO
Date: 17 Oct 2010 16:30:50
Message: <4cbb5cfa@news.povray.org>
Warp wrote:
>   The problem is that it will make a copy of the string data every time
> it's modified, which is inefficient.

Yes. I understand that. I'm just saying that when you do this discussion, 
don't discuss it using strings if Java is your target language. Java strings 
are immutable. Discuss it using arrays of bytes, or something like that.

>   If the string data is not shared, this is very efficient to do because
> it can be done in-place, without any kind of data copying. Imagine the
> string containing 10 megabytes of data, and you wanting to modify just
> the first character.

Yeah. You *can* do that sort of thing in Java. You just have to do it the 
same way you do it in C: manually and error-prone. :-)

>   Well, construction (including copy construction) and destruction are
> integral parts of CoW, because without them you wouldn't be able to
> perform reference counting (and this is exactly one of the aspects of
> RAII: resource acquisition and release).

OK. I just wasn't sure whether the assignment operator and such was 
considered part of the support for RAII, but in retrospect, I guess it 
wouldn't make sense for it not to be.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: GOTO
Date: 17 Oct 2010 17:44:00
Message: <4cbb6e20@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I understand what you're saying, but this is a bad example for Java because 
> that's exactly how it would work in Java, because strings are immutable, so 
> your "+=" returns a brand new string. :-)  That's why Java has a 
> StringBuilder as well as a String.

  Maybe this is a much better example, and one which is closer to an
actual, realistic situation:

    // An array of 100 unit matrices of size 1000x1000:
    std::vector<Matrix> matrices(100, Matrix(1000, 1000));

  The 'matrices' vector will contain 100 unit matrices. Since these matrices
are really large (1 million elements each), but they are all identical,
there's a very clear memory saving advantage if they all share the same
data (as will be the case if the 'Matrix' class uses copy-on-write). This
is especially so if it's expected that not all of them will ever be modified.

  Of course if you modify one of them, eg:

    matrices[25] *= 2;

you want only the 26th matrix to be modified, rather than all of them.
(Hence it needs to perform a deep-copy before the modification, but only
if its data is being currently shared. If it's not being currently shared,
as may be the case if you modify it again after the above, deep-copying
should not be performed because it would be horribly inefficient.)

-- 
                                                          - Warp



From: Invisible
Subject: Re: GOTO
Date: 18 Oct 2010 04:09:44
Message: <4cbc00c8$1@news.povray.org>
On 16/10/2010 04:03 AM, nemesis wrote:

> in other words:  "I am your father!" :D

NOOOOOOOOO!!! >_<



From: Invisible
Subject: Re: GOTO
Date: 18 Oct 2010 04:23:15
Message: <4cbc03f3@news.povray.org>
>> I also note, with some interest, what looks suspiciously like Haskell
>> syntax, in a letter typed 42 years ago. Obviously it's not after
>> Haskell, but rather whatever mathematical formalism Haskell borrowed the
>> notation from. Still, interesting none the less...)
>
> What syntax specifically?

case [i] of (A1, A2 ... An)

It's almost valid Haskell as it stands. The next line down has

(B1 -> E1, B2 -> E2 ... Bn -> En)

which is also strikingly similar.

> I think (but I was not yet part of the scene then) that people were
> looking for good notation. They knew that programming and mathematics
> had a lot in common. Yet things that were obvious in von Neumann
> machines (assignments and control flow statements in particular) did not
> have a direct counterpart in maths.

I think it's more that mathematics is capable of expressing 
transformations far more complex than what the hardware can actually 
implement. Computer hardware is, more or less, restricted to mutating 
individual hardware registers one at a time. Which, if you think about 
it, is a very slow way to make large-scale changes. Mathematics has a 
lot of language for describing complex transformations, and is much less 
focused on how to (say) multiply two matrices one floating-point value 
at a time.

> A problem that is not entirely solved even today. Haskell and other
> languages try to deal with it by restriction to a paradigm that is
> closer to math than the imperative way of thinking. The downside of that
> is that problems that are easier solved in another paradigm become more
> complicated. Perhaps there are also people working on extending maths to
> include time-dependent behaviour.

I think that "maths" has "included time-dependent behaviour" since at 
least when Sir Isaac Newton formulated his Laws of Motion. :-P

> Fact is that we as humans solve problems using a lot of techniques.
> Choosing whatever seems appropriate at the time. That is why I think a
> good programmer needs to be familiar with at least three or four
> different languages.

Well, I'd say I've tried more than most...



From: andrel
Subject: Re: GOTO
Date: 19 Oct 2010 18:10:47
Message: <4CBE176A.3030002@gmail.com>
On 18-10-2010 10:23, Invisible wrote:

>> A problem that is not entirely solved even today. Haskell and other
>> languages try to deal with it by restriction to a paradigm that is
>> closer to math than the imperative way of thinking. The downside of that
>> is that problems that are easier solved in another paradigm become more
>> complicated. Perhaps there are also people working on extending maths to
>> include time-dependent behaviour.
>
> I think that "maths" has "included time-dependent behaviour" since at
> least when Sir Isaac Newton formulated his Laws of Motion. :-P

No, but you knew that.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.