On 21-Sep-08 6:27, Darren New wrote:
> andrel wrote:
>> Indeed, but only if you assume you have a language that makes that
>> distinction.
>
> That's what I said.
>
>> Maths does not have 'reserved words'. I.e. for the casual observer
>> things like '=' and '+' looks as though they are universally defined
>> and understood the same way. In practice even these are sometimes
>> redefined to suit a specific field or application in a way that may
>> even go deeper than overloading.
>
> Correct. Except that given the programming language, reserved words are
> fundamentally different from functions or variables. That's why they're
> reserved.
>
> Kind of like parens in math - once you decide you have a grouping
> operator that's written ( ), you really can't use it to define functions
> called ( and ).
>
> The confusion, I think, is that you think maths doesn't have reserved
> words. It does. They're just outside of the maths.
I definitely think they exist; it's just that which ones are reserved and
what they actually mean can vary wildly from field to field. And yes, I
have seen a serious paper that redefined '=' in a way that is slightly
incompatible with standard use (and with good reason).
Perhaps you misunderstood me a bit. I am not arguing about whether you have
variables, constants, reserved words, functions or whatever in Haskell.
My only point was that at a fundamental level the difference between these
categories is so blurred that you should not expect a language to have
visual cues to distinguish between them. At least not for languages based
on an abstract concept. For languages designed simply to give a slightly
abstract representation of current hardware and current programming
techniques, that may be quite different.
> For example, the "summation function" is kind of the same as the for
> loop: start variable and value below the sigma, stop value above the
> sigma, expression to evaluate to the right of the sigma. The reserved
> words, in this case, are the English sentence I just wrote describing the
> functionality.
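The sigma-as-loop analogy can be made concrete in Haskell; this is just my
own sketch, and `sumTo` is an invented name:

```haskell
-- Sigma notation as code: sum of f(i) for i from lo to hi.
-- The "reserved words" of the sigma (start below, stop above,
-- body to the right) become ordinary positional arguments.
sumTo :: Int -> Int -> (Int -> Int) -> Int
sumTo lo hi f = sum [f i | i <- [lo .. hi]]

-- e.g. sumTo 1 4 (\i -> i * i) is 1 + 4 + 9 + 16 = 30
```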
>
> Unfortunately, to communicate with the computer about how a compiler
> works, we need to speak something other than English. That's where the
> reserved words come in. They're a way for the compiler writer to let the
> compiler user communicate with the compiler. They are part of the syntax
> of a language that is implicit in the rules for evaluating the
> functions, just as from (A^A)->C you can apply a production to derive
> A->C by the rules of pattern matching and substitution. It's virtually
> impossible (or even utterly impossible) to formalize the productions you
> use in your formalization.
>
>> [If you] want to model a language on the way mathematics is used,
>> defining 'reserved words' is something you might want to leave out.
>
> Granted. But Haskell doesn't model a language on the way mathematics is
> used. It models a language on one particular mathematics. The reserved
> words are a syntactic shortcut for a much larger lambda expression, just
> like "193" is a syntactic shortcut for a big long string of bits in a
> turing machine.
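For what it's worth, the "193 is a shortcut for a lambda expression" idea
can be sketched with Church numerals in Haskell (all names here are my own
invention, purely for illustration):

```haskell
-- A Church numeral n is "apply f to x, n times".
type Church a = (a -> a) -> a -> a

zeroC :: Church a
zeroC _ x = x

succC :: Church a -> Church a
succC n f x = f (n f x)

addC :: Church a -> Church a -> Church a
addC m n f x = m f (n f x)

-- Recover an ordinary Int by counting the applications.
toInt :: Church Int -> Int
toInt n = n (+ 1) 0
```

A literal like 193 would then stand for succC applied 193 times to zeroC;
the numeral syntax is exactly the kind of shortcut Darren describes.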
>
>>> Knowing what's an argument and what's a function is also pretty
>>> important for understanding how a given line of code works.
>>
>> Again absolutely true, for an imperative language. In lambda calculus,
>> and in any functional language derived from it, this distinction makes
>> no sense at all.
>
> Except that Haskell is obviously beyond lambda calculus in its
> implementation and meaning. See any of Andrew's postings about writing
> "2+2" in lambda calculus.
I did both a course on lambda calculus and one on functional programming
languages when I was at university. That was 20 years ago, but I don't
think lambda calculus and functional programming have changed much since
then. The point I was trying to make is that the absence of any syntactic
difference between functions and arguments in lambda calculus is so
fundamental that it is not a good idea to use these terms to analyse
Haskell.
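A small made-up Haskell example of what I mean: the same name can be a
function in one position and an argument in the next, with nothing in the
syntax marking the difference.

```haskell
-- `twice` consumes a function and produces a function.
twice :: (a -> a) -> a -> a
twice f = f . f

-- In `twice twice`, the first `twice` is the function and the
-- second is its argument; the syntax treats them identically.
fourTimes :: (a -> a) -> a -> a
fourTimes = twice twice
```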
>>> If you can't look at the code and figure out which function bodies
>>> get evaluated, it makes it really rough to work out what's going on,
>>> even in a functional language.
>>
>> I don't think so. If you have an entity called current_temperature it
>> should not matter if that is derived by looking at a specific memory
>> location (a variable), by looking at an index in a circular buffer
>> (array) or by doing something more complicated like calling a function
>> or generating an interrupt. OK, it does matter sometimes, but only in
>> an implementation. It should not have any influence on understanding
>> the code conceptually.
>
> Sure. And if you have an entity called "o" or "+" or "case" or "start"?
yes?
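To make the current_temperature point concrete in Haskell (every name and
value below is invented for illustration): at the use site, a plain
constant and a value computed from a circular buffer are written
identically.

```haskell
-- A circular buffer of readings (hypothetical data).
buffer :: [Double]
buffer = [21.5, 21.7, 21.6]

-- Index into the buffer, wrapping around.
readingAt :: Int -> Double
readingAt i = buffer !! (i `mod` length buffer)

-- Callers cannot tell (and need not care) whether this is a
-- stored constant or the result of a computation.
currentTemperature :: Double
currentTemperature = readingAt 0
</imports>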
>> I think you understood that the main reason for my response was that
>> Warp is looking through imperative spectacles at lines of code in a
>> functional language and desperately trying to force them into his own
>> frame of mind.
>
> I wouldn't go that far, but yes. But if you have a language that defines
> complex syntax (which Haskell does a little and C++ does a lot) and you
> can't even parse the syntax tree because you can't distinguish reserved
> words from variables or functions, then you must admit it's going to be
> difficult to read the code. Indeed, if you can't even tell where one
> expression ends and another starts, it can be very difficult to
> understand the code.
I don't think I ever claimed that a program in Haskell or any functional
language is easy to understand. Nor can I easily understand a paper in
the field of string theory, for that matter. That does not mean, however,
that people working in that field or with that language cannot see the
structure immediately. I know that, so I won't complain if I don't get it
myself. Just as I won't tell a Korean that she should use our alphabet
because otherwise it won't make sense to me.
(Next time I'll snip unnecessary parts, I promise. Now I am just too tired.)