POV-Ray : Newsgroups : povray.off-topic : Land of Lisp : Re: Land of Lisp
  Re: Land of Lisp  
From: Invisible
Date: 2 Nov 2010 06:24:41
Message: <4ccfe6e9$1@news.povray.org>
On 02/11/2010 02:00 AM, nemesis wrote:
> Invisible<voi### [at] devnull>  wrote:
>> Is it possible to create variables with local rather than global scope?
>
> you're kidding me, right?

Well, I've never seen anybody do it before, that's all.

>> When you get to Haskell, of course, automatic type inference blows #1
>> out of the water
>
> yeah, except when it can't decide on the types all by itself and requests type
> annotations anyway.

Which is fairly rare.

In truth, you're probably going to put annotations on all your top-level 
stuff anyway, for documentation's sake. What automatic type inference 
means is that you don't have to put type annotations absolutely 
everywhere you go (and change them all over the place when something 
changes), just in the few places you need them.
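Something like this (names made up for illustration): annotate the top-level function for documentation, and let the compiler infer the rest.

```haskell
-- A hypothetical top-level function, annotated for documentation's sake:
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

-- The helper needs no annotation; the compiler infers
-- halve :: [Double] -> Double from how average is used.
halve xs = average xs / 2
```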

> I would also say the Hindley-Milner kinda cheats in this regard.  You usually
> don't have to type annotate whenever the function operates on numbers, strings
> or Algebraic Data Types.  And Algebraic Datatypes are a dead-on giveaway of the
> type of an expression, because they usually have a name of their own to describe
> values of their type.  So, a function like:
>
> withBuf run = run . Buf 0 ini =<<  mallocBytes ini
>    where ini = 1024
>
> needs no type annotation because it already has a friggin reference there to
> type:
>
> data Buf = Buf !Int !Int !(Ptr Word8)
>
> so, you exchange type annotations everywhere for type names for values of this
> type in the expressions.  Not that much of a fair trade:  instead of providing a
> single type declaration, you provide it everywhere you use it.

I wouldn't agree with that.

If you want a point, you say Point 3 7. Which, as you say, rather gives 
away the fact that it's a Point value. But how about saying "True"? 
That's of type Bool, but you didn't write that in your expression. 
Similarly, when you write

   Branch (Leaf 3) (Leaf 7)

you didn't say anywhere that this is of type Tree Int, did you? The 
compiler infers this automatically. And don't even get me started on 
expressions which are polymorphic, and could have /several/ possible types.
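To spell that out with a made-up Tree type, roughly what I mean is:

```haskell
-- A minimal Tree type, assumed for illustration:
data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show, Eq)

-- No type annotation anywhere on the right-hand side, yet the compiler
-- infers that this is a Tree of numbers (with the monomorphism
-- restriction it defaults to Tree Integer at the top level).
example = Branch (Leaf 3) (Leaf 7)
```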

> I'm talking ill of Hindley-Milner type system, but you should be aware that the
> same lameness can happen in Lisps too:  here and now you will see things like
> fl* fx+ or fl<... that is, functions to operate on specific numeric types.
> Again, they exchange a single type declaration for type-specific operators
> everywhere.  Makes me wonder if the Algol folks got it right...

Not so in Haskell. If you want to add two numbers, you use (+). It's 
polymorphic like that. And functions that use it end up being 
polymorphic too (unless you specifically don't want them to be, usually 
for performance reasons).

I'm actually surprised that a dynamically-typed language would do 
something like that. I thought the whole point of dynamic typing was 
that you can completely ignore types.

(Haskell also has the nice trick of numeric literals being polymorphic. 
Sadly, the same does not hold for lists or strings...)
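For instance (a sketch, with an invented function name):

```haskell
-- (+) is polymorphic over the Num class, so one definition of `double`
-- works at Int, Double, Rational... unless you deliberately pin it down.
double :: Num a => a -> a
double x = x + x

-- The literal 2 below is polymorphic too: it means `fromInteger 2`
-- at whatever Num type the context demands.
quadruple :: Num a => a -> a
quadruple x = 2 * double x
```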

>> Really, in the modern age, there is no reason for dynamically-typed
>> languages to exist at all. It just invites bugs.
>
> Dynamically typed languages are just perfect for quick prototyping of systems
> where invariants are still not well known.  The fact that some of them can not
> just quickly prototype working systems, but prototype them with good performance
> is only a plus.

All I know is that every time I've used a dynamically-typed language, 
I've spent 90% of my time fixing type errors - errors which *could* have 
and *should* have been caught automatically by the machine, but I have 
ended up catching them myself, by hand. Yuck!

>> "Why is Lisp the best?"
>> "Because it has macros!"
>
> I almost never use macros and still find Lisp the best. :)

I just don't get what's to like about it. It seems messy and ad hoc to me.

> I suspect its homoiconic nature has lots to do with it.  Plus, if you've never
> edited Lisp code with hierarchical parenthetical editing you don't know what a
> blessing it is compared to any other language in existence.

More like "without automatic bracket matching, it's impossible to produce 
code which is even syntactically correct". It wouldn't be so bad, but it 
seems all Lisp programmers habitually indent their code incorrectly.

I can see how having a language where everything is a list is 
technically quite a nice idea. It's simple and elegant. Unfortunately, 
the language as a whole seems to be messy and inconsistent. Too much 
backwards compatibility, too much muddled thinking. Still, I guess it 
was the first attempt...

> you don't need to type brackets with a proper Lisp editor such as emacs or
> DrRacket.  You usually just type alt+shift+( and it opens a pair and puts you
> right in.  You can select a big code section spanning several lines by just
> alt+shift+(right or left).

In other words "Lisp is so hard to write that you need a special program 
to help you do it". (The same accusation could be levelled at XML of 
course...)

>> I still don't think Lisp is very functional. :-P
>
> It's because you still don't know it well.

Sure, functions are first-class. But it lacks purity. It also doesn't 
appear to place any emphasis at all on coding in a vaguely functional 
style - not in the language design nor the standard libraries.

>> Depends on what you consider to be "functional programming". There are
>> really two separate characteristics that most FP languages have:
>>
>> 1. Pure functions.
>> 2. Functions as first-class objects.
>>
>> Any programming language that supports functions can support writing
>> pure functions. But as far as I can tell, Lisp does not in any way
>> encourage or facilitate doing so. Heck, C++ has facilities for marking
>> things that won't be altered after being initialised, whereas Lisp does
>> not. That makes C++ slightly more FP than Lisp.
>
> That's purely an implementation detail, not a language feature. Pure math-like
> functions with no side-effects only depend on you writing side-effect-free code.

No, a design that enforces purity has profound consequences for any 
implementation of that design. (Not to mention the entire way that the 
language is used.)
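In Haskell, for example, the purity shows up right in the types (function names below are invented for the sketch):

```haskell
-- A pure function simply cannot perform IO; the compiler enforces this.
pureDouble :: Int -> Int
pureDouble n = n * 2

-- Anything effectful is tagged with IO. Pure code can be called from
-- IO code, but not the other way around.
printDouble :: Int -> IO ()
printDouble n = print (pureDouble n)
```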

> Would you really call C++ more FP than ML because nothing in the language spec
> say "hey, this section is pure, you can optimize it away for concurrency"?

Who said anything about ML? I just said C++ seems more functional than Lisp.

> BTW, Haskell is the only such (well-known) language enforcing purity even for
> IO.

And Clean doesn't count because...?

>> You complain about "operator obfuscation" and then tout "custom syntax"
>> as improving readability. Which is it? :-P
>
> new syntax doesn't mean&*%#$.  macros have as much self-describing names as
> functions.

To me, "new syntax" means that I can write something that doesn't look 
like the host language. From what little I've seen, a Lisp DSL just 
looks like Lisp.

>> Since Lisp explicitly permits self-modifying code, wouldn't that mean
>> that the compiler has to be present at run-time?
>
> You got that "self-modifying code" all messed up.  Lisp macros can operate on
> Lisp code to produce new custom code, but that happens at compile-time.
>
> If you really want it, you have eval at runtime, but it's as slow as everywhere
> else.

As I say, it's news to me that Lisp can be compiled. I thought the fact 
that you have an eval function more or less prevents this. Every other 
language I've ever seen that has an eval function has been interpreted.

>>>> And, unlike Lisp
>>>> macros (AFAIK), Template Haskell statically guarantees that the code
>>>> written by a template is *always* syntactically correct. (It does _not_
>>>> guarantee that it is well-typed, just that there are no syntax errors.)
>>>
>>> that's good for a language heavy on syntax.
>>
>> I won't claim to be a Lisp expert, but I was under the impression that
>> not every possible list is a valid executable Lisp expression.
>
> I mean there not much syntax in Lisp besides brackets, an expression evaluating
> to a function (or macro) at the head and arguments for that function (or macro)
> in the rest of that list.

And a Haskell expression consists of some combination of exactly 6 (go 
count them) structures. The rest is just sugar. That's peanuts compared 
to (say) the number of possible statements in Java. (For-loops, 
while-loops, if-then-else, return, switch/break...)
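As a sketch of what I mean by "the rest is just sugar" (function names invented): multi-equation pattern matching boils down to one lambda and one case.

```haskell
-- The sugared version, as you'd normally write it:
safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead []    = Nothing

-- The same function written closer to the core language:
safeHead' :: [a] -> Maybe a
safeHead' = \xs -> case xs of
  (x:_) -> Just x
  []    -> Nothing
```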

>>    From what I can tell, Lisp macros work by you passing some data to
>> them, which they then transform into some sort of executable code.
>> Template Haskell allows you to inspect parts of the current module not
>> passed in as arguments. So you can, e.g., look up the definition of a
>> user-defined data structure.
>
> that sounds as painfully side-effectful as when I first heard it.  Hence Ninja
> Patching.  google it up...

So, what, I can tell the compiler to import some extra modules that I'm 
about to reference, and I can look at the definition of a user-defined 
data type and generate some boilerplate code to work on it? Yes, sounds 
terrible.

Or perhaps you meant that people could abuse it? Well, yes, obviously 
any powerful feature can be abused. So what?

>>> BTW, you know quasi-quoting comes from Lisp, don't you?  `(let ((r ,(fact 5)))
>>> r) will turn into (let ((r 120)) r) at compile-time... it's so friggin' easy
>>> it's used a lot.
>>
>> I must be missing something... those two expressions look almost
>> identical to me.
>
> yeah, you're missing that you write the first and the compiler turns it into the
> second, at compile time:  yes, it evaluates (fact 5) during compilation.  It
> would certainly not do any magic and evaluate (fact n) if n was not known at
> compile time, but since it's a literal...

Oh, right, I hadn't noticed that (fact 5) had vanished.

Now a Haskell compiler /might/ evaluate that at compile-time without any 
special hints. You know why? IT CAN'T HAVE SIDE EFFECTS. :-P

Anyway, with Template Haskell, I can merely write

   $(let x = factorial 5 in [| x |])

to achieve the same effect. Not exactly intractably complex, eh?

>>> you're taking a comic book too seriously...
>>
>> Perhaps. Maybe I'm just tired of Lisp fanboys claiming that this one
>> single language is some kind of perfect Utopia.
>
> now, there are not that many Lisp fanboys out there, there are?  Most I know are
> more like Lisp fanoldmen...
>
> Scheme doesn't have many fanboys either, more like irate compsci freshmen...

Whatever. I try not to make wild claims like "Haskell is perfect". I 
think it's a fantastic language, but I know for a fact that it's not 
perfect.

>>> nah, inc is 1 char shorter. ;)
>>
>> And (+1) is more descriptive.
>
> not in Lisp where it looks like a silly function application... :p

It does what it says on the tin.

>> Also, I can soon change that to "iterate (*2) 1" to generate a geometric
>> sequence instead. I bet Lisp doesn't have a predefined "multiply by 2"
>> function. :-P
>
> we could have something like:
> (define (pf op x) (lambda (n) (op n x)))
>
> (iterate (pf * 2) 0)

Yes, that's just as succinct as (*2). Oh, wait...
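For comparison, the operator sections in Haskell need no helper definition at all:

```haskell
-- (+1) and (*2) are partial applications of the operators themselves.
nats, powers :: [Integer]
nats   = iterate (+1) 0   -- 0, 1, 2, 3, ...
powers = iterate (*2) 1   -- 1, 2, 4, 8, ...
```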

>> Actually, just the other day I was trying to pin down in my head exactly
>> which sorts of programs Haskell is good for and bad for.
>
> math problems and formal proof automation would be my guess for best suited.
> Real world is more complicated.  say a script to automate a task.  You don't
> really need all that type theory to get it done...

You don't really need "all that type theory" to understand or use 
Haskell. :-P

Anyway, the basic answer seems to be programs where getting the right 
answer is more important than getting it fast, or where figuring out how 
to solve the problem at all is really complex and you need all the help 
you can get...

