  Re: Land of Lisp  
From: Invisible
Date: 1 Nov 2010 06:07:58
Message: <4cce917e$1@news.povray.org>
>> "There's a long-forgotten place where they have weapons SO POWERFUL that
>> they can defeat ANY BUG! They call it the Land of Lisp."
>>
>> Uhuh. So an untyped language with a single global namespace and which
>> touts self-modifying code as its single most significant feature is the
>> way to beat program bugs?
>
> if by global namespace you mean no modules/libs, you're very wrong.

Is it possible to create variables with local rather than global scope?

> Lisp is not untyped either, just leaves type-handling to runtime. :)

Strictly speaking, an /untyped/ language is one where you can apply any 
operator to any data - no matter how stupid. (E.g., you can multiply two 
strings together, or take the cosine of a pointer.) Very few real 
programming languages fall into this category (although a few 
intermediate compiler languages do).

The main distinction is between statically-typed and dynamically-typed 
languages. A statically-typed language performs extensive (and usually 
exhaustive) checking, in a fully automated way, to statically guarantee 
that your program cannot malfunction due to mismatched types. A 
dynamically-typed language doesn't do any compile-time checking at all; 
it just lets your program crash at runtime if you make a mistake.
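
To sketch the difference (a trivial Haskell example; the function name 
is made up):

    -- The signature says exactly what goes in and what comes out, and
    -- the compiler enforces it before the program ever runs.
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- average "hello"   -- rejected at compile time: [Char] is not [Double]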

The advantages of static typing are obvious (i.e., vast swathes of bugs 
are eliminated before you even *get* to run-time). Dynamic typing, on 
the other hand, has only two tiny advantages:

1. You don't have to write type signatures everywhere.
2. The compiler doesn't stop you performing valid actions just because 
they upset the type system.

Most statically-typed languages have a primitive, puny type system and 
demand explicit type signatures everywhere. This makes the language 
designer's job much easier, but isn't much fun for the programmer. 
Coding in (say) Java, it's actually quite common to need to work around 
the type system.

When you get to Haskell, of course, automatic type inference blows #1 
out of the water, and the sophisticated Turing-complete type system more 
or less kills #2 as well. If you really can't express what you want to 
do easily, you can always make a small, carefully controlled hole in the 
corner of the type system to get around that, and still keep the 
overwhelming benefits of strong type-checking for the rest of the program.
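
To sketch both points (the names are mine, and Unsafe.Coerce is one 
obvious example of such an escape hatch):

    import Unsafe.Coerce (unsafeCoerce)

    -- No type signature written, yet GHC infers the fully general
    -- swapAll :: [(a, b)] -> [(b, a)] all by itself.
    swapAll ps = map (\(x, y) -> (y, x)) ps

    -- The "hole in the corner": one tiny, clearly-labelled unsafe
    -- function. Everything that calls it is still checked as normal.
    sneakyCast :: a -> b
    sneakyCast = unsafeCoerce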

Really, in the modern age, there is no reason for dynamically-typed 
languages to exist at all. Dynamic typing just invites bugs.

>> "Greetings, your highness!"
>>
>> Creatures with 12 eyes and an arm for a nose: WTF-O-Meter: 1.5
>
> your WTF-O-Meter should go up once you realize that is how he depicts Lispers...
> which BTW is not really that far from reality... :)

I've always been puzzled by the assertion that Lisp is THE ultimate 
solution to every programming problem. It seems to me to be more of a 
religion than a rational argument. The exchange

"Why is Lisp the best?"
"Because it has macros!"

is as nonsensical as

"Why do you believe in God?"
"Because Christ died on the cross to save mankind from sin!"

It makes no sense.

>> "Each of the Seven Guilds possesses a powerful bug-killing weapon that
>> is unique to Lisp!"
>>
>> Oh yeah?
>>
>> So macros, functional programming, software transactional memory,
>> restartable code and "conciseness" are unique only to Lisp?
>
> all in the same line, yes. :)

I'm not sure how you came to that particular conclusion...

> well, not quite "conciseness" as Lisp languages tend to favor words and
> long-descriptive names rather than single-char operator obfuscation.  Unless
> we're talking about big problems where DSL-building with macros give a clear
> gain in readability can indeed bring conciseness...

I would say that (0 - b + sqrt (b*b - 4*a*c))/(2*a) is significantly 
clearer and more concise than (div (add (sub 0 b) (sqrt (sub (mul b b) 
(mul (mul 4 a) c)))) (mul 2 a)). (Note that I had to get out a text 
editor with syntax highlighting just to type all that with the correct 
number of brackets!)

Yes, it's possible to overuse special characters as operators. But 
generally a few carefully chosen operator names can greatly shorten the 
code /and/ make it more readable.
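
To make the comparison concrete, here is that formula as an ordinary 
Haskell definition:

    -- One root of a*x^2 + b*x + c = 0; the infix operators keep it
    -- close to the textbook formula.
    root :: Double -> Double -> Double -> Double
    root a b c = (negate b + sqrt (b*b - 4*a*c)) / (2*a)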

>> "Functional programming is a mathematical approach to programming that
>> was pioneered by Lisp."
>>
>> I beg to differ.
>
> you don't:  in the 50's and 60's when Lisp was born there was no Haskell,
> Miranda, ML nor even lowly C:  FORTRAN or the soon to appear COBOL were your
> only alternatives for high-level code... and those were definitely not
> functional.  Lisp was.

I still don't think Lisp is very functional. :-P

>> Actually, I'm not even sure why people consider Lisp to be a functional
>> programming language. JavaScript is about as functional as Lisp!
>
> It's because any language with support for creating functions on the fly,
> receiving functions as arguments to other functions and returning functions as
> the result of evaluation of functions is able to use the functional programming
> paradigm.

Depends on what you consider to be "functional programming". There are 
really two separate characteristics that most FP languages have:

1. Pure functions.
2. Functions as first-class objects.

Any programming language that supports functions can support writing 
pure functions. But as far as I can tell, Lisp does not in any way 
encourage or facilitate doing so. Heck, C++ has facilities for marking 
things that won't be altered after being initialised, whereas Lisp does 
not. That makes C++ slightly more FP than Lisp.
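
Just to be concrete about what those two characteristics look like, a 
trivial Haskell sketch (names invented):

    -- 1. A pure function: its result depends only on its arguments, so
    --    it can never surprise you with a hidden side effect.
    clamp :: Int -> Int -> Int -> Int
    clamp lo hi x = max lo (min hi x)

    -- 2. Functions as first-class objects: 'twice' takes a function
    --    and hands back a new one.
    twice :: (a -> a) -> (a -> a)
    twice f = f . f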

>> And writing code in a language which *enforces* a functional style makes
>> it drastically easier to debug. :-P
>
> yes, and writing it in a lazy evaluation language makes it damn hard to debug as
> you also well know.

No, lacking a decent debugger makes debugging hard. Lazy evaluation 
makes *performance* harder to predict/control, but it makes *debugging* 
easier, since functions always produce repeatable results.
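
To illustrate the performance point, the classic textbook example:

    import Data.List (foldl')

    -- Both compute the same number, and both are perfectly repeatable;
    -- but the lazy foldl piles up a million unevaluated thunks before
    -- it adds anything, while the strict foldl' runs in constant space.
    lazySum, strictSum :: Integer
    lazySum   = foldl  (+) 0 [1 .. 1000000]
    strictSum = foldl' (+) 0 [1 .. 1000000]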

And, in fact, LoL touts lazy evaluation as an *advantage* just a few 
points further down the list. :-P

>> Newsflash: lazy evaluation
>> eliminates one of the major reasons for wanting macros in the first place.
>
> It's true that unevaluated argument-passing (call-by-name) is one of the fine
> reasons for macros or, obviously, lazy evaluation.  But I'd say it's a minor
> bonus when compared to its main use to generate code at compile-time and build
> custom syntax for the sake of readability.

You complain about "operator obfuscation" and then tout "custom syntax" 
as improving readability. Which is it? :-P
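
As for the call-by-name point: with lazy evaluation you can write your 
own control structures as plain functions, no macro needed. A quick 
sketch (names mine):

    -- Since arguments aren't evaluated until needed, both "branches"
    -- can safely be passed in as ordinary values.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- myIf (x /= 0) (100 `div` x) 0
    -- The division is never forced when x == 0.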

>> (Lisp, AFAIK, doesn't have a
>> "compile-time", so things are slightly different.)
>
> Pretty much all Common Lisp implementations are compilers

Since Lisp explicitly permits self-modifying code, wouldn't that mean 
that the compiler has to be present at run-time?

>> And, unlike Lisp
>> macros (AFAIK), Template Haskell statically guarantees that the code
>> written by a template is *always* syntactically correct. (It does _not_
>> guarantee that it is well-typed, just that there are no syntax errors.)
>
> that's good for a language heavy on syntax.

I won't claim to be a Lisp expert, but I was under the impression that 
not every possible list is a valid executable Lisp expression.

>> Also unlike Lisp macros, a Haskell template can inspect and alter
>> anything in the current module (up to and including completely rewriting
>> it).
>
> sounds as deadly as ninja patching. :)

I have no idea what that is.

> other than that, I'm not sure what you mean by inspect and alter anything in the
> current module...

From what I can tell, Lisp macros work by you passing some data to 
them, which they then transform into some sort of executable code. 
Template Haskell allows you to inspect parts of the current module not 
passed in as arguments. So you can, e.g., look up the definition of a 
user-defined data structure.
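
A contrived sketch of the sort of thing I mean (module and names made 
up; because of the staging restriction the actual splice has to live in 
a module that imports this one):

    {-# LANGUAGE TemplateHaskell #-}
    module Inspect where

    import Language.Haskell.TH

    data Colour = Red | Green | Blue

    -- 'reify' asks the compiler what it knows about the name 'Colour'
    -- (a user-defined type that was never passed in as an argument) and
    -- splices its pretty-printed declaration back in as a String.
    describeColour :: Q Exp
    describeColour = do
        info <- reify ''Colour
        stringE (pprint info)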

> do you have any practical example other than the short but confusing already
> examples at:

No.

> the examples there seem to deal with a problem that doesn't exist in Lisp in the
> first place:  generate code to deal with different types.

The examples demonstrate how to use TH, not why you'd /need/ TH. It's 
perfectly possible to write highly polymorphic code with nothing but 
plain vanilla Haskell '98.
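
For instance, something as ordinary as this is plain Haskell '98:

    -- Works for any type with a Show instance; no templates required.
    describeAll :: Show a => [a] -> String
    describeAll = unwords . map show

    -- describeAll [1, 2, 3]     == "1 2 3"
    -- describeAll [True, False] == "True False"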

>> "They're using this incredible device called a Wii. Say cadet, why
>> aren't you shooting anything?"
>>
>> "I'm trying to, but the controller keeps thinking that I want to HUG the
>> insectoid storm-troopers!"
>>
>> LOLrus.
>
> completely random Nintendo ad... :p

Or... anti-ad I suppose?

>> "Those ships are from the DSL Guild."
>>
>> Uhuh. Because no other language allows you to embed a DSL right into
>> your programming language. Or even, you know, embed a language with a
>> syntax entirely different from that of the host language. (Haskell's
>> "quasi-quoting" allows you to embed a DSL that gets read using a parser
>> that you write, using any parser libraries you desire.)
>
> yeah, but it seems so mindnumbing complex nobody uses.

I get the impression that people avoid it because it's not part of the 
official language spec (it's GHC-specific). But mainly because it's not 
really needed especially often.
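
To give the flavour of it anyway, a toy sketch rather than a serious 
DSL (the quoter name is mine):

    module Upper (upper) where

    import Data.Char (toUpper)
    import Language.Haskell.TH (stringE)
    import Language.Haskell.TH.Quote (QuasiQuoter(..))

    -- In a module with QuasiQuotes enabled, [upper|hello world|]
    -- becomes the string literal "HELLO WORLD". The "parser" here is
    -- just 'map toUpper', but it could be any parser you care to write.
    upper :: QuasiQuoter
    upper = QuasiQuoter
        { quoteExp  = stringE . map toUpper
        , quotePat  = error "upper: patterns not supported"
        , quoteType = error "upper: types not supported"
        , quoteDec  = error "upper: declarations not supported"
        }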

> BTW, you know quasi-quoting comes from Lisp, don't you?  `(let ((r ,(fact 5)))
> r) will turn into (let ((r 120)) r) at compile-time... it's so friggin' easy
> it's used a lot.

I must be missing something... those two expressions look almost 
identical to me.

>> "Basically, continuations let you put 'time travel' into your code."
>>
>> Actually, you don't need continuations to do that, necessarily.
>
> you may not need special syntax for it if you explicitly write your programs in
> continuation-passing-style, in which case you get continuations for free.  Of
> course, that requires tail-call optimizations otherwise your stack blows...

Haskell provides a monad version of CPS, which means you don't even have 
to obfuscate all your code to do crazy continuation manipulations. 
Personally, this kind of stuff makes my mind melt. :-}
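
Still, for the curious, this is the kind of thing I mean (a small 
sketch using Control.Monad.Cont from the mtl package; the names are 
mine):

    import Control.Monad      (when)
    import Control.Monad.Cont (Cont, runCont, callCC)

    -- 'callCC' hands us an explicit exit continuation: calling it
    -- jumps straight back to the caller with the given value.
    validate :: Int -> Cont r Int
    validate n = callCC $ \exit -> do
        when (n < 0) (exit (-1))
        return (n * 2)

    main :: IO ()
    main = do
        print (runCont (validate 10)   id)   -- 20
        print (runCont (validate (-5)) id)   -- -1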

>> "Weaknesses: Continuations are such an awesome feature that they don't
>> really have a downside."
>>
>> O RLY?
>>
>> How about the ease with which you can make your program so complicated
>> that it becomes totally unmaintainable?
>
> remember continuations are essentially gotos. ;)

I think that says it all. :-)

> you're taking a comic book too seriously...

Perhaps. Maybe I'm just tired of Lisp fanboys claiming that this one 
single language is some kind of perfect Utopia.

>> As an aside, does Lisp have arbitrary-precision arithmetic yet? Cos that
>> Haskell fragment gives you arbitrary-precision results. Using the GMP,
>> no less.
>
> arbitrary-precision arithmetic is what Lisp got since ancient times.

OK, fair enough.

>> "(take 20 (filter even? (iterate inc 0)))"
>>
>> Or, to put it another way,
>>
>>     take 20 (filter even (iterate (+1) 0))
>>
>> or even
>>
>>     take 20 $ filter even $ iterate (+1) 0
>
> nah, inc is 1 char shorter. ;)

And (+1) is more descriptive.

Also, I can soon change that to "iterate (*2) 1" to generate a geometric 
sequence instead. I bet Lisp doesn't have a predefined "multiply by 2" 
function. :-P
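
For reference, what those two expressions evaluate to:

    take 20 (filter even (iterate (+1) 0))
    -- [0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38]

    take 20 (iterate (*2) 1)
    -- [1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,
    --  65536,131072,262144,524288]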

> whew, I'll call it a draw here

My point was that here the Lisp and Haskell versions are so utterly 
similar that nobody can seriously claim that Lisp is more concise than 
Haskell (or vice versa). So saying "only Lisp can be concise - look at 
this!" isn't very compelling.

> and reply about the Seven Guilds of Haskell later...

Heh, yeah, I need to go think about that myself.

Actually, just the other day I was trying to pin down in my head exactly 
which sorts of programs Haskell is good for and bad for.

