From: Orchid Win7 v1
Subject: Wall of text
Date: 18 Jun 2015 15:27:27
Message: <55831b9f$1@news.povray.org>
Arguably one of the most famous Haskell functions is Map().

Map() takes two arguments --- a function and a list --- and it applies 
the function to every element of the list, returning the results in a 
new list. Simple.

In fact, Haskell has a version of the Map() function which is more 
polymorphic than that. It can be applied to things that aren't lists.

For example, Haskell does not have null. But it does have something 
called Maybe. An Int variable definitely holds an integer. However, a 
Maybe<Int> variable does exactly what it sounds like --- it's a value 
that *maybe* contains an Int.
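
In real Haskell syntax (the angle brackets below are borrowed from C# 
for familiarity), Maybe is defined in the standard library as:

   data Maybe a = Nothing | Just a

So a value of type Maybe Int is either Nothing, or Just some Int.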

This is actually quite a big deal. In a language such as Java or C#, 
every non-primitive datatype is basically a pointer type --- and every 
such type can potentially be null.

Here's the problem though: 99% of the code in your codebase is probably 
expecting that all of its arguments will never, ever be null, and will 
behave in some unspecified way if null is present. (Typically by 
throwing an exception, but possibly several method calls *after* the 
problematic one!) On the other hand, 1% of your code *does* expect null 
as a legitimate value that actually signals something important. Sadly, 
it's nearly impossible to figure out which 1% that is, unless by some 
fluke it happens to be documented.

In Haskell, this stuff is in the type signature. If a function takes a 
Customer argument, then it is 100% guaranteed at compile-time that the 
input will *always* be an actual Customer value, and never null. Because 
Haskell *doesn't have* null. But if the Customer argument is actually 
optional, you can explicitly declare it as Maybe<Customer>. This 
instantly signals to anybody trying to use this function that it's 
expecting the possibility of there being no Customer. And it also means 
the function itself won't compile unless it actually *does* handle the 
Customer argument being empty. You cannot "forget" to do this; it won't 
type-check.
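
As a minimal sketch (Customer here is a hypothetical type, purely for 
illustration):

   data Customer = Customer { name :: String }

   greet :: Maybe Customer -> String
   greet Nothing         = "Hello, guest!"
   greet (Just customer) = "Hello, " ++ name customer

You cannot use a Maybe Customer where a plain Customer is expected; you 
have to unwrap it first, and the compiler will at least warn you if you 
leave out the Nothing case.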

Maybe does a similar thing for results. Rather than throw an exception 
if something goes wrong, you can return a Maybe<something>. Again, this 
signals to anybody using your code that it may fail in some way. And it 
forces you to check for failure. (You cannot "forget" to do this.) And 
you don't need any special exception-catching machinery; it's just data. 
You just inspect it.
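
For example, a division function that signals failure with Maybe 
instead of an exception (a minimal illustration):

   safeDiv :: Int -> Int -> Maybe Int
   safeDiv _ 0 = Nothing
   safeDiv x y = Just (x `div` y)

The caller gets back ordinary data, and must inspect it before using 
the result.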

(In a way, it's kind of like the C# Nullable<> type. But in C#, you only 
use that for value-types [i.e., almost no real-world types except 
built-in primitives]. But I digress...)

So Maybe is wonderful. But, as you can imagine, if you're not careful 
you can end up writing a lot of code dealing with maybies, to the point 
of obscuring your actual logic.

Fortunately, Map() works on Maybe values. Instead of a list of values, 
you can have a Maybe<something>, and if it contains a value, Map() will 
apply your function to that value, and put the result back into a 
Maybe<something else>.
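
In Haskell, Map() is spelled fmap, and it works on lists and Maybe alike:

   fmap (+1) [1, 2, 3]  ==>  [2, 3, 4]
   fmap (+1) (Just 5)   ==>  Just 6
   fmap (+1) Nothing    ==>  Nothing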

Anything that has a Map() implementation is called a "functor". (This 
term apparently derives from category theory. Formally, Haskell's 
Functor class represents the endofunctors on the category of valid 
Haskell types... Yeah.) So list and Maybe are both functors. You might 
also expect that Set would be a functor. Well, it *should* be... but it 
isn't, since it doesn't work for *all* possible types. (Only types that 
have a total ordering.) There are a few other container types that are 
functors though (e.g., Sequence).

Slightly bizarrely, *functions* are functors. Given a function from type 
X to type Y, you can Map() a function from type Y to type Z over it, 
yielding a new function from type X to type Z in a fairly obvious way. 
The fact that you can do this routinely confuses the hell out of people. ;-)
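
Concretely, the standard library defines Map() for functions as plain 
function composition:

   fmap (*2) (+3) $ 10  ==>  26   -- same as ((*2) . (+3)) 10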

If this was an object-oriented programming language, there would be an 
IMappable interface, and all the standard container classes would 
implement it. But you know what else is a functor? Parsers.

Now a parser is obviously no kind of container whatsoever. But given a 
Parser that takes some input and returns a String, you can take a 
function from String to Int and Map() it over the Parser, yielding a new 
Parser that produces an Int.

Here we don't have a *container* of stuff, we have an "action" that 
*produces* stuff. And we can Map() a function over the stuff that will 
ultimately be produced.
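
As a sketch, you can model a parser as a function that consumes part of 
the input and produces a result (a toy type for illustration, not any 
particular library):

   newtype Parser a = Parser (String -> Maybe (a, String))

   instance Functor Parser where
     fmap f (Parser p) = Parser $ \input ->
       case p input of
         Nothing        -> Nothing
         Just (x, rest) -> Just (f x, rest)

The mapped parser runs the original parser, then applies f to whatever 
it produced.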

Once you wrap your mind around this idea, it comes as little surprise 
that other sorts of "actions" are also functors. For example, an atomic 
shared-memory transaction is a functor. An I/O operation is a functor.

So here we take two things --- data containers and executable actions 
--- which appear wildly different, and unify them under a single 
abstraction (functor) based on their abstract similarity. That's what 
happens when your library design is guided by advanced mathematics 
rather than commercial interests.



Suppose we want to run two parsers and combine their output. Now the 
Map() function allows us to apply a 1-argument function to the output of 
1 parser. But how do we apply a 2-argument function to the output of 2 
parsers?

Well, recall that Haskell has curried functions. If you call a 
5-argument function with 1 argument, you end up with a 4-argument 
function. That being the case, we actually *can* Map() a 2-argument 
function over a parser. The result is a parser that produces a 
1-argument function.
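
For example (a trivial illustration):

   add3 :: Int -> Int -> Int -> Int
   add3 x y z = x + y + z

   -- add3 1     :: Int -> Int -> Int   (a 2-argument function)
   -- add3 1 2   :: Int -> Int          (a 1-argument function)
   -- add3 1 2 3 :: Int                 (an actual result)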

At this point we're stuck. There's no obvious way to proceed. Indeed, 
using only Map(), you can't go any further.

Anything that implements Map() is called a functor. But anything that 
implements Apply() is called an *applicative functor*. (Yeah, it's so 
easy to pronounce...)

Forget about parsers for a moment. Let's think about lists again. So 
let's say we have two lists and a 2-argument function. If we Map() the 
function over the first list, we now have a list of 1-argument 
functions, and we're stuck.

The Map() function takes *a* function and a list of inputs. But the 
Apply() function takes *a list* of functions and a list of inputs. And 
it applies every function to every input, collecting all the results into 
a single, flat list. That means that we can do something like

   Apply(Map(my_func, list1), list2)

Assuming my_func is a 2-argument function, Map(my_func, list1) applies 
my_func to every datum in list1, producing a *list* of functions, which 
Apply() then applies to every datum in list2.

Notice that we can now handle *any* number of arguments. If I take a 
4-argument function and Map() it, I get a list of 3-argument functions. 
If I Apply() that, now I have a list of 2-argument functions, which I 
can Apply() again to get a list of 1-argument functions, and Apply() 
again to get a list of results. Something like

   Apply(Apply(Apply(Map(my_func, list1), list2), list3), list4)

Recall that in Haskell, the syntax for function calls is not

   my_func(x, y)

but rather

   my_func x y

Well, in Haskell, we define the following aliases:

   <$> = Map()
   <*> = Apply()

That done, we can now write

   my_func <$> list1 <*> list2

This is much more readable! The syntax for a normal function call is 
nearly identical to the syntax for calling a function multiple times 
with arguments pulled out of lists. Indeed, we can also write

   my_func <$> list1 <*> list2 <*> list3 <*> list4

Now it is clear that we can handle any desired number of arguments. And 
it's all thanks to curried functions!
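
For instance, with actual Haskell lists:

   (+) <$> [1, 2] <*> [10, 20]  ==>  [11, 21, 12, 22]

Every function produced on the left is applied to every value on the 
right, exactly as Apply() promises.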

This trick of course works with parsers too. If we have a function that 
combines the output of our two parsers, we can just do

   combine_func <$> parser1 <*> parser2

(Or, indeed, any other number of parsers.) So now we have a systematic 
way to glue an arbitrary number of simple parsers together to form more 
complex parsers. We can hierarchically build parsers as complex as we wish.
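
As a small sketch using the parsec library (an assumption for 
illustration; anyChar accepts any single character):

   import Text.Parsec (anyChar)
   import Text.Parsec.String (Parser)

   -- Combine the output of two character parsers into a pair:
   pair :: Parser (Char, Char)
   pair = combine <$> anyChar <*> anyChar
     where combine x y = (x, y)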



If you've read this monologue before, you might recall me rambling on 
about "monads", and how you can build parsers using them. Where are 
those guys now?

You actually can build sophisticated parsers using only applicative 
functors. And, indeed, it is considered good practice to do so, for 
reasons that I'll come to later. But it turns out that monadic parsers 
can do things that applicative parsers cannot. There are grammars for 
which you can write a monadic parser, but you cannot write an 
applicative one.

Most particularly, you cannot decide which parser to run next based on 
what input you've seen so far. Neither Map() nor Apply() allows you to 
inspect the input and decide which parser to run next; the sequence of 
parsers is decided long before we actually "run" the parsers.

This is very frequently no problem at all. But sometimes it *is* a 
problem. Most particularly, an applicative functor is often all you need 
for parsing, but for I/O it's nearly useless. You *want* your program to 
perform different I/O based on what input you feed it!

So let's try to make a parser that changes its mind. Suppose we want to 
parse two identical characters. Say we have a parse_any() function that 
accepts any character (and returns it), and a parse_char() function that 
only accepts the specified character. What we want to do is something like

   var c = parse_any();
   parse_char(c);

But of course, parse_any() doesn't return a *character*; it constructs a 
*parser*! So what are we to do? Well, this *almost* works:

   Map(parse_char, parse_any)

This will run the parse_any parser, and then apply the parse_char 
function to its output. [Notice how the parsers have ended up in 
"backwards" order, with parse_char *last* and parse_any *first*.] The 
output of parse_any is a character, so that's right. Trouble is... well, 
look:

   Parser<char> parse_any() {...}
   Parser<char> parse_char(char c) {...}

   Parser<Y> Map(Func<X, Y>, Parser<X>)

   Map(Func<char, Parser<char>>, Parser<char>) ==> Parser<Parser<char>>

We've written a parser that returns another parser! Oops! That's not 
right... We don't want to *return* the second parser, we want to *run* it!

Fortunately, we can sort this out with one extra function: Join(). The 
Join function takes a functor of functors and returns a single functor.

For a list, it's quite easy to see what Join() does. It takes a list of 
lists, and returns a flat list. So basically it takes the outer list, 
and concatenates together all the lists inside it. Easy.
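
For instance (join is the actual Haskell name, from Control.Monad):

   join [[1, 2], [3], []]  ==>  [1, 2, 3]
   join (Just (Just 5))    ==>  Just 5
   join (Just Nothing)     ==>  Nothing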

For a parser, it symbolically "looks like" the function just deletes one 
copy of the word "Parser" from the type signature:

   Parser<X> Join(Parser<Parser<X>> nested);

But in fact, what this function is doing is constructing a new parser 
that runs the outer parser and then carefully connects all the parse 
state up to the inner parser before running it. So that (for example) 
the new parser continues from the same point that the outer parser had 
got up to. That kind of thing. So you can kind of imagine what the code 
might look like.
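
For the toy Parser type sketched earlier, Join() might look something 
like this:

   newtype Parser a = Parser (String -> Maybe (a, String))

   joinParser :: Parser (Parser a) -> Parser a
   joinParser (Parser outer) = Parser $ \input ->
     case outer input of
       Nothing                   -> Nothing
       -- run the inner parser from wherever the outer one stopped:
       Just (Parser inner, rest) -> inner rest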

Using Join(), we can implement our desired parser:

   Join(Map(parse_char, parse_any))
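
In real Haskell, using (say) parsec's anyChar and char in place of 
parse_any and parse_char:

   import Control.Monad (join)
   import Text.Parsec (anyChar, char)
   import Text.Parsec.String (Parser)

   -- Accept any character, then insist the next one is identical:
   twoEqual :: Parser Char
   twoEqual = join (fmap char anyChar)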



It turns out that anything which implements Apply() and Join() is 
actually a monad. Well, not quite. I've missed out one function: 
Inject(). Sometimes you need a parser in a certain place (e.g., a 
function input), but you don't want to actually *parse* anything, you 
just want to always return a specific result. That's what Inject() is for.

   Parser<X> Inject(X);

Perhaps it's easier to think in terms of lists. In that case, Inject() 
takes an element and returns a 1-element list.
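
In Haskell, Inject() is spelled pure:

   pure 5 :: Maybe Int  ==>  Just 5
   pure 5 :: [Int]      ==>  [5]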

Let us survey the scene so far:

   Functor:
     List<Y> Map(Func<X, Y>, List<X>);
   Applicative functor:
     List<X> Inject(X);
     List<Y> Apply(List<Func<X, Y>>, List<X>);
   Monad:
     List<X> Return(X);
     List<Y> Bind(List<X>, Func<X, List<Y>>);
   or
     List<X> Return(X);
     List<X> Join(List<List<X>>);

It's absolutely obvious that Inject == Return. These two functions both 
do exactly the same thing. It may not be immediately obvious that every 
Applicative is also a Functor. Why? Because Map = Inject+Apply:

   Map(f, listX) = Apply(Inject(f), listX);

Apply() takes a *list* of functions, whereas Map() takes *one* function. 
Well if you use Inject() to put your function into a 1-element list, 
then you're done.
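
In Haskell terms, this recipe works for any Applicative:

   mapViaApply :: Applicative f => (a -> b) -> f a -> f b
   mapViaApply f x = pure f <*> x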

So, everything that implements Applicative basically implements Functor 
as well. (In fact, Haskell *requires* you to implement Map() if you have 
an implementation for Inject() and Apply().)

It's not totally obvious that Bind = Map+Join. But it is:

   Bind(listX, f) = Join(Map(f, listX));

Bind takes a list of values and a function that takes a value and 
produces a list of results. If you just Map() the function, you get a 
list of lists. Join() will flatten that down for you.
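
In Haskell:

   import Control.Monad (join)

   bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
   bindViaJoin m f = join (fmap f m)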

It might seem dumb to *require* that the function passed to Bind() has 
to return a *list* of results. But if you think about 
parsing again, then Bind() takes a parser and a function that takes the 
result of the parser and constructs the next parser to run. And now it 
makes sense why Bind() looks like this. Again, Join() is running an 
outer parser and connecting its workings up to the inner parser thus 
returned.

It's also not obvious, but Join = Bind+identity:

   Join(list_list_X) = Bind(list_list_X, list_X => list_X);

Bind will execute a function for each element of the outer list. If that 
function just *returns* whatever you give it, then we've flattened a 
list of lists into one list --- exactly what Join() is specified to do!
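
In Haskell, where the identity function is spelled id:

   joinViaBind :: Monad m => m (m a) -> m a
   joinViaBind mm = mm >>= id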

So you can construct Bind() from Map()+Join(), and you can construct 
Join() from Bind(). Incidentally, you can build Apply() from Bind() too:

   Apply(list_funcs, list_X) =
     Bind(list_funcs, f => Bind(list_X, x => Return(f(x))));
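
In Haskell (this is exactly the standard library function ap):

   applyViaBind :: Monad m => m (a -> b) -> m a -> m b
   applyViaBind mf mx = mf >>= \f -> mx >>= \x -> return (f x)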

So every Monad really is an Applicative, and every Applicative really is 
a Functor.

(Until recently, Haskell required everything declared as Applicative to 
be declared as Functor as well, but you could declare something as Monad 
without any other declarations. With the latest revision of the standard 
libraries, every Monad is now *required* to be an Applicative [and hence 
a Functor] --- a breaking change which will probably affect quite a lot 
of older Haskell code...)

(Sadly, we still have Inject() *and* Return(), even though they clearly 
do precisely the same thing. When Monad wasn't required to be 
Applicative, that was necessary. Now that Monad automatically implies 
Applicative, maybe someday we can get rid of the duplicate function.)

Although a monad can be defined by return+bind *or* return+apply+join, 
Haskell chooses to define return+bind as the required elements, and join 
is then implemented in terms of bind. One might argue that Join() is 
perhaps easier to comprehend than Bind()... or maybe not. It depends on 
your perspective.



Back to that statement I made earlier. Applicative parsers are less 
powerful than monadic ones. So why are they considered better?

One aspect is notation. Compare:

   my_parser = do
     x <- parser1
     y <- parser2
     z <- parser3
     return $ my_func x y z

   my_parser = my_func <$> parser1 <*> parser2 <*> parser3

Some say that applicative *style* is less imperative and more readable. 
In this example, it's certainly less verbose!

But there's more. The monadic parser above basically desugars into

   Bind(parser1, x =>
     Bind(parser2, y =>
       Bind(parser3, z => Return(my_func(x, y, z)))
     )
   )

Now we don't know exactly how Bind() or Return() might be implemented 
for a particular parser library. But what we *do* know is this: the 
outer-most Bind() function is being given two arguments. One is parser1, 
which is a parser object that we, the library authors, can probably 
inspect in all sorts of interesting ways. The other is a function. 
That's it. It's just an arbitrary blob of Turing-complete code. We can't 
do anything with this code except execute it.

(I notice that in C#, you can actually *introspect* such code, so you 
can notionally see what it does. But beware Rice's theorem: determining 
almost any useful information about this function is formally equivalent 
to the Halting Problem. So even in C#, you're not going to get very far!)

Consider, on the other hand, the applicative parser. Desugared, it looks 
like this:

   Apply(Apply(Map(my_func, parser1), parser2), parser3)

We still can't do anything with my_func. But it's plainly possible for 
Map() to return a parser that contains a marker saying "this was built 
by Map()", and likewise for Apply(). And this opens up a fantastic 
possibility: you can execute the above function calls to build your 
complicated parser, and then you can write an "optimiser" function that 
rearranges the internal structure of the parser to improve performance.

This is possible because the parsers are directly connected to each 
other. Only at the final step do we Map() an arbitrary Turing-complete 
computation over the *output* of a parser. The parsers themselves are 
statically generated and cannot change based on the input being parsed. 
And that means we can potentially do the kind of rearrangements that an 
SQL engine might do to a complex query.

In short, an applicative parser supports run-time optimisation before we 
run it. A monadic parser utterly thwarts any attempts at run-time 
analysis. (Although perhaps the compiler can do something interesting at 
compile-time.) Then again, you can write a monadic parser that parses a 
BNF grammar followed by a document written in that grammar! There's not 
much hope of optimising such a parser before it runs...



From: Stephen
Subject: Grouting of comment.
Date: 18 Jun 2015 17:00:05
Message: <55833155$1@news.povray.org>
On 18/06/2015 20:27, Orchid Win7 v1 wrote:
> Arguably one of the most famous Haskell functions is Map().

I do like reading your paragraphs, even when I can't follow them 100%. 
They are educational.
So much so I understood the mention of Haskell in the BBC series of 15 
minutes on "Codes that changed the World".

http://www.bbc.co.uk/programmes/b05qqhqp

(The last prog. 5/5)

-- 

Regards
     Stephen



From: Stephen
Subject: Re: Grouting of comment.
Date: 18 Jun 2015 17:01:01
Message: <5583318d$1@news.povray.org>
On 18/06/2015 22:00, Stephen wrote:
>
> http://www.bbc.co.uk/programmes/b05qqhqp
>
> (The last prog. 5/5)

Oops!

http://www.bbc.co.uk/programmes/b05qqhqp/episodes/player


-- 

Regards
     Stephen



From: Orchid Win7 v1
Subject: Re: Grouting of comment.
Date: 21 Jun 2015 17:41:09
Message: <55872f75$1@news.povray.org>
On 18/06/2015 10:00 PM, Stephen wrote:
> On 18/06/2015 20:27, Orchid Win7 v1 wrote:
>> Arguably one of the most famous Haskell functions is Map().
>
> I do like reading your paragraphs, even when I can't follow them 100%.
> They are educational.

That's always good to hear.

> So much so I understood the mention of Haskell in the BBC series of 15
> minutes on "Codes that changed the World".

Hmm. Kinda interesting. And yet, at the same time, it's one of those 
where everything has been dumbed down to the point where they might as 
well be talking about invisible pixie fairies or something. A bit like...

http://blog.xkcd.com/2015/05/13/new-book-thing-explainer/



From: Stephen
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 05:49:57
Message: <5587da45@news.povray.org>
On 21/06/2015 22:41, Orchid Win7 v1 wrote:
>> So much so I understood the mention of Haskell in the BBC series of 15
>> minutes on "Codes that changed the World".
>
> Hmm. Kinda interesting. And yet, at the same time, it's one of those
> where everything has been dumbed down to the point where they might as
> well be talking about invisible pixie fairies or something.

Yes, it is for a radio 4 audience. Grown ups who are interested but 
never had the time to learn.

> A bit like...
>
> http://blog.xkcd.com/2015/05/13/new-book-thing-explainer/

That looks like fun. :-)

For a slightly different take on educating those who do not know.
Do you remember the secret life of machines?

http://www.secretlifeofmachines.com/index.shtml

https://en.wikipedia.org/wiki/The_Secret_Life_of_Machines



-- 

Regards
     Stephen



From: Thomas de Groot
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 08:23:13
Message: <5587fe31$1@news.povray.org>
On 22-6-2015 11:49, Stephen wrote:
> On 21/06/2015 22:41, Orchid Win7 v1 wrote:
>>> So much so I understood the mention of Haskell in the BBC series of 15
>>> minutes on "Codes that changed the World".
>>
>> Hmm. Kinda interesting. And yet, at the same time, it's one of those
>> where everything has been dumbed down to the point where they might as
>> well be talking about invisible pixie fairies or something.
>
> Yes, it is for a radio 4 audience. Grown ups who are interested but
> never had the time to learn.
>
>> A bit like...
>>
>> http://blog.xkcd.com/2015/05/13/new-book-thing-explainer/
>
> That looks like fun. :-)

It does, and it immediately shows where things go seriously wrong with 
this concept: /fire/ does not come out of volcanoes; /land/ is not going 
away.

Translating concepts into a simpler language also requires an 
/understanding/ of those concepts. I fear that is not the case here.

>
> For a slightly different take on educating those who do not know.
> Do you remember the secret life of machines?
>
> http://www.secretlifeofmachines.com/index.shtml

This is a much better and more accurate work than the book by Randall 
Munroe.


-- 
Thomas



From: Orchid Win7 v1
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 14:14:25
Message: <55885081$1@news.povray.org>
>> Hmm. Kinda interesting. And yet, at the same time, it's one of those
>> where everything has been dumbed down to the point where they might as
>> well be talking about invisible pixie fairies or something.
>
> Yes, it is for a radio 4 audience. Grown ups who are interested but
> never had the time to learn.

Yeah, I guess.

I'm always disappointed when I watch Horizon or something, and they tell 
you about something interesting, but simplified to the point where you 
have no idea what they actually mean anymore.

>> A bit like...
>>
>> http://blog.xkcd.com/2015/05/13/new-book-thing-explainer/
>
> That looks like fun. :-)
>
> For a slightly different take on educating those who do not know.
> Do you remember the secret life of machines?

No idea. What I *do* remember is this:

https://en.wikipedia.org/wiki/The_Way_Things_Work

This is how I learned how logic gates work. I was *so* upset when Emma 
destroyed the book. Then again, she only did it to upset me, so...



From: Stephen
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 15:38:41
Message: <55886441$1@news.povray.org>
On 22/06/2015 19:14, Orchid Win7 v1 wrote:
>>> Hmm. Kinda interesting. And yet, at the same time, it's one of those
>>> where everything has been dumbed down to the point where they might as
>>> well be talking about invisible pixie fairies or something.
>>
>> Yes, it is for a radio 4 audience. Grown ups who are interested but
>> never had the time to learn.
>
> Yeah, I guess.
>
> I'm always disappointed when I watch Horizon or something, and they tell
> you about something interesting, but simplified to the point where you
> have no idea what they actually mean anymore.
>

It is sad when you know more than your teacher. :-(


>
> This is how I learned how logic gates work. I was *so* upset when Emma
> destroyed the book. Then again, she only did it to upset me, so...

When you go to her wedding. Forget to put on your trousers on the big 
day. ;-)

-- 

Regards
     Stephen



From: Orchid Win7 v1
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 16:25:57
Message: <55886f55$1@news.povray.org>
>>> Yes, it is for a radio 4 audience. Grown ups who are interested but
>>> never had the time to learn.
>>
>> Yeah, I guess.
>>
>> I'm always disappointed when I watch Horizon or something, and they tell
>> you about something interesting, but simplified to the point where you
>> have no idea what they actually mean anymore.
>
> It is sad when you know more than your teacher. :-(

Well, that was my experience for much of higher education. But I meant 
more when you watch a program about string theory or something, and it 
*sounds* like it's probably really interesting... but they didn't want 
to do an hour of equations, so instead they just have arty shots of 
expensive particle accelerators and some whizzy computer graphics. And 
an analogy about ants.

>> This is how I learned how logic gates work. I was *so* upset when Emma
>> destroyed the book. Then again, she only did it to upset me, so...
>
> When you go to her wedding. Forget to put on your trousers on the big
> day. ;-)

Pff. Emma got married years ago.



From: Stephen
Subject: Re: Grouting of comment.
Date: 22 Jun 2015 18:32:53
Message: <55888d15$1@news.povray.org>
On 22/06/2015 21:26, Orchid Win7 v1 wrote:


>> It is sad when you know more than your teacher. :-(
>
> Well, that was my experience for much of higher education. But I meant
> more when you watch a program about string theory or something, and it
> *sounds* like it's probably really interesting... but they didn't want
> to do an hour of equations, so instead they just have arty shots of
> expensive particle accelerators and some whizzy computer graphics. And
> an analogy about ants.
>

Sometimes radio programmes are better. No graphics but light on the hard 
sums. :-)

>>> This is how I learned how logic gates work. I was *so* upset when Emma
>>> destroyed the book. Then again, she only did it to upset me, so...
>>
>> When you go to her wedding. Forget to put on your trousers on the big
>> day. ;-)
>
> Pff. Emma got married years ago.

Sorry, I thought you were talking about your sister.


-- 

Regards
     Stephen


