POV-Ray : Newsgroups : povray.off-topic : Reach for the SKI
  Reach for the SKI (Message 1 to 10 of 16)  
From: Orchid Win7 v1
Subject: Reach for the SKI
Date: 20 Jul 2015 17:03:11
Message: <55ad620f$1@news.povray.org>
(OK, that was terrible.)



Over the weekend, I implemented an interpreter for the SKI combinator 
calculus. In Bash script.

I am a sick, sick individual.



"The who to the what now?" you ask. Well, around about the time Alan 
Turing was inventing Turing machines, a guy called Alonzo Church was 
inventing something called the lambda calculus. If a Turing machine is 
like an ultra-simplified computer, then the lambda calculus is like an 
ultra-simplified programming language.

(Of course, all the experts know that "really" they're neither of these 
things. They're actually abstract models of computation. But we'll 
cheerfully ignore that for a moment.)

The lambda calculus is a programming language which is *so simple* that 
it doesn't even have arithmetic built-in. In order to do trivial 
calculations, you first have to manually *define* what arithmetic "is". 
For every single program. Needless to say, programs in the lambda 
calculus are... quite verbose.

The lambda calculus is conceptually simple, but rather fiddly to parse, 
and even more awkward to execute. (In particular, there are some fun 
edge-cases to do with scoping rules.) However, every possible lambda 
expression can be encoded into an equivalent expression that contains 
only three pre-defined values, traditionally denoted by the letters S, K 
and I.

Let me say that again: The SKI combinator calculus is a programming 
language consisting only of three letters (and millions of brackets), 
and it's somehow Turing-complete.

The rules for executing SKI expressions are considerably simpler, and the 
language is much easier to parse. So I set about writing an 
interpreter for it. As a Unix shell script. It uses some features 
specific to the Bourne Again Shell ("Bash"), so it's not portable to 
other systems. But Bash is everybody's default shell anyway, so...



As you may know, any possible computable function can be computed by a 
suitable Turing machine. However, Turing machines have no notion of 
input/output operations, which is what we generally want real-world 
programs to have. Likewise, neither the lambda calculus nor the SKI 
calculus has a way to do I/O. So I added one.

It's based on the idea of how the original Haskell I/O system used to 
work. (I.e., before we got monads.) Initially, I made the system so that 
the SKI program accepts an arbitrary-length stream of code numbers as 
input, and returns an arbitrary-length stream of code numbers as output. 
I later changed it to be a stream of *bits*. My plan was that the 
interpreter would read these codes and do different I/O operations based 
on the "commands" it receives. I've since decided that since even the 
most trivial SKI program is quite hard enough, this extra layer isn't 
needed. (Yet?)

I've dubbed the resulting language "Pointless". So how do you execute a 
Pointless program? The rules are quite simple:

* The only legal input characters are: S K I ( )

* The input source code is a sequence of "items". An "item" consists of 
a single letter, or a pair of brackets enclosing ONE or more sub-items. 
(It is an error for there to be ZERO items in the brackets.)

* To "execute" the program, you apply a series of transformations to the 
source code, according to the rules below.

* First, append the items G, E, P to the right-hand end of the program.

* Now endlessly apply the transformation rules. Look at the LEFT-MOST 
item in the program, and select the applicable rule:

RULE #1: "(" - Delete the opening bracket and the MATCHING closing bracket.

Rule #2: "I" - Delete the I.

Rule #3: "K" - Delete the K, keep the next item, delete the item after that.

Rule #4: "S" - Delete the S. Number the following items as 1, 2, 3. 
Remove them, and replace with the sub-expression "1 3 (2 3)".

Rule #5: "G" - Read one bit of input. Delete the G and replace it with:

+ S(K(S(SI(K (KI) ))))G   if the bit read was 0
+ S(K(S(SI(K K ))))G      if the bit read was 1

Rule #6: "E" - End of program. Stop executing.

Rule #7: "P" - Delete the P, keep the next item, insert O, Z after it.

Rule #8: "O" - Delete the O. Output a 1 bit.

Rule #9: "Z" - Delete the Z. Output a 0 bit.

The first 4 rules constitute the entire SKI calculus. The 5 subsequent 
rules specify how to perform I/O.

For example, consider the expression

   SSKKII

By rule #4, 1=S, 2=K, 3=K, and we get

   SK(KK)II

Applying rule #4 again, 1=K, 2=(KK), 3=I and we have

   KI((KK)I)I

Now rule #3 applies, which deletes the ((KK)I) item:

   I

As a slightly longer example,

   S(S(KS)K)IZO
                        S: 1=(S(KS)K), 2=I, 3=Z

   (S(KS)K)Z(IZ)O
   ^      ^             Brackets

   S(KS)KZ(IZ)O
                        S: 1=(KS), 2=K, 3=Z

   (KS)Z(KZ)(IZ)O
   ^  ^                 Brackets

   KSZ(KZ)(IZ)O
                        K: 1=S, 2=Z

   S(KZ)(IZ)O
                        S: 1=(KZ), 2=(IZ), 3=O

   (KZ)O((IZ)O)
   ^  ^                 Brackets

   KZO((IZ)O)
                        K: 1=Z, 2=O

   Z((IZ)O)
                        Output 0

   ((IZ)O)
   ^     ^              Brackets

   (IZ)O
   ^  ^                 Brackets

   IZO
                        I

   ZO
                        Output 0

   O
                        Output 1

So, in the end, this program outputs 001.
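For anyone who wants to play along without the Bash machinery, the first 
four rules are easy to mechanise. Here is a minimal sketch in Python 
(my choice purely for brevity; the function names are mine), treating an 
expression as a list of items applied left to right:

```python
def parse(src):
    """Parse 'S(SIK)I' into a nested list of items, e.g. ['S', ['S', 'I', 'K'], 'I']."""
    stack = [[]]
    for ch in src:
        if ch == '(':
            stack.append([])
        elif ch == ')':
            item = stack.pop()
            stack[-1].append(item)
        elif ch in 'SKI':
            stack[-1].append(ch)
    return stack[0]

def step(items):
    """Apply one rule to the left-most item, or return None if in normal form."""
    head = items[0]
    if isinstance(head, list):            # Rule 1: remove the brackets
        return head + items[1:]
    if head == 'I' and len(items) >= 2:   # Rule 2: delete the I
        return items[1:]
    if head == 'K' and len(items) >= 3:   # Rule 3: keep item 1, drop item 2
        return [items[1]] + items[3:]
    if head == 'S' and len(items) >= 4:   # Rule 4: 1 2 3 -> 1 3 (2 3)
        x, y, z = items[1], items[2], items[3]
        return [x, z, [y, z]] + items[4:]
    return None

def reduce_ski(items):
    while (nxt := step(items)) is not None:
        items = nxt
    return items
```

Running `reduce_ski(parse('SSKKII'))` walks through exactly the trace 
above and stops at `['I']`.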



It turns out you can write a parser that parses items and an execution 
engine that implements the 9 rules in less than 99 lines of shell 
script, as I shall shortly demonstrate.

My initial thought was that certainly the first 4 rules ought to be 
pretty simple to implement using sed (the Unix Stream EDitor). However, 
sed only understands regular expressions, and regular expressions cannot 
handle *nested* brackets. For that, you need a context-free grammar.

After many experiments, I hit upon a simple idea: represent 
single-letter items as themselves, and represent compound items by ID 
numbers. So the shell script first runs the input through the "tr" 
command (TRanslate), which strips all characters except the ones we're 
interested in. (In particular, it strips all whitespace.) Then the 
script creates one file on disk for each compound item.

For example, the expression

   S(SIK)(KI(SS))

might be translated into a bunch of disk files that look like this:

   0001.item: S 2 3
   0002.item: S I K
   0003.item: K I 4
   0004.item: S S

Now bash has a "read" command, which reads one line of input and splits 
it into "words". By putting spaces around each item, this makes Bash 
automatically split up the words, and put them into separate variables. 
(It also means we can count higher than 9, of course...)

The parser is a fairly simple affair; once all spaces have been 
stripped, look at each character. If it's an open bracket, allocate a 
new item ID and start parsing items into that file. If it's a close 
bracket, return to the caller [presumably a bracket handler, which will 
then back out of the current bracket sequence].
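The numbering trick itself is easy to see without the Bash machinery. 
Here is a hypothetical Python sketch of the same parser, using a 
dictionary where the script uses *.item files on disk:

```python
def flatten(src):
    """Split 'S(SIK)(KI(SS))' into numbered items, one per bracket pair.
    Returns e.g. {1: 'S 2 3', 2: 'S I K', 3: 'K I 4', 4: 'S S'}."""
    heap = {}
    counter = [0]
    chars = list(src)

    def parse_items():
        counter[0] += 1
        my_id = counter[0]          # allocate a fresh item ID, like NewItem
        tokens = []
        while chars:
            ch = chars.pop(0)
            if ch == '(':
                tokens.append(str(parse_items()))   # compound item: recurse
            elif ch == ')':
                break                               # back out to the caller
            else:
                tokens.append(ch)
        heap[my_id] = ' '.join(tokens)
        return my_id

    parse_items()
    return heap
```

Applied to the example above, this produces exactly the four "files" 
shown: item 1 is the top-level expression, and compound sub-items get 
the IDs 2, 3 and 4.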

Now ordinarily Bash tries to read lines. But according to Google [more 
exactly, Stack Overflow], the magical incantation

   IFS='' read -r -n1 CHAR

will read exactly one character into ${CHAR}. (IFS='' disables all 
delimiter characters, -r stops backslash escapes being interpreted, 
-n1 means "read 1 char".)

Once everything is item files containing space-delimited tokens, a call like

   read TOKEN1 TOKEN2 TOKENS <Filename

will redirect stdin to "Filename" and execute the "read" command, which 
puts the first token in ${TOKEN1}, the second in ${TOKEN2}, and ALL 
remaining tokens into ${TOKENS}.

At this point, it's just a matter of reading the item representing the 
top-level program expression, figuring out what the first token is, and 
overwriting the item file with the new, modified expression. (In some 
cases, that means allocating new items, but for the most part it's just 
rearranging existing ones.)



So, without further ado, here is the actual shell script:

----------------------------------------------------------------------

#!/bin/bash

shopt -s extglob   # Allow extended glob patterns in case statements.

if [ ! -r "$1" ]; then echo "Can't open file '$1' for reading." ; exit 10 ; fi

if [ -d Heap ]; then rm -r Heap ; fi
mkdir Heap
echo 1 >Heap/Next

function NewItem()
{
   local ITEM=$( cat Heap/Next )
   echo ${ITEM}
   let ITEM++
   echo ${ITEM} >Heap/Next
}

function ItemPath() { printf "Heap/%04d.item" $1 ; }

function Ingest()
{
   printf '('
   tr -cd 'SKI()' <"$1"
   printf 'GEP)'
}

function ParseItem()
{
   IFS='' read -r -n1 CHAR
   if [ "${CHAR}" == "(" ]; then ParseItems ; return 0 ; fi
   if [ "${CHAR}" == ")" ]; then              return 1 ; fi
   printf "${CHAR} " ; return 0
}

function ParseItems()
{
   local ITEM=$( NewItem )
   local FILE=$( ItemPath ${ITEM} )
   printf "${ITEM} "
   while ParseItem >>${FILE} ; do : ; done
}

function Decode()
{
   local ITEMS ITEM
   read  ITEMS <$( ItemPath $1 )
   for ITEM in ${ITEMS}
   do
     if [[ "${ITEM}" =~ [0-9]+ ]]
       then printf '(' ; Decode ${ITEM} ; printf ')'
       else printf "${ITEM}"
     fi
   done
}

function Exec()
{
   local FILE=$( ItemPath $1 )
   while true
   do
     Decode $1 && printf '\n'
     read CMD ITEM1 ITEM2 ITEM3 ITEMS <${FILE}
     case "${CMD}" in
       +([0-9]))
         read OTHERS <$( ItemPath ${CMD} )
         printf "${OTHERS} ${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE}
         ;;
       "I") printf "${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
       "K") printf "${ITEM1}          ${ITEM3} ${ITEMS}" >${FILE} ;;
       "S")
         CHILD=$( NewItem )
         printf "${ITEM2} ${ITEM3}"                   >$( ItemPath ${CHILD} )
         printf "${ITEM1} ${ITEM3} ${CHILD} ${ITEMS}" >${FILE}
         ;;
       "G")
         read BIT <&10
         case "${BIT}" in
           "")  printf "K          ${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
           "0") printf "${CONS0} G ${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
           "1") printf "${CONS1} G ${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
         esac
         ;;
       "E") echo "End of program." ; exit 0 ;;
       "P") printf "${ITEM1} O Z ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
       "Z") echo 0 >&11 ; printf "${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
       "O") echo 1 >&11 ; printf "${ITEM1} ${ITEM2} ${ITEM3} ${ITEMS}" >${FILE} ;;
     esac
   done
}

CONS0=$( printf '(S(K(S(SI(K(KI))))))' | ParseItem )
CONS1=$( printf '(S(K(S(SI(KK)))))'    | ParseItem )
ROOT_ITEM=$( Ingest "$1" | ParseItem )
Exec ${ROOT_ITEM} 10<Input.rec 11>Output.rec

----------------------------------------------------------------------

Assumptions / limitations:

* The input bits must be stored in a text file [or conceivably a named 
pipe] called "Input.rec", which must contain one "0" or "1" per line. 
Even if the program being interpreted has no input commands!

* The output bits are stored in a similar file Output.rec, which will be 
created if it doesn't already exist.

* The shell script must be executed with the filename of the SKI program 
as its only input. Otherwise you'll get an error message about being 
unable to open the file.

* There is minimal error-checking. It isn't hard to add this, but it 
makes the shell script longer than 99 lines. I wanted to show how 
"simple" the job is, without a lot of error-handling clutter.

* In particular, mismatched brackets or incorrect SKI programs will 
probably provoke utterly baffling failures.

To try it out, you need some SKI programs to run. Possibly the simplest 
program is one that reads 0 bits of input and returns 0 bits of output:

   KK

Even simpler is a program whose output is identical to its input:

   I

Unimpressed, huh? OK, let's try something a bit more interesting. This 
program takes no input, and always produces 101 as its output:

   K
   (
     S(K(S(SI(K K ))))
     (
       S(K(S(SI(K (KI) ))))
       (
         S(K(S(SI(K K ))))
         K
       )
     )
   )

I said the lambda calculus is verbose, right? Well, the SKI calculus is 
even worse! (Wikipedia says something about  Θ(3^n) worse. Then again, 
Wikipedia also says [citation needed], so...) By adding more possible 
predefined letters, it's possible to reduce the bloat --- at the expense 
of making the interpreter more complicated, of course.

To do anything much less trivial than this... well, the programs start 
to get LARGE. I need to build myself a "compiler" to produce such 
programs. (But not in Bash - GOD NO!)

Still, this whole idea came about as I was thinking about a Bash script 
that contains a tarball that it then unzips, which mostly contains a 
giant SKI program and the SKI interpreter. Good luck to anybody trying 
to reverse-engineer that! ;-)



From: Orchid Win7 v1
Subject: Re: Reach for the SKI
Date: 21 Jul 2015 12:58:03
Message: <55ae7a1b$1@news.povray.org>
On 20/07/2015 10:03 PM, Orchid Win7 v1 wrote:

> Rule #5: "G" - Read one bit of input. Delete the G and replace it with:
>
> + S(K(S(SI(K (KI) ))))G if the bit read was 0
> + S(K(S(SI(K K ))))G if the bit read was 1

Alternatively, if reading the input reached EOF, replace G with K.

> Rule #7: "P" - Delete the P, keep the next item, insert O, Z after it.

I haven't looked into it properly yet, but it ought to be possible to 
dispense with this rule.

> The first 4 rules constitute the entire SKI calculus. The 5 subsequent
> rules specify how to perform I/O.

Overall, I'm a little disappointed with how "complicated" this stuff 
looks. It's meant to be like "wow, this is so simple, yet you can write 
entire programs with it? No way!" But it comes across as maybe too 
complicated for that.

> My initial thought was that certainly the first 4 rules ought to be
> pretty simple to implement using sed (the Unix Stream EDitor). However,
> sed only understands regular expressions, and such a construct cannot
> handle *nested* brackets. For that, you need a context-sensitive grammar.
>
> After many experiments, I hit upon a single idea: represent
> single-letter items as themselves, and represent compound items by ID
> numbers.

It's really not hard to write a Bash script that linearly scans the 
input, keeping a count of how many brackets it's seen, and handle items 
that way. Trouble is, I anticipate "real" SKI programs being very large 
indeed, and this method requires copying potentially huge blocks of text 
around. OK, you can copy one character at a time, but Bash isn't noted 
for being a high-performance I/O engine.

By contrast, by working with item IDs, each individual item is actually 
tiny. We also get "full sharing" (i.e., if a sub-expression gets 
duplicated [as the S command happens to do], then it still only gets 
"executed" once).

Hypothetically, it could do with some kind of "garbage collector", which 
removes item files that are no longer reachable. In practice, I doubt 
you're going to run out of disk space. And it's not like we can reuse 
item IDs (unless you start also renumbering items!)

> Assumptions / limitations:

> * There is minimal error-checking. It isn't hard to add this, but it
> makes the shell script longer than 99 lines. I wanted to show how
> "simple" the job is, without a lot of error-handling clutter.
>
> * In particular, mismatched brackets or incorrect SKI programs will
> probably provoke utterly baffling failures.

Also, illegal characters in the input (e.g., J) will get silently 
ignored, rather than reported as an error. Also note that SKI programs 
are case-sensitive.



From: Orchid Win7 v1
Subject: Re: Reach for the SKI
Date: 21 Jul 2015 14:51:20
Message: <55ae94a8$1@news.povray.org>
On 20/07/2015 10:03 PM, Orchid Win7 v1 wrote:

> The lambda calculus is a programming language which is *so simple* that
> it doesn't even have arithmetic built-in. In order to do trivial
> calculations, you first have to manually *define* what arithmetic "is".
> For every single program. Needless to say, programs in the lambda
> calculus are... quite verbose.

The lambda calculus has a single data-type: functions. Everything in the 
language is basically an infinite-argument function, because each 
function takes a function as input and returns another function as 
output. The notion of "curried functions" originates here; we can 
represent a 2-argument function as actually being a 1-argument function 
that returns a 1-argument function that returns the actual result [which 
is again a function, but hey].

It goes without saying that trying to describe "functions that take 
functions and construct functions that return functions for constructing 
functions that make functions" gets confusing rather rapidly!

So how the hell do you "do" anything with the lambda calculus? Well, 
think about it this way: To do anything with a digital computer, you 
have to take whatever it is you want processed and *encode* it into 
binary. You then run a program --- which is *also* binary --- which 
produces a result [again binary]. You then *decode* the result binary to 
get back something meaningful.

For example, to add two numbers, you encode each of them as a series of 
bits. The computer then has a hardware-accelerated operation for 
transforming the bits in such a way that when you decode the result, it 
represents the sum of the original two numbers. Phrased like that, it 
sounds really complicated. But in practice it isn't so bad.

The lambda calculus is similar. Except that there's no 
hardware-accelerated "add two numbers" operation built-in. You have to 
define one yourself. But I'm getting ahead of myself.

The syntax of the lambda calculus goes like this. A lambda expression is 
always one of three things:

* A variable reference.

* An anonymous function.

* A function call.

You reference a variable by simply writing its name. Likewise, you call 
a function by writing the function to call followed by the argument to 
call it with. An anonymous function consists of the Greek letter lambda 
("λ"), followed by a variable name, followed by an arrow, followed by 
the function body.

In symbols:

* <var name>

* λ <var name> → <expression>

* <expression> <expression>

You "execute" lambda expressions via a process of "beta reductions". For 
example:

   (λ x → x x) foo

To "execute" this, take the body of the lambda function (the bit after 
the arrow) and replace every "x" with "foo". The result is obviously

   foo foo

As another example,

   (λ x → x y) foo

This becomes

   foo y

The thing on the right doesn't have to be a variable, of course. It can 
be another function. For example,

   (λ x → x x) (λ x → x x)

Now we have a source of confusion; the "x" on the left is a *different* 
from the "x" on the right. Each is only in scope between the lambda and 
the close-bracket. This kind of thing gets confusing quickly; this is 
one of the reasons I went with the SKI calculus for my cryptic interpreter.



So let's try to implement some stuff in the lambda calculus. We'll start 
with something really trivial: Bool functions. Now before we can do 
that, we need to figure out a way to turn Bools into functions, and 
functions back into Bools. In other words, we need an *encoding*.

According to the Church encoding [named after Alonzo Church], we have

   True  =   (λ t → (λ f → t))
   False =   (λ t → (λ f → f))

In other words, "True" is represented by any function that accepts 2 
arguments and returns the 1st argument, unmodified. Similarly, "False" 
is represented by any function that accepts 2 arguments and returns the 
2nd argument unmodified.

With this way of thinking, then if "b" is a variable which contains [a 
function that's meant to represent] a Bool, then we can do

   b abc xyz

If b = True, then the result of this expression is abc, otherwise it is 
xyz. So it's a little like writing

   if (b == true) then return abc; else return xyz;

Suppose we have a "b", and we want the logical-NOT of that. How can we 
compute it? Well, actually that's easy:

   b False True

Or rather, in proper lambda syntax,

   b (λ t → (λ f → f)) (λ t → (λ f → t))

If we want a NOT *function*, we can write

   NOT = λ b → b (λ t → (λ f → f)) (λ t → (λ f → t))

Now we have a 1-argument function that accepts a Bool and returns the 
logical-NOT of it.

First: that was easy! Second: man, that's really verbose! This makes 
*Java* look succinct. (!)
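Bash can't express any of this, but any language with first-class 
functions can. A purely illustrative sketch in Python (the names are 
mine):

```python
TRUE  = lambda t: lambda f: t          # λ t → (λ f → t)
FALSE = lambda t: lambda f: f          # λ t → (λ f → f)

NOT = lambda b: b(FALSE)(TRUE)         # b False True

# decode a Church Boolean back to a Python bool, for inspection only
decode = lambda b: b(True)(False)
```

`decode(NOT(TRUE))` comes back False, and `decode(NOT(FALSE))` True, 
exactly as the lambda expression above says it should.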

How about a logical-AND?

   AND = λ a → (λ b → a b (λ t → (λ f → f)))

If you're not expert in Boolean algebra, this might not be immediately 
clear, but take a look at the truth table for the AND operation:

   a | b | a & b
  ---+---+-------
   F | F |   F
   F | T |   F
   T | F |   F
   T | T |   T

If you study this table, you will notice that when a=T then a & b = b. 
So in our lambda expression, if a is true, we return b. You will also 
notice that if a=F then a & b = F, utterly regardless of b. So if a is 
false, we return false [which is the (λ t → (λ f → f)) part at the end].

The OR function is fairly similar:

   OR = λ a → (λ b → a (λ t → (λ f → t)) b)

This time, if a=T we return T unconditionally, otherwise we return b.

Given the above, we can implement XOR as

   a XOR b = (a OR b) AND (NOT (a AND b))

which is wordy in words, but as a lambda expression it's

   XOR =
     (λ A → (λ B →
       (λ a → (λ b → a b (λ t → (λ f → f))))
         (A (λ t → (λ f → t)) B)
         (A B (λ t → (λ f → f)))
       )
     )

That is a LOOONG way to write A XOR B! Perhaps a better way is to attack 
the truth table directly. If you study it, you'll notice that when A=F 
then A XOR B = B, otherwise A XOR B = NOT B.

   XOR = (λ A → (λ B → A (B (λ t → (λ f → f)) (λ t → (λ f → t))) B))

That's a bit shorter.
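All three operators can be checked mechanically with the same 
illustrative Python sketch as before (names mine; XOR uses the shorter 
truth-table formulation):

```python
TRUE   = lambda t: lambda f: t
FALSE  = lambda t: lambda f: f
decode = lambda b: b(True)(False)                # for inspection only

AND = lambda a: lambda b: a(b)(FALSE)            # if a then b else False
OR  = lambda a: lambda b: a(TRUE)(b)             # if a then True else b
XOR = lambda a: lambda b: a(b(FALSE)(TRUE))(b)   # if a then NOT b else b

# exhaustive truth-table check
for a in (TRUE, FALSE):
    for b in (TRUE, FALSE):
        assert decode(AND(a)(b)) == (decode(a) and decode(b))
        assert decode(OR(a)(b))  == (decode(a) or  decode(b))
        assert decode(XOR(a)(b)) == (decode(a) !=  decode(b))
```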

It should be clear by this point that figuring out what some random 
chunk of lambda expression does is quite fiddly. You can also see why 
I'm looking into building automated tools for constructing this stuff. 
If it takes multiple lines of code just to XOR two variables, imagine 
trying to implement something actually complex!



In fact, the Church encoding offers a general way to encode any 
algebraic data type. Suppose, for example, that an IO Mode has three 
possible values: ReadOnly, ReadWrite, and Append. You can represent an 
IO Mode as a 3-argument function that returns one of its arguments 
unchanged. Which one tells you what of the three possible values it 
represents.

   ReadOnly  = λ r → (λ w → (λ a → r))
   ReadWrite = λ r → (λ w → (λ a → w))
   Append    = λ r → (λ w → (λ a → a))
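In the same illustrative Python style (names mine), the three-way choice 
works exactly like the Booleans, just with three continuations:

```python
READ_ONLY  = lambda r: lambda w: lambda a: r   # λ r → (λ w → (λ a → r))
READ_WRITE = lambda r: lambda w: lambda a: w   # λ r → (λ w → (λ a → w))
APPEND     = lambda r: lambda w: lambda a: a   # λ r → (λ w → (λ a → a))

# "decoding" a mode is just applying it to three tag values
mode_name = lambda m: m('ReadOnly')('ReadWrite')('Append')
```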

It goes without saying that you can't "tell" what a particular function 
is supposed to represent just by looking at it --- much like you can't 
tell what the bit-sequence 11010011 is meant to be. (Is that unsigned 
binary? Ones complement? Twos complement? Excess-N? Sign and magnitude? 
Fixed point? Floating-point? Scaled integer? ASCII code? Some other 
random code...?)

For data records, you can use the following trick. Let's say we want to 
store a data pair. "5" is not a valid lambda expression, but for 
simplicity let's pretend it is. To represent the pair (5, 7), you could 
write a function

   (λ f → f 5 7)

This function takes a function as input, and passes the two elements of 
the pair to that function as arguments. Using this, you can easily write 
a "get X" or "get Y" function:

   GetX = λ p → p (λ x → (λ y → x))
   GetY = λ p → p (λ x → (λ y → y))

That is, we take a pair ("p") as input, and call it, passing a function 
that accepts two values ("x" and "y") as input and returns the desired 
one. We could also easily do some calculation on the two coordinates 
before returning them. It's also easy to see how we could extend this 
scheme to deal with more than just 2 items.
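The pair trick, sketched the same way (names mine; real numbers again 
standing in for their Church encodings, purely for readability):

```python
pair  = lambda x: lambda y: (lambda f: f(x)(y))   # the (λ f → f 5 7) idea
get_x = lambda p: p(lambda x: lambda y: x)
get_y = lambda p: p(lambda x: lambda y: y)

# ...and a calculation on the two coordinates, for good measure
add_coords = lambda p: p(lambda x: lambda y: x + y)

p = pair(5)(7)
```

Here `get_x(p)` is 5, `get_y(p)` is 7, and `add_coords(p)` is 12.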

Let us consider the Haskell programmer's favourite ADT:

   data List x = Null | Cons x (List x)

We can turn this into lambda expressions:

   null     = λ n → (λ c → n)
   cons h t = λ n → (λ c → c h t)

So a "list" is a 2-argument function. If it's an end-of-list marker, it 
returns the first argument. Otherwise it takes the second argument and 
calls it with the head and tail of the list as arguments. Now we can 
easily write a function to (say) get the first element out of a list:

   head = λ l → l ??? (λ h → (λ t → h))

The "???" stands for the fact that if the list is empty, then there *is* 
no "first element" to return, so we'd need to decide what to do here. 
(Perhaps return some sort of default value instead, depending on what 
this is a list *of*.)
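The list encoding, sketched in the same illustrative Python (names mine; 
the "???" is made concrete by passing an explicit default):

```python
null = lambda n: lambda c: n                              # λ n → (λ c → n)
cons = lambda h: lambda t: lambda n: lambda c: c(h)(t)    # λ n → (λ c → c h t)

# head, with an explicit default value standing in for the "???" case
head = lambda default: lambda l: l(default)(lambda h: lambda t: h)

xs = cons(1)(cons(2)(null))
```

`head(0)(xs)` gives 1, while `head(0)(null)` falls back to the default 0.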



One tricky issue is recursion. Since all functions are anonymous, it 
*looks like* it's impossible for a function to call itself recursively. 
However, it *is* in fact possible.

The basic idea is to write a non-recursive function, and pass a copy of 
the function in as an argument. For example:

   (λ x → x x) (λ f → ...my function ...)

This does *almost* what we want. The function on the left makes two 
copies of the function on the right, and passes one copy as the argument 
to the other. The function on the right accepts a copy of itself as 
input, and is therefore able to call itself.

But this isn't quite right. This only lets us do *one* recursive call. 
We only made *two* copies of the function. To allow unlimited recursion, 
we really need an unlimited number of copies somehow.

The function we search for is the so-called "Y combinator". It has the 
property that

   Y f = f (Y f) = f (f (Y f)) = f (f (f (Y f))) = ...

In short, if f expects a copy of itself as input, then Y f generates 
infinitely many copies of f, and we can have unlimited recursion.

Now, how to implement Y?

Wikipedia gives one implementation:

   Y = λ g → (λ x → g (x x)) (λ x → g (x x))

If we give it an argument:

   Y f = (λ g → (λ x → g (x x)) (λ x → g (x x))) f
       =       ((λ x → f (x x)) (λ x → f (x x)))
       =     f ((λ x → f (x x)) (λ x → f (x x)))
       =     f (Y f)

The first line is the definition of Y. On the second line, every g has 
become f. Taking the lambda on the left and replacing each x with the 
lambda on the right yields the third line. But if you look, the part in 
brackets is identical to the line above, which represents Y f. Which is 
what the final line says. So here we have unlimited recursion.
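A caveat if you try this in a strict language: the textbook Y above 
loops forever, because (x x) gets evaluated eagerly. The usual fix is 
the eta-expanded variant, known as the Z combinator, which hides the 
self-application behind a lambda. An illustrative Python sketch (names 
mine; plain integers purely for readability):

```python
# Z = λ g → (λ x → g (λ v → x x v)) (λ x → g (λ v → x x v))
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

# a non-recursive "factorial step" that expects a copy of itself as input
fact_step = lambda self: lambda n: 1 if n < 1 else n * self(n - 1)

fact = Z(fact_step)
```

`fact(5)` really does come out as 120, with no named recursion anywhere.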

Now we can do something like

   MAP =
     Y
     (λ map →
       (λ f → (λ list →
         (λ null → (λ cons →
           list null (λ head → (λ tail → cons (f head) (map f tail)))
         ))
       ))
     )

The Y combinator applies the inner function to itself, allowing it to 
recurse over a list. If the list is null, return null. Otherwise, apply 
f to the head of the list, and recursively apply map f to the tail.

Yeah, clear as mud, right? Trust me, it does work...
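Mud indeed, but it really does work. Here is the same MAP in the 
illustrative Python sketch (names mine; the strictness-safe Z variant of 
Y, since Python evaluates arguments eagerly):

```python
Z    = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))
null = lambda n: lambda c: n
cons = lambda h: lambda t: lambda n: lambda c: c(h)(t)

MAP = Z(lambda map_: lambda f: lambda lst:
            lst(null)(lambda head: lambda tail: cons(f(head))(map_(f)(tail))))

# decode a Church list back into a Python list, so we can look at it
to_python = lambda l: l([])(lambda h: lambda t: [h] + to_python(t))

doubled = MAP(lambda x: x * 2)(cons(1)(cons(2)(cons(3)(null))))
```

Decoding `doubled` gives [2, 4, 6], i.e. each element of the original 
list with the function applied to it.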

> Let me say that again: The SKI combinator calculus is a programming
> language consisting only of three letters (and millions of brackets),
> and it's somehow Turing-complete.

First, let me define what these letters actually are:

   I = λ x → x
   K = λ x → (λ y → x)
   S = λ f → (λ g → (λ x → (f x) (g x)))

The I function ("identity") takes an argument and just returns it 
unchanged. The K function ("konstant") takes two arguments, and returns 
the first one. In other words, Kx is a function that always returns x. 
Finally, the S function... is the complicated one. But basically it 
accepts two functions (f and g) and an input (x), and passes that input 
to both functions, before calling the result of one with the result of 
the other.

It should now be fairly obvious how the SKI interpreter rules come 
about, given the lambda definitions for what these functions actually do.
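The three definitions transcribe directly into the illustrative Python 
sketch, which makes it easy to check little identities like SKK 
behaving as I:

```python
I = lambda x: x                                   # λ x → x
K = lambda x: lambda y: x                         # λ x → (λ y → x)
S = lambda f: lambda g: lambda x: f(x)(g(x))      # λ f → (λ g → (λ x → (f x) (g x)))
```

For any x, `S(K)(K)(x)` works out to `K(x)(K(x))`, which is just x; 
so SKK really is another way of writing I.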

Amazingly, every possible lambda expression can be transformed into a 
lambda expression containing only S, K and I. [Provided there are no 
"free variables" to start with.] Not only that, but there is a very 
simple algorithm for doing precisely this. Let's write an expression in 
square brackets to denote converting it to SKI. The rules are then:

What [λ x → e] becomes depends on e:

* If e is just x, then we have [λ x → x] = I.

* If e doesn't contain x at all, then e never changes no matter what x 
is. So we can do [λ x → e] = K[e].

* If e is a function call, then e = l r

   * If r is just x (and x doesn't appear anywhere in l), then we have 
[λ x → l x], which is equivalent to [l]. (This is an "eta reduction".)

   * Otherwise, we can use S: [λ x → l r] = S[λ x → l][λ x → r]

* Finally, [λ x → (λ y → e)] = [λ x → [λ y → e]]. (That is, encode the 
inner lambda first, and then encode the other lambda.)
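The rules translate almost line-for-line into code. Here is an 
illustrative Python sketch (representation and names mine: terms are 
tuples tagged 'var', 'lam' or 'app', with plain strings for S, K and I):

```python
def occurs(x, e):
    """Is variable x free in term e?"""
    if isinstance(e, str):                       # S, K or I: no variables
        return False
    if e[0] == 'var':
        return e[1] == x
    if e[0] == 'app':
        return occurs(x, e[1]) or occurs(x, e[2])
    return e[1] != x and occurs(x, e[2])         # lam: shadowing blocks x

def compile_ski(e):
    """Translate a whole lambda term into S/K/I applications."""
    if isinstance(e, str) or e[0] == 'var':
        return e
    if e[0] == 'app':
        return ('app', compile_ski(e[1]), compile_ski(e[2]))
    return bracket(e[1], e[2])                   # a lambda: [λ x → body]

def bracket(x, e):                               # the [λ x → e] rules
    if e == ('var', x):
        return 'I'                               # [λ x → x] = I
    if not occurs(x, e):
        return ('app', 'K', compile_ski(e))      # x unused: K[e]
    if e[0] == 'lam':
        return bracket(x, bracket(e[1], e[2]))   # encode the inner lambda first
    l, r = e[1], e[2]                            # e must be an application l r
    if r == ('var', x) and not occurs(x, l):
        return compile_ski(l)                    # eta reduction
    return ('app', ('app', 'S', bracket(x, l)), bracket(x, r))

def show(e):
    """Print with minimal brackets (application associates to the left)."""
    if isinstance(e, str):
        return e
    if e[0] == 'var':
        return e[1]
    rhs = show(e[2])
    if isinstance(e[2], tuple) and e[2][0] == 'app':
        rhs = '(' + rhs + ')'
    return show(e[1]) + rhs

# True = λ t → (λ f → t), False = λ t → (λ f → f), NOT = λ b → b False True
TRUE_T  = ('lam', 't', ('lam', 'f', ('var', 't')))
FALSE_T = ('lam', 't', ('lam', 'f', ('var', 'f')))
NOT_T   = ('lam', 'b', ('app', ('app', ('var', 'b'), FALSE_T), TRUE_T))
```

Feeding these terms through `compile_ski` reproduces the hand encodings: 
True comes out as K, False as KI, and NOT as S(SI(K(KI)))(KK).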

Let's try our luck with some simple expressions. Perhaps the True 
expression:

   λ t → (λ f → t)

OK, so how does this go down?

   [λ t → (λ f → t)]

Well, t definitely appears in the function body, so let's encode the 
inner lambda first.

   [λ t → [λ f → t]]

Well, f doesn't appear in the body at all, so the K rule applies:

   [λ t → Kt]

Now the eta reduction rule applies:

   [K]

The SKI encoding of K is just K. So

   True = K

How about False?

   [λ t → (λ f → f)]
   K[λ f → f]
   KI

That was easy!

So how about NOT?

   [λ b → b (λ t → (λ f → f)) (λ t → (λ f → t))]
   S[λ b → b (λ t → (λ f → f))][λ b → (λ t → (λ f → t))]
   S[λ b → b (λ t → (λ f → f))](K[λ t → (λ f → t)])
   S[λ b → b (λ t → (λ f → f))](K[λ t → [λ f → t]])
   S[λ b → b (λ t → (λ f → f))](K[λ t → Kt])
   S[λ b → b (λ t → (λ f → f))](KK)
   S(S[λ b → b][λ b → (λ t → (λ f → f))])(KK)
   S(SI[λ b → (λ t → (λ f → f))])(KK)
   S(SI(K[λ t → (λ f → f)]))(KK)
   S(SI(K(K[λ f → f])))(KK)
   S(SI(K(KI)))(KK)

Well, it's a lot of steps, but the end result isn't too bad. The hardest 
part is getting all the millions of brackets right! Fortunately, this 
stuff is easy for a computer to do. Indeed, I've written a small Haskell 
program where you type stuff into an HTML form, and the web server sends 
back a trace like the one above, pretty colour-coded as it does the 
encoding. (It also allows you to execute arbitrary lambda expressions, 
execute SKI expressions, and more besides.)

One of the most annoying things about carrying this process out by hand 
is when you see an expression like λ b → ...b..., where the variable 
*is* mentioned, but only in one place, and really deeply buried. So you 
end up with a trail of K values. And then, because there's another 
lambda around it, you've got to wrap all those Ks in Ss, and the 
expression ends up really big and complicated.

You can somewhat fix that by adding a few new functions:

   L = λ f → (λ g → (λ x → (f x) (g  )))
   R = λ f → (λ g → (λ x → (f  ) (g x)))

These functions are both similar to S, but without the need to K the 
left or right sub-expression. Which is a tiny saving in itself, but it 
means that as you encode further lambdas wrapping the whole thing, the K 
doesn't accumulate a stream of other combinators. The end result can be 
quite a saving in text.

   [λ b → b (λ t → (λ f → f)) (λ t → (λ f → t))]
   L[λ b → b (λ t → (λ f → f))][λ t → (λ f → t)]
   L[λ b → b (λ t → (λ f → f))][λ t → [λ f → t]]
   L[λ b → b (λ t → (λ f → f))][λ t → Kt]
   L[λ b → b (λ t → (λ f → f))]K
   L(L[λ b → b][λ t → (λ f → f)])K
   L(LI[λ t → (λ f → f)])K
   L(LI(K[λ f → f]))K
   L(LI(KI))K

As you can see, this is a bit smaller than the previous expression.



From: scott
Subject: Re: Reach for the SKI
Date: 22 Jul 2015 03:32:52
Message: <55af4724$1@news.povray.org>
An interesting read... What are the uses of SKI then, apart from just 
making simple algorithms take up a million characters to encode?

> Still, this whole idea came about as I was thinking about a Bash script
> that contains a tarball that it then unzips, which mostly contains a
> giant SKI program and the SKI interpreter. Good luck to anybody trying
> to reverse-engineer that! ;-)

Sounds like the start of an IOCCC entry to me, although it seems the 
recent entries are just ridiculous (like PC emulators, ray tracers and 
realtime CFD solvers in a few lines of gibberish C code with lots of 
commas).

I once wrote an interpreter for a one instruction set computer (that's a 
lot easier than what you did!) and even wrote the code to generate prime 
numbers using it, but it got complicated and boring very quickly to do 
anything useful. OK I found it on my old HD, see attached, it should(!) 
compile and output all the prime numbers up to 1000. I wouldn't attempt 
to try and figure out how it's doing it though :-)

I have a feeling though, writing a prime number generator in SKI might 
be a tad longer...


Post a reply to this message


Attachments:
Download 'test.c.txt' (1 KB)

From: Orchid Win7 v1
Subject: Re: Reach for the SKI
Date: 22 Jul 2015 16:21:18
Message: <55affb3e$1@news.povray.org>
On 22/07/2015 08:32 AM, scott wrote:
> An interesting read... What are the uses of SKI then, apart from just
> making simple algorithms take up a million characters to encode?

Erm... that's pretty much the only use.

Although, Wikipedia does mutter something about people making silicon 
that runs SKI commands directly. (Not exactly sure how that would work, 
but you can imagine a CPU with only 3 possible opcodes would be quite 
simple to implement... Certainly a damned sight easier than trying to 
implement the full lambda calculus!)

>> Still, this whole idea came about as I was thinking about a Bash script
>> that contains a tarball that it then unzips, which mostly contains a
>> giant SKI program and the SKI interpreter. Good luck to anybody trying
>> to reverse-engineer that! ;-)
>
> Sounds like the start of an IOCCC entry to me

That's pretty much what I was channelling, yeah.

I actually started thinking about a POV-Ray SDL program that's actually 
a self-decompressing program that then executes SKI code to build a 
scene description. But maybe that's just crazy...

Either way, I figured nobody would ever figure out WTF it actually does. 
I would think the set of people who know how to unravel obfuscated shell 
code and the set of people who know about theoretical models of 
computation are totally disjoint.

> although it seems the
> recent entries are just ridiculous (like PC emulators, ray tracers and
> realtime CFD solvers in a few lines of gibberish C code with lots of
> commas).

Ah, C. The entire language is specifically designed for crazy.

> I once wrote an interpreter for a one instruction set computer (that's a
> lot easier than what you did!) and even wrote the code to generate prime
> numbers using it, but it got complicated and boring very quickly to do
> anything useful.

Quitter. ;-)

My dream is to build non-trivial programs, encode them to SKI, and then 
marvel at how huge they are. A bit like compiling Hello World in Eiffel 
and then looking at the 25MB executable it produces.

> OK I found it on my old HD, see attached, it should(!)
> compile and output all the prime numbers up to 1000. I wouldn't attempt
> to try and figure out how it's doing it though :-)

Uh... I don't even... How can you index an integer as if it's an array? 
If my understanding is not wildly broken, that's Undefined Behaviour...

> I have a feeling though, writing a prime number generator in SKI might
> be a tad longer...

Division is fairly hard to implement in the lambda calculus. (Depending 
on which way you encode integers.) The Fibonacci numbers should be 
comparatively easy to compute, however.

[In fact, I managed to write a working implementation for that in my 
lunch break. But it uses Church numerals, whereas the SKI interpreter I 
wrote expects the output to be a linked list of Church Bools. It 
shouldn't be too hard to convert, but I haven't done it yet...]
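For the curious, the idea behind a Church-numeral Fibonacci: a Church numeral 
n *is* n-fold iteration, so Fibonacci drops out of iterating the pair step 
(a, b) → (b, a + b). A sketch of the trick in Python (only illustrative; the 
lunch-break version was actual lambda calculus, with Church pairs rather than 
native tuples):

```python
def church(n):
    """The Church numeral n, as 'apply f to x, n times'."""
    def numeral(f, x):
        for _ in range(n):
            x = f(x)
        return x
    return numeral

def fib(n):
    """Fibonacci by iterating (a, b) -> (b, a + b) from (0, 1), n times."""
    a, _ = church(n)(lambda p: (p[1], p[0] + p[1]), (0, 1))
    return a
```

For example, `fib(10)` gives 55.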


Post a reply to this message

From: Mr
Subject: Re: Reach for the SKI
Date: 23 Jul 2015 03:05:01
Message: <web.55b0919e51ecd4c216086ed00@news.povray.org>
Orchid Win7 v1 <voi### [at] devnull> wrote:
> On 22/07/2015 08:32 AM, scott wrote:
> > An interesting read... What are the uses of SKI then, apart from just
> > making simple algorithms take up a million characters to encode?
>
> Erm... that's pretty much the only use.

ROTFL !!! Isn't that being slightly perverse?! :)

I'm sure however that it could find unpredicted real applications one day!


Post a reply to this message

From: scott
Subject: Re: Reach for the SKI
Date: 23 Jul 2015 03:25:10
Message: <55b096d6$1@news.povray.org>
> Although, Wikipedia does mutter something about people making silicon
> that runs SKI commands directly. (Not exactly sure how that would work,
> but you can imagine a CPU with only 3 possible opcodes would be quite
> simple to implement... Certainly a damned sight easier than trying to
> implement the full lambda calculus!)

As I found with my OISC interpreter though, you just shift the 
complexity away from the CPU/interpreter and into the program itself. In 
a real system you'd probably end up with a "core" library to do things 
like add, subtract etc, and then because every single program uses those 
library calls, you might as well implement them in hardware...

> I actually started thinking about a POV-Ray SDL program that's actually
> a self-decompressing program that then executes SKI code to build a
> scene description. But maybe that's just crazy...

Yes that's crazy, but so long as the output was half-decent I'm sure it 
would generate a lot of interest after someone asked to see the source 
code :-)

> Either way, I figured nobody would ever figure out WTF it actually does.
> I would think the set of people who know how to unravel obfuscated shell
> code and the set of people who know about theoretical models of
> computation are totally disjoint.

I wouldn't say *totally*.

> Uh... I don't even... How can you index an integer as if it's an array?
> If my understanding is not wildly broken, that's Undefined Behaviour...

I think that was one of the first things I learned from the IOCCC! a[b] 
means read the value at memory address a+b*L, where L is the length of 
each array element in bytes. If L=1 then a[b] == b[a]. Also if you then 
use that value to index into another array: c[ a[b] ], by the same rule 
that is equivalent to a[b][c] or any other combination you can come up with.

> Division is fairly hard to implement in the lambda calculus. (Depending
> on which way you encode integers.) The Fibonacci numbers should be
> comparatively easy to compute, however.

In my OISC code I used a prime sieve to avoid having to do any 
multiplications or divides. The only tricky bit was having to use 
self-modifying code to write the results to the correct place in the 
output array.

> [In fact, I managed to write a working implementation for that in my
> lunch break. But it uses Church numerals, whereas the SKI interpreter I
> wrote expects the output to be a linked list of Church Bools. It
> shouldn't be too hard to convert, but I haven't done it yet...]

I tried figuring out Fibonacci numbers in my head the other day (it was 
a game-show question on TV) - I got lost pretty quickly, it's a lot 
harder than you think without any pen and paper.


Post a reply to this message

From: Orchid Win7 v1
Subject: Re: Reach for the SKI
Date: 23 Jul 2015 03:37:00
Message: <55b0999c@news.povray.org>
On 23/07/2015 08:02 AM, Mr wrote:
> Orchid Win7 v1<voi### [at] devnull>  wrote:
>> On 22/07/2015 08:32 AM, scott wrote:
>>> An interesting read... What are the uses of SKI then, apart from just
>>> making simple algorithms take up a million characters to encode?
>>
>> Erm... that's pretty much the only use.
>
> ROTFL !!! isn't that being slightly pervert ?! :)
>
> I'm sure however that it could find unpredicted real applications one day!

Well, no, the SKI combinators are of theoretical interest, because they 
show you something about how small a Turing-complete programming 
language can be.

It's like, Turing machines have no *practical* use (unless you enjoy 
writing excruciatingly complicated programs!), but they are 
theoretically important.

Of course, the Iota calculus is even smaller, consisting of only *one* 
predefined function. If we write it as X (rather than Iota), then its 
definition is

   X = λ a → a S K

Therefore we have

   XX = XSK = (XS)K = SSKK = SK(KK)

If we eta-expand that, we find

   λ x → SK(KK)x = λ x → Kx(KKx) = λ x → x

which is the definition of I. Therefore

   XX = I

Similarly,

   X(XX) = XI = ISK = SK
   X(X(XX)) = X(SK) = SKSK = KK(SK) = K
   X(X(X(XX))) = XK = KSK = S

So, using only X, we can recover S, K and I, which we already know to be 
Turing-complete. Therefore, any possible computable function can be 
constructed just using X and brackets.
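These identities can be machine-checked with a few lines of code. A minimal 
Python sketch (illustrative only): terms are nested pairs, with the single 
extra rule X a → a S K added to the usual S/K/I rules. Lowercase atoms serve 
as free variables, needed because e.g. XX only normalises to SK(KK), which 
equals I extensionally rather than syntactically.

```python
# Terms over the atoms 'S', 'K', 'I', 'X'; a pair (f, x) is f applied to x.
# Any other atom (e.g. 'a') is a free variable and never reduces.

def step(t):
    """Perform one leftmost-outermost reduction step; None if t is normal."""
    if not isinstance(t, tuple):
        return None
    f, x = t
    if f == 'I':
        return x                                # I x -> x
    if f == 'X':
        return ((x, 'S'), 'K')                  # X a -> a S K
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':
            return y                            # K y x -> y
        if isinstance(g, tuple) and g[0] == 'S':
            z = g[1]
            return ((z, x), (y, x))             # S z y x -> (z x) (y x)
    r = step(f)
    if r is not None:
        return (r, x)
    r = step(x)
    return (f, r) if r is not None else None

def nf(t):
    """Reduce to normal form (loops forever on non-terminating terms)."""
    while (r := step(t)) is not None:
        t = r
    return t

XX = ('X', 'X')           # behaves as I
XK = ('X', ('X', XX))     # X(X(XX)), behaves as K
XS = ('X', XK)            # X(X(X(XX))), behaves as S
```

Applying each term to free variables confirms the identities: XX a reduces 
to a, XK a b to a, and XS f g x to (f x)(g x).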

OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see 
that anything in the Iota calculus is going to be *really huge*!

This seems to be the general thing; the simpler your programming 
language, the more complicated your programs get!


Post a reply to this message

From: Stephen
Subject: Re: Reach for the SKI
Date: 23 Jul 2015 04:05:59
Message: <55b0a067$1@news.povray.org>
On 7/23/2015 8:37 AM, Orchid Win7 v1 wrote:
> On 23/07/2015 08:02 AM, Mr wrote:

>> I'm sure however that it could find unpredicted real applications one
>> day!
>
> Well, no, the SKI combinators are of theoretical interest, because they
> show you something about how small a Turing-complete programming
> language can be.
>
> It's like, Turing machines have no *practical* use (unless you enjoy
> writing excruciatingly complicated programs!), but they are
> theoretically important.
>

Wrong argument, I think. It's the outcome and the next step that count. 
Who would have thought that quantum theory in the 1920s would lead to 
transistors and your job? ;-)
Which is what Mr said.

> So, using only X, we can recover S, K and I, which we already know to be
> Turing-complete. Therefore, any possible computable function can be
> constructed just using X and brackets.
>
> OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
> that anything in the Iota calculus is going to be *really huge*!
>
> This seems to be the general thing; the simpler your programming
> language, the more complicated your programs get!

Isn't that the point of high level languages? To get rid of all those 
brackets.
Have you just proved: 1 = 1?


-- 

Regards
     Stephen


Post a reply to this message

From: Orchid Win7 v1
Subject: Re: Reach for the SKI
Date: 25 Jul 2015 04:09:05
Message: <55b34421$1@news.povray.org>
On 23/07/2015 08:25 AM, scott wrote:
>> Although, Wikipedia does mutter something about people making silicon
>> that runs SKI commands directly. (Not exactly sure how that would work,
>> but you can imagine a CPU with only 3 possible opcodes would be quite
>> simple to implement... Certainly a damned sight easier than trying to
>> implement the full lambda calculus!)
>
> As I found with my OISC interpreter though, you just shift the
> complexity away from the CPU/interpreter and into the program itself.

Indeed. The simpler the language, the more complex the programs.

>> I actually started thinking about a POV-Ray SDL program that's actually
>> a self-decompressing program that then executes SKI code to build a
>> scene description. But maybe that's just crazy...
>
> Yes that's crazy, but so long as the output was half-decent I'm sure it
> would generate a lot of interest after someone asked to see the source
> code :-)

Hehe, we'll see. ;-)

You may remember many years ago I built a self-decompressing SDL program 
based on Huffman codes. Oh, the compressor was written in Smalltalk, but 
the decompressor was SDL. It decompresses itself and then runs the 
result. Curiously, when I posted it somebody instantly recognised the 
decompression algorithm, despite the set of people who know 3D and the 
set of people who know data compression being completely disjoint...

>> Either way, I figured nobody would ever figure out WTF it actually does.
>> I would think the set of people who know how to unravel obfuscated shell
>> code and the set of people who know about theoretical models of
>> computation are totally disjoint.
>
> I wouldn't say *totally*.

In other news: I often wonder just how hard it "really" is to 
reverse-engineer compiled Haskell code. Supposedly it's intractably 
hard, since GHC aggressively optimises your Haskell code, and the result 
is statically linked against a non-trivial run-time system, and it does 
things like store the call stack on the heap and other strangeness... 
OTOH, game studios deliberately obfuscate stuff all the time, and this 
stops approximately 0% of all cracking attempts.

>> Uh... I don't even... How can you index an integer as if it's an array?
>> If my understanding is not wildly broken, that's Undefined Behaviour...
>
> I think that was one of the first things I learned from the IOCCC! a[b]
> means read the value at memory address a+b*L, where L is the length of
> each array element in bytes.

Which would mean, for an int variable, unless b=0 you are now accessing 
undefined memory locations. That's UB. At this point you have to *guess* 
what appears next in memory (which is compiler-dependent).

> If L=1 then a[b] == b[a].

Uh, how do you figure? The memory locations of the two variables might 
be in no way related.

> Also if you then
> use that value to index into another array: c[ a[b] ], by the same rule
> that is equivalent to a[b][c] or any other combination you can come up
> with.

That seems clear enough.

>> Division is fairly hard to implement in the lambda calculus. (Depending
>> on which way you encode integers.) The Fibonacci numbers should be
>> comparatively easy to compute, however.
>
> In my OISC code I used a prime sieve to avoid having to do any
> multiplications or divides. The only tricky bit was having to use
> self-modifying code to write the results to the correct place in the
> output array.

I don't have it with me, but I did manage to come up with a recursive 
algorithm for generating prime numbers. It's based on the fact that the 
prime numbers repeat, until multiples of the next prime come into play. 
The algorithm is tricky as hell to get right, but it's quite nice when 
it works. Does require O(N!) RAM though...

> I tried figuring out Fibonacci numbers in my head the other day (it was
> a game-show question on TV) - I got lost pretty quickly, it's a lot
> harder than you think without any pen and paper.

What, you can't compute the square root of 5 in your head? ;-)

No, seriously - humans have shallow stacks.


Post a reply to this message


Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.