> Although, Wikipedia does mutter something about people making silicon
> that runs SKI commands directly. (Not exactly sure how that would work,
> but you can imagine a CPU with only 3 possible opcodes would be quite
> simple to implement... Certainly a damned sight easier than trying to
> implement the full lambda calculus!)
As I found with my OISC interpreter though, you just shift the
complexity away from the CPU/interpreter and into the program itself. In
a real system you'd probably end up with a "core" library to do things
like add, subtract etc, and then because every single program uses those
library calls, you might as well implement them in hardware...
> I actually started thinking about a POV-Ray SDL program that's actually
> a self-decompressing program that then executes SKI code to build a
> scene description. But maybe that's just crazy...
Yes that's crazy, but so long as the output was half-decent I'm sure it
would generate a lot of interest after someone asked to see the source
code :-)
> Either way, I figured nobody would ever figure out WTF it actually does.
> I would think the set of people who know how to unravel obfuscated shell
> code and the set of people who know about theoretical models of
> computation are totally disjoint.
I wouldn't say *totally*.
> Uh... I don't even... How can you index an integer as if it's an array?
> If my understanding is not wildly broken, that's Undefined Behaviour...
I think that was one of the first things I learned from the IOCCC! a[b]
means read the value at memory address a+b*L, where L is the length of
each array element in bytes. If L=1 then a[b] == b[a]. Also if you then
use that value to index into another array: c[ a[b] ], by the same rule
that is equivalent to a[b][c] or any other combination you can come up with.
> Division is fairly hard to implement in the lambda calculus. (Depending
> on which way you encode integers.) The Fibonacci numbers should be
> comparatively easy to compute, however.
In my OISC code I used a prime sieve to avoid having to do any
multiplications or divides. The only tricky bit was having to use
self-modifying code to write the results to the correct place in the
output array.
> [In fact, I managed to write a working implementation for that in my
> lunch break. But it uses Church numerals, whereas the SKI interpreter I
> wrote expects the output to be a linked list of Church Bools. It
> shouldn't be too hard to convert, but I haven't done it yet...]
I tried figuring out Fibonacci numbers in my head the other day (it was
a game-show question on TV) - I got lost pretty quickly, it's a lot
harder than you think without any pen and paper.
On 23/07/2015 08:02 AM, Mr wrote:
> Orchid Win7 v1<voi### [at] devnull> wrote:
>> On 22/07/2015 08:32 AM, scott wrote:
>>> An interesting read... What are the uses of SKI then, apart from just
>>> making simple algorithms take up a million characters to encode?
>>
>> Erm... that's pretty much the only use.
>
> ROTFL !!! Isn't that slightly perverse?! :)
>
> I'm sure however that it could find unpredicted real applications one day!
Well, no, the SKI combinators are of theoretical interest, because they
show you something about how small a Turing-complete programming
language can be.
It's like, Turing machines have no *practical* use (unless you enjoy
writing excruciatingly complicated programs!), but they are
theoretically important.
Of course, the Iota calculus is even smaller, consisting of only *one*
predefined function. If we write it as X (rather than Iota), then its
definition is
X = λ a → a S K
Therefore we have
XX = XSK = (XS)K = SSKK = SK(KK)
If we eta-expand that, we find
λ x → SK(KK)x = λ x → Kx(KKx) = λ x → x
which is the definition of I. Therefore
XX = I
Similarly,
X(XX) = XI = ISK = SK
X(X(XX)) = X(SK) = SKSK = KK(SK) = K
X(X(X(XX))) = XK = KSK = S
So, using only X, we can recover S, K and I, which we already know to be
Turing-complete. Therefore, any possible computable function can be
constructed just using X and brackets.
OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
that anything in the Iota calculus is going to be *really huge*!
This seems to be the general thing; the simpler your programming
language, the more complicated your programs get!
On 7/23/2015 8:37 AM, Orchid Win7 v1 wrote:
> On 23/07/2015 08:02 AM, Mr wrote:
>> I'm sure however that it could find unpredicted real applications one
>> day!
>
> Well, no, the SKI combinators are of theoretical interest, because they
> show you something about how small a Turing-complete programming
> language can be.
>
> It's like, Turing machines have no *practical* use (unless you enjoy
> writing excruciatingly complicated programs!), but they are
> theoretically important.
>
Wrong argument, I think. It's the outcome and the next step that count.
Who would have thought that quantum mechanics in the 1920s would lead to
transistors and your job? ;-)
Which is what Mr said.
> So, using only X, we can recover S, K and I, which we already know to be
> Turing-complete. Therefore, any possible computable function can be
> constructed just using X and brackets.
>
> OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
> that anything in the Iota calculus is going to be *really huge*!
>
> This seems to be the general thing; the simpler your programming
> language, the more complicated your programs get!
Isn't that the point of high level languages? To get rid of all those
brackets.
Have you just proved: 1 = 1?
--
Regards
Stephen
On 23/07/2015 08:25 AM, scott wrote:
>> Although, Wikipedia does mutter something about people making silicon
>> that runs SKI commands directly. (Not exactly sure how that would work,
>> but you can imagine a CPU with only 3 possible opcodes would be quite
>> simple to implement... Certainly a damned sight easier than trying to
>> implement the full lambda calculus!)
>
> As I found with my OISC interpreter though, you just shift the
> complexity away from the CPU/interpreter and into the program itself.
Indeed. The simpler the language, the more complex the programs.
>> I actually started thinking about a POV-Ray SDL program that's actually
>> a self-decompressing program that then executes SKI code to build a
>> scene description. But maybe that's just crazy...
>
> Yes that's crazy, but so long as the output was half-decent I'm sure it
> would generate a lot of interest after someone asked to see the source
> code :-)
Hehe, we'll see. ;-)
You may remember many years ago I built a self-decompressing SDL program
based on Huffman codes. Oh, the compressor was written in Smalltalk, but
the decompressor was SDL. It decompresses itself and then runs the
result. Curiously, when I posted it somebody instantly recognised the
decompression algorithm, despite the set of people who know 3D and the
set of people who know data compression being completely disjoint...
>> Either way, I figured nobody would ever figure out WTF it actually does.
>> I would think the set of people who know how to unravel obfuscated shell
>> code and the set of people who know about theoretical models of
>> computation are totally disjoint.
>
> I wouldn't say *totally*.
In other news: I often wonder just how hard it "really" is to
reverse-engineer compiled Haskell code. Supposedly it's intractably
hard, since GHC aggressively optimises your Haskell code, and the result
is statically linked against a non-trivial run-time system, and it does
things like store the call stack on the heap and other strangeness...
OTOH, game studios deliberately obfuscate stuff all the time, and this
stops approximately 0% of all cracking attempts.
>> Uh... I don't even... How can you index an integer as if it's an array?
>> If my understanding is not wildly broken, that's Undefined Behaviour...
>
> I think that was one of the first things I learned from the IOCCC! a[b]
> means read the value at memory address a+b*L, where L is the length of
> each array element in bytes.
Which would mean, for an int variable, unless b=0 you are now accessing
undefined memory locations. That's UB. At this point you have to *guess*
what appears next in memory (which is compiler-dependent).
> If L=1 then a[b] == b[a].
Uh, how do you figure? The memory locations of the two variables might
be in no way related.
> Also if you then
> use that value to index into another array: c[ a[b] ], by the same rule
> that is equivalent to a[b][c] or any other combination you can come up
> with.
That seems clear enough.
>> Division is fairly hard to implement in the lambda calculus. (Depending
>> on which way you encode integers.) The Fibonacci numbers should be
>> comparatively easy to compute, however.
>
> In my OISC code I used a prime sieve to avoid having to do any
> multiplications or divides. The only tricky bit was having to use
> self-modifying code to write the results to the correct place in the
> output array.
I don't have it with me, but I did manage to come up with a recursive
algorithm for generating prime numbers. It's based on the fact that the
pattern of prime candidates repeats, until multiples of the next prime
come into play.
The algorithm is tricky as hell to get right, but it's quite nice when
it works. Does require O(N!) RAM though...
> I tried figuring out Fibonacci numbers in my head the other day (it was
> a game-show question on TV) - I got lost pretty quickly, it's a lot
> harder than you think without any pen and paper.
What, you can't compute the square root of 5 in your head? ;-)
No, seriously - humans have shallow stacks.
On 20/07/2015 10:03 PM, Orchid Win7 v1 wrote:
> It turns out you can write a parser that parses items and an execution
> engine that implements the 9 rules in less than 99 lines of shell
> script, as I shall shortly demonstrate.
Of course, you can also delete the Decode() function and all references
to it, and the script will still work (and run considerably faster too).
That reduces the line count quite a bit.
So, does anybody feel bored enough to have a go at a SKI interpreter in
other random programming languages?
On 21/07/2015 05:58 PM, Orchid Win7 v1 wrote:
>> Rule #7: "P" - Delete the P, keep the next item, insert O, Z after it.
>
> I haven't looked into it properly yet, but it ought to be possible to
> dispense with this rule.
It appears instead of appending "GEP" to the initial program, you can
append "GE(S(SI(KO))(KZ))", and dispense with the P-rule.
(I haven't actually tested this yet.)
>> If L=1 then a[b] == b[a].
>
> Uh, how do you figure? The memory locations of the two variables might
> be in no way related.
"a" is a pointer-to type, "b" is an integer type.
a[b] just means read the memory at location a+b. The "+" operator
doesn't care which way round the variables are. Also thinking about it,
this probably works for any array type, as the "+" operator knows how to
add an integer to a pointer-to type in general (it multiplies the
integer by the size of each array element first).
On 23-07-15 09:37, Orchid Win7 v1 wrote:
> OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
> that anything in the Iota calculus is going to be *really huge*!
depends on your encoding. if you use all brackets, the Xs are redundant.
if you then encode ( by 0 and ) by 1, S is still only 8 bits and K only 6
On 28/07/2015 03:55 PM, Andrel Linnenbank wrote:
> On 23-07-15 09:37, Orchid Win7 v1 wrote:
>
> > OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
> > that anything in the Iota calculus is going to be *really huge*!
>
> depends on your encoding. if you use all brackets, the Xs are redundant.
> if you then encode ( by 0 and ) by 1, S is still only 8 bits and K only 6
I don't know, man. Consider:
X(X(X(XX))) => ((()))
X(XX(X(XX))) => ((()))
The former encodes S, the latter encodes K.
Another way of doing it is to use 0 to represent pushing X onto the
stack, and 1 to represent pulling the top two stack items and making
them into a tree, pushing the result back to the stack.
X(X(X(XX))) = XXXXX^^^^ = 000001111
In this way, you can do something like
KI = ( X(X(XX)) )(XX) = XXXX^^^XX^^ = 00001110011
I did also sit down and think about making a prefix code for SKI.
Something where the code encodes how many arguments follow or something...
On 28-07-15 19:13, Orchid Win7 v1 wrote:
> On 28/07/2015 03:55 PM, Andrel Linnenbank wrote:
>> On 23-07-15 09:37, Orchid Win7 v1 wrote:
>>
>> > OTOH, if you look at the size of "S" versus "X(X(X(XX)))", you can see
>> > that anything in the Iota calculus is going to be *really huge*!
>>
>> depends on your encoding. if you use all brackets, the Xs are redundant.
>> if you then encode ( by 0 and ) by 1, S is still only 8 bits and K only 6
>
> I don't know, man. Consider:
>
> X(X(X(XX))) => ((()))
> X(XX(X(XX))) => ((()))
I think you are missing a couple of brackets
X(X(X(X(X)))) => (((())))
X(X(X)(X(X(X)))) => (()((())))
> The former encodes S, the latter encodes K.
>
> Another way of doing it is to use 0 to represent pushing X onto the
> stack, and 1 to represent pulling the top two stack items and making
> them into a tree, pushing the result back to the stack.
>
> X(X(X(XX))) = XXXXX^^^^ = 000001111
>
> In this way, you can do something like
>
> KI = ( X(X(XX)) )(XX) = XXXX^^^XX^^ = 00001110011
>
> I did also sit down and think about making a prefix code for SKI.
> Something where the code encodes how many arguments follow or something...