POV-Ray : Newsgroups : povray.off-topic : Logic programming
Logic programming (Message 19 to 28 of 28)
From: Invisible
Subject: Turing-complete
Date: 1 Jun 2012 06:48:16
Message: <4fc89df0$1@news.povray.org>
Theorem: The language implemented by Logic Box is Turing-complete.

Proof: I have written a program in Logic Box which executes any program 
written in the SKI calculus. The SKI calculus is known to be 
Turing-complete.



Actually, things are not quite so simple. But here goes...

First, the SKI calculus consists of abstract variables, and the three 
combinators "S", "K" and "I", which have the following rules:

   Sfgx => (fx)(gx) => fx(gx)
   Kxy  => x
   Ix   => x

It may seem absurd that this trivial system is Turing-complete. But 
there are actually several even /more/ trivial systems which are somehow 
Turing-complete.
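As an aside, the three rules are easy to sketch in an ordinary language. Here is a rough Python illustration (not Logic Box), under an assumed representation: a term is either a string ("S", "K", "I", or a variable name) or a 2-tuple (f, x) standing for the application "fx". step() contracts a redex at the root of the term, returning None if no rule applies.

```python
def step(t):
    # Ix => x
    if isinstance(t, tuple) and t[0] == "I":
        return t[1]
    # Kxy => x   (term shape: ((K, x), y))
    if isinstance(t, tuple) and isinstance(t[0], tuple) \
            and t[0][0] == "K":
        return t[0][1]
    # Sfgx => fx(gx)   (term shape: (((S, f), g), x))
    if (isinstance(t, tuple) and isinstance(t[0], tuple)
            and isinstance(t[0][0], tuple) and t[0][0][0] == "S"):
        f, g, x = t[0][0][1], t[0][1], t[1]
        return ((f, x), (g, x))
    return None
```

For example, step(((("S", "f"), "g"), "x")) returns (("f", "x"), ("g", "x")), i.e. fx(gx).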

So, how do we implement these transformation rules in Logic Box? Well, 
first we need to decide how to store this stuff. I have chosen to use 
S[], K[] and I[] to stand for the three combinators, and used the 
structure at[x, y] to represent a function application. (In other words, 
what is normally written as "xy" becomes "at[x, y]".) This makes things 
very verbose, but it does work.

The actual transformation rules are now simple enough to write:

   Reduce1{in, out}:
     (~x in = at[I[], x] & out = x) |
     (~x y in = at[at[K[], x], y] & out = x) |
     (~f ~g ~x in = at[at[at[S[], f], g], x] & out = at[at[f, x], at[g, x]])

This simple predicate takes an SKI expression as its first argument, and 
arranges for the second argument to be made equal to the result of a 
single reduction.

This predicate performs only a single reduction step, and only at the 
top level. To get a "complete" reduction, you need a few extra rules 
that recursively reduce sub-expressions, or the thing never makes any 
progress. To make it reduce /all/ the steps, you can do

   Reduce{in, out}:
     out = in | ~new Reduce1{in, new} & Reduce{new, out}

Unfortunately, because the OR operator tries /all/ possibilities rather 
than just keeping the first successful one, what this does is to produce 
a result set containing all possible reductions, using all possible 
reduction orders.

So in summary, we can say that running the program produces a result set 
which contains the correct answer /somewhere/, plus a whole heap of 
various intermediate results. As best I can tell, there's no way to 
determine /where/ in the solutions it is. So I'm not completely sure 
this counts as a valid proof.

The complete test code is here:

Reduce1{in, out}:
   (~arg1 at[I[], arg1] = in & out = arg1) |
   (~arg1 ~arg2 at[at[K[], arg1], arg2] = in & out = arg1) |
   (~fn1 ~fn2 ~arg3 at[at[at[S[], fn1], fn2], arg3] = in & out = at[at[fn1, arg3], at[fn2, arg3]]) |
   (~left ~right
     at[left, right] = in &
     (
       ( ~left2 Reduce1{ left,  left2} & out = at[left2, right ]) |
       (~right2 Reduce1{right, right2} & out = at[left , right2])
     )
   )
;

Reduce{in, out}:
   in = out | ~new Reduce1{in, new} & Reduce{new, out}
;

Test1{out}:
   ~s s = S[] &
   ~k k = K[] &
   ~i i = I[] &
   out = at[at[at[at[s, at[at[s, at[k, s]], k]], i], f[]], x[]]
;

The final bit generates the expression S(S(KS)K)Ifx - the Church numeral 
2 - as a simple example which you can run.

   Test1{test} & Reduce{test, out}

It takes many, many steps, but it does eventually produce f(fx) as 
result #7. (And several more times after that, as it explores every 
possible reduction sequence. The number of these is presumably 
exponential in the size of the input...)
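For comparison, here is a rough Python sketch (again, not Logic Box) of the deterministic variant: always contract the leftmost-outermost redex, which is exactly the strategy that reaches the normal form when one exists. The tuple representation is the same assumption as before: a term is a string or a 2-tuple (f, x) meaning "fx".

```python
def step(t):
    """Contract a redex at the root of t, or return None."""
    if isinstance(t, tuple) and t[0] == "I":              # Ix => x
        return t[1]
    if isinstance(t, tuple) and isinstance(t[0], tuple) \
            and t[0][0] == "K":                           # Kxy => x
        return t[0][1]
    if (isinstance(t, tuple) and isinstance(t[0], tuple)
            and isinstance(t[0][0], tuple) and t[0][0][0] == "S"):
        f, g, x = t[0][0][1], t[0][1], t[1]               # Sfgx => fx(gx)
        return ((f, x), (g, x))
    return None

def step_anywhere(t):
    """Contract the leftmost-outermost redex anywhere in t, or return None."""
    r = step(t)
    if r is not None:
        return r
    if isinstance(t, tuple):
        left = step_anywhere(t[0])
        if left is not None:
            return (left, t[1])
        right = step_anywhere(t[1])
        if right is not None:
            return (t[0], right)
    return None

def reduce_normal(t, limit=10000):
    """Reduce to normal form, if one is reached within the step limit."""
    for _ in range(limit):
        r = step_anywhere(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form found within the step limit")

# S(S(KS)K)Ifx: the Church numeral 2 applied to f and x.
two = (("S", (("S", ("K", "S")), "K")), "I")
term = ((two, "f"), "x")
```

Running reduce_normal(term) yields ("f", ("f", "x")), i.e. f(fx), in six steps.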

I would have to add either a NOT predicate, or an exclusive-OR predicate 
to get this to terminate at normal form.



From: Invisible
Subject: Re: Turing-complete
Date: 1 Jun 2012 07:03:55
Message: <4fc8a19b@news.povray.org>
On 01/06/2012 11:48 AM, Invisible wrote:

> It takes many, many steps, but it does eventually produce f(fx) as
> result #7. (And several more times after that, as it explores every
> possible reduction sequence. The number of these is presumably
> exponential in the size of the input...)

Not so much the /size/ of the input, but the number of redexes 
("reducible expressions") it contains.

At every point, the solver chooses a redex by some algorithm, and then 
reduces it. Overall, the solver makes all possible choices, and 
generates [at least] one result per choice. So clearly the number of 
redexes puts a lower bound on the number of solutions.

Reducing a K-redex or an I-redex results in one fewer redex. However, 
reducing an S-redex can duplicate an existing redex into /two/ new 
redexes. (Or it can leave the number of redexes unchanged. It depends on 
the exact expression.)
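Counting the redexes in a term is straightforward; a rough Python illustration, under the same assumed tuple representation (a term is a string or a 2-tuple (f, x) for the application "fx"):

```python
def is_redex(t):
    """Is a rule applicable at the root of t?"""
    if not isinstance(t, tuple):
        return False
    if t[0] == "I":                                              # Ix
        return True
    if isinstance(t[0], tuple) and t[0][0] == "K":               # Kxy
        return True
    if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
            and t[0][0][0] == "S"):                              # Sfgx
        return True
    return False

def count_redexes(t):
    """Count every redex in t, including those nested in sub-terms."""
    if not isinstance(t, tuple):
        return 0
    return int(is_redex(t)) + count_redexes(t[0]) + count_redexes(t[1])
```

For example, count_redexes(("I", ("I", "x"))) is 2: the solver has two distinct choices of what to reduce first.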

I would hazard a guess, therefore, that the number of possible reduction 
orders is a non-computable function of the input expression.

Look at it this way: It appears that the only way to compute the number 
of reduction orders is to actually /find/ all possible reduction orders. 
In other words, to execute the program and count how many ways you can 
do it. If the program is non-terminating, then the counting procedure is 
also non-terminating. So it appears that counting the number of 
reduction orders is at least as hard as the Halting Problem.

(This relies on the assumption that running the program is the /only/ 
way to compute the number of reduction orders...)



From: Invisible
Subject: Arithmetic
Date: 1 Jun 2012 10:01:33
Message: <4fc8cb3d$1@news.povray.org>
On 31/05/2012 11:47 AM, scott wrote:

> Also how hard would it be to add support for arithmetic operators? Then
> you would have an equation solver, which arguably would be more useful.

Peano arithmetic. Basically, you use Z[] to represent 0, and S[x] to 
represent x+1. In this way, 3 is represented by S[S[S[Z[]]]]. (In other 
words, 3 lots of S[].)
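The encoding is trivial to mirror in Python (hypothetical representation: Z[] becomes the string "Z", and S[x] becomes the tuple ("S", x)); these helpers convert between native integers and Peano terms:

```python
def to_peano(n):
    t = "Z"
    for _ in range(n):
        t = ("S", t)   # wrap n successors around zero
    return t

def from_peano(t):
    n = 0
    while t != "Z":    # peel off successors, counting as we go
        t = t[1]
        n += 1
    return n
```

For example, to_peano(3) is ("S", ("S", ("S", "Z"))).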

It's then quite easy to write sum and product predicates:

ISum{x, y, z}:
   (x = Z[] & y = z) |
   (~px ~pz
     x = S[px] &
     z = S[pz] &
     ISum{px, y, pz}
   )
;

IProduct{x, y, z}:
   (x = Z[] & z = Z[]) |
   (~px ~t
     x = S[px] &
     IProduct{px, y, t} &
     ISum{t, y, z}
   )
;

Using this, I was able to solve the following system of simultaneous 
equations:

   x + y = 9
   2x = y

The first equation can be represented by

   ISum{x, y, nine}

where nine is S[S[S[S[S[S[S[S[S[Z[]]]]]]]]]]. The second equation can be 
written

   IProduct{two, x, y}

where, again, two is S[S[Z[]]].

It takes several thousand steps, but Logic Box finds exactly one solution:

   x := S[S[S[Z[]]]];
   y := S[S[S[S[S[S[Z[]]]]]]];

In other words, x=3 and y=6, which does solve the given equations.

Notice that the ISum{} predicate on its own gives the solution set

   x := 0; y := 9;
   x := 1; y := 8;
   x := 2; y := 7;
   x := 3; y := 6;
   ...
   x := 8; y := 1;
   x := 9; y := 0;

Note also that the ISum{} predicate needs to come first, since it has a 
finite solution set. The IProduct{} predicate has the infinite solution set

   x := 0; y := 0;
   x := 1; y := 2;
   x := 2; y := 4;
   x := 3; y := 6;
   ...

Either way, the correct solution is located. But if IProduct{} comes 
first, the solver never terminates, fruitlessly searching the infinite 
search space for non-existent further solutions. If ISum{} comes first, 
the search terminates eventually.

Either way, it's pretty damned slow. It's basically searching for all 
possible combinations of numbers having the requested properties.
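The same search, sketched as brute-force generate-and-test in Python: enumerate candidate pairs and keep those satisfying both equations. (The range bound of 10 is an assumption justified by x + y = 9 limiting both unknowns.)

```python
solutions = [(x, y)
             for x in range(10)   # candidate values for x
             for y in range(10)   # candidate values for y
             if x + y == 9 and 2 * x == y]
```

This yields [(3, 6)], matching the x=3, y=6 found above.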

It shouldn't be too hard to add an IExponent{} predicate too, and ask 
the solver to solve some polynomial equations. Remember, though, that at 
this point it handles only integers.

More generally, I'm going to see whether I can implement some more 
efficient type of arithmetic. (E.g., binary arithmetic.)



From: Darren New
Subject: Re: Logic Box v2 release #1
Date: 2 Jun 2012 16:39:59
Message: <4fca7a1f$1@news.povray.org>
On 5/31/2012 5:27, scott wrote:
> Now that would be a very cool solution, as in you wouldn't need to tell the
> computer *how* to solve it (as you have to when writing a "traditional"
> sudoku solver), just what the rules are.

I saw one discussion of writing a sudoku solver with backtracking and so 
on. The guy gave it an empty grid and said "solve this one". And it did.

-- 
Darren New, San Diego CA, USA (PST)
   "Oh no! We're out of code juice!"
   "Don't panic. There's beans and filters
    in the cabinet."



From: Orchid Win7 v1
Subject: Re: Logic Box v2 release #1
Date: 3 Jun 2012 10:15:31
Message: <4fcb7183$1@news.povray.org>
>> Having written something like this is also very good for CVs and job
>> interviews :-)
>
> yes, at least if he's willing to change "written by Orphi the AweKid"... :p

Please. I have the source code. And the Git repo. ;-) Nobody else on 
Earth has that...



From: Kevin Wampler
Subject: Re: Logic Box v2 release #1
Date: 3 Jun 2012 18:30:35
Message: <4fcbe58b$1@news.povray.org>
On 6/2/2012 1:39 PM, Darren New wrote:
> On 5/31/2012 5:27, scott wrote:
>> Now that would be a very cool solution, as in you wouldn't need to
>> tell the
>> computer *how* to solve it (as you have to when writing a "traditional"
>> sudoku solver), just what the rules are.
>
> I saw one discussion of writing a sudoku solver with backtracking and
> so on. The guy gave it an empty grid and said "solve this one". And it did.
>

It's also pretty trivial to modify the algorithm so that it enumerates 
all the possible solutions from any starting state (or at least tries 
to; if you run it on an empty board, it'll be listing them for quite a 
while).



From: Orchid Win7 v1
Subject: Re: Arithmetic
Date: 4 Jun 2012 09:11:39
Message: <4fccb40b$1@news.povray.org>
>> Also how hard would it be to add support for arithmetic operators? Then
>> you would have an equation solver, which arguably would be more useful.

> Either way, it's pretty damned slow. It's basically searching for all
> possible combinations of numbers having the requested properties.

> More generally, I'm going to see whether I can implement some more
> efficient type of arithmetic. (E.g., binary arithmetic.)

One might think about implementing some kind of primitive which allows 
you to add and subtract native integers directly. Actually, this turns 
out to be less helpful than you might imagine.

Using Peano arithmetic, we can write things such as

   x = S[y]

This denotes the fact that x = y+1, without specifying which actual 
integer y contains; we can work that out later. By contrast, if you 
wanted to implement some primitive which works on definite integers, you 
would have to somehow augment the solution language to support 
constraints like "x is one greater than y". Using Peano arithmetic, you 
get that for free.

Perhaps the only primitive I ought to be adding is one to convert 
between native integers and Peano integers...

I did manage to implement addition (and therefore subtraction) of binary 
numbers. On the one hand, that makes numbers much shorter to write. (The 
number of characters required to represent N is now O(log N) rather than 
O(N).) On the other hand, the predicates become quite a lot more 
complicated, so it's dubious whether it actually runs any /faster/. It's 
also much harder to debug if it doesn't work properly.



From: Orchid Win7 v1
Subject: Re: Arithmetic
Date: 4 Jun 2012 09:14:03
Message: <4fccb49b$1@news.povray.org>
> I did manage to implement addition (and therefore subtraction) of binary
> numbers.

The sum of two Peano integers:

ISum{x, y, z}:
   (x = Z[] & y = z) |
   (
     ~predX x = S[predX] &
     ~predZ z = S[predZ] &
     ISum{predX, y, predZ}
   )
;

The sum of two binary integers:

ISum{x, y, z}:
   ~co ~t ISum_Dn{x, y, 0, co, t} &
   (
     (co = 0 & z = t) |
     (co = 1 & z = D[1, t])
   )
;

ISum_D1{x, y, ci, co, z}:
   (x = 0 & y = 0 & ci = 0 & co = 0 & z = 0) |
   (x = 0 & y = 0 & ci = 1 & co = 0 & z = 1) |
   (x = 0 & y = 1 & ci = 0 & co = 0 & z = 1) |
   (x = 0 & y = 1 & ci = 1 & co = 1 & z = 0) |
   (x = 1 & y = 0 & ci = 0 & co = 0 & z = 1) |
   (x = 1 & y = 0 & ci = 1 & co = 1 & z = 0) |
   (x = 1 & y = 1 & ci = 0 & co = 1 & z = 0) |
   (x = 1 & y = 1 & ci = 1 & co = 1 & z = 1)
;

ISum_Dn{x, y, ci, co, z}:
   ~headX ~tailX x = D[headX, tailX] &
   ~headY ~tailY y = D[headY, tailY] &
   ~headZ ~tailZ z = D[headZ, tailZ] &
   (
     (
       tailX = D[] &
       tailY = D[] &
       tailZ = D[] &
       ISum_D1{headX, headY, ci, co, headZ}
     ) |
     (
       ~ct ISum_Dn{tailX, tailY, ci, ct, tailZ} &
           ISum_D1{headX, headY, ct, co, headZ}
     )
   )
;

Note also that if we use Peano integers, there is exactly one way to 
write each number. With binary, there are several ways. For example, 11 
and 0011 both denote 3. The ISum{} predicate above works only for 
numbers containing the same number of bits. I haven't even attempted to 
implement binary multiplication yet. (That would probably turn out to be 
drastically faster, however...)
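For reference, the same scheme is easy to sketch in Python: the full adder as an explicit truth table (mirroring ISum_D1{}), then a ripple-carry over two equal-length bit lists, most significant bit first (mirroring the D[head, tail] structure). The list representation is an assumption for illustration, not the Logic Box encoding itself.

```python
FULL_ADDER = {  # (x, y, carry_in) -> (carry_out, sum_bit)
    (0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
    (0, 1, 0): (0, 1), (0, 1, 1): (1, 0),
    (1, 0, 0): (0, 1), (1, 0, 1): (1, 0),
    (1, 1, 0): (1, 0), (1, 1, 1): (1, 1),
}

def add_bits(xs, ys):
    """Add two equal-length bit lists (MSB first); the result grows by
    one bit if the final carry out is 1."""
    carry, zs = 0, []
    for x, y in zip(reversed(xs), reversed(ys)):  # least significant first
        carry, s = FULL_ADDER[(x, y, carry)]
        zs.append(s)
    zs.reverse()
    return [1] + zs if carry else zs              # prepend final carry
```

For example, add_bits([1, 1], [0, 1]) is [1, 0, 0], i.e. 3 + 1 = 4; note the result grew a bit, just as z = D[1, t] does in the ISum{} predicate above.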



From: Invisible
Subject: Re: Logic Box v3 release #1
Date: 22 Jun 2012 09:41:52
Message: <4fe47620$1@news.povray.org>
On 01/06/2012 09:02 AM, Invisible wrote:

> There's a couple of other ideas I have for what could be done with this
> program. (E.g., improving the output, refactoring the code, adding new
> features.) But honestly, I think I'm bored with it now. I don't suppose
> I'll be making any further releases soon. It's time for a different
> project...

Heh. So much for that!

Attached is v3, which is a 100% rewrite of the entire codebase. (I mean, 
it's not like it's /large/ or anything...) In particular, you can now 
step /backwards/ as well as forwards. And I've tried to make the status 
display a little less perplexing. (Not sure I succeeded!)

Also, the previous version of this program had a bug relating to named 
predicate calls: basically, a name-capture bug. I believe this new 
version is free of such flaws.

I know how excited everybody has been about this program (i.e., not in 
the slightest), so I imagine this new release will generate a wave of 
excitement...




Attachments:
Download 'logicbox-03 release1.jar.zip' (225 KB)

From: Invisible
Subject: Re: Turing-complete
Date: 17 Jul 2012 10:10:51
Message: <5005726b$1@news.povray.org>
On 01/06/2012 11:48 AM, Invisible wrote:
> Theorem: The language implemented by Logic Box is Turing-complete.
>
> Proof: I have written a program in Logic Box which executes any program
> written in the SKI calculus. The SKI calculus is known to be
> Turing-complete.
>
> Actually, things are not quite so simple. But here goes...

> Unfortunately, because the OR operator tries /all/ possibilities rather
> than just keeping the first successful one, what this does is to produce
> a result set containing all possible reductions, using all possible
> reduction orders.
>
> So in summary, we can say that running the program produces a result set
> which contains the correct answer /somewhere/, plus a whole heap of
> various intermediate results. As best as I can tell, there's no way to
> determine /where/ in the solutions it is. So I'm not completely sure
> this counts as a valid proof.

Actually, it turns out that by swapping the order of some predicates, 
you can make it so that Normal Order Reduction is always solution #1. 
This conclusively proves the Turing-completeness of my little 
programming language.

It also conclusively proves something that will surprise nobody: My 
talents are wasted here.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.