>> Now, how the heck you formulate that into some kind of coherent
>> definition......
>
> When you do, submit it as a PHD thesis. ;-)
Thanks. I'll pass.
> Actually I don't think that I agree with your comparison between the
> abacus and pocket calculator. I think that the latter is just a better
> type of the former.
In order to work an abacus, /you/ have to know that ten beans on one
line is worth one bean on the next line. With a pocket calculator, this
kind of thing is automatic. You don't have to understand it.
I will concede that automation is a "shades of grey" thing. There are
greater and lesser degrees of automation, rather than just "manual" and
"automatic".
>> I'm well aware that there have been many computers that used decimal
>> instead of binary. (A few even used other number systems.)
>
> I've worked on at least one system that used BCD (Binary Coded Decimal)
Yes, but generally you use BCD if you have a /binary/ system and you
specifically want to do exact /decimal/ arithmetic. (It's a well-known
fact that 1/10 is not representable as an exact binary fraction. In some
settings, that might be a problem.)
> alter its input or operating instructions or what?
> for each problem they needed to solve. But it could be done.
The formal definition of Turing-completeness is abstract and technical.
Essentially Alan Turing proved two main things:
1. It is possible to build *one* machine that can perform *any* possible
calculation.
2. Some calculations are impossible.
[Simplifying wildly, of course.]
You would think that a more complex, more sophisticated machine would be
able to calculate more things than a simpler machine. In fact, it turns
out that if they are both Turing-complete, they can both calculate
exactly the same stuff. (The more sophisticated machine is probably much
/faster/, of course...)
Consider an analogue computer, for a moment. It might have circuits that
can add numbers together. But if I want to add up one million operands,
I need to wire together [at least] one million adder circuits.
Now consider a simple digital computer. It might have only one binary
adder circuit, and yet it can add up an unlimited number of operands,
with unlimited precision.
Analogue computer: More steps = more hardware.
Digital computer: More steps = more time.
That's a fundamental difference. If I have an analogue computer with ten
adder circuits, I cannot add up eleven numbers. But if I have a
Turing-complete machine, I can add up an /unlimited/ number of operands.
You never need more hardware. It just takes more time.
This difference comes about mainly because a digital computer has a
/memory/. It can /store/ operands and feed them through its arithmetic
circuits one at a time. An analogue computer can't do that, so it has to
have one dedicated circuit for each step of the process. A digital
computer can /reuse/ the same circuits again and again for different
steps, because the steps won't happen all at once.
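The reuse idea can be sketched in a few lines (Python here purely as illustration; the function standing in for the adder circuit is hypothetical, of course):

```python
# One "adder circuit", reused for every step.
def add(a, b):
    return a + b

# A million operands sitting in memory...
operands = range(1_000_000)

# ...fed through the single adder one at a time.
# More operands = more loop iterations = more time,
# never more hardware.
total = 0
for x in operands:
    total = add(total, x)
print(total)  # 499999500000
```

The analogue equivalent would need a physical adder per step; here the same `add` is invoked a million times.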
Also, you can often "program" an analogue computer. But you basically do
that by rewiring it. The machine can't rewire itself. But a
Turing-complete machine can have self-modifying code.
Actually /proving/ that something is Turing-complete is usually not
simple at all. But proving that something /isn't/ Turing-complete just
requires you to find one thing that it can't do [and a Turing-complete
machine can].
You could say it's a calculator if it only calculates one thing (e.g., a
mechanical tide predictor), it's a calculator construction kit if it can
calculate many things (e.g., a typical "programmable" analogue
"computer"), and it's a computer if it's Turing-complete (i.e., it has a
stored program, decision-making capabilities, etc.)