On 9/30/2011 8:47 AM, Stephen wrote:
>> Regardless, have you ever actually seen an analogue computer with
>> independently addressable memory cells? I haven't heard of such a thing.
>>
> What to answer first? I don't know, the only actual working valve
> computer I've ever been in the presence of was at Glasgow University,
> over 40 years ago. I've never had hands on experience working with them.
> But you would not use addressable memory as you would in digital
> computers. Remember that we are talking about voltage levels
> representing numbers. Also, back in the day, what was being asked of them
> was much simpler, so a lot of memory would not have been required.
>
Just use a large, covered lake as the "storage". You get a certain
level of error, and possible leaks, but you could compute a number as
large as... whatever the lake could contain, in whatever unit of water
your system dealt with. lol

Seriously though, I would say the distinction between digital and
analog in this sense comes down to (a) how long you can hold a value,
and (b) the assumption that the value you hold has a set number of
discrete levels. In DNA, assuming you were using it to encode numbers,
that's 4 values (ATGC) in some indefinite set of combinations; maybe 3
states for the "quantum" systems they are trying to develop; or 2,
on/off, for current systems. In all of these cases the "values" are
discrete, hence digital. In the case of something like DNA you can't
"get" any other values (unless you completely change the proteins
involved). I'm not sure what you get with quantum effects, but
generally they seem to be discrete states. With binary, though, we
*intentionally* ignore any difference of state other than "below level
X" or "above it". So, if your "analog" circuit produced the values 0.1,
0.3, 0.6, 0.12, and 0.7, "binary" simply enforces the rule that this is
actually 00101.
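That thresholding rule can be sketched in a few lines of Python. The 0.5 cutoff here is my assumption for illustration; the text only says "below X level, or above it":

```python
# Sketch of the thresholding described above: each "analog" reading is
# collapsed to a binary digit by comparing it against a cutoff level.
# The 0.5 cutoff is an illustrative assumption.

def to_bit(v, cutoff=0.5):
    """Collapse an analog voltage in [0.0, 1.0] to a single binary digit."""
    return 1 if v >= cutoff else 0

readings = [0.1, 0.3, 0.6, 0.12, 0.7]
bits = "".join(str(to_bit(v)) for v in readings)
print(bits)  # "00101", matching the example above
```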
In principle, your "analog" computer just throws out that assumption.
The result is less reliable, which is the main reason we stopped trying
to use it. But, in theory, if someone had wanted to, they could maybe
have made a base 10 computer by treating each range (0-0.1, 0.1-0.2,
etc.) as a different "state". It probably wouldn't have been at all
feasible: the fail rate on circuits that didn't produce the correct
result, or didn't match specifications, would have been much, much
higher. If today we throw out one in every 1,000 processors, you might
see a fail rate of 1 in 50 for such a base 10 system, or worse.
Because, if you are dealing with just on and off, 0.6-1.0 is
"acceptable" for the "on" state and 0-0.4 might be for "off", with only
the middle range of values being too ambiguous.
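The tolerance argument can be sketched the same way: binary gets wide guard bands, while ten levels packed into the same voltage range leave almost no margin. The specific band widths below are my invention for illustration, not anything from the post:

```python
# Sketch of why ten levels are less forgiving than two: a noisy reading
# near a range boundary must be rejected as ambiguous. The guard-band
# widths (0.4/0.6 for binary, 0.01 for decimal) are illustrative
# assumptions.

def read_binary(v):
    """Two states with generous margins; only the middle is ambiguous."""
    if v <= 0.4:
        return 0
    if v >= 0.6:
        return 1
    return None  # ambiguous reading

def read_decimal(v):
    """Ten states: each 0.1-wide range is a digit; range edges are ambiguous."""
    digit = min(int(v * 10), 9)
    # Reject readings within 0.01 of a range boundary.
    if min(v - digit / 10, (digit + 1) / 10 - v) < 0.01:
        return None
    return digit

print(read_binary(0.72))    # 1
print(read_decimal(0.72))   # 7
print(read_decimal(0.605))  # None -- too close to the 0.6/0.7 boundary
```

The same 0.005 of drift that `read_binary` shrugs off pushes `read_decimal` into an ambiguous band, which is the gist of the higher fail rate claimed above.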