Warp wrote:
> Of course everything can be proven formally. The problem is how hard it
> is to prove it. If you need to be a mathematician to formally prove that
> certain values always keep within certain boundaries, that can be pretty
> demanding for a programmer.
Fair enough. I guess the research is in making it easier to prove.
Apparently the stuff for proving that Ada (or rather the subset called
"Spark") code is valid lets you do proofs by induction and by
contradiction, which seems pretty ... intense.
> One practical situation where it may be hard to prove you never access
An excellent example. Thanks.
I'm wondering if it's really as hard as it sounds, *given the code*.
Obviously the hash is going to have to be taken modulo the table size,
for example, and storing the link into the hash table is going to have
to calculate the link somehow. I think by leaving out the details, it
might look harder to prove than it is. Given the code for calculating
what the next link would be, and where to store it into the table, it
might be pretty obvious you can't run off the end.
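Just to make the guess concrete, here's roughly the shape of code I'm
imagining (all the names are made up by me, not taken from any real
implementation, and I'm ignoring duplicate keys and lookups since they
don't change the bounds argument): a hash table whose chains are stored
as array indices. The bound on the index is sitting right in the
expression that computes it.

    // Hypothetical sketch (Java-ish): chains stored as array indices.
    // Every value ever written into next[] or head[] is either -1 or an
    // index that was itself bounded by count < keys.length, which is the
    // invariant a prover (or a reader) needs.
    final class IndexChainedTable {
        private Object[] keys   = new Object[16];
        private Object[] values = new Object[16];
        private int[]    next   = new int[16];  // next entry in chain, or -1
        private int[]    head   = new int[16];  // first entry of each bucket, or -1
        private int      count  = 0;            // invariant: 0 <= count <= keys.length

        IndexChainedTable() {
            java.util.Arrays.fill(next, -1);
            java.util.Arrays.fill(head, -1);
        }

        // Reduced modulo the table size, so 0 <= bucket(key) < head.length
        // is evident from this one expression.
        private int bucket(Object key) {
            return (key.hashCode() & 0x7fffffff) % head.length;
        }

        void put(Object key, Object value) {
            if (count == keys.length) {
                grow();                         // would double all four arrays
            }
            int b = bucket(key);
            int i = count++;                    // i < keys.length, checked just above
            keys[i]   = key;
            values[i] = value;
            next[i] = head[b];                  // -1 or an index already < count
            head[b] = i;
        }

        private void grow() {
            // Left out to keep the sketch short.
            throw new UnsupportedOperationException("growth omitted from sketch");
        }
    }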
Of course, shrinking the table might also be either really hard, or less
hard, to prove, depending on the actual code that decides when to shrink
the table. How does the code know it's time to shrink the table? The
same code that checks there aren't pointers off the end of the table
would be able to prove that it's safe to shrink the table.
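Continuing the same made-up sketch, the shrinking decision might look
something like this; the point being that the shrink is triggered by the
same count that bounds every stored index, so the "links still fit"
argument is the one we already had. (Again, this is my guess at the
shape of the code, not anyone's real implementation.)

    // Would slot into the sketch above: shrink only when the live entries
    // fit comfortably in a half-sized table, then rebuild the links so
    // every stored index is again < the new length.
    private void maybeShrink() {
        if (count > keys.length / 4 || keys.length <= 16) {
            return;                          // table unchanged, nothing new to prove
        }
        int newLen = keys.length / 2;        // count <= newLen by the test above
        Object[] newKeys = new Object[newLen];
        Object[] newVals = new Object[newLen];
        int[] newNext = new int[newLen];
        int[] newHead = new int[newLen];
        java.util.Arrays.fill(newNext, -1);
        java.util.Arrays.fill(newHead, -1);
        for (int i = 0; i < count; i++) {    // i < count <= newLen
            newKeys[i] = keys[i];
            newVals[i] = values[i];
            int b = (newKeys[i].hashCode() & 0x7fffffff) % newLen;
            newNext[i] = newHead[b];         // -1 or an index already < count
            newHead[b] = i;
        }
        keys = newKeys; values = newVals; next = newNext; head = newHead;
    }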
If you actually have such code you could show, that would be cool.
Otherwise, yah, it's probably a good example. Something like B-tree
maintenance could be interesting to look at.
> I would be pretty impressed by the programming language where you are
> even able to express that.
Here's an example out of the Spec# document, for inserting into a
variable-length array:
one can specify the postconditions of Insert as follows:

    void Insert(int index, object value)
      ensures Count == old(Count) + 1;
      ensures value == this[index];
      ensures Forall{int i in 0 : index; old(this[i]) == this[i]};
      ensures Forall{int i in index : old(Count);
                     old(this[i]) == this[i + 1]};
    { ... }
(Which basically says the array is one larger, the element you passed in
shows up where you said, the ones before didn't change, and the ones
after moved up one.)
So saying that every entry's "next" link in the hash array stays below
the size of the array is just another Forall expression there:

    Forall{int i in 0 : hasharray.size; hasharray[i].next < hasharray.size};
Now, of course, proving that without a runtime check is the hard part.
> Intuitively proving this to a human could be much easier (although
> it would probably not be a formal proof).
Hmmmm... Hard to say, really. Certainly it would be easier to be less
precise, but I'm not sure it would be easier to *prove* it to a human
than it would be to prove it to a machine. For example, it's pretty easy
for a compiler to prove a binary search is safe and to remove the
runtime checks from that.
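Concretely (my own toy example, nothing fancy): the only array access in
a binary search is a[mid], and mid is squeezed between lo and hi, which
start inside the array and only ever move toward each other. That's
exactly the kind of invariant a prover can discharge mechanically.

    // Minimal binary search sketch; a[mid] is the only array access and
    // 0 <= lo <= mid <= hi <= a.length - 1 holds whenever the body runs.
    static int binarySearch(int[] a, int key) {
        int lo = 0;
        int hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // lo <= mid <= hi, no overflow
            if (a[mid] < key) {
                lo = mid + 1;               // lo only grows
            } else if (a[mid] > key) {
                hi = mid - 1;               // hi only shrinks
            } else {
                return mid;
            }
        }
        return -1;                          // not found
    }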
Correctness, sure, I can see the human argument being easier there. But
safety? I'm still unconvinced.
As in, it could be much easier to prove, for a text editor, that you'll
never overflow the screen buffer than it is to prove that the sequence
of operations you do to insert a line of text actually leaves the screen
looking the way you'd expect. The types of things you want to prove for
a safe language are actually rather constrained compared to proving
"this program matches the specs."
Personally, my take is that if you can't formally prove your code
doesn't have holes in it, the run-time check is probably worthwhile at
that point, if you're actually interested in the robustness of your
code. If performance is a problem, put in enough asserts that the
runtime checks happen outside the hot spots instead of inside them.
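Something like this, say (toy example of mine; in Java you'd need -ea
for the assert to actually fire, but the point is where the check lives,
not the mechanism):

    // The explicit checking is hoisted out of the loop: once the range is
    // validated up front, every index used below is known to be in bounds,
    // and a good JIT can often use the same fact to drop its own per-access
    // bounds checks.
    static long sumRange(int[] data, int from, int to) {
        assert 0 <= from && from <= to && to <= data.length
                : "range out of bounds";
        long total = 0;
        for (int i = from; i < to; i++) {   // i < to <= data.length
            total += data[i];               // in bounds by the check above
        }
        return total;
    }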
--
Darren New / San Diego, CA, USA (PST)