>> I have found that in complex systems, it's all about well-defined
>> interfaces. If your interfaces are poor, the best unit implementations
>> will get you nowhere.
>
> Agreed. The contention of the TDD people is that by writing the tests
> first, you're specifying the interfaces. I just don't think that's a
> good idea.
>
> For example, I have a library called "S3" where error codes come back as
> a list whose first element is "S3", whose second element says
> (basically) whose fault it is (yours, the ISP's, the server's, etc.), and
> the third through Nth tell you what the error was in increasing
> detail. If I did TDD, I wouldn't have a flag on the front saying it was
> my one and only library that threw the error. (Alternately, one could
> say my exception hierarchy is rooted in an exception specific to my one
> library.) This isn't the sort of thing you do with TDD - you don't start
> creating a hierarchy of exceptions that make the library play well with
> others until you actually run into other libraries that don't play well.
> I guess maybe you could say it isn't needed, but what you wind up with
> is a fragile design that's constantly in need of being fixed because it
> wasn't thought out well.
>
> I.e., TDD is "we're too stupid to think out a design far enough to have
> a good idea what we'll need." You shouldn't be designing stuff, if
> that's the case. And stay the *hell* away from my database schema.

In principle, designing the test cases first is a nice idea. However, no
number of test cases is going to help with future requirements. A
well-planned design has some hope of doing that.

For that reason alone, I'd say that "if the test cases pass then the
software *is* correct" is rather short-sighted.

(There's also the minor issue of the test cases being wrong. I know I've
done that before now...)
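
For reference, the error-list convention described above maps naturally
onto a rooted exception hierarchy. Here's a rough sketch in Python; the
class names and fault categories are made up for illustration, not the
actual S3 library's API:

    # Hypothetical sketch, not the real S3 library.
    class S3Error(Exception):
        """Root of the hierarchy; catch this to get everything from S3.

        error_code is a list: ["S3", fault, detail1, detail2, ...]
        where fault says whose fault it is and later elements describe
        the error in increasing detail.
        """
        def __init__(self, fault, *details):
            self.error_code = ["S3", fault] + list(details)
            super().__init__(" ".join(self.error_code))

    class S3UsageError(S3Error):       # the caller's fault
        def __init__(self, *details):
            super().__init__("usage", *details)

    class S3RemoteError(S3Error):      # the server's fault
        def __init__(self, *details):
            super().__init__("remote", *details)

    # One except clause distinguishes this library's errors from everyone else's:
    try:
        raise S3RemoteError("HTTP", "503", "Service Unavailable")
    except S3Error as e:
        print(e.error_code)  # ['S3', 'remote', 'HTTP', '503', 'Service Unavailable']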