Darren New <dne### [at] san rr com> wrote:
> Not TDD. TDD is "the test is the specification." If it passes the test, you
> don't change anything, and if it doesn't, it's broken by definition.
To me, that seems like a *very* stupid way to design your code.
I'm really a fan of PbC; whenever I need to implement some rather complex unit, I
find myself going for this approach almost automatically - and it leads me to very
robust interfaces, because in addition to how each side *must* or *may* behave,
I start thinking separately about how the other side may actually *expect* it to
behave. Thus, between one side's obligations and the other side's expectations I
get some room to maneuver for later changes or fault-tolerance.
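For flavor, here is a minimal Python sketch of such a contract - preconditions as
the caller's obligations, a postcondition as the only promise the caller may rely
on. The account example and its conditions are invented for illustration, not
taken from any real contract library:

    from dataclasses import dataclass

    @dataclass
    class Account:
        balance: int

    def withdraw(account: Account, amount: int) -> int:
        # Preconditions: the caller's obligations. The callee may assume
        # them and need not handle their violation gracefully.
        assert amount > 0, "precondition: amount must be positive"
        assert account.balance >= amount, "precondition: sufficient funds"
        old_balance = account.balance
        account.balance -= amount
        # Postcondition: the callee's obligation - and the *only* thing
        # the caller may expect. Everything else is room to maneuver.
        assert account.balance == old_balance - amount
        return account.balance

    withdraw(Account(balance=100), 30)   # returns 70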
I have found that in complex systems, it's all about well-defined interfaces. If
your interfaces are poor, the best unit implementations will get you nowhere.
You will get all kinds of poorly understood errors, because of race conditions
or deadlocks between units, or other such crap, often being sporadic and
timing-dependent, and therefore both hard to notice in the first place, and
hard to hunt down.
Yeah, module testing is a good thing, but specifying interfaces by virtue of
test cases... yuck!
Post a reply to this message
On 3/1/2009 7:38 PM, Darren New wrote:
> This is kind of what inspired the question to start with:
>
> http://gojko.net/2009/02/27/thought-provoking-tdd-exercise-at-the-software-craftsmanship-conference/
Interesting... this kind of goes along with the fact that many
programmers tend to add features that aren't needed, on the assumption
that they may, one day, be needed. Coding for future problems, rather
than the one at hand :)
I read a blog post from someone (can't remember who at the moment) who
stringently told his programmers to always write specific code, even if
they were certain that a general-case method would be useful later on.
90% of the time, they never ended up using the general-case methods.
Instead, he said they could write a simplified general case after they
already used the same functionality in two separate projects. Once they
used it in three separate projects, then they were allowed to do a full
design cycle on it and add it to the company's other libraries.
--
...Chambers
www.pacificwebguy.com
Post a reply to this message
"clipka" <nomail@nomail> wrote:
> I have found that in complex systems, it's all about well-defined interfaces. If
> your interfaces are poor, the best unit implementations will get you nowhere.
> You will get all kinds of poorly understood errors, because of race conditions
> or deadlocks between units, or other such crap, often being sporadic and
> timing-dependent, and therefore both hard to notice in the first place, and
> hard to hunt down.
Oh, and worst of all: Once you find and identify one of those bugs, each
module's developer will insist that *his* module is perfectly ok, and that it
is the other modules that need to be fixed. Yay!
And once you implement a workaround, chances are you break something else...
Post a reply to this message
Darren New <dne### [at] san rr com> wrote:
> Chambers wrote:
> > It passes the test, so TDD assumes that the function is correct. This,
> > to me, is wrong.
>
> This is kind of what inspired the question to start with:
>
> http://gojko.net/2009/02/27/thought-provoking-tdd-exercise-at-the-software-craftsmanship-conference/
There is one big flaw in the message the exercise seems to try to convey: in
reality, starting from a simple test case will *not* necessarily lead you
automatically to an implementation suitable for the test cases still to come.
Post a reply to this message
clipka wrote:
> There is one big flaw in the message the exercise seems to try to convey: in
> reality, starting from a simple test case will *not* necessarily lead you
> automatically to an implementation suitable for the test cases still to come.
Yes. It also assumes you know what the simple testcase is supposed to be.
--
Darren New, San Diego CA, USA (PST)
My fortune cookie said, "You will soon be
unable to read this, even at arm's length."
Post a reply to this message
clipka wrote:
> Darren New <dne### [at] san rr com> wrote:
>> Not TDD. TDD is "the test is the specification." If it passes the test, you
>> don't change anything, and if it doesn't, it's broken by definition.
>
> To me, that seems like a *very* stupid way to design your code.
It depends on what you mean by "design", really. I think this particular
discussion I linked to exemplifies it well. The person running the meeting
knew what he wanted and what tests had to pass.
Let them finish the entire game, then say "by the way, we want a GUI
interface also", and all of a sudden the whole "don't do it as a grid" thing
falls apart.
> I'm really a fan of PbC;
I agree. PbC is also fairly close to "functional" (in the Haskell sense of
the word).
> I have found that in complex systems, it's all about well-defined interfaces. If
> your interfaces are poor, the best unit implementations will get you nowhere.
Agreed. The contention of the TDD people is that by writing the tests first,
you're specifying the interfaces. I just don't find that's a good idea.
For example, I have a library called "S3" where error codes come back as a
list whose first element is "S3", whose second element says (basically)
whose fault it is (yours, the ISP, the server's, etc), and the third thru
Nth tell you what the error was in increasingly more detail. If I did TDD, I
wouldn't have a flag on the front saying it was my one and only library that
threw the error. (Alternately, one could say my exception hierarchy is
rooted in an exception specific to my one library.) This isn't the sort of
thing you do with TDD - you don't start creating a hierarchy of exceptions
that make the library play well with others until you actually run into
other libraries that don't play well. I guess maybe you could say it isn't
needed, but what you wind up with is a fragile design that's constantly in
need of being fixed because it wasn't thought out well.
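A rough Python sketch of that error shape - only the list layout (library tag
first, blame second, then increasingly specific detail) is from the description
above; the classify_error helper and the concrete strings are invented, not the
actual library:

    def classify_error(http_status: int) -> list:
        """Build an error as a list: library tag, blame, then detail."""
        if http_status >= 500:
            return ["S3", "server", "http", str(http_status), "internal error"]
        if http_status in (401, 403):
            return ["S3", "caller", "auth", str(http_status), "access denied"]
        return ["S3", "caller", "http", str(http_status), "request rejected"]

    err = classify_error(503)
    assert err[0] == "S3"    # the flag on the front: this library threw it
    blame = err[1]           # whose fault: "caller", "isp", "server", ...
    detail = err[2:]         # increasingly specific description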
I.e., TDD is "we're too stupid to think out a design far enough to have a
good idea what we'll need." You shouldn't be designing stuff, if that's the
case. And stay the *hell* away from my database schema.
> Yeah, module testing is a good thing, but specifying interfaces by virtue of
> test cases... yuck!
Yep. I think a good insight was that TDD is really "separate your code into
functional and side-effectful pieces." Then the TDD tests the functional
part without testing the side effects. The side effects get tested as a
"gee, we hope our functional emulation of the stateful server is right."
--
Darren New, San Diego CA, USA (PST)
My fortune cookie said, "You will soon be
unable to read this, even at arm's length."
Post a reply to this message
Chambers wrote:
> Personally, I prefer PbC, as it minimizes side-effects.
>
> With TDD, the whole point of programming is to get it to pass the tests.
> When I first read about it, they even gave this example (pseudocode):
>
> Test:
> add(3,5)=8
>
> add(a,b)
> return 8
>
> It passes the test, so TDD assumes that the function is correct. This,
That's really pathetic.
I've never read a book on it, so I don't know if a true believer really
would do that. What I've read is that the first task after writing the
test (and yes, they did encourage multiple cases) is to write a
_correct_ algorithm, but not worry about efficiency. Once it is correct
(i.e. passes all test cases), you can start optimizing it - running the
tests each time you optimize to make sure you didn't break something.
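A toy Python illustration of that loop (my own example, not from any TDD
text): first a correct-but-slow implementation, then the tests that guard
the later optimization passes.

    import unittest

    def fib(n: int) -> int:
        """First pass: obviously correct, no concern for speed."""
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    class TestFib(unittest.TestCase):
        def test_base_cases(self):
            self.assertEqual(fib(0), 0)
            self.assertEqual(fib(1), 1)

        def test_later_values(self):
            self.assertEqual(fib(10), 55)

    # Optimization pass: swap in an iterative body, rerun the same tests,
    # and keep the change only if they still pass.
    if __name__ == "__main__":
        unittest.main()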
--
Dopeler effect: The tendency of stupid ideas to seem smarter when they
come at you rapidly.
Mueen Nawaz
>>>>>>mue### [at] nawaz org<<<<<<
Post a reply to this message
TDD always sounds to me like learning by mistakes.
Post a reply to this message
On 3/2/2009 8:43 AM, Mueen Nawaz wrote:
> Chambers wrote:
>> add(a,b)
>> return 8
>>
>> It passes the test, so TDD assumes that the function is correct. This,
>
> That's really pathetic.
>
> I've never read a book on it, so I don't know if a true believer really
> would do that.
They wouldn't take it to that extreme, because the example was specifically
thought up as a counterargument to TDD :)
The point is, the methodology of TDD is to make your code "beat" the
test cases, and that's all.
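Spelled out as a runnable Python toy (the add() example from the quoted
post, nothing more):

    # The degenerate implementation "beats" the single test...
    def add(a, b):
        return 8

    assert add(3, 5) == 8        # green bar!

    # ...until a second case is added, which forces the real thing:
    def add(a, b):
        return a + b

    assert add(3, 5) == 8
    assert add(2, 2) == 4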
--
...Chambers
www.pacificwebguy.com
Post a reply to this message
>> I have found that in complex systems, it's all about well-defined
>> interfaces. If
>> your interfaces are poor, the best unit implementations will get you
>> nowhere.
>
> Agreed. The contention of the TDD people is that by writing the tests
> first, you're specifying the interfaces. I just don't find that's a good
> idea.
>
> For example, I have a library called "S3" where error codes come back as
> a list whose first element is "S3", whose second element says
> (basically) whose fault it is (yours, the ISP, the server's, etc), and
> the third thru Nth tell you what the error was in increasingly more
> detail. If I did TDD, I wouldn't have a flag on the front saying it was
> my one and only library that threw the error. (Alternately, one could
> say my exception hierarchy is rooted in an exception specific to my one
> library.) This isn't the sort of thing you do with TDD - you don't start
> creating a hierarchy of exceptions that make the library play well with
> others until you actually run into other libraries that don't play well.
> I guess maybe you could say it isn't needed, but what you wind up with
> is a fragile design that's constantly in need of being fixed because it
> wasn't thought out well.
>
> I.e., TDD is "we're too stupid to think out a design far enough to have
> a good idea what we'll need." You shouldn't be designing stuff, if
> that's the case. And stay the *hell* away from my database schema.
In principle, designing the test cases first is a nice idea. However, no
amount of test cases is going to help with future requirements. A
well-planned design has some hope of doing that.
For that reason alone, I'd say that "if the test cases pass then the
software *is* correct" is rather short-sighted.
(There's also the minor issue of the test cases being wrong. I know I've
done that before now...)
Post a reply to this message