Every work on unit testing I've read basically says you should run all the
unit tests after each change, or certainly at least before each check-in.
However, those same texts say that unit testing should only test individual
units, and everything else should be mocked.
Why would I want to run the unit tests for the XML parser after making
changes to the color picker dialog box? Or, more generally, why would I
want to run unit tests that are only executing code that hasn't changed
since last time the unit tests passed?
(Note I'm talking specifically of unit tests, not automated test suites in
general.)
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 08/06/2011 22:01, Darren New wrote:
> Every work on unit testing I've read basically says you should run all
> the unit tests after each change, or certainly at least before each
> check-in.
>
> However, those same texts say that unit testing should only test
> individual units, and everything else should be mocked.
>
> Why would I want to run the unit tests for the XML parser after making
> changes to the color picker dialog box? Or, more generally, why would I
> want to run unit tests that are only executing code that hasn't changed
> since last time the unit tests passed?
>
> (Note I'm talking specifically of unit tests, not automated test suites
> in general.)
>
If you had designed your software in blocks, you would have one set of unit
tests per block, on a 1-for-1 ratio, and only the integration phase would
know about the dependencies between blocks and how to arrange them.
When they talk about running the unit tests again, that means only the
updated block(s), not the whole set.
*BUT*
The main issue is that, to do this correctly, if a block X relies on
blocks A, B & C, the unit tests for X need replacement plug-ins for
A, B & C (which emulate the expected behaviour).
Because it might be cheaper to use A, B & C directly, you end up with a
contaminated system: when updating block B, you now also have to run the
unit tests for X.
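The difference between the two situations can be sketched in Python with the standard library's `unittest.mock`. `BlockX` and its `lookup` dependency are invented names for illustration, not anything from the thread; the point is only that when the dependency is injected and replaced by a stand-in, X's test runs none of B's real code:

```python
from unittest.mock import Mock

# Hypothetical block X; its dependency (block B) is injected,
# so a test can substitute a stand-in for the real thing.
class BlockX:
    def __init__(self, b):
        self.b = b

    def double_lookup(self, key):
        return self.b.lookup(key) * 2

# Unit test for X with B replaced by a mock emulating B's expected
# behaviour. Since none of B's real code runs here, a change to B
# does not, in principle, invalidate this test.
fake_b = Mock()
fake_b.lookup.return_value = 21
x = BlockX(fake_b)
assert x.double_lookup("answer") == 42
fake_b.lookup.assert_called_once_with("answer")
```

If X instead constructed a real B internally, this same test would exercise B's code too, which is the "contamination" described above.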
And because the architecture is poor at documentation and enforced
isolation (you can never be sure... unless you are using a fascist
language in its most fascist mode, like C++ without "using namespace
std" and other restrictions (no globals, no singleton resources...)... or
Ada... but certainly not Java or PHP or Perl (or VB)), it is simpler
(dumber) to rerun all the unit tests of all blocks.
It's all in the budget: the aeroplane industry required the test
plug-ins (and wrote specifications even for testing the plug-ins!). That
has an expensive initial cost, but it allows each element and subsystem
to be tested in a simulator. The software industry often goes the
house-builder way instead: fast, cheap to produce, fix later if caught
(and when all you have is a hammer, everything looks like a nail). With
armies of monkey coders, the usual way is to require running the whole
set of unit tests (brute force instead of an undocumented (and
unchecked) dependency graph). They even make tools to automate that.
(With the same quality!)
On 08/06/2011 9:01 PM, Darren New wrote:
> Every work on unit testing I've read basically says you should run all
> the unit tests after each change, or certainly at least before each
> check-in.
>
> However, those same texts say that unit testing should only test
> individual units, and everything else should be mocked.
>
> Why would I want to run the unit tests for the XML parser after making
> changes to the color picker dialog box? Or, more generally, why would I
> want to run unit tests that are only executing code that hasn't changed
> since last time the unit tests passed?
>
> (Note I'm talking specifically of unit tests, not automated test suites
> in general.)
>
It doesn't make sense to me. If you're going to run all the unit tests
after one change then you might as well call it a system test.
It sounds as if someone is trying to cover their back. IMO it is not
feasible to run all UTs after every change; you would never get the
implementation finished.
But then I've never read anything on the theory, although I've done 15
full implementations. As long as you do unit, system, integration and
possibly regression testing before UAT, you will be OK. IMO.
--
Regards
Stephen
On 6/8/2011 13:27, Le_Forgeron wrote:
> When they talk about running again the unit tests, it is only for the
> updated block(s), not the whole set.
I have never seen that recommendation. It's begrudgingly accepted as perhaps
necessary if your unit tests take too long to run.
> *BUT*
>
> The main issue is that to perform correctly, if a block X relies on the
> blocks A, B & C, the unit tests for X would need replacement plug-ins
> for A, B & C (which would emulate the expected behaviour).
Yep. The real problem with *this* is that you might update A and its tests,
then run its unit tests and all goes well, and not realize you had broken X.
It never really seemed a good idea to me to be mocking stuff that's
closely integrated without proper API documentation.
(I.e., mocking the config system that tells you where to load the XSD from
when testing your XML parser would seem to be reasonable, given that the
config system is probably well-isolated. Mocking the compiler's lexer while
testing the parser or mocking the parser while testing the code generator
never seemed to make much sense to me.)
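That dividing line can be sketched in Python: mock the well-isolated config system, but parse real XML rather than mocking the parser's close collaborators. `SchemaAwareParser` and `xsd_path` are hypothetical names invented for this sketch:

```python
from unittest.mock import Mock
import xml.etree.ElementTree as ET

# Hypothetical parser that asks a config system where its XSD lives.
class SchemaAwareParser:
    def __init__(self, config):
        self.config = config

    def parse(self, text):
        # A real implementation would load and apply the schema;
        # here we only record where it would have come from.
        self.xsd_source = self.config.xsd_path()
        return ET.fromstring(text)

# The config system is well-isolated, so mocking it is reasonable...
config = Mock()
config.xsd_path.return_value = "/fake/schema.xsd"

# ...while the XML handling itself is exercised for real.
parser = SchemaAwareParser(config)
root = parser.parse("<color name='red'/>")
assert root.get("name") == "red"
assert parser.xsd_source == "/fake/schema.xsd"
```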
> Because it might be cheaper to use directly A, B & C, you end up with a
> contaminated system: when updating block B, you now have to also run the
> unit test for X.
Then that's integration testing.
> And because the architecture sucks about documentation and enforced
> isolation (you can never be sure...
OK. So basically it's "run all the unit tests because there might be
integration tests in with the unit tests."
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 08.06.2011 22:01, Darren New wrote:
> Every work on unit testing I've read basically says you should run all
> the unit tests after each change, or certainly at least before each
> check-in.
>
> However, those same texts say that unit testing should only test
> individual units, and everything else should be mocked.
>
> Why would I want to run the unit tests for the XML parser after making
> changes to the color picker dialog box? Or, more generally, why would I
> want to run unit tests that are only executing code that hasn't changed
> since last time the unit tests passed?
(1) You only run the unit tests for those units you actually changed.
But I guess you knew that.
(2a) If you /know/ beyond doubt that your code change in part X of a
unit can't affect part Y of the same unit, then X and Y should probably
be separate units.
(2b) If on the other hand you do /not/ know beyond doubt whether your
code change in part X of a unit might affect part Y of the same unit,
then obviously you should test part Y as well to be sure. But I guess
that's no news to you either.
Darren New <dne### [at] sanrrcom> wrote:
> Why would I want to run the unit tests for the XML parser after making
> changes to the color picker dialog box? Or, more generally, why would I
> want to run unit tests that are only executing code that hasn't changed
> since last time the unit tests passed?
No amount of unit tests can catch all possible bugs. You may have a
thousand unit tests for a small module, yet there may still be bugs there.
In some cases these bugs *might* be triggered by seemingly unrelated
changes somewhere else in the program, even if there is no connection
between the two modules (might be less likely in "safe" languages, but
probably not impossible).
Unless running the unit tests takes hours, does it hurt to run them?
--
- Warp
On 6/9/2011 4:03, clipka wrote:
> (1) You only run the unit tests for those units you actually changed. But I
> guess you knew that.
Well, that's what I figured out. But everything I *read* says stuff like
"run all unit tests before you check anything in." I've never heard a
recommendation that said it's *good* to run partial unit tests.
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 6/9/2011 7:56, Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> Why would I want to run the unit tests for the XML parser after making
>> changes to the color picker dialog box? Or, more generally, why would I
>> want to run unit tests that are only executing code that hasn't changed
>> since last time the unit tests passed?
>
> No amount of unit tests can catch all possible bugs. You may have a
> thousand unit tests for a small module, yet there may still be bugs there.
>
> In some cases these bugs *might* be triggered by seemingly unrelated
> changes somewhere else in the program, even if there is no connection
> between the two modules (might be less likely in "safe" languages, but
> probably not impossible).
I'd agree with this, except that unit tests are supposed to be independent.
(Indeed, that's my main complaint about relying on unit tests to tell you if
something is broken.) So if there's a bug in the XML parser, say, it
shouldn't even be running in the same process as the unit tests for the
color picker.
Now, unsafe languages, that's interesting. But I don't think that's a unit
test thing as much as it's a test-the-hell-out-of-everything thing. Sure,
the more you test code in different circumstances, the more you're likely to
find uncaught violations of the language specification. I guess that makes
sense, tho. That's the first good reason I've heard. :-)
And yes, something like not cleaning up a global might get caught in unit
tests even in a safe language. Indeed, there are several places in XNA where
static variables are used that should be instance variables. For example, the
fact that calling "Exit()" during a game not only exits that game at the end
of the current frame, but also all other future games instantiated in the
same process, makes it very difficult to automate an entire set of tests in
one process. :-)
> Unless running the unit tests takes hours, does it hurt to run them?
No. But I've seen recommendations that one mocks everything *because* that
makes the unit testing fast enough to test everything every time, as if this
gave a clear benefit over just testing what you changed.
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."
On 09/06/2011 17:11, Darren New wrote:
>
> No. But I've seen recommendations that one mocks everything *because*
> that makes the unit testing fast enough to test everything every time,
> as if this gave a clear benefit over just testing what you changed.
Well, the real issue with unit tests is: does the specification know
what to test? Really?
Most unit tests are just "I checked that 3+3 still gives 6", which is
only a basic test of nominal functionality. They usually fall short of
the coverage a real unit test needs, for lack of a specification of all
the possible issues.
For instance, in Java, you can redefine the value of 3 and 6. Which
means that if your patch keeps 3+3 -> 6, even if 3 now has the value 14
and 6 the value 28, the unit tests would still pass.
Had the unit tests checked some other intrinsic properties of the
module, they might have failed.
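The contrast between a nominal "3+3 still gives 6" check and checks on intrinsic properties can be shown with a toy Python sketch, where `add` stands in for the module under test:

```python
# Stand-in for the module under test.
def add(a, b):
    return a + b

# Nominal test: one hand-picked case, the "3+3 still gives 6" style.
assert add(3, 3) == 6

# Intrinsic properties: commutativity and the additive identity,
# checked over a whole range of inputs rather than one example.
for a in range(-50, 50):
    for b in range(-50, 50):
        assert add(a, b) == add(b, a)
        assert add(a, 0) == a
```

A buggy `add` can easily satisfy the single nominal case while violating one of the properties; that is the gap between "the example still passes" and "the module still behaves".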
And to make matters worse: a module often ends up at the top of a huge
pyramid of other modules (at best, when there is at least an ordering),
so emulating the lower modules is only practical for the very few
modules at the base. The higher you go, the more expensive it becomes to
emulate, at specification time (which is also when you should be
specifying the tests), the interfaces needed to perform isolated unit
tests.
So they end up with a brute-force approach.
--
A good Manager will take you
through the forest, no matter what.
A Leader will take time to climb on a
Tree and say 'This is the wrong forest'.
On 6/10/2011 1:18, Le_Forgeron wrote:
> For instance, in Java, you can redefine the value of 3 and 6.
Of all the languages out there, I'm pretty sure Java isn't one of those. :-)
How do you redefine 3 to be 14?
> Had the unit tests checked some other intrinsic properties
> of the module, they might have failed.
Yeah. I much prefer pre- and post-conditions to be documented, etc. At least
you can test them.
And you still wind up only testing what you think to test. Clearly, if MS
had thought to test "what happens if we try to run two consecutive games in
the same process", they would have noticed half a dozen things fail to work
right the second time around. But hey, all the unit tests passed. :-)
> And to make matters worse: a module often ends up at the top of a huge
> pyramid of other modules (at best, when there is at least an ordering),
> so emulating the other lower modules is only practical for the very few
> lower modules at the base.
Well, the mock frameworks seem to do an OK job, as long as you're willing to
just specify what calls you expect the unit test to make. If you try to unit
test something actually complex, it winds up just being equivalent to "run
the program and compare the output with the previous run."
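The "specify what calls you expect" style looks roughly like this with Python's `unittest.mock`; `save_report` and its `writer` collaborator are invented for illustration:

```python
from unittest.mock import Mock, call

# Hypothetical unit under test: writes lines through a collaborator.
def save_report(lines, writer):
    writer.open("report.txt")
    for line in lines:
        writer.write(line)
    writer.close()

# The mock records every call made on it...
writer = Mock()
save_report(["a", "b"], writer)

# ...and the test then asserts the expected call sequence. The test
# is effectively a transcript of the interaction you expect, which
# is exactly the limitation: change the interaction, rewrite the test.
writer.assert_has_calls(
    [call.open("report.txt"), call.write("a"), call.write("b"), call.close()]
)
```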
Unit tests really only work well to test your invariants, your pre- and
post-conditions, on units that are fairly stand-alone. And it only works if
you have an objective answer as to what the right result should be. You
can't unit test for "this indeed looks like water coming out of a fountain."
--
Darren New, San Diego CA, USA (PST)
"Coding without comments is like
driving without turn signals."