OK, so I have a media player.
After initialization, it should have a mode of "stopped" and a speed of 100%
and a volume of 100% and an empty error code and an elapsed time of zero.
So I write a unit test to create and initialize a player, then check
those values.
Now I want to make sure elapsed time moves forward while playing. So I
create a player and initialize it, tell it to play, wait 12 seconds, and
make sure I have about 10 seconds of elapsed time in the elapsed time counter.
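Concretely, the two tests might look like this (a sketch in Python's
unittest; the MediaPlayer class and all its attribute names are just my
assumptions, not a real API):

    import time
    import unittest

    from mediaplayer import MediaPlayer  # hypothetical module under test


    class TestMediaPlayer(unittest.TestCase):
        def test_initial_state(self):
            player = MediaPlayer()
            player.initialize()
            self.assertEqual(player.mode, "stopped")
            self.assertEqual(player.speed, 100)
            self.assertEqual(player.volume, 100)
            self.assertEqual(player.error_code, "")
            self.assertEqual(player.elapsed, 0)

        def test_elapsed_time_advances(self):
            player = MediaPlayer()
            player.initialize()
            player.play()
            time.sleep(12)  # wait 12 wall-clock seconds
            # "About 10 seconds" of playback: allow slack for startup latency.
            self.assertAlmostEqual(player.elapsed, 10, delta=2)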
The question is this: since all tests are supposed to be independent, should
I test that after initialization I have a speed of 100%, mode of stopped,
and all that yadda in the first test? Or can I assume that first test
worked, and just test that switching to play makes the mode go to "playing"
and the elapsed time counter increment?
What do people who write lots of unit tests suggest?
--
Darren New, San Diego CA, USA (PST)
Human nature dictates that toothpaste tubes spend
much longer being almost empty than almost full.
Darren New wrote:
> OK, so I have a media player.
>
> After initialization, it should have a mode of "stopped" and a speed of
> 100% and a volume of 100% and an empty error code and an elapsed time of
> zero.
>
> So I write a unit test to create and initialize a player, then check
> those values.
>
> Now I want to make sure elapsed time moves forward while playing. So I
> create a player and initialize it, tell it to play, wait 12 seconds, and
> make sure I have about 10 seconds of elapsed time in the elapsed time
> counter.
>
> The question is this: since all tests are supposed to be independent,
> should I test that after initialization I have a speed of 100%, mode of
> stopped, and all that yadda in the first test? Or can I assume that
> first test worked, and just test that switching to play makes the mode
> go to "playing" and the elapsed time counter increment?
>
> What do people who write lots of unit tests suggest?
My understanding of unit tests is that they are used to test just one
aspect of the software, independently. You then go on to write a system
test which tests the end-to-end process. That is followed by an
integration test, testing any links to other parts of the program.
Finally there is the UAT or User Acceptance Test where you hold your
breath as the customer tries to break it ;)
Integration testing may not be applicable in your case. I hope this helps.
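Roughly, the difference in code (a sketch, with every name made up):

    import unittest

    from mediaplayer import MediaPlayer  # hypothetical module under test


    class UnitLevel(unittest.TestCase):
        def test_volume_setter_clamps(self):
            # Unit test: one aspect of one component, in isolation.
            player = MediaPlayer()
            player.initialize()
            player.set_volume(150)
            self.assertEqual(player.volume, 100)


    class SystemLevel(unittest.TestCase):
        def test_load_and_play_end_to_end(self):
            # System test: drive the whole path the way a user would.
            player = MediaPlayer()
            player.initialize()
            player.load("sample.wmv")
            player.play()
            self.assertEqual(player.mode, "playing")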
As an aside it does not always work this way. Last week we were system
testing at the same time I was doing development. (What a way to run a
project!) Now we are about to start Integration Testing and some of the unit
tests have not been documented. (I hate project managers who never leave
their M$ Project and look at the real world.)
--
Best Regards,
Stephen
Stephen wrote:
> My understanding of unit tests is that they are used to test just one
> aspect of the software, independently.
Sure. But if you have one "aspect" that takes a dozen steps to set up, and
those steps might fail, do you check that each of those previous steps
worked (which means you're testing a bunch of stuff, possibly obscuring what
you really want to test), or do you assume they worked because you already
tested them (making the tests not independent)?
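The option I'm weighing looks like this sketch (same assumed MediaPlayer
API as before): the dozen steps go in setUp, unasserted, and each test
asserts only its own behavior:

    import unittest

    from mediaplayer import MediaPlayer  # hypothetical, as before


    class TestPlayback(unittest.TestCase):
        def setUp(self):
            # The setup steps run without assertions of their own; if one
            # throws, unittest reports an error (not a failure), which
            # already points at the setup rather than the behavior under test.
            self.player = MediaPlayer()
            self.player.initialize()
            self.player.load("sample.wmv")

        def test_play_sets_mode(self):
            # Only the behavior this test is about gets asserted.
            self.player.play()
            self.assertEqual(self.player.mode, "playing")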
--
Darren New, San Diego CA, USA (PST)
Human nature dictates that toothpaste tubes spend
much longer being almost empty than almost full.
Darren New wrote:
> Sure. But if you have one "aspect" that takes a dozen steps to set up,
Or, to be more clear, I have dozens of tests for my media player. Each one
depends on the file it's playing being present on disk. I have one test that
says "Yes, the WMV file is here" that runs first. Should I re-do that check
in every unit test? Pros/cons?
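To make it concrete, the alternative I'm imagining (a sketch; the path is
made up) is a once-per-module check that skips everything if the file is
missing, so a missing file shows up as one clear skip rather than dozens
of baffling playback failures:

    import os
    import unittest

    WMV_PATH = "testdata/sample.wmv"  # hypothetical fixture file


    def setUpModule():
        # Runs once before any test in this module; raising SkipTest here
        # skips the whole module.
        if not os.path.exists(WMV_PATH):
            raise unittest.SkipTest(WMV_PATH + " not found")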
--
Darren New, San Diego CA, USA (PST)
Human nature dictates that toothpaste tubes spend
much longer being almost empty than almost full.
Darren New wrote:
> Stephen wrote:
>> My understanding of unit tests is that they are used to test just
>> one aspect of the software, independently.
>
> Sure. But if you have one "aspect" that takes a dozen steps to set up,
> and those steps might fail, do you check that each of those previous
> steps worked (which means you're testing a bunch of stuff, possibly
> obscuring what you really want to test), or do you assume they worked
> because you already tested them (making the tests not independent)?
>
Are we having a “two countries separated by a common language” moment
here or is it me speaking fluent “Stephen”? :-)
What you want is to test the components separately under all the
conditions that you can think of, and when you are confident that they
work, you assume that they will work in the end-to-end test (ideally
written and tested by someone else). If that fails then you start fault
finding, OK, troubleshooting ;)
The quality of your unit tests is paramount in proceeding to the next
step, the system test.
For instance in SAP, I’ll write a functional spec for a developer to
write an upload interface, and I’ll also write a spec for a function
module to take that data and enter it into tables, either directly or via
a standard SAP transaction (which validates the data, somewhat). After
the developer unit tests the developments separately, I’ll rerun the
tests using my own data, including throwing bad data at it. Then I’ll
join the developments together and see what the output is for all the
various conditions I can think of. Later someone else (the customer) will
re-run similar tests with different data as an acceptance test.
You often find that an end-to-end test shows up problems that the unit
tests don’t.
Again I feel I may be speaking fluent “Stephen” :-)
--
Best Regards,
Stephen
Darren New wrote:
> Darren New wrote:
>> Sure. But if you have one "aspect" that takes a dozen steps to set up,
>
> Or, to be more clear, I have dozens of tests for my media player. Each
> one depends on the file it's playing being present on disk. I have one
> test that says "Yes, the WMV file is here" that runs first. Should I
> re-do that check in every unit test? Pros/cons?
>
A good example, and unless you can think of a reason for the first test
to pass and then the WMV file not to be there when the next test is run,
I would say no, do not re-run that check in subsequent unit tests. The
pros and cons are difficult for me to weigh, other than that you have to
trust some things, and you will test the end-to-end process in your
final test.
--
Best Regards,
Stephen
Stephen wrote:
> Are we having a “two countries separated by a common language” moment
> here or is it me speaking fluent “Stephen”? :-)
I don't know, but I'm talking about the contents of a specific unit test,
given that other unit tests have already passed or failed.
I know what I want to test. I'm saying that if it takes 15 steps to get to
the 16th step I want to test, and I already tested the 15 steps in another
test, do I need to re-test them in the 16th step?
If I want to test "media player elapsed-time counter stops incrementing and
stays at 20 seconds if I pause the media player after 20 seconds of
playing", do I also need to test "media player constructor works, media
player starts playing, elapsed time counter increases one second per second"
in the "pause" unit test?
Or, in your example, if your first unit test tests that what the developer
gave you in the upload is right, do you have to test that again in the
second test that submits it to the SAP upload interface?
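Concretely, the "pause" test I have in mind is something like this sketch
(same assumed MediaPlayer API), where everything before the pause is setup
I trust because other tests already cover it:

    import time
    import unittest

    from mediaplayer import MediaPlayer  # hypothetical, as before


    class TestPause(unittest.TestCase):
        def test_pause_freezes_elapsed_time(self):
            player = MediaPlayer()   # covered by the initialization test
            player.initialize()
            player.play()            # covered by the play/elapsed-time tests
            time.sleep(20)
            player.pause()
            frozen = player.elapsed
            time.sleep(5)
            # The only new assertion: elapsed time stays put while paused.
            self.assertEqual(player.elapsed, frozen)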
--
Darren New, San Diego CA, USA (PST)
Human nature dictates that toothpaste tubes spend
much longer being almost empty than almost full.
Darren New wrote:
> Stephen wrote:
>> Are we having a “two countries separated by a common language” moment
>> here or is it me speaking fluent “Stephen”? :-)
>
> I don't know, but I'm talking about the contents of a specific unit
> test, given that other unit tests have already passed or failed.
>
> I know what I want to test. I'm saying that if it takes 15 steps to get
> to the 16th step I want to test, and I already tested the 15 steps in
> another test, do I need to re-test them in the 16th step?
>
> If I want to test "media player elapsed-time counter stops incrementing
> and stays at 20 seconds if I pause the media player after 20 seconds of
> playing", do I also need to test "media player constructor works, media
> player starts playing, elapsed time counter increases one second per
> second" in the "pause" unit test?
>
> Or, in your example, if your first unit test tests that what the
> developer gave you in the upload is right, do you have to test that
> again in the second test that submits it to the SAP upload interface?
>
>
The answer is no, you don’t need to retest previously passed tests; that
is the whole idea of *unit* tests.
Sorry if I overcomplicated things.
--
Best Regards,
Stephen
Stephen wrote:
> The answer is no, you don’t need to retest previously passed tests;
> that is the whole idea of *unit* tests.
OK, then unit tests aren't "independent" in that sense. I guess the advice
is to prevent a unit test from relying on stuff left over from a previous
unit test, so you could in theory run them in any order *if* they all pass.
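In code, I take that to mean something like this sketch (same assumed
MediaPlayer API): each test builds a fresh player in setUp, so nothing
left over from one test can affect another, and the suite can run in any
order:

    import unittest

    from mediaplayer import MediaPlayer  # hypothetical, as before


    class TestModes(unittest.TestCase):
        def setUp(self):
            # Fresh state per test: no test depends on, or is broken by,
            # leftovers from a previous test.
            self.player = MediaPlayer()
            self.player.initialize()

        def test_play(self):
            self.player.play()
            self.assertEqual(self.player.mode, "playing")

        def test_stop_after_play(self):
            self.player.play()
            self.player.stop()
            self.assertEqual(self.player.mode, "stopped")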
--
Darren New, San Diego CA, USA (PST)
Human nature dictates that toothpaste tubes spend
much longer being almost empty than almost full.
Darren New wrote:
> Stephen wrote:
>> The answer is no, you don’t need to retest previously passed tests;
>> that is the whole idea of *unit* tests.
>
> OK, then unit tests aren't "independent" in that sense. I guess the
> advice is to prevent a unit test from relying on stuff left over from a
> previous unit test,
That’s right. For each unit test you can, and generally must, prepare the
data the unit under test needs. Sometimes preparing that data is the only
way you can get the error conditions you want to check for.
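For example (a sketch; the error-reporting behavior is an assumption on my
part), feeding the unit deliberately corrupt data is often the only
practical way to reach its error path:

    import unittest

    from mediaplayer import MediaPlayer  # hypothetical, as before


    class TestErrorPaths(unittest.TestCase):
        def test_load_reports_error_on_corrupt_file(self):
            # Prepare the bad data this test needs, right here in the test.
            with open("corrupt.wmv", "wb") as f:
                f.write(b"\x00\x01 not a real WMV header")
            player = MediaPlayer()
            player.initialize()
            player.load("corrupt.wmv")
            self.assertNotEqual(player.error_code, "")  # error path hit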
> so you could in theory run them in any order *if*
> they all pass.
>
You can run them in any order, and since they are independent tests, they
do not rely on other tests passing or failing. It is hard to separate
them in your mind from the system test, where the data created in one
unit test is used as input to a following test.
--
Best Regards,
Stephen