I like this. I’ve seen most of them (and have probably been guilty of a few a while ago…)
I suggested a couple of others…
How about the ‘Stealth Bomber’? This is probably an extreme extension of the Hidden Dependency. I’ve seen a few tests that would simply ‘never’ fail when run interactively, debugged, or included in a full daytime test run. When a supposed ‘fix’ was implemented, it would even fool the developer into feeling pleased with themselves by passing for a day or two before bombing again in an overnight run. The logic wouldn’t show anything untoward, and it would continue to bomb every few days without warning or any trace of ‘why’. More than likely a combination of the usual no-no’s: execution-sequence-dependent, date/time-dependent, database-dependent, context-dependent etc.
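A minimal sketch of the time-of-day flavour of this, in Python for illustration (the function and names are hypothetical, not from any real codebase). The hidden dependency is the default to wall-clock time; pinning ‘now’ makes the test behave the same at 3pm or 3am:

```python
from datetime import datetime

def cache_is_fresh(last_refresh, now=None):
    # Hypothetical helper: data refreshed within the last 8 hours is 'fresh'.
    now = now or datetime.now()  # the hidden dependency on wall-clock time
    return (now - last_refresh).total_seconds() < 8 * 3600

# Stealth Bomber version: passes whenever the suite runs within 8 hours of
# a daytime refresh (i.e. every interactive run), but bombs overnight.
# assert cache_is_fresh(some_refresh_time)

# Deterministic version: the test supplies its own 'now'.
refresh = datetime(2006, 6, 1, 9, 0)
assert cache_is_fresh(refresh, now=datetime(2006, 6, 1, 15, 0))       # daytime run
assert not cache_is_fresh(refresh, now=datetime(2006, 6, 2, 3, 0))    # overnight run
```

The fix is the same whatever the framework: anything the code reads from its environment (clock, sequence, database state) should be injectable so the test can control it.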
There’s probably another variation – maybe the ‘Blue Moon’: a test that’s specifically dependent on the current date, and fails as a result of things like public holidays, leap years, weekends, five-week months etc. This one is again most likely guilty of not setting up its own data.
What next – COBOLUnit? I haven’t tried it yet, but it could be a good tool for ‘self-diagnostic’ tests on servers…
This excellent post from Jake Lawlor is a great practical guide to applying agile processes in the real world.
This isn’t new thinking by any means, but when you try to coach others and improve your own processes, you find surprisingly few practical reference points for applying agile processes in actual organisations.
This is a re-hash of a post I wrote and ‘lost’ a while ago after reading Charlie Poole’s first blog entry (from September 05), ‘What’s a test worth?‘ – I found a hard copy this morning.
It occurred to me (originally) that we tend never to remove unit tests, out of some strange and irrational fear that we should only ever move forwards with tests, and that the ‘rainy day’ tests should be retained as a ‘just in case’ safety net. All this does, of course, is water down your test library, for a number of reasons:
- The test library takes longer to run than is either desired or required
- Tests exist whose purpose no-one is quite sure about, and so they are more difficult to maintain and fix when something causes them to fail
- The test library cannot possibly be well defined and categorised, because of the previous point, leading to more developer and tester confusion
I therefore tried to give myself a small number of practical rules to follow when adding or maintaining unit tests. Refactoring applies just as much to unit tests as to production code (and I don’t just mean the changes that stop your build from breaking when you change your functionality).
The question is: ‘What characteristics should a test display in order to avoid being deleted?’
- If the test covers unique, specific functionality (not covered by any other test) in a contained and specific way, sets up its own data and tears it down, makes a number of useful assertions, and tests logic – ‘not’ data – it should stay
- If the test duplicates other coverage but also covers something else at a higher level – i.e. it’s more of a system test – it may still be of some use. If the new functionality is at the same logical level as that already covered, that’s probably an indicator that some refactoring should be undertaken so the functionality can be unit-tested specifically
- We’re clutching at straws now, but if the test does ‘anything’ useful at all (that isn’t done in another test), it may still be a good ‘catcher’ for some other high-level scenarios – in this case you’d probably want to re-categorise the test, at the very least
If you can’t place the test in any of the three categories above, then you need to do one or more of the following:
- Remove the test
- Refactor the functionality you’re testing
- Improve the test so that it has a specific and unique purpose
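As a sketch of the first ‘keeper’ characteristic – a test that sets up its own data, tears it down, makes several useful assertions, and tests logic rather than data – something along these lines, in Python with illustrative names (the calculator and its rules are hypothetical):

```python
import unittest

def simple_interest(balance, rate):
    # Hypothetical function under test.
    if rate < 0:
        raise ValueError("negative rate")
    return balance * rate

class InterestCalculatorTest(unittest.TestCase):
    """Covers one specific piece of logic, and owns its own data."""

    def setUp(self):
        # The test sets up its own data rather than leaning on shared fixtures.
        self.balances = {"acct-1": 1000.0, "acct-2": 0.0}

    def tearDown(self):
        # ...and clears up after itself.
        self.balances.clear()

    def test_simple_interest(self):
        # Several useful assertions on one behaviour; logic, not data.
        self.assertAlmostEqual(simple_interest(self.balances["acct-1"], 0.05), 50.0)
        self.assertAlmostEqual(simple_interest(self.balances["acct-2"], 0.05), 0.0)
        self.assertRaises(ValueError, simple_interest, 100.0, -0.01)
```

Run with `unittest.main()` as usual. The point isn’t the framework (the same shape applies in NUnit, JUnit or anything else): a test like this has an obvious, unique purpose, so no-one has to guess whether it’s safe to delete.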
I’ll try to add to and refine this over time, but for now, that’s my starting point.
This new article is more of a journey through unit testing a console app than anything else – but it might prove useful to people…