What test documentation does your team have? How can you improve it?
That’s a really interesting question.
I’ve talked earlier in this series about not valuing documentation very much, so I’ve been giving this some thought.
The thing I’m most concerned about is how I document the results of my testing.
I cringe when I look back at a ticket I’ve closed and see a vague one-line comment, or even just a green tick icon with no context.
This is usually the result of time pressure, but that shouldn’t be an excuse for shoddy craftsmanship, eh!
One thing I’ve started doing is trying to avoid unspecific phrases like “works as expected” (expected by whom?) or “is correct” (what does correct look like?).
There's not much more to say about that - I'm just realising that it's going to be much nicer for someone who reads my work if I can be more descriptive.
Another thing I’m finding useful is adapting “specification by example” to document test results.
(Specification by Example is a requirements definition process - there’s a great book on it by Gojko Adzic.)
This is useful because it makes it really easy to read what values were entered, what result came out, and whether that’s a result we accept.
Here's a simple example of documenting my test results using this method, for a payments screen:
A business has two maximum payment limits: $2000 for Credit Card, and $1000 for Bank Payments.
It also has a minimum overall limit of $5.
Excuse my shoddy handwriting. These are the examples, and the final (status) column is what I expect to see.
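To give a rough idea of what those examples capture, here’s a minimal sketch of the same payment limits as an executable example table. The limits ($2000 credit card, $1000 bank payment, $5 minimum) come from the scenario above; the function name, method labels, and boundary values are my own illustration, not the actual system under test.

```python
# Hypothetical sketch of the payment-limit rules as executable examples.
# Limits are from the scenario; names and structure are illustrative only.

CARD_MAX = 2000     # maximum credit card payment
BANK_MAX = 1000     # maximum bank payment
OVERALL_MIN = 5     # minimum payment for any method

def payment_accepted(method, amount):
    """Return True if a payment passes the limit checks."""
    if amount < OVERALL_MIN:
        return False
    if method == "credit_card":
        return amount <= CARD_MAX
    if method == "bank":
        return amount <= BANK_MAX
    raise ValueError(f"unknown payment method: {method}")

# Each row: (method, amount, expected status) - boundaries on each limit.
EXAMPLES = [
    ("credit_card", 2000, True),   # at the credit-card maximum
    ("credit_card", 2001, False),  # just over the credit-card maximum
    ("bank",        1000, True),   # at the bank-payment maximum
    ("bank",        1001, False),  # just over the bank-payment maximum
    ("credit_card",    5, True),   # at the overall minimum
    ("credit_card",    4, False),  # just under the overall minimum
]

for method, amount, expected in EXAMPLES:
    assert payment_accepted(method, amount) == expected
```

The nice thing about this shape is that the example table reads the same way whether it lives in a ticket comment, a spreadsheet, or a parametrized test.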
So, that’s the sort of thing I’m doing. I’m still experimenting - it’s a work in progress!
There are (potentially) a couple of things wrong with, or missing from, my example - keen to hear what you think and whether you can spot them!
Thanks for reading!