Tuesday, December 18, 2012

Mysteries and Puzzles

National security expert Gregory Treverton (2007) wrote the following for Smithsonian Magazine:

"Even when you can't find the right answer, you know it exists. Puzzles can be solved; they have answers.

But a mystery offers no such comfort. It poses a question that has no definitive answer because the answer is contingent; it depends on a future interaction of many factors, known and unknown."


The article is worth reading in full if you haven't already done so (see references).

And I think within this definition lies a great deal to say about attitudes towards software testing. As a tester you understand the infinite test space and the inescapable uncertainty about the quality of the systems you test, but I believe a lot of the confusion, usually coming from people in non-testing roles, stems from a misunderstanding of the difference between determining good perceived quality through testing as puzzle solving and as mystery investigation.

The essential differentiating factor, as I see it, between a puzzle and a mystery is its relationship to the quantity of information at hand. An unsolvable puzzle is one that simply requires more information to be added. Or as Malcolm Gladwell (2009, p.160) puts it in his book What the Dog Saw: "A puzzle grows simpler with the addition of each new piece of information". A mystery cannot be solved purely through the addition of more information; in fact, as we increase our focus on collecting information, any further information we gain will consist mostly of noise in which more problems can hide, making the issue not necessarily worse, but necessarily bigger.

A bug found in production by a user is a puzzle: we can identify the problem and immediately see what the desired behaviour is. We can trace it back, identify its root cause (or near root cause) and solve it. Preventing bugs from reaching production (essentially: software testing) is a mystery; it's a cognitive process of questioning and re-framing with no (as yet) known problems and confusion over desired behaviour. We have nothing from which to trace back, and we're executing the root causes (or near root causes) of problems that we haven't found yet.

This is one reason why we can only make predictive insights into user-experienced software quality; true quality is unknowable. In business we have an understandable, obsessive focus on solutions, and it's easy to see why, but treating software testing as a solution is risky if we identify the wrong problem to be solved. Our biggest problem is not a lack of information about a product, but too much of it. We have documentation, implementations, previous experience, relationships to the product, relationships to the software provider and its employees, the structure and culture of the company, the users, the customer, and I'm sure you can think of more. The problem we have is identifying what is useful, questioning both the product and the project. It's not about learning and applying a requirements document (as some developers do), it's about understanding and evaluating it; interpreting data into information and refining information into heuristic oracles. This is the investigation into the mystery of software bugs. To be better at software testing it isn't usually more initial information we need, but better analysis of the information we have (meaning we gather better information, not just more).
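To make "heuristic oracle" a little more concrete, here is a minimal, entirely hypothetical sketch in Python, for an imagined search feature; the checks are my own illustrative assumptions, not anything from the references. The point is that a heuristic oracle doesn't assert the one correct answer; it encodes fallible expectations distilled from documentation, experience and comparable products, and raises suspicions for a human to investigate:

    # A hypothetical heuristic oracle for a search feature. It returns
    # suspicions worth investigating, not a pass/fail verdict.
    def heuristic_oracle(query, results):
        suspicions = []
        if query.strip() and not results:
            suspicions.append("non-empty query returned no results")
        if len(results) != len(set(results)):
            suspicions.append("duplicate results returned")
        if any(query.lower() not in r.lower() for r in results):
            suspicions.append("a result does not mention the query term")
        # An empty list means "nothing suspicious", not "correct".
        return suspicions

    print(heuristic_oracle("testing", ["Testing 101", "testing tools", "Gardening"]))
    # ['a result does not mention the query term'] -> worth a look, but
    # possibly fine (a synonym match, say); the oracle is heuristic.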

This is exactly why testing is a human-centric process and why it requires application of thought and engagement. As Gladwell (2009, p.166) states: "Puzzles are 'transmitter-dependent'; they turn on what we are told. Mysteries are 'receiver-dependent'; they turn on the skills of the listener". And it's our skills in challenging what we are told, and more generally as information listeners, that make us good software testers. We are in the practice of a kind of Bayesian analysis of information, adjusting our subjective beliefs to rationally account for new evidence.
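As an illustration of that Bayesian framing, here is a minimal sketch, again in Python and with entirely hypothetical numbers: a tester's prior belief that an area hides an important bug, revised with Bayes' rule after each passing test run. Notice that the belief shrinks but never reaches zero, which is the unknowability of true quality in miniature:

    # Bayes' rule: P(bug | pass) = P(pass | bug) * P(bug) / P(pass).
    # All probabilities here are illustrative guesses, not measured rates.
    def update_belief(prior, p_pass_given_bug, p_pass_given_no_bug):
        p_pass = p_pass_given_bug * prior + p_pass_given_no_bug * (1 - prior)
        return p_pass_given_bug * prior / p_pass

    belief = 0.30  # prior: a 30% hunch that this area hides an important bug
    for run in range(1, 6):
        # A buggy area can still pass: tests only sample the infinite test space.
        belief = update_belief(belief, p_pass_given_bug=0.7, p_pass_given_no_bug=0.95)
        print(f"After passing run {run}: P(bug) = {belief:.3f}")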

The logical solution to the puzzle of software testing is simply the combination of information collection and persistence. The mystery of software testing requires experience, insight and investigation. Puzzles require us to think. Mysteries require us also to think about the thinking. This is why bug fix verification is easy, but boring: it's no longer a mystery to solve, but a set of puzzles... but if we continue to apply the mystery of software testing through Rapid and Exploratory techniques, such as "branching and backtracking" (Bach, 2006, p.20) through areas of opportunity, then when we do bug fix verification we immediately add value to the testing in higher-risk areas (higher risk due to the recent code changes needed to fix the bug).

Business has become good at solving puzzles, but seems to ignore or avoid mysteries; as Philip Herr (2011) says: "And if we have been especially successful at solving puzzles, we may be tempted to define all business problems as such. (As the saying goes, 'If all you have is a hammer, everything begins to look like a nail.')". Our over-reliance on problem-solving skills in business, combined with a desire to reduce everything to metrics and simple, usable facts, has led to a misunderstanding of the complexities and puzzle-resistant elements of software testing, and therefore of how testing can best be used to add value to a project. If you've ever worked in a company that views testing as a "necessary evil" or an exercise in compliance then you'll understand the point I'm making.

Dealing with useful information on and around a product that is mixed together with greater quantities of useless information, misleading information, incorrect information, and observation posing as objective fact requires us to treat finding important bugs quickly as a mystery, not a puzzle. As Gladwell (2009, pp.168-169) says, "If you can't find the truth in a mystery - even a mystery shrouded in propaganda - it's not just the fault of the propagandist. It's your fault as well."


References:

Bach, J., 2006. Breaking Down (and building up) Exploratory Testing Skill. [online] Available at: <http://www.quardev.com/content/whitepapers/ET_dynamics_Google_3_06.pdf> [Accessed 16 December 2012].

Gladwell, M., 2009. What the Dog Saw: and Other Adventures. [e-book] Penguin UK. Available at: <http://play.google.co.uk> [Accessed 16 December 2012].

Herr, P., 2011. Solving Puzzles Delivers Answers; Solving Mysteries Delivers Insights. Brand News, [online] Available at: <http://www.millwardbrown.com/Libraries/MB_POV_Downloads/MillwardBrown_POV_Solving_Puzzles_Delivers_Answers.sflb.ashx> [Accessed 16 December 2012].

Treverton, G., 2007. Risks and Riddles. Smithsonian Magazine, [online] Available at: <http://www.smithsonianmag.com/people-places/presence_puzzle.html> [Accessed 16 December 2012].
