Tuesday, December 18, 2012

Mysteries and Puzzles

National security expert Gregory Treverton (2007) wrote the following for Smithsonian Magazine:

"Even when you can't find the right answer, you know it exists. Puzzles can be solved; they have answers.

But a mystery offers no such comfort. It poses a question that has no definitive answer because the answer is contingent; it depends on a future interaction of many factors, known and unknown."


The article is worth reading in full if you haven't already done so (see references).

And I think this distinction says a great deal about attitudes towards software testing. As a tester you understand the infinite test space and the inescapable uncertainty about the quality of the systems you test, but I believe a lot of confusion, usually from people in non-testing roles, stems from misunderstanding the difference between treating testing as puzzle solving and treating it as mystery investigation.

The essential differentiating factor, as I see it, between a puzzle and a mystery is its relationship to the quantity of information at hand. An unsolvable puzzle is one that simply requires more information to be added. Or as Malcolm Gladwell (2009, pp.160) puts it in his book What the Dog Saw: "A puzzle grows simpler with the addition of each new piece of information". A mystery cannot be solved purely through the addition of more information - in fact, as we focus harder on collecting information, most of what we gather will be noise in which more problems can hide, making the issue not necessarily worse, but necessarily bigger.

A bug found in production by a user is a puzzle: we can identify the problem and immediately see what the desired behaviour is. We can trace it back, identify its root cause (or near root cause) and solve it. Preventing bugs from going into production (basically: software testing) is a mystery: it's a cognitive process of questioning and re-framing, with no (as yet) known problems and confusion over desired behaviour. We have nothing from which to trace back, and we're executing the root causes (or near root causes) of problems that we haven't found yet.

This is one reason that we can only make predictive insights into user-experienced software quality; true quality is unknowable. Business has an understandable, obsessive focus on solutions, and it's easy to see why, but treating software testing as a solution is risky if we identify the wrong problem to be solved. Our biggest problem is not a lack of information about a product, but too much. We have documentation, implementations, previous experience, relationships to the product, relationships to the software provider and its employees, the structure and culture of the company, the users, the customer, and I'm sure you can think of more. The problem we have is identifying what is useful, questioning both the product and the project. It's not about learning and applying a requirements document (as some developers do), it's about understanding and evaluating it; interpreting data into information and refining information into heuristic oracles. The investigation into the mystery of software bugs. To be better at software testing it isn't usually more initial information we need, but better analysis of the information we have (meaning we gather better information, not just more).

This is exactly why testing is a human-centric process and why it requires the application of thought and engagement. As Gladwell (2009, pp.166) states: "Puzzles are 'transmitter-dependent'; they turn on what we are told. Mysteries are 'receiver-dependent'; they turn on the skills of the listener". And it's our skill in challenging what we are told, and more generally as listeners to information, that makes us good software testers. We are in the practice of a kind of Bayesian analysis of information, adjusting our subjective beliefs to rationally account for new evidence.
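To make that Bayesian framing concrete, here's a toy sketch. The hypothesis and every probability in it are invented purely for illustration; the point is only the shape of the updating, not the numbers.

```python
# A toy illustration of Bayesian belief updating, not a testing tool.
# All probabilities below are invented purely for the example.

def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothesis: "this feature area hides an important bug".
belief = 0.30  # prior, based on experience with similar features

# Evidence 1: a quick exploratory session finds nothing suspicious.
# A clean session is more likely if the area is healthy, so belief drops.
belief = update(belief, likelihood_if_true=0.60, likelihood_if_false=0.90)

# Evidence 2: a developer mentions the module was rewritten under deadline pressure.
# That remark is more likely if the area really is risky, so belief rises.
belief = update(belief, likelihood_if_true=0.70, likelihood_if_false=0.30)

print(f"Belief that the area hides an important bug: {belief:.2f}")
```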

The logical solution to the puzzle of software testing is simply the combination of information collection and persistence. The mystery of software testing requires experience, insight and investigation. Puzzles require us to think. Mysteries require us also to think about the thinking. This is why bug fix verification is easy, but boring. It's no longer a mystery to solve, but a set of puzzles... but if we continue to apply the mystery of software testing through Rapid and Exploratory techniques such as "branching and backtracking" (Bach, 2006, pp.20) through areas of opportunity, then when we do bug fix verification we immediately add value to the testing in higher-risk areas (higher risk because of the recent code changes needed to fix the bug).

Business has become good at solving puzzles, but seems to ignore or avoid mysteries; as Philip Herr (2011) says: "And if we have been especially successful at solving puzzles, we may be tempted to define all business problems as such. (As the saying goes, “If all you have is a hammer, everything begins to look like a nail.”)". Our over-reliance on problem-solving skills in business, combined with a desire to reduce everything to metrics and simple, usable facts, has led to a misunderstanding of the complexities and puzzle-resistant elements of software testing, and therefore of how testing can best be used to add value to a project. If you've ever worked in a company that views testing as a "necessary evil" or an exercise in compliance then you'll understand the point I'm making.

Dealing with useful information on and around a product mixed together with greater quantities of useless information, misleading information, incorrect information, and observation posing as objective fact requires us to treat finding important bugs quickly as a mystery, not a puzzle. As Gladwell (2009, pp.168-169) says, "If you can't find the truth in a mystery - even a mystery shrouded in propaganda - it's not just the fault of the propagandist. It's your fault as well."


References:

Bach, J., 2006. Breaking Down (and building up) Exploratory Testing Skill. [online] Available at: <http://www.quardev.com/content/whitepapers/ET_dynamics_Google_3_06.pdf> [Accessed 16 December 2012]

Gladwell, M., 2009. What the Dog Saw: and Other Adventures. [e-book]. Penguin UK. Available at: play.google.co.uk <http://play.google.co.uk> [Accessed 16 December 2012].

Herr, P., 2011. Solving Puzzles Delivers Answers; Solving Mysteries Delivers Insights. Brand News, [online] Available at: <http://www.millwardbrown.com/Libraries/MB_POV_Downloads/MillwardBrown_POV_Solving_Puzzles_Delivers_Answers.sflb.ashx> [Accessed 16 December 2012]

Treverton, G., 2007. Risks and Riddles. Smithsonian Magazine, [online] Available at: <http://www.smithsonianmag.com/people-places/presence_puzzle.html> [Accessed 16 December 2012]

Sunday, November 18, 2012

Testing Towers


I typed into Google the following:

"We test software to"

And read the results. I took some ideas from these results, and some of my own, and started to write down reasons to test. Things that motivate testers. Then I put them in an ordered list... looking at the list now, it seems I arranged them (arguably) from lowest to highest risk, with the most important (to some imaginary business developing the software) at the bottom and the "icing on the cake" at the top. I looked for priority, and I looked for context, and I came up with a lot of different building blocks. Here are some:

Good User Experience (product is easy and/or fun to use, even if it doesn't do what is required)

Product Solves Problem (product solves the problem it was set out to solve, in that the functionality it has actually fixes the problem that existed that the product was made to fix)

Product Has Fewest Number Of Bugs (the highest possible number of bugs has been found, leaving the fewest number remaining in the software)

Product Won't Do Bad Things In Production (the product doesn't do anything that would cause harm or damage once it's live)

Product Is Reliable (product doesn't crash, shut down, run slowly, fail)

Product Won't Kill People (doesn't fail, causing death or serious injury)

Product "Works" (is operational. The buttons work, the functions do something)

I'm Learning (I know nothing directly about the "quality" of the product as a result, but I know more about the product, so I understand better what a "quality" product would mean)

Manager Isn't Bothering Me (ignore the software, I have my manager off my back)

Fulfils Requirements Doc (does what the documentation says it should)

Product Is Testable (product can actually be tested)


Based on context (software development methodology, culture, strategy, resources, and a host of other things) these could be stacked in multiple ways. You might want "product won't kill people" in there specifically because you're testing a safety-critical product. But if you're testing military weaponry perhaps you want your product to be able to kill people. It could be that Product Is Testable is on the bottom, as it's required in order to test that the product Fulfils the Requirements Doc and has the Fewest Number of Bugs, which thus Solves The Problem. Personally I think that would be appalling in all sorts of ways, but it might be what your testing tower looks like. Like this:

Solves Problem
Fewest Bugs | Fulfils Requirements Doc
Product Testable

Unfortunately I'd say that the Fewest Bugs block doesn't really exist. The idea of finding some definite, countable quantity of bugs in an infinite test space is nonsensical. *POOF* it's gone, and the Solves Problem brick is unstable and could fall at any second! I'd say also that Fulfilling the Requirements Doc does not support the solution of the test problem, but only the solution of the requirements doc, which is not the same thing. *CRACK*, it's broken. And the whole (if overly short for this example) testing tower comes tumbling down. Oops.

We all test to investigate the software, but what are you investigating? What is your aim? Something worthwhile I hope.

So, what are your bricks? Are they strong ones that support the layers above? Are they in the correct order, so as to stop your tower being top-heavy? Do they even exist in your tower (do they apply, and logically follow, in the real world)? It's an interesting way to see your processes, the dependencies in your workflow, what you're trying to achieve by testing the product, and for whom you are testing it. I wouldn't use it for modelling your desired practices, personally, but it does give some insight into these things, and possibly even into the culture and practice of your company, and where things could be improved. Build your tower, and see if it has a worthwhile top to climb to. Then throw logic at it and see if it's still standing.
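If you fancy throwing logic at a tower programmatically, here's a minimal sketch of the "brick that doesn't really exist" part of the argument above. The brick names and the judgement about which brick is missing are just the example tower from earlier; swap in your own.

```python
# A minimal sketch of a "testing tower" as a dependency check.
# Brick names and the exists/doesn't-exist judgement come from the example above.

tower = {
    "Solves Problem": ["Fewest Bugs", "Fulfils Requirements Doc"],
    "Fewest Bugs": ["Product Testable"],
    "Fulfils Requirements Doc": ["Product Testable"],
    "Product Testable": [],  # foundation brick
}

# Bricks I judge not to hold up in the real world (see the argument above).
missing = {"Fewest Bugs"}

def standing(brick: str) -> bool:
    """A brick stands only if it exists and everything it rests on stands."""
    if brick in missing:
        return False
    return all(standing(support) for support in tower[brick])

for brick in tower:
    print(f"{brick}: {'standing' if standing(brick) else '*CRASH*'}")
```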

If you're looking for more (or better) bricks you might turn to something like James Bach's CRUSSPIC STMPL quality characteristics heuristics, or his CIDTESTD project environment heuristics. You could take stock of your resources, or learn about a software development methodology that you don't use.

What does your testing tower look like?

Edit:
Thinking about this recently, I've used the blocks to identify the divergence between why we SHOULD test and why we DO test, and the importance of company culture. Strategy, planning, learning... all vital in setting out what we should do. Add a layer of culture, habit and company practices and you get what we ACTUALLY do.

Tuesday, November 13, 2012

Learn to Code or Learn to Stack Shelves

Okay, that was a little inflammatory. I'll calm it down.

I haven't read much on the subject of testers learning to code, but people seem to be in one of three camps. The "nearly all testers must be able to code" camp, the "nearly all testers don't need to be able to code" camp and the "I kinda sorta agree with both" camp, which is actually the second camp but without the strong sense of reaction to the first.

I think one problem is that there is a great deal of diversity in testing - automation coders, developers who write unit tests, exploratory testers, script checkers... and let's face it, not EVERYONE needs to learn to code. We're not dragging "Introduction to Java" books to everyone in a vaguely IT-related role and telling them they need to learn this stuff. I'm next to certain my managing director does not know how to code. And I would say that the variance in testers (and checkers), and the contexts in which they work, is wide enough to bear the weight of my hyperbolic example.


Let's try it this way. Ask yourself this question:

Would learning to code improve my odds of remaining employed and/or my ability to test what I'm testing?

If "no", then don't learn to code. If "yes" then ask yourself this question:

Would learning to code be the best use of the time and energy it would take?

If "no", then don't learn to code. If "yes"... I'm sure you can work out the rest.


Coding vs Testing?

Okay, so it's a bit facetious, but you get the idea. There is no answer to "should testers learn to code", and anyone who says there is a universal answer is either being deliberately obtuse or has failed to understand the non-universal nature of testing. So I suppose the only thing to do now is to give some ideas about the benefits and drawbacks of learning to code.

Firstly I must point out that if all you do is execute scripts then you are a checker, and you might be in trouble. Sorry.

What learning to code will cost you

Firstly the downsides.

1. Time

Learning to code takes time. That's about it. You could use that time for something else.

2. Effort

Learning to code takes energy and focus which could be better directed to learning your product, or learning more about testing or related subjects (goodness knows there's enough to keep you busy!)

3. Choice

Choosing to learn to code in one language does not mean that you are immediately familiar with others (although it does help). So choosing one language will not be much use to you as a tester if you find yourself in a position where you can no longer use it.

4. You might be awful at it

The mind of a developer is a confusing place, as I'm sure you know, and the difference between a developer's mindset and a tester's isn't accidental. Not everyone is good at coding. It can be tiring, frustrating, repetitive and boring. Some people love it, but I would venture that a great tester would not make for a great developer.

5. Coding can be dull

You're probably a tester, and you're interested in what's new and interesting. Different angles and approaches, alternative ways of thinking, exploration, creativity and drive. Personally I find coding at any sort of professional level my own personal idea of hell; the same syntax every single day, building non-physical creations from a virtual soupy bucket of strands that must be built, re-factored and honed into a solution. Not for me. I do know how to code in some languages, and I find it quite fun... but only because I know I don't have to be exact or precise or impress anyone in particular. Just as I enjoy occasionally writing poetry, but I'd be crying with frustration into my notepad if I were paid to do it every day for a critical audience... wow, I suddenly feel a wave of empathy with developers!


What learning to code will get you


1. Insight

I've found knowing how software is made (from a blank screen to a simple set of functions) helps me understand the mistakes and shortcuts that occur. I use this information to perform quick-attack tests. If you are in a position to apply creativity to your testing then knowing how the people that make the bugs make the bugs will give you an array of tools to help you find them.

2. Alternative employment

A dodgy claim, but knowing how to code gives you another trade to fall back on. To be honest, if you have a lot of experience as a tester and none as a coder you will very likely have trouble getting this sort of work, but... it might help.

3. Automation

Automated checks with any worth usually require some level of testing experience. Chances are there's something you could automate, or partially automate, to help you reach your testing objectives. 

4. Tool building

I use my coding ability to build tools that make my job easier. I am a true computer user, in that I am essentially lazy. I spend a little time to save a lot of time in my testing and reporting. For example, I've coded systems that detect the version and build of the system I am testing and copy bug-reporting information to the clipboard, because I'm too lazy to copy, paste and fill in a bug report's environment details - and I do that many times a day. Now it never falls victim to copy/paste errors, and it saves me a lot of frustration and a bit of time.
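As a rough illustration of that kind of lazy-tester tool, here's a minimal Python sketch. The get_product_build function is a hypothetical placeholder (your product will expose its version and build in its own way), and the clipboard step just shells out to the operating system's standard clipboard command rather than relying on any particular library.

```python
# A rough sketch of the "copy my environment details for bug reports" idea.
# get_product_build() is a placeholder: replace it with however your own
# product exposes its version and build (an API call, a file, a registry key...).

import platform
import subprocess
import sys

def get_product_build() -> str:
    # Hypothetical stand-in; your product will have its own way to report this.
    return "MyProduct 4.2.1 (build 7183)"

def environment_details() -> str:
    return "\n".join([
        f"Product:  {get_product_build()}",
        f"OS:       {platform.system()} {platform.release()} ({platform.version()})",
        f"Machine:  {platform.machine()}",
        f"Python:   {platform.python_version()}",  # or whatever runtime matters to you
    ])

def copy_to_clipboard(text: str) -> None:
    """Push text onto the system clipboard using the OS's own command."""
    if sys.platform.startswith("win"):
        cmd = ["clip"]
    elif sys.platform == "darwin":
        cmd = ["pbcopy"]
    else:
        cmd = ["xclip", "-selection", "clipboard"]  # assumes xclip is installed
    subprocess.run(cmd, input=text.encode("utf-8"), check=True)

if __name__ == "__main__":
    details = environment_details()
    copy_to_clipboard(details)
    print("Copied to clipboard:\n" + details)
```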


Okay, so we broke that down into parts... Where do we go from here?

Well, if you feel that you'll get something out of learning to code, that you have some passion for it, and that you have the time and energy to spend, then go for it! I think it's vitally important that you have an interest in coding to be even vaguely successful at it. If I came to believe that I needed to eat broccoli every day to keep my job I'd start looking for another job... even if I was very good at my job without any need for broccoli. I do code as a hobby sometimes. Mind you, I pick locks, play Dwarf Fortress and have Magic tournaments with my girlfriend, so maybe I'm not a yardstick of entertainment. Either way, I think the main point in all this is that the phrase

"Testers need to learn to code"

and similar phrases are rife with hidden presumptions and worrying overtones: the meaning of "tester", the meaning of "need" and the meaning of "code". The variance in testers I've already covered, and I hope I've got across some of the needs (or wants!). Of course "code" means many different things... and these (and other points) have been far better covered by Rob of "The Social Tester", here.

So will you learn to code? Or perhaps there's another book on epistemology, general systems or game theory left unread on your shelf that would give you more insight and use. I won't advocate any need to learn to code, but if you want to, I'd be the last to tell you not to. Whatever you take away from this, don't forget to learn something every day.


Edit
I've come to realise that the "you need to learn to code" accusation is levelled at people whose testing isn't valuable enough to keep them employed. Perhaps it's not the "testers" (checkers) who need to learn to code, but the higher-ups who need to learn how to get value out of their testing - and therefore learning how to advocate good testing (and admonish bad testing) would do more to keep one's job than learning to code.

Wednesday, October 24, 2012

Looking for Test Tools?

Are you looking for tools to help you in your testing?

You are probably in one of two situations.

Situation One

You have a problem for which you hope there is a tool to make your life easier. This is usual, and fine, and I wish you well in your journey. The way I usually approach this is to identify the problem I have and search Google for potential solutions - and if that's you, you're probably not even reading this.

Situation Two

You are perusing a page of tools used by, and suggested by, other people. There is nothing intrinsically wrong with this, but here is where you need to be wary, and there are two main reasons why I believe you should be:

Your context is not their context

What works for one person doesn't work for all others. This is fairly obvious, but you shouldn't confuse the process of testing with the tools of testing - if you try to save effort by changing the way you work to fit in with a tool then you're allowing the tool to potentially reduce how well you can work. You need to temper your lust for "smarter working" with a stoic judgement of the benefit the tool is going to provide.

Tools don't do any work

If I went out and bought a trowel, bricks, a cement mixer and so on, all I would end up with is well-prepared ingredients for a brick construction, because I do not know how to lay bricks. Anything I made would be slanted, unbalanced, weak and possibly dangerous. The tools we have to help us perform tasks are not replacements for the complete job, but short-cuts to save time, effort or frustration in reaching our goal. If you seek, or use, a tool to try to do your testing for you, make sure you are comfortable with the fact that you are no longer testing. When you use a tool to help you, you are taking yourself out of that part of the equation. This, on balance, may be an acceptable loss given the constraints on your time or sanity, or the lack of need for your personal touch on the task, but it is a loss nonetheless.


So, in conclusion, if you want to use a tool to help you do your testing then you need to balance the negatives against the positives: the tasks you are replacing, your newfound lack of attention, your assumption that the tool is doing the job, whether the tool will work in your context, the maintenance costs and so on, versus the time, effort and frustration you will save by using it. Don't confuse tools with processes.

Friday, July 6, 2012

Exploratory Testing, Management and Tools

Disclaimer: You wouldn't eat with a hammer and you wouldn't bang a nail in with a fork. Tools are options. They are a bag of tricks you keep by your side. A mechanic doesn't use the whole tool chest to check your brakes, and you shouldn't let software tools become your dogmatic methodology. Stay light, simple and efficient is my advice. Your own set of tools will depend on your own context. But if you want some more brushes in your paint pot, here is a list of some exploratory and session-based management tools:

Second Disclaimer:

You will find a better list here: http://www.ministryoftesting.com/resources/software-testing-tools/

Exploratory Testing & Management Tools


Note Taking

Paper and Pen

Paper & pen is flexible (in both senses), versatile, expressive, cheap, easily accessed, and useful. It stores well in a suitable filing system, it's portable, and generally ubiquitous. Plus it doesn't need a computer, or even electrical mains power or batteries to use. Not to be ignored.

Notepad

A limited version of paper & pen for the computer! Your OS probably has an equivalent.

Rapid Reporter

Most ET practitioners have heard of this, in my experience. It is a "note taking application that aids in the reporting process of SBTM". It doesn't take up much screen real-estate, it sits always-on-top, and it has a transparency slider, so it's very convenient to use. It's standalone (no installation), and outputs to the easy-to-parse CSV format. It's customisable, easy to control with the keyboard, and does its job very well. I highly recommend it.

http://testing.gershon.info/reporter/



Session Capture

Video Screen Recorders

Screenr, Camtasia, Hypercam

They all record everything you do, and if the disk space these videos use isn't an issue for you then they are a valuable asset. Once you have the video at hand you can answer those questions... Did it do that before? Did I click the button before or after? Why did it work the second time and not the first?...

qTrace

After a couple of suggestions I downloaded qTrace. I've only played with it marginally so far, but it's remarkably good. You boot it up, select the application to record, hit record, then start testing. When you hit stop it presents you with every action you performed, and these trace files can be saved. You can even specify the number of screens you want kept so that you can raise the process as a bug, and the software integrates directly with many tracking systems including JIRA (ours)... although you need to pay a small fee for this (and to remove a small ad). It also saves a very useful table of environment information (OS, browser version, CPU, RAM, available RAM, current user...).
It has facilities to force a screen capture, add notes to the steps, blur out sensitive information on the page, add arrows/boxes/text to your screens, and more.
Together with good notes I consider this a very viable candidate for our ET sessions. One thing it doesn't seem to do is track timestamps, so those would have to go in your notes.


Other Useful Tools

Mindmap Tools

FreeMind, XMind, MindMeister

JIRA Plugin: Bonfire [£]

A good-looking JIRA plugin that sits in your browser and handles private or public test sessions, bug report templates and more. Worth a look if you're planning on using session-based management.



Any further suggestions please tweet me @kinofrost, letting me know if you'd like direct credit for it (defaults to "no" for reasons of privacy and spam).

Many thanks to the many testers of Twitter, and a very useful RT from Software Testing Club for the suggestions.

Monday, April 30, 2012

Zendo: Testing in game form

I love Zendo. Its fans will often describe it as the best game, or gaming concept, ever made. It is deeply strategic without being overly competitive, in my opinion, and is also a lot of fun, especially with good players.

Here it is on Wikipedia:
http://en.wikipedia.org/wiki/Zendo_(game)

I suggest getting a few playing pyramids and a copy of Playing With Pyramids from Looney Labs.

It took me far too long (plus the time to watch a James Bach lecture, plus the time for it to sink in) to see that it involves a lot of the same thought processes as testing. My prediction: a good exploratory tester would kick arse at Zendo.

Here's the idea behind Zendo (icehouse piece version).

First you have a set of "pyramids" (icehouse pieces, treehouse pieces, or whatever they're calling them these days). They are often transparent, coloured, stackable pieces that come in sets of 15 (5 each of small, medium and large, or 1-point, 2-point and 3-point pieces if you prefer to call them that).

You also need a pile of black stones and a pile of white stones.

The "Master" player thinks up a rule, creates two "Koans" (small constructions made out of the pieces), one of which obeys the rule and one which does not. A rule could be "A koan has the Buddha nature if and only if it contains a blue piece" or "A koan has the Buddha nature if and only if it contains two medium pieces" or "...contains a blue piece pointing at a red piece" or "the value of the pieces is a square number"...etc.

Basically, the idea for the other players is to work out this rule by building koans and having them marked as obeying the rule (white stone) or not (black stone), until they can determine enough about the system to guess the rule.

If the master cannot build a koan that disproves the guess (one that has the Buddha nature but disobeys the guessed rule, or one that does not have the Buddha nature but obeys the guessed rule) then that player wins.

In order to do this you need to evaluate what the white koans have that the black koans do not, and vice versa. Each guess will inform the rest of your test plan, because the resulting information is useful feedback. You need to make "quick attack" guesses that halve the problem space or find useful information quickly, rather than slow, plodding ones or wild guesses that add little to your current knowledge. You must not repeat tests (build exact copies of other koans), and you should reduce partial repetition to a minimum once you have enough knowledge to know better (building near-exact copies of other koans, such as the same koan with different colours after you have ruled out colour being important to the rule).
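For those who like to see the idea in code, here's a toy Python model of that feedback loop. The piece representation, the example rule and the helper names are all my own inventions for illustration, not anything official to Zendo, and the counterexample check only looks at koans already on the table (in the real game the master can also build a fresh one).

```python
# A toy model of Zendo's feedback loop; the representation and rules are invented.

from typing import Callable, List, Optional, Tuple

Koan = List[Tuple[str, str]]      # each piece is (size, colour), e.g. ("medium", "blue")
Rule = Callable[[Koan], bool]     # a rule marks a koan as having the Buddha nature or not

secret_rule: Rule = lambda koan: any(colour == "blue" for _, colour in koan)

# Koans built so far, marked by the master with white (True) or black (False) stones.
marked: List[Tuple[Koan, bool]] = []

def build(koan: Koan) -> None:
    """Present a koan to the master and record the stone it receives."""
    marked.append((koan, secret_rule(koan)))

def counterexample(guess: Rule) -> Optional[Koan]:
    """Return a marked koan that contradicts the guess, if any exists."""
    for koan, has_nature in marked:
        if guess(koan) != has_nature:
            return koan
    return None

# A few probing builds: each one tries to split the remaining hypotheses.
build([("small", "blue")])                      # white stone
build([("small", "red")])                       # black stone
build([("large", "blue"), ("medium", "red")])   # white stone

guess: Rule = lambda koan: any(colour == "blue" for _, colour in koan)
print("Guess refuted by:", counterexample(guess))   # None: nothing on the table contradicts it
```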

If that's not a good simile for exploratory testing, I don't know what is.

Just a quick thought, I might formulate it into a more formal assessment later.

Wednesday, April 25, 2012

QA & Testing Resources

I'm investigating the world of QA & Testing Blogs and resources!

Mine has its moments of testing-related goings-on in between robotics, ranting and philosophical silence, but if that simply isn't enough there are many more resources available. I will try to update this list based on comments and on things I find in my search or buried in my bookmarks. Some I know well, some I do not, and the list will need a tidy-up. If I particularly enjoy or find use in a resource I might give it a little star or something, who knows? Nobody, that's who.

If you would like your site removed from this list please contact me or leave a comment and I will do so. If you'd like it added please leave a comment and I'll give it a once over and add it to the list!

Blogs

Phil's Expected Results
Andy Glover's http://cartoontester.blogspot.co.uk/
QA Hates You
Trish Khoo's testing blog
Testy Redhead
Alister Scott's Watirmelon [Watir/Watir-webdriver]
Hugh McGowan's The Science of Testing
Test Side Story
Michael Bolton's blog
James Bach's blog

Videos

http://www.satisfice.com/videos.shtml
MoT Test Bash

Groups / Communities / Forums

Software Testing Club
Ministry of Testing
Developer Testing
Test Driven
SQA Forums
Sticky Minds

Tools


Podcasts



Thursday, March 29, 2012

Test Metrics - Dangers In Communication

At the Software Testing Club meet last night many interesting points were made but one that captured my interest in particular was testing metrics.


What Are Metrics?

Testing, like most things, is full of numbers and statistics, and metrics are just numbers with reasoning behind them: "an analytical measurement intended to quantify the state of a system" [1].


Unfortunately numbers say whatever you would like them to say. I always return to an old adage taught to me at school:
Information = Data + Meaning
So the numbers in statistics are data, and what they refer to is the meaning. Or the numbers are the quantification, and what they refer to is the state of the system. But the numbers only ever refer to that; they don't carry the meaning with them, and that is the inherent problem with metrics.


Why Do We Use Metrics?

Well to "quantify the state of the system" of course! But is that really why we use metrics? We know that's what they refer to but metrics have a habit of miscommunication meaning and I think the operative word we hit on last night was context.


Managers love metrics. They like reports, charts and graphs because they can gain information about a system that they perhaps know very little about or don't have time to understand. It's easier to go to your boss with "Our bug count is 17% lower this quarter" than "well, it looks like it's going okay and the testing guys say the quality looks higher". But do the numbers tell the truth about the system? If making you or your team look good, or keeping your head down and getting away with nominal improvement, is your goal then metrics are a gift from heaven... but if relaying the true quality of your product is the goal then qualification and context are vital.


Our discussion last night prompted the example of "severity". Saying "We have 20% fewer Severity One bugs" sounds great, but this statement should prompt a lot of questions, such as:

  • What on earth is a "Severity One" bug?
  • What does it mean to 'have' the bug? That we found fewer? That we fixed more than we found?
  • Is 'having' fewer of these bugs really a good thing? Are we spending time fixing bugs and not finding them? Are we simply not finding them any more when we should?
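To make the problem concrete, here's a toy illustration with invented numbers: two quarters with very different underlying stories can produce exactly the same headline figure, which is why the bare number needs qualification.

```python
# Invented numbers showing how one headline metric can hide two different stories.

def headline(found_last_quarter: int, found_this_quarter: int) -> str:
    change = (found_last_quarter - found_this_quarter) / found_last_quarter
    return f"{change:.0%} fewer Severity One bugs"

# Quarter A: root causes were fixed, and thorough testing genuinely found fewer.
print(headline(found_last_quarter=25, found_this_quarter=20))  # "20% fewer Severity One bugs"

# Quarter B: testing time was cut in half, so fewer bugs were *found*, not fewer present.
print(headline(found_last_quarter=25, found_this_quarter=20))  # the same "20% fewer..."
```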

Science!

Science is a methodology for finding truth about the universe despite our own cognitive biases. One look at an optical illusion shows us that seeing isn't believing, and starting with an answer and then trying to prove it makes it more likely that the experimental data will show the pre-selected answer ("hmmm, that doesn't look right, I'll take that reading again..."). We are pattern-seeking animals; we tend to confirm our own beliefs and look for "hits" while forgetting the "misses" - this is confirmation bias. It's why we see shapes in the clouds and the face of Jesus on a crisp: we are evolutionarily programmed to respond to patterns and to avoid costly false negatives (is that rustling in the trees the wind or an enemy? Better to run just in case).


All this means that we should not interpret the data after deciding what we want it to say, otherwise we are creating data that does not tell the truth. These are the most basic rules of scientific philosophy. It might make us look good or cover up our mistakes or those of our colleagues, but it's not the genuine state of the world, or of the system under test.


How Should We Use Metrics?

Interpret the numbers on behalf of those receiving them; do not let them do it themselves. Those collecting the metrics are probably at the best point of understanding, so add the context and qualifications there before handing them off to someone else. Do not use confusing shorthand terms like "code coverage" and "bug count" without explanation. Formalise the meaning of domain terms like "severity" so that testers can categorise issues by a domain-wide understanding of the term.

This all feeds into our need to understand more complex things about the system than "number of bugs fixed" or "test coverage": the ability to understand opportunity cost and apply it to schedules, risk assessment, functional prioritisation and much more.




[1] http://en.wikipedia.org/wiki/Metric

Wednesday, February 22, 2012

Robotics ho!

To the land of robotics!

I am going to build my first Arduino line-following robot. It is decided.

I have a shopping list and I have the tools. I may be moving house within two weeks so I can't have anything delivered yet, but I will start the planning process and decide where I can get a cheap chassis and some cheap wheels... it may be time to get back to the boot sale! I can't wait to stop being ill so I can chase these things.

If you're following along at home I will need:

* LEDs
* A battery
* A chassis
* 2 hobby servo motors (which I will adapt to run continuously. I have to learn sometime.)
* Nuts/bolts
* Resistors
* Matrix board
* IR Emitter/detectors
* A motor shield
* A caster wheel for the front

I'll try to update with photos when I actually build this thing.

Thursday, February 16, 2012

Organising My Life 2

Well, I've tried Astrid and Toodledo and Lifetick and a host of other motivational, goal-based GTD to-do list widgets, and I've come to no satisfactory conclusion on them. I have found out as much about what I want as what I don't want, and I've decided that I want three things:

1. A to-do list that is simple, has multiple lists, and contains actionable short-term things that need doing.

2. A system for maintaining personal and work projects

3. Something to hold user stories

Number one is still under consideration but at the moment I am using Wunderlist. It is simple and it works.

Number two I stumbled upon via Wunderlist: their new Wunderkit, which (even without the social aspect) is a great way of brain-dumping my larger-scale personal projects. It has a few advantages over the dishevelled notes I was keeping in OneNote and Google Docs, and a much more personable layout than both - and than Evernote, which I can't really abide.

Number three, without a doubt, is PivotalTracker which is nothing short of an amazing piece of programming, but I'm always up for suggestions!

My trouble will now be checking all these things each day! When I get home in the evening I often won't remember to check to-do lists. Wunderkit has tasks, but they don't interrupt me enough to ensure they'll get done.

So:

If I have something that needs doing I will put it on Wunderlist. The downsides are no alarms and a slightly fiddly Android app.

If I have a long-term goal or project with subtasks or recurring tasks I will use Wunderkit. The downsides are no alarms, and no visible progress towards the goal, but I like the idea and if I had other people working on a project it does make for a good project collaboration space (although I haven't needed to try others).

And I will continue to use PivotalTracker :).