Wednesday, July 2, 2014

STP Podcast - Automation In Testing with Richard Bradshaw

Listen here:

This was a good listen! So good that I'm inspired to comment at more length than fits in a tweet. I agree with basically everything mentioned, which is a shame, but it did raise ideas I hadn't considered and ways of expressing things I hadn't heard before, which is a treat.

Forgive the quality, I rushed the whole post.

Automation Is A Tool

Richard is clear and passionate on this topic, and I could not agree with him more. Automation is a tool, and as a tool it has to serve a test strategy in some way. Automation is so often seen as chasing some benefit that doesn't exist. It's like buying a new battery-powered electric drill purely on the assumption that it will save you time with your DIY. Well, it depends on what you're going to use it for - why do you need a drill, and why does it need to be this drill? A drill doesn't mean you don't have to drill holes any more. It also requires maintenance. It requires skill to use. It can enhance your ability to drill holes, and it can enhance your ability to make mistakes and do damage. It has disadvantages over other hole-boring tools for certain tasks. It requires exploratory learning during its use, examining and interpreting the results.

Exploratory Automation

I agree with the statements in the podcast. I'd go as far as to say that there's no such thing as an exploratory tool. I'd accept "pseudo-exploratory", maybe. Exploration requires learning, and only humans can learn in that way. For a tool, we have to encode how it interacts with the product, what oracles it can use and how to use them, how to make decisions... and we cannot encode it to cope with the implicature of that information, nor can we code it to be inspired to do anything we haven't yet thought of. Humans can notice a symptom, follow it, design new tests, learn from the results, design further tests, and so on, leveraging heuristic principles to find problems or build better mental models. They can be inspired by results and use that to explore new ideas based on their understanding of risk (heuristics such as "it's bad when people get hurt" that are hard to teach to a machine).
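To make that concrete, here's a minimal sketch of what "encoding an oracle" looks like in practice. Everything here is hypothetical and invented for illustration - a made-up `apply_discount` function standing in for the product, with a single hard-coded expectation standing in for the oracle:

```python
# A minimal automated check. The point: every oracle has to be
# explicitly encoded, and the check can only "notice" what we wrote
# into it. All names here are hypothetical, for illustration only.

def apply_discount(price, percent):
    # Stand-in for the product code under check.
    return price - price * percent / 100

def check_discount():
    # This is the entire oracle: one encoded expectation. The check
    # will pass even if apply_discount also did something alarming
    # on the side - a problem a human exploring the product might
    # spot, but this tool never will.
    result = apply_discount(100, 10)
    assert result == 90, f"expected 90, got {result}"
    return "PASS"

print(check_discount())  # prints "PASS"
```

The check confirms one encoded fact and nothing else; it can't follow a symptom, design a new test, or be surprised - which is exactly the gap between a tool and a tester.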


"How do I measure my manual testing and have visibility into the value of my manual testers?"
"How do I have visibility into the value of my automation?"

Richard gave a great answer to this.

There is no useful way to quantitatively measure the value (someone's regard for the importance or usefulness of something) of a tool. That's the problem: it's subjective. Don't automate to save money. Automate when it serves the test strategy. Are you willing to invest in the quality and speed of the information testing provides, or are you looking to replace an expensive but necessary service? You wouldn't buy your development team a new IDE with fancy code tools so that you could fire some of your developers and get a return on investment, so you shouldn't do it with testing (unless your testers are THAT replaceable, in which case your problems can't be solved with a tool). As Richard says, "Automation is there to serve the team".

I'm always curious as to why people want to value their tools like that. I understand the hesitation if someone is buying an automated check execution tool under impossible promises from a vendor, but I'm more interested in reality. Let me ask this: How do you get visibility into the value of your car? Your car (or a car you might buy if you don't have one) is a tool that you can use. Well, you can look at the cost of the car, maintenance and fuel but you have to measure it against how useful it is. How do you measure how useful something is? It's a subjective matter. What do you use your car for? Could you do without a car? What's the difference in utility between having the car and not having the car? What are the drawbacks? How do you factor those into the utility? What about things you haven't thought of?


I also really liked Richard's point around here of "automation is dumb" and that it doesn't need to be intelligent because we have intelligent testers to use it.

This wasn't even half of the content of the podcast, so I'd recommend listening to the whole thing. They go on to discuss automation tools beyond automatic check execution and how to use them to inform mental models, the kinds of people who do different types of testing, reporting and so much more. I'd like to write about this too but I only have so much time!

Thanks to STP, Mark Tomlinson and Richard Bradshaw for a great podcast!

1 comment:

  1. Chris,
    I especially like your statements in the Exploratory Automation section of this post. Yes, automation is a tool, and so are the 'tools' that support implementation of this type of work. The 'automation code' is only as smart as the person who wrote it, and it cannot be an all-knowing AI. The day we create an AI to do software testing is the day we should run for the hills, because we just created SkyNet.

    It all harkens back to my favorite saying "It's Automation, Not Automagic".