Monday, July 28, 2014

It's Just Semantics

Why is it so important to say exactly what we mean? Checking vs testing, test cases aren't artefacts, you can't write down a test, best practice vs good practice in context - isn't it a lot of effort for nothing? Who cares? 

You should. If you care about the state of testing as an intellectual craft then you should care. If you want to do good testing then you should care. If you want to be able to call out snake oil salesmen and con artists who cheapen your craft then you should care. If you want to be taken seriously then you should care. If you want to be treated like a professional person and not a fungible data entry temp then you should care.

Brian needs to make sure that their software is okay. The company doesn't want any problems in it. So they hire someone to look at it to find all the problems. Brian does not understand the infinite phase space of the testing problem. Brian wants to know how it's all going - if the money the tester is getting is going somewhere useful. He demands a record of what the tester is doing in the form of test cases because Brian doesn't understand that test cases don't represent testing. He demands pass/fail rates to represent the quality of the software because Brian doesn't understand that pass/fail does not mean problem/no problem. The test cases pile up, and writing down the testing is becoming expensive so Brian gets someone to write automated checks to save time and money because Brian doesn't understand that automated checks aren't the same as executing a test case. So now Brian has a set of speedy, repeatable, expensive automated checks that don't represent some test cases that don't represent software testing and can only pretend to fulfil the original mission of finding important problems. I won't make you sit through what happens in 6 months when the maintenance costs cripple the project and tool vendors get involved.

When we separate "checking" and "testing", it's to enforce a demarcation to prevent dangerous confusion and to act as a heuristic trigger in the mind. When we say "checking" to differentiate it from "testing" we bring up the associations with its limitations. It protects us from pitfalls such as treating automated checks as a replacement for testing. If you agree that such pitfalls are important then you can see the value in this separation of meaning. It's a case of pragmatics - understanding the difference in the context where the difference is important. Getting into the "checking vs testing" habit in your thought and language will help you test smarter, and teach others to test smarter. Arguing over the details (e.g. "can humans really check?") is important so we understand the limitations of the heuristics we're teaching ourselves, so that not only do we understand that testing and checking are different in terms of meaning, but we know the weaknesses of the definitions we use.

So, we argue over the meaning of what we say and what we do for a few reasons. It helps us talk pragmatically about our work, it helps to dispel myths that lead to bad practices, it helps us understand what we do better, it helps us challenge bad thinking when we see it, it helps us shape practical testing into something intellectual, fun and engaging, and it helps to promote an intellectual, fun and engaging image of the craft of testing. It also keeps you from being a bored script jockey treated like a fungible asset quickly off-shored to somewhere cheaper.

So why are people resistant to engaging with these arguments?

It's Only Semantics

When we talk about semantics let's first be clear about the difference between meaning and terminology. The words we use carry explicature and implicature, operate within context and have connotation.

Let's use this phrase: "She has gotten into bed with many men".

First there's the problem of semantic meaning. Maybe I'm talking about bed, the place where many people traditionally go in order to sleep or have sex, and maybe you think I'm talking about an exclusive nightclub called Bed.

Then there's the problem with linguistic pragmatic meaning (effect of context on meaning). We might disagree on the implication that this woman had sex with these men. Maybe it's an extract from a conversation about a woman working with a bed testing team where she's consistently the only female and the resultant effects of male-oriented bed and mattress design.

Then there's the problem with subjective pragmatic meaning. We might agree that she's had sex with men, but disagree on what "many" is, or what it says about her character. Or we might agree that I'm implying something negative about her character by saying she's had sex with many men, but you reject that implication because you've known her for years and she's very moral, ethical, charitable, helpful and kind.

So when we communicate we must understand the terms we use (in some way that permits us to transmit information, so when I say "tennis ball" you picture a tennis ball and not a penguin), and we must appreciate the pragmatic context of the term (so when I say "tennis ball" you picture a tennis ball and not a tennis-themed party).

When we say "it's only a semantic problem" we're inferring that the denotation (the literal meaning of the words we use) is not as important as the connotation (the inference we share from the use of the words).

Firstly, to deal with the denotation of our terminology. How do we know that we know what we mean unless we challenge the use of words that seem to mean something completely different? Why should we teach new people to the craft the wrong words? Why do we need to create a secret language of meaning? Why permit weasel words for con artists to leverage?

Secondly, connotation. The words we use convey meaning by their context and associations. Choosing the wrong words can lead to misunderstandings. Understanding and conveying the meaning IS the important thing, but words and phrases and sentences are more than the sum of their literal meaning, and intent and understanding of domain knowledge do not always translate into effective communication. Don't forget that meaning is in the mind of the beholder - an inference based on observation processed by mental models shaped by culture and experience. It's arguments about the use of words that lead to better examination of what we all believe is the case. For example, some people say that whatever same-sex marriage is, it should be called something else (say, "civil union") because the word marriage conveys religious or traditional meaning that implies that it is between a man and a woman. The legal document issued is still the marriage licence. We interpret the implicature given the context of the use of these words, which affects the nature of the debate we have about it. We can argue about "marriage" - the traditional religious union, and we can argue about "marriage" - the relationship embodied in national law and the rights it confers. We can also talk about "marriage" - the practical, day to day problems and benefits and trials and goings on of being married.


Things Will Never Change

Things are changing now. The work of Jerry Weinberg, James Bach, Michael Bolton and many others is rife with challenges to the status quo, and now we have a growing CDT community with increasing support from people who want to make the testing industry respected and useful. There's work in India to change the face of the testing industry (Pradeep Soundararajan's name comes to mind). There's STC, MoT, AST and ISST. The idea that a problem won't be resolved is a very poor excuse not to be part of moving towards a solution. Things are always changing. We are at the forefront of a new age of testing, mining the coalface of knowledge for new insight! It's an incredibly exciting time to be in a changing industry. Make sure it changes for the better.


It's The Way It's Always Been

Yes, that's the problem.


I'll Just Deal With Other Challenges

Also known as "someone else will sort that out". Fine, but what about your own understanding? If you don't know WHY there's a testing vs checking problem then you're blinding yourself to the problems behind factory testing. If you don't investigate the testing vs checking problem then you won't know why the difference between the two is important to the nature of any solution to the testing problem, and you're leaving yourself open to be hoodwinked by liars. Understand the value of what you're doing, and make sure that you're not doing something harmful. At the very least keep yourself up to date with these debates and controversies.

There's another point to be made here about moral duty in regard to such debates and controversies. If you don't challenge bad thinking then you are permitting bad thinking to permeate your industry. If we don't stand up to those who want to sell us testing solutions that don't work then they will continue to do so with complete impunity. I think that this moral duty goes hand-in-hand with a passion for the craft of testing, just as challenging racist or sexist remarks goes hand-in-hand with an interest in the betterment of society as we each see it. It's okay to say "I'm not going to get into that", but if you claim to support the betterment of the testing industry and its reputation as an intellectual, skilled craft then we could really use your voice when someone suggests something ridiculous, and I'd love to see more innovative approaches to challenging bad or lazy thinking.

"Bad men need nothing more to compass their ends, than that good men should look on and do nothing." - John Stuart Mill


Sounds Like A Lot Of Effort

Testing is an enormously fun, engaging, intellectual activity, but it requires a lot of effort. Life is the same way. For both of these things what you get out of it depends on what you put in.


Arguments Are Just Making Things Less Clear/Simple

This is a stance I simply do not understand. The whole point of debate is to bash out the ideas and let reason run free. Clear definitions give us a starting point for debate, and debate gives rigour to our ideas. I fail to see how good argument could fail to make things better, in any field.

As for simplicity, KISS is a heuristic that fails when it comes to the progress of human knowledge. We are concerned here with the forefront of knowledge and understanding, not implementing a process. Why are we so eager to keep it so simple at the expense of it being correct? I've talked to many testers and I've rarely thought that they couldn't handle the complexity of testing; they handle the complexity of life, after all. I don't know who we're protecting with the idea that we must keep things simple, especially when the cost is in knowledge, understanding and the progression of our industry.


P.S.
I've been reluctant to release this post. It represents my current thinking (I think!), but I think it's uninformed, which may mean that it's not even-handed. Linguistics and semantics are fairly new to me. I'm reading "Tacit and Explicit Knowledge" at the moment, so I may come back and refine my ideas. Please still argue with me if you think I'm wrong; I have a lot to learn here.

Friday, July 11, 2014

Best Practices Aren't


I assume that I'm directing myself at a group of people who know that there's no such thing as a Best Practice, and that we shouldn't use the term. There's a bit of consternation around this, with some people defending the term whilst understanding that the value of a practice depends on its context.

I'm working under the assumption that you've read this: http://www.satisfice.com/blog/archives/27

Of course there are no Best Practices! Of course practices only apply to a particular context! Okay, with that under our belt I'd like to talk about why we shouldn't use the term at all, and why it matters that we shouldn't.

It's A Trap!

Saying what we mean is important. We can call poison "food" and assume that tacitly we all know it's actually poison, really - until the amateur or black-and-white thinker wanders in and eats it, or some dishonest salesperson sells it to someone as food (after all, that's what we're calling it). In the same way we can all agree that "best" just means a particularly good thing to do in the context, but when people use "best practices" as a benchmark, ignoring the subtleties in the change in context, bad things can happen. Someone can look at a standard in one company and assume it will work for them because they're in the same industry - but there's a lot more to context than that. There are the methodologies used, the hours worked, company culture, relationships between people and teams, budget, development speed, even the geographical location of the office building. So why do we need this secret society of people who understand that "Best Practice" doesn't really mean "best"? Especially when we can talk about practices without using that term in the first place? Maybe it's some sort of egotistical elitism or some desire to find tranquil simplicity in a complex world, but either way it should affront your professional ethics to want to be part of the Secret Society of Avoidable Confusion.

It's Lazy Thinking

Understanding the flaws in a practice - the contexts in which it won't work or has little value - is important. Knowing what we should do is better than assuming that emulating someone else's practices will be just fine. It's thinking like this that permits tool vendors to sell snake oil cure-alls for the testing problem, and permits the persistence of worthless certifications. We can say "of course it's for a particular context" but that's because we don't want to have to think about the effect of context on the value of our practices because then we'd have to do some work. If you're the sort of person that believes that "automation" is more than an automatic check execution system to replace good, exploratory testing then you understand the importance that context has on the value of a practice. So don't let your brain go on holiday just because someone stuck the word "best" in front of "practice" and claimed it as some sort of benchmark of greatness, even if they permit you the luxury of implicit contextual value. Think critically, and don't let anyone stop you from doing so.

It Replaces Discussion

The "of course it's for a particular context" argument is the end of the discussion of our practices when we hold Best Practices up as a benchmark. Thinking about our practices, and their value, with the influence of practices we know work well elsewhere is not something to avoid or be ashamed of. There seems to be some resistance to the idea of eliminating best practices because of laziness, and some fear of "reinventing the wheel" - except that we're not re-inventing a wheel, we're adapting the wheel, changing the wheel, assigning cost to the wheel, using a different kind of wheel, examining how suitable the wheel is for our purpose and perhaps deciding that we don't need a wheel. "Don't re-invent the wheel" does not mean that we should use stone cylinders with a hole in them on our motor vehicles. Use the right wheel for what you need to do, even if someone says that a particular wheel is the "Best". Use your awesome tester powers of critical thinking.

It's Not What "Best" Means

"Best" is a superlative. The form of the adjective "good" that infers that it is the greatest of any other possible degree of something. When you say (without hyperbole) that "this is the best-tasting sandwich I've ever eaten" you are saying that there is no sandwich that you have ever eaten that tastes better than this one. Best Practice simply does not make sense, even with context factored in. Even for a particular context there is no Best Practice that we can know is the best one. "Best Practice for this context" means "There is no practice that exists or could be thought of that could be better than this one for this context" - which is quite the claim! Don't let people get away with that sort of thing.

An Alternative

If you really need a shorthand way to say "Best Practice" but actually mean something more like "good practice for a particular context", here are some ideas:
  • (I've found it to be an) Effective Practice
  • Cool Idea
  • The "<your name here>" Method
  • (My) Favourite Practice (for)

Or alternatively you could just talk openly and honestly about practices, where they seem to work for you, and where and how they could be applied to good effect.

Wednesday, July 2, 2014

STP Podcast - Automation In Testing with Richard Bradshaw

Listen here:
https://www.youtube.com/watch?v=UonH_5-X14k

This was a good listen! So good I'm inspired to comment in more space than can fit in a tweet. I do agree with basically everything mentioned, which is a shame, but it did bring up ideas for me that I hadn't considered and ways of expressing things I hadn't heard yet, which is a treat.

Forgive the quality, I rushed the whole post.

Automation Is A Tool

Richard is clear and passionate on this topic, and I could not agree with him more. Automation is a tool, and as a tool it has to serve a test strategy in some way. Automation is so often seen as chasing some benefit that doesn't exist. It's like buying a new battery-powered electric drill purely under the assumption that it will save you time with your DIY. Well it depends on what you're going to use it for - why do you need a drill, and why does it need to be this drill? A drill doesn't mean you don't have to drill holes any more. It also requires maintenance. It requires skill to use. It can enhance your ability to drill holes, and it can enhance your ability to make mistakes and do damage. It has disadvantages over other hole-boring tools for certain tasks. It requires exploratory learning during its use, examining and interpreting the results.

Exploratory Automation

I agree with the statements in the podcast. I'd go as far as to say that there's no such thing as an exploratory tool. I'd accept "pseudo-exploratory", maybe. Exploration requires learning, and only humans can learn in that way. For a tool we have to encode how it interacts with the product, what oracles it can use and how to use them, how to make decisions... and we cannot encode it to cope with the implicature of that information, nor can we code it to be inspired to do anything we haven't yet thought of. Humans can notice a symptom, follow it, design new tests, learn from the results, design further tests, and so on, leveraging heuristic principles to find problems or build better mental models. They can get inspired by results and use that to explore new ideas based on their understanding of risk (heuristics such as "it's bad when people get hurt" that are hard to teach to a machine).
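
To put some flesh on what "encode what oracles it can use" means, here's a hedged sketch (the order response and its fields are invented for illustration):

# Everything this script can ever notice was decided before it ran.
def scripted_probe(order_response, unit_price):
    oracles = [
        order_response["status"] == 200,               # oracle 1: fixed in advance
        order_response["total"] == 3 * unit_price,     # oracle 2: fixed in advance
    ]
    return all(oracles)

print(scripted_probe({"status": 200, "total": 30,
                      "email_body": "?????", "ship_to": "the wrong continent"},
                     unit_price=10))   # True!

# The confirmation email is gibberish and the order ships to the wrong continent,
# but the script happily reports a pass. A human explorer might notice, follow the
# symptom and design a new test; the script cannot, because nobody encoded that
# observation or that judgement.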

Metrics

"How do I measure my manual testing and have visibility into the value of my manual testers?"
"How do I have visibility into the value of my automation?"

Richard gave a great answer to this.

There is no way to usefully quantitatively measure the value (someone's regard of the importance or usefulness of something) of a tool. That's the problem, it's subjective. Don't automate to save money. Automate when it serves the test strategy. Are you willing to invest in the quality and speed of the information testing provides, or are you looking to replace an expensive but necessary service? You wouldn't buy a new IDE for your development team with fancy code tools so that you could fire some of your developers and get return on investment, so you shouldn't do it with testing (unless your testers are THAT replaceable, in which case your problems can't be solved with a tool). As Richard says, "Automation is there to serve the team".

I'm always curious as to why people want to value their tools like that. I understand the hesitation if someone is buying an automated check execution tool under impossible promises from a vendor, but I'm more interested in reality. Let me ask this: How do you get visibility into the value of your car? Your car (or a car you might buy if you don't have one) is a tool that you can use. Well, you can look at the cost of the car, maintenance and fuel but you have to measure it against how useful it is. How do you measure how useful something is? It's a subjective matter. What do you use your car for? Could you do without a car? What's the difference in utility between having the car and not having the car? What are the drawbacks? How do you factor those into the utility? What about things you haven't thought of?

Other

I also really liked Richard's point around here of "automation is dumb" and that it doesn't need to be intelligent because we have intelligent testers to use it.

This wasn't even half of the content of the podcast, so I'd recommend listening to the whole thing. They go on to discuss automation tools beyond automatic check execution and how to use them to inform mental models, the kinds of people who do different types of testing, reporting and so much more. I'd like to write about this too but I only have so much time!

Thanks to STP, Mark Tomlinson and Richard Bradshaw for a great podcast!


Thursday, June 19, 2014

I Failed The ISTQB Practice Exam - Part VI & Final Thoughts

Series: Part I, Part II, Part III & IV, Part V

I was on 19/36...

Part VI - Tool Support For Testing

37. From the list below, select the recommended principles for introducing a chosen test tool in an organization? 

1. Roll the tool out to the entire organization at the same time. 
2. Start with a pilot project. 
3. Adapt and improve processes to fit the use of the tool. 
4. Provide training and coaching for new users. 
5. Let each team decide their own standard ways of using the tool. 
6. Monitor that costs do not exceed initial acquisition cost. 
7. Gather lessons learned from all teams. 

a) 1, 2, 3, 5.
b) 1, 4, 6, 7.
c) 2, 3, 4, 7.
d) 3, 4, 5, 6.

My Answer: B - only one without 3, except 6!
ISTQB Says: C

Obviously 3 is stupid, so (b) must be the answer! Except that it also has 6, which is ridiculous. 1 would vary depending on the tool (does the CEO need RapidReporter?), 2 might be important if it's a big tool, 3 means being shaped by our tools, which is dangerous, 4 might be necessary if they haven't used the tool before and it's complex or we want to use it in a certain way, 5 is a possibility (I don't teach anyone how to use Excel or a log viewer in a specific way, for example), 6 seems weird and 7 seems like a good idea. I suppose if it wasn't for 3 I might have picked C.

And who's the guy recommending these principles? Come on, someone must have had to. Unless they're sent down from on high etched into stone tablets or something.

Score: 19/37


38. Which one of the following best describes a characteristic of a keyword-driven test execution tool?

a) A table with test input data, action words, and expected results, controls execution of the system under test
b) Actions of testers recorded in a script that is rerun several times.
c) Actions of testers recorded in a script that is run with several sets of test input data
d) The ability to log test results and compare them against the expected results, stored in a text file.

My Answer: A
ISTQB Says: A

Another point for yours truly as he deftly answers the challenge of keyword-driven test execution by choosing the only answer that contains any test execution.
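
For anyone who hasn't met the term, here's a hedged sketch of the idea behind (a): a table of action words, inputs and expected results interpreted by a small driver. The table, keywords and fake product below are all invented for illustration.

# Keyword-driven execution: a table of (action word, inputs, expected result)
# drives the system under test via a small driver that interprets each row.
KEYWORD_TABLE = [
    ("enter_text", {"field": "username", "value": "brian"}, None),
    ("click",      {"target": "submit"},                    None),
    ("read",       {"field": "page_title"},                 "Dashboard"),
]

def run(table, actions):
    """actions maps each keyword to a function that drives the product."""
    for keyword, inputs, expected in table:
        result = actions[keyword](**inputs)
        if expected is not None and result != expected:
            return False          # a failed check, not necessarily a problem
    return True

# A fake product, just good enough to show the mechanism:
fake_actions = {
    "enter_text": lambda field, value: None,
    "click":      lambda target: None,
    "read":       lambda field: "Dashboard",
}
print(run(KEYWORD_TABLE, fake_actions))   # True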

Score: 20/38


39. Which of the following is NOT a goal of a Pilot Project for tool evaluation?

a) To evaluate how the tool fits with existing processes and practices.
b) To determine use, management, storage, and maintenance of the tool and test assets.
c) To assess whether the benefits will be achieved at reasonable cost.
d) To reduce the defect rate in the Pilot Project.

My Answer: D
ISTQB Says: D

I wonder what this all has to do with testing? I can name animals but I'm not a vet.

Score: 21/39


40. Below is a list of test efficiency improvement goals a software development and test organization would like to achieve. 
 
Which of these goals would best be supported by a test management tool? 
 
a) To build traceability between requirements, tests, and bugs.
b) To optimize the ability of tests to identify failures.
c) To resolve defects faster.
d) To automate selection of test cases for execution.
 
My Answer: D
ISTQB Says: A

What's a test management tool? A tool to manage testing? I do that myself. Unless you mean to manage the test project? I do that too. What does this even mean? I chose D because A is pointless, B is impossible, C is just plain false (the opposite may be true) and there are tools that can generate test cases for you based on inputs (like Hexawise).

Score: 21/40

The Result

I scored 21/40, which is 52.5%. I required 65% to pass, which would mean I'd have to have answered 26 of the 40 questions correctly.

This means that I needed to get 5 more points to have passed the exam and gotten Foundation level certification. It means that I am permitted to answer 14 of these questions incorrectly and get Foundation level certification.

I don't want to disturb anyone, but it seems to me that it's remarkably easy to get this certification provided you play ball and answer based on what you think they want to hear.

Final Thoughts

Well, we've all had some fun over the last few days, but there are some serious points to be made here. I got 52.5% on this exam by forgetting to answer questions, being facetious, answering honestly and not studying the course materials or syllabus. We can probably say that if I'd studied I would have passed, and so would nearly anyone else who wanted to. We can also probably say that if someone within the test industry can get such a low score, it's questionable whether the content of the exam tests for useful knowledge.

I know I've berated some of the answers in this exam. I expected to. I took this thinking that a little first-hand knowledge might help to clear up for me why the ISTQB is so hated by some. I expected a highly academic, tightly-written exam that glossed over social sciences and self-learning in exchange for repetition of terms and testing the knowledge of techniques and documentation. I found that, but what I found exceeded my own expectations of such a divide. I have enough problems with the questions in this exam to ensure no interest in the materials. Taking this test has taught me so much about the ISTQB exam already, and frankly the exam paper reads like a scam. It's like renaming fruit with names of vegetables, then showing you pictures of the fruit and getting you to utter the name of the associated vegetable. Then charging you money and giving you paper in order to pass a contrived barrier to the field of testing, and everything that that implies.

Firstly, there's the quality of the questions themselves. There's imprecise language, misused commas, grammatical errors, terms given in confusing English, and questions that can be logically reduced to 50% guesses or solved without testing knowledge. Secondly, the content of the questions. It's well known that the ISTQB exam does not test for skill or ability. It tests for knowledge. But what knowledge, and how useful is it? Is it useful to know what equivalence partitioning is? Yes. But what about how it works, when it's used, how it's useful, when it's not useful, when it fails, in what situations it's probably more costly than valuable? What about practising on real world examples? What about defending an opinion on it, not just presenting one or being given one by the exam board? What about development of skill? None of this seems to be tested in the exam. Don't get me wrong, it may or may not be taught on the course (I've never studied for it and I've never taken it) but the exam doesn't test for it. So no matter how good the course is, people who misunderstand testing will pass because of the quality of the exam questions, and people who do understand testing will fail because of the quality of the exam questions.

What I found was confused, badly written, badly designed questions asking pointless things that do not test even the knowledge of a factory school course let alone an understanding of testing and how to do it. No matter what your camp, school, field, knowledge, skill or experience I'd seriously consider examining the claims of this certification versus what it actually is capable of doing. If you want to be a good tester you have to ask questions of the thing under test. Be a good tester, and ask questions of certifications. You'll come up with your own opinion, you certainly don't need me to tell you that, but you can't have an informed opinion without first informing yourself. And if you still want to take the certification, you're going to need to take the material with enough salt to risk sodium poisoning.

Let's just say this: if I showed you someone who got 100% on this exam, would it positively influence your decision to hire them?

So what are your thoughts? Any insights on the questions? Anything you disagree with? Let me know in the comments, or contact me on Twitter @kinofrost.

Again, the exam and marking sheet are all under copyright owned by the ISTQB.

Good follow-ups to this post by better writers are below:

James Bach - http://www.satisfice.com/blog/index.php?s=istqb
Alan Richardson - http://blog.eviltester.com/2008/05/a-short-history-of-my-iseb-software-testing-certification-involvement.html
Rob Lambert - http://thesocialtester.co.uk/certifications-are-creating-lazy-hiring-managers/
Pradeep Soundararajan - http://testertested.blogspot.co.uk/2013/04/the-istqb-scam-and-why-you-should-sign.html

I Failed The ISTQB Practice Exam - Part V

Series: Part I, Part II, Part III & IV, Part V, Part VI and Final Thoughts

I was on 15/28...

Part V - Test Management

29. Which of the following best describes the task partition between test manager and tester?

a) The test manager plans testing activities and chooses the standards to be followed, while the tester chooses the tools and controls to be used. 
b) The test manager plans, organizes and controls the testing activities, while the tester specifies, automates and executes tests. 
c) The test manager plans, monitors and controls the testing activities, while the tester designs tests. 
d) The test manager plans and organizes the testing and specifies the test cases, while the tester prioritizes and executes the tests. 

My Answer: None / it depends
ISTQB Says: B

It depends on the company. In A sometimes testers choose tools to be used, in D sometimes the tester specifies test cases and the manager prioritizes testing.
What's really interesting to me is the answer in B (also in C) that the manager controls the testing activities. Only if you hire dead testers and tie strings to them like a macabre puppeteer. Hey testers, do you have any control over your test activities? That is to say, when you're doing the things you do to test, do you have any control over the things you do to test? Does your manager have any direct control over this at all?

Well you're wrong because it's B.

Score: 15/29



30. Which of the following can be categorized as product risks?

a) Low quality of requirements, design, code and tests.
b) Political problems and delays in especially complex areas in the product.
c) Error-prone areas, potential harm to the user, poor product characteristics.
d) Problems in defining the right requirements, potential failure areas in the software or system.

My Answer: C
ISTQB Says: C

A is project, B is project, D is project, C is product. Me good test now.

Score: 16/30


31. Which of the following are typical test exit criteria?
a) Thoroughness measures, reliability measures, test cost, schedule, state of defect correction and residual risks.
b) Thoroughness measures, reliability measures, degree of tester independence and product completeness. 
c) Thoroughness measures, reliability measures, test cost, time to market and product completeness, availability of testable code.
d) Time to market, residual defects, tester qualification, degree of tester independence, thoroughness measures and test cost.

My Answer: A
ISTQB Says: A

What are "test exit criteria"?  For me that's things like "end of the session" or "I'm bored now" or "Not much more I can do here" or "time for a break". Things like that. How do we measure thoroughness? Seriously, measure my thoroughness, I dare you. Then measure my reliability. I like to use a tricorder. State of defect correction is a test exit criterion?
This doesn't make any sense. I'm so confused. Oh, the answer was a guess.

Score: 17/31


32. As a Test Manager you have the following requirements to be tested: 
Requirements to test: 
R1 - Process Anomalies – High Complexity 
R2 - Remote Services – Medium Complexity 
R3 – Synchronization – Medium Complexity 
R4 – Confirmation – Medium Complexity 
R5 - Process closures – Low Complexity 
R6 – Issues – Low Complexity 
R7 - Financial Data – Low Complexity 
R8 - Diagram Data – Low Complexity 
R9 - Changes on user profile – Medium Complexity 


Requirements logical dependencies (A -> B means that B is dependent on A): 
 
How would you structure the test execution schedule according to the requirement dependencies? 

a) R4 > R5 > R1 > R2 > R3 > R7 > R8 > R6 > R9.
b) R1 > R2 > R3 > R4 > R5 > R7 > R8 > R6 > R9.
c) R1 > R2 > R4 > R5 > R3 > R7 > R8 > R6 > R9.
d) R1 > R2 > R3 > R7 > R8 > R4 > R5 > R6 > R9.

My Answer: I wouldn't.
ISTQB Says: C

Who in the name of buttery buttocks structures a test execution schedule ahead of time? If you've worked on a serious product and had a test execution schedule (a schedule for executing tests) that actually says when which tests are executed, and stuck to it, I'll give you 20 quid. Why are we assuming that this document is accurate? Who wrote it and why? Are these really the dependencies? According to whom? What about changes to the project? What about parallel activities or early availability of functionality or documentation? Can we express the logical relationship between explicit requirements with an arrow? Can we do it with implicit/tacit requirements at all? Hint: no. There's an infinite number of places where this schedule can fail. Is it important to schedule by dependency in whatever this example would be in real life? These are the interesting questions.
So no, I would not just write a test execution schedule, I'd sketch a plan based on things like availability, testability, deadlines, deliverables, and not just complexity but RISK.
I'd like to do an experiment. Change the question to just show the diagram, the answers, and a big question mark and see if the pass rate for this question remains the same.

Score: 17/32



33. What is the benefit of independent testing?

a) More work gets done because testers do not disturb the developers all the time.
b) Independent testers tend to be unbiased and find different defects than the developers.
c) Independent testers do not need extra education and training.
d) Independent testers reduce the bottleneck in the incident management process.

My Answer: D?
ISTQB Says: B

What is independent testing? Independent of what? Freedom of thought? I could recommend some books and scientific papers indicating that independent testers have a raft of biases. And what evidence is there that they find "different" defects? Different how? Do they necessarily and all the time find only different defects? This question is so infuriating.

Score: 17/33


34. Which of the following would be categorized as project risks?

a) Skill and staff shortages.
b) Poor software characteristics.
c) Failure-prone software delivered.
d) Possible reliability defect (bug).

My Answer: All of them, obviously
ISTQB Says: A

Okay, if you have a skill/staff shortage that could pose a risk to your project, depending on what the skill and staff shortages are. I mean, we have a huge skill shortage when it comes to arc welding, architecture and railway maintenance, but it's never seemed to come up.
If the software has "poor characteristics" then that could be a project risk. For example if the product has the poor characteristic of "not very testable" then testers may not be able to provide fast useful feedback to test clients which means that people are making uninformed decisions which is a project risk.
Delivering failure-prone software is a project risk. It's a risk to the success of the project. Delivering a failure-prone product is definitely a risk to the success of the project. I don't know how else to put this.
A "possible reliability defect" is a risk (as it's only a possible bug) and it threatens the success of the project so it's a project risk.

Am I just being dense? Anyway, the correct answer is A.

Score: 17/34


35. As a test manager you are asked for a test summary report. Concerning test activities and according to IEEE 829 Standard, what should you consider in your report? 

a) The number of test cases using Black Box techniques.
b) A summary of the major testing activities, events and its status in respect of meeting goals.
c) Overall evaluation of each development work item.
d) Training taken by members of the test team to support the test effort.

My Answer: B
ISTQB Says: B

I don't know the IEEE 829 standard. That didn't help as I wasn't allowed to look it up. I've never needed to. I've never worked in an IEEE 829 standard environment. And if I did, the company would ensure I knew IEEE 829 well enough. So why is there a question on it here? Do they teach IEEE 829 for the exam? But hey, if there's anything that says "efficiency" it's standardisation of test documentation, right? How about "report what's needed to the right person in the right way at the right time"? I like that as a standard. I'll call it Common Sense 829.
I guessed the answer they wanted. The actual answer is: you'd consider them all if you felt it was necessary to summarise your testing. Say you needed to train your test team to use a helpful tool in order to better test the product? I'd include that in my report to a manager who wants to know what I've been doing with my time and why more testing wasn't done.

Score: 18/35


36. You are a tester in a safety-critical software development project. During execution of a test, you find out that one of your expected results was not achieved. You write an incident report about it. What do you consider to be the most important information to include according to the IEEE Std. 829? 

a) Impact, incident description, date and time, your name.
b) Unique id for the report, special requirements needed.
c) Transmitted items, your name and you’re feeling about the defect source.
d) Incident description, environment, expected results.

My Answer: A?
ISTQB Says: A

Again, I don't know the standard. A is the only one that contained both description and impact. D was another candidate as it contained environment and expectation (oracle information). What the gibbering testicle are "transmitted items" anyway? I don't know "you're" feeling on the matter.*

* Remember, this is the practice exam directly downloaded from the ISTQB's own website, provided by them as a workable example.

Score: 19/36


Next time, the thrilling conclusion and my final thoughts! See you then!

Wednesday, June 18, 2014

I Failed The ISTQB Practice Exam - Part III and IV

Series: Part I, Part II, Part III & IV, Part V, Part VI and Final Thoughts

Welcome back! Static Techniques is short, so I've thrown in Test Design Techniques at no extra charge. I was on 7/13...

Part III - Static Techniques

14. Which of the following are the main phases of a formal review?

a) Initiation, status, preparation, review meeting, rework, follow up.
b) Planning, preparation, review meeting, rework, closure, follow up.
c) Planning, kick off, individual preparation, review meeting, rework, follow up.
d) Preparation, review meeting, rework, closure, follow up, root cause analysis.

My answer: ?
ISTQB says: C

A formal review of what? Any review of anything that conforms to a particular structure. Okay. So any of these could be the main phases of a review that is formal. The question is actually asking "which of the following are the main phases of the kind of formal review that I'm thinking of right now". I couldn't have answered this without reading the material and memorising a specific kind of formal review.

Score: 7/14


15. Which TWO of the review types below are the BEST fitted (most adequate) options to choose for reviewing safety critical components in a software project? 
Select 2 options. 

a) Informal review.
b) Management review.
c) Inspection.
d) Walkthrough.
e) Technical Review.

My answer: C & E, it depends
ISTQB says: C and E

I gave myself the marks, because I had the same answer. I don't know why these are always the most adequate options... unless they're just the most adequate out of the selection. An informal review (by my terminology as any review that is informal) can have powerful results in terms of learning. A review by management may be helpful in some circumstances - even a review of management might help. Inspection is the stupidest answer I've ever heard for anything - of course you're going to inspect the safety critical components. 1/40th of this exam rides on the answer to the question "do you think you should look at safety critical components when reviewing them?" being "yes". Walkthroughs are incredibly useful for understanding the structure, interface and purpose of components. Technical reviews are... also useful. I just picked two because the question told me to pick two, but I could debate the case for each. Oh, and what the hell is a "component" in the context of the project, and what is the project for? To replicate the feeling I'm having here, imagine having George R R Martin on your radio show about improving creative writing and asking him "so, which is better, pencil or pen?".

Score: 8/15


16. Which of the following statements about static analysis is FALSE?

a) Static analysis can be used as a preventive measure with appropriate process in place. 
b) Static analysis can find defects that are not easily found by dynamic testing.
c) Static analysis can result in cost savings by finding defects early.
d) Static analysis is a good way to force failures into the software.

My answer: D
ISTQB says: D

What an utter waste of words. You don't need me to explain this to you, do you? Which of the following is FALSE about Test Technique Foo? a) It can be used to do good things, b) It can help with stuff, c) It's totally good, d) It will make your software worse by magic. Thank you, now I completely understand your knowledge of Test Technique Foo.

Score: 9/16


Part IV - Test Design Techniques

17. One of the test goals for the project is to have 100% decision coverage. The following three tests have been executed for the control flow graph shown below.

Test A covers path: A, B, D, E, G. 
Test B covers path: A, B, D, E, F, G. 
Test C covers path: A, C, F, C, F, C, F, G. 



Which of the following statements related to the decision coverage goal is correct? 

a) Decision D has not been tested completely.  
b) 100% decision coverage has been achieved.  
c) Decision E has not been tested completely.  
d) Decision F has not been tested completely.  

My answer: I want nothing to do with this
ISTQB says: A

Anyone that can follow a set of arrows could have answered this question, but I stuck to my principles with this one. Obviously the path A, B, D, F isn't covered by the test cases. But why should we care?
Firstly, this is a diagram and not a program. How helpful is it to have this diagram without knowing what it's modelling? What is it leaving out? What ARE these decisions? And what can we honestly and reliably say about them being "tested completely"? Okay, so you've gone through your if statement, it's gone in each direction and it hasn't errored. What value does that give you? What have you learned about the system and whether it's fit for use?
Secondly, what on earth are these tests, and what does it mean to "cover" a path? Are the paths really covered by those tests? That's an important question. What about hidden paths? Say you're testing a web application and the session times out. Say you pull the plug out of your desktop machine. That's a path too. You can look at the decisions independently (see what conditional statements the workflow relies upon) but what does it mean to weave them all together in this graph?
I will not answer this question without my curiosity assuaged, sir. And for my impertinence I am punished by one point.
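
For the curious, though, here's roughly what "decision coverage" amounts to on something small enough to see. This is a hedged sketch with a made-up function, and notice how little taking every decision both ways actually tells you:

def apply_discount(total, is_member):
    if is_member:              # decision 1: needs a True run and a False run
        total *= 0.9
    if total > 100:            # decision 2: needs a True run and a False run
        total -= 5
    return total

# These two calls take every decision both ways - 100% decision coverage:
apply_discount(200, True)      # decision 1 True, decision 2 True
apply_discount(50, False)      # decision 1 False, decision 2 False

# And yet we've learned nothing about whether 0.9 or the 5 are even right,
# or what happens with a negative total, or a None, or a string...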

Score: 9/17


18. A defect was found during testing. When the network got disconnected while receiving data from a server, the system crashed. The defect was fixed by correcting the code that checked the network availability during data transfer. The existing test cases covered 100% of all statements of the corresponding module. To verify the fix and ensure more extensive coverage, some new tests were designed and added to the test suite. 

What types of testing are mentioned above?

A. Functional testing.
B. Structural testing. 
C. Re-testing. 
D. Performance testing. 

a) A, B and D.  
b) A and C.  
c) A, B and C.  
d) A, C and D.  

My answer: Type C, so none
ISTQB says: C

Was functional testing mentioned? Well the statement "A defect was found during testing" could be any testing. "When the network got disconnected while receiving data from a server, the system crashed" could be as part of testing I suppose, and if it is then the type of testing used wasn't mentioned.  Re-testing is mentioned, as something was tested again. Performance testing - we'll never know if someone was specifically testing for performance issues.

Score: 9/18


19. Which of the following statements about the given state table is TRUE? 



a) The state table can be used to derive both valid and invalid transitions.
b) The state table represents all possible single transitions.
c) The state table represents only some of all possible single transitions.
d) The state table represents sequential pairs of transitions.

My answer: C
ISTQB says: B

Okay. A is actually true. It could be used to derive valid and invalid transitions. It does not represent all possible single transitions that exist because this is an abstract model. Therefore C is correct. D is...

I'm wasting my time. Why is this important? How am I a better tester for this? The more I analyse what I wrote the more I question why I bothered to write it.

Score: 9/19


20. Which of the following statements are true for the equivalence partitioning test technique? 
A. Divides possible inputs into classes that have the same behaviour.
B. Uses both valid and invalid partitions.
C. Makes use only of valid partitions.
D. Must include at least two values from every equivalence partition.
E. Can be used only for testing equivalence partitions inputs from a Graphical User Interface.

a) A, B and E are true; C and D are false.
b) A, C and D are true; B and E are false.
c) A and E are true; B, C and D are false.
d) A and B are true; C, D and E are false.

My answer: D
ISTQB says: D

This sort of examines if I understand what equivalence partitioning is. Not what it's for, how it's used, when it fails, just what it is. A is basically the case, B is also basically the case. If B is true then C can't be true. D is just an arbitrary rule. E is the realm of the unhinged.

Knowing that E is probably wrong even to the layman, we can narrow down our choices to (b) and (d). Both contain A as true. B and C are mutually exclusive, so we don't have to know D. So with a little applied logic we can safely say that this question can be replaced with:

Does the equivalence partitioning test technique make use of both valid and invalid partitions? YES/NO.

Just trying to save you the ink, guys.

Score: 10/20


21. Which TWO of the following solutions below lists techniques that can all be categorized as Black Box design techniques?
Select 2 options. 

a) Equivalence Partitioning, decision tables, state transition, and boundary value.
b) Equivalence Partitioning, decision tables, use case.
c) Equivalence Partitioning, decision tables, checklist based, statement coverage, use case.
d) Equivalence Partitioning, cause-effect graph, checklist based, decision coverage, use case.
e) Equivalence Partitioning, cause-effect graph, checklist based, decision coverage and boundary value.

My answer: A and B
ISTQB says: A and B

Okay, cross out equivalence partitioning as it's in all of them. Decision tables split A, B and C from D and E, so the answer is either (d) and (e) or some part of (a), (b) or (c). See, we're learning testing! Now then, "checklist based" is found in (c), (d) and (e) which means that (c) cannot be correct. So it's either (a) and (b) or (d) and (e) - down to a 50% chance of guessing!

(a) and (b) contain: EP, DT, ST, BV, UC.
(d) and (e) contain: EP, CEG, CB, DC, UC, BV.

Removing common elements:
(a) and (b) contain: DT, ST
(d) and (e) contain: CEG, CB, DC

So is the answer "Decision tables and state transition" or is it "cause-effect graph, checklist based and decision coverage".

Well, the shorter list is more likely, I went for that one.

That is the entirety of the knowledge I used in answering this question. This is not an interesting or useful question.

Score: 11/21


22. An employee’s bonus is to be calculated. It cannot become negative, but it can be calculated to zero. The bonus is based on the duration of the employment. An employee can be employed for less than or equal to 2 years, more than 2 years but less than 5 years, 5 to 10 years, or longer than 10 years. Depending on this period of employment, an employee will get either no bonus or a bonus of 10%, 25% or 35%. How many equivalence partitions are needed to test the calculation of the bonus?
a) 3.
b) 5.
c) 2.
d) 4.

My answer: D
ISTQB says: D

It CANNOT BECOME NEGATIVE. The laws of nature simply preclude it from being so. God himself has written the validation - etched it into the soul of the universe. If you can count a list of items you can guess the probable "answer" to this question. Of course, in reality, context is needed, it depends, questions should be asked, blah, blah.
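
For the record, the counting they're after goes something like this. A hedged sketch - the bonus function itself is invented, since the question doesn't give one:

# The question's four duration partitions, one representative value each:
#   <= 2 years, > 2 and < 5 years, 5 to 10 years, > 10 years
def bonus_percentage(years_employed):
    if years_employed <= 2:
        return 0
    elif years_employed < 5:
        return 10
    elif years_employed <= 10:
        return 25
    else:
        return 35

for years in (1, 3, 7, 12):        # one value per partition -> "4"
    print(years, bonus_percentage(years))

# Whether the partition edges are right, whether "years" can be fractional,
# negative or missing - that's where the actual testing would start.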

Score: 12/22


23. Which of the following statements about the benefits of deriving test cases from use cases are most likely to be true? 
A. Deriving test cases from use cases is helpful for system and acceptance testing. 
B. Deriving test cases from use cases is helpful only for automated testing. 
C. Deriving test cases from use cases is helpful for component testing. 
D. Deriving test cases from use cases is helpful for testing the interaction between different components of the system. 

a) A and D are true; B and C are false.  
b) A is true; B, C and D are false.  
c) A and B are true; C and D are false.  
d) C is true; A, B and D are false.  

My answer: All of them, so none of the answers.
ISTQB says: A

There are situations in which all of them are most true. A can be true, B can be true, C can be true, D can be true. Depends on the test cases and the use cases and the situation. Usually use cases are flawed and partially missing with mistakes in them and not maintained for changes, so I was looking for "Deriving test cases from use cases is probably a bad idea". What does it even mean? Derive them from use cases? You mean do the use cases and see what happens? Well then that's all of them, isn't it? Someone please tell me I'm wrong; I really want to know why A is correct and I'm wrong.

Thinking about it, likelihood is a very hard thing to calculate. We'd need a statistically significant sample of such derivations, selected without bias, and to know when each was helpful and when it was not.

What a bloody stupid question.

Score: 12/23


24. Which of the below would be the best basis for fault attack testing?

a) Experience, defect and failure data, knowledge about software failures.  
b) Risk analysis performed at the beginning of the project.  
c) Use Cases derived from the business flows by domain experts.  
d) Expected results from comparison with an existing system.  

My answer: A?
ISTQB says: A

I don't know what fault attack testing is and I guessed A. I don't know why it's better than risk analysis or whatever but I know faults (something that allows a product to exhibit a problem) and failures (something the product does that we wish it wouldn't) are directly linked. Threats cause faults to show failures. So I guessed A.

Score: 13/24


25. Which of the following would be the best test approach when there are poor specifications and time pressures? 

a) Use Case Testing.  
b) Condition Coverage.  
c) Exploratory Testing.  
d) Path Testing.  

My answer: C
ISTQB says: C

C is the only approach listed. Which of these vehicles has the highest top speed? a) Banana, b) Spoon, c) Motorbike, d) Soil

Score: 14/25


26. Which one of the following techniques is structure-based?

a) Decision testing.  
b) Boundary value analysis.  
c) Equivalence partitioning.  
d) State transition testing.  

My answer: D?
ISTQB says: A

What does that even mean? Structure, to me, means code and hardware and sample data and packaging and paper documents - the building blocks of the product. So decision testing - well it's not based on structure. Boundary values... again, not based on structure. None of them are, I just got desperate and put D.

Score: 14/26


27. You have started specification-based testing of a program. It calculates the greatest common divisor (GCD) of two integers (A and B) greater than zero. 

calcGCD (A, B); 

The following test cases (TC) have been specified. 


INT_MAX: largest Integer 

Which test technique has been applied in order to determine test cases 1 through 6? 

a) Boundary value analysis.
b) State transition testing.
c) Equivalence partitioning.
d) Decision table testing.

My answer: A & C
ISTQB says: A

I have started specification-based testing, have I? Okay, what else dwells within this dystopian hell? Looks boundary value-y. Also, to separate the nature of the inputs, they've been put in equivalence classes. But I suppose it depends on what the damn specifications contain, doesn't it?
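
And "looks boundary value-y" would translate to something like this. A hedged sketch - I can't reproduce the exam's actual table of test cases here, and calcGCD below is just a stand-in:

import math

INT_MAX = 2**31 - 1        # "largest Integer" in the question

def calc_gcd(a, b):        # stand-in for the specified calcGCD(A, B)
    return math.gcd(a, b)

# Boundary value analysis on "integers greater than zero": the interesting
# values sit at the edges of the valid range, i.e. around 1 and INT_MAX.
boundary_inputs = [(1, 1), (1, INT_MAX), (INT_MAX, INT_MAX), (2, INT_MAX)]
for a, b in boundary_inputs:
    print(a, b, calc_gcd(a, b))

# 0 and INT_MAX + 1 would be the invalid-side boundaries - whether calcGCD is
# even supposed to reject them is exactly the "what do the specs say" question.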

Score: 14/27


28. Consider the following state transition diagram and test case table: 


Which of the following statements are TRUE?

A. The test case table exercises the shortest number of transitions.
B. The test case gives only the valid state transitions.
C. The test case gives only the invalid state transitions.
D. The test case exercises the longest number of transitions.

a) Only A is true; B, C and D are false.
b) Only B is true; A, C and D are false.
c) A and D are true; B, C are false.
d) Only C is true; A, B and D are false.

My answer: B
ISTQB says: B

"The test case gives only the valid state transitions". I'm guessing they mean the test case table. Let me just say that I don't really fully understand the nature of the question, I just guessed this to be correct.

Score: 15/28

Anyone else as exhausted as I am? If not, meet me next time for "Test Management"!

Tuesday, June 17, 2014

I Failed The ISTQB Practice Exam - Part II

Series: Part I, Part II, Part III & IV, Part V, Part VI and Final Thoughts

Welcome back, failure fans! For those following closely I'm currently on 5/7.

Next up...

Part II - Testing Throughout The Software Life Cycle

8. Which statement below BEST describes non-functional testing?

a) The process of testing an integrated system to verify that it meets specified requirements. 
b) The process of testing to determine the compliance of a system to coding standards. 
c) Testing without reference to the internal structure of a system.  
d) Testing system attributes, such as usability, reliability or maintainability.  

My Answer: D
ISTQB Says: D

There is plenty of room here for the non-functional vs quasi-functional arguments. Also D isn't the only possible answer, there's an argument for A, B and C as they are all possibly "non-functional" testing, but D just seemed like the obvious choice. A little too obvious.

Score: 6/8


9. What is important to do when working with software development models? 

a) To adapt the models to the context of project and product characteristics. 
b) To choose the waterfall model because it is the first and best proven model. 
c) To start with the V-model and then move to either iterative or incremental models. 
d) To only change the organization to fit the model and not vice versa.  

My Answer: A
ISTQB Says: A

Why would anyone choose Waterfall or the V-model "just because", without looking at it as a solution for their company? The organisation must change to fit the model, but obviously we don't "only change" it that way around. That leaves A, the most positive-sounding one there is.

Also, why is this a foundation question for a tester? I test in the methodology I'm given, personally; maybe this is more variable than I expect.

Score: 7/9


10. Which of the following characteristics of good testing apply to any software development life cycle model? 

a) Acceptance testing is always the final test level to be applied.  
b) All test levels are planned and completed for each developed feature.  
c) Testers are involved as soon as the first piece of code can be executed.  
d) For every development activity there is a corresponding testing activity.  

My Answer: None
ISTQB Says: D

A is false (if we're applying "test levels") and isn't a "characteristic of good testing" anyway. I'm not sure about B; maybe one would do such a thing in some circumstance or other, but I don't think it's important. C is false; they should probably be involved as early as possible.

D is interesting. What does "corresponding" mean? Do they mean coding, or all development? Writing the source code is a development activity - what is the corresponding testing activity? Writing test code? Yes, testers can code, but if that's "corresponding" then what value does this information have? We can both drink coffee too. Here's a development activity: writing a use case document. What's the corresponding testing activity? Here's another: researching code libraries to handle file compression. What's the corresponding testing activity there? Maddening. So I say none of them are true. I am wrong, though; the answer was "D".

Score: 7/10


11. For which of the following would maintenance testing be used?

a) Correction of defects during the development phase.  
b) Planned enhancements to an existing operational system.  
c) Complaints about system quality during user acceptance testing.  
d) Integrating functions during the development of a new system.  

My Answer: C?
ISTQB Says: B

What on earth is "maintenance testing"? Here I feel that reading the material would have been helpful in answering the question. Correction of defects sounds like a coding endeavour (although what does it really mean?). Complaints during UAT sound like something handled during some sort of maintenance of the shipped product, which is why I guessed it. Planned enhancements - so testing the plans to add things to an existing system? Integrating functions during development - that's not even a testing activity.

But no, it's B. Okay, listen to this as the answer: "Maintenance testing would be used for planned enhancements to an existing operational system". Would it? What is it? Why would it be used? What makes it different to any other testing? If this is taught as part of the course it isn't tested in this exam.

I'll re-write this question's answers to show how effective I think it is:
For which of the following would maintenance testing be used?
a) Driving to school
b) Testing the user story cards for changes we want to make in the live environment
c) Testing calls we get during UAT
d) Making the tea

And guess what - the ISTQB site says Maintenance testing is "testing an existing system" - which means that ALL TESTING OF ANY EXISTING PRODUCT IS MAINTENANCE TESTING. (source: here, look at 2.4)

Gah! Damn you, marking sheet.


Score: 7/11


12. Which of the following statements are TRUE? 
A. Regression testing and acceptance testing are the same. 
B. Regression tests show if all defects have been resolved. 
C. Regression tests are typically well-suited for test automation. 
D. Regression tests are performed to find out if code changes have introduced or uncovered defects. 
E. Regression tests should be performed in integration testing. 

a) A, C and D and E are true; B is false.  
b) A, C and E are true; B and D are false.  
c) C and D are true; A, B and E are false.  
d) B and E are true; A, C and D are false.  

My Answer: None
ISTQB Says: C

Regression testing and acceptance testing are obviously different. Nothing will show if all defects have been resolved... although I question what that even means. Regression tests are typically well-suited for test automation. That doesn't even make any sense - what kind of regression tests? What kind of automation? Anyway, automation is a tool to assist all testing, and automation is often expensive and difficult, so I reason that regression tests are not typically well-suited for test automation. If you want more examples I could say that regression tests (tests to look for a backslide in quality) are not necessarily or even usually checks. I don't know about D. Is it equivalent to say "to find out if code changes have introduced or uncovered defects regression tests are performed"? If so, then I could get behind D as an answer. E... well, maybe, it depends.
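
To be fair to the automation-friendly reading, when people say regression tests suit automation they usually mean something like this - a repeatable check pinned to previously observed behaviour. This is my own sketch with an invented apply_discount function, nothing from the exam:

    import unittest

    # Invented function under test, purely for illustration.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100.0), 2)

    class DiscountRegressionChecks(unittest.TestCase):
        # Each check is pinned to behaviour we believe was correct before the latest change.
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100.00, 10), 90.00)

        def test_zero_percent_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()

Those are checks, of course, not the whole of regression testing - which is rather my point.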

Score: 7/12


13. Which of the following comparisons of component testing and system testing are TRUE?

a) Component testing verifies the functioning of software modules, program objects, and classes that are separately testable, whereas system testing verifies interfaces between components and interactions with different parts of the system.
b) Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases.
c) Component testing focuses on functional characteristics, whereas system testing focuses on functional and non-functional characteristics.
d) Component testing is the responsibility of the technical testers, whereas system testing typically is the responsibility of the users of the system.

My Answer: A
ISTQB Says: B

C and D are clearly bizarre. I imagine a written driving test with answers like "a) run him over, b) run over his child, c) run over his dog, d) stop at the red light".

A - Component testing verifies the functioning of (components), whereas system testing verifies (the system). Seemed okay to me, even if the wording was rather dodgy. Apparently not, though, as the answer is "test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases". I don't really know what that means. Are they explicit or tacit? Are they specifications and models, or specification and model documents? What about the other oracles for testing? Why would the model we build for testing be reliant on only one set of specifications? Aren't test cases derived from tacit contextual information as much as from explicit specification? Otherwise how would you know the value of your testing? And what about test cases derived from testing - surely they're more common?

Just me being over-critical in my thinking, because it's definitely B on the score sheet.

Score: 7/13

That's it, exam fans! Meet up with me next time for "Static Techniques"!

Monday, June 16, 2014

I Failed The ISTQB Practice Exam - Part I

Series: Part I, Part II, Part III & IV, Part V, Part VI and Final Thoughts

Oh, the ISTQB! A source of confusion and marketing evil that the devil himself could learn from. Or maybe it's the one standard of the beginners of our industry that sets some common domain language bringing everyone together in a way that will eventually unify the world under a flag of peace.

Either way I wanted to take the ISTQB practice paper and see how I'd do if someone just thrust it in front of me. So here were the rules:

1. No looking anything up.
2. No reading of the ISTQB materials, syllabus, website, etc.
3. Try to answer honestly
4. Use this practice paper: http://www.istqb.org/downloads/finish/41/87.html (ISTQB CTFL Sample Exam Paper), marked using the marking sheet: http://www.istqb.org/downloads/finish/41/82.html (Answer Sheet for Practice Exam Version 2011)
5. If the ISTQB's marking sheet says I don't get marks then I don't get marks.

Note: The ISTQB is the source and copyright owner of this practice exam. I republish any materials therein under this statement from the exam: "Any individual or group of individuals may use this Practice Exam as the basis for articles, books, or other derivative writings if ISTQB is acknowledged as the source and copyright owner of the practice exam."

The first thing I notice is that the practice exam is 4 years old, but that's what I got from their website.

So how did I do? Well, I failed. The pass mark is 26/40 and I got 21/40. Oops. Let's take a look at my answers!

Part I - Fundamentals Of Testing

1. Which of the following statements BEST describes one of the seven key principles of software testing?

a) Automated tests are better than manual tests for avoiding the Exhaustive Testing. 
b) Exhaustive testing is, with sufficient effort and tool support, feasible for all software. 
c) It is normally impossible to test all input / output combinations for a software system. 
d) The purpose of testing is to demonstrate the absence of defects.  

My answer: C
ISTQB says: C

This seems fairly simple to me. I don't know what "the Exhaustive Testing" is, though, or why it might want to be avoided.

Edit: Should point out that I also don't know what the "seven key principles of software testing" are. This is not required information to answer this question.
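
Just to put a rough number on why exhaustive testing is "normally impossible": even a function taking two 32-bit integers has more input combinations than anyone could ever execute. A back-of-envelope sketch, assuming a wildly generous billion checks per second:

    combinations = (2 ** 32) ** 2        # two independent 32-bit inputs
    checks_per_second = 1_000_000_000    # optimistic assumption
    seconds_per_year = 60 * 60 * 24 * 365
    years = combinations / checks_per_second / seconds_per_year
    print(f"roughly {years:.0f} years")  # about 585 years, before we even consider outputs or state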

Score: 1/1


2. Which of the following statements is the MOST valid goal for a test team?

a) Determine whether enough component testing was executed.  
b) Cause as many failures as possible so that faults can be identified and corrected.
c) Prove that all faults are identified.  
d) Prove that any remaining faults will not cause any failures. 

My answer: B
ISTQB says: B

I feel guilty about this answer. I put it because I felt it was the most valid goal for most situations out of the ones provided, but honestly I wonder now if that is the answer. B is certainly not a particularly valid goal in any case, as it doesn't specify what failures are being caused. As many failures as possible, or as many useful and informative failures? I mean, I could create an almost identical failure every 5 minutes and report it every 5 minutes and achieve this goal while I play Angry Birds and sip a can of Pepsi.

Score: A guilty 2/2


3. Which of these tasks would you expect to perform during Test Analysis and Design?

a) Setting or defining test objectives.  
b) Reviewing the test basis.  
c) Creating test suites from test procedures.  
d) Analyzing lessons learned for process improvement.

My answer: I forgot to answer this question
ISTQB says: B

I skipped over this question and forgot to come back to it. Would I expect to set or define test objectives during test analysis and design? What is test analysis and design? Presumably they mean analysis of the situation to design tests. Well, I'd probably want to define the objectives of a test I was designing (A). I don't know what "test basis" means - I think it's the thing that defines what the tests are going to be, so we're talking about the context and the product and things I find while I'm testing, so I will definitely do that (B). I might create test suites from test procedures, although that sounds very unlikely.

Score: 2/3


4. Below is a list of problems that can be observed during testing or operation. Which is MOST likely a failure?

a) The product crashed when the user selected an option in a dialog box.  
b) One source code file included in the build was the wrong version.  
c) The computation algorithm used the wrong input variables.  
d) The developer misinterpreted the requirement for the algorithm.

My answer: A
ISTQB says: A

I got this question wrong. ISTQB says I didn't, but again this is a guilty answer. They're all failures. A is a failure of the product. B is a failure of whoever put the file there. C is a failure of whoever input those variables or permitted them to be input. D is a failure of the developer to interpret the requirement, or of the person telling them about the requirement to make themselves understood (unless they also misinterpreted the requirement, of course). I presume we're talking about an explicit requirement.

Score: A guiltier 3/4


5. Which of the following, if observed in reviews and tests, would lead to problems (or conflict) within teams? 

a) Testers and reviewers are not curious enough to find defects.  
b) Testers and reviewers are not qualified enough to find failures and faults. 
c) Testers and reviewers communicate defects as criticism against persons and not against the software product. 
d) Testers and reviewers expect that defects in the software product have already been found and fixed by the developers.

My answer: A, C, D
ISTQB says: C

Okay, I misread the question. I actually answered the question "Which... could lead to problems...?" If testers aren't curious enough to find defects then test clients will be annoyed at the lack of service. If they communicate defects as criticism against, say, the developers, this could cause a problem (but if they blamed the government, perhaps not). If testers expect that defects have been found and fixed then what are they doing there? I imagine this could cause all sorts of misunderstandings.

The sentence "Testers and reviewers are not qualified enough to find failures and faults." makes absolutely no sense.

The question actually specifies that they would lead to problems, so the correct answer (in my eyes) is that none of them would necessarily lead to problems.

Score: 3/5

6. Which of the following statements are TRUE? 
A. Software testing may be required to meet legal or contractual requirements. 
B. Software testing is mainly needed to improve the quality of the developer’s work. 
C. Rigorous testing and fixing of defects found can help reduce the risk of problems occurring in an operational environment. 
D. Rigorous testing is sometimes used to prove that all failures have been found. 

a) B and C are true; A and D are false.  
b) A and D are true; B and C are false.  
c) A and C are true, B and D are false.  
d) C and D are true, A and B are false.

My answer: C (a and c are true, b and d are false)
ISTQB says: C

Software testing is obviously sometimes required to meet legal or contractual requirements. That's like saying "sometimes you need to buy a ticket to ride the bus". It's also obvious that finding and fixing defects can reduce the risk of them occurring later on. Obviously the main point isn't to improve the developer's work.

There is a possible argument to be made for statement D being true. In a simple enough system with strictly defined parameters, and with a simple enough definition of failure (say, subjective to a single user), one could define a scenario where testing proves all failures have been found. Interesting thought experiment... probably we'd have to define failures here as "product failures".
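
As a toy version of that thought experiment (entirely made up by me), here's a function whose whole input space is small enough to check exhaustively against its spec:

    # Made-up example: two boolean inputs, so only four combinations exist.
    def should_alarm(door_open, system_armed):
        return door_open and system_armed

    # The spec written out in full: every possible input paired with its expected output.
    spec = {
        (False, False): False,
        (False, True): False,
        (True, False): False,
        (True, True): True,
    }

    for inputs, expected in spec.items():
        assert should_alarm(*inputs) == expected
    print("all 4 input combinations checked against the spec")

Even then it only "proves" anything under that narrow, product-failure definition of failure.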

I'm fairly sure my mum could answer this question with minimal prior testing knowledge. I might try the next series as "My Mum Passes/Fails The ISTQB Practice Exam", to compare results.

Score: 4/6

7. Which of the following statements BEST describes the difference between testing and debugging?
a) Testing pinpoints (identifies the source of) the defects. Debugging analyzes the faults and proposes prevention activities. 
b) Dynamic testing shows failures caused by defects. Debugging finds, analyzes, and removes the causes of failures in the software.
c) Testing removes faults. Debugging identifies the causes of failures.
d) Dynamic testing prevents causes of failures. Debugging removes the failures.

My answer: B
ISTQB says: B

I don't think any of these answers describe the difference between testing and debugging. I just guessed what sounded most sensible. Obviously testing is so much more than showing failures caused by defects, but the first half of the statement is true. The second half isn't necessarily true. Arguments could be made for A (sometimes testing finds the source of a defect, sometimes debugging reveals prevention activities).

Score: 5/7


Well that's it so far. Come back next time for Part II - Testing Throughout The Software Lifecycle.

Wednesday, April 9, 2014

TestBash Retrospective #4 - Automation: Time to Change Our Models

Automation: Time to Change Our Models

Mr Iain McCowatt took a relaxed but jaunty stance and delivered the first words about his talk on automation: "This talk isn't about automation".

And therein lay the point. He continued: "It's about people". This was a talk about replacing testing-as-a-goal with the purpose of testing, in the context of our tools. Too often, seeking out what to automate replaces the questions of why automate in the first place, and how.

A great little heuristic I latched onto was the change in perspective from automation as a replacement for the test process to using tools as instruments, the way a scientist would use a thermometer (the example used was the invention of the barometer to solve a liquid dynamics problem). But it doesn't stop there! How you view your tools also provides a filter for your choice of tools - and your choice of tools will change your behaviour and how the testing problem is perceived. Getting this wrong at the outset corsets the possibilities and the value that testing can provide.

There was also talk of our mental constraints, which can stop people from solving problems. The example used was testers asking for access to the production environment. Why? Because they needed to do something there that they couldn't do in the test environment. Why? Because they could no longer do testing in the test environment. Why? Because there was no more disk space in the test environment. So the belief that they couldn't ask for more disk space led to a bad solution to a fairly simple problem. The use of WHY as a powerful questioning tool against imposed constraints was, I think, a glaringly obvious yet often overlooked and forgotten point. Another example given was goal displacement - the instruction from on high to automate tests because of a recent goal-setting meeting, rather than it being a solution to a problem.

Considering our test tools as instruments lets us look at how tools can be used to help specific tasks - implement tools to solve problems (whilst understanding their limitations and flaws) rather than find tools to replace existing tasks for the sake of automating them.

This was a talk that is better than I can describe in a blog post. It really illuminated the power of a shift in perspective on tools, rather than mitigating individual risks by patching them with automation tools.

Wednesday, April 2, 2014

TestBash Retrospective #3 - Inspiring Testers

Inspiring Testers - Stephen Blower

This was an amazing talk. An interesting story of a journey from factory-style testing to context-driven. A story of empowerment (a common theme at this TestBash, it seems), and by the end there were plenty of inspired questions being asked. The story revolved around being bored and uninspired at work - something I'm very familiar with from past jobs where scripting was the order of the day and I didn't know enough to challenge it - and moving from place to place until inspired by Michael Bolton's work. We've all been there! For me it was a lecture on YouTube by James Bach.

Some of the points raised included:

Have Integrity - Stand by your principles, be prepared to defend your actions, be open, honest and truthful
Pragmatism - Be prepared to negotiate a solution. Be pragmatic and understanding.
Use Facts - Not assumptions
Use Paired Testing - Dev/Tester and Tester/Tester. Swap who drives and allow people to learn.
Don't Say Pass/Fail - Use problem known/no problem known

There were more, but I won't go into too much detail. I loved a little tip that was mentioned for dealing with the infuriatingly named "Pass" button in JIRA - that it means to "pass it along". Some people audibly smiled.

The thing that really hit me hard, mainly because it's so blindingly obvious whilst being a vital reminder for me, is to stop complaining and take action. Provide solutions. Get up and do a good job, rather than sit around complaining about it. I need to do this more - I do tend to whine when I should be doing something about the problem, and that is born, basically, from fear. As a few people said at TestBash (I think including Mr Blower himself) "I don't need anyone's permission to do a good job".


How's this working for y'all? It's pretty rough first-draft reporting but it's actually satisfying (instead of my usual state of unease) when I hit "publish".

Tuesday, April 1, 2014

TestBash Retrospective #2 - Don't Be Vague, Respect the Bug Report

Anna's 99 Second Talk

Anna Baik [@TesterAB] (no less) gave a talk on being less vague. The cornerstone of the talk was to be more specific in our language so that we're respected within and outside testing - in our company and beyond. To talk about skills instead of just naming processes. To use words like modelling and exploring in place of test cases and scripts. Be specific and proud of what you do, and talk about it in a professional way, for the benefit of you and the software industry.


Amy's 99 Second Talk

Amy Phillips [@ItJustBroke] (no less) gave a talk called "Respect The Bug Report", with a personal admission of a lapse in bug report quality. A brave thing to do, and it gives me the confidence to admit that I have been known to lapse in my communication. The thrust of this talk was to remember to respect your communication channels - to provide your test clients with good quality information, which is after all half the purpose of a tester. I've done some work on this at my own company and I found that talking to developers (I know! Crazy!) and asking for feedback is a great way to identify the information they want from a bug report, and which of the information you provide they actually use. You can streamline your communications by understanding the needs of your audience. Don't forget that testing and then keeping the information to yourself is a waste of time - testing isn't an end in itself, but a means to an end.

It strikes me at this point that all the people I've begun with are women. Hey, non-testers, if you're wondering where all the female talent is in the tech industry, I think we stole them all. Sorry*.
*not sorry.