Tuesday, November 26, 2013

Post Festum Code Changes

Just a quick one.

The mind of a programmer is rife with models of states and UI and objects and relationships while they're coding something. Later they're coding something else, with different models of states and UI and objects and relationships. Ask them to go back and fix a bug in the first thing they wrote and they will do so from a fresh perspective. Ask them to go back and add functionality to old code and they'll do that with a fresh perspective too.

Here's an example. An administration page was developed to add users to a system. It was built after absorbing the relevant information, which included the fact that the person adding a user can choose an employee ID and type it in - the system complains about any duplicates, preventing the user from being saved. To choose NOT to have an employee ID, the default of 00000 must be removed manually or updated to something new. The system seemed to be working great; then it was discovered that the customer wanted a way to add lots of users from a list of first names and last names. This was added, but the person adding it forgot about the requirement for users to have unique (but optional) employee IDs, and the code that turned the list into users injected the 00000 default employee ID into all of them, bypassing the input validation!
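As a sketch (every name here is invented for illustration, not taken from the real system), the shape of the bug might look like this in Python: the original admin path validates employee IDs for uniqueness, while the later bulk-add path builds users directly and bypasses that validation entirely:

```python
# Hypothetical reconstruction of the bug described above. All names invented.
DEFAULT_EMPLOYEE_ID = "00000"  # the UI default that should be removed or replaced

class DuplicateEmployeeId(Exception):
    pass

users = []  # each user is a dict: {"name": ..., "employee_id": ...}

def add_user(name, employee_id=None):
    """The original admin-page path: rejects duplicate employee IDs."""
    if employee_id is not None:
        if any(u["employee_id"] == employee_id for u in users):
            raise DuplicateEmployeeId(employee_id)
    users.append({"name": name, "employee_id": employee_id})

def bulk_add_buggy(names):
    """The later 'post festum' addition: builds user records directly,
    injecting the UI default and skipping add_user's validation."""
    for name in names:
        users.append({"name": name, "employee_id": DEFAULT_EMPLOYEE_ID})

def bulk_add_fixed(names):
    """A fix: route every new user through the validated path, with no ID."""
    for name in names:
        add_user(name, employee_id=None)
```

The fix is the unglamorous one: the new feature must reuse the validated path rather than re-implement user creation from a fresh perspective.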

This is what I'm now calling a "Post Festum" code change. This is from the Latin usually meaning "too late" or "after the fact" - as in too much time has passed since the original coding. Helpfully, it literally means "after the feast", as in the main party is already over.

Tuesday, September 10, 2013

Scripts Part I - What is a script?

Let's Define!

My current definition of a script is:

A set of instructions made explicit by one agent such that another (or the same) agent can later interpret them in order to replicate a process to achieve a desired outcome or behaviour.

So a script is, essentially, a communication tool. It can be verbal or written, it can be instructions to follow in order or a set of unordered instructions, but it is always done with the intention that the instructions will be followed by someone or something.

Here are some examples of scripts:
A computer program
A food recipe
A shopping list
A checklist

Communication

Implicit information in communication

When humans talk to each other we express two elements of communication. In pragmatics they are known as explicature and implicature. Explicature is what is actually said; for example "We went shopping, had a meal and ended up in the cinema". Implicature is what is implied by what is said. In the above example one might assume that going shopping, having a meal and ending up in a cinema happened in that order, but the sentence remains logically true even if the order is changed. There's also an implication that going to the cinema involved watching a film as part of a day out but the sentence remains logically true if the group stepped into a cinema then turned around and immediately walked out.

These two elements of communication are important when considering the audience for a script - the script has to be interpreted in order to be followed, and that interpretation is what affects the behaviour that actually happens (as opposed to the one desired by the script author).

Computers vs People

A computer program is a kind of script. It's a set of instructions made explicit by a programmer that the computer follows to execute the intentions of the programmer. Computers only pay attention to explicature in script instructions. This can be translated into the more common phrase "computers only do what you tell them to do".

Computer scripts may fail because computers are very picky about their explicature (scripts must be syntactically accurate). They may fail because the computer will execute the script as it exists while ignoring implicit desires. Just because you write a comment in the script saying "this function will return the square root of a number" does not mean it will. Just because you used a variable name called "square_root_of_input" does not mean that's what the variable contains - and your computer cannot care. We write unit tests in code as a way to try to ensure that the behaviour of the code we write matches our intended purpose - those are the lengths we go to to compensate for how mindless computers are and how literally they interpret our scripts.
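To make that concrete (a hypothetical function, not from any real codebase): the name and docstring below promise a square root, the body does something else, and only an explicit check - not the comment or the name - exposes the difference:

```python
import math

# Hypothetical example: the name and docstring claim a square root...
def square_root_of_input(x):
    """This function will return the square root of a number."""
    return x / 2  # ...but the body halves the input instead

def check_square_root():
    """An explicit check of the behaviour, which the computer WILL act on."""
    for x in [0.0, 1.0, 4.0, 9.0]:
        assert math.isclose(square_root_of_input(x), math.sqrt(x)), \
            f"square_root_of_input({x}) != sqrt({x})"

# Running check_square_root() raises AssertionError at x=1.0 (0.5 != 1.0).
# Note the lie even passes for x=0.0 and x=4.0, where halving and square
# root happen to coincide - checks only report on what they actually check.
```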

Computers ignore implicature. They ignore context. And this is why they're so difficult to communicate with - human language is rife with implicature. We put up with this communications issue because computers are able to perform tasks (even ones that we are bad at or find impossible) in volume at an enormous rate.

A food recipe is a kind of script. It is a set of instructions made explicit by a person so that an (often) different person can follow them in order to end up with similar food. People have many powers that computers lack. They can understand implicit and tacit information. A recipe may fail because of a misunderstanding concerning implicit information. In the UK "crumble 3 plain biscuits" probably means McVities Digestives. In the US "biscuits" are basically a small bread, a bit like a nearly tasteless scone. Try making a biscuit base with those.

Because it's followed by a person, a recipe can carry certain implicit or tacit information. For example, if I'm told to use "self-raising flour" and I don't have any, perhaps I might use plain flour and a leavening agent. The explicature is "add 200g self-raising flour". The implicature is "add something that has similar properties to 200g of self-raising flour in the context of baking the pie this recipe describes". I know I can probably skip the "Add almonds" step but I can't very well skip "Bake at 180C for 40 minutes", because I can guess how differently they will affect the quality of the end product. This is the power humans have over computers.

Scripts in Testing

Automated Testing

Computers will interpret only the literal truth of explicature in scripts. This is the essence of checking to me - ignorance of implicature. This is a price we pay in exchange for speed - computers are much faster at making these checks.

Manual Scripts

Humans will interpret the implicature of a script. Humans are UNABLE to ignore implicature; we are simply not wired for it. It's part of our emotions. It's how we can process information in a heuristic way - not always accurately, but with the efficiency to be evolutionarily successful. It's how we communicate complex ideas and their impact. It's what allows us to create art.

This means we cannot perform checks (under the strictest definition of a check). A human check, I suppose, is an evaluation of a product made in the absence of knowledge that would guide implicature in specific observations - we still make certain natural inferences, but we can try to ignore parts of the context, or behave as if we don't know the context, in order to make a simple confirmation of the truth value of the explicature of an instruction (or what would be the explicature of an instruction were it uttered).

But why try to ignore inferences when we can use them? Humans have enormous power over computers: we can investigate, learn, observe, interpret, infer, follow up, and ask for information. So why write scripts to tell people what to do? If they're powerful enough to investigate a product, surely it's cheaper and easier just to train them how to look for problems rather than explicitly tell them what problems to look for. There is an answer to that question which I hope to get to in a separate post.

What Is/Is Not A Script (In Testing)?

Things That Are Scripts (In Testing)


Automated Check Scripts

Automated "tests" are a set of instructions (the code) made explicit by one agent (the coder) such that another agent (the computer/compiler/interpreter) can later interpret (run/compile/interpret) them in order to replicate a process (making many checks quickly and forming some kind of output with the data) to achieve a desired outcome (the results of specific observations of the system in a way that a human can easily interpret, understand and evaluate).
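For instance, a toy check script might map onto that definition like this (the check functions and runner are invented for illustration): the code is the explicit instructions, the Python interpreter is the second agent, and the printed summary is the output a human can evaluate:

```python
# A minimal automated check script. All names here are invented.

def check_addition():
    # A check: a specific observation with a truth-valued outcome.
    return 1 + 1 == 2

def check_string_upper():
    return "ok".upper() == "OK"

def run_checks(checks):
    """Interpret the instructions quickly and repeatably, then produce
    output a human can easily interpret, understand and evaluate."""
    results = {check.__name__: check() for check in checks}
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return results

results = run_checks([check_addition, check_string_upper])
```

Note that the computer replicates the process and reports the explicature; evaluating what the PASS/FAIL lines *mean* for the product is still left to a human.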

Things That Aren't Scripts (In Testing)


Notes

Notes are not usually scripts. Notes are simply anything made explicit for the purposes of later reference. Scripts are more specific - they refer to a set of instructions to be later interpreted to achieve a desired behaviour or outcome. Not all notes consist of a set of instructions and not all notes are designed to achieve a desired behaviour or outcome. All scripts are notes. Not all notes are scripts - in fact so few are that we can make an abductive leap and say that when someone says "notes" they don't mean "scripts", because if they did they'd likely have said "scripts" to differentiate them from "notes".

Missions/Charters

A mission or charter is similar to a script in that it contains some explicit information and is used to communicate something in order to achieve a desired outcome or behaviour. The difference is that the communicated information is not a list of instructions to replicate a process. The instructions are not made explicit - the desired outcome or behaviour is. The steps taken (the set of instructions that replicate the process) are left up to the agent. It could be that a mission or charter contains a checklist (a type of script), so missions and charters can use the power of scripts. It may note a list of things that must be tested, information that's especially important, instructions on how to configure the system for the testing, and so on.

So while a recipe is a kind of script, a note that says "We need a sponge cake. Plenty of lemon icing, please, and that fancy swirly decoration thing on the top." is a mission, or charter. The recipe is left up to the chef - or maybe the recipient of the note will go out and buy a cake, or commission a custom one.

Things That Might Be Scripts (In Testing)


Checklists

A checklist is simply a list of checks. The problem here is that how much information is left implicit in that list of checks determines whether the checklist matches my definition of a script. Remember, a script is "a set of instructions made explicit by one agent such that another (or the same) agent can later interpret them in order to replicate a process to achieve a desired outcome or behaviour". This means that if the checklist has the intention and capacity to replicate a process then it's a script. If the process is left to the agent interpreting the checklist to achieve the desired outcome or behaviour then it's not. I'd go as far as to say that most checklists are not scripts.


I'll update the list above as I find more things that are confused with scripts.

Thanks for reading, now please disagree with me!

- Kinofrost

Monday, April 8, 2013

Bad Testing Definitions

Life has become busy. I'm giving presentations and talks, I've taken up morning running, I'm getting involved with local groups and so on.  As a result I haven't had any time to blog, tweet or otherwise social media anything. So... here's my list of previously-tweeted "Bad Testing Definitions"!


#1: Focus/Defocus - what your eyes do after staring at the same page of the interface for an hour
#2: Exploratory Testing - Taking your work laptop with you when you go camping
#3: Risk-based testing - deciding your next test action using test techniques on post-it notes and a dartboard
#4: Test Coverage - what you wear while testing
#5: Test Cases - the capitalization of your text-based test data
#6: White-Box Testing - testing light-coloured modal windows in the UI
#7: Stress Testing - all testing up to and including 1 week before release
#8: Cross-Browser Testing - When you’re web testing and the browser throws a tantrum
#9: Touring - Moving companies once every 6 months
#10: Risk Impact - What you feel when you realise you didn’t test something important
#11: Claims Testing - Tests performed on functionality a customer has complained about
#12: Performance Testing - Making a song and dance about reports and metrics instead of doing any testing
#13: Test vs Check - How much testing you do for what you get paid each month
#14: Regression Testing - Testing while invoking the knowledge and experience of past lives
#15: Model-based testing - Fashion show focus groups
#16: Acceptance Testing - coming to terms with the state of the software
#17: Alpha Testing - Winning arguments online about testing term definitions
#18: User Acceptance Testing - confirming receipt of payment from customers
#19: Conformance Testing - Telling the project manager what they want to hear
#20: Volume Testing - Testing while sitting near to development on a Friday afternoon
#21: Sanity Testing - Installing the fifth broken build in a row
#22: Fuzzy Penetration Testing - Some of these write themselves.
#23: End to End Testing - Eating at a new Indian restaurant
#24: Agile Testing - Dodging questions about the state of the product
#25: Best Practice
#26: Exhaustive Testing - Running out of coffee
#27: Black Box Testing - Using electronic devices while on a flight
#28: Static Testing - Touching the product whilst wearing a polycotton onesie
#29: All-Pairs Testing - The male component of a test team
#30: Backward Compatibility Testing - Idiot-proofing