Monday, February 8, 2010

Methods for testing in a scripted test design environment

The original title of this post was "Methods for exploration in the scripted environment". However, I went a step further, using Michael Bolton's distinction, to sharpen the point of the discussion. I know that not all of my points in this post adhere to Michael's definitions in his article (for example, about using specs), but I permit myself to use his most basic definitions for the sake of this post.

Michael Bolton, in his famous post about Testing vs. Checking, defines:
Checking Is Confirmation
Checking is something that we do with the motivation of confirming existing beliefs. Checking is a process of confirmation, verification, and validation. When we already believe something to be true, we verify our belief by checking…. Checking is focused on making sure that the program doesn't fail.


Testing Is Exploration and Learning
Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn't anticipated, we're testing. We're testing when we're trying to find out about the extents and limitations of the product and its design, and when we're largely driven by questions that haven't been answered or even asked before…


Now that we have refreshed our minds with Bolton’s definition of Testing, the challenge of doing testing in a scripted environment is clearer. In other words – when we are in a scripted environment that naturally emphasizes following the written instructions (which is actually checking), how do we encourage testers to do testing when scripted test design is a given? This is the case in most of the testing that I am involved in, although ET is getting more and more space in the activities that I am part of.


I would like to list a few methods that I have used, learned or witnessed. I don't have a “preferred one”; I like them all:

“Run, then read” – Read the test title, design the test in your mind or on a piece of paper, run the test, then read the test steps and complete any items you hadn't thought of, if there are any.

“Do something new” – In this method, test results of “Pass” or “Fail” are not enough. The test report tool also has a mandatory field: a description of an additional test that you performed.
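To make the idea concrete, here is a minimal Python sketch of such a report record, where the "additional test" note is enforced rather than optional. The class and field names are my own invention, not from any particular test-management tool:

```python
from dataclasses import dataclass


@dataclass
class TestReport:
    """One executed scripted test, plus the mandatory exploration note."""
    test_id: str
    result: str           # "Pass" or "Fail" – the scripted outcome
    additional_test: str  # description of an extra test the tester invented

    def __post_init__(self):
        # "Pass"/"Fail" alone is not enough: reject an empty exploration note.
        if self.result not in ("Pass", "Fail"):
            raise ValueError("result must be 'Pass' or 'Fail'")
        if not self.additional_test.strip():
            raise ValueError("describe the additional test you performed")
```

A tool built this way simply refuses to accept a report that contains checking alone.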

“Reveal the sources” – The test case author attaches, links or copies the requirements and/or design document as a relevant part of the test case. When the basis of the test design appears in front of the tester as part of the test design itself, the tester is invited to review the logic behind the design again and to think of new ideas.

“Dialogue with the script” – Use the script as an exploration tool: see what the test case isn’t telling me, and do so in each of its steps. “Do A, B, C; expect D” becomes: “Does it matter where A started from? What happens to B if A isn’t done? In how many ways can I do C? What else happens to the system apart from D?”
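This questioning can even be turned into a checklist generator that testers run against each scripted step. A minimal Python sketch – the function name and question templates are my own illustrations, not an exhaustive catalogue:

```python
def dialogue_questions(actions, expected):
    """Turn a scripted step 'do actions; expect result' into exploration questions.

    actions: ordered list of scripted actions, e.g. ["A", "B", "C"]
    expected: the scripted expected result, e.g. "D"
    """
    questions = [
        f"Does it matter what state the system was in before '{actions[0]}'?"
    ]
    # Question each step's dependence on the step before it.
    for prev, step in zip(actions, actions[1:]):
        questions.append(f"What happens to '{step}' if '{prev}' is skipped?")
    questions.append(f"In how many different ways can '{actions[-1]}' be done?")
    questions.append(f"What else happens to the system apart from '{expected}'?")
    return questions
```

For the step “Do A, B, C; expect D”, `dialogue_questions(["A", "B", "C"], "D")` yields the kind of questions listed above, one per line of dialogue with the script.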

“Doubt everything” or “Challenge everything” – Start the tests by considering the test case wrong, equivocating and lying; everything is disputed (sometimes aggressively). The script told us to do something, and we didn’t agree with it – or with the design that generated it, or with the requirement that generated the design, or even with the customer who generated the requirement.

(Thanks to my colleagues: BatSheva, Itzik, Shmuel and Jose who inspired me with these and other methods during the last decade.)


Environmental factors that increase Testing among testers:

Focus on bugs, less on coverage: a culture of “Fail is good” – for example, a “Bug of the week” discussion in each team meeting.

Emphasize the testers' responsibility for quality, not just for test execution.

Encourage continuous learning – reading the requirements (again…), learning the product's technologies, and holding continuous discussions with the customers (and their representatives in the organization), architects and developers.



Are you (still) working in a scripted environment? How do you increase testing? Please tell me.