In November 2013 I attended the official launch of the test approach Situational Testing (Dutch: ‘Situationeel testen’) by the Dutch test consultancy SYSQA. I was interested in the approach because it claims to be inspired by the work of a number of context-driven testers, among whom James Bach, Michael Bolton, Cem Kaner and our very own Huib Schoots. It was my aim to find out how much this new approach appealed to my sense of what software testing is about. I also wanted to see how this approach could benefit the work of the context-driven community.
Situational Testing can be seen as an approach to testing in which the selection of the test approach (Dutch: ‘testvorm’) is based on a set of project aspects (the ‘situation’). The test approaches available in Situational Testing are ‘Factory based testing’, ‘Global scripting’, ‘Session based testing’, ‘Bug hunts’, ‘Test tours’ and ‘Freestyle exploratory testing’. An exhaustive list of project aspects is not provided, but the approach focuses on the business value of the system that is being developed and the questions that testing needs to answer (the goal, or added value, of testing). From this, the degree to which tests should be scripted is derived, with ‘full scripting’ and ‘no scripting’ at the extremes of the scale. The approach claims to be pragmatic and flexible in that it tailors the amount of scripting to what is valuable to the organisation.
Situational Testing is an addition to the Dutch testing landscape, which is dominated by methodologies such as TMap Next and ISTQB that emphasize a scripted, formal, process-driven approach to testing. SYSQA offers a new perspective in two ways. First and foremost, Situational Testing states that formal script-based testing does not always deliver the most value to a project. With this statement SYSQA identifies adding value to a project, rather than following steps in a process, as the premier purpose of testing. Secondly, Situational Testing clears the way for the recognition of relatively recent developments in testing. A significant difference between Situational Testing and TMap Next is that TMap proposes using exploratory testing when the specifications are unclear or missing, whereas Situational Testing suggests using exploratory testing wherever it can be of most value.
The Tester Freedom Scale
The centerpiece of the Situational Testing approach is an adaptation of Jon Bach’s Tester Freedom Scale (Bach, 2007). Bach published this sliding scale to model the variation in the degree of freedom a tester has when testing. In his own words, the scale models “the extent to which we are allowed to think”. Below, his model is displayed.
With regard to the Tester Freedom Scale, I think we should note two things. I believe it was Bach’s intention to offer an explanation for the degree of tester freedom involved in various testing situations.
So firstly, the model serves to explain observed phenomena. It offers a descriptive theory that may be verifiable. I might look at my own situation and observe to what degree my actions are prescribed and to what degree I am free to decide how to conduct my testing. There will be different ways to indicate the degree of freedom, but, for the sake of my argument, that is beside the point. Let’s just say that I am able to determine my degree of prescription and my degree of freedom, plot these on Bach’s scale, and thereby confirm or invalidate his theory.
What the Tester Freedom Scale does not tell us is how to define the testing approach. It does not offer a theory for the success of testing in various testing situations. The model does not, for example, say: when the scripts are vague, you should aim for a degree of freedom of 12% in order to achieve a testing success of 91% or higher. Bach’s Tester Freedom Scale is not a normative theory.
To many of you this may seem an extremely obvious point. You would probably argue that there is no point in directing the tester’s degree of freedom, and that telling a tester not to think is like telling a race horse to exclusively trot. During testing, the brain is always engaged. This was made clear long ago by Glenford Myers, who stated that “testing is an extremely creative and intellectually challenging task” (Myers, 1979).
The second observation is that Bach’s model describes circumstances in testing, not approaches to testing. As far as I can recall, there is, for example, no such defined testing approach or technique as ‘vague scripting’, while there is, in my fairly recent memory, a project in which the test scripts were vague. The point is important because the model does not link testing approaches to the degree of freedom or the degree of prescription. It does not offer a normative theory in which it might be stated that “if you want a degree of freedom of 60% then you should use testing tours (Whittaker, 2009),” or “if you desire 30% scripting then session-based test management (Bach, 2000) is the thing for you.”
Both points mentioned above are important because Jon Bach’s model is used – in an adjusted form – by SYSQA as a normative theory for testing. The modified Tester Freedom Scale (hereafter the SYSQA scale) is displayed below. The attribution of the SYSQA scale to Bach’s model can be found in the presentation material concerning Situational Testing (in Dutch, slide 16).
Many things can probably be said about the modifications that were made. One of the more striking adjustments is the replacement of the word “Freedom” on the right-hand side of Bach’s scale with the words “Non-scripted testing”. This essentially makes the right-hand (“Non-scripted testing”) scale redundant, because we must assume that the opposite of “Scripted testing” is “Non-scripted testing” and that a reduction in scripted tests implies an increase in non-scripted tests. Moreover, “Freedom”, or “the extent to which we are allowed to think”, is, from my point of view, something completely different from “Non-scripted testing”.
This leads us to a point that can be made about the SYSQA model and its use as a normative theory based on a distinction between scripted and non-scripted testing. It is true that scripts, in one form or another, are often a product of testing. But scripts, being merely products of a process, do not tell us everything about the thoughts that produced them, and therefore, in my opinion, they do not serve well as a criterion for the selection of a testing approach.
As can be concluded from the above, Situational Testing is a test management method. It describes a way to map a set of test approaches onto a project. The method acknowledges that the characteristics of the project are very important in determining the test approach. The white paper states: “The principle of Situational Testing is that the characteristics of the system, the project and the expectations of the organisation determine the test approach.” Yet for an approach that apparently places the context front and center, there is a conspicuous lack of methods to investigate the project context. As a method for determining the purpose of testing, Situational Testing uses an adaptation of Gojko Adzic’s pyramid of software quality (Adzic, 2012). While this allows us to determine the desired level of quality, it is not specifically a tool to investigate the project context. An example of such a tool can be found in the Heuristic Test Strategy Model (Bach, 1996), in which James Bach offers a rich set of heuristics. It seems to me that an approach that claims to be determined by the context needs at least one tool for the heuristic investigation of aspects of that context.
Situational Testing claims to be inspired by the context-driven community, which, of course, is a good thing. During the presentations that I attended in November, I therefore naturally expected the experience reports of testing executed using the situational approach to cover topics that are currently favored in the context-driven community. Yet concepts such as testing skills, problem solving, investigation and analysis, decision making, social sciences, heuristics, exploration, ambiguity and uncertainty, the scientific method, and various forms of thinking were hardly ever mentioned. From this list of topics it is clear that the context-driven school places the skills and judgment of the individual tester in the foreground. In an approach inspired by this fundamental principle, one would expect this to shine through in versatility, inquisitiveness and a focus on learning and creativity. It is not easy to recognize these aspects in the current version of Situational Testing.
My conclusion, after having read and heard about Situational Testing, is that it provides a welcome shift of focus from testing tied to specifications to testing as adding value. While it does not provide an in-depth view of testing, and while the modifications to Jon Bach’s model and its use probably stretch the intentions of the original, the approach may pave the way for testers to access and practise new ideas in testing. Beyond a doubt, that would be a step forward. However, for a Dutch context-driven approach to mature, I think a clearer view of what software testing is about is required.
Adzic, Gojko – Redefining software quality, personal blog, 2012
Bach, James – Heuristic Test Strategy Model, Satisfice, 1996
Bach, Jonathan – Session-Based Test Management, Satisfice, 2000
Bach, Jonathan – A Case Against Test Cases, Quardev blog, 2007
Myers, Glenford J. – The Art of Software Testing, John Wiley & Sons, 1979
Whittaker, James A. – Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design, Addison-Wesley Professional, 2009
The SYSQA material