The Real Future of Software Testing

It is a common notion that we can learn about the future by looking at the past. At the very least, by tracing the paths that were travelled to get us to the here and now, we gain an understanding of where we stand. Through that single point we may then draw lines from the past and project them into the future.

This technique is frequently used to make inferences about the future of software testing. Many of the visions are based on extrapolating market trends in software development. If, in the recent past, we noticed a growing use of mobile devices, it is likely that we will see continued growth in mobile testing. If security emerged as a hot topic in the past couple of years, security testing will continue to be of interest. Most of these predictions, about cloud, mobile, Agile, security and big data, for example, can be found in the regular forecasts of IT research companies such as Gartner, Ovum and Forrester. To draw a picture of the future of testing, the only thing left to do is to add ‘testing’ to each of the trends. It is that simple.

But do ‘cloud testing’, ‘mobile testing’, ‘Agile testing’ and all the other collocations actually tell us something about the future of software testing? In many cases, not specifically. Take mobile testing. It is a certainty that the development of applications for mobile devices carries with it a great number of specific technologies, tools and challenges. These things affect the day-to-day work of the tester, who has to grasp new domains, new tools, new ways of working and novel perspectives on software usage. We know they affect the work, but what we fail to investigate, or even notice, is how they affect the craft; how the basic paradigms in software testing evolve because of whatever happens in software development. It is an important realization that by mapping the future of software testing onto shifting technologies and, for example, changing perspectives on software usage, we focus on how the work is affected, but not on the underlying paradigms that drive the craft. This, to me, is not helpful in identifying the ways in which software testing evolves. In the worst case it causes regression, giving way to views that functional testing is an ancient and obscure specialism for which the need is rapidly waning (Whittaker, 2011).

To further elucidate this example I would like to look at popular test automation tools such as Selenium, HP QuickTest Professional or Watir. The evolution of software testing has become intertwined with test automation tools in such a way that, as we focus increasingly on familiarity with tools, knowledge of testing is dispersed. While a tool clearly extends the reach and capabilities of functional testing, it does not advance the paradigms that drive the testing effort. Tests still need to be created, and the intelligence with which they are created is one of the factors that seriously affect the success of test automation. The tool merely amplifies intelligent use, or the lack thereof. Everyone knows the old adage that ‘a fool with a tool is still a fool’. By this adage, while we educate hosts of ‘Selenium testers’, ‘mobile testers’ or ‘cloud testers’, what we get may still be only a handful of testers who grasp the paradigms of functional testing and are able to use the tool successfully. From this particular point of view, a term such as ‘Agile tester’ is nothing more than an empty vessel.
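
To make this point concrete, here is a minimal sketch (not from the original article) of what a scripted check with Selenium WebDriver looks like, using the Python bindings. The page URL and element locators are hypothetical, invented purely for illustration; the point is that the tool only drives the browser and verifies whatever the tester decided to check, so the value of the check still depends entirely on the test design behind it.

    # Minimal, hypothetical Selenium WebDriver sketch (Python bindings).
    # The URL and element locators are made up for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.org/login")                     # hypothetical page
        driver.find_element(By.ID, "username").send_keys("alice")   # hypothetical locators
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()

        # A single, shallow expectation. Nothing in the tool tells us which
        # inputs, states or risks deserve a check; that remains test design.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

Whether this check is worth running, and which of the countless other possible checks should accompany it, is exactly the kind of question the tool cannot answer for us.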

If we want to take a look at the future of software testing, we have to look at what is left when we strip from it the knowledge of tools, technologies or domains. Functional testing is, among other things, the art of investigation by experimentation, and for this we basically have two paradigms: (functional) specification-based test design using (formal) test techniques, and exploratory test design. Test specification techniques were mostly created in the 1970s, while exploratory testing was introduced (formally) by Cem Kaner in 1988 (Kaner, 1988). Both ways of investigating software are recognized as established points of view in the testing literature. And since they have been around for quite a while, the question arises whether there has been a long pause in the growth of software testing as a discipline. In our collective view, dominated by the perspectives of those who casually couple software testing with the latest software development infatuation, this may be the case.
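
As an aside, a small sketch may help to show what ‘specification-based test design using (formal) test techniques’ looks like in practice. The example below applies boundary value analysis, one of the classic techniques from that era, in Python. The specification and the function under test (validate_quantity) are hypothetical, invented for illustration.

    # Hypothetical specification: an order quantity is valid if it is an
    # integer between 1 and 100 inclusive.

    def validate_quantity(quantity: int) -> bool:
        """Hypothetical implementation under test."""
        return 1 <= quantity <= 100

    # Boundary value analysis derives test cases mechanically from the
    # boundaries of the specification: each boundary and its nearest neighbours.
    boundary_cases = {
        0: False,    # just below the lower boundary
        1: True,     # lower boundary
        2: True,     # just above the lower boundary
        99: True,    # just below the upper boundary
        100: True,   # upper boundary
        101: False,  # just above the upper boundary
    }

    for value, expected in boundary_cases.items():
        assert validate_quantity(value) == expected, f"unexpected result for {value}"
    print("all boundary value checks passed")

The technique is mechanical once the specification is fixed; the skill lies in reading the specification critically and in noticing what it does not say, which is where exploratory test design comes in.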

So what we fail to notice is the real way forward in functional testing. Advances in this area, especially in exploratory testing, have been made. In 1996 James Bach introduced the Heuristic Test Strategy Model (Bach, 1996), drawing on research into heuristic discovery and problem solving by Herbert Alexander Simon (Simon, 1957) and the Hungarian mathematician George Pólya (Pólya, 1945). In 2008 Julian Harty presented his talk Six Thinking Hats for Software Testers at StarWest (Harty, 2008). Borrowing from the ideas of the Maltese psychologist Edward de Bono (De Bono, 1985), Harty introduced the notion of different ways of thinking into software testing. In recent years Michael Bolton has published on tacit knowledge in software testing (Bolton, 2011), drawing on work by the British sociologist Harry Collins (Collins, 2010) and the Hungarian philosopher Michael Polanyi (Polanyi, 1966), who originated the concept.

Among the other scientific concepts that have been introduced into software testing is systems thinking, as conceived by the Austrian biologist Ludwig von Bertalanffy (Von Bertalanffy, 1968). The notion that we should look at systems as a whole and not as a sum of parts was applied to software engineering by the great Gerald Weinberg (Weinberg, 1975). Another concept is that of grounded theory, which is, in essence, the building of a theory through the qualitative analysis of data. It was introduced by the sociologists Barney Glaser and Anselm Strauss (Glaser and Strauss, 1967) and applied to software testing by Rikard Edgren (Edgren, 2009).

The list above is by no means exhaustive. For now it suffices to say that if we regard software testing as skillful investigation by experimentation, we should try to benefit from what we know about investigation and experimentation. As we have seen, this knowledge comes from different areas of scientific research. For the future of software testing to be bright, it must be built on these foundations.

References and further reading
Bach, James. Heuristic Test Strategy Model (1996)
Von Bertalanffy, Ludwig. General System Theory (1968)
Bolton, Michael. Shapes of Actions (2011)
De Bono, Edward. Six Thinking Hats (1985)
Collins, Harry. Tacit and Explicit Knowledge (2010)
Edgren, Rikard. Grounded Test Design (2009)
Glaser, Barney and Strauss, Anselm. The Discovery of Grounded Theory (1967)
Harty, Julian. Six Thinking Hats for Software Testers, StarWest (2008)
Kaner, Cem. Testing Computer Software (1988)
Polanyi, Michael. The Tacit Dimension (1966)
Pólya, George. How To Solve It (1945)
Simon, Herbert Alexander. Models of Man (1957)
Weinberg, Gerald. An Introduction to General Systems Thinking (1975)
Whittaker, James. All That Testing is Getting in the Way of Quality, StarWest (2011)


Responses to The Real Future of Software Testing

  1. pencildot says:

    Technology is ever changing, and testers need to keep upgrading their knowledge to understand and use these gadgets in order to test them, but I like the adage: a fool with a tool is still a fool!

  2. bh dikjaw says:

    Could you please be more specific about tools and technologies?
