DEWT4 Afterparty

From left to right: Jeanne Hofmans, Rob van Steenbergen, Jurian van de Laar, Peter Simon Schrijver, Jean-Paul Varwijk, Bernd Beersma, Huib Schoots, Arjen Verweij, Zeger van Hese, Joris Meerts, Markus Gärtner, Bart Broekman, Angela van Son, Pascal Dufour, Ard Kramer, Jeroen Mengerink, Kristoffer Nordström, Philip Hoeben, Daniël Wiersma, Joep Schuurkes, Duncan Nisbet, Eddy Bruin, Wim Heemskerk, Ruud Cox, Richard Scholtes, Ray Oei.

Below are a number of links to DEWT4 reports produced by participants. If you want to get an impression of what happened during this peer conference, then you’ve got to read them all.

On behalf of all the participants I’d like to thank the AST for the grant which helped in making this conference a success.


DEWT Experiences from the TestNet Autumn Event

This article is a translation of a two-part series of articles that was published in the TestNet News. The original articles can be found here and here. On behalf of DEWT, the author is Philip Hoeben.

On October 31, 2013 the TestNet Autumn Event took place in Nieuwegein (the full program of the event is displayed below this article). The members of the Dutch Exploratory Workshop on Testing (DEWT) were present at this conference. The theme of the event was ‘Exploring context-driven testing – a new hype or here to stay?’. In this article we want to share our experiences of the event with the TestNet community.

As a starting point it is important to know what DEWT means by context-driven testing. For this we would like to refer to the article by Huib Schoots in the TestNet News Fall Special October 2013 with the appropriate title ‘What is context-driven testing?’.


Ray Oei and Peter Schrijver conducted a workshop at the TestNet Autumn Event 2013.

Let’s start with the theme of the conference. A whole day of context-driven testing! Apparently the time had come for the Dutch testing community to set aside an entire day to examine this subject and to share experiences. Something we as DEWT applaud. From the considerable turnout we can conclude that many people were curious and eager to learn, even though for part of those present context-driven testing was still uncharted territory. This is already a success in itself.

How DEWT prepared for the conference

In preparation for the TestNet event, DEWT came together on Wednesday, October 23, along with James Bach, to talk about ‘What is context-driven testing?’ and ‘How to recognize a context-driven presentation?’. A report of that meeting and a list of context-driven presentation heuristics can be found on Huib’s blog.

The presentations

An important starting point for a presentation on context-driven testing is presenting a story based on experiences. One of the principles in the context-driven testing community is ‘There are no best practices’. Many of the speakers indeed picked this up by sharing experiences. Without stressing how something should be done in a particular context, the speakers told how a specific problem was addressed: which parts went well, which parts did not, and what lessons were learned. Gitte Ottosen, Jos van Rooyen and Rik Teuben told a story about an approach that worked and an approach that did not work. Wim ten Tusscher even merged elements from his personal life into his experiential story. Tim Koomen talked about best practices as a guide. These speakers were able to put in perspective the absolute character that best practices sometimes have, and regard them as practices that may or may not work in context.

There were also a number of presentations in which attendees were invited to participate in a discussion. Attending a presentation should be a matter of active participation, in which it is allowed to challenge each other. TestNet could be a good stage for such a presentation style, in which a provocative point of view may help clarify the subject. A fine example of this was Eibert Dijkgraaf’s presentation. Eibert conducted a ‘naked session’, meaning that he did not use slides. After a brief introduction, a discussion was held in which all participants could take part.


Jeanne Hofmans presented at the TestNet Autumn Event 2013 on how to test a tunnel.

An important aspect of context-driven testing is to use knowledge from other disciplines, even if this knowledge does not appear to be related to software testing at first sight. We want to stress ’at first sight’ because we think input from other disciplines is an important part of testing. In her keynote ‘Building a Thinking Portfolio’ (PDF), Karen Johnson did not focus on testing processes, techniques or technical means. She highlighted the human side of testing and talked about how we think and analyze. Her talk was also an experience report, in which she told how she became fascinated by the subject and why it is important for testing. Rikard Edgren (Curing Our Binary Disease, PDF) talked about a theory from the social sciences and explained with appropriate examples what this theory means for testing.

Arjen Verweij’s story was an experience report of his approach to studying (context-driven) testing. One of his strong points was that he only mentioned subjects that he had actually studied. Often it is not easy for a listener to gather what exactly the speaker is referring to. Arjen was able to pinpoint his references and his experiences with them. A study approach should not, however, turn into a recipe that will make you a context-driven tester within a pre-determined time frame.

With this, perhaps unintentionally, an interesting question is raised: can one become a context-driven tester? Or more generally, can one become a tester of a particular school? Or are you, from the first day of your career, already a tester belonging to a particular school of thought, irrespective of your knowledge, skills or experience, because you carry with you a certain worldview? We like to leave the answer to that question, and its implications open for discussion.

Aspects of the presentations that satisfied the context-driven presentation heuristics to a lesser degree

There is still a lack of understanding of certain aspects of context-driven testing, a number of which we would like to discuss here. One such aspect is the notion of context, which is often seen from a limited perspective. Examples of this are:

  • context is the assignment that is given to you by the customer, or
  • context is the development method that is used.

To us, context is not something that is static and pinned down, but dynamic: everything that happens in a project and everything that affects the project is part of the context and cannot, by definition, be laid down in a list. In order to deal with this, it is important that the tester understands what context is, that he is able to observe and analyze his context and that he has the skills to adapt to an ever-changing context. Talking about frameworks within which you should always work can keep you from being receptive to the ever-changing context.

The mixing of schools is another common misconception. A number of speakers, such as Wim ten Tusscher and Eibert Dijkgraaf, argued for an approach that consists of a mixing of schools in which you can choose, depending on the context, for one or the other school. From the context-driven perspective the verdict on this approach is very simple: it cannot be done! It is noteworthy that some speakers think it possible. The reason for the misunderstanding is that people confuse artifacts with paradigms. In every school you can use testing techniques, such as those described in, for example, TMap or ISTQB. Just as well you can use, in every school, session-based test management, which was introduced by the context-driven school. However, this is not a mixing of schools, but the use of artifacts from different schools. A factory school tester who uses context-driven artifacts is not all of a sudden a bit context-driven. And a context-driven tester who makes use of artifacts that have been described by the factory school is not suddenly to a certain extent a factory school tester. The answer to the question of what testing is or what it comprises does not change by mixing artifacts from different schools.

Another conspicuous point is the number of unsubstantiated claims, such as ‘context-driven testing cannot be applied in complex chain testing’ or ‘in a certain situation it is best to take an approach from a particular school’. When confronted with these claims our advice is to use three powerful questions from James Bach to stimulate critical thinking:

  • Huh? (Do I really understand what is being said?),
  • Really? (How do I know it’s true?) and
  • So? (Is this the only solution? So what?).

Unsubstantiated claims are often not based on experiences – let alone thorough research – but on assumptions that cannot be clarified or traced. If these assumptions are not examined critically, it is not possible to interpret them.


Jean-Paul Varwijk presented at the TestNet Autumn Event 2013 on focusing testing more on the results.

What is also striking is that context-driven testing is sometimes taken lightly. It is reduced to its (context-driven) principles, which then take on a universal character. Also, context-driven is sometimes equated with common sense. It is argued that context-driven testing is merely based on semantic discussions, or that it is something that everyone does but about which a group of people likes to make a lot of fuss. Here the context-driven community, and DEWT in particular, learned a valuable lesson, because apparently we have not been able to elucidate the importance of the paradigm, the context-driven semantics, what common sense means to context-driven testers, and why we love to talk about testing. Clearly, these are lessons learned for us.

What can be improved by the context-driven testing community, and specifically DEWT?

A frequently heard comment was that after a number of presentations, people still did not know what context-driven testing really is, or that, as previously stated, context-driven testers are a group of people who make things needlessly complicated. DEWT sets itself the task of explaining in a better way what context-driven testing is – without simplifying it. We want to take up this challenge immediately:

  • the first step is to share our experiences of the TestNet Autumn Event, and
  • the second step is an article for the TestNet News explaining context-driven testing.

Additionally, we are open to questions and discussions.

Looking back at the event, we, as DEWT, can say that it was an informative day, during which there was room for different voices and visions. We hope that some testers noticed that context-driven testing suits them and that they would like to know more about it. In that case, do not hesitate to get in touch with the DEWT members. And if you want to continue the discussion on context-driven testing within your company, or want to organize a meeting for your colleagues, we are here to help.

The pictures in this article were taken by Rik Marselis.

Program of the TestNet Autumn Event 2013



Announcing DEWT4

Preparing K-Cards

The 4th DEWT peer conference will take place February 7–9, 2014, at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT is a conference in the tradition of peer conferences on testing such as LAWST, LEWT and SWET.

The central theme is “Teaching Software Testing”, in all its life-forms.

The Twitter hashtag for this peer conference is #DEWT4.

This conference is for DEWTs and invitees only. 27 people will participate: Angela van Son, Ard Kramer, Arjen Verweij, Bart Broekman, Bernd Beersma, Bryan Bakker, Daniël Wiersma, Duncan Nisbet, Eddy Bruin, Huib Schoots, Jean-Paul Varwijk, Jeanne Hofmans, Jeroen Mengerink, Joep Schuurkes, Joris Meerts, Jurian van de Laar, Kristoffer Nordström, Markus Gärtner, Pascal Dufour, Peter Simon Schrijver, Philip Hoeben, Ray Oei, Richard Scholtes, Rob van Steenbergen, Ruud Cox, Wim Heemskerk and Zeger van Hese.


Steps toward Becoming Context-driven in the Netherlands

In November 2013 I attended the official launch of the test approach Situational Testing (Dutch: ‘Situationeel testen’) by the Dutch test consultancy SYSQA. I was interested in the approach because it claims to be inspired by the work of a number of context-driven testers, among whom James Bach, Michael Bolton, Cem Kaner and our very own Huib Schoots. It was my aim to find out how much this new approach appealed to my sense of what software testing is about. Also, I wanted to see how this approach could be beneficial to the work of the context-driven community.

Situational Testing can be seen as an approach to testing in which the selection of the test approach (Dutch: testvorm) is based on a set of project aspects (the ‘Situation’). The test approaches that are available in Situational Testing are ‘Factory based testing’, ‘Global scripting’, ‘Session based testing’, ‘Bug hunts’, ‘Test tours’ and ‘Freestyle exploratory testing’. An exhaustive list of project aspects is not provided, but the approach focuses on the business value of the system that is being developed and the questions that testing needs to answer (the goal or the added value of testing). From this it is derived to what degree tests should be scripted, with ‘full scripting’ and ‘no scripting’ at the extremes of the scale. The approach claims to be pragmatic and flexible in that it tailors the amount of scripting to what is valuable to the organisation.
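
To make the selection mechanism concrete, here is a minimal, hypothetical sketch in Python of the kind of mapping described above: rough judgements about the project are combined into a degree of scripting, which in turn points to one of the listed test approaches. The aspect names, thresholds and the way they are combined are invented for illustration; they are not SYSQA’s actual decision rules.

# Hypothetical sketch only; the aspects and thresholds are not SYSQA's rules.
APPROACHES = [
    (80, "Factory based testing"),          # high degree of scripting
    (60, "Global scripting"),
    (40, "Session based testing"),
    (20, "Test tours / bug hunts"),
    (0,  "Freestyle exploratory testing"),  # no scripting
]

def select_approach(business_criticality, specification_stability):
    """Both inputs are rough judgements on a 0-100 scale."""
    scripting_degree = (business_criticality + specification_stability) // 2
    for threshold, approach in APPROACHES:
        if scripting_degree >= threshold:
            return approach

print(select_approach(business_criticality=90, specification_stability=70))
# -> Factory based testing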

Situational Testing is an addition to the Dutch testing landscape, which is dominated by methodologies such as TMap Next and ISTQB that emphasize a scripted and formal, process-driven approach to testing. SYSQA offers a new perspective in two ways. First and foremost, Situational Testing states that formal script-based testing does not always deliver the most value to a project. With this statement SYSQA identifies adding value to a project, rather than following steps in a process, as the premier purpose of testing. Secondly, Situational Testing clears the way for the recognition of relatively recent developments in testing. A significant difference between Situational Testing and TMap Next is that TMap proposes to use exploratory testing when the specifications are unclear or missing, whereas Situational Testing suggests using exploratory testing where it can be of most value.

The Tester Freedom Scale

The centerpiece of the Situational Testing approach is an adaptation of Jon Bach’s Tester Freedom Scale (Bach, 2007). Bach published this sliding scale to model the variation in the degree of freedom a tester has when testing. In his own words, the scale models “the extent to which we are allowed to think”. Below, his model is displayed.

With regard to the Tester Freedom Scale I think we should note two things. I believe it was Bach’s intention to offer an explanation for the degree of tester freedom involved in various situations in testing.

So firstly, the model serves to explain observed phenomena. It offers a descriptive theory that may be verifiable. I might look at my own situation, observe to which degree my actions are prescribed and to which degree I am free to decide how to conduct my testing. There will be different ways to indicate the degree of freedom, but, for the sake of my argument, that’s beside the point. Let’s just say that I am able to determine my degree of prescription, my degree of freedom, plot this on Bach’s scale and thereby confirm or invalidate his theory.

What the Tester Freedom Scale does not tell us is how to define the testing approach. It does not offer a theory for the success of testing in various situations in testing. The model does not, for example, say: when the scripts are vague, you should aim for a degree of freedom of 12% in order to achieve a testing success of 91% or higher. Bach’s Tester Freedom Scale is not a normative theory.

To many of you this may seem to be an extremely obvious point. You would probably argue that there is no point in directing the tester’s degree of freedom and that telling a tester not to think is like telling a race horse to exclusively trot. During testing, the brain is always engaged. This was made clear a long time ago by Glenford Myers who stated that “testing is an extremely creative and intellectually challenging task” (Myers, 1979).

The second observation is that Bach’s model describes circumstances in testing and not approaches to testing. As far as I can recall there is, for example, no such defined testing approach or technique as ‘vague scripting’, while there is, in my fairly recent memory, a project in which the test scripts were vague. The point is important because the model does not link testing approaches to the degree of freedom or the degree of prescription. It does not offer a normative theory in which it might be stated that “if you want a degree of freedom of 60% then you should use testing tours (Whittaker, 2009),” or “if you desire 30% scripting then session-based test management (Bach, 2000) is the thing for you.”

Both points mentioned above are important because Jon Bach’s model is used – in an adjusted form – by SYSQA as a normative theory for testing. The modified Tester Freedom Scale (hereafter the SYSQA scale) is displayed below. The attribution of the SYSQA scale to Bach’s model can be found in the presentation material concerning Situational Testing (in Dutch, slide 16).

Many things can probably be said about the modifications that were made. One of the more striking adjustments is the replacement of the word “Freedom” on the right side of Bach’s scale by the words “Non-scripted testing”, which essentially takes away the need for the right (“Non-scripted testing”) side, because we must assume that the opposite of “Scripted testing” is “Non-scripted testing” and that a reduction of scripted tests implies an increase in non-scripted tests. Also, “Freedom”, or “the extent to which we are allowed to think”, is, from my point of view, something completely different from “Non-scripted testing”.

This leads us to a point that can be made about the SYSQA model and its use as a normative theory, based on a distinction between scripted and non-scripted testing. It is true that scripts, in one form or the other, are often a product of testing. But scripts, merely being products of a process, do not tell us all about the thoughts that produced them and therefore, in my opinion, they do not serve well as a criterion for the selection of a testing approach.

As can be concluded from what is described above, Situational Testing is a test management method. It describes a way to map a set of test approaches on a project. The method acknowledges that the characteristics of the project are very important in determining the test approach. In the white paper the following is stated: “The principle of Situational Testing is that the characteristics of the system, the project and the expectations of the organisation determine the test approach.” Yet for an approach that apparently places the context front and center, there is a conspicuous lack of methods to investigate the project context. As a method for determining the purpose of testing, Situational Testing uses an adaptation of Gojko Adzic’s pyramid of software quality (Adzic, 2012). While this method allows us to determine the desired level of quality, it is not specifically a tool to investigate the project context. An example of such a tool can be found in the Heuristic Test Strategy Model (Bach, 1996), in which James Bach offers a rich set of heuristics. It seems to me that an approach that claims to be determined by the context, needs at least one tool for heuristic investigation of aspects of that context.
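
As a rough illustration of what such a tool could look like, below is a small, invented sketch of a heuristic context checklist: a set of context dimensions with prompting questions that the tester answers and revisits as the project changes. The dimensions and questions are examples of my own, not a reproduction of the Heuristic Test Strategy Model; for the worked-out version, see Bach (1996).

# Illustrative only; these dimensions and questions are not quoted from the HTSM.
CONTEXT_HEURISTICS = {
    "Mission": ["Who are my clients?", "What do they need to learn from testing?"],
    "Information": ["Which specifications, users or experts can I consult?"],
    "Schedule": ["When must results be available, and in what form?"],
    "Equipment and tools": ["Which environments and tools are available to me?"],
    "Test items": ["What exactly am I asked to test, and what not?"],
}

def open_questions(answers):
    """Return every prompting question that has not been answered yet."""
    return [question
            for dimension, questions in CONTEXT_HEURISTICS.items()
            for question in questions
            if not answers.get(dimension, {}).get(question)]

# Revisiting the checklist as the project changes keeps the investigation of the context alive.
print(open_questions({"Mission": {"Who are my clients?": "the release manager"}}))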

Situational Testing claims to be inspired by the context-driven community, which, of course, is a good thing. During the presentations that I attended in November, I therefore naturally expected that the experience reports of testing executed using the situational approach would be on topics that are currently favored in the context-driven community. Yet concepts such as testing skills, problem solving, investigation and analysis, decision making, social sciences, heuristics, exploration, ambiguity and uncertainty, the scientific method, and various forms of thinking were hardly ever mentioned. From this list of topics it is clear that the context-driven school places the skills and judgment of the individual tester in the foreground. In an approach that is inspired by this fundamental principle, one would expect this to shine through in versatility, inquisitiveness and a focus on learning and creativity. It is not easy to recognize these aspects in the current version of Situational Testing.

My conclusion after having read and heard about Situational Testing is that it provides a welcome shift of focus from testing tied to specifications to testing as adding value. While it does not provide an in-depth view of testing and while the modifications to Jon Bach’s model and its use probably stretch the intentions of the original, the approach may pave the way for testers to access and to practise new ideas in testing. Beyond a doubt, that would be a step forward. However, for a Dutch context-driven approach to mature, I think a clearer view on what software testing is about, is required.

References

Adzic, Gojko – Redefining software quality, personal blog, 2012

Bach, James – Heuristic Test Strategy Model, Satisfice, 1996

Bach, Jonathan – Session-Based Test Management, Satisfice, 2000

Bach, Jonathan – A Case Against Test Cases, Quardev blog, 2007

Myers, Glenford J. – The Art of Software Testing, John Wiley & Sons, 1979

Whittaker, James A. – Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design, Addison-Wesley Professional, 2009

The SYSQA material

Situationeel testen – aanpak testtraject

Leaflet Situationeel testen

Slides on Situationeel testen


DEWT4 is planned

The DEWT4 Peer Conference has been planned for February 7–9, 2014, in Driebergen, The Netherlands. The theme is “Teaching Software Testing” in all its forms. More news later…


DEWTs at the TestNet Najaarsevenement 2013

Much has changed in the last few years. A lot of things went much faster than any of us had anticipated. What started out as a place to discuss and talk testing has become something with a stronger voice and direction.

This week at least six DEWTs are speaking at the Dutch TestNet Najaarsevenement (see here and here). Most of us wouldn’t have thought that this would ever be something we would do.

So we are very proud! And still there is a lot to learn. This event is also a place for us to learn and to interact with other testers, many of whom are not even aware of context-driven testing, or who hold rather shallow definitions of what CDT really is. That is no problem; by discussing and interacting we want to improve ourselves and our craft. Stay tuned…

And: let’s GO!


Reading Skills for Software Testers

From 19 to 21 April 2013, the third peer conference of the Dutch Exploratory Workshop on Testing was held in Driebergen, The Netherlands. In a talk I gave during that conference, I made a slightly emotional appeal to start looking at software testing skills, in order to explore the question of what makes a good tester.

The particular skill I mentioned in my talk was reading. Some time ago, I did an assignment in which the reviewing of functional documentation played a major part. In preparation for this assignment, I was asked to take a close look at chapter 15 of the book TMap Next (Koomen, 2006). TMap Next represents a test management approach that is widely used in the Netherlands. Chapter 15 is on ‘Toetstechnieken’ which is translated (in the English edition) as ‘Evaluation techniques’. It was suggested that the information in the chapter would prepare me for the task of reviewing functional documentation.

Building a mental picture through reading

While working on my assignment, reviewing the documentation, I found that I was conducting an investigation into the testability of the design. Now testability is something that may have different meanings depending on who you ask. Testability for me meant that the design, the large set of documents that was placed before me, had to be internally consistent and consistent with relevant standards and related documents. Furthermore, the structure of the design had to be complete and correct. This, among other things, meant that flows through the design could be followed without gaps, disconnects or open ends. A testable design, in my opinion, was a design from which tests could be created without leaving too much room for interpretation and thus ambiguity. I was not actually looking for the functional correctness of the design with regard to user requirements. For several reasons, that would have severely complicated the scope of my review.

It turned out that, when reviewing for consistency and structure, many things need to be considered. Based on nothing more than words and diagrams, it should be possible to create a mental model of the system and to operate that system within the mind. We may already have a model of the actual system in our minds. Such a model may contain a hierarchical structure, interacting components and flows of information. These concepts of the system need to be discovered and checked against what is on paper.

But we may also build different models, based upon the text. Such are models of the organisation, models of how the organisation perceives reality, and models of how the organisation deals with its knowledge, how it describes what’s meaningful and special. All of these models can be investigated through interacting with the text.

Testers as first-rate readers

Reviewing documentation is a process of questioning and evaluation. In that respect it is like software testing, which is the art of skillful investigation through experimentation. A greater awareness of how we read, can aid this investigation. There are models, such as SQ3R (Survey, Question, Read, Recite, and Review, Robinson, 1946), that emphasize the investigative part of reading. There is also the famous book on how to read a book (Adler, 1940) which touches upon many of the themes that are important in testing, such as analytical reading and critical reading, but also prejudice and judgment.

Reading at a more detailed level can provide even more information. A heuristic such as ‘Mary had a little lamb’ introduced by Gerald Weinberg and Donald Gause (Weinberg, 1989) focuses on how the meaning of a sentence changes, whenever a different word is stressed.
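
As a toy illustration of the mechanics of this heuristic, the Python snippet below prints the same sentence with a different word stressed each time; the interpretive questions in the comments are my own examples, not quotations from Gause and Weinberg.

sentence = "Mary had a little lamb"
words = sentence.split()
for i in range(len(words)):
    # Stress one word at a time by printing it in capitals.
    print(" ".join(w.upper() if j == i else w for j, w in enumerate(words)))
# MARY had a little lamb   -> Mary, and not someone else?
# Mary HAD a little lamb   -> she no longer has it?
# Mary had A little lamb   -> only one?
# Mary had a LITTLE lamb   -> how little, exactly?
# Mary had a little LAMB   -> a lamb, and not some other animal?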

The most fundamental skill that aids the creation of a mental model from paper is reading. Actually – considering the fact that software testers are continually confronted with written material – reading should be a skill with which the average tester is liberally endowed. We should be first-rate readers.

So why is the art of reading missing in the TMap methodology?

It is hard not to notice that chapter 15 of TMap Next does not mention reading at all. If we summarize the contents of this chapter in the extreme, then what it says about reviewing is that it can be done by one or more persons. The chapter focuses on the review process, using the IEEE Standard for Software Reviews (IEEE Std 1028-1997) as a starting point. It tells us something about the roles, criteria and procedures for the review types inspection, walkthrough (Waldstein, 1974, though for some reason not referenced in TMap Next) and review.

It may be that the authors of TMap Next simply assumed that reading skills are omnipresent in software testers. What I did notice over the years is that people who are designated to review documentation can fail horribly because they simply do not know how to read what is usually rather complicated documentation. So at the very least reading is a cognitive skill that should be fostered carefully. Somewhere in the past, in a period of ill-justified innocence, I guess I believed that all testers possessed those skills that they naturally require. I do not believe that anymore.

Another possible starting point for this chapter is that not reading skills but the processes described in the chapter are the way to handle reviewing. Here my main struggle is that I have seen review processes in action (though nothing identical to the processes described in chapter 15) and I never once felt that the processes themselves were the reason why the results were as they were. The process suggests that actors (read: humans) are aligned to accomplish a task together. Chapter 15 does not shed even the dimmest of lights on the capacities of those actors or the circumstances under which they interact. Maybe it assumes that the actors are all the same and that they only need to be switched on and off at the right time, in the right room, in the right combination. But even that basic and dreadfully incorrect assumption is not mentioned in the chapter. Without these fundamental assumptions, the process is a sham.

What it boils down to is that chapter 15 had better not be used in software testing and had better not be taught to testers as an introduction to or an elucidation of reviewing. As far as I am concerned, inflicting the information contained in chapter 15 on testers should be considered harmful. The main reason is that it is impossible to verify what is described in chapter 15. The second reason is that whatever is described in the chapter is not related to the craft of software testing, because we know that software testing is about things such as reading. We should realize that testers, when reviewing, have far more pressing problems than figuring out the minimum number of warm bodies that is needed in order to officially commence the review process.

References

Adler, Mortimer – How to Read a Book, 1940

Koomen, Tim – TMap Next, voor resultaatgericht testen, 2006

Robinson, Francis Pleasant – Effective Study, 1946

Waldstein, N.S. – The Walk-Thru – A Method of Specification, 1974

Weinberg, Gerald – Exploring Requirements, 1989


DEWT3 Experience Reports

DEWT3 Participants

Standing from left to right: Jurian van de Laar, Stephen Hill, Huib Schoots, Joep Schuurkes, Joris Meerts, Rik Marselis, Pascal Dufour, Ruud Cox, Bernd Beersma, Markus Gärtner, Philip Hoeben, Philip-Jan Bosch, Derk-Jan de Grood, Adrian Canlon, Ard Kramer, Peter Duelen, Jean-Paul Varwijk, Zeger van Hese, Angela van Son, James Bach, Ray Oei. Sitting: Michael Philips.

Below are a number of experience reports written by attendees. If you want to get an in-depth view of what happened during this peer conference, then I recommend reading them all.

Dutch Exploratory Workshop in Testing (DEWT) 3 and General Systems Thinking at the third Dutch Exploratory Workshop on Testing by Stephen Hill

DEWT3 Sketchnotes by Ruud Cox

DEWT3 Sketchnotes by Huib Schoots

DEWT3 experience report by Joep Schuurkes

In addition, Joris Meerts created an interesting list of Systems Thinking books.

On behalf of all the DEWTs I’d like to thank the AST for the grant which helped in making this conference a success.


DEWT3 has begun…

In the bar…

DEWT3 – Grand bar of ‘Bergse Bossen’, Driebergen


Testing begins…


The Real Future of Software Testing

It is a common notion that we can learn about the future by looking at the past. At the very least, by looking at the paths that were travelled to get us to the here and now, we have an understanding of where we stand. Through that single point we may, for example, draw the lines from the past and project them into the future.

This technique is frequently used to make inferences about the future of software testing. Many of the visions are based on extrapolating market trends in software development. If, in the recent past, we noticed a growing use of mobile devices, it is likely that we will see continued growth in mobile testing. If security emerged as a hot topic in the past couple of years, security testing will continue to be of interest. Most of these predictions, about e.g. cloud, mobile, Agile, security and big data, can be found in the regular forecasts by IT research companies such as Gartner, Ovum and Forrester. To draw a picture of the future of testing, the only thing left to do is to add ‘testing’ to each of the trends. It is that simple.

But do ‘cloud testing’, ‘mobile testing’, ‘Agile testing’ and all other collations actually tell us something about the future of software testing? In many cases, not specifically. Take mobile testing. It is a certainty that the development of applications for mobile devices carries with it a great number of specific technologies, tools and challenges. These things affect the day-to-day work of the tester. He has to grasp new domains, new tools, new ways of working and novel perspectives on software usage. We know they affect the work, but what we fail to investigate or even notice is how they affect the craft; how the basic paradigms in software testing evolve because of whatever happens in software development. It is an important realization that by mapping the future of software testing onto shifting technologies and, for example, changing perspectives on software usage, we focus on how the work is affected, but not on the underlying paradigms that drive the craft. This, to me, is not helpful in identifying the ways in which software testing evolves. In the worst case it causes regression, giving way to views that functional testing is an ancient and obscure specialism, for which the need is rapidly waning (Whittaker, 2011).

To further elucidate this example I would like to look at popular test automation tools such as Selenium, HP QuickTest Professional or Watir. The evolution of software testing has become intertwined with test automation tools in such a way that, by focussing increasingly on familiarity with tools, knowledge of testing is dispersed. While the tool clearly advances the reach and capabilities of functional testing, it does not advance the paradigms that drive the testing effort. Tests still need to be created and the intelligence with which the tests are created is one of the factors seriously affecting the success of test automation. The tool merely amplifies intelligent use or the lack thereof. Everyone knows the old adage that ‘a fool with a tool is still a fool’. By this adage, while we educate hosts of ‘Selenium testers’, ‘mobile testers’ or ‘cloud testers’, what we get may still be only a handful of testers who grasp the paradigms of functional testing and are able to use the tool successfully. From this particular point of view, the term ‘Agile tester’, for example, is nothing more than an empty vessel.
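
As a minimal sketch of this point, consider the hypothetical Selenium script below: the page, element names and assertion are invented, but the structure is representative. The tool will execute the checks flawlessly, yet every check it performs is one a tester first had to think of.

# Hypothetical page and element names; a sketch, not a recommended test design.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("alice")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The script verifies only what it was told to verify. A garbled layout,
    # a misleading error message or a painfully slow response goes unnoticed
    # unless a tester thought of checking for it.
    assert "Dashboard" in driver.title
finally:
    driver.quit()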

If we want to take a look at the future of software testing we have to look at what is left when we strip from it the knowledge of tools, technologies or domains. Functional testing is, among other things, the art of investigation by experimentation and for this we basically have two paradigms: (functional) specification-based test design using (formal) test techniques and exploratory test design. Test specification techniques were mostly created in the 1970s, while exploratory testing was introduced (formally) by Cem Kaner in 1988 (Kaner, 1988). Both of these ways of investigating software are recognized as established points of view in the testing literature. And since they have been around for quite a while, the question arises whether there has been a long pause in the growth of software testing as a discipline. In our collective view, dominated by the perspectives of those who casually couple software testing with the latest software development infatuation, this may be the case.

So what we fail to notice is the real way forward in functional testing. Advances in this area, especially in the area of exploratory testing, have been made. In 1996 James Bach introduced the Heuristic Test Strategy Model (Bach, 1996), drawing from the research on heuristic discovery and problem solving by Herbert Alexander Simon (Simon, 1957) and the Hungarian mathematician George Pólya (Pólya, 1947). In 2008 Julian Harty presented his talk Six Thinking Hats for Software Testers at StarWest (Harty, 2008). Borrowing from the ideas of the British psychologist Edward de Bono (De Bono, 1985), Harty introduced the notion of different ways of thinking into software testing. In recent years Michael Bolton published on the concept of tacit knowledge in software testing (Bolton, 2011), drawing from work by the British sociologist Harry Collins (Collins, 2010) and the Hungarian philosopher Michael Polanyi (Polanyi, 1966), who introduced the concept of tacit knowledge.

Among other scientific concepts that were introduced into software testing is systems thinking, as conceived by the Austrian biologist Ludwig von Bertalanffy (Von Bertalanffy, 1968). The notion that we should look at systems as a whole and not as a sum of parts was applied to software engineering by the great Gerald Weinberg (Weinberg, 1975). Another concept is that of grounded theory, which is, in essence, the building of a theory through the qualitative research of data. It was introduced by sociologists Barney Glaser and Anselm Strauss (Glaser, 1967) and applied to software testing by Rikard Edgren (Edgren, 2009).

The list above is by no means conclusive. For now it suffices to say that if we regard software testing as skillful investigation by experimentation, we should try to benefit from what we know about investigation and experimentation. As we have seen, this knowledge comes from different areas of scientific research. For the future of software testing to be bright, it must be built on these foundations.

References and further reading
Bach, James. Heuristic Test Strategy Model (1996)
Von Bertalanffy, Ludwig, General System Theory (1968)
Bolton, Michael. Shapes of Actions (2011)
De Bono, Edward. Six Thinking Hats (1985)
Collins, Harry. Tacit and Explicit Knowledge (2010)
Edgren, Rikard. Grounded Test Design (2009)
Glaser, Barney and Strauss, Anselm. The Discovery of Grounded Theory (1967)
Harty, Julian. Six Thinking Hats for Software Testers, StarWest (2008)
Kaner, Cem. Testing Computer Software (1988)
Polanyi, Michael. The Tacit Dimension (1966)
Pólya, George. How To Solve It (1947)
Simon, Herbert Alexander. Models of Man (1957)
Weinberg, Gerald. An Introduction to General Systems Thinking (1975)
Whittaker, James. All That Testing is Getting in the Way of Quality, StarWest (2011)
