It is a common notion that we can learn about the future by looking at the past. At the very least, by looking at the paths that were travelled to get us to the here and now, we have an understanding of where we stand. Through that single point we may, for example, draw the lines from the past and project them into the future.
This technique is frequently used to make inferences about the future of software testing. Many of the visions are based on extrapolating market trends in software development. If, in the recent past, we noticed a growing use of mobile devices, it is likely that we will see continued growth in mobile testing. If security emerged as a hot topic in the past couple of years, security testing will continue to be of interest. Most of these predictions, about cloud, mobile, Agile, security and big data, for example, can be found in the regular forecasts by IT research companies such as Gartner, Ovum and Forrester. To draw a picture of the future of testing, the only thing left to do is to add ‘testing’ to each of the trends. It is that simple.
But do ‘cloud testing’, ‘mobile testing’, ‘Agile testing’ and all the other collocations actually tell us something about the future of software testing? In many cases, not specifically. Take mobile testing. It is a certainty that the development of applications for mobile devices carries with it a great number of specific technologies, tools and challenges. These things affect the day-to-day work of the tester. He has to grasp new domains, new tools, new ways of working and novel perspectives on software usage. We know they affect the work, but what we fail to investigate or even notice is how they affect the craft; how the basic paradigms in software testing evolve because of whatever happens in software development. It is an important realization that by mapping the future of software testing onto shifting technologies and, for example, changing perspectives on software usage, we focus on how the work is affected, but not on the underlying paradigms that drive the craft. This, to me, is not helpful in identifying the ways in which software testing evolves. In the worst case it causes regression, giving way to views that functional testing is an ancient and obscure specialism for which the need is rapidly waning (Whittaker, 2011).
To further elucidate this example I would like to look at popular test automation tools such as Selenium, HP QuickTest Professional or Watir. The evolution of software testing has become intertwined with test automation tools in such a way that, by focussing increasingly on familiarity with tools, knowledge of testing is dispersed. While a tool clearly advances the reach and capabilities of functional testing, it does not advance the paradigms that drive the testing effort. Tests still need to be created, and the intelligence with which they are created is one of the factors that seriously affects the success of test automation. The tool merely amplifies intelligent use or the lack thereof. Everyone knows the old adage that ‘a fool with a tool is still a fool’. By this adage, while we educate hosts of ‘Selenium testers’, ‘mobile testers’ or ‘cloud testers’, what we get may still be only a handful of testers who grasp the paradigms of functional testing and are able to use the tool successfully. From this particular point of view, the term ‘Agile tester’, for example, is nothing more than an empty vessel.
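To make that point concrete, here is a minimal Selenium WebDriver sketch (Python bindings) against a hypothetical login page; the URL, the element locators and the expected title are all invented for illustration. The tool only automates driving the browser; choosing the inputs and, above all, choosing what to check afterwards is the test design that the tool cannot supply.

```python
# A minimal Selenium WebDriver sketch (Python bindings). The page, locators
# and expected title are hypothetical; only the division of labour matters.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.org/login")

    # The tool automates the typing and clicking...
    driver.find_element(By.NAME, "username").send_keys("alice")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # ...but deciding which inputs to try and what to check afterwards
    # (the oracle) is test design. A weak oracle makes this a weak test,
    # however sophisticated the tooling.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

The script runs the same whether the assertion is meaningful or trivial; only the tester’s reasoning about risks and oracles makes the difference.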
If we want to take a look at the future of software testing, we have to look at what is left when we strip from it the knowledge of tools, technologies or domains. Functional testing is, among other things, the art of investigation by experimentation, and for this we basically have two paradigms: (functional) specification-based test design using (formal) test techniques, such as boundary value analysis (sketched below), and exploratory test design. Test specification techniques were mostly created in the 1970s, while exploratory testing was introduced (formally) by Cem Kaner in 1988 (Kaner, 1988). Both these ways of investigating software are recognized as established points of view in the testing literature. And since they have been around for quite a while, the question arises whether there has been a long pause in the growth of software testing as a discipline. In our collective view, dominated by the perspectives of those who casually couple software testing with the latest software development infatuation, this may be the case.
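As a minimal sketch of the first paradigm, consider boundary value analysis, one of the classic specification-based techniques. The function and its specified range below are hypothetical; the point is that, given a specification, deriving the test cases is largely mechanical:

```python
# A minimal sketch of boundary value analysis, a classic specification-based
# test technique. Hypothetical specification: is_eligible(age) must accept
# ages 18 through 65 inclusive and reject everything else.

def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# The technique derives test cases at and just beyond each boundary of the
# specified range.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in boundary_cases:
    assert is_eligible(age) == expected, f"age {age}: expected {expected}"
print("all boundary cases pass")
```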
So what we fail to notice is the real way forward in functional testing. Advances in this area, especially in the area of exploratory testing, have been made. In 1996 James Bach introduced the Heuristic Test Strategy Model (Bach, 1996), drawing on research into heuristic discovery and problem solving by Herbert Alexander Simon (Simon, 1957) and the Hungarian mathematician George Pólya (Pólya, 1947). In 2008 Julian Harty presented his talk Six Thinking Hats for Software Testers at StarWest (Harty, 2008). Borrowing from the ideas of the British psychologist Edward de Bono (De Bono, 1985), Harty introduced the notion of different ways of thinking into software testing. In recent years Michael Bolton has published on the concept of tacit knowledge in software testing (Bolton, 2011), drawing on work by the British sociologist Harry Collins (Collins, 2010) and the Hungarian philosopher Michael Polanyi (Polanyi, 1966), who introduced the concept.
Among other scientific concepts that have been introduced into software testing is systems thinking, as conceived by the Austrian biologist Ludwig von Bertalanffy (Von Bertalanffy, 1968). The notion that we should look at systems as a whole and not as a sum of parts was applied to software engineering by the great Gerald Weinberg (Weinberg, 1975). Another concept is that of grounded theory, which is, in essence, the building of a theory through qualitative analysis of data. It was introduced by the sociologists Barney Glaser and Anselm Strauss (Glaser and Strauss, 1967) and applied to software testing by Rikard Edgren (Edgren, 2009).
The list above is by no means exhaustive. For now it suffices to say that if we regard software testing as skillful investigation by experimentation, we should try to benefit from what we know about investigation and experimentation. As we have seen, this knowledge comes from different areas of scientific research. For the future of software testing to be bright, it must be built on these foundations.
References and further reading
Bach, James. Heuristic Test Strategy Model (1996)
Von Bertalanffy, Ludwig. General System Theory (1968)
Bolton, Michael. Shapes of Actions (2011)
De Bono, Edward. Six Thinking Hats (1985)
Collins, Harry. Tacit and Explicit Knowledge (2010)
Edgren, Rikard. Grounded Test Design (2009)
Glaser, Barney and Strauss, Anselm. The Discovery of Grounded Theory (1967)
Harty, Julian. Six Thinking Hats for Software Testers, StarWest (2008)
Kaner, Cem. Testing Computer Software (1988)
Polanyi, Michael. The Tacit Dimension (1966)
Pólya, George. How To Solve It (1947)
Simon, Herbert Alexander. Models of Man (1957)
Weinberg, Gerald. An Introduction to General Systems Thinking (1975)
Whittaker, James. All That Testing is Getting in the Way of Quality, StarWest (2011)
At EuroSTAR 2012, Henrik Emilsson of The Test Eye gave a talk about their Software Quality Characteristics poster. After his talk he asked whether anyone was interested in translating the poster into other languages. Today, DEWT proudly presents the Dutch translation: Software Kwaliteit Kenmerken.
The 3rd DEWT peer conference will take place on April 19–21 at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT belongs to the series of peer conferences on testing that includes LAWST, LEWT, SWET and GATE.
The main theme will be
The Twitter hashtag for this peer conference is #DEWT3.
Joris Meerts maintains a list of books related to the theme.
This conference is for DEWTs and invitees only. 21 people will participate in DEWT3.
James Bach (USA), Bernd Beersma, Philip-Jan Bosch, Peter Duelen (Belgium), Pascal Dufour, Markus Gärtner (Germany), Derk-Jan de Grood, Stephen Hill (UK), Jurian van de Laar, Rik Marselis, Joep Schuurkes, Angela van Son.
Adrian Canlon, Ruud Cox, Zeger van Hese (Belgium), Jeanne Hofmans, Philip Hoeben, Joris Meerts, Ray Oei, Huib Schoots, Jean-Paul Varwijk.
DEWT2 was a success. The list below contains blog posts written by attendees. If you want an in-depth view of what happened during this peer conference, I recommend reading them all.
During the conference, pictures were taken and uploaded to Twitter. I collected them and included them here. The quality of the pictures is low, but they cover the conference well and give a good impression.
The 2nd DEWT workshop will take place on October 5–6 at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT belongs to the series of peer workshops on testing that includes LAWST, LEWT, SWET and GATE.
The main theme of this peer workshop will be:
Experience Reports: Implementing Context-Driven Testing
The Twitter hashtag for this peer conference is #DEWT2.
Friday, October 5
18.00 – 19.00 Opening Remarks
19.00 – –.– Food, drinks, puzzles & lightning talks
Jean-Paul Varwijk on a context-driven test approach
Zeger van Hese on intakes
Saturday, October 6
09.00 – 09.15 Opening Remarks
09.15 – 10.00 Ilari Henrik Aegerter – Introducing context-driven testing at eBay
10.00 – 10.45 Markus Gärtner – What I learned from coaching a context-driven tester
10.45 – 11.30 Ray Oei – Workshop RST/CDT at clients and/or teaching testers at AST
11.30 – 13.00 Lunch, Walk in the forest, Group photo
13.00 – 14.00 Ruud Cox – Testing medical devices, a context-driven spin-off
14.00 – 15.00 Huib Schoots – Context-driven testing at Rabobank International
15.00 – 16.00 Test lab / live test / dojo
16.00 – 17.00 Open podium
*The current program is provisional and can be changed if the group so desires.
23 people will participate in DEWT2.
Markus Gärtner (Germany), Ilari Henrik Aegerter (Switzerland), Tony Bruce (UK), Gerard Drijfhout, Pascal Dufour, Rob van Steenbergen, Derk-Jan de Grood, Joep Schuurkes, Leon Bosma, Bryan Bakker, Lilian Nijboer, Philip-Jan Bosch
Adrian Canlon, Ruud Cox, Philip Hoeben, Zeger van Hese (Belgium), Jeanne Hofmans, Joris Meerts, Ray Oei, Jeroen Rosink, Huib Schoots, Simon Peter Schrijver, Jean-Paul Varwijk
On March 7th we had an excellent workshop on coaching. The people who were there: Angela van Son (trainer), Michael Bolton, Adrian Canlon, Ruud Cox, Zeger van Hese, Philip Hoeben, Jeanne Hofmans, Joris Meerts, Ray Oei, Jean-Paul Varwijk, Jeroen Rosink, Huib Schoots and Simon Peter Schrijver.
After the pizza, the programme started with the question: what is coaching? What I put in my notes is that coaching is about finding answers. The coach supports the coachee by listening, asking questions and providing feedback. But it is key that the coachee finds the answers within him or herself.
After this introductory discussion we started with the first exercise. The assignment was to form pairs in which one person asks questions to find out the destination of the other person’s next holiday: two minutes with only closed questions, followed by two minutes with only open questions. The lesson learned from this exercise is that open questions make a coachee think of an answer, while closed questions let the coachee pick an answer from a discrete set of options, such as yes or no, without too much thinking. To put it in an oversimplified way, from a coaching perspective, open questions are good and closed questions are bad.
The second exercise was about advice. Two persons have a conversation in which one person asks for advice but the other person is not allowed to give any. This exercise took 7 minutes per conversation. The result was hilarious: some people were literally begging for advice, while others resorted to different tricks. From a coaching perspective this exercise showed that it is very tempting to give advice where the coachee has to find answers within him or herself. A lesson learned from this exercise is that giving advice is not bad, but it shouldn’t be the default.
The last topic on the agenda was the layers of communication, or the UI (onion) model. This model contains four layers that play a role in communication between two people: content, procedure, interaction and emotion. It is called the onion model because it is usually represented visually as an onion. The content layer is normally the most prominent in a conversation, but there are deeper layers. All layers were discussed with examples, but I didn’t take enough notes to include those examples in this post.
And that was the end of an excellent workshop. Angela van Son and Michael Bolton, thank you very much for being our guests this evening.
Angela van Son recommends the following books about coaching (they’re all in Dutch):
Wednesday, March 7th, our DEWT meetup is about coaching. This evening we enthusiastically welcome our trainer Angela van Son. We are passionate about testing; she is passionate about coaching and communication, whether spoken or written. I really think we can learn a lot from her, even though the official program only takes two hours. Together we created the following program:
Approach: do a little theory, then some work in subgroups.
18.00 – 20.00h Entry and pizza
20.00h – 20.15h Testers and coaching
Discussion: what is coaching, what isn’t it, and what do you want to achieve?
20.15 – 20.35h The art of asking questions
A short, confrontational exercise on asking closed and open questions.
20.35 – 21.00 Forbidden to give advice
Exercise: who manages to withhold their knowledge and still keep the conversation flowing?
21.00 – 21.30h The layers when things go wrong
Troubleshooting in your communication: the UI model (Dutch). How do you keep asking questions of an unwilling partner? In groups, translate the model to your own context.
21.30 – 22.00h Practice
Coach each other, using all the tools discussed this evening. The subjects are cases from your own work.
I will write a new post with an impression of the evening.
Last month DEWT organized a theme evening at TestNet, which graciously provided us with a conference room (a big one!) at the Nieuwegein Business Centre. More than 150 people attended.
They watched and listened to James Bach, who was ‘interviewed’ by Michael Bolton. Both were shown on two big video screens in the room. They gave an entertaining and insightful presentation on the origin, the what and the why of CDT (context-driven testing).
Their presentation can be found here at the TestNet site.
A CDT mindmap (as PDF) made by Michael Bolton can be found here: ContextDrivenTesting
Because of technical limitations it was not possible to have a Q&A with James and Michael, so we gathered questions from the audience and emailed them to James. This is what James had to say (the questions appear as bullets):
The answer to this is not specific to the CDT approach, but I’ll show you what a CDT-style answer looks like:
What is the problem that testing solves? Because whatever that problem is, once it’s solved, you can stop testing. A good tester takes care to understand his own mission well enough to determine that. In most contexts I work in, the motivating problem for testing is this: what is the status of our product, and specifically what is the prospect that it will fail in an important way, in the field? Testing should begin when that question becomes important and our clients need answers. Testing should end when that question is settled to the satisfaction of our clients.
This brings us right to risk, because our clients want to know the status of the product in order to manage business risk (or risks to their customers, which is indirectly business risk). All testing (with some limited exceptions) is about risk. In a context of low risk, testing may be unwarranted.
This question is related to the notion of Good Enough Quality, about which I have written elsewhere.
First, you can refuse to do work that you believe is unnecessary and wasteful. Many testers believe they have no say in this, and no control. Well, of course they do control that. What they can’t control is whether their employer continues to employ them. If your employer needs you specifically for the purpose of faking a test project (they won’t call it that; they’ll call it “testing”), then if you say “hey, you are forcing me to work in a way that’s not helping you” they will be upset. They wanted a group of people who would inexpensively shuffle papers so that they could tell their fellow executives or regulatory auditors that they “have a test team” working on the project. What they didn’t want was the headache of actually dealing with real test results. Naturally, if a consulting organization proposes a plausible-sounding “best practice” that encourages the tester to be quiet and stay out of the way, many companies will embrace it.
I know this may sound absurd, so please read carefully the Space Shuttle Columbia Accident Investigation Board final report, which gives a detailed and disturbing picture of the kind of fakery I’m talking about.
“As the Board investigated the Columbia accident, it expected to find a vigorous safety organization, process, and culture at NASA, bearing little resemblance to what the Rogers Commission identified as the ineffective “silent safety” system in which budget cuts resulted in a lack of resources, personnel, independence, and authority. NASAʼs initial briefings to the Board on its safety programs espoused a risk-averse philosophy that empowered any employee to stop an operation at the mere glimmer of a problem. Unfortunately, NASAʼs views of its safety culture in those briefings did not reflect reality. Shuttle Program safety personnel failed to adequately assess anomalies and frequently accepted critical risks without qualitative or quantitative support, even when the tools to provide more comprehensive assessments were available.” [CAIB Report, vol.1, p.177]
“NASA policy dictates that safety programs should be placed high enough in the organization, and be vested with enough authority and seniority, to “maintain independence.” Signals of potential danger, anomalies, and critical information should, in principle, surface in the hazard identification process and be tracked with risk assessments supported by engineering analyses. In reality, such a process demands a more independent status than NASA has ever been willing to give its safety organizations, despite the recommendations of numerous outside experts over nearly two decades, including the Rogers Commission (1986), General Accounting Office (1990), and the Shuttle Independent Assessment Team (2000).” [CAIB Report, vol.1, p.185]
“[Safety] personnel were present but passive and did not serve as a channel for the voicing of concerns or dissenting views. Safety representatives attended meetings of the Debris Assessment Team, Mission Evaluation Room, and Mission Management Team, but were merely party to the analysis process and conclusions instead of an independent source of questions and challenges.” [CAIB Report, vol.1, p.170]
“Prior to Challenger, the can-do culture was a result not just of years of apparently successful launches, but of the cultural belief that the Shuttle Programʼs many structures, rigorous procedures, and detailed system of rules were responsible for those successes. The Board noted that the pre-Challenger layers of processes, boards, and panels that had produced a false sense of confidence in the system and its level of safety returned in full force prior to Columbia. NASA made many changes to the Space Shuttle Program structure after Challenger. The fact that many changes had been made supported a belief in the safety of the system, the invincibility of organizational and technical systems, and ultimately, a sense that the foam problem was understood.” [CAIB Report, vol.1, p.199]
The paragraphs above are telling us that NASA management wanted the credit and benefits that come from claiming to dedicate themselves to safety, but they didn’t want the trouble and effort that goes with actually dedicating themselves to safety. It’s natural for people to have an instinct to “be practical” and to therefore “go along and get along.” As a man who actually enjoys arguing and being loud, I can tell you that it is still difficult, even for me, to stand up against a crowd of managers who want to kick something out the door. But to fulfill our responsibility and maintain integrity as engineering practitioners (testers are participating in engineering regardless of whether they are considered to be professional engineers, after all) we must be prepared to face some slings and arrows.
Adaptive testing is not a commonly-used phrase. I assume this question is referring to the claim that T-Map is “adaptive.” To me that’s an empty term, in this context. Why do I say that? Well, read the T-Map book, guys! Let’s see, where does it talk about exploratory testing? Oh there it is! Marginalized to the status of a minor technique; relegated to a few pages like some curiosity of “unstructured testing” (which it absolutely isn’t). Exploratory testing is the ultimate in adaptivity. Adapting, adapting, adapting, is what it means to be testing in an exploratory way. Exploratory testing is when the design process of testing and the performance of the test are married together in one interactive process. That’s adaptive! Exploratory testing has been written about and spoken about for more than 25 years. Exploratory research has been written about for longer than that, as has exploratory data analysis. So, what possible excuse could the T-Map people have for so profoundly neglecting the central role of exploration (and therefore adaptation) in testing IF they are serious about being adaptive? Non-exploratory testing is a minor part of professional testing (even in medical devices, where I am currently working). If anyone feels that this is a surprising claim, then I humbly suggest that you have been misinformed about what exploratory testing actually is.
Lots of things are adaptive to some degree. The Constitution of the United States is adaptive, but it is adaptive in only the most clumsy, slow way imaginable. Just saying “we’re adaptive” doesn’t mean you can check off that box on your checklist.
Context-Driven testing means that the testing practitioner is always responsible for his own work processes (to the degree he is responsible for himself, at all, of course). He need not ask permission to change the way he’s testing if the way he’s testing isn’t getting the job done. His responsibility is to continually ask himself if the testing is fulfilling its purpose in a reasonable way.
Context-Driven testing is not a self-contained testing methodology. It’s at most an approach that embodies a set of principles. These principles can be embodied in a variety of ways. Rapid Software Testing is a context-driven testing methodology (though it’s not the only methodology that could be considered so.)
Does T-Map talk about how adaptation works? We do that in Rapid Testing, we have to, since the central message of Rapid Testing is “you are in charge of your work process, you select your heuristics, you control your work products.” What we do in Rapid Testing is not tell you what forms to fill out or what keys to press. Instead we get you to practice finding the hidden testing problems and coping with them when you are under fire. T-Map focuses on tasks and artifacts. Rapid Testing (and Context-Driven testing) focuses on skills and pretty much lets the tasks and artifacts fend for themselves. That’s seriously adaptive!
Small “a” agile can be context-driven. But most people speak about large-”A” Agile. The so-called Agile community tends to revere certain practices as sacrosanct. By doing that, they become context-imperial. Some leading thinkers, such as Brian Marick, are gleefully so. At the Agile Fusion conference (held at the premises of Satisfice, Inc. 7 or 8 years ago) a major debate broke out about the meaning of the word “agile” and the fact that for some people Agile means doing a certain set of practices. At that conference, Brian, who was one of the founders of the Context-Driven community, broke from us and declared himself Agile.
The Agile Fusion conference was an attempt to reconcile the differences between Agile leaders and Context-Driven testing leaders. I think it was a very productive event that made the participants realize that we were in two very different schools of thought.
We respect the Agilists, but we cannot follow them. We study testing, we think testing is worth studying, and we think there are lots of ways to do testing. We will not say that any one way of testing is inherently superior to any other. (Superior in context, perhaps, but not absent of context.)
Some Agilists respond by saying that this is a specious concern, because most Agile projects *are* suited to their context. I would reply that they don’t know whether or not they are suited, because they don’t study context, they don’t study methodology, and they don’t study testing. What they study is doing software development in their personal favorite way.
See www.context-driven-testing.com for more on that.