Context Driven Testing @ TestNet

Last month DEWT organized a theme evening at TestNet, which graciously provided us with a conference room (a big one!) at the Nieuwegein Business Centre. More than 150 people attended.

They watched and listened to James Bach, who was ‘interviewed’ by Michael Bolton; both were shown on two big video screens in the room. Together they gave an entertaining and insightful presentation on the origin, the what, and the why of CDT.

Their presentation can be found on the TestNet site.

A CDT mindmap (as PDF) made by Michael Bolton can be found here: ContextDrivenTesting

Because of technical limitations it was not possible to have a live Q&A with James and Michael. So we gathered questions from the audience and emailed them to James. This is what James had to say (each question appears as a bullet):

  • How do you manage CDT? How do you know when to stop testing?

JB:

The answer to this is not specific to the CDT approach, but I’ll show you what a CDT-style answer looks like:

What is the problem that testing solves? Because whatever that problem is, once it’s solved, you can stop testing. A good tester takes care to understand his own mission well enough to determine that. In most contexts I work in, the motivating problem for testing is this: what is the status of our product, and specifically what is the prospect that it will fail in an important way, in the field? Testing should begin when that question becomes important and our clients need answers. Testing should end when that question is settled to the satisfaction of our clients.

This brings us right to risk, because our clients want to know the status of the product in order to manage business risk (or risks to their customers, which is indirectly business risk). All testing (with some limited exceptions) is about risk. In a context of low risk, testing may be unwarranted.

This question is related to the notion of Good Enough Quality, about which I have written elsewhere.

  • How can we, as a testing community like TestNet, change the “fake testers”? And what can I do individually?

JB:

First, you can refuse to do work that you believe is unnecessary and wasteful. Many testers believe they have no say in this, and no control. Well, of course they do control that. What they can’t control is whether their employer continues to employ them. If your employer needs you specifically for the purpose of faking a test project (they won’t call it that; they’ll call it “testing”), and you say “hey, you are forcing me to work in a way that’s not helping you,” they will be upset. They wanted a group of people who would inexpensively shuffle papers so that they could tell their fellow executives or regulatory auditors that they “have a test team” working on the project. What they didn’t want was the headache of actually dealing with real test results. Naturally, if a consulting organization proposes a plausible-sounding “best practice” that encourages the tester to be quiet and stay out of the way, many companies will embrace it.

I know this may sound absurd, so please read carefully the Space Shuttle Columbia Accident Investigation Board final report, which gives a detailed and disturbing picture of the kind of fakery I’m talking about.

“As the Board investigated the Columbia accident, it expected to find a vigorous safety organization, process, and culture at NASA, bearing little resemblance to what the Rogers Commission identified as the ineffective “silent safety” system in which budget cuts resulted in a lack of resources, personnel, independence, and authority. NASAʼs initial briefings to the Board on its safety programs espoused a risk-averse philosophy that empowered any employee to stop an operation at the mere glimmer of a problem. Unfortunately, NASAʼs views of its safety culture in those briefings did not reflect reality. Shuttle Program safety personnel failed to adequately assess anomalies and frequently accepted critical risks without qualitative or quantitative support, even when the tools to provide more comprehensive assessments were available.” [CAIB Report, vol.1, p.177]

“NASA policy dictates that safety programs should be placed high enough in the organization, and be vested with enough authority and seniority, to “maintain independence.” Signals of potential danger, anomalies, and critical information should, in principle, surface in the hazard identification process and be tracked with risk assessments supported by engineering analyses. In reality, such a process demands a more independent status than NASA has ever been willing to give its safety organizations, despite the recommendations of numerous outside experts over nearly two decades, including the Rogers Commission (1986), General Accounting Office (1990), and the Shuttle Independent Assessment Team (2000).” [CAIB Report, vol.1, p.185]

“[Safety] personnel were present but passive and did not serve as a channel for the voicing of concerns or dissenting views. Safety representatives attended meetings of the Debris Assessment Team, Mission Evaluation Room, and Mission Management Team, but were merely party to the analysis process and conclusions instead of an independent source of questions and challenges.” [CAIB Report, vol.1, p.170]

“Prior to Challenger, the can-do culture was a result not just of years of apparently successful launches, but of the cultural belief that the Shuttle Programʼs many structures, rigorous procedures, and detailed system of rules were responsible for those successes. The Board noted that the pre-Challenger layers of processes, boards, and panels that had produced a false sense of confidence in the system and its level of safety returned in full force prior to Columbia. NASA made many changes to the Space Shuttle Program structure after Challenger. The fact that many changes had been made supported a belief in the safety of the system, the invincibility of organizational and technical systems, and ultimately, a sense that the foam problem was understood.” [CAIB Report, vol.1, p.199]

The paragraphs above are telling us that NASA management wanted the credit and benefits that come from claiming to dedicate themselves to safety, but they didn’t want the trouble and effort that goes with actually dedicating themselves to safety.

It’s natural for people to have an instinct to “be practical” and therefore to “go along and get along.” As a man who actually enjoys arguing and being loud, I can tell you that it is still difficult, even for me, to stand up against a crowd of managers who want to kick something out the door. But to fulfill our responsibility and maintain integrity as engineering practitioners (testers are participating in engineering regardless of whether they are considered to be professional engineers, after all), we must be prepared to face some slings and arrows.

  • Can you explain the difference between “Adaptive Testing” and Context-Driven testing again?

JB:

Adaptive testing is not a commonly-used phrase. I assume this question is referring to the claim that T-Map is “adaptive.” To me that’s an empty term, in this context. Why do I say that? Well, read the T-Map book, guys! Let’s see, where does it talk about exploratory testing? Oh there it is! Marginalized to the status of a minor technique; relegated to a few pages like some curiosity of “unstructured testing” (which it absolutely isn’t). Exploratory testing is the ultimate in adaptivity. Adapting, adapting, adapting, is what it means to be testing in an exploratory way. Exploratory testing is when the design process of testing and the performance of the test are married together in one interactive process. That’s adaptive! Exploratory testing has been written about and spoken about for more than 25 years. Exploratory research has been written about for longer than that, as has exploratory data analysis. So, what possible excuse could the T-Map people have for so profoundly neglecting the central role of exploration (and therefore adaptation) in testing IF they are serious about being adaptive? Non-exploratory testing is a minor part of professional testing (even in medical devices, where I am currently working). If anyone feels that this is a surprising claim, then I humbly suggest that you have been misinformed about what exploratory testing actually is.

Lots of things are adaptive to some degree. The Constitution of the United States is adaptive, but it is adaptive in only the most clumsy, slow way imaginable. Just saying “we’re adaptive” doesn’t mean you can check off that box on your checklist.

Context-Driven testing means that the testing practitioner is always responsible for his own work processes (to the degree he is responsible for himself, at all, of course). He need not ask permission to change the way he’s testing if the way he’s testing isn’t getting the job done. His responsibility is to continually ask himself if the testing is fulfilling its purpose in a reasonable way.

Context-Driven testing is not a self-contained testing methodology. It’s at most an approach that embodies a set of principles. These principles can be embodied in a variety of ways. Rapid Software Testing is a context-driven testing methodology (though it’s not the only methodology that could be considered so).

Does T-Map talk about how adaptation works? We do that in Rapid Testing; we have to, since the central message of Rapid Testing is “you are in charge of your work process, you select your heuristics, you control your work products.” What we do in Rapid Testing is not tell you what forms to fill out or what keys to press. Instead we get you to practice finding the hidden testing problems and coping with them when you are under fire. T-Map focuses on tasks and artifacts. Rapid Testing (and Context-Driven testing) focuses on skills and pretty much lets the tasks and artifacts fend for themselves. That’s seriously adaptive!

  • Why is agile not context-driven?

JB:

Small “a” agile can be context-driven. But most people speak about large-“A” Agile. The so-called Agile community tends to revere certain practices as sacrosanct. By doing that, they become context-imperial. Some leading thinkers, such as Brian Marick, are gleefully so. At the Agile Fusion conference (held at the premises of Satisfice, Inc. 7 or 8 years ago) a major debate broke out about the meaning of the word “agile” and the fact that for some people Agile means doing a certain set of practices. At that conference, Brian, who was one of the founders of the Context-Driven community, broke from us and declared himself Agile.

The Agile Fusion conference was an attempt to reconcile the differences between Agile leaders and Context-Driven testing leaders. I think it was a very productive event that made the participants realize that we were in two very different schools of thought.

We respect the Agilists, but we cannot follow them. We study testing, we think testing is worth studying, and we think there are lots of ways to do testing. We will not say that any one way of testing is inherently superior to any other. (Superior in context, perhaps, but not absent of context.)

Some Agilists respond by saying that this is a specious concern, because most Agile projects *are* suited to their context. I would reply that they don’t know whether or not they are suited, because they don’t study context, they don’t study methodology, and they don’t study testing. What they study is doing software development in their personal favorite way.

  • You mentioned context-specific, context-aware and context-driving. Can you explain what the differences are?

JB:
See www.context-driven-testing.com for more on that.

 


3 Responses to Context Driven Testing @ TestNet

  1. Great post!
    Thanks for sharing with the wider community…

  2. Pingback: Five Blogs – 10 February 2012 « 5blogs

  3. Stefan Croes says:

    It was a memorable evening!

    Nice, insightful answers by James Bach.
