DEWT7: driven to find new perspectives

From Friday 27 January till Sunday 29 January 2017 the seventh annual peer conference of the Dutch Exploratory Workshop on Testing was held at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference started with dinner, drinks and games in the bar on Friday evening. A number of the participants had also attended the first TestBash held in the Netherlands, so it took quite a while for everyone to get to Driebergen. Moreover, the fatigue of attending two conferences in a row forced several to go to bed at a reasonable hour. Despite this fatigue, the conference saw an overall energetic crowd. The discussions were nicely spread over the experience reports and every topic got the attention it deserved.

Experience reports

The theme of DEWT7 was Lessons learned in software testing and around that topic a total of seven experience reports were presented. Rick Tracy kicked off the conference with a talk about how he unintentionally broke the test environment and found a number of bugs as a result.

He learned that we need fresh angles on software and that we cannot keep using the same approach over and over again if we want to find bugs. Boris Beizer’s Pesticide Paradox was mentioned during the discussion.

Bas van Berkel was up next and he talked about his difficulties introducing the Heuristic Test Strategy Model (HTSM) as a means to diversify the test approach.

The HTSM approach did not fit well with the intentions (mental models) of the team that consisted mainly of developers. After that attempt Bas set up a risk based testing approach and succeeded in getting a conversation about risk started.

After lunch Gwen Diagram implored us all to introduce continuous delivery into our organisations. Her lesson learned was that continuous delivery drastically reduces the pain of deployment. We were reminded of tedious but tricky manual deployments that lasted hours and took place in the evening or at night. Gwen learned the technical aspects of continuous delivery in one assignment and was able to apply this knowledge in other projects.

Patrick Prill continued with a story about a software development effort in which the disciplines operated on separate ‘islands’.

He was able to explain his increasing frustration with this situation using the Cynefin model. This model allowed him to discuss the complexity of the organisation with other people on the project and to build bridges between the islands.

The last experience report on Saturday was presented by Joost van Wollingen. Joost approaches testing from a technical perspective and this allows him to find technical failures that would be hidden from the eye of the functional tester. But in another project his focus on technology left him unaware of a functional defect. So his lesson learned was that we need different perspectives in order to find the defects that matter. Testers need to be aware of the technological implementation of the software and yet they also need to maintain a critical distance in order to bring new information to the table.

The second day of the conference started with a talk by Richard Scholtes.

He showed us a number of elaborate (Excel) reports that contained information about the progress of testing and the results. He used these reports to communicate with his manager. Gradually it dawned on him that the reports were not read as carefully as he thought and that the decision for releasing the software had become his own responsibility. His lesson learned was that his focus should have been more on finding problems than on making elaborate reports. A discussion about the role of the tester in making release decisions followed.

The last report of the conference was presented by Ash Winter.

Ash talked about the consulting work that he did in which he visited companies and advised them on the improvement of their test approach. He saw that the problems encountered in testing were influenced by what was going on in other parts of the organisation. So he took a look at the wider picture while studying systems thinking. He read An Introduction to General Systems Thinking by Gerald Weinberg which helped him to create this picture. His quest was driven by his belief that he could make a difference. During the discussion we touched upon the principles of a tester and on viewing the testing organisation as a part of a larger system.

Analysis of a lesson learned

Ruud Cox and Joris Meerts closed DEWT7 with a workshop. Its aim was to examine how lessons are learned in software testing. As we saw during the weekend, many of the lessons were driven by the personal motivation of the tester to change something in the way of working. Terms such as bravery and ownership were mentioned. Furthermore, the tester needs background information that allows her to see things from a different perspective. Patrick encountered the Cynefin model, Ash took to reading books by Weinberg and Gwen had the advantage of having been able to study an approach in detail early on.

Models

Every lesson learned is subjective but the question is how we deal with that subjectivity in order to be able to share experiences. We can share our experiences using models and an important part of the discussions during DEWT7 focused on models. During the talk by Bas we found out that we should not assume that our understanding of a certain approach (which is a model in our heads) is similar to how, for example, the developers see it. Richard challenged his own assumptions about what management needed by experimenting with different reports. We need to share our models and figure out what the model in the head of the other person looks like. Through the experience reports by Patrick and Ash we also learned that we can investigate our own subjective opinions by modeling them.

Bias and principles

Joost van Wollingen told us up front that he was biased toward technical testing. This is another form of subjectivity that we can compensate for and communicate to others once we are aware of it. Joost’s presentation also once again triggered the topic of diversification when it comes to the test approach. Rick already mentioned that it is nice to have a fresh perspective from time to time. Bas introduced his approach to take other dimensions of the software product into consideration. And both Patrick and Ash sought different perspectives by employing systems thinking.

Criteria

We saw many criteria by which the speakers judged the outcome of an approach. Many of those criteria were personal in nature and yet tied to the goals of the organisation. Gwen showed us a clear outcome: the reduction of frustration and of the time it takes to deploy an application to production. Other criteria were less measurable, such as the ability to speak the language of the developers (mentioned by Joost), the degree to which a consultant makes a difference in the organisation (mentioned by Ash), or the ability of the team to look at the software from different perspectives (mentioned by Bas and Rick). Altogether we found that we often act in an organisation based on personal experiences and feelings, and that the criteria by which we judge our actions become clearer along the way.

Participants

DEWT7 was experienced in the gracious company of the following people.

Andreas Faes (BE)
Ash Winter (UK)
Bas van Berkel (NL)
Beren van Daele (BE)
Christopher Chant (UK)
Emma Preston (UK)
Gwen Diagram (UK)
Huib Schoots (NL) – Facilitator
Jean-Paul Varwijk (NL)
Joep Schuurkes (NL) – Content owner
Joost van Wollingen (NL)
Joris Meerts (NL)
Manon Penning (NL)
Marianne Duijst (NL)
Patrick Prill (DE)
Philip Hoeben (NL) – Conference chair
Pieter Withaar (NL)
Richard Scholtes (NL)
Rick Tracy (NL)
Rosie Sherry (UK)
Ruud Cox (NL)
Zeger van Hese (BE) – Facilitator

The DEWT7 attendees

Program

Saturday
Rick Tracy – One minor test, one huge bug
Bas van Berkel – Introducing HTSM within a project
Patrick Prill – The reason for my grumpiness
Gwen Diagram – You do what manually?!
Joost van Wollingen – About being a technically oriented tester
Sunday
Richard Scholtes – What to know and what to show?
Ash Winter – Inadvertent local optimisation
Ruud Cox, Joris Meerts – Analysis of a lesson learned (workshop)

Sponsorship

DEWT7 was sponsored by the Association for Software Testing. DEWT and the participants of DEWT7 thank them for their support!


Cooking up a lesson learned

The seventh edition of the annual peer conference of the Dutch Exploratory Workshop on Testing will be about lessons learned. The theme immediately reminds one of the book Lessons Learned in Software Testing, which provides the reader with over two hundred lessons. But the aim of the peer conference is not to collect lessons. Rather, we want to look at how a lesson was learned, whether it was applied and, in case it was applied, what the outcome was.

In this article I want to provide some guidelines for examining how a lesson learned actually comes into being. My aim is to apply these guidelines during the conference so that they enable me to ask better questions. Also, I want to use the guidelines as input for the workshop that Ruud Cox and I will be running at the end of the conference.

As you can see, I want to focus on how the lesson learned comes into existence, which is the first of a series of steps. The first step is the evaluation of the situation in which the lesson was learned and the analysis of the actions that were taken (who did what) in this situation. The second step is the abstraction of the actions to a more generalized level, so that the lesson can be stated in terms that are not so much tied to the context in which it was learned. This makes it possible for people who were not part of the actual experience to understand (and evaluate!) the lesson. Both steps are important but I want to focus on the first one.

What is a lesson learned?

In order to examine how a lesson learned comes into existence, we first need to know what it is. According to Merriam-Webster a lesson can be defined as ‘something learned by study or experience.’ The definition supposes two ways of learning: one by study and one by experience. Lessons learned, in the context of this conference, focus on learning by experience, and this is an important distinction to make. Obviously, it means we need to have an experience in order to learn a lesson. But it also means that the lesson is directly tied to the experience and maybe even generated by it. Just as a river bed is shaped by the flow of water, the lesson learned is shaped by experience.

Experience (Merriam-Webster) means the direct observation of or participation in events as a basis of knowledge. It assumes that a lesson can only be learned when a person is directly involved in a situation. Without this involvement there will be no lesson learned. So personal experience is a key factor.

Lessons learned are, for example, a familiar concept in project management. Commonly, projects have lessons learned sessions, in which it is customary to look back on a project and capture practices or approaches that had either advantageous or adverse consequences. The practices, once captured, can be shared so that they have—or avoiding them has—a positive effect on future projects. The two questions that form the basis of a lessons learned session are ‘what went well’ and ‘what did not go well.’

Evaluation, the messy bit

It seems that it is not hard to answer these two questions—’what went well’ and ’what did not go well’. At least, if I look back at the last couple of months of my current project, I can easily identify some things that worked and some things that didn’t. I am pretty sure my team members can come up with their own lessons learned without much trouble. But if we were to compare those lessons, we would probably find that each person employs different criteria for the evaluation of what happened in the last couple of months.

Subjectivity

So there are a number of things that make it difficult to evaluate what happened in the past—that influence the quality of our perception of the lesson learned. First and foremost, since we are talking about personal experience, the lesson learned must be subjective. There are many situations in which many persons go through the same experience (for example, in a software project). Perhaps in this case a collective assessment counters some of the subjectivity of the individual assessment. But usually, the definition of what went well and what went wrong is a subjective one. Subjectivity should be considered when creating a lesson learned.

Criteria

The other point is that different criteria are used to evaluate a lesson learned. If we say something was a success or a failure, we need criteria by which to judge it. If I look at my project again, I can take for example, the sprint velocity as an indicator of success for a certain approach. Or I can use the general mood in the team, the readability of the code, the speed of the automated tests or the amount of technical debt. These are indicators—some are easy and some are hard to measure—that may tell us about the effect of a certain practice or the change of a certain practice. In the examination of a lesson learned, something has to be said about the (qualitative or quantitative) indicators by which success or failure is measured.

Cause and effect

Changes in practices can have effects on a project. Usually a lesson learned is about a change in some practice to which some effect is ascribed. Say I introduce, in an Agile team, risk analysis as a part of the refinement of a user story. In parallel I think up some indicators that should see improvement because of the introduction of risk analysis. The indicators may never show improvement, which makes it difficult to know if there was an effect, but even if they do, I should not jump to the conclusion that my introduction of risk analysis caused it. There may be other factors. Causal relationships are not easy to evaluate and there are causal fallacies that we can commit along the way. A discussion of cause and effect should be part of a lesson learned.

Context

Furthermore, some analysis of the context is necessary. Why did the actions lead to success or failure in this particular context? And which circumstances caused the learning of the lesson to happen? In other words: what enabled you to learn that lesson? Obviously, if my lesson learned is that introducing risk analysis to an Agile team improves the efficiency of testing, I can only learn this lesson in a context with a team that does not yet use risk analysis. The context enabled me to learn this lesson. Interesting insights could be gained from the study of the factors enabling a lesson learned.

Skills

As a side note, this form of contextual analysis is strongly reminiscent of action research, in which the researcher is involved in a collective effort to, for example, find a solution to a problem. This kind of research requires specific skills in the area of data gathering (for example, the keeping of a journal or log), reflection and evaluation, organization and synthesis. Ultimately, a discussion of lessons learned touches upon the usage of these skills.

Posted in Context-driven testing, DEWT7, Peer conference | Leave a comment

DEWT 7 announced

We are happy to announce DEWT7, our seventh annual peer conference. The conference will be held from Friday January 27 until Sunday January 29 2017 at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference starts on Friday evening at 6 pm with dinner, fun, games & conversations. On Saturday morning, at 9 am, the official part of the conference starts. On Sunday we will wrap up at about 3 pm.

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experiences of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. DEWT peer conferences are invitation only.


The theme of DEWT7 is Lessons learned in software testing.

As we grow older, we build up experience. We might even learn a thing or two. Apply it with great success later. Forget it, repeat the old mistake, learn the same lesson again. Find ourselves in a similar situation, apply the lesson learned and deepen our understanding.

These are the kind of stories we would like to hear from you in your experience reports: how you learned one of your great lessons in software testing. One of those lessons that demarcate a period of “before” and “after” – although in some cases the before and after will be difficult to pinpoint to a specific second or day.

And we also want to hear what happened with you and with that lesson afterwards. Do you still apply it? Did you once apply it with good intentions, but with horrendous results? Did it grow obsolete? Do you apply it all the time, or only in specific circumstances?

So in total that makes two stories: one of a lesson and one applying that lesson later. Two stories, one experience report, sharing something about what makes you the tester you are.


Our peer conferences follow these rules:

  • The main focus of your experience report should be an actual experience.
  • All presentations are 15 to 20 minutes, followed by “open season”, a facilitated discussion. See the blog post “A Guide to Peer Conference Facilitation” by Paul Holland to learn more about facilitation.
  • Anything shared during the peer conference can be made public with proper attribution. So be careful not to share any proprietary or confidential material.
  • Not everyone will get the opportunity to present. The reason for this is that the focus of a peer conference is on the facilitated group discussion after each experience report.

Philip Hoeben (conference chair) & Joep Schuurkes (content owner)


This peer conference is made possible by the grant we received from AST.



An Experience Report Guideline

As a result of DEWT6 and other musings, Ruud Cox and I thought it was time to be more specific about what it means to do an experience report. The Dutch Exploratory Workshop on Testing (DEWT) believes the experience report is an important vehicle for learning about software testing. This is why the DEWT peer conferences are centered around them.

Ruud and I wrote a guideline because we think that a good experience report enables software testers to make better decisions about practices and practitioners. Over the years we saw quite a number of experience reports and though many of those talks and the following discussions provided the audience with great insights, we feel that by being more specific about what is expected from an experience report, an even better learning experience can be created.

The guideline can be downloaded (as PDF document) by clicking on the following link: An Experience Report Guideline.

We would like to thank Jean-Paul Varwijk, Joep Schuurkes and James Bach for reviewing the guideline.


DEWT6: The Medium is the Message

From Friday 22 January till Sunday 24 January 2016 the sixth annual peer conference of the Dutch Exploratory Workshop on Testing was held at Hotel Bergse Bossen in Driebergen, the Netherlands. The theme was ‘Communicating testing during software development’. In this article we give a summary of the proceedings of the conference.

First off, as the conference chair I would like to thank the organizers and the participants for bringing their energy, creativity and inspiration to the conference.

Experience reports

Over the weekend, five experience reports were presented on different aspects of communication (for an overview, see the Program below). Susan van de Ven reached out to the group to help her with a problem of trust and caring. She found herself in a position in which she was responsible for a release decision based on the quality of the product, and yet she did not have the authority to demand the information that she needed. As a peer, she had to elicit the information she needed from the other testers. Topics such as the sharing of responsibility, caring, motivation and the building of trust were discussed.

Ard Kramer told us a powerful story of risk and meaning. We discussed risk as a threat to value and the tester’s ability to identify risk. To discuss risk is to discuss value and meaning, and thus Ard presented a three-layer model in which meaning (why) was at the center, surrounded by the organisation and the outside world (what) and the processes and the project (how). Risk can also be discussed using personas, and in this respect Linda referred to how she used personas from the movie Aladdin to be able to identify certain risks. By discussing risks and meaning, the tester could be a ‘cultural broker’ between different people in the project.

Thomas Ponnet approached the conference theme by doing some research on different aspects of communication. He provided us with a mind map that he created and some personal stories to go with the model. He regarded the framework as a starting point for investigating, for example, your own communication.

Joep (Deepak) Schuurkes talked about a situation in which he hardly communicated about testing at all for the duration of the project. Joep presented his story in a very laid-back manner, which triggered different emotions among the DEWT6 participants. Some felt anger, others depression, and others were happy because Joep’s story appeared to be a story of success. This, in its turn, triggered discussions not about what Joep was communicating but about the way in which he was communicating it. Because of that, the phrase ‘the medium is the message’ was introduced and the discussion touched upon this rather scary higher level of abstraction. The fact that the project appeared to be successful without much communication about testing also triggered the question of how much communication about testing is enough.

In the final experience report Philip and Femke presented their approach for doing pair testing, based upon the rules for back-to-back DJing. They talked about some of the sessions that they did, the main objective of which was to transfer knowledge about the product from Femke (who is a tester and subject matter expert of the product) to Philip (who is a tester and new to the project). They did that by taking control of the application by turn. The discussion touched upon the transfer of knowledge, making tacit knowledge explicit and the balance between narration and asking questions. Interestingly, Philip and Femke tested in different environments and the discussion also touched upon noise and interruptions from outside that influenced the quality of the communication.

A Web of Meaning / Chaos / Unmeaning

The closing workshop of DEWT6 was organized by Joep Schuurkes. It was entitled ‘Web of Meaning’ and its purpose was to connect the different reports to identify common themes. We split up into four groups and connected the reports using stickies. After that the stickies were aggregated into a huge map, the Web of Meaning. This map is displayed below.

DEWT6 Web of Meaning

The map is an expression of the discussions during open season, the hallway talks, the lunch chats and the late-night philosophies of DEWT6. But because of its chaotic nature it was also quickly dubbed the ‘Web of Chaos’ or the ‘Web of Unmeaning’. During the writing of these proceedings I found it very useful as a reference, but I think it was only useful to me because I was at the conference. Michael correctly remarked that such a collection of insights and phrases requires analysis and synthesis in order to be useful for a larger audience. This will certainly be part of our effort during DEWT7.

Participants

DEWT6 was experienced in the gracious company of the following people.

Aleksandar Simic (DE)
Alexandru Rotaru (RO)
Andreas Faes (BE)
Ard Kramer (NL)
Ben Peachey (NL)
Beren van Daele (BE)
Eddy Bruin (NL)
Femke Boerrigter (NL)
Jackie Frank (NL)
Joep Schuurkes (NL)
Joris Meerts (NL) – Conference chair
Linda van de Vooren (NL)
Michael Bolton (CA)
Peter ‘Simon’ Schrijver (NL) – Content owner
Philip Hoeben (NL)
Rob van Steenbergen (NL)
Robert Page (NL)
Ruud Cox (NL) – Facilitator
Simone de Ruijter (NL)
Susan van de Ven (NL)
Thomas Ponnet (DE)
Wim Heemskerk (NL)
Zeger van Hese (BE) – Facilitator

DEWT6 participants

Back row (from left to right): Simone de Ruijter, Andreas Faes, Thomas Ponnet
Middle row (from left to right): Aleksandar Simic, Joris Meerts, Peter ‘Simon’ Schrijver, Beren van Daele, Zeger van Hese, Jackie Frank, Susan van de Ven, Robert Page, Linda van de Vooren, Ben Peachey
Front row (from left to right): Philip Hoeben, Femke Boerrigter, Ruud Cox, Ard Kramer, Joep Schuurkes, Alexandru Rotaru, Eddy Bruin, Rob van Steenbergen
Not in the picture: Michael Bolton, Wim Heemskerk

Program

Saturday
Susan van de Ven – The perhaps too informal approach to testing communication
Ard Kramer –  Is there a risk?
Thomas Ponnet – W5H3
Joep Schuurkes – The time I didn’t communicate about my testing. Or did I?
Sunday
Philip Hoeben & Femke Boerrigter – Communication during dynamic pair testing
Joep Schuurkes – Web of Meaning (workshop)

Some resources that were mentioned

Blogs about the conference

Sponsorship

DEWT6 was sponsored by the Association for Software Testing. On behalf of DEWT and the participants of DEWT6, I thank them for their support!


DEWT6 announced: Communicating testing during software development

From Friday 22 January till Sunday 24 January 2016 the sixth annual peer conference of the Dutch Exploratory Workshop on Testing will take place at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference is organized by Joris Meerts as conference chair, Jean-Paul Varwijk as content owner and Ruud Cox and Zeger van Hese as facilitators. The twitter hashtag for this peer conference will be #DEWT6.

The theme of DEWT6 is ‘Communicating testing during software development’. Jean-Paul Varwijk illustrates the theme as follows.

One of the goals of software testing, particularly context-driven software testing, is to supply our stakeholders with information. Often we mention how the type and quality of the information we provide extends beyond the presentation of mere metrics such as pass/fail rates. We believe that this information should enable our stakeholders to make informed and meaningful decisions on whether or not the developed software suits their needs and wants, and lives up to the relevant (quality) standards and expectations.

Much of this information, or at least what is communicated about this information, is directed towards the final stages of development but most exchanges of information happen during development itself. During DEWT6 we would like you to share with us your experiences in communicating about software testing while it is being developed. With whom did you share your test ideas and test results? How did you share it? How was your feedback received? Did it turn out the way you expected? Was it useful?

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experience reports of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. Participation in this conference is by invitation only.

The DEWT peer conferences are modeled after the Los Altos Workshop on Software Testing (LAWST) and the Software Test Managers Roundtable (STMR). More information about this type of conference can also be found in Paul Holland’s Guide to Peer Conference Facilitation.


DEWT5 Report

The 5th DEWT peer conference took place January 16–18th at Hotel Bergse Bossen in Driebergen, the Netherlands. The central theme was “Test Strategy”.

DEWT5 was attended by Ben Peachey, Daniel Wiersma, Eddy Bruin, Huib Schoots, Ilari Henrik Aegerter, Jackie Frank, Jeanne Hofmans, Jean-Paul Varwijk, Jeroen Mengerink, Joep Schuurkes, Joris Meerts, Maaike Brinkhof, Maaret Pyhäjärvi, Marjana Shammi, Massimo D’Antonio, Pascal Dufour, Peter “Simon” Schrijver, Philip Hoeben, Ray Oei, Richard Bradshaw, Ruud Cox, Ruud Teunissen, Simon Knight and Zeger van Hese.
Helena Jeret-Mäe unfortunately couldn’t make it.

Below is the schedule of the conference, managed in Trello.

DEWT5 Schedule

Presentations:

Maaret Pyhäjärvi wrote some blog posts upfront:

Simon Knight referred to the following articles in The Testing Planet:

The collection of index cards on the DEWT5 Learning Wall.

Collected ‘#DEWT and #DEWT5’ tweets by Richard Bradshaw

DEWT5 Sketchnotes by Zeger van Hese

Not a Conference on Test Strategy by Joris Meerts

(In Response to DEWT5) – What Has a Test Strategy Ever Done for Us? A response to Joris’ post by Colin Cherry

On behalf of all the DEWT’s I’d like to thank the AST for the grant which contributed to the success of this conference.


DEWT5 announced

The 5th DEWT peer conference will take place January 16–18th at Hotel Bergse Bossen in Driebergen, the Netherlands. The central theme is “Test Strategy”. This edition of DEWT will be organized by Philip Hoeben as conference chair, Ruud Cox as content owner and Huib Schoots as facilitator. The twitter hashtag for this peer conference will be #DEWT5.

This conference is for DEWTs and invitees only. Ruud has written the following invitation/call for papers:
At DEWT5, we would like to focus on test strategy, the set of ideas that guide your test design. A test strategy lives throughout the entire lifecycle of a testing project. It includes planning, execution, and reporting, so your experience report could describe any part of a project, or all of it. Some questions to help you focus your experience report:

  • How did the context of your project influence your test strategy? Which factors were taken into account? Which ones were deliberately left out?
  • Did you use a specific method when you created your test strategy? What did work? What didn’t work? How did you know?
  • How did you find out what was important?
  • Did you document your test strategy, and if so in what format?
  • Was your test strategy supported by the decision makers? How about the project team? If you did not document the test strategy, how did you proceed, and how did that work for you?
  • How did your test strategy evolve over time during the project?
  • Did you drop some ideas and pick up others as the test project progressed?
  • Did your approach to test strategy change as the project progressed? Were there any particular challenges associated with test strategy?
  • Was the test strategy successful? Why do you think that? How did you know?

DEWT5 is modelled after the Los Altos Workshop on Software Testing (LAWST) and the Software Test Managers Roundtable (STMR). Information about how those meetings are run can be found at their respective websites at http://lawst.com, http://www.kaner.com/pdfs/stmr2000.pdf and http://testingthoughts.com/blog/28. Attendees are asked to prepare experience reports about the proposed topic.

So far 25 people have confirmed that they will participate: Ben Peachey, Daniel Wiersma, Eddy Bruin, Helena Jeret-Mäe, Huib Schoots, Ilari Henrik Aegerter, Jackie Frank, Jeanne Hofmans, Jean-Paul Varwijk, Jeroen Mengerink, Joep Schuurkes, Joris Meerts, Maaike Brinkhof, Maaret Pyhäjärvi, Marjana Shammi, Massimo D’Antonio, Pascal Dufour, Peter “Simon” Schrijver, Philip Hoeben, Ray Oei, Richard Bradshaw, Ruud Cox, Ruud Teunissen, Simon Knight, Zeger van Hese.


Why I am context-driven – Joep Schuurkes

Why am I context-driven? Because it’s more fun.

That’s all there is to it.

Of course I could argue that becoming context-driven has made me a better tester and I do think it has. Yet it’s not the reason I became a context-driven tester. Besides, how would I prove it made me a better tester?

So no, I am context-driven because it’s more fun. Because it sees testing as an intellectual challenge. Because it allows human uncertainty to be at the core of what it is. Because it tells me that I’m in charge of what I do and how I do it. Because it encourages me to dive in at the deep end of some complex problem, trusting on my skills to get out on top and enjoying every step of the way.
And I am context-driven because there’s a context-driven community filled with people who feel the same way.

To me it boils down to this: what do I want testing to be? Do I want it to be about documents, processes and best practices? Or do I want it to be about skills, wonder and investigation? That’s not a difficult choice: I want the latter.

And now the devil’s advocate may ask: But what if it makes you not a better but a worse tester? In a way I don’t care. Testing based on skills and investigation is the job I fell in love with. If I couldn’t do that, if I wasn’t allowed to be a context-driven tester, I do not think I would be a tester at all.


I am context-driven. There is no why – Zeger Van Hese

Prologue

No, dear DEWTs, I did not misunderstand the assignment. The title of this series (“Why I am context-driven”) was handed to me chiseled in stone, ten commandments-style. The Moses in me chose to rearrange the tablets. I felt that the original title was asking for a justification of my context-driven-ness, as in “Why did you choose to live the context-driven life?”.

I did not choose the context-driven life. The context-driven life chose me.

Wait a second – did I just paraphrase the enlightened philosopher 2Pac in public? The point is – I don’t feel it was a conscious decision on my part. It was in my testing genes all along, waiting for me to discover it. Here are some defining moments and personal epiphanies as I recall them:

Early tester life

I arrived late to the testing party. I worked for a movie distributor specializing in arthouse cinema at first, followed by a brief stint as a COBOL developer. Those were the days! Riding with dinosaurs! Yelling commands at the compiler: “Hey you, move to Working-Storage Section! And you, Compute this, Display that!”.

In 2000, by the time I joined my first test team, I was almost thirty years old.

I was convinced it was going to be a temporary job, since I was called in to help some colleagues who were short of testers in their team. Unlike many of my team members who had chosen the testing career path, I never received any formal testing training. After all, I was only meant to be there for the short term. So while the rest of the team was being introduced to the wonders of “structured testing”, I was trying to figure out what the hell the system under test was trying to tell me – I taught myself to listen. By doing that I was able to unearth problems and loads of useful information. Right then and there I fell in love with the joy of exploration and discovery.

1st great realization:
Exploring software systems makes me feel alive

The theory

The team came back fully trained, armed with jargon and techniques. I wanted to tap into their newly acquired knowledge and listened carefully when they told stories about equivalence class partitioning, all pairs testing and different sorts of code coverage. Wow, those were actual tools of the trade I could use! The more I got to know, the more I started to like this testing thing.

Some of the best practices they took home confused me though, such as the principle to create test scripts upfront. I had just spent a couple of days discovering important problems and I asked myself: would I have found those very issues if I had created all my tests upfront? Best practice or not, the philosophy behind the whole thing seemed flaky. Why would you base your whole verification process on stuff created at a moment when you know so little about what is coming your way? Who defines what’s “best”, anyway?

2nd great realization:
Other people’s preferred methods might not work for me

The practice

A few months of testing turned into a couple of years, and I was lucky enough to work on different product teams across various industries. It wasn’t easy to find time to explore the software as I used to, because most of these teams had a testing methodology in place with lots of procedures, templates and test scripts designed upfront. I ended up doing most of it under the radar: when developing scripts, I found out I was exploring to make them the best I possibly could; when executing scripts, I was exploring on the side because it seemed silly to only stay on predefined paths. It never failed to find important problems. I felt that all of testing was infused with exploration. I thought it was all just common sense. People just looked at me funny.

3rd great realization:
Exploration is at the heart of all things testing

4th great realization:
My common sense is not other people’s common sense

The reality check

When I got the opportunity to lead a team of testers through an important new release, I grabbed it with both hands and welcomed any guidance I could get. People I highly respected advised me to stick to the procedures and templates with this one, as it was a unique pilot that shouldn’t go wrong. They spoke from experience, since “we used this approach in all our projects and it always worked” (emphasis theirs). I thought that was a bold claim (always? for all of them?), but I decided to give it a go.

The results were less than stellar.

The project came gift-wrapped with spectacularly detailed requirements – the user interface specifications document alone was as thick as a phone book. The software was not ready yet, but we used our time well, churning out elaborate scripts like there was no tomorrow. When the software finally arrived, it looked nothing like we had envisioned it. As a result, our scripts turned out to be brittle and trivial. On top of that, the whole team was getting desperate, bored and tired of following scripts while they felt they could do much more valuable work.

5th great realization:
Context eats strategy for breakfast

6th great realization:
If testing is boring, I’m probably doing it wrong

Our project manager asked for pass/fail rates and bug and test case metrics. I proposed to give him an analysis of the most important problems instead, but he insisted on getting the numbers. Once these numbers got out, people started altering their behavior. It was the first time that I witnessed the counterproductive potential of metrics.

7th great realization:
Not all metrics are useful – some are dangerous

When the project manager wanted extra graphs for his report, I duly delivered. Three weeks later he was asking me to tweak these graphs to make the situation look less dramatic. It became clear that we had different interests – I assume his targets and reputation were at stake, while I was concerned about my integrity and credibility as a tester. I wanted to help him, but it felt as if every muscle in my body was resisting.

8th great realization:
I value my integrity

The revelation

In 2003, a co-worker approached me with a big grin on his face, saying “Check this – you might like it” as he threw a conference handout on my desk. A whole presentation on something called “exploratory testing”! Was this for real!? Turned out that it was. Even better: it described my favorite part in testing – the part that seemed so natural to me – as a recognized testing approach. It even had a proper name!

I wanted to know more and started reading everything I could get my hands on. This quickly led to Cem Kaner and James Bach, who championed exploratory testing as a sapient approach involving simultaneous test design, test execution and learning. All their work appeared to be rooted in science, and well thought out. And it wasn’t just theoretical thought-exercises either; they actually gave plenty of pointers on how to do exploratory testing well and how to make it more manageable.

They called it a martial art of the mind, scientific thinking in real time. They not only made it sound cool – they also put effort into dismissing the common criticism of it being unstructured, asserting that it can be as disciplined as any other intellectual activity. When they stated that virtually all testing performed by human testers is exploratory to some degree, I knew I had found my tribe.

9th great realization:
I am not weird – other people think alike

The homecoming

It was almost inevitable that I would cross paths with the context-driven school of testing. Although that only happened years later, it was a kind of homecoming.

I discovered a vibrant community, a bunch of skeptics that rejected the idea of best practices, didn’t take anything for granted and were serious about studying their craft. They looked outwards, not only inwards, drawing from sociology, psychology, even philosophy (which was music to my ears – it matched my own tendency to look for testing lessons outside the field of testing).

Members of the context-driven school pointed to Thomas Kuhn’s “The Structure of Scientific Revolutions” to explain how it is possible that different groups of people – although they all claim to be studying the same field of practice – are using such radically different ontologies to describe it. The different schools of testing all have different paradigms, different goals and value different things (which in hindsight explained why I sometimes felt so alienated from other testers – and they from me).

Epilogue

Years have passed and although a lot of things around me (and probably inside me as well) have changed, I am still part of that community. It has become my touchstone for new ideas and my first line help desk when struggling with testing problems. It’s peers like this who encourage me to continuously learn and stay on top of my game.

I am aware that there are significant drawbacks to surrounding yourself with too many like-minded people, so I try to engage with people who are willing to ask the hard questions and challenge my beliefs, even when they don’t necessarily disagree with me. The good thing is that there are plenty of those to be found in the community, but I constantly remind myself to keep an open mind and to keep interacting with people outside of it as well.

That community is by far the most visible (and audible) part of context-driven testing, but it is not the reason why I consider myself a context-driven tester. As I mentioned above, it was not a conscious choice. Rather, it is how I make sense of the testing world around me. I consider it a value system: my personal set of morals, ethics, standards, preferences and world views that constitute my DNA as a tester.

So yes, dear DEWTs. I’m context-driven. It is baked into my system. There is no why.
