Testability: a word we should not use?!

The Dutch Exploratory Workshop on Testing held its ninth peer conference in the woods of Driebergen with the following test geeks: Jean-Paul Varwijk, Maaike Brinkhof, Philip Hoeben, Joris Meerts, James Thomas, Ruud Cox, Joep Schuurkes, Elizabeth Zagroba, Beren van Daele, Jeroen Schutter, Andrei Contan, Bart Knaack, Jeroen van Seeters, Huib Schoots, Zeger van Hese, Adina Moldovan, Simon ‘Peter’ Schrijver (facilitator) and Ard Kramer (content owner).

The concept of the peer conference is quite simple: through experience talks we covered the topic of testability. The content owner expected that there would be a variety of opinions about this topic and, to give you a spoiler alert: there was!

Of course there are different ways of looking at testability, and I would like to focus on three angles:

Intrinsic testability: this angle focuses on the product itself.
To define intrinsic testability you need to know the value of the product and the risks that threaten it (the context of the product). This will help you to define its testability. The models and mnemonics Rob Meaney has presented are very usable. James Thomas mentioned Meaney’s CODS model (Controllability, Observability, Decomposability, Simplicity).
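
To make the intrinsic angle a little more tangible, here is a minimal sketch in Python (a hypothetical illustration of my own, not an example shown at the conference) of how a single design choice affects the C and the O of CODS: a hard-coded dependency is difficult to control from a test, while an injected one makes the behaviour both controllable and observable.

```python
from datetime import datetime, timezone

# Hard to test: the current time is buried inside the function,
# so a test cannot control it (low controllability).
def is_weekend_hardcoded() -> bool:
    return datetime.now(timezone.utc).weekday() >= 5

# Easier to test: the moment in time is injected, so a test can
# pass any value it likes and observe the result directly.
def is_weekend(now: datetime) -> bool:
    return now.weekday() >= 5  # Saturday = 5, Sunday = 6

# A test controls the input and observes the output.
assert is_weekend(datetime(2019, 6, 1, tzinfo=timezone.utc))      # a Saturday
assert not is_weekend(datetime(2019, 6, 3, tzinfo=timezone.utc))  # a Monday
```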

The observability mentioned in this model was the central word in Maaike’s experience talk. In her DevOps environment risks have a lower impact, which means that software can go to production faster. She also saw that software in production provides more interesting and relevant information than software running in an inferior test environment. So, she argued, don’t use the word testability anymore, a word non-testers are not attracted to; just use the word observability.
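
To give a concrete flavour of what observability can look like in code (a hedged sketch of my own, not an example from Maaike’s talk): emitting structured events from production code lets you learn how the software behaves from its real traffic, rather than from an inferior test environment.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments")

def process_payment(order_id: str, amount: float) -> None:
    start = time.monotonic()
    # ... the actual payment handling would go here ...
    # Emit a structured event so dashboards and alerts can observe
    # what happened in production, not just whether a test passed.
    log.info(json.dumps({
        "event": "payment_processed",
        "order_id": order_id,
        "amount": amount,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }))

process_payment("order-42", 19.95)
```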

Extrinsic testability: testability that is not related to the product itself but to everything around it. Under this angle we heard stories about the impact of testers. If you really want to improve intrinsic testability you need to be motivated: to improve your testing skills, and to deal with failures when you have chosen the wrong level of testability. Are you working in an environment that is safe enough that mistakes are accepted and testability can be improved? Upfront you will never know whether the chosen level of testability will be good enough. So the question becomes: what did you learn about the things you did not know before you started testing?

Several stories were about how non-testers look at testability. Firm statements were made that testability as a concept is hard to discuss with non-testers: they don’t feel the need to do ‘something’ with testability. It becomes interesting when you can get the subject on the table by changing your language, or even the approach you use. For example, you can talk about speed: if you want to get the product to production faster, how can we facilitate that process in the best possible way? You will then find yourself discussing facilities that increase your testability.

But there were also stories about mental testability: for example, a story in which the tester’s ethical boundaries were challenged. He had to test systems in organizations that try to make money from processes that violate ethical and legal boundaries. As a tester, he asked himself whether such systems are mentally testable at all.

So should we stop using the word testability and use all kinds of guerrilla tactics to get the subject on the table? Don’t mention the word anymore, just do it!

DEWT9 was made possible by the (moral) support of the Association for Software Testing: thank you very much!

References:
James Bach’s Heuristics of Software Testability
Maria Kedemo and Ben Kelly’s Dimensions of Testability
Ash Winter and Rob Meaney’s 10 Ps of Testability
Rob Meaney’s CODS
A Peer Workshop on Testability by DEWT
Testability? Shhh!


DEWT 9 announcement

We are happy to announce DEWT9, the ninth edition of our annual peer conference. The conference will be held from Friday January 31 until Sunday February 2, 2020 at Hotel Bergse Bossen in Driebergen, the Netherlands.

The conference starts on Friday evening at six o’clock with dinner, fun, games & conversations. On Saturday morning, around nine o’clock, the official part of the conference starts. On Sunday we will wrap up at about three o’clock in the afternoon.

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experiences of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. DEWT peer conferences are invitation only.

The theme of DEWT9 is Testability: a quality aspect which is underestimated!?

How many times were we in the right spot at the right moment to discuss testability? For DEWT9 we are looking for experience reports about the topic of testability. And of course, we are looking at testability from all kinds of different angles, for example:

  • The human side: how do you get attention for testability in your team, or how do you convince your product owner?
  • The technical side: what do you need to get testing going in terms of infrastructure, tools, data and architectural choices, and how do you avoid code complexity?
  • The business side: how do you talk with the business about a testable MVP, or frame a discussion about testability as a choice between value and risk reduction?

Can you send us a short experience report, one that covers one or more aspects of testability? We are looking forward to receiving your proposal before 1 November 2019 so we can make the preparations for an awesome DEWT9. Please send your experience report to the following email address: ar.kramer.ard@gmail.com


The DEWT peer conferences follow these rules:

  • The focus of your experience report should be an actual experience.
  • All presentations are 15/20 minutes followed by open season, a facilitated discussion. See the blog post “A Guide to Peer Conference Facilitation” by Paul Holland to learn more about facilitation.
  • Anything shared during the peer conference can be made public with proper attribution. So be careful not to share any proprietary or confidential material.
  • Not everyone will get the opportunity to present. The reason for this is that the focus of a peer conference is on the facilitated group discussion after each experience report.

Simon ‘Peter’ Schrijver – Conference chair
Ard Kramer – Content owner


DEWT8 announced

We are happy to announce DEWT8, the eighth edition of our annual peer conference. The conference will be held from Friday October 19 until Sunday October 21 2018 at Hotel Bergse Bossen in Driebergen, the Netherlands.

The conference starts on Friday evening at six o’clock with dinner, fun, games & conversations. On Saturday morning, around nine o’clock, the official part of the conference starts. On Sunday we will wrap up at about three o’clock in the afternoon.

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experiences of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. DEWT peer conferences are invitation only.


The theme of DEWT8 is Developing expertise in software testing.

Increasingly, software testing is regarded as the use of a particular set of skills rather than the execution of steps in a process. In order to be able to talk about skill and the development of skill we need models that can show us how to recognize a skilled tester and how to become one.
Therefore we encourage you to think about how one develops expertise in software testing. It can be developed through practical training and mentoring, learning by participation and by transferring tacit knowledge. But developing expertise also means establishing a reputation, being passionate about and proud of your work and having a thirst for knowledge. We are eagerly looking for experience reports on these subjects. So if you have ever been mentored, we would like to know how that felt. How did it influence your learning and what did you do to evaluate your skills? Which skills did you sharpen? And perhaps you have been a mentor yourself or you have been part of a guild or a community of practice. What did you do to transfer knowledge about software testing and how did that work out?


The DEWT peer conferences follow these rules:

  • The focus of your experience report should be an actual experience.
  • All presentations are 15/20 minutes followed by open season, a facilitated discussion. See the blog post “A Guide to Peer Conference Facilitation” by Paul Holland to learn more about facilitation.
  • Anything shared during the peer conference can be made public with proper attribution. So be careful not to share any proprietary or confidential material.
  • Not everyone will get the opportunity to present. The reason for this is that the focus of a peer conference is on the facilitated group discussion after each experience report.

Jean-Paul Varwijk – Conference chair

Joris Meerts – Content owner


DEWT7: driven to find new perspectives

From Friday 27 January till Sunday 29 January 2017 the seventh annual peer conference of the Dutch Exploratory Workshop on Testing was held at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference started with dinner, drinks and games in the bar on Friday evening. A number of the attendees had also attended the first TestBash held in the Netherlands, so it took quite a while for everyone to get to Driebergen. Furthermore, the fatigue of being at two conferences in a row forced several to go to bed at a reasonable hour. But despite this fatigue the conference saw an overall energetic crowd. The discussions were nicely spread over the experience reports and every topic got the attention it deserved.

Experience reports

The theme of DEWT7 was Lessons learned in software testing and around that topic a total of seven experience reports were presented. Rick Tracy kicked off the conference with a talk about how he unintentionally broke the test environment and found a number of bugs as a result.

He learned that we need fresh angles on software and that we cannot continue to use the same approach over and over again if we want to find bugs. Boris Beizer’s Pesticide Paradox was mentioned during the discussion.

Bas van Berkel was up next and he talked about his difficulties introducing the Heuristic Test Strategy Model (HTSM) as a means to diversify the test approach.

The HTSM approach did not fit well with the intentions (mental models) of the team that consisted mainly of developers. After that attempt Bas set up a risk based testing approach and succeeded in getting a conversation about risk started.

After lunch Gwen Diagram implored us all to introduce continuous delivery into our organisations. Her lesson learned was that continuous delivery drastically reduces the pain of deployment. We were reminded of tedious but tricky manual deployments that lasted hours and took place in the evening or at night. Gwen learned the technical aspects of continuous delivery in one assignment and was able to apply this knowledge in other projects.

Patrick Prill continued with a story about a software development effort in which the disciplines operated on separate ‘islands’.

He was able to explain his increasing frustration with this situation using the Cynefin model. This model allowed him to discuss the complexity of the organisation with other people on the project and to build bridges between the islands.

The last experience report on Saturday was presented by Joost van Wollingen. Joost approaches testing from a technical perspective, and this allows him to find technical failures that would be hidden to the eye of the functional tester. But in another project his focus on technology left him unaware of a functional defect. So his lesson learned was that we need different perspectives in order to find the defects that matter. Testers need to be aware of the technological implementation of the software and yet they also need to maintain a critical distance in order to bring new information to the table.

The second day of the conference started with a talk by Richard Scholtes.


He showed us a number of elaborate (Excel) reports that contained information about the progress of testing and the results. He used these reports to communicate with his manager. Gradually it dawned on him that the reports were not read as carefully as he thought and that the decision for releasing the software had become his own responsibility. His lesson learned was that his focus should have been more on finding problems than on making elaborate reports. A discussion about the role of the tester in making release decisions followed.

The last report of the conference was presented by Ash Winter.

Ash talked about the consulting work that he did, in which he visited companies and advised them on the improvement of their test approach. He saw that the problems encountered in testing were influenced by what was going on in other parts of the organisation. So he took a look at the wider picture while studying systems thinking. He read An Introduction to General Systems Thinking by Gerald Weinberg, which helped him to create this picture. His quest was driven by his belief that he could make a difference. During the discussion we touched upon the principles of a tester and on viewing the testing organisation as part of a larger system.

Analysis of a lesson learned

Ruud Cox and Joris Meerts closed DEWT7 with a workshop. Its aim was to see how lessons can be learned in software testing. As we saw during the weekend, many of the lessons were driven by the personal motivation of the tester to change something in the way of working. Terms such as bravery and ownership were mentioned. Furthermore, the tester needs background information that allows her to see things from a different perspective. Patrick encountered the Cynefin model, Ash took to reading books by Weinberg, and Gwen had the advantage of having been able to study an approach in detail early on.

Models

Every lesson learned is subjective but the question is how we deal with that subjectivity in order to be able to share experiences. We can share our experiences using models and an important part of the discussions during DEWT7 focused on models. During the talk by Bas we found out that we should not assume that our understanding of a certain approach (which is a model in our heads) is similar to how, for example, the developers see it. Richard challenged his own assumptions about what management needed by experimenting with different reports. We need to share our models and figure out what the model in the head of the other person looks like. Through the experience reports by Patrick and Ash we also learned that we can investigate our own subjective opinions by modeling them.

Bias and principles

Joost van Wollingen told us up front that he was biased toward technical testing. This is another form of subjectivity that we can compensate for and communicate to others once we are aware of it. Joost’s presentation also once again triggered the topic of diversification when it comes to the test approach. Rick already mentioned that it is nice to have a fresh perspective from time to time. Bas introduced his approach to take other dimensions of the software product into consideration. And both Patrick and Ash sought different perspectives by employing systems thinking.

Criteria

We saw many criteria by which the speakers judged the outcome of an approach. Many of those criteria were personal in nature and yet tied to the goals of the organisation. Gwen showed us a clear outcome: the reduction of the amount of frustration and of the time it takes to deploy an application to production. Other criteria were less measurable, such as the ability to speak the language of the developers (mentioned by Joost), the degree to which a consultant makes a difference in the organisation (mentioned by Ash), or the ability of the team to look at the software from different perspectives (mentioned by Bas and Rick). Altogether we found that we often act in an organisation based on personal experiences and feelings, and that the criteria by which we judge our actions become clearer along the way.

Participants

DEWT7 was experienced in the gracious company of the following people.

Andreas Faes (BE)
Ash Winter (UK)
Bas van Berkel (NL)
Beren van Daele (BE)
Christopher Chant (UK)
Emma Preston (UK)
Gwen Diagram (UK)
Huib Schoots (NL) – Facilitator
Jean-Paul Varwijk (NL)
Joep Schuurkes (NL) – Content owner
Joost van Wollingen (NL)
Joris Meerts (NL)
Manon Penning (NL)
Marianne Duijst (NL)
Patrick Prill (DE)
Philip Hoeben (NL) – Conference chair
Pieter Withaar (NL)
Richard Scholtes (NL)
Rick Tracy (NL)
Rosie Sherry (UK)
Ruud Cox (NL)
Zeger van Hese (BE) – Facilitator

The DEWT7 attendees

Program

Saturday
Rick Tracy – One minor test, one huge bug
Bas van Berkel – Introducing HTSM within a project
Patrick Prill – The reason for my grumpiness
Gwen Diagram – You do what manually?!
Joost van Wollingen – About being a technically oriented tester
Sunday
Richard Scholtes – What to know and what to show?
Ash Winter – Inadvertent local optimisation
Ruud Cox, Joris Meerts – Analysis of a lesson learned (workshop)

Sponsorship

DEWT7 was sponsored by the Association for Software Testing. DEWT and the participants of DEWT7 thank them for their support!


Cooking up a lesson learned

The seventh edition of the annual peer conference of the Dutch Exploratory Workshop on Testing will be about lessons learned. The theme immediately reminds one of the book Lessons Learned in Software Testing, which provides the reader with over two hundred lessons. But the aim of the peer conference is not to collect lessons. Rather, we want to look at how a lesson was learned, whether it was applied and, in case it was applied, what the outcome was.

In this article I want to provide some guidelines for examining how a lesson learned actually comes into being. My aim is to apply these guidelines during the conference so that I can ask better questions. I also want to use the guidelines as input for the workshop that Ruud Cox and I will be running at the end of the conference.

As you can see, I want to focus on how the lesson learned comes into existence, which is the first of a series of steps: the evaluation of the situation in which the lesson was learned and the analysis of the actions that were taken (who did what) in that situation. The second step is the abstraction of the actions to a more generalized level, so that the lesson can be stated in terms that are not so much tied to the context in which it was learned. This makes it possible for people who were not part of the actual experience to understand (and evaluate!) the lesson. Both steps are important, but I want to focus on the first one.

What is a lesson learned?

In order to examine how a lesson learned comes into existence, we first need to know what it is. According to Merriam-Webster a lesson can be defined as ‘something learned by study or experience.’ The definition supposes two ways of learning: one by study and one by experience. Lessons learned, in the context of this conference, focus on learning by experience, and this is an important distinction to make. Obviously, it means we need to have an experience in order to learn a lesson. But it also means that the lesson is directly tied to the experience and maybe even generated by the experience. Just as a river bed is shaped by the flow of water, the lesson learned is shaped by experience.

Experience (Merriam-Webster) means the direct observation of or participation in events as a basis of knowledge. It assumes that a lesson can only be learned when a person is directly involved in a situation. Without this involvement there will be no lesson learned. So personal experience is a key factor.

Lessons learned are, for example, a familiar concept in project management. Commonly, projects have lessons learned sessions, in which it is customary to look back on a project and capture practices or approaches that had either advantageous or adverse consequences. The practices, once captured, can be shared so that they have—or avoiding them has—a positive effect on future projects. The two questions that form the basis of a lessons learned session are ‘what went well’ and ‘what did not go well.’

Evaluation, the messy bit

It seems that it is not hard to answer these previous two questions—’what went well’ and ‘what did not go well’. At least, if I look back at the last couple of months of my current project, I can easily identify some things that worked and some things that didn’t work. I am pretty sure my team members can come up with their own lessons learned without much trouble. But if we would compare those lessons, we would probably find that each person employs different criteria for the evaluation of what happened in the last couple of months.

Subjectivity

So there are a number of things that make it difficult to evaluate what happened in the past—that influence the quality of our perception of the lesson learned. First and foremost, since we are talking about personal experience, the lesson learned must be subjective. There are many situations in which many persons go through the same experience (for example, in a software project). Perhaps in this case a collective assessment counters some of the subjectivity of the individual assessment. But usually, the definition of what went well and what went wrong is a subjective one. Subjectivity should be considered when creating a lesson learned.

Criteria

The other point is that different criteria are used to evaluate a lesson learned. If we say something was a success or a failure, we need criteria by which to judge it. If I look at my project again, I can take for example, the sprint velocity as an indicator of success for a certain approach. Or I can use the general mood in the team, the readability of the code, the speed of the automated tests or the amount of technical debt. These are indicators—some are easy and some are hard to measure—that may tell us about the effect of a certain practice or the change of a certain practice. In the examination of a lesson learned, something has to be said about the (qualitative or quantitative) indicators by which success or failure is measured.

Cause and effect

Changes in practices can have effects on a project. Usually a lesson learned is about a change in some practice to which some effect is ascribed. Say I introduce, in an Agile team, risk analysis as a part of the refinement of a user story. In parallel I think up some indicators that should see improvement because of the introduction of risk analysis. The indicators may never show improvement, which makes it difficult to know if there was an effect, but even if they do, I should not jump to the conclusion that my introduction of risk analysis caused it. There may be other factors. Causal relationships are not easy to evaluate and there are causal fallacies that we can commit along the way. A discussion of cause and effect should be part of a lesson learned.

Context

Furthermore, some analysis of the context is necessary. Why did the actions lead to success or failure in this particular context? And which circumstances caused the learning of the lesson to happen? In other words: what enabled you to learn that lesson? Obviously, if my lesson learned is that introducing risk analysis to an Agile team improves the efficiency of testing, I can only learn this lesson in a context with a team that does not yet use risk analysis. The context enabled me to learn this lesson. Interesting insights could be gained from the study of the factors enabling a lesson learned.

Skills

As a side note, this form of contextual analysis is strongly reminiscent of action research, in which the researcher is involved in a collective effort to, for example, find a solution to a problem. This kind of research requires specific skills in the area of data gathering (for example, the keeping of a journal or log), reflection and evaluation, organization and synthesis. Ultimately, a discussion of lessons learned touches upon the usage of these skills.


DEWT 7 announced

We are happy to announce DEWT7, our seventh annual peer conference. The conference will be held from Friday January 27 until Sunday January 29 2017 at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference starts on Friday evening at 6 pm with dinner, fun, games & conversations. On Saturday morning, at 9 am, the official part of the conference starts. On Sunday we will wrap up at about 3 pm.

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experiences of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. DEWT peer conferences are invitation only.


The theme of DEWT7 is Lessons learned in software testing.

As we grow older, we build up experience. We might even learn a thing or two. Apply it with great success later. Forget it, repeat the old mistake, learn the same lesson again. Find ourselves in a similar situation, apply the lesson learned and deepen our understanding.

These are the kind of stories we would like to hear from you in your experience reports: how you learned one of your great lessons in software testing. One of those lessons that demarcate a period of “before” and “after” – although in some cases the before and after will be difficult to pinpoint to a specific second or day.

And we also want to hear what happened with you and with that lesson afterwards. Do you still apply it? Did you apply it once well-intended, but with horrendous result? Did it grow obsolete? Do you apply it all the time, or only in specific circumstances?

So in total that makes two stories: one of a lesson and one applying that lesson later. Two stories, one experience report, sharing something about what makes you the tester you are.


Our peer conferences follow these rules:

  • The main focus of your experience report should be an actual experience.
  • All presentations are 15/20 minutes followed by “open season”, a facilitated discussion. See the blog post “A Guide to Peer Conference Facilitation” by Paul Holland to learn more about facilitation.
  • Anything shared during the peer conference can be made public with proper attribution. So be careful not to share any proprietary or confidential material.
  • Not everyone will get the opportunity to present. The reason for this is that the focus of a peer conference is on the facilitated group discussion after each experience report.

Philip Hoeben (conference chair) & Joep Schuurkes (content owner)


This peer conference is made possible by the grant we received from the AST.



An Experience Report Guideline

As a result of DEWT6 and other musings, Ruud Cox and I thought it was time to be more specific about what it means to do an experience report. The Dutch Exploratory Workshop on Testing (DEWT) believes the experience report is an important vehicle for learning about software testing. This is why the DEWT peer conferences are centered around them.

Ruud and I wrote a guideline because we think that a good experience report enables software testers to make better decisions about practices and practitioners. Over the years we saw quite a number of experience reports and though many of those talks and the following discussions provided the audience with great insights, we feel that by being more specific about what is expected from an experience report, an even better learning experience can be created.

The guideline can be downloaded (as a PDF document) by clicking on the following link: An Experience Report Guideline.

We would like to thank Jean-Paul Varwijk, Joep Schuurkes and James Bach for reviewing the guideline.


DEWT6: The Medium is the Message

From Friday 22 January till Sunday 24 January 2016 the sixth annual peer conference of the Dutch Exploratory Workshop on Testing was held at Hotel Bergse Bossen in Driebergen, the Netherlands. The theme was ‘Communicating testing during software development’. In this article we give a summary of the proceedings of the conference.

First off, as the conference chair I would like to thank the organizers and the participants for bringing their energy, creativity and inspiration to the conference.

Experience reports

Over the weekend, five experience reports were presented on different aspects of communication (for an overview, see the Program below). Susan van de Ven reached out to the group to help her with a problem of trust and caring. She found herself in a position in which she was responsible for a release decision based on the quality of the product, and yet she did not have the authority to demand the information that she needed. As a peer, she had to elicit the information she needed from the other testers. Topics such as the sharing of responsibility, caring, motivation and the building of trust were discussed.

Ard Kramer told us a powerful story of risk and meaning. We discussed risk as a threat to value and the tester’s ability to identify risk. To discuss risk is to discuss value and meaning, and thus Ard presented a three-layer model in which meaning (why) was at the center, surrounded by the organisation, the outside world (what) and the processes and the project (how). Risk can also be discussed using personas, and in this respect Linda referred to how she used personas from the movie Aladdin to identify certain risks. By discussing risks and meaning, the tester can be a ‘cultural broker‘ between different people in the project.

Thomas Ponnet approached the conference theme by doing some research on different aspects of communication. He provided us with a mind map that he created and some personal stories to go with the model. He regarded the framework as a starting point for investigating, for example, your own communication. Joep (Deepak) Schuurkes talked about a situation in which he hardly communicated about testing at all for the duration of the project. Joep presented his story in a very laid-back manner, which triggered different emotions among the DEWT6 participants. Some felt anger, others depression, and others were happy because Joep’s story appeared to be a story of success. This, in turn, triggered discussions not about what Joep was communicating but about the way in which he was communicating it. Because of that, the phrase ‘the medium is the message’ was introduced and the discussion touched upon this rather scary higher level of abstraction. The fact that the project appeared to be successful without much communication about testing also raised the question of how much communication about testing is enough.

In the final experience report Philip and Femke presented their approach to pair testing, based upon the rules for back-to-back DJing. They talked about some of the sessions that they did, the main objective of which was to transfer knowledge about the product from Femke (a tester and subject matter expert of the product) to Philip (a tester new to the project). They did that by taking control of the application in turns. The discussion touched upon the transfer of knowledge, making tacit knowledge explicit and the balance between narration and asking questions. Interestingly, Philip and Femke tested in different environments, and the discussion also touched upon noise and interruptions from outside that influenced the quality of the communication.

A Web of Meaning / Chaos / Unmeaning

The closing workshop of DEWT6 was organized by Joep Schuurkes. It was entitled ‘Web of Meaning’ and its purpose was to connect the different reports to identify common themes. We split up into four groups and connected the reports using stickies. After that the stickies were aggregated into a huge map, the Web of Meaning. This map is displayed below.

DEWT6 Web of Meaning

The map is an expression of the discussions during open season, the hallway talks, the lunch chats and the late-night philosophies of DEWT6. But because of its chaotic nature it was also quickly dubbed the ‘Web of Chaos’ or the ‘Web of Unmeaning’. During the writing of these proceedings I found it very useful as a reference, but I think it was only useful to me because I was at the conference. Michael correctly remarked that such a collection of insights and phrases requires analysis and synthesis in order to be useful for a larger audience. This will certainly be part of our effort during DEWT7.

Participants

DEWT6 was experienced in the gracious company of the following people.

Aleksandar Simic (DE)
Alexandru Rotaru (RO)
Andreas Faes (BE)
Ard Kramer (NL)
Ben Peachey (NL)
Beren van Daele (BE)
Eddy Bruin (NL)
Femke Boerrigter (NL)
Jackie Frank (NL)
Joep Schuurkes (NL)
Joris Meerts (NL) – Conference chair
Linda van de Vooren (NL)
Michael Bolton (CA)
Peter ‘Simon’ Schrijver (NL) – Content owner
Philip Hoeben (NL)
Rob van Steenbergen (NL)
Robert Page (NL)
Ruud Cox (NL) – Facilitator
Simone de Ruijter (NL)
Susan van de Ven (NL)
Thomas Ponnet (DE)
Wim Heemskerk (NL)
Zeger van Hese (BE) – Facilitator

DEWT6 participants

Back row (from left to right): Simone de Ruijter, Andreas Faes, Thomas Ponnet
Middle row (from left to right): Aleksandar Simic, Joris Meerts, Peter ‘Simon’ Schrijver, Beren van Daele, Zeger van Hese, Jackie Frank, Susan van de Ven, Robert Page, Linda van de Vooren, Ben Peachey
Front row (from left to right): Philip Hoeben, Femke Boerrigter, Ruud Cox, Ard Kramer, Joep Schuurkes, Alexandru Rotaru, Eddy Bruin, Rob van Steenbergen
Not in the picture: Michael Bolton, Wim Heemskerk

Program

Saturday
Susan van de Ven – The perhaps too informal approach to testing communication
Ard Kramer –  Is there a risk?
Thomas Ponnet – W5H3
Joep Schuurkes – The time I didn’t communicate about my testing. Or did I?
Sunday
Philip Hoeben & Femke Boerrigter – Communication during dynamic pair testing
Joep Schuurkes – Web of Meaning (workshop)


Sponsorship

DEWT6 was sponsored by the Association for Software Testing. On behalf of DEWT and the participants of DEWT6, I thank them for their support!


DEWT6 announced: Communicating testing during software development

From Friday 22 January till Sunday 24 January 2016 the sixth annual peer conference of the Dutch Exploratory Workshop on Testing will take place at Hotel Bergse Bossen in Driebergen, the Netherlands. The conference is organized by Joris Meerts as conference chair, Jean-Paul Varwijk as content owner and Ruud Cox and Zeger van Hese as facilitators. The twitter hashtag for this peer conference will be #DEWT6.

The theme of DEWT6 is ‘Communicating testing during software development’. Jean-Paul Varwijk illustrates the theme as follows.

One of the goals of software testing, particularly context-driven software testing, is to supply our stakeholders with information. Often we mention how the type and quality of the information we provide extends beyond the presentation of mere metrics such as pass/fail rates. We believe that this information should enable our stakeholders to make informed and meaningful decisions on whether or not the developed software suits their needs and wants, and lives up to the relevant (quality) standards and expectations.

Much of this information, or at least what is communicated about it, is directed towards the final stages of development, but most exchanges of information happen during development itself. During DEWT6 we would like you to share with us your experiences in communicating about testing while the software is being developed. With whom did you share your test ideas and test results? How did you share them? How was your feedback received? Did it turn out the way you expected? Was it useful?

The peer conferences of the Dutch Exploratory Workshop on Testing are based on the experience reports of the participants. Therefore each participant is asked to prepare an experience report on the conference theme. Participation in this conference is by invitation only.

The DEWT peer conferences are modeled after the Los Altos Workshop on Software Testing (LAWST) and the Software Test Managers Roundtable (STMR). More information about this type of conference can also be found in Paul Holland’s Guide to Peer Conference Facilitation.


DEWT5 Report

The 5th DEWT peer conference took place January 16-18, 2015 at Hotel Bergse Bossen in Driebergen, the Netherlands. The central theme was “Test Strategy”.

DEWT5 was attended by Ben Peachey, Daniel Wiersma, Eddy Bruin, Huib Schoots, Ilari Henrik Aegerter, Jackie Frank, Jeanne Hofmans, Jean-Paul Varwijk, Jeroen Mengerink, Joep Schuurkes, Joris Meerts, Maaike Brinkhof, Maaret Pyhäjärvi, Marjana Shammi, Massimo D’Antonio, Pascal Dufour, Peter “Simon” Schrijver, Philip Hoeben, Ray Oei, Richard Bradshaw, Ruud Cox, Ruud Teunissen, Simon Knight and Zeger van Hese.
Helena Jeret-Mäe unfortunately couldn’t make it.

Below is the schedule of the conference, managed in Trello.

DEWT5 Schedule


Maaret Pyhäjärvi wrote some blog posts upfront.

Simon Knight referred to several articles in The Testing Planet.

The collection of index cards on the DEWT5 Learning Wall.

Collected ‘#DEWT and #DEWT5’ tweets by Richard Bradshaw

DEWT5 Sketchnotes by Zeger van Hese

Not a Conference on Test Strategy by Joris Meerts

(In Response to DEWT5) – What Has a Test Strategy Ever Done for Us? A response to Joris’ post by Colin Cherry

On behalf of all the DEWTs I’d like to thank the AST for the grant which contributed to the success of this conference.
