Cooking up a lesson learned

The seventh edition of the annual peer conference of the Dutch Exploratory Workshop on Testing will be about lessons learned. The theme immediately calls to mind the book Lessons Learned in Software Testing, which provides the reader with over two hundred lessons. But the aim of the peer conference is not to collect lessons. Rather, we want to look at how each lesson was learned, whether it was applied and, if so, what the outcome was.

In this article I want to provide some guidelines for examining how a lesson learned actually comes into being. My aim is to apply these guidelines during the conference so that I can ask better questions. I also want to use them as input for the workshop that Ruud Cox and I will be running at the end of the conference.

As you can see, I want to focus on how the lesson learned comes into existence, which is the first of a series of steps. The first step is the evaluation of the situation in which the lesson was learned and the analysis of the actions that were taken (who did what) in this situation. The second step is the abstraction of those actions to a more generalized level, so that the lesson can be stated in terms that are not tied to the context in which it was learned. This makes it possible for people who were not part of the actual experience to understand (and evaluate!) the lesson. Both steps are important, but I want to focus on the first one.

What is a lesson learned?

In order to examine how a lesson learned comes into existence, we first need to know what it is. According to Merriam-Webster, a lesson is ‘something learned by study or experience.’ The definition supposes two ways of learning: one by study and one by experience. Lessons learned, in the context of this conference, focus on learning by experience, and this is an important distinction to make. Obviously, it means we need to have an experience in order to learn a lesson. But it also means that the lesson is directly tied to the experience and perhaps even generated by it. Just as a river bed is shaped by the flow of water, a lesson learned is shaped by experience.

Experience, according to Merriam-Webster, means the direct observation of or participation in events as a basis of knowledge. This implies that a lesson can only be learned when a person is directly involved in a situation. Without this involvement there will be no lesson learned. So personal experience is a key factor.

Lessons learned are a familiar concept in project management, for example. Commonly, projects have lessons learned sessions, in which it is customary to look back on a project and capture practices or approaches that had either advantageous or adverse consequences. Once captured, these practices can be shared so that adopting them (or avoiding them) has a positive effect on future projects. The two questions that form the basis of a lessons learned session are ‘what went well’ and ‘what did not go well.’

Evaluation, the messy bit

It seems that these two questions, ‘what went well’ and ‘what did not go well’, are not hard to answer. At least, if I look back at the last couple of months of my current project, I can easily identify some things that worked and some that did not. I am pretty sure my team members can come up with their own lessons learned without much trouble. But if we compared those lessons, we would probably find that each person employs different criteria for evaluating what happened in those months.

Subjectivity

So there are a number of things that make it difficult to evaluate what happened in the past, things that influence the quality of our perception of the lesson learned. First and foremost, since we are talking about personal experience, the lesson learned must be subjective. There are many situations in which many people go through the same experience (for example, in a software project). Perhaps in such cases a collective assessment counters some of the subjectivity of the individual assessments. But usually, the definition of what went well and what went wrong is a subjective one. Subjectivity should be considered when creating a lesson learned.

Criteria

The other point is that different criteria are used to evaluate a lesson learned. If we say something was a success or a failure, we need criteria by which to judge it. If I look at my project again, I can take, for example, the sprint velocity as an indicator of success for a certain approach. Or I can use the general mood in the team, the readability of the code, the speed of the automated tests or the amount of technical debt. These indicators, some easy and some hard to measure, may tell us about the effect of a certain practice or of a change in a certain practice. In the examination of a lesson learned, something has to be said about the (qualitative or quantitative) indicators by which success or failure is measured.
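To make the quantitative side concrete, here is a minimal sketch in Python, with invented numbers, of how one such indicator (sprint velocity) might be compared before and after a change in practice:

    # A toy sketch, not from any real project: comparing one quantitative
    # indicator, sprint velocity, before and after a change in practice.
    # All numbers are invented for illustration.
    from statistics import mean

    velocity_before = [21, 18, 24, 20, 19]  # story points per sprint, before the change
    velocity_after = [23, 25, 22, 26, 24]   # story points per sprint, after the change

    delta = mean(velocity_after) - mean(velocity_before)
    print(f"Average velocity changed by {delta:+.1f} story points per sprint")

    # Note: a positive delta is only an indicator, not proof of causation;
    # other factors may have changed at the same time (see the next section).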

Cause and effect

Changes in practices can have effects on a project. Usually a lesson learned is about a change in some practice to which some effect is ascribed. Say I introduce risk analysis into an Agile team as part of the refinement of a user story. In parallel, I think up some indicators that should improve because of the introduction of risk analysis. The indicators may never show improvement, which makes it difficult to know whether there was an effect; but even if they do, I should not jump to the conclusion that my introduction of risk analysis caused it. There may be other factors. Causal relationships are not easy to evaluate, and there are causal fallacies that we can commit along the way. A discussion of cause and effect should be part of a lesson learned.

Context

Furthermore, some analysis of the context is necessary. Why did the actions lead to success or failure in this particular context? And which circumstances made the learning of the lesson possible? In other words: what enabled you to learn that lesson? Obviously, if my lesson learned is that introducing risk analysis to an Agile team improves the efficiency of testing, I can only learn this lesson in a context with a team that does not yet use risk analysis. The context enabled me to learn this lesson. Interesting insights could be gained from studying the factors that enable a lesson learned.

Skills

As a side note, this form of contextual analysis is strongly reminiscent of action research, in which the researcher is involved in a collective effort to, for example, find a solution to a problem. This kind of research requires specific skills in the areas of data gathering (for example, keeping a journal or log), reflection and evaluation, and organization and synthesis. Ultimately, a discussion of lessons learned touches upon the use of these skills.
