No, dear DEWTs, I did not misunderstand the assignment. The title of this series (“Why I am context-driven”) was handed to me chiseled in stone, ten commandments-style. The Moses in me chose to rearrange the tablets. I felt that the original title was asking for a justification of my context-driven-ness, as in “Why did you choose to live the context-driven life?”.
I did not choose the context-driven life. The context-driven life chose me.
Wait a second – did I just paraphrase the enlightened philosopher 2Pac in public? The point is – I don’t feel it was a conscious decision on my part. It was in my testing genes all along, waiting for me to discover it. Here are some defining moments and personal epiphanies as I recall them:
Early tester life
I arrived late to the testing party. I worked for a movie distributor specializing in arthouse cinema at first, followed by a brief stint as a COBOL developer. Those were the days! Riding with dinosaurs! Yelling commands at the compiler: “Hey you, move to Working-Storage Section! And you, Compute this, Display that!”.
In 2000, by the time I joined my first test team, I was almost thirty years old.
I was convinced it was going to be a temporary job since I was called in to go help some colleagues who were short of testers in their team. Unlike many of my team members who had chosen the testing career path, I never received any formal testing training. After all, I was only meant to be there for the short term. So while the rest of the team was being introduced to the wonders of “structured testing”, I was trying to figure out what the hell the system under test was trying to tell me – I taught myself to listen. By doing that I was able to unearth problems and loads of useful information. Right then and there I fell in love with the joy of exploration and discovery.
1st great realization:
Exploring software systems makes me feel alive
The team came back fully trained, armed with jargon and techniques. I wanted to tap into their newly acquired knowledge and listened carefully when they told stories about equivalence class partitioning, all pairs testing and different sorts of code coverage. Wow, those were actual tools of the trade I could use! The more I got to know, the more I started to like this testing thing.
Some of the best practices they took home confused me though, such as the principle to create test scripts upfront. I had just spent a couple of days discovering important problems and I asked myself: would I have found those very issues if I had created all my tests upfront? Best practice or not, the philosophy behind the whole thing seemed flaky. Why would you base your whole verification process on stuff created at a moment when you know so little about what is coming your way? Who defines what’s “best”, anyway?
2nd great realization:
Other people’s preferred methods might not work for me
A few months of testing turned into a couple of years, and I was lucky enough to work on different product teams across various industries. It wasn’t easy to find time to explore the software as I used to, because most of these teams had a testing methodology in place with lots of procedures, templates and test scripts designed upfront. I ended up doing most of it under the radar: when developing scripts, I realized I was exploring to make them the best I possibly could; when executing scripts, I was exploring on the side because it seemed silly to only stay on predefined paths. Exploring never failed to surface important problems. I felt that all of testing was infused with exploration. I thought it was all just common sense. People just looked at me funny.
3rd great realization:
Exploration is at the heart of all things testing
4th great realization:
My common sense is not other people’s common sense
The reality check
When I got the opportunity to lead a team of testers through an important new release, I grabbed it with both hands and welcomed any guidance I could get. People I highly respected advised me to stick to the procedures and templates with this one, as it was a unique pilot that shouldn’t go wrong. They spoke from experience, since “we used this approach in all our projects and it always worked” (emphasis theirs). I thought that was a bold claim (always? for all of them?), but I decided to give it a go.
The results were less than stellar.
The project came gift-wrapped with spectacularly detailed requirements – the user interface specifications document alone was as thick as a phone book. The software was not ready yet, but we used our time well, churning out elaborate scripts like there was no tomorrow. When the software finally arrived, it looked nothing like what we had envisioned. As a result, our scripts turned out to be brittle and trivial. On top of that, the whole team was getting desperate, bored and tired of following scripts while they felt they could do much more valuable work.
5th great realization:
Context eats strategy for breakfast
6th great realization:
If testing is boring, I’m probably doing it wrong
Our project manager asked for pass/fail rates, bug and test case metrics. I proposed giving him an analysis of the most important problems instead, but he insisted on getting the numbers. Once these numbers got out, people started altering their behavior. It was the first time that I witnessed the counterproductive potential of metrics.
7th great realization:
Not all metrics are useful – some are dangerous
When the project manager wanted extra graphs for his report, I duly delivered. Three weeks later he was asking me to tweak these graphs to make the situation look less dramatic. It became clear that we had different interests – I assume his targets and reputation were at stake, while I was concerned about my integrity and credibility as a tester. I wanted to help him, but it felt as if every muscle in my body was resisting.
8th great realization:
I value my integrity
In 2003, a co-worker approached me with a big grin on his face, saying “Check this – you might like it” as he threw a conference handout on my desk. A whole presentation on something called “exploratory testing”! Was this for real!? Turned out that it was. Even better: it described my favorite part in testing – the part that seemed so natural to me – as a recognized testing approach. It even had a proper name!
I wanted to know more and started reading everything I could get my hands on. This quickly led to Cem Kaner and James Bach, who championed exploratory testing as a sapient approach involving simultaneous test design, test execution and learning. All their work appeared to be rooted in science, and well thought out. And it wasn’t just theoretical thought-exercises either, they actually gave plenty of pointers on how to do exploratory testing well and how to make it more manageable.
They called it a martial art of the mind, scientific thinking in real time. They did not just make it sound cool – they also put effort into countering the common criticism that it is unstructured, asserting that it can be as disciplined as any other intellectual activity. When they stated that virtually all testing performed by human testers is exploratory to some degree, I knew I had found my tribe.
9th great realization:
I am not weird – other people think alike
It was almost inevitable that I would cross paths with the context-driven school of testing. Although that only happened years later, it was a kind of homecoming.
I discovered a vibrant community, a bunch of skeptics that rejected the idea of best practices, didn’t take anything for granted and were serious about studying their craft. They looked outwards, not only inwards, drawing from sociology, psychology, even philosophy (which was music to my ears – it matched my own tendency to look for testing lessons outside the field of testing).
Members of the context-driven school pointed to Thomas Kuhn’s “The Structure of Scientific Revolutions” to explain how it is possible that different groups of people – although they all claim to be studying the same field of practice – are using such radically different ontologies to describe it. The different schools of testing all have different paradigms, different goals and value different things (which in hindsight explained why I sometimes felt so alienated from other testers - and they from me).
Years have passed and although a lot of things around me (and probably inside me as well) have changed, I am still part of that community. It has become my touchstone for new ideas and my first-line help desk when I am struggling with testing problems. It is peers like these who encourage me to continuously learn and stay on top of my game.
I am aware that there are significant drawbacks to surrounding yourself with too many like-minded people, so I try to engage with people who are willing to ask the hard questions and challenge my beliefs, even when they don’t necessarily disagree with me. The good thing is that there are plenty of those to be found in the community, but I constantly remind myself to keep an open mind and to keep interacting with people outside of it as well.
That community is by far the most visible (and audible) part of context-driven testing, but it is not the reason why I consider myself a context-driven tester. As I mentioned above, it was not a conscious choice. Rather, it is how I make sense of the testing world around me. I consider it a value system: my personal set of morals, ethics, standards, preferences and world views that constitute my DNA as a tester.
So yes, dear DEWTs. I’m context-driven. It is baked into my system. There is no why.