DEWTs at TestNet Najaarsevenement 2013

Much has changed in the last few years. A lot of things went much faster than any of us had anticipated. What started out as a place to discuss and talk about testing has become something with a stronger voice and direction.

This week at least six DEWTs are speaking at the Dutch TestNet Najaarsevenement (see here and here). Most of us never thought that this was something we would do, ever.

So we are very proud! And still there is a lot to learn. For us, this event is also a place to learn and to interact with other testers, many of whom are not even aware of context-driven testing, or hold rather shallow definitions of what CDT really is. That is no problem; by discussing and interacting we want to improve ourselves and our craft. Stay tuned…

And: let’s GO!


Reading Skills for Software Testers

From 19 to 21 April 2013, the third peer conference of the Dutch Exploratory Workshop on Testing was held in Driebergen, The Netherlands. In a talk I gave during that conference, I made a slightly emotional appeal to start looking at software testing skills, in order to explore the question of what makes a good tester.

The particular skill I mentioned in my talk was reading. Some time ago, I did an assignment in which reviewing functional documentation played a major part. In preparation for this assignment, I was asked to take a close look at chapter 15 of the book TMap Next (Koomen, 2006). TMap Next represents a test management approach that is widely used in the Netherlands. Chapter 15 is on ‘Toetstechnieken’, translated (in the English edition) as ‘Evaluation techniques’. It was suggested that the information in this chapter would prepare me for the task of reviewing functional documentation.

Building a mental picture through reading

While working on my assignment, reviewing the documentation, I found that I was conducting an investigation into the testability of the design. Now, testability may mean different things depending on who you ask. To me, testability meant that the design, the large set of documents placed before me, had to be internally consistent, and consistent with relevant standards and related documents. Furthermore, the structure of the design had to be complete and correct. This meant, among other things, that flows through the design could be followed without gaps, disconnects or open ends. A testable design, in my opinion, was a design from which tests could be created without leaving too much room for interpretation and thus ambiguity. I was not actually looking at the functional correctness of the design with regard to user requirements. For several reasons, that would have severely complicated the scope of my review.

It turned out that, when reviewing for consistency and structure, many things need to be considered. Based on nothing more than words and diagrams, it should be possible to create a mental model of the system and operate that system within the mind. Such a model may contain a hierarchical structure, interacting components and flows of information. So concepts of systems need to be discovered and checked against what’s on paper.

But we may also build different models, based upon the text. Such are models of the organisation, models of how the organisation perceives reality, and models of how the organisation deals with its knowledge, how it describes what’s meaningful and special. All of these models can be investigated through interacting with the text.

Testers as first-rate readers

Reviewing documentation is a process of questioning and evaluation. In that respect it is like software testing, which is the art of skillful investigation through experimentation. A greater awareness of how we read can aid this investigation. There are models, such as SQ3R (Survey, Question, Read, Recite, and Review; Robinson, 1946), that emphasize the investigative part of reading. There is also the famous How to Read a Book (Adler, 1940), which touches upon many themes that are important in testing, such as analytical reading and critical reading, but also prejudice and judgment.

Reading at a more detailed level can provide even more information. A heuristic such as ‘Mary had a little lamb’, introduced by Gerald Weinberg and Donald Gause (Weinberg, 1989), focuses on how the meaning of a sentence changes whenever a different word is stressed: ‘Mary had a little lamb’ points at Mary rather than someone else, while ‘Mary had a little lamb’ suggests she no longer has it.

The most fundamental skill that aids the creation of a mental model from paper is reading. Actually – considering that software testers are continually confronted with written material – reading should be a skill liberally bestowed upon the average tester. We should be first-rate readers.

So why is the art of reading missing in the TMap methodology?

It is hard not to notice that chapter 15 of TMap Next does not mention reading at all. If we summarize the contents of this chapter drastically, then what it says about reviewing is that it can be done by one or more persons. The chapter focuses on the review process, using the IEEE Standard for Software Reviews (IEEE Std 1028-1997) as a starting point. It tells us something about the roles, criteria and procedures for the review types inspection, walkthrough (Waldstein, 1974, though for some reason not referenced in TMap Next) and review.

It may be that the authors of TMap Next simply assumed that reading skills are omnipresent among software testers. What I did notice over the years is that people who are designated to review documentation can fail horribly because they simply do not know how to read what is usually rather complicated documentation. So, at the very least, reading is a cognitive skill that should be fostered carefully. Somewhere in the past, in a period of ill-justified innocence, I guess I believed that all testers possessed the skills they naturally require. I do not believe that anymore.

Another premise of the chapter could be that not reading skills, but the processes it describes, are the way to handle reviewing. Here my main struggle is that I have seen review processes in action (though nothing identical to the processes described in chapter 15) and I never once felt that the processes themselves were the reason why the results were as they were. A process suggests that actors (read: humans) are aligned to accomplish a task together. Chapter 15 does not shed even the dimmest of lights on the capacities of those actors or the circumstances under which they interact. Maybe it assumes that the actors are all the same and only need to be switched on and off at the right time, in the right room, in the right combination. But even that basic and dreadfully incorrect assumption is not mentioned in the chapter. Without these fundamental assumptions, the process is a sham.

What it boils down to is that chapter 15 had better not be used in software testing, and had better not be taught to testers as an introduction to or an elucidation of reviewing. As far as I am concerned, inflicting the information contained in chapter 15 on testers should be considered harmful. The main reason is that it is impossible to verify what is described in chapter 15. The second reason is that whatever is described in the chapter is not related to the craft of software testing, because we know that software testing is about things such as reading. We should realize that testers, when reviewing, have far more pressing problems than figuring out the minimum number of warm bodies needed in order to officially commence the review process.

References

Adler, Mortimer – How to Read a Book, 1940

Koomen, Tim – TMap Next, voor resultaatgericht testen, 2006

Robinson, Francis Pleasant – Effective Study, 1946

Waldstein, N.S. – The Walk-Thru – A Method of Specification, 1974

Weinberg, Gerald – Exploring Requirements, 1989


DEWT3 Experience Reports

DEWT3 Participants

Standing from left to right: Jurian van de Laar, Stephen Hill, Huib Schoots, Joep Schuurkes, Joris Meerts, Rik Marselis, Pascal Dufour, Ruud Cox, Bernd Beersma, Markus Gaertner, Philip Hoeben, Philip-Jan Bosch, Derk-Jan de Grood, Adrian Canlon, Ard Kramer, Peter Duelen, Jean-Paul Varwijk, Zeger van Hese, Angela van Son, James Bach, Ray Oei. Sitting: Michael Philips

Below are a number of experience reports written by attendees. If you want an in-depth view of what happened during this peer conference, I recommend reading them all.

Dutch Exploratory Workshop in Testing (DEWT) 3 and General Systems Thinking at the third Dutch Exploratory Workshop on Testing, both by Stephen Hill

DEWT3 Sketchnotes by Ruud Cox

DEWT3 Sketchnotes by Huib Schoots

DEWT3 experience report by Joep Schuurkes

In addition, Joris Meerts created an interesting list of Systems Thinking books.

On behalf of all the DEWTs, I’d like to thank the AST for the grant which helped make this conference a success.


DEWT3 has begun…

In the bar…

DEWT3 – Grand bar of ‘Bergse Bossen’, Driebergen

Testing begins…


The Real Future of Software Testing

It is a common notion that we can learn about the future by looking at the past. At the very least, by looking at the paths that were travelled to get us to the here and now, we gain an understanding of where we stand. Through that single point we may, for example, draw lines from the past and project them into the future.

This technique is frequently used to make inferences about the future of software testing. Many of the visions are based on extrapolating market trends in software development. If, in the recent past, we noticed a growing use of mobile devices, it is likely that we will see continued growth in mobile testing. If security emerged as a hot topic in the past couple of years, security testing will continue to be of interest. Most of these predictions, about e.g. cloud, mobile, Agile, security and big data, can be found in the regular forecasts by IT research companies such as Gartner, Ovum and Forrester. To draw a picture of the future of testing, the only thing left to do is to add ‘testing’ to each of the trends. It is that simple.

But do ‘cloud testing’, ‘mobile testing’, ‘Agile testing’ and all these other collations actually tell us something about the future of software testing? In many cases, not specifically. Take mobile testing. It is a certainty that the development of applications for mobile devices carries with it a great number of specific technologies, tools and challenges. These things affect the day-to-day work of the tester. He has to grasp new domains, new tools, new ways of working and novel perspectives on software usage. We know they affect the work, but what we fail to investigate or even notice is how they affect the craft; how the basic paradigms in software testing evolve because of whatever happens in software development. It is an important realization that by mapping the future of software testing onto shifting technologies and, for example, changing perspectives on software usage, we focus on how the work is affected, but not on the underlying paradigms that drive the craft. This, to me, is not helpful in identifying the ways in which software testing evolves. In the worst case it causes regression, giving way to views that functional testing is an ancient and obscure specialism for which the need is rapidly waning (Whittaker, 2011).

To further elucidate this example, I would like to look at popular test automation tools such as Selenium, HP QuickTest Professional or Watir. The evolution of software testing has become intertwined with test automation tools in such a way that, by focusing increasingly on familiarity with tools, knowledge of testing is dispersed. While a tool clearly advances the reach and capabilities of functional testing, it does not advance the paradigms that drive the testing effort. Tests still need to be created, and the intelligence with which they are created is one of the factors seriously affecting the success of test automation. The tool merely amplifies intelligent use, or the lack thereof. Everyone knows the old adage that ‘a fool with a tool is still a fool’. By this adage, while we educate hosts of ‘Selenium testers’, ‘mobile testers’ or ‘cloud testers’, what we get may still be only a handful of testers who grasp the paradigms of functional testing and are able to use the tool successfully. From this particular point of view, the term ‘Agile tester’, for example, is nothing more than an empty vessel.
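To make the adage concrete, here is a minimal sketch (Python with Selenium WebDriver; the URL, element names and expected message are made up for illustration) of two checks that the tool will execute with equal obedience. The tool automates the observation; the value of each check still depends entirely on the test idea behind it.

```python
# A minimal sketch, not a recipe: Selenium runs whatever we give it.
# The page, element names and messages below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")  # hypothetical login page

    # A shallow check: passes as long as *some* page with a title loads.
    # The tool executes it flawlessly; it just tells us next to nothing.
    assert driver.title != "", "page has no title"

    # A more thoughtful check: derived from an expectation about how the
    # system should respond to a specific, deliberately chosen input.
    driver.find_element(By.NAME, "username").send_keys("alice")
    driver.find_element(By.NAME, "password").send_keys("wrong-password")
    driver.find_element(By.NAME, "submit").click()
    error = driver.find_element(By.CLASS_NAME, "error").text
    assert "invalid" in error.lower(), "expected a specific failure message"
finally:
    driver.quit()
```

Nothing in the tool distinguishes the shallow check from the thoughtful one; that distinction lives entirely in the tester.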

If we want to take a look at the future of software testing, we have to look at what is left when we strip from it the knowledge of tools, technologies or domains. Functional testing is, among other things, the art of investigation by experimentation, and for this we basically have two paradigms: (functional) specification-based test design using (formal) test techniques, and exploratory test design. Test specification techniques were mostly created in the 1970s, while exploratory testing was introduced (formally) by Cem Kaner in 1988 (Kaner, 1988). Both of these ways of investigating software are recognized as established points of view in the testing literature. And since they have been around for quite a while, the question arises whether there has been a long pause in the growth of software testing as a discipline. In our collective view, dominated by the perspectives of those who casually couple software testing with the latest software development infatuation, this may be the case.
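As an illustration of the first paradigm (the text names no specific technique, so take boundary value analysis, a classic example from that era), here is a minimal sketch in Python against a hypothetical rule that ages from 18 through 65, inclusive, are accepted:

```python
# Boundary value analysis: derive test inputs from the edges of a
# specified range. The rule (ages 18..65 accepted) and the function
# under test are hypothetical.
def is_eligible(age: int) -> bool:
    """Hypothetical implementation under test."""
    return 18 <= age <= 65

# Just below, on, and just above each boundary.
boundary_cases = {
    17: False, 18: True, 19: True,   # lower boundary
    64: True, 65: True, 66: False,   # upper boundary
}

for age, expected in boundary_cases.items():
    actual = is_eligible(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary checks passed")
```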

So what we fail to notice is the real way forward in functional testing. Advances in this area, especially in exploratory testing, have been made. In 1996 James Bach introduced the Heuristic Test Strategy Model (Bach, 1996), drawing on research into heuristic discovery and problem solving by Herbert Alexander Simon (Simon, 1957) and the Hungarian mathematician George Pólya (Pólya, 1947). In 2008 Julian Harty presented his talk Six Thinking Hats for Software Testers at StarWest (Harty, 2008). Borrowing from the ideas of the British psychologist Edward de Bono (De Bono, 1985), Harty introduced the notion of different ways of thinking into software testing. In recent years Michael Bolton has published on the concept of tacit knowledge in software testing (Bolton, 2011), drawing on work by the British sociologist Harry Collins (Collins, 2010) and the Hungarian philosopher Michael Polanyi (Polanyi, 1966), who introduced the concept.

Among other scientific concepts that were introduced into software testing is systems thinking, as conceived by the Austrian biologist Ludwig von Bertalanffy (Von Bertalanffy, 1968). The notion that we should look at systems as a whole and not as a sum of parts was applied to software engineering by the great Gerald Weinberg (Weinberg, 1975). Another concept is that of grounded theory, which is, in essence, the building of a theory through the qualitative research of data. It was introduced by sociologists Barney Glaser and Anselm Strauss (Glaser, 1967) and applied to software testing by Rikard Edgren (Edgren, 2009).

The list above is by no means exhaustive. For now it suffices to say that if we regard software testing as skillful investigation by experimentation, we should try to benefit from what we know about investigation and experimentation. As we have seen, this knowledge comes from different areas of scientific research. For the future of software testing to be bright, it must be built on these foundations.

References and further reading
Bach, James. Heuristic Test Strategy Model (1996)
Von Bertalanffy, Ludwig. General System Theory (1968)
Bolton, Michael. Shapes of Actions (2011)
De Bono, Edward. Six Thinking Hats (1985)
Collins, Harry. Tacit and Explicit Knowledge (2010)
Edgren, Rikard. Grounded Test Design (2009)
Glaser, Barney and Strauss, Anselm. The Discovery of Grounded Theory (1967)
Harty, Julian. Six Thinking Hats for Software Testers, StarWest (2008)
Kaner, Cem. Testing Computer Software (1988)
Polanyi, Michael. The Tacit Dimension (1966)
Pólya, George. How To Solve It (1947)
Simon, Herbert Alexander. Models of Man (1957)
Weinberg, Gerald. An Introduction to General Systems Thinking (1975)
Whittaker, James. All That Testing is Getting in the Way of Quality, StarWest (2011)


Software Quality Characteristics poster of The Test Eye translated into Dutch

At EuroSTAR 2012, Henrik Emilsson of The Test Eye gave a talk about their Software Quality Characteristics poster. After his talk he asked whether anyone was interested in translating the poster into other languages. Today, DEWT proudly presents the Dutch translation: Software Kwaliteit Kenmerken.


Announcing DEWT3

The 3rd DEWT peer conference will take place on 19-21 April at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT is a conference in the series of peer conferences on testing such as LAWST, LEWT, SWET and GATE.

The main theme will be

Systems Thinking

The Twitter hashtag for this peer conference is #DEWT3.

Joris Meerts maintains a list of books related to the theme.

This conference is for DEWTs and invitees only. 21 people will participate in DEWT3.

Guests
James Bach (USA), Bernd Beersma, Philip-Jan Bosch, Peter Duelen (Belgium), Pascal Dufour, Markus Gaertner (Germany), Derk-Jan de Grood, Stephen Hill (UK), Jurian van de Laar, Rik Marselis, Joep Schuurkes, Angela van Son.

DEWTs
Adrian Canlon, Ruud Cox, Zeger van Hese (Belgium), Jeanne Hofmans, Philip Hoeben, Joris Meerts, Ray Oei, Huib Schoots, Jean-Paul Varwijk.


DEWT2 was a blast!

Standing left to right:
Ray Oei, Jean-Paul Varwijk, Adrian Canlon, Markus Gärtner (Germany), Ruud Cox, Joris Meerts, Pascal Dufour, Philip Hoeben, Gerard Drijfhout, Bryan Bakker, Derk-Jan de Grood, Joep Schuurkes, Lilian Nijboer, Philip-Jan Bosch, Jeroen Rosink, Jeanne Hofmans
Kneeling left to right:
Tony Bruce (UK), Zeger van Hese (Belgium), Ilari Henrik Aegerter (Switzerland), Huib Schoots, Peter Simon Schrijver

DEWT2 was a success. The list below contains blog posts written by attendees. If you want an in-depth view of what happened during this peer conference, I recommend reading them all.

During the conference, pictures were taken and uploaded to Twitter. I captured those pictures and included them here. Their quality is low, but they cover the conference well and give a good impression.


Announcing DEWT2

The 2nd DEWT workshop will take place on October 5-6 at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT is a workshop in the series of peer workshops on testing such as LAWST, LEWT, SWET and GATE.

The main theme of this peer workshop will be:

Experience Reports: Implementing Context-Driven Testing

The Twitter hashtag for this peer conference is #DEWT2.

Program

Friday, October 5 

18.00 – 19.00 Opening Remarks
19.00 – –.– Food, drinks, puzzles & lightning talks
Jean-Paul Varwijk on a context-driven test approach
Zeger van Hese on intakes

Saturday, October 6

09.00 – 09.15 Opening Remarks
09.15 – 10.00 Ilari Henrik Aegerter – Introducing context-driven testing at eBay
10.00 – 10.45 Markus Gärtner – What I learned from coaching a context-driven tester
10.45 – 11.30 Ray Oei – Workshop RST/CDT at clients and/or teaching testers at AST
11.30 – 13.00 Lunch, Walk in the forest, Group photo

13.00 – 14.00 Ruud Cox – Testing medical devices, a context-driven spin-off
14.00 – 15.00 Huib Schoots – Context-driven testing at Rabobank International
15.00 – 16.00 Test lab / live test / dojo
16.00 – 17.00 Open podium

*The current program is provisional and can be changed if the group so desires.

23 people will participate in DEWT2.

Guests
Markus Gärtner (Germany), Ilari Henrik Aegerter (Switzerland), Tony Bruce (UK), Gerard Drijfhout, Pascal Dufour, Rob van Steenbergen, Derk-Jan de Grood, Joep Schuurkes, Leon Bosma, Bryan Bakker, Lilian Nijboer, Philip-Jan Bosch

DEWTs
Adrian Canlon, Ruud Cox, Philip Hoeben, Zeger van Hese (Belgium), Jeanne Hofmans, Joris Meerts, Ray Oei, Jeroen Rosink, Huib Schoots, Peter Simon Schrijver, Jean-Paul Varwijk


Excellent Workshop on Coaching

On March 7th, we had an excellent workshop on coaching. Present were: Angela van Son (trainer), Michael Bolton, Adrian Canlon, Ruud Cox, Zeger van Hese, Philip Hoeben, Jeanne Hofmans, Joris Meerts, Ray Oei, Jean-Paul Varwijk, Jeroen Rosink, Huib Schoots and Simon Peter Schrijver.

The programme

After eating pizza, the programme started with the question: what is coaching? What I put in my notes is that coaching is about finding answers. The coach supports the coachee by listening, asking questions and providing feedback. But it is key that the coachee finds the answers within himself or herself.

After this introductory discussion we started with the first exercise. The assignment was to form teams of two, in which one person asks questions to find out the destination of his or her partner’s next holiday: two minutes with only closed questions, followed by two minutes with only open questions. The lesson learned from this exercise is that open questions make a coachee think of an answer, while closed questions let the coachee pick from a discrete set of possible answers, e.g. yes or no, without too much thinking. To put it in an oversimplified way, from a coaching perspective open questions are good and closed questions are bad.

The second exercise was about advice. Two persons have a conversation in which one asks for advice but the other is not allowed to give any. The exercise took seven minutes per conversation. The result was hilarious: some people were literally begging for advice, others resorted to all kinds of tricks. From a coaching perspective this exercise showed that it is very tempting to give advice where the coachee has to find the answers within himself or herself. A lesson learned is that giving advice is not bad, but it shouldn’t be the default.

The last topic on the agenda was the layers of communication, also known as the UI model (‘ui’ is Dutch for onion). This model distinguishes four layers that play a role in communication between two people: content, procedure, interaction and emotion. The model is called the onion model because it is usually represented visually as an onion. The content layer tends to dominate a conversation, but there are deeper layers. All layers were discussed with examples, but I didn’t take enough notes to include those examples in this post.

And that was the end of an excellent workshop. Angela van Son and Michael Bolton, thank you very much for being our guests this evening.

Photo impression

Book recommendations

Angela van Son recommends the following books about coaching (they’re all in Dutch):

Tien beïnvloedingsvaardigheden – Jan Bijker
Feilloos adviseren – Peter Block
Co-actief coachen – Laura Whitworth et al.
HOE-BOEK voor de Coach – Joost Crasborn & E. Buis
