Thursday, May 28, 2009

When Scrum meets traditional quality assurance

The content of this post has been growing on me for some time. It’s about the challenge of working with Scrum in an organization where a lot of things are still done the traditional (e.g. waterfall-based) way.

Let’s say you’re the manager of a team that develops and maintains some software system. Let’s say you’ve successfully introduced Scrum in that development team, they’re doing well, and things are progressing. But you don’t have any “Testers” on your team. Yet. Whatever testing is done is performed by the programmers themselves, in a more or less ad hoc way. Now, there’s a whole “QA Department” in your company, with resources available to you. But it is an entirely different department that is not under your control. All you can do is request and allocate one or more resources from QA – you’re not in a position to affect how they work. They are used to working according to a waterfall-style project model. The processes they follow rely on a “requirement specification” that exists ahead of time, before implementation begins, together with a detailed and approved UI design, a detailed and approved system design, and so on. So the Testers from the QA Department will not be able to assist with any testing unless you have these things in place for them to base test cases and test specifications on. What do you do?

One of the most central challenges for me right now with regard to Scrum & Agile Development is exactly that: How to continue working with Scrum in my development team and still manage to cooperate and integrate smoothly with another department that is not yet working according to Scrum or the Agile philosophy.

Is it possible to follow agile principles in only one part of the software development process?

So far, since I haven’t had any testers on my team, I have more or less been ignoring the problem and simply doing things cowboy-style. I’ve run the project as a pure Scrum project, with user stories in a Product Backlog, with a Scrum Product Owner, with velocity measurement, and so on. And – in line with Agile – we’ve worked in strictly prioritized order, so consequently we haven’t spent much time yet on the lower-priority backlog items, and they would certainly not yet be usable for formulating any “Test Cases” in the QA Department’s meaning of the word. But the time is coming when my Scrum team and their way of working will meet the QA Department and its way of working. Will the two fit?

Don’t misunderstand me! I’m really looking forward to getting those QA resources on the team, and I’m 100% confident, regardless of philosophies and project models, that quality will benefit from the addition of QA resources – no question about it. But I’m worried that the benefit may be at the expense of “quality” in the development process itself (in the sense of me having to “sacrifice” certain Agile cornerstones).

In my view, “testing” and “quality assurance” are not something that is applied on top of programming as a separate stage. Quality assurance is intertwined with the entire software lifecycle: from idea, through analysis, design and development, to integration and system testing, rollout, handover and maintenance. And since it is all intertwined within the software lifecycle, it is not possible to limit Agile philosophies to programming only. To be precise, and to answer my own question above: no, I don’t think you can be “agile” in an environment where you finalize a “Requirement Specification” at the beginning of the project and then base everything on it – plans, designs, test cases, etc.

So, my plan is that together with the coming QA resource we will try to find a way for us to meet half-way; to find a way of working that requires minimal change in the QA processes and still minimal change in the Scrum implementation. I really think this can be done.

Here’s the approach so far:

What does the QA Department do? Well, they test software. What information do they need in order to be able to test? Well, they need a test plan, and they need a set of test cases. And what information is needed in order to create the test plan? A schedule, probably. Great! I can deliver a schedule, no problem – I just have a slightly different way of creating it than traditional projects do.
And what information is needed in order to formulate test cases? “Requirements”! Could I deliver the backlog as “the requirements”? I suspect not. Darn. Why? Well, the User Stories alone are probably not detailed enough for QA to base test cases on; the level will be too high and there will be too much room for ambiguity, especially before we start working on a story. However, once we’re done with each story we’ll know more about the requirements. I don’t want to have to deliver the requirements ahead of time.
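As an aside on the schedule point above: a velocity-based schedule can be produced mechanically from the backlog size and the measured velocity. Here is a minimal sketch in Python – the function name, sprint length, and numbers are invented for illustration, not taken from any real project:

```python
from datetime import date, timedelta

def forecast_finish(remaining_points, velocity_per_sprint, sprint_length_days, start):
    """Estimate a finish date from remaining backlog size and measured velocity."""
    if velocity_per_sprint <= 0:
        raise ValueError("velocity must be positive")
    # Round up: a partially filled final sprint still takes a whole sprint.
    sprints_needed = -(-remaining_points // velocity_per_sprint)
    return start + timedelta(days=sprints_needed * sprint_length_days)

# 47 points left, 10 points per one-week sprint -> 5 sprints
print(forecast_finish(47, 10, 7, date(2009, 6, 1)))  # 2009-07-06
```

The forecast is of course only as good as the velocity measurement, which is exactly why it can be delivered “a slightly different way” than a traditional up-front plan: it gets re-derived every sprint.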

But wait. Didn’t I say that QA is an integral part of the software lifecycle? Yes I did :-). So what is Scrum? Iterative and Incremental. Can’t the “Requirement Specification” also be created Iteratively and Incrementally? And consequently; can’t the set of Test Cases (the Test Specification) also be created Iteratively and Incrementally?

So, the goal for us now is to make incremental deliveries of the “Requirement Specification” and of the “Test Specification”, in parallel with the incremental deliveries of software – sprint by sprint. We’re just getting started and are working out the practical bits. For example, we’ll include in our default Definition of Done that requirements should be formulated. Secondly, we’ll keep track of which Stories are Done-Done (meet the Definition of Done) at the end of each Sprint. The tester will use that information as input for his work. So at the end of each sprint there’s a new set of finished Stories, accompanied by requirements, that the tester can use to create a Test Specification increment. This way, the formulation of test cases runs one sprint behind the development. It might not be optimal, but considering that the tester is not allocated full-time to the development team, doesn’t sit with us, and that I’m not really attempting to reshape the entire QA department’s process all at once at this point :-), I think it’s a fair construction and a good trade-off.

So. That’s where I am right now. I’ll post an update as we progress!

I’d be glad to hear about other people’s experience about this type of situation.

10 comments:

  1. On this topic, I recommend the excellent book "Agile Testing" by Lisa Crispin and Janet Gregory (Addison-Wesley).

    ReplyDelete
  2. We have been/are in a similar situation. When each story is done, the developer has to write a little more detailed "How to Test". That, along with the "expected outcome" (which can change along the way), often suffices for the QA resource to perform her duties. On our Scrum board we have introduced "Ready for test", "In testing" and "Done/Approved" columns. When we move stories into "Ready for test", the QA resource can assign them to herself and read all the information needed (we use Jira to keep track of everything), perform the tests, and fail or approve the story (we actually call them issues, and that makes me somewhat confused at times since I'm used to stories... hehe).
    I feel that this approach has been pretty easy to adopt and does not interfere much with the workflow for the developers. One just has to get used to the fact that a story (sorry, issue) is not actually DONE just because you have completed your coding. And the QA resource feels like she's in the loop and up to speed.
    As I understand it, until just recently the QA dept. used to work very unscrumish, so it's nice to see that the transition has been so frictionless for all parties.

    ReplyDelete
  3. Very interesting.

So stories are not "Done-Done" until they have passed testing (i.e. are moved to your "Done/Approved" column)? And your velocity is lower for it, right? I.e. you don't count stories until they're "Done/Approved"?

I'd like to use this approach too, because it means that testing is really performed (by the QA resource(s)) inside each sprint, together with each story, and the story is - as you say - not considered done until it has been sufficiently tested. With my current approach, we call the story "Done-Done" when it fulfills the DoD, but the DoD doesn't include that the test cases should be written and executed; it just says that the requirements should be formulated and that the development team should have done unit tests.

    The reason why we do this and not your approach is that we have very short sprints; we have 1 week sprints. Our tester doesn't feel that's enough time to write test cases and perform the tests...

    Do you have any thoughts about this? How long are your sprints?

    ReplyDelete
  4. To me, a test plan is something a tester or a test leader writes. But it is waste unless creating the plan discovers something.

To me, there may be requirements, but the only thing that defines the system, apart from the system itself, is the test cases. So the requirement spec is waste.

User stories are not use cases and are not requirements; they are not formal enough.

Furthermore, a requirement spec produces a need to write bug reports that compare the outcome of a test case with the requirement spec. Since the latter is waste, so is the bug report. You only need to know which test case failed.

If the test competence is outside of your team, you need to hand over the result of "coding" and you get this discussion of "done-done". Both are waste. Handover requires procedures, and procedures take time, so the time between producing a bug and discovering it increases.

    IMHO, BTW. Keep it up.

    ReplyDelete
  5. Just wanted to add the URL to this very interesting article by Per on this topic:
    http://blog.crisp.se/perlundholm/2009/06/01/1243890906062.html

Thanks for your comments, Per. I appreciate it.

And as I wrote on your blog: I agree. Requirements are waste. If we can avoid writing requirements and bug reports altogether, we certainly should.

    My suggestion is only a workaround in order to manage to cooperate with a non-agile department. Once we reach a point where the entire organization is agile I'm pretty sure the concept of "requirement specification" is history. But until then, I need a way to cooperate with the QA department.

    Btw, I really like the idea of bug reports being waste too!

    ReplyDelete
  6. I actually enjoyed reading through this post. Many thanks.

    ReplyDelete
  7. In large companies, which can be a big mess, you have to use software that tracks the quality of employees' work; otherwise the situation will get worse.

    ReplyDelete
  8. Thanks for sharing such a wonderful article!

    ReplyDelete