Tuesday, September 30, 2008

Version control with multiple scrum teams

Earlier this year, Henrik Kniberg wrote a very interesting article about version control and Scrum in a multi-team environment. It doesn't cover every aspect, but it is a good source of inspiration and an interesting read anyway. You'll find it here: http://www.infoq.com/articles/agile-version-control

Cross-functional teams (“Feature teams”) in game development are not straightforward

Our teams average about 5-6 people in size. Some are somewhat larger and some are smaller, but in general they are all a mix of people with varying competencies. For example, for one of our games, which is somewhat “3D intense”, the teams consist of web developers (PHP), 3D developers (C++), 3D artists and 2D artists. The idea with Scrum is that the team collectively commits to the sprint backlog and everyone helps each other out to focus on finishing (Done-Done) stuff in sprint backlog priority order.

The problem here is that the 3D artists cannot really contribute to the PHP or C++ coding, and the web developers aren’t really capable of helping with the 3D programming. So the feeling of collective commitment is weakened, and the cooperation effect is lost; what remains to help out with are things like testing, perhaps code-reviewing and just generally “wiping the floor” (removing impediments) for whoever is working on the top-priority stuff (e.g. being “Servants” to the “Kings”). It works sometimes, but it is not optimal. It would be interesting to know how others do it.

The alternative would be to split the teams up along these “hard borders” of competence, so that e.g. the artists would be in a team of their own, leaving all the programmers in another. I don’t have empirical evidence to back this up, but I have a feeling it is a choice between two less-than-optimal alternatives, and putting them all into one team is less bad than splitting them up…

Thursday, September 25, 2008

Iterative, Incremental development - Continuous refactoring

It's hard. Certainly much easier to say than to do, right? We keep failing at this. Actually, "failing" is maybe a bit harsh, so let me rephrase: we're not as good at it as I think we should be.

The problem is in how we look at the code we write and the work we do, and what values and expectations we load into the term "Done" (or "Releasable"). We want to finish what we start. Even the Definition of Done (DoD) tells us that it should be "finished". Our Definition of Done says "Releasable" (more or less). This should be your DoD too, by the way. So the team sits there with a bunch of stories to produce and knows that the DoD says it should be "releasable". A natural reaction, given every developer's pride in their work, is to focus on delivering something that is really, fully "releasable", meaning "great". Or often "perfect". When all the DoD really says (read it again if you don't believe me :-)) is "releasable".

But here's the problem: "Finished" doesn't mean "Complete".

What we fail to remember is that the Definition of Done only requires us to deliver something that is possible to release. No more and no less. It doesn't say anything about the code being in super shape, future-proof or completed-once-and-for-all-for-all-eternity-never-to-return-to-again. On the contrary: Scrum and Agile development are all about iterative, incremental work. They are about continuous refactoring. It is OK, it is even expected - even required - for us to go back to code and add new stuff and kill off or change things done in previous sprints.

Forget about trying to do everything right from the start. Try to look only as far as the sprint length - as far as your current sprint goal. That is what we need to focus on. We do what we need to now, in order to deliver what we're currently committed to - then we come back next sprint and add to and refactor what we previously did. It's OK - it is not something to avoid: it is something to strive for!

Another thing to consider is the following: isn't it great to be able to refactor your code continuously? I know from my programming days that I really loved it when I got a chance to refactor my own code. This, I think, is because I learned things when I originally wrote it - so I knew how to make it better when I rewrote it. And I think most people love the feeling of making something better.

Finally, I also want to mention something I picked up when speaking to a Scrum coach the other day: refactoring a few years ago really could be a pain, due to the poor development environments. But today most major IDEs and code-writing tools have great support for refactoring; in a matter of minutes you can make changes to tens or even hundreds of files. This is something that I will definitely look into in our organization: what support for refactoring do we have in our development environment(s), and what better tools are out there?

Monday, September 22, 2008

Shock Therapy!

I'm glad I'm not the only one with this experience... On the topic of what I wrote about doing Scrum right from the start instead of cutting corners: I learned the other day that Jeff Sutherland (one of the "inventors" of Scrum) calls this approach "Shock Therapy". There's a very interesting article on his blog about this: http://jeffsutherland.com/scrum/2008/09/shock-therapy-bootstrapping.html

Thursday, September 18, 2008

Estimating at Sprint Planning

When do you estimate? And what?

We estimate stories using Gnome Days, which becomes each story's initial "Story Point" value. I know Scrum teaches you to estimate story points as relative complexity or size, without taking time into account.

So you have a bunch of stories in your backlog, all estimated using Gnome Days. That means I have knowledge about the size of the stories in the backlog, and with my knowledge of previous actual velocity I can derive an estimate of how long those stories will actually take.
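To make that arithmetic concrete, here is a minimal sketch in Python. All the numbers are made up for illustration; your own backlog estimates (in Gnome Days) and your own measured velocity go in their place:

```python
# Illustrative only: forecast how long a backlog will take, given past velocity.
backlog_estimates = [3, 0.5, 5, 2, 1, 8]  # story points (gnome days), priority order
past_velocity = 7.5                       # points actually finished per sprint (measured)
sprint_length_weeks = 3                   # our sprint length, for a calendar estimate

total_points = sum(backlog_estimates)
sprints_needed = total_points / past_velocity

print(f"Backlog size: {total_points} points")
print(f"Forecast: about {sprints_needed:.1f} sprints "
      f"(~{sprints_needed * sprint_length_weeks:.0f} weeks)")
```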

So I go to the Sprint Planning meeting with a bunch of stories - the top-priority stories in the backlog. The team picks the ones they feel comfortable with.

Now, how much do they look at the story point values when they pick what they commit to? Do they even care about them at this point? Or do they adjust completely to them, based on knowledge of past velocity? We do something in between: the team picks a set of stories they feel comfortable with, but uses the story point sum to compare against past velocity, as a sanity check.

Next question: do you estimate the un-estimated stories at the sprint planning, or are you supposed to have that ready when you enter? If the latter, at what point do you estimate stories? Does the Product Owner estimate the stories himself? Is it done by the whole team at some other point (other than the sprint planning)? Or do you have an "estimation team" that meets regularly to estimate the most important currently unestimated stories in the backlog?

The latter is my preferred approach, until someone gives me a good argument as to why not. I like the idea of a relatively stable team of mixed competencies who estimate stories ahead of time, in priority order. That way, estimation is already done when the team enters the sprint planning, and the team can focus on picking stories, revising the estimates, breaking stories down into tasks where needed and/or sanity-checking the commitment. How do other organizations do it?

Wednesday, September 17, 2008

Start off by the book, if you're new to Scrum!

We started our agile journey a year ago. I knew that Scrum was increasingly popular and I read a lot about it (and other Agile methods). One of the great things about it was (and is) that it is so "common sense". Every description of the method(s) explains that you have to find your own way, and that Scrum, for example, is just a framework, not a ready out-of-the-box method or toolset. This is great, but it was also (to us at least) a pitfall, and it contributed largely to one of our first really big mistakes (out of many ;-))...

Why? What happened? Oh, right... I will explain. The pragmatic attitude of Scrum and the "laid-back", sort of informal tone it has when you read about it made us feel like it was all right to cut corners: "Oh, well, we don't have to do it exactly by the book - Agile and Scrum say you need to find your own way, right? An implementation and process set that suits you and your unique environment... So then it's OK for us to skip the retrospectives, really, or to not appoint a Scrum Master" [and so on]. We tended to use Scrum and Agile and their flexibility and humbleness, if you will, as an excuse to cut corners in the implementation. As a result, I think our implementation of Scrum/Agile has suffered and consequently has taken a lot longer than it otherwise could have. Don't get me wrong; the Agile approach has been meaningful and has improved our way of developing software from day one. I just feel it could have done us even more good even sooner. Many times throughout the past year we've discovered something new about Scrum that we've then introduced (or had a problem that we've had to solve), and each time it has really taken us one or several steps closer to a "real" Scrum implementation.

What I am saying is this: if you're new to Scrum and you want to introduce it in your organization or your team, do it by the book first. Don't cut corners or make tradeoffs! Set out to follow everything you read about it, 100%! Take that extra week of studying Agile and Scrum, or that extra $2000 CSM course. Do that, upfront, from day one. No, seriously. Do it.

I wish we had done that. You need to learn and fully understand the method and the thinking behind it first, before you are capable of understanding how to make tradeoffs or cut corners. Yes, Scrum is a very flexible method (framework) and you definitely need to adjust it to your organization in order for it to work fully for you. But you need to (really) understand it first, before you're capable of understanding the full implications of shortcuts.

You will find a valuable checklist to benchmark against here: Henrik Kniberg's Scrum Checklist. I wish we had known about it sooner. Check it out.

Tuesday, September 16, 2008

Backlog manager tool

Just wanted to share our "Backlog Manager" Excel file, which we use in all our teams.
I'll try to make some posts here with further explanations, information and thoughts surrounding it, but for now I'll just settle on providing the link to the file.

It should work in both Microsoft Excel 2003 and Microsoft Excel 2007. It is possible to open it in OpenOffice Calc too, but I don't think the macros work there.

NOTE! If the buttons don't work (nothing happens when you click them), you have to enable macros in the security settings. See the little notice that appears just below the menu/toolbar in Excel when you open the file.

Here's the link:
BacklogManager-v8.xls
(Updated 2009-01-23)

Credit to Henrik Kniberg for the original, which I have incorporated into this sheet. The index card generator and the index card template are entirely his work (although I refined his version a little and edited parts of his VB code).

Tool for recording Planning Poker games

When we play Planning Poker I wanted a way of recording the results for each story in a simple way. I also wanted to be able to accept the fact that we won't always reach consensus. So if we have three people playing a 3, one person playing a 2 and one person playing a 5, and all of them refuse to change their minds, what do you do? Do you count it as 3? Or 5? Or 2? After you've tried letting the extremes argue (defend) their choices and then had a couple of replays, it starts to feel silly. So, being a fan of mathematics and formulas, I wanted to be able to live with that difference in opinion and still have a clear way of counting the result. The obvious approach would be to take the average: (2+3+3+3+5)/5 = 3.2 (or, rounded to the nearest half, 3.0). It works; do it if you want. I, on the other hand, wanted the formula to tilt towards the "worst case" rather than letting the lower estimates affect it equally. So I used a formula I picked up in (I think) "Agile Estimating and Planning" by Mike Cohn: Estimate = (1*Min + 2*Avg + 3*Max)/6. The result in the example above would be: (1*2 + 2*3.2 + 3*5)/6 = 3.9. Or, rounded to the nearest half (which we always do), 4.0.


So for every round we play we record the final estimates from each player. We enter them into the formula above, and that is what we then put as the estimate or story point value.
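For the curious, here is a tiny sketch in Python of that same calculation (the Excel tool below is what we actually use; this just illustrates the formula and our round-to-nearest-half rule):

```python
# Weighted, worst-case-leaning Planning Poker estimate:
# Estimate = (1*Min + 2*Avg + 3*Max) / 6, rounded to the nearest half.

def poker_estimate(plays):
    low, high = min(plays), max(plays)
    avg = sum(plays) / len(plays)
    raw = (1 * low + 2 * avg + 3 * high) / 6
    return round(raw * 2) / 2  # round to the nearest 0.5

# The example from the text: one 2, three 3s and one 5.
print(poker_estimate([2, 3, 3, 3, 5]))  # raw value 3.9 -> 4.0
```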


I have a tool I made in Excel, which we now use. Feel free to download and use if you like:
EstimationSheetTemplate-v4.xls

Planning Poker and tasks vs stories

We've used Planning Poker for over a year now. Our experience is that it is a great and easy way of coming up with estimates. If you don't know what it is, you can read more about it here.

When we started, it seemed natural to us that the values of the Planning Poker deck represented hours: 1, 2, 3, 5, 8 hours and so on. We had a rule of thumb that said that no task/requirement/story should be larger than about 18-24 hours. If estimators felt like playing a higher card than that, it indicated that the story needed to be split up into smaller bits.

This seemed fine, especially on paper. The problem was that many sessions became extremely lengthy and we often got stuck in discussions about details and solutions. I know this is to be avoided in Planning Poker, but that's what I wanted to point out here: we made that mistake. It wasn't as easy as just saying "No, let's not talk about design details now"... We said that. Over and over :-). People felt they needed to discuss the details in order to decide between e.g. a 5 and an 8, or between an 8 and a 13... It was often possible to force a play and ask people to follow gut feeling and intuition first, but the discussions kept popping up over and over again.

Our approach to avoiding this and speeding up the sessions was to simplify: forget hours. We estimate days instead. Our smallest estimate now is 0.5 (days), and from there we follow the typical Planning Poker deck series: 1, 2, 3, 5, 8 and so on. A typical task/story takes between 0.5 and 5 days, I would say.
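As a sketch of what that convention amounts to (note: the cards above 8 and the exact cutoff of 5 days are my assumptions for illustration, not hard rules we enforce):

```python
# Day-based Planning Poker deck: 0.5 as the smallest card, then the usual
# series. The cards above 8 are assumed here for illustration.
DECK = [0.5, 1, 2, 3, 5, 8, 13, 20]

def needs_breakdown(estimate, cutoff=5):
    """Flag a story for splitting if its agreed card exceeds the cutoff."""
    if estimate not in DECK:
        raise ValueError(f"{estimate} is not a card in the deck")
    return estimate > cutoff

print(needs_breakdown(3))  # False: a typical story, 0.5-5 days
print(needs_breakdown(8))  # True: a candidate for splitting into smaller stories
```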

In my experience, estimating in hours creates a false sense of security and leads to over-confidence in the plans. One argument against using days as the estimation unit may be that we then miss the opportunity to really think through what parts/activities/tasks a certain story/task consists of, and thereby miss possibly important aspects in the estimate... Maybe so. I don't know yet... So far I think it's a good tradeoff: I trade wasted time and a false sense of security for possibly decreased accuracy... Well, I've already accepted sacrificing accuracy, given the 2-3 minutes per story and the whole idea behind Planning Poker to begin with: a fast and simple way of estimating.

My understanding of the common practice in Scrum is that you estimate two sorts of things:
1. Estimate stories in Story Points, which is a relative estimate of complexity (regardless of time)
2. Break stories down into tasks and estimate tasks in hours.

Our implementation (so far :-)) is to simply estimate stories using Gnome Days. Sometimes we have to break stories down into tasks during a sprint planning session, but certainly not always. If we do, we obviously estimate the tasks individually, but we still use days as the unit, not hours.

I'm curious to hear about other people's experience in estimating days vs hours in Scrum.

Monday, September 15, 2008

Gnome Days

So how do other people estimate?

I know: estimate relative size or complexity. That's all good. But how do you do it? In my experience it's not really natural. There's always a discussion first about neglecting (or the impossibility of neglecting) the time factor in the estimation process.

The idea with relative complexity estimation is - as I've understood it - to ignore time and instead focus on (for lack of a better word) the size of the task/feature/requirement/story. This means that you should stop thinking in hours or days and instead just rank the stories relative to each other: this is twice as hard as that, these two are about the same, this one is half as hard, and so on. Henrik Kniberg's approach of Turbo Estimation is perhaps the simplest form of complexity estimation: you just classify each story as simple, medium or hard.

This is all good. My problem with the above-mentioned method is that it doesn't "feel" natural.
Every time I've initiated an estimation session with the intent of estimating complexity as described above, it has sparked (sometimes lengthy) discussions about why it works or not.

My usual argument is that the estimators should think of time as a result of the complexity, and that the term complexity should be regarded more like "size". That is, the time is a result of the "size" of the work of completing a certain task/requirement/story/feature.

Another complication here is the lack of continuity in the estimation process. As you know, measuring velocity (actual vs estimated) is a key point in Scrum for managing sprint intake, so you want the complexity estimates to mean the same thing from one estimation session to another in order for velocity to actually have meaning. And since we usually don't have estimation sessions more than maybe once every two or three weeks, people tend to forget the relative size of stories across sessions.

My conclusion from the above is that, sure, relative estimation of complexity is great in the absence of any other method, but it leaves a lot to be desired.

Our solution now is to estimate in ideal days, but we call them "Gnome Days" - or Tomtedagar in Swedish. The term was inspired by Henrik Kniberg's book "Scrum and XP from the Trenches" (if you haven't read it yet: do! It's free and can be found here, although I suggest you buy it to support Henrik), where he describes that time estimation should be done by considering how long a task would take to complete for an optimal number of resources, locked in a room and left completely undisturbed. I thought of that as a gnome (Swedish "tomte") in a basement, whom you lock up and don't let out again until the task/story/requirement is completed.
Anyway, we estimate the initial "story point" value of each story in gnome days. In our experience this is much more natural, and it still lets estimators estimate stories relative to each other. A challenge here is to avoid micromanagement and micro-estimation, so we forbid the use of hours. Instead, our smallest estimate (ever) is 0.5 gnome days. If a story is estimated at more than e.g. 3-4 gnome days, it could probably use breaking down into smaller stories.

Scrum ftw!

Yes. Scrum ftw! Actually, Agile ftw! As I study the methods more closely I become increasingly convinced of their strengths - which lie in the simplicity. In the common sense.
I just started this blog today, so there's not much here yet, but I plan on describing ideas and sharing tools and lessons from the journey I've had the chance to be part of during the past year or so at this company.