Monday, September 15, 2008

Gnome Days

So how do other people estimate?

I know: estimate relative size or complexity. That's all good. But how do you do it? In my experience it's not really natural. There's always a discussion first about neglecting (or the impossibility of neglecting) the time factor in the estimation process.

The idea with relative complexity estimation is - as I've understood it - to ignore time and instead focus on (for lack of a better word) the size of the task/feature/requirement/story. This means that you should stop thinking in hours or days and instead just order the stories relative to each other: this is twice as hard as that, this is about the same, this is half as hard, and so on. Henrik Kniberg's approach of Turbo Estimation is perhaps the simplest form of complexity estimation; you just divide the stories into simple, medium and hard.

This is all good. My problem with the method above is that it doesn't "feel" natural.
Every time I've initiated an estimation session with the intent of estimating complexity according to the above, it has sparked (sometimes lengthy) discussions about why it works or not.

My usual argument is that the estimators should think of time as a result of the complexity, and that the term complexity should be regarded more like "size". That is, the time is a result of the "size" of the work needed to complete a certain task/requirement/story/feature.

Another complication here is the lack of continuity in the estimation process. As you know, measuring velocity (actual vs. estimated) is a key point in Scrum for managing sprint intake, so you will want the complexity estimates to mean the same thing from one estimation session to another in order for velocity to actually have a meaning. And since we usually don't have estimation sessions more than maybe once every two or three weeks, people tend to forget the relative size of stories across sessions.
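To make the velocity point concrete, here is a minimal sketch of how velocity and sprint intake relate. The function names and the numbers are my own illustration, not from the post; the point is just that the forecast is only meaningful if the estimation unit stays stable across sessions.

```python
# Hypothetical sketch: velocity = total estimate of stories actually
# completed in a sprint; intake forecast = average velocity so far.
# (Function names and sample numbers are illustrative, not from the post.)

def velocity(completed_estimates):
    """Velocity of one sprint: the summed estimates of finished stories."""
    return sum(completed_estimates)

def forecast_intake(past_sprints):
    """Naive intake forecast: average velocity over past sprints."""
    return sum(velocity(s) for s in past_sprints) / len(past_sprints)

# Three past sprints, estimates in some unit (e.g. gnome days):
history = [[2, 1, 0.5, 3], [1, 1, 2], [0.5, 2, 2, 1]]
print(forecast_intake(history))  # average units completed per sprint
```

If the unit drifts between sessions (a "2" this month meaning something different from a "2" last month), the average above stops predicting anything, which is the continuity problem described here.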

My conclusion from the above is that sure, relative estimation of complexity is great in the absence of any other method, but it really leaves much to be desired.

Our solution now is to estimate in ideal days, but we call them "Gnome Days" - or Tomtedagar in Swedish. The term was inspired by Henrik Kniberg's book "Scrum and XP from the Trenches" (if you haven't read it yet: do! It's free and can be found here, although I suggest you buy it to support Henrik), where he describes that time estimation should be done by considering how long a task would take to complete for an optimal number of resources locked in a room and left completely undisturbed. I thought of that as a gnome (Swedish "tomte") in a basement, whom you lock up and don't let out again until the task/story/requirement is completed.
Anyway, we estimate the initial "story point" value of each story using gnome days. This is, in our experience, much more natural and still lets estimators estimate stories relative to each other. A challenge here is to avoid micromanagement and micro-estimation, so we forbid the use of hours. Instead, our smallest estimate (ever) is 0.5 gnome days. If a story is estimated at more than, say, 3-4 gnome days, it should probably be broken down into smaller stories.
