Last week I left you hanging. In How To Make Meaningful Estimates For Software Products I basically said that estimates don’t work for software projects. That’s still true. But the fact that estimates don’t work doesn’t mean that features don’t get completed and delivered. They do get finished; you just can’t predict with any accuracy when that will happen. So the question arises – can we get to any degree of predictability, despite that?
There are a few different ways you can approach predictability, and you can use them in combination:
- Don’t ship until it’s finished – that is, make the prediction about the quality and the scope, but don’t predict the time
- Ship on a regular basis, including only what’s finished – that is, predict the time and the quality, but not the scope
- Ship partial features – predict the time and the quality, and accept partial scope
- Ship tiny features – only ship features you can estimate reliably, which means (remember this from last week’s post) they are not interesting
And there are some mitigations for the fact that estimates don’t work. The most obvious is the one that Steve Johnson (@sjohnson717) mentioned in a comment on last week’s post:
- Estimate by comparison – “this feature seems about as big as that feature, which took us four weeks to implement”
The smaller the feature, the better this works, of course, because uncertainty grows with the value of the feature.
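To make that concrete, here’s a toy sketch of estimating by comparison. It’s only an illustration – the feature names and durations are made up, and the four-week figure just echoes the quote above:

```python
# A toy sketch of estimate-by-comparison (all data hypothetical).
# What past features *actually* took, in weeks.
history = {
    "csv-export": 4,
    "saved-searches": 2,
    "sso-login": 8,
}

def estimate_by_comparison(new_feature: str, feels_like: str) -> int:
    """Estimate a new feature by pointing at the most similar finished one."""
    actual = history[feels_like]
    print(f"{new_feature} seems about as big as {feels_like}, "
          f"which took us {actual} weeks to implement.")
    return actual

# "This feature seems about as big as that feature..."
estimate_by_comparison("pdf-export", feels_like="csv-export")  # -> 4
```

The estimate is only as trustworthy as the similarity, which is why this works best on small, boring features.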
I have more thoughts on estimates and product planning predictability in my next post.
A few thoughts on estimating. I had a conversation with someone yesterday who asked me how I worked with the engineers on estimates. My answer shocked him, I think. I wanted to expand here on what was a throwaway conversation:
- My favorite story about estimates is about the Sydney Opera House, as told by Nassim Nicholas Taleb in The Black Swan. First, you should know that construction is incredibly well understood, and for some types of projects builders can repeatedly complete them within 5% of the estimated time.
The Sydney Opera House, started in 1959, was scheduled to be completed in 1963 for $7M (Aus). Actual construction took nearly four times as long as estimated – it finished in 1973 (10 years late!) – and it cost nearly 15 times the original budget at $104M (Aus). And of course, the Opera House was only 1/3 of the original project. If builders can be that far off, simply because it’s never been done before, why should we think we can estimate software, which by definition has never been done before?
- There is a fundamental disconnect between estimates and interesting things. Interesting things are unpredictable. User stories are estimable, and therefore not interesting.
- Estimates don’t follow a normal distribution. They follow a really screwed-up distribution where the likely value is way the heck out there beyond the value you think it should be. (And very occasionally, extremely rarely, things go a lot faster than you expect.) There’s a little simulation of this after the example below.
- I prefer timeboxes, and for interesting things, we get done what we get done in the timebox. The art of product management is figuring out what to do in the timebox. Note: this works much better in software than in construction. Buildings have to obey the laws of physics, but software doesn’t. There is no such thing as a Minimum Viable Product in construction – you can’t build a fancy roof until you build the structure to support it. But you can do that in software. There’s a lot of software out there that is essentially fancy roofs floating in the air.
- Think about failure, which is so important in innovation. Failure is of course immune to estimates, by definition.
For example, let’s assume I can get a decent estimate for doing something interesting (which we know I can’t, but hang on). Then we do it. It only takes twice as long as we estimated! (That’s a great result.) Unfortunately, given reality, it’s wrong, and has to be done again. It was a failure, but it was a productive failure. We learned a lot. We didn’t get the feature to market when we expected to, but if we’d put that version into the market, it would have been bad in oh so many ways.
So we start doing it again, and mostly we have to start from scratch, but we did learn some things in version 1. We also realize we can get a little bit of version 2 out to early adopters. It’s definitely not a full feature – they have to do manual work to get the value, but they are willing because it’s so useful to them. And we learn some stuff, and we end up building version 3, instead of version 2, because we got some great feedback that makes it even better. Versions 1 and 2 are sunk costs, and they are PAINFUL, but because we did them, we have version 3, and it’s beautiful. And it only took us four times as long to get the feature out as originally estimated, which is actually a pretty good result.
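Since I claimed above that estimates don’t follow a normal distribution, here’s a quick simulation of what I mean. It’s only a sketch – the lognormal shape and its parameters are invented stand-ins for however real task durations are actually distributed – but it shows the shape of the problem:

```python
# A toy model of the screwed-up distribution of outcomes (all numbers invented).
# Heavy-tailed durations mean the typical outcome lands past the naive estimate,
# the mean lands further out, and the unlucky tail goes way, way out.
import random
import statistics

random.seed(42)
naive_estimate = 4  # weeks we said it would take

# 10,000 hypothetical outcomes for the same task, with a fat right tail.
outcomes = [random.lognormvariate(mu=1.6, sigma=0.7) for _ in range(10_000)]

cuts = statistics.quantiles(outcomes, n=20)  # cut points at 5% steps
print(f"naive estimate:     {naive_estimate} weeks")
print(f"median outcome:     {statistics.median(outcomes):.1f} weeks")
print(f"mean outcome:       {statistics.mean(outcomes):.1f} weeks")
print(f"lucky 5% finish by  {cuts[0]:.1f} weeks")   # the "very occasionally" case
print(f"unlucky 5% exceed   {cuts[-1]:.1f} weeks")  # way the heck out there
```

With these made-up parameters, the median outcome lands around five weeks against the four-week estimate, the mean higher still, and the worst 5% of outcomes out past fifteen weeks – the Opera House story in miniature.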
The title of this post might have been a little misleading. I suspect I may have created a firestorm. I can’t wait to hear what you think!
This week and last have been hard on the old cognitive capacity, and the topics I’m working on for the blog need a lot more brainpower than I’ve had available.
Or is it just that I’ve spent too much time reading really interesting articles on the Internet? Here are some good ones I’ve run across in the last week that I wanted to share, in lieu of a real post. Each of these posts is worth the time spent to read (or watch) it.
- First, Bruce McCarthy (@d8a_driven) turned me on to Jason Brett’s excellent “60-second Business Case” heuristic for creating a “strategic score” for a set of features. I described a similar scoring method in How To Prioritize. Jason and Bruce discuss it in this post on Bruce’s blog.
- @SeriousPony mentioned this great, seminal Alan Kay talk as a first step for understanding how to do user experience and user interface better. I loved the talk – it’s full of throwaway lines that become touchstone aphorisms for us latter-day practitioners, such as “Find a context that will do most of your thinking for you” (about 25 minutes in).
- One of my recent hobby horses, as you know if you’ve read the blog, is that product management – creating new products – is not a straightforward application of best practices or techniques. Coming up with new products and new features is complex, full of emergent knowledge. Therefore, I was pleased to see this article on Insight-Driven Innovation in the MIT Tech Review blog. Simply having, understanding, or analyzing a lot of data does not get you disruptive innovation. You need a moment – or multiple moments – of insight, and insight is patently NOT data-driven. “Disruptive innovations need divergent thinking combined with instinct and gut.”
- From Sarah Davanzo, “culture cartographer,” comes “The Trend of E-shaped People,” extending the concept of T-shaped people – who combine broad but relatively shallow knowledge across a range of topics (“Experience”) with deep knowledge of one or more particular topics (“Expertise”) – to include two more traits: a tendency to “Explore” and an ability to “Execute.” The result is not only four “E’s” but also an “E-shaped” graphic you can use to illustrate the idea.
- Finally, my current favorite moment each week is when I receive @katemats’ and @katestull’s Technology Leadership News (TLN) in my inbox. Every week there’s at least one article that I take immediate action on. One of my favorites was “Nine Creativity-Sparking Tips,” where I learned “your obvious is your art.” It kind of changed my life to get that understanding. (And I suspect the E-shaped people article was already mentioned in TLN at some point.) I’ve even configured RescueTime so that reading TLN counts as Highly Productive time, since I learn something I put into practice almost every week.