Posted by Keith McMillan
September 29, 2011
One of my employees recently said something to the effect of “I think people underestimate the role that luck plays in development projects.” I think he’s right, but it’s a difficult topic to nail down. I’ll give it a try, though.
When we set out to build software, there’s usually a lot of unknowns, because almost all of the time, we’re doing something we have not done before. Maybe we’ve not done it with this language, or this particular distribution of services across servers. In fact, I usually say if we’re doing something exactly the way we’ve done it before, I have to ask “Why are we doing it again?” If it wasn’t right the first time, surely it won’t be right now unless we change something…
Okay, so there are unknowns. Given those unknowns, we have two choices.
- Try to plan in exhaustive detail, doing huge amounts of research first.
- Take our best guess and roll with it.
My thesis is that it’s not possible to get enough detail to eliminate all the unknowns without actually doing the work. Add to that the uncertainty of people getting sick, quitting, business requirements changing, and the like, and the first option stops being viable at all. We simply have to take our best shot and run with it.
Taking our best guess actually covers quite a lot, but I think I can boil it down to a short, if high-level, list of assumptions:
- Components we’re getting from somewhere else work the way we would like, and are stable and reasonably bug-free
- Our staffing is consistent
- Our requirements change only to a reasonable degree, and in expected ways (I’d say “stable” but I don’t believe in requirements as static)
- Our understanding of those requirements is sufficient to get an order-of-magnitude guess at their level of effort
And that’s where the luck comes in. I’ve seen many a project go badly because one of these guesses proves to be wrong. My staff at RedSky, for instance, was barely stable in the time I was there, and that had an impact on our ability to deliver.
I’m not implying that stakeholders don’t understand that these are factors, but they sometimes behave as though we should be able to account for them. Fair enough: I’ve learned to assume the hardest interpretation possible when discussing requirements, but I’ve still been surprised. I remember one project where requirements, when discussed in detail, were consistently far more involved than they appeared on the surface, and took far longer to implement.
Some project management styles can help with some of these: adding a contingency (a fudge factor) in traditional planning, or using velocity on an agile project, can absorb some requirements instability. A good project still requires some things to go right, though.
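To make those two planning styles concrete, here’s a minimal sketch in Python. The numbers (a 30% fudge factor, a velocity of 10 points per iteration) are made up for illustration, not drawn from any real project:

```python
# Hypothetical numbers for illustration only.
raw_estimates = [5, 8, 3, 13]  # task estimates in ideal days / story points

# Traditional planning: pad the total with a contingency factor.
contingency = 0.30  # 30% fudge factor to cover the unknowns
padded_total = sum(raw_estimates) * (1 + contingency)
print(f"Padded estimate: {padded_total:.1f} days")  # 29 * 1.3 = 37.7

# Agile planning: divide total points by the team's observed velocity.
velocity = 10  # points the team actually completes per iteration
iterations = -(-sum(raw_estimates) // velocity)  # ceiling division
print(f"Iterations needed: {iterations}")  # ceil(29 / 10) = 3
```

Either way, the mechanism is the same: a measured or assumed buffer stands in for the luck we can’t plan away.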
And that’s where we can hopefully have some influence. We can create good software designs that try to isolate the factors that are out of our control, putting a framework behind a subsystem boundary, for instance, so that if we have to replace it, the amount of work is hopefully minimized. If we’re going to make any progress at all, though, at some point we have to start making decisions about which framework to use, and start trying it out. A lot of what I do is asking “and what if that doesn’t work?” and figuring out where we go from there. But sometimes not only does Plan A fail, so do Plans B and C, and there is no Plan D, and that’s when things get ugly.
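As a sketch of that kind of isolation, here’s what a subsystem boundary can look like in Python. The names (`MessageQueue`, `InMemoryQueue`, `notify_shipping`) are hypothetical, not from any real framework; the point is that callers depend only on the interface, so if Plan A’s framework fails, we write one new adapter instead of touching every caller:

```python
from abc import ABC, abstractmethod

# The boundary: the rest of the system depends only on this interface.
class MessageQueue(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: str) -> None: ...

# Adapter for the framework we picked first (Plan A).
class InMemoryQueue(MessageQueue):
    def __init__(self) -> None:
        self.messages: list[tuple[str, str]] = []

    def publish(self, topic: str, payload: str) -> None:
        self.messages.append((topic, payload))

# Callers never see the concrete framework, only the boundary.
def notify_shipping(queue: MessageQueue, order_id: str) -> None:
    queue.publish("orders", f"ship {order_id}")

queue = InMemoryQueue()
notify_shipping(queue, "1234")
print(queue.messages)  # [('orders', 'ship 1234')]
```

The design doesn’t remove the luck, but it limits the blast radius when a guess about a component turns out to be wrong.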
As I said earlier, stakeholders will admit there’s a degree of luck involved, but think that amount is pretty small. I think it’s actually an underestimated factor, and rather larger than most people will admit. We can try to minimize the amount of luck we need, but a project that’s out of luck (even the everyday kind) is usually in some pretty deep trouble.