In my time at a major insurance company in mid-state Illinois, we had a very interesting conversation on what it meant to be “done” with a user story. Done can mean a lot of things to a development team, anywhere from “hey, I just finished writing this code” to “we’ve turned on the feature in production.”
We created what we called a “spectrum of done” to illustrate various levels of “done-ness” (okay, enough with the “quotes”), and to help teams decide where they wanted to be when they said something was Done. Some of the highlights are:
- Code compiles on the developer's machine
- Code compiles on a continuous integration build server
- Code passes regression test suite
- Functionality reviewed by the product owner/stakeholders
- Functionality deployed into test environment and tested with mock integrations to other servers
- Functionality deployed and tested with integrations to other real servers
- Functionality deployed into production
- Functionality used in production for some period of time
I submitted, only half-jokingly, that we were really only Done with a feature when we turned the system off, because until then it was still subject to change or to someone finding a problem.
There are a lot of reasons why you want your definition of Done to be as far along this spectrum as possible, but perhaps the most important is that you want the clearest possible picture of whether anything is left to do on a feature before it can be used in production. Measuring progress by what's Done versus what's not Done only works if, when you say something is Done, the chance that work still remains is vanishingly small.
The other side of this argument is that we want regular feedback on small chunks of functionality to give us regular data points for judging progress, and the time you typically need to invest grows considerably as you move up the Done scale. Deploying to production in the typical company requires change-control procedures, and that means time and money.
The sweet spot for most folks seems to be building on a shared development server, with a robust suite of tests to assure that new functionality at least works per the tests, and that nothing that used to work has broken (always subject to the test suite being reasonable). Most folks will recognize this as Continuous Integration (although you should really read Continuous Integration isn't a Tool!).
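To make the "robust suite of tests" idea concrete, here is a minimal sketch of what a regression test looks like, using Python's standard `unittest` module. The `quote_premium` function and its behavior are hypothetical stand-ins for real application code, not anything from the project described above; the point is that the suite pins down behavior that used to work, so the CI build fails if a change breaks it.

```python
import unittest

def quote_premium(base_rate, risk_factor):
    """Hypothetical domain function standing in for real application code."""
    if base_rate < 0 or risk_factor < 0:
        raise ValueError("rates must be non-negative")
    return round(base_rate * (1 + risk_factor), 2)

class PremiumRegressionTests(unittest.TestCase):
    """Regression suite: every CI build must keep these passing."""

    def test_known_good_quote(self):
        # Pins down existing behavior so a future change can't silently alter it.
        self.assertEqual(quote_premium(100.0, 0.25), 125.0)

    def test_rejects_negative_input(self):
        # Guards an established contract: invalid input raises, not miscomputes.
        with self.assertRaises(ValueError):
            quote_premium(-1.0, 0.25)

if __name__ == "__main__":
    unittest.main()
```

Run on every commit by the CI server, a suite like this is what lets a team claim the "passes regression test suite" rung of the spectrum rather than just "compiles on my machine."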
Keith returns to consulting on 9/1/11 after a year at RedSky Technologies. He’s currently looking for his next engagement.