Everyone loves watching the Olympics.
There’s the drama of human competition. National pride. The exuberance of youth. Unforgettable moments. And, of course, world records.
One of the things that makes the Olympics so compelling to watch is the possibility of someone swimming faster or jumping higher than anyone who has come before. Ever. Simple, pared-down-to-the-basics events like running, swimming and speed skating have so few variables that we can reasonably compare the performances of the athletes we’re watching to those who have come before them. We know to marvel at Usain Bolt’s 9.63-second time in the 100m dash in London because we are reminded by well-prepared Olympic broadcasters of how much faster that was than any prior Olympic performance.
But wait – isn’t this a blog on the subject of software development?
It is. I bring up Olympic records to set up a contrast. How do you know when a software development team has just turned in a world-class, amazing performance, worthy of writeups in CIO magazine and finish-line bonuses for all the amazing software engineers?
I’m sorry, dear software engineers. You can’t. Oh, if you were a member of that team, and you felt all cylinders firing like never before in your career, you can know it yourself. Feel it in your bones. But guess what? The people whose opinions matter – your bosses, your client, your product manager – have no clue that what you just accomplished was insanely great.
Why is that? Why is it so hard to recognize the difference between an incredible and a merely average performance in software development?
Two reasons, primarily: variability and complexity.
First, there’s variability. As in, no two software development projects are alike. Even among what might seem to be similar projects – say, for word processing applications – there are still differences in feature lists, underlying operating systems, tools, languages and other factors. And, of course, differences in knowledge and ability of the team members, the development, management and testing methodologies being used, and the competence of project and product management.
Then there’s complexity. Building software is no simple task. It places many concurrent demands on software developers – knowledge of APIs and frameworks, programming languages, testing tools and methodologies – all of which constantly evolve. And there is always complexity in the domain of the software itself. There are inherently complex requirements (I always think of the crazy decision tree and wild mathematics used to price an airline seat), and there are relatively simple requirements that become complex, either because they change rapidly, or because they compete or conflict with other requirements in the same software.
All these reasons contribute to something that any seasoned software developer takes as fact: Estimating is hard. Predicting even the amount of time required to build a simple feature – a sliver of an overall project – is something of a roll of the dice. We may know that when Usain Bolt lines up to run a 100m dash, he’ll come in somewhere in the neighborhood of 9.6 seconds. But we find it difficult to say that a simple software development activity will take four hours or two days. (For those who are interested in Agile, these difficulties are why we use velocity to measure a team’s speed and use it as an informed predictor of future performance.)
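For readers less familiar with the Agile term, the velocity idea mentioned above can be sketched in a few lines. This is a minimal, illustrative sketch – the function names, the three-sprint averaging window, and the sample numbers are all my own invention, not a prescribed formula – but it captures the mechanic: average what the team actually completed in recent sprints, then use that average to forecast how many sprints a backlog will take.

```python
# Illustrative sketch of velocity-based forecasting.
# All names and numbers here are invented for the example.

def velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, completed_points, window=3):
    """Rough forecast: remaining backlog divided by recent velocity."""
    return backlog_points / velocity(completed_points, window)

# A team that completed 21, 25, and 23 points in its last three sprints
# has a velocity of 23, so a 115-point backlog suggests about 5 more sprints.
history = [18, 21, 25, 23]
print(velocity(history))                # 23.0
print(sprints_remaining(115, history))  # 5.0
```

Note that this forecasts from measured results rather than from up-front guesses – which is exactly why velocity is a more defensible predictor than a gut-feel estimate made before a line of code exists.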
Okay, so it’s hard to estimate effort in software development. And it’s impossible for outsiders to know when a team is doing an incredible job. What do we take away from all this? And wasn’t this post supposed to talk about managing expectations?
Indeed. The point is this: In order to be successful as a software engineer, or as a team, we must accept that those who care most about the outcome of a project can’t possibly understand the intricacies and difficulties of executing that project. All they can do – and what they have done forever – is compare our results to our earlier promises. Actuals to estimates. Costs to budgets.
This, my friends, is why the art of managing expectations is so important.
Because estimation is so difficult in software engineering, and because it can be seen as taking precious time away from “getting real work done,” we often spend too little time and mental energy when estimating effort on software projects. When we do that, though, we short-change ourselves. We need to remember that when we put together an estimate, what we are really creating is the ruler by which we will be measured for the remainder of our project. Looking at it that way, and having been on projects where we have excelled only to receive a collective yawn from management or a client, why would we not take more care in crafting that ruler at the outset – or even fine-tuning it as our project evolves?
Those steeped in Agile methods are probably thinking at this point, “This is why we use story points and velocity, to measure real efficiency and use it as a predictor of future efficiency.” Granted. When management and clients have bought into the method, it works well. But even on Agile projects, we are often asked for a “budget number” up front, before we’ve written a line of code. We can’t always refuse this request – and we shouldn’t. It’s not unreasonable to want to know whether an application will cost $1 million or $20 million before a team is hired and set in motion. Sound business management requires investors to understand likely costs, risks, etc.
So when we do provide this sort of up-front estimate, it is important to remember that we are beginning to build our ruler. The one that will later be used to measure our success. It is important. And we must take care in its creation. Do it right, and you may be given a gold medal at the finish line. Do it wrong, and while you may have written more features per hour than any other developer in human history, you’ll likely have an unhappy client.