When to Refactor
20 Jul 2012
Michael Denomy’s post on craftsmanship and refactoring got me thinking about XP and the balance of code quality, refactoring, and enabling rapid change. I’ve often struggled with this balance myself. Of course, many of us want to leave code in the best state possible. For some, it even becomes a game to find the best code representation of the solution to a problem. Lately, I’ve leaned toward moving on to new features, for two reasons. First, new features make money. Customer-facing changes have a more measurable and more certain payoff than improving the state of the code, which is typically a more forward-looking investment, especially when the improvement comes just after making a change. Second, and to elaborate on this theme, you don’t know what kind of change you’ll need to make until you need to make it. Refactoring often involves trade-offs between optimizing for different types of changes. If you invest effort in making code easy to change in a particular way, that investment is wasted if the change you actually need turns out to be a different one. Insert questionable Knuth quote about premature optimization here.
To look at it from a different angle: one of the tenets of XP is to do the simplest thing that could possibly work. This can work quite well, because it defers work you don’t need to do until you know you need to do it. However, it’s also a fine example of how the different elements of XP are interdependent. Doing the simplest possible thing now implies a need to be able to change what you’ve done when you face a more complicated problem later. If you’ve implemented a simple solution in a way that’s difficult to change, it will hinder you in moving to the new simplest thing that incorporates your more complex worldview.
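As a concrete sketch of that progression, consider an invented example (the function names and rates below are hypothetical, not from the original post): the simplest thing that could possibly work today, followed by the reshaping it needs once the world gets more complicated.

```python
# A hypothetical "simplest thing that could possibly work": the business
# has exactly one shipping rate today, so the code says exactly that.
def shipping_cost_v1(weight_kg: float) -> float:
    return weight_kg * 2.50  # invented flat per-kilogram rate

# Later, express shipping arrives. The new simplest thing has a different
# shape, and the move is only cheap if v1 was easy to reshape:
def shipping_cost_v2(weight_kg: float, express: bool = False) -> float:
    rate = 6.00 if express else 2.50  # invented rates
    return weight_kg * rate
```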
To further complicate matters, I’d suggest that certain types of refactoring optimize for a particular solution. A perfectly factored implementation of the simplest solution to your current problem may make it harder to expand to incorporate other concerns. There’s clearly a tension here, and I don’t believe there’s an easy resolution. A good programmer must make judgment calls about how much refactoring to apply, of what kind, and when, to support the business demands on the code and the types of changes that actually arrive.
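To make that tension concrete, here’s a hedged continuation of the same invented example: a refactoring that perfectly factors today’s solution and, in doing so, optimizes for one axis of change at the expense of another. The rate table and the minimum charge are assumptions for illustration only.

```python
# Perfectly factored for today's problem: every shipping method is a flat
# per-kilogram rate, so a lookup table expresses that with no duplication.
# This optimizes for one type of change: adding another flat-rate method
# is a one-line edit.
RATES = {"standard": 2.50, "express": 6.00}  # invented rates

def shipping_cost(weight_kg: float, method: str) -> float:
    return weight_kg * RATES[method]

# But the table bakes in the assumption that every method *is* a flat
# rate. If express later needs a minimum charge (a hypothetical new
# requirement), the tidy factoring no longer fits and must be partly undone:
def shipping_cost_with_minimum(weight_kg: float, method: str) -> float:
    cost = weight_kg * RATES[method]
    if method == "express":
        cost = max(cost, 10.00)  # invented minimum charge
    return cost
```

Neither version is wrong; the point is that the table-driven factoring was an investment in one kind of change, and the minimum-charge requirement turned out to be a different kind.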
The way I’ve struck this balance lately is to implement features simply, without much immediate refactoring, while making sure I don’t leave a mess as I go. Once I need to change existing code, I refactor it to support the new requirements. This sort of “just-in-time” refactoring carries a cost: by the time you get around to the refactoring, you’ve lost familiarity with the code. On the other hand, it avoids premature optimization in cases where you never need to touch the code again. This balance has been working well for me, but this discussion reaffirms the need to clean up enough to keep the code approachable before moving on to the next feature.
One way to think about this is to behave like a chef. If you observe a well-functioning kitchen, you’ll notice there’s never a mess around. Spare moments are spent cleaning work areas and tools to keep work going sustainably and smoothly. If you keep your work clean as you go, you won’t have extra impediments lying around when you need to make a bigger, restructuring change.