Stubble: Stub Testing for HTTP 23 Jul 2012

I’m pleased to introduce the first release of a new testing library called Stubble. Stubble was created to help developers test their applications’ communication with other applications over HTTP. It provides an easy way to set up an embeddable test server that will respond to requests you specify with associated responses. Stubble was written in Scala and runs on the JVM. Adding a request/response interaction looks like this:

// Respond with 200 OK and the body "content"...
val response = Response(HttpResponseStatus.OK, Some("content"))
// ...whenever a request's path matches "/"
server.addInteraction(Interaction(List(PathCondition("/")), response))

Background

To understand Stubble a little better, let’s rewind a bit. Why is this useful? Many of the applications I’ve come across communicate with other processes via HTTP. They may need to call external web services to integrate data from other systems, or they may need to make HTTP requests to initiate actions such as sending emails on the application’s behalf. At Cyrus Innovation we tend to have a culture of testing everything, but testing this part of our applications has always been difficult. In order to make sure our software correctly handles different conditions that arise in these interactions, we are faced with a couple of undesirable options.

The first, perhaps most obvious, approach is to set up an instance of the external system with test data suitable to reproduce some important interactions. (For the purposes of this article, let's call the system under test "internal" and anything outside it, particularly the HTTP server(s) with which it needs to interact, "external".) Depending on the degree of control you have over the other software, this may or may not be possible, and there are two big problems with it. First, taking the external system through the states necessary to cover important cases may require driving it through a user interface, or may not be possible at all; at best, it is likely to be slow and complicated to set up. Second, and worst of all, this creates an external dependency in your software's build. If the server goes down, is reconfigured, or its version doesn't match the one expected by your software revision, your build fails and you're left to figure out what happened. Was it a new bug in your software, or just a glitch in the other system? The main upside to this approach is that it can provide fairly high fidelity to production use. Because you are using an actual instance of the external software, communicating between machines over an actual network, the test system is likely to be bug-for-bug compatible with the production system.

Another approach is to write a custom test system to simulate the external system. This often involves setting up an HTTP server and programming it to respond as the external system would. This provides a high level of flexibility, as you can tailor the test server exactly to the needs of the tests you are writing, but it can be tedious to integrate with your build and involves a lot of low-level detail in managing HTTP interactions. After trying variations on this theme for several systems, I asked myself, "Why doesn't something already exist that just lets me tell it what response to return when the request looks a certain way?" As sometimes happens in cases like this, my solution was to build the thing I was looking for myself.

Stubble takes care of the repeated, low-level details of this approach by providing a simple API for setting up HTTP responses that will be returned when HTTP requests are made that meet specified conditions. If you’ve ever used a stub or mock library such as Mockito for specifying interactions between objects, using Stubble to specify interactions with an HTTP server should be familiar.

The Stubble Approach

An interaction in Stubble is a list of request conditions and a response. When Stubble receives an incoming request, it runs through the interactions you’ve registered with it and returns the response of the first matching interaction. Stubble also comes with Maven integration to make it easy to run during integration testing in your build. Using Stubble can be as simple as starting the server in your test and adding an interaction to it, but the build integration provides a way to start up Stubble before your application starts for integration testing, so that any requests made throughout your application’s lifecycle can be handled by Stubble. A common example of this is data that’s loaded at startup. Using Stubble, you can provide enough boilerplate data to get the system running, then make calls to refresh or change data when you’ve set up new interactions inside a test.
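To make the first-match semantics concrete, here is a simplified model of the idea in plain Scala. This is a sketch of the behavior described above, not Stubble's actual internals; the type and method names here (`StubModel`, `respond`, and so on) are illustrative, not Stubble's real API.

```scala
// A simplified model of first-match lookup: each interaction pairs a list
// of request conditions with a canned response, and the first interaction
// whose conditions all hold wins.
case class Request(path: String)
case class Response(status: Int, body: Option[String])
case class Interaction(conditions: List[Request => Boolean], response: Response)

class StubModel {
  private var interactions: List[Interaction] = Nil

  def addInteraction(i: Interaction): Unit =
    interactions = interactions :+ i

  // Walk the registered interactions in order; return the response of the
  // first one whose conditions all match, or a 404 if nothing matches.
  def respond(req: Request): Response =
    interactions
      .find(_.conditions.forall(c => c(req)))
      .map(_.response)
      .getOrElse(Response(404, None))
}

object Demo extends App {
  val stub = new StubModel
  stub.addInteraction(Interaction(List(_.path == "/users"), Response(200, Some("[]"))))
  stub.addInteraction(Interaction(List(_ => true), Response(500, Some("oops"))))

  println(stub.respond(Request("/users")).status) // 200: first match wins
  println(stub.respond(Request("/other")).status) // 500: catch-all fallback
}
```

Because matching is first-match, order matters: a broad catch-all interaction registered first would shadow everything after it, so more specific interactions belong earlier in the list.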

A great way to get started with Stubble is to read the introductory material at the project page on GitHub. I hope you’ll check it out, and feel free to help out by logging any issues you have or submitting pull requests (though I invite you to send a note or open an issue before writing too much code).


When to Refactor 20 Jul 2012

Michael Denomy’s post on craftsmanship and refactoring got me thinking about XP and the balance of code quality, refactoring, and enabling rapid change. I’ve often struggled with this balance myself. Of course, many of us want to leave code in the best state possible. For some, it even becomes like a game to find the best code representation of the solution to a problem. Lately, I’ve leaned toward moving on to new features, for two reasons. First, new features make money: customer-facing changes have a more measurable and more certain payoff than improving the state of the code, which is typically a more forward-looking investment, especially when done just after making a change. Second, and to elaborate on this theme, you don’t know what kind of change you’ll need to make until you need to make it. Refactoring often involves trade-offs between optimizing for different types of changes. If you invest effort in making code easy to change in a particular way, your investment may be wasted if it turns out the change you need to make is different. Insert questionable Knuth quote about premature optimization here.

To look at it from a different angle: one of the tenets of XP is to do the simplest thing possible. This can work quite well, because it delays doing work you don’t need to do until you know you need to do it. However, this is a fine example of how the different elements of XP are interdependent. Doing the simplest possible thing now implies a need to be able to change what you’ve done when you face a more complicated problem later. If your code has implemented a simple solution, but done so in a way that’s difficult to change, it will hinder you in moving to the new simplest thing possible that incorporates your more complex world-view.

To further complicate matters, I’d suggest that certain types of refactoring optimize for a particular solution. A perfectly factored implementation of the simplest solution to your current problem may make it more difficult to expand to incorporate other concerns. There’s clearly a tension here, and I don’t believe there’s an easy solution. A good programmer must make judgment calls on how much and what type of refactoring to apply at what times, to support the business demands on his code and the types of changes he encounters.

The way I’ve struck this balance lately is to try to implement features simply without refactoring much right away, while making sure I don’t leave a mess as I go. Once I need to make a change to existing code, I refactor to support the new requirements. This sort of “just-in-time” refactoring carries a cost: by the time you get around to refactoring, you may have lost familiarity with the code. But it avoids premature optimization in cases where you never need to change the code again. This balance has been working well for me, but this discussion reaffirms for me the need to make sure I’ve cleaned up enough to make the code approachable before I move on to the next feature.

One way to think about this is to behave like a chef. If you observe a well-functioning kitchen, you’ll notice there’s never a mess around. Spare moments are spent cleaning work areas and tools to keep work going sustainably and smoothly. If you keep your work clean as you go, you won’t have extra impediments lying around when you need to make a bigger, restructuring change.


Benefits of TDD: Problem Solving 18 Jul 2012

Sometimes those of us who advocate Test Driven Development have trouble clearly enumerating the reasons we find it valuable. “Because it gives us quick feedback,” we’ll say, or “because it helps us organize our code better.” We can usually come up with a couple of good reasons, but a compelling or comprehensive description can be elusive. The source of this trouble may lie in the interacting and cumulative nature of the many benefits of TDD. To put this another way, I practice TDD not for one or two specific reasons, but for many subtle positive effects, some of which may come into play to varying degrees in different situations. I think it’s helpful to be aware of the benefits that come from TDD. In particular, this article by Dan North (by its omission of this aspect) made me think about one of the benefits I find from TDD: support for breaking down a problem into smaller parts.

If you think back to your grade school math classes, you may remember trying to solve word problems. As the complexity of the problems increased, you might have broken each problem down into smaller problems, solving each directly, then combined the parts into a complete solution. Fast forward to professional software development, which often involves solving complex and deeply interwoven problems. TDD can provide the structure for just this type of breakdown.

A characteristic often found among the smart and talented folk who occupy the field of programming is an ability to hold many parts of a complex problem in their heads, reconciling them through sheer intellect. This is a wonderful characteristic to possess, but programs that depend on it can be difficult to understand and maintain. For those of us with more modest mental faculties, who enjoy simpler expressions of complex problems, or who desire to make code accessible to others without the context built up to develop the code, it can be helpful to break problems down into constituents of the whole.

This is where TDD comes in. By fixing the solution to each sub-problem with a test, you free your mind to focus on other parts of the problem. If you make a change that subtly invalidates one of the conditions you’ve established, the test you’ve written will alert you without your having to run through all the edge cases in your head. Once you have determined the correct conditions for part of your problem, you can rely on the computer to ensure those conditions remain valid and devote your mental effort fully to address other parts of your program or larger issues of composing the solution. In essence, this is applying the core value of computing to the task of applying the core value of computing. While we leverage computing power to reduce work for general problems, shouldn’t we harness the same power for our own tasks?
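Here is a small, hypothetical example of the breakdown described above (the domain, names, and business rules are mine, invented for illustration): computing an order total splits into subtotal, discount, and tax sub-problems, each pinned by its own assertion, so you can work on composing the pieces without re-deriving each part in your head.

```scala
// A toy problem broken into sub-problems, each fixed by its own check.
// Once subtotal and discount behave as asserted, you can focus on tax,
// or on composing the pieces, without re-verifying them mentally.
object OrderMath {
  def subtotal(prices: List[BigDecimal]): BigDecimal = prices.sum

  // 10% off subtotals of 100 or more (an invented business rule)
  def discount(sub: BigDecimal): BigDecimal =
    if (sub >= 100) sub * BigDecimal("0.10") else BigDecimal(0)

  def tax(amount: BigDecimal, rate: BigDecimal): BigDecimal = amount * rate

  def total(prices: List[BigDecimal], taxRate: BigDecimal): BigDecimal = {
    val sub = subtotal(prices)
    val discounted = sub - discount(sub)
    discounted + tax(discounted, taxRate)
  }
}

object OrderMathChecks extends App {
  import OrderMath._
  // Each sub-problem gets its own assertion...
  assert(subtotal(List(BigDecimal(40), BigDecimal(60))) == BigDecimal(100))
  assert(discount(BigDecimal(100)) == BigDecimal(10))
  assert(discount(BigDecimal(99)) == BigDecimal(0))
  // ...so the composed solution can be checked without mental re-derivation:
  // subtotal 100, minus discount 10, plus 5% tax on 90 = 94.50
  assert(total(List(BigDecimal(40), BigDecimal(60)), BigDecimal("0.05")) == BigDecimal("94.50"))
  println("all checks pass")
}
```

In a real project these assertions would live in a test framework rather than an `App`, but the effect is the same: each pinned sub-problem is one less thing you have to keep in your head while you change the rest.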

What’s your favorite benefit of TDD?
