Wednesday, February 24, 2010

The Maintainability of Unit Tests

At speakerconf 2010, discussion repeatedly arose around the idea that unit tests hinder your ability to refactor and add new features. It's true that tests are invaluable when refactoring the internals of a class, as long as the interface doesn't change. However, when the interface does change, updating the associated tests is often the majority of the effort. Additionally, if a refactoring changes the interaction between two or more classes, the vast majority of the time is spent fixing tests for several classes.

In my experience, making the interface or interaction change often takes 15-20% of the time, while changing the associated tests takes the other 80-85%. When the effort is split that drastically, people begin to ask questions.

Should I write Unit Tests? The answer at speakerconf was: Probably, but I'm interested in hearing other options.

Ayende proposed that scenario based testing was a better solution. His examples drove home the point that he was able to make large architectural refactorings without changing any tests. Unfortunately, his tests suffered from the same problems that Integration Test advocates have been dealing with for years: Long Running Tests (20 mins to run a suite!) and Poor Defect Localization (where did things go wrong?). However, despite these limitations, he's reporting success with this strategy.

In my opinion, Martin Fowler actually answered this question correctly in the original Refactoring book.
"The key is to test the areas that you are most worried about going wrong. That way you get the most benefit for your testing effort."
It's a bit of a shame that sentence lives in Refactoring and not in every book written for developers beginning to test their applications. After years of trying to test everything, I stumbled upon that sentence while creating Refactoring: Ruby Edition. That one sentence changed my entire attitude on Unit Testing.

I still write Unit Tests, but I only focus on testing the parts that provide the most business value.

An example
Imagine you find yourself working on an insurance application for a company that stores its policies by customer SSN. Your application is likely to have several validations for customer information.

The validation that ensures an SSN is 9 numeric digits is obviously very important.

The validation that the customer name is alpha-only is probably closer to the category of "nice to have". If the alpha-only name validation is broken or removed, the application will continue to function almost entirely normally. And, the most likely problem is a typo - probably not the end of the world.

It's usually easy enough to add validations, but you don't need to test every single validation. The value of each validation should be used to determine if a test is warranted.
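To make the example concrete, here is a minimal sketch of the high-value validation in Java. The class and method names are invented for illustration; they don't come from any real insurance application.

```java
// Hypothetical validator for the SSN-keyed policy store described above.
public class SsnValidator {

    // An SSN is considered valid here only if it is exactly 9 numeric digits.
    // String.matches anchors the pattern, so "\\d{9}" rejects longer input.
    public static boolean isValid(String ssn) {
        return ssn != null && ssn.matches("\\d{9}");
    }

    public static void main(String[] args) {
        // This validation earns a test: if it breaks, policy lookups break with it.
        if (!isValid("123456789"))  throw new AssertionError("valid SSN rejected");
        if (isValid("12345678"))    throw new AssertionError("too-short SSN accepted");
        if (isValid("12345678X"))   throw new AssertionError("non-numeric SSN accepted");
        if (isValid(null))          throw new AssertionError("null SSN accepted");
        System.out.println("all SSN checks passed");
    }
}
```

By contrast, the alpha-only name validation from the paragraph above would get no such test under this approach: if it regresses, the worst case is a typo in a name, not a corrupted policy lookup.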
How do I improve the maintainability of my tests? Make them more concise.

Once you've determined you should write a test, take the time to create a concise test that can be maintained. The longer the test, the more likely it is to be ignored or misunderstood by future readers.

There are several methods for creating more concise tests. My recent work is largely in Java, so my examples are Java related. I've previously written about my preferred method for creating objects in Java Unit Tests. You can also use frameworks that focus on simplicity, such as Mockito. But, the most important aspect of creating concise tests is taking a hard look at object modeling. Removing constructor and method arguments is often the easiest way to reduce the amount of noise within a test.
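As a sketch of what removing constructor-argument noise can look like, here is a test data builder with sensible defaults, so a test mentions only the one field it actually cares about. All of the names here (`Customer`, `CustomerBuilder`) are invented for illustration; this is one common approach, not necessarily the exact technique from the object-creation post linked above.

```java
// Hypothetical domain class with a wide constructor.
class Customer {
    final String ssn;
    final String name;
    final String state;

    Customer(String ssn, String name, String state) {
        this.ssn = ssn;
        this.name = name;
        this.state = state;
    }
}

// Test data builder: defaults hide the arguments a given test doesn't care about.
class CustomerBuilder {
    private String ssn = "123456789";
    private String name = "Default Name";
    private String state = "IL";

    CustomerBuilder withSsn(String ssn)   { this.ssn = ssn; return this; }
    CustomerBuilder withName(String name) { this.name = name; return this; }

    Customer build() { return new Customer(ssn, name, state); }
}

public class ConciseTestExample {
    public static void main(String[] args) {
        // The test that cares about SSNs sets only the SSN; the builder's
        // defaults fill in everything this test considers noise.
        Customer customer = new CustomerBuilder().withSsn("987654321").build();
        if (!customer.ssn.equals("987654321"))
            throw new AssertionError("builder did not apply SSN");
        if (!customer.name.equals("Default Name"))
            throw new AssertionError("builder default not applied");
        System.out.println("builder test passed");
    }
}
```

When the `Customer` constructor later grows or shrinks, only the builder changes; the tests that didn't care about the new argument stay untouched, which is exactly the maintainability win described above.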

If you're not using Java, the advice is the same: Remove noise from your tests by improving object modeling and using frameworks that promote descriptive, concise syntax. Removing noise from tests always increases maintainability.

That's it? Yes. I find that when I only test the important aspects of an application, and I focus on removing noise from the tests that I do write, the maintainability issue is largely addressed. As a result, the pendulum swings back toward a more even split of effort between features & refactoring and updating tests.


  1. I've noticed a similar issue - testing things that simply aren't that complicated. For instance, writing should_have_many for a bog-standard association in Rails with Shoulda. Now, I'm certainly inclined to write a unit test if there's something complicated going on (a join, a weird order, etc) but not for the plain association.

    On the other hand, I've encountered BDD purists who would write a batch of tests when adding an un-validated varchar field to a model, "just in case". Ugh.

  2. While I agree that there's always an inherent amount of overhead in maintaining tests, I generally think the test fixing pain is worth it as it's straightforward work. The time you spend fixing up tests is recouped by suffering fewer regressions on a day to day basis, particularly when you're messing with code in an unfamiliar area.

    I think one of the biggest problems in automated test adoption is trying to get developers to apply the same rigour to test code as they do with production code. It's way too easy to mandate unit testing and then simply pay lip service to it (all of the pain, few of the benefits!)

    I've found that when tests are written well and insulated from change, a lot of the refactoring pain is mitigated. Again, I agree there's always going to be some pain, but the organisation & developers have a lot of influence on how much pain is involved.

  3. benkt, 4:35 PM

    Not all areas of the code are equally likely to be refactored later though - I had an attempt at discussing this in the context of startups on my blog.

    My conclusion is that you should focus your unit testing on the core business logic that's not going to change. Front-ends come and go, storage engines come and go, but if your business is e.g. online music management, then you need to have robust tests for the business logic connecting 'Tracks' to 'Artists' and 'Albums' - even though you know the front-end will get changed for a shinier model and the storage engine will get swapped out for a more scalable version etc. etc.

  4. I write functional/integration tests. A good set of those will cover most of your bases and they are easy to maintain. Then you can "Spot Unit Test" bits of code as needed.

  5. Hi Jay,

    Correct me if I'm wrong but is the implication that TDD goes away with this cycle? Or is it just that your TDD loop is much longer now because the tests driving out your code cover much more functionality?


  6. Anonymous, 9:57 AM

    Hi James,
    I still like to TDD when I write tests, but (you are correct) I don't TDD when I'm not going to have tests for what I'm working on.

    In that case I tend to do what is now called 'outside in development'. I drive the interfaces by starting with what I need and delegating to classes that have the ability to do what I need done.

    I've actually been a fan of 'outside in dev' for a long time. Here's a blog post that I wrote almost 5 years ago:

    I tend to do the same thing in my domain these days, where I drive outside in, then when I get to the 'in' that contains important logic, I write the tests.

    Is that clear?

    Cheers, Jay

  7. Testing is worth it, in my experience. The maintenance does take some time but it's usually straightforward-- plus it gives a different team member the opportunity to see how the code works firsthand.

    I've had the best results using unit tests for highly detailed work such as verifying algorithms.

    Two areas where our unit tests have been especially valuable are verifying our code during language upgrades (Ruby 1.8.6, 1.8.7, 1.9.1) and framework upgrades (Rails 1, 2, 3).

    The unit tests have turned up significant differences during upgrades. Real examples include variations in string character arrays, hash key ordering, random number generation, date/time localization, and method signature changes.

  8. Anonymous, 10:22 AM


    I used to be crazy about rspec, testing and what have you.

    From my few years of experience, here are my findings:

    - Drop RSpec, webrat, whatever and only use Test::Unit
    - Only test a few edge-case scenarios in unit tests
    - Heavily test the controllers because that's the real deal.

    I stopped using unit tests because I was finding myself having to fix failing tests just because a name was changed, and that happens at least once in a functional test and once in a unit test.

    I have been running my test suite this way, with great success.


  9. Jay, I agree with you that heavy unit testing and trying for 100% coverage is overrated; how far you take it really depends on the application. I especially see this with early-stage startups, where you have limited resources and time, and writing tests for every edge case to reach 100% coverage isn't all that valuable to the business at that stage of development.

    There are other applications that need to be more 'bullet-proof' and I'd expect to see more testing as the cost of bugs and failed business logic is worth the heavier cost of testing.

    I wrote a post last year that is related to this topic. You may find it relevant as part of the conversation you're looking for on testing:

  10. Just read a little bit about unit testing. I do not have a solid opinion about it yet, but, at first glance, it seems it isn't necessary to create tests for just EVERYTHING in the application.

  11. Hi Jay,

    I have two points to make here.

    1) everyone seems to be forgetting that TDD is largely about design, not just testing. By deciding not to unit test sections of your code you are deciding not to test drive their design, and running the risk of introducing coupling that the tests would have highlighted. Maybe that is a worthwhile risk to take, but be aware of it.

    2) I get more and more mystified by this idea that having unit tests somehow makes refactoring harder. In my experience it makes it easier. If you make a change and it breaks a lot of your unit tests, then I suspect that your unit tests are not really unit tests and you should take a look at how you write them.

    Additionally, I suspect that when people talk about 'large scale' refactorings they mean changes that span across a large number of classes. In this case, at the class level these are not refactorings, they are changes. Accordingly I test drive these changes, by changing the tests first and using them to inform my design. The problems people are reporting lead me to suspect that they abandon TDD after the initial version of the code is written.

    Lastly, if you find you are doing a lot of 'large scale' refactorings that touch a lot of classes and hence require changing a lot of tests, it may be time to reconsider your design. Why are these changes touching so many classes? Is it possible to extract some classes to encapsulate that sort of change?

    The fundamental point of TDD is that if your tests are causing you pain, it is probably an opportunity to better your design. Dealing with the pain by avoiding the test rather misses the point IMHO.

  12. Anonymous, 4:39 AM

    The notion that you should concentrate on testing those areas that bring the most business value isn't anything new. You should apply a cost-effectiveness analysis to anything you do: designing, implementing, unit testing, human testing, bugfixing, etc. When there's a feature to implement and it will take 2 days, maybe it's cheaper (for the entire company) to do it manually for the time being? Maybe you don't have to have shiny AJAXy forms and on-the-fly validations in the admin area that only your coworkers will be using? Maybe the editboxes on some form are misaligned, but is it really worth spending a couple of hours to set them right? That's what you should be asking yourself (or your managers) all the time, not only when writing tests or refactoring.

