To have an effective test suite, you must have a maintainable test suite. Only if the suite is maintainable will all members of the team take ownership and ensure the tests keep doing their job.
Far too often I see the following behavior:
- Run test suite before checking in.
- A test for part of the system you are not working on fails.
- Find the failing test, but don't take the time to understand it.
- Determine from the failure what the issue is and change a value.
- Move on because the test is now passing, never knowing if the test still verifies what was originally intended.
You can claim that the above scenario only happens with weak developers. You'd be wrong. I work with some of the best developers in the industry, and it still happens.
I agree that part of the blame is on the developer who doesn't take the time to understand the test; however, more of the blame is on the code's original author, who didn't give the new developer a fighting chance.
The original author didn't ask the right question: How can I make this code as maintainable as possible? Maintainable is the key word. Maintainable doesn't always mean DRY, abstracted, or object-oriented. When it comes to tests, maintainability means asking: When the test breaks, what can it contain that will make it easiest to fix?
The original steps matter to this entry because they show the most common use of tests: You've written or changed code and caused unexpected test failures. In this scenario your workflow is simple: find the broken test and make it pass. Therefore, when you get to the test, which of the following would you prefer to find?
test "when attribute is not nil or empty, then valid is true" do
validation = Validatable::ValidatesPresenceOf.new stub, :name
assert_equal true, validation.valid?(stub(:name=>"book"))
end
test "when attribute is not nil or empty, then valid is true" do
assert_equal true, @validation.valid?(@stub)
end
The first test contains all the information necessary to understand the intent and implementation of the test.
On the other hand, the second test pulled the creation of the validation instance and the stub into a setup method. While the setup method makes for a DRYer test, it reduces your ability to maintain this one. If the test file happens to be long, the problem is magnified: it may not be immediately obvious that a setup method is being called at all. Even worse, if you don't notice that the setup method exists, you may see behavior you can't explain.
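For illustration, the hidden setup behind the second test might look something like this. This is a hypothetical reconstruction mirroring the first test; nothing in the second test itself tells you these lines exist:

def setup
  # hypothetical reconstruction -- the values mirror the first test
  @stub = stub(:name=>"book")
  @validation = Validatable::ValidatesPresenceOf.new stub, :name
end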
Another problem with setup is that nothing fails when unnecessary behavior creeps into the setup method. For example, the original author of a test file creates 3 test methods that share common setup. He moves this behavior into the setup method and checks in. The next author needs to write tests that use some of the shared behavior, but not all of it. The correct thing to do is move the uncommon behavior back into the original 3 tests; however, in practice I've never seen this happen. Again, you can argue that it's the developer's fault; but do you remember ever doing what I just described?
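Here's a sketch of how that creep looks, reusing the hypothetical setup from above. The new test needs only the stub, yet @validation is still built before it runs, and nothing ever complains:

def setup
  # shared by the original three tests
  @stub = stub(:name=>"book")
  @validation = Validatable::ValidatesPresenceOf.new stub, :name
end

# a later test that needs only @stub; the validation is now
# created for a test that never uses it, and no test fails
test "stub reports its name" do
  assert_equal "book", @stub.name
end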
Of course, the above scenario assumes you even know that a setup method is executing. It's entirely possible to open a test file that's over 80 lines long and add a new test without noticing that a setup method runs at all. Now, since your test doesn't take advantage of the setup, the setup code is no longer common to all tests, and maintainability has again been lost.
Worse, I once opened a test file that had a setup method and several tests that depended on it. The same file contained another group of tests that used a private method to initialize a few objects. The logic in the setup method and in the private method was almost exactly the same. Clearly, the two authors never noticed each other's implementations. Ironically, in their attempt to be DRY they created a greatly inferior alternative: duplicate code that's not easy to follow.
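To make that concrete, a sketch (hypothetical, echoing the setup shown earlier):

# a second author's private helper, sitting in the same file as a
# setup method that already builds nearly identical objects
def create_validation
  @stub = stub(:name=>"book")
  @validation = Validatable::ValidatesPresenceOf.new stub, :name
end

Neither author's tests break, so the duplication sits unnoticed until someone has to change both copies.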
What does all this mean? It means that when writing tests you should think about them in isolation. They run in isolation, and chances are that when they break (if you write robust tests) they will be fixed in isolation; therefore, abstracting anything that is 'common' actually hurts maintainability. With every test, pretend it's the first and only test you are writing in that class. If that were true, it would be hard to justify those setup or private helper methods.
A question that always comes up: What if my test requires a large amount of setup code? That is often a clear sign that the domain model of the application is broken. Another explanation I've seen: much of that setup isn't actually required if you stick to testing one thing at a time, which is another reason that one assertion per test is valuable.
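As a sketch, here are hypothetical variations on the earlier example. With one assertion per test, each test builds only the two objects its single assertion needs, so there's nothing left over to push into a setup method:

test "when attribute is nil, then valid is false" do
  validation = Validatable::ValidatesPresenceOf.new stub, :name
  assert_equal false, validation.valid?(stub(:name=>nil))
end

test "when attribute is empty, then valid is false" do
  validation = Validatable::ValidatesPresenceOf.new stub, :name
  assert_equal false, validation.valid?(stub(:name=>""))
end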
Another question that always comes up: What about writing DRY code? For me, the value of DRY has never been typing something only once; it's that when I want to make a change, I only have to make it in one place. Since the duplication I'm advocating lives within a single test file, you still only have to make the change in one place: that file. You may need to do a find and replace instead of changing one helper method, but the people stuck maintaining your tests will be very happy that you did.