Friday, May 30, 2008
Last year I worked on a UK mobile phone and mobile phone deals comparison site: Omio.com
When the project kicked off, we knew we wanted a beautiful interface with rich behavior. For example, filtering deals and comparing phones needed almost rich-client usability and responsiveness. At the time, the team believed HTML & jQuery were the best option for the entire website. Since this was a public-facing site, we also (obviously) had search engine indexing requirements. The site turned out beautifully in the end, but these days I wonder whether HTML & jQuery alone were the best solution.
For the last few weeks I've been working heavily with Adobe Flex 3, and I've been very pleased by how productive the framework has made me. Very often a stakeholder will ask for a feature that sounds hard to deliver, but is made trivial by some built-in component in Flex. I went into the project interested in Flex, and after a few weeks I find myself completely on board.
Flex gives me all kinds of components and effects. It relieves cross-browser pain. It gives me views that can contain objects, not just strings. It gives me those things and many others that I had considered before I gave it a shot, yet I wasn't convinced. On paper Flex had nice features, but I was missing the killer feature.
Now that I've used it, I've found what the killer feature is for me--productivity. There are plenty of things to complain about (just like every other technology), but the combined result of all the improvements is better software, faster.
Getting back to Omio, I certainly don't believe that the entire site should be done in Flex. However, the very dynamic aspects of the site would have been easier to develop had we used Flex. Off the top of my head, I would surely have built the filters and the comparison logic in Flex. The phone and deal detail pages should be done in HTML because they are easy, static, and can be indexed; the dynamic aspects of the site, however, do nothing for, if not hurt, page rank.
There are valid concerns with RIA. Neal Ford recently posted a good blog entry explaining how choosing Flex or Silverlight can lock you to a vendor. This usually isn't an issue, since you are often locked to a vendor anyway. However, on my current project we experienced a moment where we thought that Flex simply wasn't going to do what we needed and we were going to have to start over from scratch. It turned out to be a false alarm, but Neal's post really hit home for me during those few days.
There are benefits and concerns. However, if you go into the situation with your eyes open, I think RIA is the right choice far more often than most of us previously thought.
Tuesday, May 27, 2008
Testing: The Value of Test Names
Test names are glorified comments.
def test_address_service_recognizes_post_code_WC1H_0HT # Test::Unit
test "address service recognizes post code WC1H 0HT" # Dust
it "should recognize post code WC1H 0HT" # RSpec
should "recognize post code WC1H 0HT" # Shoulda
No matter what form is used, a test name is a description, a required comment.
I used to think comments were a really good thing. I always tried to think about why when I was writing a comment instead of how. Having your comments describe why was a best practice.
Then, Martin Fowler and Kent Beck showed me the light.
Don't worry, we aren't saying that people shouldn't write comments. In our olfactory analogy, comments aren't a bad smell; indeed they are a sweet smell. The reason we mention comments here is that comments often are used as a deodorant. It's surprising how often you look at thickly commented code and notice that the comments are there because the code is bad. -- Fowler & Beck, Refactoring

Fast forward a few years...
I used to think test names were a really good thing. I always tried to think about why when I was writing a test name instead of how... I think you see where this is going.
Once I recognized that test names were comments, I began to look at them differently. I wondered if all the things I disliked about comments could also be said about test names.
Just like comments, their accuracy seemed to degrade over time. What once explained why is now a distracting bit of history that no longer applies. Sure, better developers should have updated them. Whatever. The problem with test names is that they aren't executable or required. Making a mistake doesn't stop the test suite from running.
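For example, here's the kind of drift I mean, as a minimal sketch (the Order class and the discount rule are hypothetical): the name makes a claim the body no longer verifies, and the suite happily stays green.

def test_discount_is_applied_for_vip_customers
  # the body was updated, the name wasn't -- nothing enforces agreement
  order = Order.new(:total => 100)
  assert_equal 100, order.total
end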
While test names can be greatly important, they are never essential to execution. There's no guard to ensure accuracy other than conscience and diligence. One person's essential comment is another person's annoying noise, which complicates things even more. Stale test names make me uneasy, but I wonder if test names have bigger problems. For example, it would be poor form to put the why in the name instead of in the code. If "[c]omments are there because the code is bad," could the same be said of your test names and their tests?
In fact, the entire section in Refactoring on comments may apply to test names also.
Comments lead us to bad code that has all the rotten whiffs we've discussed in [Chapter 3 of Refactoring]. Our first action is to remove the bad smells by refactoring. When we're finished, we often find that the comments are superfluous. -- Fowler & Beck, Refactoring

With this in mind I began to toy with the idea that I could write tests that made test names unnecessary.
In fact, writing tests without names was fairly easy. I focused very heavily on creating a domain model that allowed me to write tests that required no explanation. Eighty percent of the time I found I could remove a test name by writing better code. Sometimes coming back to a test required a bit of time to see what was going on, but I didn't take this as a failure; instead I took the opportunity to convert the code to something more maintainable.
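For instance, a minimal sketch using the expectations framework (the AddressService class is hypothetical; the post code is from the examples above): the expected value and the code under test carry all the meaning a name would have.

Expectations do
  # no name: the expectation itself documents the behavior
  expect true do
    AddressService.new.recognizes?("WC1H 0HT")
  end
end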
If you need a comment to explain what a block of code does, try Extract Method. If the method is already extracted but you still need a comment to explain what it does, use Rename Method. If you need to state some rules about the required state of the system, use Introduce Assertion. -- Fowler & Beck, Refactoring

To be clear, I don't think you should be extracting or renaming methods in your tests; I think test names can indicate that those things are needed in the code you are testing. Conversely, Introduce Assertion could apply directly to your test. Instead of stating your assumptions in the test name, introduce an assertion that verifies an assumption. Or better yet, break the larger test up into several smaller tests.
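A rough sketch of that last point (the names here are hypothetical): instead of test_checkout_works_when_user_is_logged_in, the assumption becomes an executable assertion.

def test_checkout
  assert user.logged_in?  # Introduce Assertion: the assumption is verified, not named
  assert_equal :confirmation, checkout(user)
end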
A good time to use a comment is when you don't know what to do. In addition to describing what is going on, comments can indicate areas in which you aren't sure. A comment is a good place to say why you did something. This kind of information helps future modifiers, especially forgetful ones. -- Fowler & Beck, Refactoring

I agree with Fowler and Beck: comments are valuable when used correctly. Wrapping legacy code with a few tests may result in tests where you know the results, but you don't quite understand how they are generated. Also, higher-level tests often benefit from a brief description of why the test exists. These are situations where comments are truly valuable. However, I found I could handle those situations with a comment; they didn't require a test name.
I started sharing the idea of making test names superfluous with friends and got very mixed reviews.
John Hume asks for a test suite with examples of how I put this idea into practice. The first draft of this entry didn't include examples because the idea isn't tied to a particular framework, and I didn't want to give the impression that the framework was the important bit. However, I think John is right: the entry is better if I point to examples for those interested in putting the idea into practice.
The expectations.rubyforge.org testing framework is designed with anonymous tests as a core concept. I have used expectations on my last 3 client projects and all the open source projects I still work on. You can browse the following tests for plenty of examples.
- Expectations tests: http://expectations.rubyforge.org/svn/trunk/test/
- Validatable unit tests: http://validatable.rubyforge.org/svn/trunk/test/unit/
- Arbs tests: http://arbs.rubyforge.org/svn/trunk/test/
John also likes code folding at the test name level, which is obviously not possible without test names. Truthfully, I don't have an answer for this. If you use code folding, you'll need to take that into account.
Pat Farley asks: So when do method names get the ax?
Method names are actually an interesting comparison. The reality is that methods are already available nameless. Blocks, procs, lambdas, closures, anonymous functions, or whatever you prefer to call them are already essentially methods (or functions) that have no name.
Ola Bini probably had the best response when it came to method names.
A method selector is always necessary to refer to the method from other places. Not necessarily a name, but some kind of selector is needed. That's not true for test names, and to a degree it's interesting that the default of having test names comes from the implementation artifact of tests usually being represented as methods. Would we still have test names in all cases if we didn't start designing a test framework based around a language that cripples usage of abstractions other than methods for executable code?

Prasanna Pendse pointed out that if you use test names for anything, you'll lose that ability. The only usage we could think of was creating documentation from the names (RSpec specdoc).
Dan North and I stumbled onto the idea that test names may be more or less valuable depending on your workflow. I find that I never care nearly as much about the test names as I do about the individual tests. Dan seems to use test names to get a high-level understanding of how a class works. I'll probably explore this idea in a later post after I've matured the thoughts a bit more. For now it's sufficient to say: if you rely heavily on test names for understanding, you might not be at a point where dropping them makes sense.
Mike Mason likes test names because they can act as a boundary for what the test should do. The basic rule I always followed was that a test name can never contain an "and" or an "or". I think Mike is correct that there is a benefit to requiring people to stick to tests that can be explained in a short sentence. However, I like to think the same thing can be achieved with other self-imposed constraints, such as limiting each test to One Assertion per Test or One Expectation per Test.
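To make the comparison concrete, a quick sketch (the user class is hypothetical): the name-length constraint and the one-assertion constraint push toward the same small, focused tests.

# "test user creation and login" would need an "and" -- split it instead
def test_created_user_is_valid
  assert User.new("jay").valid?
end

def test_created_user_can_log_in
  assert User.new("jay").can_log_in?
end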
Of course, now I'm suggesting one assertion per test and no test names, so if you aren't ready to adopt both, it might not be time to adopt nameless tests.
Ben Butler-Cole provided some anonymous tests written in Haskell that were quite readable. He made the observation that "[t]he terser your language the more likely you will be able to make your tests sufficiently comprehensible." I actually find this to be key in advocating anonymous tests. The test must be easily readable and maintainable; otherwise the effort necessary to work with a test would warrant a description.
A common thought (that I agree with) is that using anonymous tests is something you'll want to try only if you have a talented team. If you're new to testing or have a fair number of junior programmers on your team, you should probably stick to conventional thinking. However, if you have a team of mostly masters and journeymen, you may find that anonymous tests result in better tests and a better domain model, which is clearly a good thing.
Thanks to Ben Butler-Cole, Chris Stevenson, Dan North, John Hume, Mike Mason, Ola Bini, Prasanna Pendse, Ricky Lui and anyone else who provided feedback on the rough draft.
Friday, May 23, 2008
Speaking: Manchester GeekNight
I'm going to be presenting on The 5Ws of DSLs at the Manchester GeekNight on June 4th. If you're in the area, please do drop by and say hello.
Tuesday, May 13, 2008
ActionScript and Essence vs Ceremony
Over two years ago I started working primarily with Ruby. Since those days I haven't written a line of C# or Java, and I've often wondered if I would be annoyed by the ceremony that both C# and Java generally require.
I've recently been working a bit with Flex. Overall, I've enjoyed working with something new, and I can definitely see the appeal of RIA in general. It's been fairly easy to get started with Flex because it feels very similar to working with HTML and Javascript.
MXML, an XML-based markup language, offers a way to build and lay out graphic user interfaces. Interactivity is achieved through the use of ActionScript, the core language of Flash Player that is based on the ECMAScript standard. --Wikipedia Flex Information

ActionScript might be based on the ECMAScript standard, but it feels a lot more like Java than Javascript.
[Disclaimer: My experience with ActionScript is very limited, so there may be a better way that I'm not aware of]
I was working with a few MXML views the other day and I ran into a situation that made me think about essence and ceremony. The following is a simplified example of what I was working on.
<?xml version="1.0"?>
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml">
  <mx:ComboBox id="typeCombo"/>
  <mx:Box id="typeDetailsBox"/>
</mx:HBox>

<!-- ~/views/OptionOne.mxml -->
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml">
  <mx:Label text="You chose option 1"/>
</mx:HBox>

<!-- ~/views/OptionTwo.mxml -->
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml">
  <mx:Label text="You chose option 2"/>
</mx:HBox>
The intent of the simplified example is to change what's inside the typeDetailsBox based on what's selected in the typeCombo.
To solve this in ActionScript I wrote code similar to the following.
public static const Option1:String = "1";
public static const Option2:String = "2";
public function init():void {
  typeCombo.dataProvider = new ArrayCollection([{label:Option1, data:Option1}, {label:Option2, data:Option2}]);
  typeCombo.addEventListener("close", function(e:Event):void { doTypeSelected(); });
  doTypeSelected();
}

public function doTypeSelected():void {
  if (typeDetailsBox.numChildren > 0) {
    typeDetailsBox.removeChildAt(0);
  }
  switch (typeCombo.selectedItem.data) {
    case Option1:
      typeDetailsBox.addChild(new OptionOne());
      break;
    case Option2:
      typeDetailsBox.addChild(new OptionTwo());
      break;
  }
}
The code is easy enough to follow, but it's much more verbose than I prefer. I like being able to pass classes around as objects, so if I could, I'd probably make the classes themselves the "data" for the drop-down. If that weren't an option, I'd prefer an eval method that let me do the following.
public function init():void {
  typeCombo.dataProvider = new ArrayCollection([{label:"1", data:"new OptionOne()"}, {label:"2", data:"new OptionTwo()"}]);
  typeCombo.addEventListener("close", function(e:Event):void { doTypeSelected(); });
  doTypeSelected();
}

public function doTypeSelected():void {
  if (typeDetailsBox.numChildren > 0) {
    typeDetailsBox.removeChildAt(0);
  }
  typeDetailsBox.addChild(eval(typeCombo.selectedItem.data));
}
Unfortunately, ActionScript seems to be a language a bit more concerned with ceremony than I am.
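For contrast, the low-ceremony version I had in mind is natural in Ruby, where classes are first-class objects that can be passed around as data (OptionOne, OptionTwo, and the view methods here are hypothetical):

# the class itself is the "data" for each drop-down entry -- no switch, no eval
OPTIONS = { "1" => OptionOne, "2" => OptionTwo }

def type_selected
  type_details_box.replace(OPTIONS[type_combo.selection].new)
end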
Don't read this as a dismissal of Flex or ActionScript. Whether or not to use a language is a very complicated question, and I wouldn't ever base my answer on a switch statement or the ability to eval. Like I said before, working with Flex has been a largely enjoyable experience, but, I must admit, after working 7 years with high-ceremony languages and the past 2 with a low-ceremony language, I know which I prefer.
Monday, May 12, 2008
Testing: Duplicate Code in Your Tests
Luke Kanies recently left a comment on Using Stubs to Capture Test Essence:
Now if you would only see the wisdom of applying DRY to your whole code base, rather than just the functional bit. :)

Sadly, DRY has become a philosophy that is dogmatically and blindly applied to every aspect of programming. I'm a big fan of firing silver bullets, but if you go that path your number one priority needs to be finding where a solution does not fit. In fact, those who have applied DRY excessively have found that there are situations where DRY may not be advantageous.
Three of the points from the Wikipedia entry relate to test setup code.
In some small-scale contexts, the effort required to design around DRY may be far greater than the effort to maintain two separate copies of the data.
There's no question that designing around DRY for tests adds additional effort. First of all, there's no reason that tests need to run in the context of an object; however, the most common way to use a setup is to create instance variables and reuse those instance variables within a test. The necessity of instance variables creates the requirement that a test must be run within a new object instance for each test.
Ensuring that a setup method creates only the objects that are necessary for each test adds complexity. If the tests don't all require the same data, but your setup creates everything for every test, digesting that code is painful and unnecessary. Conversely, if you strictly adhere to using setup only for objects used by all tests, you quickly limit the number of tests you can put within one test case. You can break the test case into multiple test cases with different setup methods, but that adds complexity. Consider the situation where you need the foo instance variable for tests 1 and 2 and the bar instance variable for tests 2 and 3. There's no straightforward way to create foo and bar in a DRY way using a setup method.
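To make the foo/bar situation concrete (Foo and Bar are hypothetical), the best a single setup can do is create something one of the tests doesn't need:

def setup
  @foo = Foo.new  # needed by tests 1 and 2 only
  @bar = Bar.new  # needed by tests 2 and 3 only
end

def test_one;   assert @foo.valid?;      end  # @bar created for nothing
def test_two;   assert @foo.uses?(@bar); end
def test_three; assert @bar.valid?;      end  # @foo created for nothing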
Of course, there are also readability impacts to consider. When a test that relies on setup and teardown breaks, I have to look in three places to get the full picture of what's going on. When a test is broken I want to fix it as quickly as possible, so I prefer looking in one place.
The crux of the issue is whether a test case is one context or the individual tests are multiple small-scale contexts. While a test case is conceptually one object, we rarely concern ourselves with a test case as a whole. Instead, testing is very much about many individual executable pieces of code. In fact, it's a testing anti-pattern to have any test depend on other tests. We write tests in isolation, fix tests in isolation, and often run tests in isolation; therefore, I consider them to be many small independent entities that do completely isolated jobs. The results of those jobs are aggregated in the end, but ultimately the different entities never (or should never) interact. Each test is and runs in its own context.
Imposing standards aimed at strict adherence to DRY could stifle community involvement in contexts where it is highly valued, such as a wiki.
Tests are often written by one, but maintained by many. Any barrier to readable, reliable, and performant tests should be removed as quickly as possible. As I've previously discussed, abstracting behavior to 3 different places reduces readability, which can reduce collaboration. Additionally, setup methods can cause unexpected behavior for tests that are written by an author other than the creator of the setup method. This results in less reliable tests, another obvious negative.
Human-readable documentation (from code comments to printed manuals) are typically a restatement of something in the code with elaboration and explanation for those who do not have the ability or time to read and internalize the code...

While tests aren't code comments or printed manuals, they are very much (or should be) human-readable documentation. They are also restatements of something in the code with elaboration and explanation. In fact, the setup and teardown logic is usually nothing more than elaboration and explanation. The idea of setup and teardown is to put the system into a known state before executing an assertion. While you should have the ability to understand setup and teardown, you don't have the time. Sure, you can make the time, but you shouldn't need to.
The problem is that it's much easier to understand how to apply DRY than it is to understand why.
If your goal is to create a readable, reliable, and performant test suite (which it should be) then there are better ways to achieve that goal than by blindly applying DRY.
Where there's smoke, pour gasoline. --Scott Conley

Duplicate code is a smell. Setup and teardown are deodorant, but they don't fix the underlying issue. Using a setup method is basically like hiding your junk in the closet. There are better ways.
Pull the data and the behavior out of the setup and teardown methods and move it to the tests, then take a hard look.
Unimportant behavior specification
Setup methods often mock various interactions; however, the ultimate goal of the interactions is a single result. For example, you may have designed a Gateway class that takes a message, builds a request, sends the request, and parses the results of the request. A poorly designed Gateway class will require you to stub the request building and the request sending. However, a well-designed Gateway class will allow you to stub two methods at most and verify the behavior of the parse method in isolation. The poorly designed Gateway class may require a setup method; the well-designed Gateway class will have a simple test that stubs a method or two in a straightforward manner and then focuses on the essence of the behavior under test. If your domain model requires you to specify behavior that has nothing to do with what you are testing, don't specify that behavior in a setup method; remove the need to specify the behavior.
Loosely couple objects
I've always disliked the ObjectMother pattern. ObjectMother is designed to build complex hierarchies of objects. If you need an ObjectMother to test easily, you probably need to take another look at your domain model. Loosely coupled objects can be created without complicated graphs of dependencies. Loosely coupled objects don't require setting up multiple collaborators. Fewer collaborators (or dependencies) result in tests that don't rely on setup methods to create various collaborators.
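As a sketch of the difference (the Subscription domain here is hypothetical): a loosely coupled object can be built inline with only the data the test verifies.

# ObjectMother style: the test leans on a factory that wires up a whole graph
subscription = ObjectMother.subscription_with_customer_account_and_plan

# loosely coupled style: create exactly what the assertion needs
subscription = Subscription.new(:plan_name => "basic")
assert_equal "basic", subscription.plan_name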
The idea is to write tests that only specify what's necessary and verify one thing at a time. If you find that you need to specify things that are unrelated to what you are trying to verify, change the domain model so that the unnecessary objects are no longer required.
Solve problems globally or locally, but not between
Sometimes there are things that need to happen before and after every test. For example, running each test in a transaction and rolling back after the test is run is a good thing. You could add this behavior at a test case level, but it's obviously beneficial to apply this behavior to every test in the entire test suite.
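A minimal sketch of the transactional approach, assuming ActiveRecord's connection API: a shared base class opens a transaction before each test and rolls it back afterwards.

class TransactionalTestCase < Test::Unit::TestCase
  def setup
    ActiveRecord::Base.connection.begin_db_transaction
  end

  def teardown
    # undo everything the test wrote; no test leaves data behind
    ActiveRecord::Base.connection.rollback_db_transaction
  end
end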
There are other cases where it can make sense to solve issues on a global level. For example, I once worked on a codebase that relied heavily on localization. Some tests needed to verify strings that were specified in the localization files. For those tests we originally specified what localization file to use (in each test), but as we used this feature more often we decided that it made sense to set the localization file for the entire test suite.
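In that codebase the suite-wide setting amounted to a single line run before any test (the Localization API shown is hypothetical):

# test_helper.rb, loaded once before the suite runs
Localization.use_file("config/localization/test.yml")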
Putting everything within a test is the best solution for increasing understanding. However, applying behavior to an entire test suite also aids in comprehension. I would expect any team member to understand that reference data is loaded once before the test suite is run, each test is run in a transaction, and localization uses the test localization file.
Understanding the behavior of an entire test suite is valuable and understanding the behavior of a test is valuable, but understanding a test case provides almost no benefit. Rarely do you run a single test case, rarely does an entire test case break, and rarely do you edit an entire test case. Since your primary concerns are the test suite and the tests themselves, I prefer to work on those levels.
I wouldn't go too far with the global behavior approach, and I definitely wouldn't start moving conditional logic into global behavior, but applied judiciously this can be a very valuable technique.
Bringing it all together
The core of the idea is that it should be easy to understand a test. Specifying everything within a test makes understanding it easy; however, that solution isn't always feasible. Setup and teardown are not the solution to this problem. Luckily the techniques above can be used to make specifying everything within a test a realistic and valuable solution.
Tuesday, May 06, 2008
Using Stubs to Capture Test Essence
Back in January of 2006 I wrote about Using Stubs and Mocks to Convey Intent. In those days I was working mainly with C# and NMock, and the simplest way to create a stub was to let ReSharper generate one based on an interface. This introduced a bit more effort when first generating the stub and added complexity around where the stubs should live and how they should be used. Despite the additional effort, I still felt that using them was a net gain.
Things are very different in the Ruby world. While working in C# I used Dependency Injection religiously; in Ruby I can simply stub the new method of a dependency. C# had hurdles that made stubbing controversial; using stubs in Ruby tests is basically seamless thanks to Mocha. Mocha makes it easy to Replace Collaborators with Stubs, but you shouldn't stop there--you should also Replace Mocks with Stubs.
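For example, a minimal Mocha sketch (the Gateway dependency is hypothetical): stubbing new replaces the collaborator without any injection plumbing.

# any code that calls Gateway.new during this test receives the stub
Gateway.stubs(:new).returns(stub(:process => true))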
Replacing Mocks with Stubs is beneficial for several reasons. Stubs are more concise. One of the largest problems with tests is that they can be overwhelming the first time you look at them. Reducing a 3 line mock definition to
stub(:one => 1, :two => 2)
is almost always a readability win.

Stubs are great, but you may need to use a mock to verify behavior. However, you don't need to verify several behaviors in one test. Using a single mock to verify specific behavior and stubbing all other collaborations is an easy way to create tests that focus on essence. Following the One Expectation per Test suggestion often results in using one or more stubs... and more robust tests.
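To see the conciseness win, compare a sketch of the two styles (the collaborator and its methods are hypothetical):

# three lines of mock definition...
calculator = mock
calculator.expects(:one).returns(1)
calculator.expects(:two).returns(2)

# ...versus one stub carrying the same values
calculator = stub(:one => 1, :two => 2)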
Using stubs is a good first step towards concise and relevant test code; however, you can take stubs a step further. Mocha defines a
stub_everything
method that creates an object which returns nil for any message it doesn't understand. These stubs are fantastic for dependencies that are used several times when you are only interested in specific interactions; the interactions that are unimportant for the current test no longer need to be defined.

Stubs can be even more valuable if you are interested in verifying a single interaction but are completely uninterested in all other interactions. In that case it's possible to define a
stub_everything
and set an expectation on it. Consider the following sample code.

class MessageService
  def self.deliver(message)
    message.from = current_user
    message.timestamp
    message.sent = Gateway.process(message)
  end
end
def test_the_message_is_sent
  Gateway.stubs(:process).returns(true)
  message = stub_everything
  message.expects(:sent=).with(true)
  MessageService.deliver(message)
end
Since the expects method is defined on Object, you can set expectations on concrete objects, mocks, and stubs. This results in a lot of possible combinations for defining concise, behavior-based tests.

Lastly, stubbing can help with test essence by allowing you to stub methods you don't care about. The following example defines a gateway that creates a message, sends the message, and parses the response.
class MessageGateway
  def process(message_text)
    response = post(create_request(message_text))
    parse_response(response)
  end

  def post(message)
    # ...
  end

  def create_request(message_text)
    # ...
  end

  def parse_response(response)
    # ...
  end
end
The MessageGateway#process method is the only method that would be used by clients. In fact, it probably makes sense to make post, create_request, and parse_response private. While it's possible to test private methods, I prefer to create tests that verify what I'm concerned with and stub what I don't care about. Using partial stubbing I can always test through the public interface.
def test_create_request
  gateway = MessageGateway.new
  gateway.expects(:post).with("<text>hello world</text>")
  gateway.stubs(:parse_response)
  gateway.process("hello world")
end

def test_post
  gateway = MessageGateway.new
  gateway.stubs(:create_request).returns("<text>hello world</text>")
  gateway.expects(:parse_response).with("<status>success</status>")
  gateway.process("")
end

def test_parse_response
  gateway = MessageGateway.new
  gateway.stubs(:create_request)
  gateway.stubs(:post).returns("<status>success</status>")
  assert_equal true, gateway.process("")
end
The combination of partial stubbing and defining small methods results in highly focused tests that can independently verify behavior and avoid cascading failures.
Mocha makes it as easy to define a mock as it is to define a stub, but that doesn't mean you should always prefer mocks. In fact, I generally prefer stubs and use mocks when necessary.