Sunday, February 18, 2007

Rails: Presenters - An additional layer alternative.

The first time I wrote about Presenters was back in September. The title of that article contains a question mark for a reason: at the time it was a brand new idea that I wanted to share, but not endorse as "a good thing".

Time passed, a new project started for me, and Jamis wrote Skinny Controller, Fat Model, Moving associated creations to the model, and a blog entry that mentioned (in the comments) that he and David had been thinking about the pattern.

An important thing to note about Presenters is that they add an additional layer and, therefore, more complexity. This is definitely a trade-off worth considering, and the reason I believe that Skinny Controller, Fat Model is the way to go if you can get away with it. However, when you begin working with multiple models on one view, I do believe it's time to give a presenter a chance, which is why I created an entry with another presenter example in response to Jamis' Moving associated creations to the model entry.

I closed the Another Rails Presenter Example entry with the following paragraph:
Using Presenters has additional advantages such as their ability to easily integrate with ActiveRecord validations, separation of view behavior from models (such as formatting a phone number), and allowing you to put validation error messages somewhere other than in a model. I'll address each of these scenarios in upcoming blog entries.
Since I wrote that entry I've touched on a few of these topics, but never brought it all together.

So, taking the paragraph one sentence at a time, here's my explanation.
Using Presenters has additional advantages such as their ability to easily integrate with ActiveRecord validations
This statement was somewhat misleading. Presenters do integrate easily with ActiveRecord::Base models. They can easily aggregate errors from multiple models and display those errors from the presenter's errors collection. However, this is already possible if you include Validatable, and ActiveRecord::Base validations could simply add this behavior, in which case presenters would provide no superior solution. There's a discussion to be had around whether models aggregating errors is a good thing, but that's a discussion for another entry.
[Presenters allow] separation of view behavior from models (such as formatting a phone number)
This is quite true, and one of my favorite aspects of using presenters. Consider an Order class that contains a total attribute. The total is stored in the database as a Number (Oracle). The total needs to be displayed to the user (in the view), formatted as currency, but without the unit ($). The easy solution is to put the following code in the view.
number_to_currency(total, :unit => "")
While this does work, it's basically not unit testable. It is functionally testable, but not nearly as neatly as I'd prefer. An alternative solution is to mix ActionView::Helpers::NumberHelper into the model. Then I can unit test my ActiveRecord::Base subclass, but adding view behavior to models can begin to lead you down (or, some may argue, means you are already on) the wrong path. Therefore, adding a presenter, an object that represents your view, allows you to clearly separate concerns and create more maintainable tests. Since presenters are class versions of your views, including ActionView::Helpers::NumberHelper in a presenter makes perfect sense. And presenters should be easy to test without relying on rendering and other framework dependencies.
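To make this concrete, here's a minimal sketch. The Order and OrderPresenter names and the hand-rolled formatting are illustrative only; in a real Rails app the presenter would simply include ActionView::Helpers::NumberHelper and call number_to_currency(total, :unit => "").

```ruby
# Illustrative sketch: the presenter wraps the model and owns the
# view-level formatting, so it can be unit tested in isolation.
class Order
  attr_accessor :total          # stored as a plain number in the database
end

class OrderPresenter
  def initialize(order)
    @order = order
  end

  # Format the raw number as currency without the unit,
  # e.g. 1234.5 -> "1,234.50". In Rails, number_to_currency does this.
  def formatted_total
    whole, cents = ("%.2f" % @order.total).split(".")
    whole = whole.reverse.scan(/\d{1,3}/).join(",").reverse
    "#{whole}.#{cents}"
  end
end
```

Now a unit test can assert on OrderPresenter.new(order).formatted_total without rendering a view.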
[Presenters] allow you to put validation error messages somewhere other than in a model.
Putting error messages, that will be used as display text, in a model might or might not bother you. There's an academic argument to be had, but actual software delivery is what I'm interested in. Unfortunately, pulling error messages out of models isn't simply an academic issue on my current project. We have requirements that include storing error messages in a database so they can be manipulated by business users instead of programmers. There are few things in developing software that I believe are black and white, but empowering subject matter experts is one thing that I see little merit in arguing against. So, this wasn't just a requirement, but something I believed was the right idea.

Since we chose to use presenters, we were able to specify keys in our validations. Those keys were used by the presenters to pull the appropriate error messages from the database. This solution was simple and, more importantly, easily testable.
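As an illustration of the approach (not our actual code; every name here is hypothetical), a validation can record a key and the presenter can resolve it against an external message store. A hash stands in for the database table:

```ruby
# Hypothetical sketch: validations record keys, and the presenter resolves
# keys to display text from an external store editable by business users.
MESSAGE_STORE = {                       # stands in for the database table
  :name_missing => "Please provide your name."
}

class PersonPresenter
  attr_accessor :name

  # Record keys rather than display text; returns true when valid.
  def validate
    @error_keys = []
    @error_keys << :name_missing if name.nil? || name.empty?
    @error_keys.empty?
  end

  # Resolve the stored keys to business-editable display text.
  def error_messages
    @error_keys.map { |key| MESSAGE_STORE[key] }
  end
end
```

Because the lookup is just a keyed read, the presenter can be unit tested without touching the real message table.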

Shortly after I wrote Another Rails Presenter Example, Jay D. left a comment on the original Presenter entry:

"[In] the example you gave I would have moved the code to the model.

'However, a better example would have been if the necessary data is spread across various models.'

I still don't see why it wouldn't work in the model...

My controllers are clean like the one in your example and I can easily test the data aggregation/manipulation in my unit tests. Why would this be a bad thing?"
By the time this comment came across, I had been using Presenters heavily (1 per view and often 1 per partial) on my current project. I still don't believe that they are required; however, I do now believe that they can be a valuable pattern when applied correctly. I responded to Jay D. as a comment; however, I believe Jay D.'s question is common enough that my answer is worth repeating in this entry.
The classic example for using a Presenter is a summary page. For example, if you buy books from Amazon, the final page has information about each book that you bought, tax information, shipping information, billing information, user account information, etc. In that scenario, I'd prefer to aggregate all that data in a Presenter.

Presenters offer other opportunities for separation of concerns that models do not. For example, if you want to validate that the user entered two matching passwords or email addresses you must add 'virtual' attributes to your models. The same is true if you would like to create an acceptance attribute (to use validates_acceptance_of). You could argue that since these attributes are not persisted to the database that they should not live in the model.
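A hypothetical sketch of that idea: the confirmation value lives only on the presenter, so no 'virtual' attribute ever touches the persisted model.

```ruby
# Hypothetical sketch: password_confirmation exists only on the presenter,
# so the persisted User model needs no non-database attribute.
class SignupPresenter
  attr_accessor :password, :password_confirmation

  # Mirrors the check that validates_confirmation_of performs.
  def passwords_match?
    !password.nil? && password == password_confirmation
  end
end
```

The model stays a pure reflection of the database table, and the screen-specific check is still trivially unit testable.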
Presenters are no more a silver bullet than any other pattern provided in the past; however, they can be helpful when used correctly.

Friday, February 16, 2007

Rails: Unit Testing Update

In the past I've written about unit testing Rails applications and specifically ActiveRecord::Base unit tests. On this topic, I have friends whose opinions range from interested to skeptical, and probably worse. Since I expect the readers of this blog to fit somewhere in the same range, I thought it might be valuable to post the following stats from my current project. The following is a cleaned-up version of our build output.
[Unit Tests]
Finished in 6.115697 seconds.
983 tests, 1600 assertions, 0 failures, 0 errors

[Drop and Create 3 databases]

[Functional Tests]
Finished in 5.746835 seconds.
281 tests, 542 assertions, 0 failures, 0 errors

[Integration Tests]
Finished in 0.945832 seconds.
6 tests, 15 assertions, 0 failures, 0 errors
4 months into the project, with 16 developers, our build still runs (including dropping and creating 2 Postgres databases and 1 MySQL database) in less than 50 seconds.

Rails: Acceptance Testing

My current team has 2 QA roles. The QA developers are responsible for creating an acceptance suite that ensures each story is completed without breaking previously QA'd stories. The project I'm working on is a web application, so using Selenium as the acceptance test suite was a fairly easy choice.

4 months later, the Selenium suite is painfully large. The Selenium suite is run with each CI build; however, it is not run by developers before they check in, for performance reasons. The result: the Selenium build is broken more often than not.

Fixing the Selenium build can be time consuming for the QA developers, which takes away from the time they have to spend doing exploratory testing. This would be acceptable if the Selenium tests were catching bugs. Unfortunately, we estimate that when the Selenium tests break, only 10% of the time is it because a bug has been introduced; the other 90% of the time the Selenium tests require updating because the functionality of the application has changed.

At the same time, the build is getting longer and longer (because more tests mean more browser opens and closes); therefore, the feedback loop for the Selenium tests is losing even more value.

The writing on the wall reminded me of stories of other projects that eventually threw out their acceptance suites because maintaining them was a full-time job and they weren't providing enough value.

Based on the fact that we have a long-running acceptance suite and a functional test suite that catches 90% of the bugs introduced, I concluded that we should look for a more beneficial approach to acceptance testing. After a bit of discussion with a few coworkers we came up with the following idea: create a DSL for acceptance tests. The QA team can create acceptance tests using the DSL. The acceptance tests can be evaluated in one context to run as Rails functional or integration tests. This allows the developers to run the entire acceptance suite before checking in without the overhead of running Selenium. If the acceptance tests break when run as Rails functional tests, the QA team works with the developers to update them. The same acceptance tests can be evaluated in another context to run as Selenium tests, so the CI build still runs the same tests through the browser. We do believe that running through the browser is valuable, and this solution allows us to continue doing so.
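The shape of that idea can be sketched as follows (all class and method names are hypothetical; the real DSL vocabulary would mirror the application's screens). The script only speaks the DSL, and each runner, Selenium-backed or Rails-integration-backed, supplies its own driver:

```ruby
# Hypothetical sketch: one acceptance script, evaluated against
# interchangeable drivers (Selenium in CI, Rails integration locally).
class AcceptanceScript
  def initialize(driver)
    @driver = driver
  end

  # Evaluate the same script body against whatever driver was supplied.
  def run(&script)
    instance_eval(&script)
    @driver
  end

  # The DSL vocabulary simply forwards to the driver.
  def visit(path); @driver.visit(path); end
  def click(label); @driver.click(label); end
end

# A stand-in driver that just records the steps; a Selenium-backed or
# Rails-integration-backed driver would implement the same two methods.
class RecordingDriver
  attr_reader :log
  def initialize; @log = []; end
  def visit(path); @log << [:visit, path]; end
  def click(label); @log << [:click, label]; end
end
```

AcceptanceScript.new(RecordingDriver.new).run { visit "/login"; click "Submit" } replays the same steps through whichever driver it is given, which is the whole point: the QA-authored script never changes between contexts.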

I've begun creating this solution for my current project and hope to extract something general for other projects moving forward. Look for updates in the future.

TDD: Removing test noise

Consider the following expectations:
expect 4 do
  Calculator.add(2, 2)
end

expect ActiveRecord::Base.to_receive(:execute).with("insert ...") do
  ...
end
In the past I've written about the absence of mocks in xUnit frameworks. I still believe this is a limitation, and it leads to an additional one: you have to set up behavioral expectations in a different way than you set up state-based assertions. A side effect is that tests often contain both behavioral and state-based requirements, which are generally brittle. I believe that having two ways to express a test requirement increases noise in tests. The above example attempts to address the disconnect.

There's another spot where test noise can creep in: the name of the test. This idea is very controversial because it goes against what can work best. The problem is, what can work best, rarely does.

For example, when you encounter a test name such as def test_should_send_email_when_the_order_is_placed, how often does the test actually verify that an email is sent following an order being placed? Furthermore, how often do you even find test names that are that descriptive?

In my experience, I generally find tests such as def test_can_save. This is annoying, but it's better than finding def test_should_verify_that_the_service_is_called_to_verify_a_credit_card where the actual body of the test does something entirely different. This generally happens on larger code bases that have been in development for some time. When the description was originally written it was (hopefully) valid; however, at some point the test broke while a developer was trying to check in. The developer came to the test, didn't understand it, and did what they had to do to make it pass again. This cycle continued a few times until I found it (also when the test broke). By then, no one had a full picture of what the test should be doing, and I often ended up deleting the test since even the original author had no idea what it was doing under all the patches that other developers had put on it.

The above scenario describes a larger problem: lack of communication. When the tests break, the team members should talk to each other to ensure that the test continues to provide value; however, this simply doesn't happen. You could argue that the developers need to be beaten with a stick; however, I prefer a solution that plays to what developers desire: the ability to change the tests themselves.

While the above example does highlight a lack of communication, it isn't simply in the form of team members talking to each other. The test itself did not clearly communicate its intent and was hard for the maintainers to comprehend.

The original example takes into account that the test name is rarely valuable and replaces it with what is important: the expectation of the test. By reducing the noise in the test, maintainability is increased, as is the likelihood that the test will continue to provide value.

If you are a fan of Dave Astels' one-assertion-per-test guideline, you will notice that the original example also guides you to follow it. However, one assertion per test is simply a guideline, so I expect you could also add additional expectations in the block where necessary.

In the Validatable work I've been doing lately I started using:
expect [expected value] do
This is working well, but I haven't taken the time to implement the mock syntax. I expect that using Mocha it would be fairly easy; however, I think the long-term solution is to create a new testing (expectation?) framework.
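For the state-based half, a minimal expect can be sketched in a few lines (a hypothetical implementation, not the actual Validatable test code): the expected value and the block's result are compared directly, so the expression itself documents the test in place of a name.

```ruby
# Hypothetical minimal implementation of the state-based expect syntax:
# the block's result is compared to the expected value, and a mismatch
# raises, replacing both the test name and the assertion.
def expect(expected)
  actual = yield
  unless expected == actual
    raise "Expected #{expected.inspect} but got #{actual.inspect}"
  end
  true
end
```

With this in place, expect 4 do 2 + 2 end reads as a complete test: the expectation is the documentation.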

Thursday, February 15, 2007

Ruby Evaluation options article on InfoQ

Yesterday, I published a fairly long article on InfoQ about Evaluation Options in Ruby. The article covers eval, instance_eval, class_eval and provides examples of using each of them.

Feedback welcome.

Sunday, February 11, 2007

Ruby: Validatable

This morning I finished up the 1.1.0 release of Validatable. Validatable is a module that you can mix into your classes to add validations.
class Person
  include Validatable
  attr_accessor :name
  validates_presence_of :name
end

person = Person.new
person.valid? #=> false
person.errors.on(:name) #=> "can't be empty"
Validatable currently supports:
  • validates_presence_of
  • validates_length_of
  • validates_format_of
  • validates_confirmation_of
  • validates_acceptance_of
The validations are very similar to the validations that Rails provides. In addition to the traditional Rails functionality, Validatable provides 3 additional features.

Validation of an entire hierarchy of objects with errors aggregated at the root object.
class Person
  include Validatable
  validates_presence_of :name
  attr_accessor :name
end

class PersonPresenter
  include Validatable
  include_validations_for :person
  attr_accessor :person

  def initialize(person)
    @person = person
  end
end

presenter = PersonPresenter.new(Person.new)
presenter.valid? #=> false
presenter.errors.on(:name) #=> "can't be blank"
Validations that turn off after X failed attempts.
class Person
  include Validatable
  validates_presence_of :name, :times => 1
  attr_accessor :name
end

person = Person.new
person.valid? #=> false
person.valid? #=> true
Validations can be given levels. If a validation fails on a level, the validations for subsequent levels will not be executed.
class Person
  include Validatable
  validates_presence_of :name, :level => 1, :message => "name message"
  validates_presence_of :address, :level => 2
  attr_accessor :name, :address
end

person = Person.new
person.valid? #=> false
person.errors.on(:name) #=> "name message"
person.errors.on(:address) #=> nil
Similar to Rails, Validatable also supports conditional validation.
class Person
  include Validatable
  attr_accessor :name
  validates_format_of :name, :with => /.+/, :if => lambda { !name.nil? }
end

Person.new.valid? #=> true

Friday, February 09, 2007

Ruby: Forwardable addition

Almost every project I work on ends up using the Forwardable module included in the Ruby Standard Library. In fact, I use it so often I thought it was worth putting together an entry about using Forwardable to avoid violating the Law of Demeter.

These days I'm working on a project that aggregates a large amount of data on almost every screen. To increase testability of the application we introduced the Presenter pattern. I've previously written an entry introducing the Presenter pattern and a follow up with an additional Presenter example.

In practice our presenters almost always use Forwardable. For example, a summary page could display shipping and billing information.
class SummaryPresenter
  extend Forwardable

  def_delegator :shipping_address, :first_name, :shipping_first_name
  def_delegator :shipping_address, :last_name, :shipping_last_name
  def_delegator :shipping_address, :first_name=, :shipping_first_name=
  def_delegator :shipping_address, :last_name=, :shipping_last_name=
  def_delegator :billing_address, :first_name, :billing_first_name
  def_delegator :billing_address, :last_name, :billing_last_name
  def_delegator :billing_address, :first_name=, :billing_first_name=
  def_delegator :billing_address, :last_name=, :billing_last_name=
end
Now, I love Forwardable, but it's pretty clear that I can come up with a better solution that will allow me to clean up the def_delegator code.

We decided to create another method, delegate_to, which looks very similar to def_delegators but also takes a hash that allows you to specify a prefix. After making this change, the code can be cleaned up to the following.
class SummaryPresenter
  extend Forwardable

  delegate_to :shipping_address, :first_name, :last_name, ..., :prefix => 'shipping'
  delegate_to :shipping_address, :first_name=, :last_name=, ..., :prefix => 'shipping'
  delegate_to :billing_address, :first_name, :last_name, ..., :prefix => 'billing'
  delegate_to :billing_address, :first_name=, :last_name=, ..., :prefix => 'billing'
end
This was a good first step, but there is obviously more we can clean up.

We also added the ability to create writers by simply adding :writer => true to the options hash. Following this change, the code is concise but just as readable as the original version, if not more so, because of the noise reduction.
class SummaryPresenter
  extend Forwardable

  delegate_to :shipping_address, :first_name, :last_name, ...,
              :prefix => 'shipping', :writer => true
  delegate_to :billing_address, :first_name, :last_name, ...,
              :prefix => 'billing', :writer => true
end
Below is the full code required to add this functionality.
module ForwardableExtension
  def delegate_to(object, *args)
    options = args.last.is_a?(Hash) ? args.pop : {}
    args.each do |element|
      def_delegator object, element, name_with_prefix(element, options).to_sym
      if options[:writer]
        def_delegator object, :"#{element}=", :"#{name_with_prefix(element, options)}="
      end
    end
  end

  def name_with_prefix(element, options)
    "#{options[:prefix] + "_" unless options[:prefix].nil?}#{element}"
  end
end

Forwardable.send :include, ForwardableExtension
And, of course, the tests.
require File.dirname(__FILE__) + '/../unit_test_helper'

class ForwardableExtensionTest < Test::Unit::TestCase
  class Bar
    attr_accessor :cat, :dog, :monkey
  end

  class Foo
    extend Forwardable

    delegate_to :bar, :cat, :dog, :prefix => 'baz', :writer => true
    delegate_to :bar, :monkey

    def bar
      @bar ||= Bar.new
    end
  end

  test "getter and setter are delegated correctly" do
    f = Foo.new
    f.baz_cat = "cat"
    assert_equal "cat", f.baz_cat

    f = Foo.new
    f.baz_dog = "dog"
    assert_equal "dog", f.baz_dog
  end

  test "setter is created only when writer is true" do
    f = Foo.new
    assert_raises NoMethodError do
      f.monkey = "raise"
    end
  end
end