Thursday, June 29, 2006

Business Natural Language

For several months now I've been working on a system where the business rules are expressed in a Domain Specific Language. In fact, the majority of my focus these days goes towards working with Domain Specific Languages. However, I'm most fascinated by Domain Specific Languages that empower business subject matter experts.

One problem with DSLs is that almost anything can be considered a DSL. Everyone knows the common DSLs: YACC and spreadsheet macros. More recently, Rake and Ruby on Rails have gained much popularity as DSLs. Some even consider Cobol to be a DSL. And therein lies the problem: you can make a case for almost anything being considered a DSL.

Is that a problem? Well, to me it is. I enjoy writing, talking, and thinking about DSLs. And, when I can't be specific about what I'm discussing I find my conversations often get slowed down by describing what type of DSLs I'm not talking about. Or, worse, people dismiss the conversations because they don't believe that DSLs can work in the way described. This, unfortunately, is fairly common because of DSLs of the past that have failed.

By now I hope you can understand why I feel the need to classify the type of work I'm doing. For the past few months I've been referring to my work as Business Domain Specific Languages. Unfortunately, even if you prefix a topic with "business," all anyone really hears is "DSL." Also, as Jeremy Stell Smith pointed out, the DSLs I'm interested in are similar to natural language.

Therefore, in an attempt to focus the conversations and make them more productive I'm going to start describing the type of DSL work I'm doing as Business Natural Languages.

Tuesday, June 27, 2006

Ruby on Rails Unit Tests

Updated following multiple comments requesting examples and questions on where to put AR::Base subclass tests.

Everyone (who reads this blog anyway) knows that you should not cross boundaries while unit testing. Unfortunately, Ruby on Rails seems to believe otherwise. This is evident from the fact that the test:units rake task has the prerequisite db:test:prepare. Additionally, if you use script/generate to create a model, it creates a [model].yml fixture file and a unit test that includes a call to the fixtures class method. Rails may be opinionated, but that doesn't mean I have to agree with it.

With a minor modification you can tell Rails not to run the db:test:prepare task. You should also create a new test helper that doesn't load the frameworks you won't need. I found some of the code for this while reading a great book, Rails Recipes, by Chad Fowler.

You'll need to add a .rake file to /lib/tasks. The file contains one line:
Rake::Task[:'test:units'].prerequisites.clear
Additionally, you'll need to create a new helper file in /test. I named my file unit_test_helper.rb, but the file name is your choice.
ENV["RAILS_ENV"] = "test"
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
require 'application'
require 'test/unit'
require 'action_controller/test_process'
require 'breakpoint'

class UnitTest
  def self.TestCase
    class << ActiveRecord::Base
      def connection
        raise InvalidActionError, 'You cannot access the database from a unit test', caller
      end
    end
    Test::Unit::TestCase
  end
end

class InvalidActionError < StandardError
end
As you can see, unit_test_helper.rb requires only what is necessary; however, it also modifies ActiveRecord::Base to raise an error if you attempt to access the connection from a unit test.

I included this test in my codebase to ensure expected behavior.
require File.dirname(__FILE__) + '/../unit_test_helper'

class AttemptToAccessDbThrowsExceptionTest < UnitTest.TestCase
  def test_calling_the_db_causes_a_failure
    assert_raise(InvalidActionError) { ActiveRecord::Base.connection }
  end
end
Update (Again):
We have been using this style of testing for several months now and have over 130 tests at this point and our tests still run in less than a second.

This decision does carry some trade-offs though. First of all, it becomes a bit more work to test ActiveRecord::Base subclasses in your unit tests. I'm comfortable with the small amount of extra work since it results in a significantly faster running test suite.

Also, if you need to use an AR::Base class as a dependency for another class, you will need to mock or stub the AR::Base class. This generally requires using Dependency Injection or a framework such as Stubba. For example, if you have a method that returns an ActiveRecord::Base subclass, you can mock the new call and return a stub instead.
class AccountInformationPresenter
  def account
    Account.new
  end
end

class AccountInformationPresenterTest
  def test_account_returns_a_new_account
    Account.expects(:new).returns(stub(:name => 'jay'))
    AccountInformationPresenter.new.account
  end
end
In the above code, mocking the new call on Account prevents an unnecessary database trip.

For an example of what our unit tests look like here are some tests and the classes that the tests cover.
require File.dirname(__FILE__) + '/../../../unit_test_helper'

class SelectTest < Test::Unit::TestCase
  def test_select_with_single_column
    assert_equal 'select foo', Select[:foo].to_sql
  end

  def test_select_with_multiple_columns
    assert_equal 'select foo, bar', Select[:foo, :bar].to_sql
  end

  def test_date_time_literals_quoted
    date = DateTime.new(2006, 1, 20, 13, 30, 54)
    assert_equal "select to_timestamp('2006-01-20 13:30:54', 'YYYY-MM-DD HH24:MI:SS')", Select[date].to_sql
  end

  def test_select_with_symbol_and_literal_columns
    assert_equal "select symbol, 'literal'", Select[:symbol, 'literal'].to_sql
  end

  def test_select_with_single_table
    assert_equal 'select foo from foo', Select[:foo].from[:foo].to_sql
  end

  def test_select_with_multiple_tables
    assert_equal 'select column from bar, foo',
      Select[:column].from[:foo, :bar].to_sql
  end
end

require File.dirname(__FILE__) + '/../../unit_test_helper'

class TimeTest < Test::Unit::TestCase
  def test_to_sql_gives_quoted
    t = Time.parse('2006/05/01')
    assert_equal "to_timestamp('2006-05-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')", t.to_sql
  end

  def test_to_pretty_datetime
    d = Time.parse("05/10/2006")
    assert_equal "05-10-2006 12:00 AM", d.to_pretty_time
  end
end

class Select < SqlStatement
  class << self
    def [](*columns)
      unless columns.select { |column| column.nil? }.empty?
        raise "Empty Column in #{columns.inspect}"
      end
      self.new("select #{columns.collect { |column| column.to_sql }.join(', ')}")
    end
  end

  def from
    @to_sql += " from "
    self
  end

  def [](*table_names)
    @to_sql += table_names.collect { |table| table.to_s }.sort.join(', ')
    self
  end
end

class Time
  def to_sql
    "to_timestamp('" + formatted + "', 'YYYY-MM-DD HH24:MI:SS')"
  end

  def to_pretty_time
    self.strftime("%m-%d-%Y %I:%M %p")
  end

  private

  def formatted
    year.to_s + "-" + month.pad + "-" + day.pad + " " + hour.pad + ":" + min.pad + ":" + sec.pad
  end
end

Ruby convert Time to Date

On more than one occasion I've needed to convert a Time object to a Date. Generally, I use the following code:
Date.parse(Time.now.strftime('%Y/%m/%d'))
I have two questions.
  • Is there a better way?
  • Why doesn't Time have a to_date method?
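One possible answer to the first question is to build the Date directly from the Time's parts, skipping the strftime/parse round-trip entirely. This is only a sketch (Rails' ActiveSupport provides a similar Time#to_date extension):

```ruby
require 'date'

# Build the Date from the Time's own year/month/day instead of
# formatting to a string and re-parsing it.
class Time
  def to_date
    Date.civil(year, month, day)
  end
end

Time.local(2006, 6, 29).to_date # => a Date for June 29, 2006
```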

Monday, June 26, 2006

Use erb to insert dynamic text

Tonight, I was writing up some new material and wanted to mix some examples with the text. I could have just pasted the code into the material; however, I wanted the material to update automatically whenever I updated the code. While working with Martin on his next book I experienced the value of this practice first-hand, and it isn't something I want to work without at this point.

So, I decided to use a similar tag system in my writing. When deciding what syntax to use to delimit areas where I wanted code to execute, I thought of the <% .. %> syntax common to .rhtml, .yaml, and similar files. My first instinct was to use some type of description inside my delimiter and process the file based on the description. Then I realized it would be easiest to simply express what I wanted to do by writing valid ruby.

At this point I had a text file that contained small blocks of code surrounded by <%= %>. I could write a task that searches and evals all the code, but this seemed to be much better:
erb INPUT > OUTPUT
That's it, my output is now generated from the static text and the dynamic code that executes. I love it when the simplest thing that could possibly work just works.
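As a minimal illustration of the technique (the template string here is made up, not from the original material), any text with embedded <%= %> tags can be rendered with the erb library:

```ruby
require 'erb'

# Static text with an embedded ruby expression; erb evaluates the
# tag and splices the result into the output.
template = "The sum of the first three squares is <%= 1 + 4 + 9 %>."
puts ERB.new(template).result
```

Running the same content through the erb command-line tool produces identical output.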

Friday, June 16, 2006

OpenStruct freeze behavior

While looking at OpenStruct, I noticed what I considered to be unexpected behavior.
irb(main):007:0> frozen = OpenStruct.new(:foo=>1).freeze
=> #<OpenStruct foo=1>
irb(main):008:0> frozen.foo
=> 1
irb(main):009:0> frozen.foo = 2
=> 2
irb(main):010:0> frozen.foo
=> 2
This behavior surprised me since freeze is defined as:
Prevents further modifications to obj. A TypeError will be raised if modification is attempted. There is no way to unfreeze a frozen object. See also Object#frozen?.

a = [ "a", "b", "c" ]
a.freeze
a << "z"

produces:

prog.rb:3:in `<<': can't modify frozen array (TypeError)
from prog.rb:3
To find out what was happening I opened ostruct.rb. The OpenStruct class defines methods based on the keys of the constructor hash parameter and stores the values in a hash.
def initialize(hash=nil)
  @table = {}
  if hash
    for k,v in hash
      @table[k.to_sym] = v
      new_ostruct_member(k)
    end
  end
end

def new_ostruct_member(name)
  name = name.to_sym
  unless self.respond_to?(name)
    meta = class << self; self; end
    meta.send(:define_method, name) { @table[name] }
    meta.send(:define_method, :"#{name}=") { |x| @table[name] = x }
  end
end
A quick check shows that if you freeze an OpenStruct instance the value hash is not frozen.
irb(main):002:0> frozen = OpenStruct.new(:foo=>1).freeze
=> #<OpenStruct foo=1>
irb(main):003:0> frozen.frozen?
=> true
irb(main):004:0> table = frozen.send :table
=> {:foo=>1}
irb(main):005:0> table.frozen?
=> false
To fix this issue you could redefine freeze and delegate the call to both the value hash and the object itself. The problem with this solution is that when the TypeError is raised, it will report the hash, not the OpenStruct, as the frozen object.
irb(main):006:0> table.freeze
=> {:foo=>1}
irb(main):007:0> frozen.foo = 2
TypeError: can't modify frozen hash
from /opt/local/lib/ruby/1.8/ostruct.rb:75:in `[]='
from /opt/local/lib/ruby/1.8/ostruct.rb:75:in `foo='
from (irb):7
from :0
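Here is a sketch of that delegating-freeze workaround (the subclass name is made up). Note that, depending on the Ruby version, the resulting error may name the frozen hash rather than the OpenStruct, which is exactly the drawback described above:

```ruby
require 'ostruct'

# Freezing the struct also freezes its backing @table hash, so
# attribute writes raise a frozen-object error.
class DeepFrozenStruct < OpenStruct
  def freeze
    @table.freeze
    super
  end
end

s = DeepFrozenStruct.new(:foo => 1)
s.freeze
error = nil
begin
  s.foo = 2
rescue StandardError => e
  error = e
end
error # the frozen-object error; s.foo is still 1
```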
Another solution is to change the definition of new_ostruct_member.
class OpenStruct
  def new_ostruct_member(name)
    name = name.to_sym
    unless self.respond_to?(name)
      meta = class << self; self; end
      meta.send(:define_method, name) { @table[name] }
      meta.send(:define_method, :"#{name}=") do |x|
        raise TypeError, "can't modify frozen #{self.class}", caller(1) if self.frozen?
        @table[name] = x
      end
    end
  end
end
The above change will name OpenStruct as the frozen class when a modification attempt is made. To verify the change works correctly I wrote the following test.
class OpenStructTest < Test::Unit::TestCase
  def test_struct_does_not_modify_table_if_frozen
    f = OpenStruct.new(:bar=>1).freeze
    assert_raise(TypeError) { f.bar = 2 }
  end
end

Thursday, June 15, 2006

Ruby TestStub

Because I believe in testing one concrete class at a time, I often make use of stubs. When I first began programming in ruby I would create stubs within my test classes:
class SqlGeneratorTest < Test::Unit::TestCase
  class StubIdGenerator
    def nextval; 1; end
  end

  def test_something
    stub = StubIdGenerator.new
    # some logic
  end
end
Then, as I previously blogged about, I started using classes as stubs. But somewhere along the way I had forgotten about OpenStruct. It wasn't until recently, when a co-worker suggested he was going to write a 'stub mother', that I remembered OpenStruct.

OpenStruct, as the documentation states, allows you to create data objects and set arbitrary attributes. The above StubIdGenerator definition and instantiation code can be replaced by using OpenStruct with one line.
def test_something
  stub = OpenStruct.new(:nextval=>1)
end
Unfortunately, OpenStruct does not quite behave as I would expect a stub to. For example, you could write:
def test_something
  stub = OpenStruct.new(:nextval=>1)
  stub.nextval = 2
  stub.nextval # nextval now returns 2, not 1
end
To remedy this, I stole some of the behavior of OpenStruct and created TestStub. TestStub behaves the way I expect a stub to behave: it does not raise an exception when I call a writer I've defined as valid, and it returns a constant value when I call a reader. These requirements can be expressed as tests:
class TestStubTest < Test::Unit::TestCase
  def test_writers_are_created_correctly
    stub = TestStub.new(:bar, :baz)
    assert_nothing_raised do
      stub.bar = 2
      stub.baz = 3
    end
  end

  def test_calling_an_invalid_writer_raises_nme
    stub = TestStub.new(:bar, :baz)
    assert_raise(NoMethodError) { stub.cat = 4 }
  end

  def test_readers_are_created_correctly
    stub = TestStub.new(:bar=>1)
    assert_equal 1, stub.bar
  end

  def test_first_reader_when_multiple_readers_are_created_correctly
    stub = TestStub.new(:bar=>1, :baz=>2)
    assert_equal 1, stub.bar
  end

  def test_second_reader_when_multiple_readers_are_created_correctly
    stub = TestStub.new(:bar=>1, :baz=>2)
    assert_equal 2, stub.baz
  end

  def test_writers_when_readers_are_specified
    stub = TestStub.new(:bar, :baz, :cat=>1)
    assert_nothing_raised do
      stub.bar = 2
      stub.baz = 3
    end
  end

  def test_readers_when_writers_are_specified
    stub = TestStub.new(:cat, :dog, :bar=>1)
    assert_equal 1, stub.bar
  end
end
Since I was able to steal much of the behavior I needed from OpenStruct, the TestStub class was quite easy to throw together.
class TestStub
  def initialize(*should_respond_to)
    @table = {}
    should_respond_to.each do |item|
      meta = class << self; self; end
      create_readers(meta, item) and next if item.kind_of? Hash
      create_writer(meta, item) and next if item.kind_of? Symbol
    end
  end

  def create_readers(meta, item)
    item.each_pair do |key, val|
      @table[key.to_sym] = val
      meta.send(:define_method, key.to_sym) { @table[key.to_sym] }
    end
  end

  def create_writer(meta, item)
    meta.send(:define_method, :"#{item}=") { }
  end

  attr_reader :table # :nodoc:
  protected :table

  #
  # Compare this object and +other+ for equality.
  #
  def ==(other)
    return false unless other.kind_of?(TestStub)
    return @table == other.table
  end
end

Monday, June 12, 2006

Ruby IDE

'What IDE should I use' is a very popular nuby question. Unfortunately, there isn't a vastly superior option; therefore, the answer depends largely on your personal preferences.

The current popular options for ruby development are TextMate and RadRails. TextMate is a very popular editor that provides keyword highlighting and integrated test execution. TextMate is very extensible and allows for a high level of customization. TextMate appears to be the popular choice for Mac OS X users. Unfortunately, it is only available for Mac OS X. Also, it's not specific to ruby and it is only an editor.

RadRails also provides keyword highlighting and adds a few capabilities such as the ability to start and stop webrick, generators, and other various short-cuts. RadRails is built on the Eclipse RCP. RadRails is considered an IDE because of the additional features. Some of my co-workers find the auto-completion helpful; however, the majority of features didn't really help me; therefore, I consider it basically just another editor. Of course, often you get what you pay for.

Another option is to use IntelliJ combined with the Simple Syntax Highlighting plug-in. IntelliJ has great Subversion integration, local history, and great JavaScript support. IntelliJ can be heavy at times, but with a few tweaks I found it to be my favorite Windows ruby development environment.

There are other editors in the ruby space:
  • ActiveState: Komodo
  • Arachno
  • RIDE - ME
  • FreeRIDE
I haven't included links to these because I see no reason to refer any traffic. Upon review, none of the above provide any additional features (of any value) when compared to the popular options. Frankly, I see no competitive advantage to any of these IDEs. In a PostIntelliJ world, why are the only current options nothing more than glorified editors?

I would love to see JetBrains move into the ruby space, but I'm not holding my breath. Can you imagine developing a large C# or Java application without a great IDE? Ruby is simple enough that it is possible to develop a large application simply using TextMate; however, an IDE that added refactoring support would certainly improve productivity.

Wednesday, June 07, 2006

More Test Driven Development observations

In part one I described some of the observations I've made while following Test Driven Development practices. In part two I'll describe other observations that I have previously lightly touched on or haven't documented at all.

If your object graph will not permit testing each concrete class individually, your object graph requires refactoring. Look to introduce Dependency Injection to resolve the issue.
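As a minimal sketch of that refactoring (all class and method names here are hypothetical), constructor injection lets a unit test hand the class a stub in place of its real collaborator:

```ruby
require 'ostruct'

# The collaborator is passed in rather than constructed internally,
# so each concrete class can be tested in isolation.
class ReportPresenter
  def initialize(repository)
    @repository = repository
  end

  def titles
    @repository.find_all.collect { |report| report.title }
  end
end

# In a unit test the database-backed repository never appears;
# an OpenStruct stands in as a stub.
stub_repository = OpenStruct.new(:find_all => [OpenStruct.new(:title => 'Q2 Revenue')])
ReportPresenter.new(stub_repository).titles # => ["Q2 Revenue"]
```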

ObjectMother is an artifact, a legacy testing pattern. In the early days of TDD, simply writing tests was a large step. Since then, however, testing has evolved into separate unit and functional tests. Unit tests should test individual classes; therefore, ObjectMother is clearly not an option there. Functional tests use actual implementations, so ObjectMother could be a potential fit. However, unit tests should cover the boundary cases and functional tests should only test the classes' interactions; therefore, there should be a minimal number of functional tests. And since the number of functional tests should remain low, there should not be enough duplication to warrant ObjectMother. FluentInterface (or its special case, InitializationChain) is often a superior alternative to ObjectMother.

In an object oriented system, classes collaborate to achieve a larger goal. However, tests are not object oriented. In fact, having tests depend on other tests is an anti-pattern. Tests are procedural in nature and should be treated as such. Each test method should encapsulate the entire behavioral test. Tests should have very specific goals which can easily be identified by looking only at the method body, private method calls, and stubs if applicable.

Using patterns, for example Template Method, in tests does reduce duplication, but at the expense of readability. Due to this loss in readability, setup is considered harmful. The superior solution is to extract private helper methods and call them explicitly from each test.

Removing duplication within a system reduces the complexity of the entire system. However, the entire scope of a test is the single method body; therefore, abstracting duplication should not be taken lightly. I rarely find that my tests contain duplication. When they do, the solution is often refactoring a poorly designed piece of the object graph, not extracting a private method. For example, duplication can occur in setting up an array for testing, but the actual problem is the getter returning an array, not the duplicate test code. When duplicate code is extracted to a private method, be aware that you are coupling two entities that in general should be decoupled. This is acceptable, but only for a minimal number of special cases. In fact, even when a special case occurs, it is preferable to leave the duplication as is if doing so increases readability.

Supporting classes, stubs for example, should be included as private subclasses of the test class. This ensures that to read and understand an entire test you only need to look at one file. It additionally ensures that the stub will only be used within the context of the test class. This is very valuable in keeping the tests robust.

'Magic' introduced in a test (such as using the test class as a stub) may appear clever; however, it reduces readability and should be immediately refactored to something more maintainable. Save the effort and write the test clearly in the first place.

Always use strings or numbers as the 'expected' value of an assertion instead of constants or calculated values. Nothing is more meaningless than seeing a test assert expected_error_message == customer.save. The major problem here is the question of which is wrong: expected_error_message or the result of customer.save. Additionally, maintainability suffers because the values of both arguments must be obtained when debugging.

Use assert_equal (or Assert.AreEqual, etc.) the majority of the time. Asserting equality results in highly readable test failures. If I see a failure similar to <"select * from customers"> expected but was <"select from customers">, I may know what the issue is without rereading the test. While the others, such as assert and assert_not_null, have their place, their error messages are not nearly as meaningful. <false> is not true is very clear, but I have no idea what was false; finding the true issue requires consulting the test.

I believe that is a fairly comprehensive list of my major observations. Look for follow-ups in the future as I continue to learn more lessons.

Some Test Driven Development observations

Testing code is hard. There is some material available, but most of the examples are trivial. When you actually sit down to do strictly Test Driven Development (TDD) the complications appear quickly.

The first tripping point for me was cascading failures. Simple changes to one class often caused several test failures. Sometimes the fix was a one-liner, but often it was not. I began to find the solution when I first read Martin's paper on Mocks and Stubs. After I read that paper I began to use Mocks religiously, but that didn't really solve the problem. Using an excessive amount of mocks is just as fragile as using the actual implementations.

To me, the next distinct progression of my testing skills was following the one-assertion-per-test guideline. Using only one assertion per test allowed me to write very focused tests that generally had fewer dependencies. By removing dependencies my tests became more robust.

Understanding Dependency Injection was the next major step in improving my testing skills. By using Dependency Injection I was able to further abstract my dependencies to ensure robust tests.

At this point I felt my tests had improved significantly; however, my test suite was becoming a burden to execute. The major problem was the excessive network traffic between my test suite and the database. To me, long build times are a serious problem because they represent lost developer time, several times a day. I quickly came to understand the distinction between Unit tests and Functional (or Integration) tests.

The next logical step was to improve test performance by not crossing boundaries while unit testing. Utilizing Dependency Injection I began to mock the data access code in my unit tests which significantly improved test processing time.

Further increasing the robustness of my tests was achieved by testing one concrete class at a time. Again, Dependency Injection helped provide the solution. By using Dependency Injection the code I wrote was significantly decoupled compared to the previous code I was writing. Injecting mocks or stubs for dependencies of my class under test did increase the robustness of the tests; however, I still had the issue of dealing with fragile mock set up code.

A common practice is to specify method names using strings when setting up expectations of a mock. This is obviously fragile when performing constant refactoring. However, using stubs always seemed like more trouble than it was worth because it required implementing an entire interface when often I only cared about a few methods. With no clear answer on the 'best' choice I used both excessively in different places to determine which approach I preferred. In the end, I found that neither should be treated as the silver bullet. In fact, using both mocks and stubs it's possible to make very clear what your test is accomplishing.
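A hand-rolled illustration of that combination (no framework; every name here is made up): the stub supplies canned state, while the mock records an interaction and verifies it afterwards:

```ruby
require 'ostruct'

# The mock records calls so the test can verify the interaction.
class MailerMock
  attr_reader :delivered

  def initialize
    @delivered = []
  end

  def deliver(message)
    @delivered << message
  end

  def verify_delivered(expected_count)
    raise "expected #{expected_count} deliveries, got #{@delivered.size}" unless @delivered.size == expected_count
  end
end

class WelcomeService
  def initialize(user_repository, mailer)
    @user_repository = user_repository
    @mailer = mailer
  end

  def welcome_new_users
    @user_repository.new_users.each { |user| @mailer.deliver("Welcome, #{user.name}") }
  end
end

# The stub (an OpenStruct) provides state; the mock verifies behavior.
repository_stub = OpenStruct.new(:new_users => [OpenStruct.new(:name => 'jay')])
mailer_mock = MailerMock.new
WelcomeService.new(repository_stub, mailer_mock).welcome_new_users
mailer_mock.verify_delivered(1)
```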

Another gray area in testing concerns what visibility to give your classes' members. When I first started testing, I believed that exposing private members for testing was acceptable. However, I quickly learned this approach was naive; a much better solution is to simply test through the public interface. Also, if a method is marked as private but I feel it needs to be tested, I wouldn't hesitate to simply make it public.

The reason I believed I needed to access private methods was that I was using a Code Coverage tool that told me my code was only about 80% covered. However, the answer wasn't that I needed to devise a solution to reach 100%; it was that I needed someone to tell me that 80% or higher is good enough. Should you strive for more than 80%? Sure, but don't make any compromises to get there. You will find the return on investment generally isn't worth it.

That's the majority of my previously documented lessons learned concerning testing. I hope this entry and my previous entries can provide some guidance and help you avoid some common stumbling blocks. In part 2 of this entry I'll document some observations that I haven't previously written about.

Tuesday, June 06, 2006

Ruby Kernel system, exec and %x

The Ruby Core Library documentation is very similar for Kernel.system, Kernel.exec, and %x[..]. Recently I needed to kick off a system process, so I spent some time working with all three options.

Kernel.exec does exactly what the documentation states:
Replaces the current process by running the given external command. If exec is given a single argument, that argument is taken as a line that is subject to shell expansion before being executed. If multiple arguments are given, the second and subsequent arguments are passed as parameters to command with no shell expansion. If the first argument is a two-element array, the first element is the command to be executed, and the second argument is used as the argv[0] value, which may show up in process listings. In MSDOS environments, the command is executed in a subshell; otherwise, one of the exec(2) system calls is used, so the running command may inherit some of the environment of the original program (including open file descriptors).

exec "echo *" # echoes list of files in current directory
# never get here

exec "echo", "*" # echoes an asterisk
# never get here
An important thing to notice is that it replaces the current process. To immediately see what is meant by 'replaces', try running exec from irb:
focus:~/work/eaa jay$ irb
irb(main):001:0> exec 'svn st'
focus:~/work/eaa jay$
For a more realistic example, assume you wanted to execute an external command from rake and you tried to use exec. Your external command would be executed, but rake would never finish. Not exactly the desired result.

Kernel.system behaves very similarly, but does not replace the current process. The documentation states the following:
Executes cmd in a subshell, returning true if the command was found and ran successfully, false otherwise. An error status is available in $?. The arguments are processed in the same way as for Kernel::exec.

system("echo *")
system("echo", "*")

produces:

config.h main.rb
*
The same irb test shows us that system will return true or false, and $? can be used to get a Process::Status instance.
irb(main):014:0> s = system 'uptime'
10:56 up 3 days, 23:10, 2 users, load averages: 0.17 0.17 0.14
=> true
irb(main):015:0> s.class
=> TrueClass
irb(main):016:0> $?.class
=> Process::Status
Process::Status does contain the exitstatus Fixnum, but it does not appear to capture any errors that may occur.
irb(main):019:0> s = system "ruby -e 'invalid ruby $%^&'"
-e:1: parse error, unexpected $undefined., expecting $
invalid ruby $%^&
^
=> false
Though I do like knowing my external application failed, it is nice to know why it failed. If you can redirect the output to a file this isn't a problem; however, you have two options if the execution output is simply dumped to stdout. The first option is to use system and redirect output to a file:
irb(main):003:0> system 'uptime > results.log'
=> true
irb(main):004:0> exit
focus:~/work/eaa jay$ more results.log
13:09 up 4 days, 1:23, 2 users, load averages: 0.21 0.24 0.19
Another option is to use %x[..] and save the results. This option is more often used when the results need to be captured and used to determine further execution flow. To capture the results of %x[..], assign its return value to a variable.
irb(main):001:0> result = %x[uptime]
=> "13:16 up 4 days, 1:30, 2 users, load averages: 0.39 0.29 0.23\n"
irb(main):002:0> p result
"13:16 up 4 days, 1:30, 2 users, load averages: 0.39 0.29 0.23\n"
Using system allows you to deal with the output when false is returned. However, if you are always going to open the file and process the results you are likely better off using %x[..] in the first place.

Thursday, June 01, 2006

Ruby on Rails and Database Standards

Most large organizations have internal database naming standards. These standards are often created in-house by the senior employees and can vary widely between organizations.

Ruby on Rails makes several assumptions about the database structure. Some examples include an id column named (gasp) id or a foreign key column named 'table name'_id. Some of the Ruby on Rails plug-ins also make assumptions about the structure of the database. Acts As State Machine expects the model table to have a state column where the state is stored as a string.

Obviously, conflicts can occur between corporate standards and Ruby on Rails' assumptions. Ruby on Rails provides you with the flexibility to support many of the (often bizarre) corporate standards. However, some database decisions are not easy to change. For example, Acts As State Machine (AASM), to my knowledge, does not allow you to abstract the states into a lookup table. And having a string repeated in a single table can cause some DBAs to have nightmares.
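For the simpler naming conflicts, Rails itself offers overrides. This is only a configuration sketch using the Rails 1.x class methods (the class, table, and column names here are made up):

```ruby
class Customer < ActiveRecord::Base
  set_table_name 'tbl_customer'   # corporate standard instead of the assumed 'customers'
  set_primary_key 'customer_no'   # instead of the assumed 'id'
end
```

For assumptions that have no such hook, the view-based approach described below is the fallback.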

A solution to this problem is to use views when working with an underlying database structure that you cannot fully control. To solve the AASM issue you can provide an updatable view that returns the state as a string. When an update needs to occur on the view you simply update the underlying table to use the correct key from the state lookup table.

As Ruby on Rails gains acceptance into larger enterprises I expect to see more scenarios where these types of issues appear.