My current team has two QA roles. The QA developers are responsible for creating an acceptance suite that ensures each story is completed without breaking previously QA'd stories. The project I'm working on is a web application, so Selenium was a fairly easy choice for the acceptance test suite.
Four months later, the Selenium suite is painfully large. It runs with each CI build; however, for performance reasons, developers do not run it before they check in. The result: the Selenium build is broken more often than not.
Fixing the Selenium build can be time consuming for the QA developers, which takes away from the time they have to spend doing exploratory testing. This would be acceptable if the Selenium tests were catching bugs. Unfortunately, we estimate that when the Selenium tests break, only 10% of the time is it because a bug has been introduced; the other 90% of the time the tests require updating because the functionality of the application has changed.
At the same time, the build is getting longer and longer (because more tests mean more browser launches and shutdowns); therefore, the feedback loop for the Selenium tests is losing even more value.
The writing on the wall reminded me of stories of other projects that eventually threw out their acceptance suites because maintaining them was a full-time job that didn't provide enough value.
Given that we have a long-running acceptance suite and a functional test suite that catches 90% of the bugs introduced, I concluded that we should look for a more beneficial approach to acceptance testing. After a bit of discussion with a few coworkers, we came up with the following idea: create a DSL for acceptance tests. The QA team writes acceptance tests using the DSL.

Evaluated in one context, the acceptance tests run as Rails functional or integration tests. This allows developers to run the entire acceptance suite before checking in, without the overhead of running Selenium. If the acceptance tests break when run as Rails functional tests, the QA team works with the developers to update them. Evaluated in another context, the same acceptance tests run as Selenium tests, so the CI build still runs the same tests through the browser. We do believe that testing through the browser is valuable, and this solution lets us continue doing so.
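To make the idea concrete, here is a minimal sketch of what such a dual-context DSL could look like. All of the class and method names below (`AcceptanceTest`, `FunctionalDriver`, `SeleniumDriver`, `visit`, `fill_in`, and so on) are hypothetical illustrations, not the project's actual API, and the drivers only log what they would do rather than calling real Rails or Selenium code. The point is the shape: a test is written once against an abstract vocabulary, and each context supplies a driver that interprets that vocabulary differently.

```ruby
# A test body is captured as a block and later evaluated against
# whichever driver the current context supplies.
class AcceptanceTest
  def self.define(&block)
    new(&block)
  end

  def initialize(&block)
    @block = block
  end

  # Evaluate the test body in the driver's context, so calls like
  # `visit` dispatch to the driver's implementation.
  def run(driver)
    driver.instance_eval(&@block)
  end
end

# Context 1: an in-process driver, standing in for Rails
# functional/integration test helpers (fast; run before check-in).
class FunctionalDriver
  attr_reader :log

  def initialize
    @log = []
  end

  def visit(path)           = @log << "GET #{path}"         # would call `get path`
  def fill_in(field, value) = @log << "param #{field}=#{value}"
  def click(button)         = @log << "POST #{button}"      # would submit the form
  def see?(text)            = @log << "assert body =~ #{text}"
end

# Context 2: a browser driver, standing in for the Selenium
# client (slow; run only on the CI build).
class SeleniumDriver
  attr_reader :log

  def initialize
    @log = []
  end

  def visit(path)           = @log << "browser.open #{path}"
  def fill_in(field, value) = @log << "browser.type #{field}, #{value}"
  def click(button)         = @log << "browser.click #{button}"
  def see?(text)            = @log << "browser.text? #{text}"
end

# The same acceptance test, written once:
login = AcceptanceTest.define do
  visit "/login"
  fill_in :username, "qa"
  click "Log in"
  see? "Welcome"
end

fast = FunctionalDriver.new
login.run(fast)   # developers run this context before checking in

slow = SeleniumDriver.new
login.run(slow)   # CI runs the identical test through the browser
```

The key design choice is that the test block never names a driver; `instance_eval` binds its vocabulary at run time, so the fast and slow suites cannot drift apart.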
I've begun building this solution on my current project and hope to extract something general for other projects moving forward. Look for updates in the future.