Thursday, July 03, 2008

The Immaturity of In Browser Testing

Designing applications that behave the same in several browsers is a miserable job. Unfortunately, it's often a business requirement. If your application needs to behave flawlessly in multiple browsers, In Browser testing is probably a necessary evil.

I tend to use Selenium for In Browser testing; therefore, this entry is written from the perspective of a Selenium user.

Selenium is terrible for several reasons.
  • There are several ways to drive Selenium, and none of them are particularly mature. Should you use SeleniumRC, Selenium on Rails, the in browser recorder, or some other half-baked solution? I don't have the answer. I've used all three of the named solutions and found them all to be problematic. Yes, the problems can be worked around, but they are there, and solving them costs time.
  • There are several languages for writing tests. Should you use Java, Ruby, Python, Perl, etc.? I have no idea. Having the choice might seem like a good thing -- until the person who was writing the majority of the tests leaves and the next person to take on the Selenium suite decides he wants to use another language. The languages are also fairly clunky. I can't help wondering if a better solution would have been to create a DSL specific to the in browser testing space.
  • I could have written this entire blog entry before most of the Selenium suites in the world would finish. In Browser testing is almost unacceptably slow. Selenium Grid sets out to solve this problem. So you should use that, right? Not exactly: it's not worth the effort unless you have a large suite, and it requires you to go down the SeleniumRC path, which may or may not be the right choice for you.
  • Selenium suites quickly reach the size where the value they provide is no longer proportionate to the effort required to maintain them. Thinking about throwing your suite away? If you do, you'll be joining a very large club of developers who decided to dump their Selenium suites. It is very hard to design a large Selenium test suite that provides value. I've heard of several suites that were thrown out and only one large suite that its team believed provided value. I guess there's hope: one team managed to get it right.
  • Browsers are buggy. While Selenium itself might justify its value, spending a week figuring out what the latest bug in IE is starts to call into question the value of the Selenium suite. Of course, you can stop testing in IE, since only IE breaks the build, but if you need to deploy to an environment where users will be on IE... you're in a bad spot.
  • Selenium is great for verifying that everything works as expected, but when a test breaks you get little information about what the problem is. Since the tests run at such a high level, it's unlikely that you'll be able to easily identify the majority of defects based on the broken Selenium test alone. The broken test is a great tip that something is wrong, but you'll likely need to do some digging to figure out exactly what.
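The DSL idea from the languages bullet above could look something like this minimal sketch. All names here are hypothetical -- it merely compiles fluent steps into Selenese-style (command, target, value) tuples rather than driving a real browser:

```python
# Minimal sketch of a hypothetical in-browser-testing DSL.
# It accumulates Selenese-style command tuples; a separate runner
# (not shown) would feed them to a browser.

class BrowserTest:
    def __init__(self, name):
        self.name = name
        self.steps = []          # accumulated (command, target, value) tuples

    def open(self, url):
        self.steps.append(("open", url, ""))
        return self              # returning self enables fluent chaining

    def type(self, locator, text):
        self.steps.append(("type", locator, text))
        return self

    def click(self, locator):
        self.steps.append(("click", locator, ""))
        return self

    def assert_text(self, locator, expected):
        self.steps.append(("assertText", locator, expected))
        return self

test = (BrowserTest("login happy path")
        .open("/login")
        .type("id=username", "jay")
        .type("id=password", "secret")
        .click("id=submit")
        .assert_text("id=welcome", "Hello, jay"))

print(len(test.steps))  # 5 steps recorded
```

The readable chain is the point: a tester can follow "open, type, click, assert" without knowing the driver underneath.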
Of course, there is another point of view. There are several reasons that Selenium is a good tool.
  • The only real way to know that your application runs in all browsers is to test it in all browsers. Selenium makes it possible to run the same tests regardless of browser.
  • The only way to verify that all the pieces of your application integrate perfectly is to test against the entire application stack. Selenium provides a great tool for simulating user experience.
  • Once you make a decision on what version to use and what language to use, writing tests is easy. Getting started with Selenium takes very little time, including time for learning.
  • For less technical team members, the Selenium recorder can be a great tool for creating tests.
  • Selenium also represents a tool that is helpful for both developers and testers.
The trick to using Selenium is knowing who it's for, what it's for, when to use it, and why it's useful. For those who prefer a concise description -- Selenium is best used by developers or testers when testing the most valuable (to the business) happy paths of a JavaScript-heavy web application that must function in several browsers.

When you begin to deviate from the above context, things begin to get problematic.

Selenium is undoubtedly a tool that can be used by both developers and testers. The various ways to drive the tool ensure that both less technical users and very technical users have options. Selenium is best used for happy path testing because large suites can be both hard to maintain and prohibitively slow. Selenium is an appropriate choice for JavaScript-heavy applications since the tests run directly in the browser, verifying the expected behavior. Selenium is also helpful for mitigating cross-browser compatibility risks. The write once, run in several browsers model is a powerful one. You should choose Selenium when it can improve your confidence that the highest business value features are working correctly.

Despite its problems, it would be misleading not to mention that Selenium probably is the best available solution for In Browser testing -- but there is surely room for improvement.


  1. Hey Jay,

    I can understand that pain, because we have the same issue in .Net.

    Should I go with Selenium? Watin? Watir?

    All of them feature most of these problems you mentioned.

    That's why I started a project that aims at being an easier, fluent interface for doing browser-driven acceptance testing.

    If you'd like to check it out, just go to and check the Stormwind.Accuracy project.

    Hope you like it! It's for .net, but the idea behind is definitely platform agnostic.

    Bernardo Heynemann
    ThoughtWorks UK

  2. You mentioned going the DSL route for Selenium. This is exactly what we did at my last gig for a commercial property insurance app. It worked out really well. We made a simple declarative DSL (in Java, which turned out nice because of good auto-completion in the IDE) and wrote all our tests in that language.

    The DSL turned out to be really key for us because not only were our tests readable, but the DSL provided us a layer of abstraction over SeleniumRC. We actually implemented two targets for our DSL: Selenium RC and Selenium HTML. And we were toying with implementing other targets like HTMLUnit.

    I could go on and on... but my point is having a good DSL really was what helped power the success of our Selenium test suite implementation.
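The multiple-target approach this commenter describes might be sketched like this. Their real implementation was in Java; this is a hypothetical Python illustration in which each "target" renders the same abstract steps differently:

```python
# Hypothetical sketch of a DSL with pluggable back ends ("targets").
# The same abstract (command, target, value) steps can be rendered as
# Selenium HTML tables or handed to a driver; all names are invented.

class SeleneseHtmlTarget:
    """Renders abstract steps as Selenium HTML-table rows, the format
    the in-browser runner consumes."""
    def render(self, steps):
        return "\n".join(
            "<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % step
            for step in steps)

class RecordingTarget:
    """Stand-in for a SeleniumRC-style target: records what it would
    execute instead of talking to a real server."""
    def __init__(self):
        self.executed = []

    def render(self, steps):
        self.executed.extend(steps)
        return "ran %d steps" % len(self.executed)

steps = [("open", "/login", ""), ("click", "id=submit", "")]
html = SeleneseHtmlTarget().render(steps)
print(html.count("<tr>"))               # 2 -- one table row per step
print(RecordingTarget().render(steps))  # ran 2 steps
```

Because tests are written against the abstract steps, swapping the execution back end (or adding an HTMLUnit-style one) doesn't touch the test code.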

  3. Marcus Tandy

    We use Concordion to express the test cases and WebDriver to provide the scripting DSL for Selenium. Together they work really well.

  4. We've found that our Selenium scripts fail roughly 16 percent of the time, seemingly at random (probably due to timing issues).

    One of our developers saw it randomly fail, wondered what the heck had happened, and scripted a set of 500 identical tests to run; 86 of them failed. I don't care whether it's 0 or 500 successes, I just want consistency.
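A quick back-of-envelope calculation on those numbers (86 failures out of 500 runs) shows why even modest per-test flakiness makes a whole suite unreliable:

```python
# Back-of-envelope arithmetic on the flakiness numbers above.
failures, runs = 86, 500
fail_rate = failures / runs          # observed per-run failure rate
print(round(fail_rate, 3))           # 0.172 -- close to the "roughly 16%"

# If each of n independent tests flakes at this rate, the chance the
# whole suite comes up green is (1 - fail_rate) ** n.
suite_green = (1 - fail_rate) ** 20
print(round(suite_green, 4))         # 0.0229 -- a 20-test suite is almost never green
```

The independence assumption is mine, and timing-related failures are rarely independent in practice, but the shape of the problem holds: small per-test flake rates compound quickly across a suite.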


  5. I think the way to go is to do the unit testing on the server. Why should UI code be so difficult to test? MVC was created to enable easy unit testing of business logic, because business logic wasn't always easy to test. If we rewrite UI code to be easier to test, then we can eliminate the browser altogether. It's a high ideal, but I think it's the best way to go in the long term. You can find more of my thoughts on the future of UI testing here:

  6. I found WatiN easier to use for implementing functional web testing on the .NET platform.

