Wednesday, June 25, 2008

The Simplest Thing That Could Possibly Work

The simplest thing that could possibly work -- it's a phrase held dear by all agile developers, but it's also a common source of disagreement. I recently saw two different interpretations of the phrase that I considered worth sharing.

Often the emphasis of the phrase falls like so: the *simplest* thing that could possibly work. However, Ian Robinson was recently discussing the idea that the emphasis should be: the simplest thing that *could possibly* work. The difference is only emphasis, but it's a change worth considering.

I was also part of a discussion the other day where a developer was following a pattern correctly, but it was causing us to add a significant amount of additional code. This code would have supported several features that seemed nice to us, but our domain expert hadn't asked for any of those features. I would have been happy to go along if there were only a small amount of code or if the nice features were on our roadmap, but neither of those things was true. I began to advocate for breaking the pattern and doing only what we needed to do. Doing so allowed us to delete ~60% of the code we were currently working with.

At first the developer said "this is where we're going to disagree on the simplest thing that could possibly work." He argued that we were backing ourselves into a corner by not following the pattern; therefore, what I was suggesting couldn't possibly work. I took a few moments to consider his point of view. I concluded that he might be right, but deleting 60% of the code meant that the remaining 40% was so small that, if we did need to rewrite it in the future, the rewrite would actually be less effort than maintaining the architecture we had prematurely put in place.

I believe there are occasions where the simplest thing that could possibly work is writing 10 lines of code today that do what you need, and deleting them tomorrow in light of new requirements.

There are, of course, a few caveats. I had been pairing with the developer for about 4 hours and was able to accurately assess the difference between what we were doing and what we could be doing. Once we both saw how much effort it was to follow the pattern, it was easy for us to trim to the simpler version. I think it would have been more painful to speculate about the effort of implementing the pattern, and we may not have agreed on the outcome.

Also, painting yourself into a corner when you can jump over the paint is fine, but if someone else keeps painting on the other side, you may not be able to get out. The small amount of code that we wrote can be rewritten in about an hour, but if someone else were going to add to that code then we would probably have gone down the more complicated path. No one was building on our code, so we weren't as worried about the foundation we put in place.

Tuesday, June 24, 2008

Flex: Expert Developers Needed

The majority of my Flex posts have been pro-Flex, and they cover the biggest reasons I advocate for it. There are a few places, though, where I'd love experts to help evolve the platform:
  • The state of testing with Flex is good enough, but it has plenty of room to improve (mocking frameworks, test runners, test frameworks, better stubbing).
  • The language has dynamic capabilities, but seems to be evolving into a statically typed language.
  • The majority of examples only show Flex with ActionScript in Script tags mixed into the MXML components (which is not how you need to code). I write my Flex with unobtrusive ActionScript, and it's quite easy -- see the sketch just after this list.
  • The MVC framework feels a bit too full of ceremony. I'd love to see some competition resulting in easier to use components.
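
As a quick illustration of the unobtrusive style mentioned above, here's a minimal sketch -- the file names and the submit handler are hypothetical, not from a real project. The MXML declares only components and pulls its ActionScript from a separate file via the Script tag's source attribute.

<?xml version="1.0" encoding="utf-8"?>
<!-- LoginForm.mxml: component declarations only, no inline ActionScript -->
<mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script source="LoginForm.as"/>
    <mx:TextInput id="userName"/>
    <mx:Button label="Log in" click="submit()"/>
</mx:VBox>

// LoginForm.as: the behavior, kept in its own file
private function submit():void {
    // hypothetical handler -- real logic would live here
    trace("logging in as " + userName.text);
}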
Unfortunately, most of the developers I talk to these days are not excited about Flex adoption. I'll be the first to tell you it's not perfect, but I'm surprised by the hesitation level. I don't consider myself an early adopter, and I don't tolerate much pain from any technology. But I don't think Flex fits in the category of bleeding edge or excessive yak shaving. There are gotchas, but what technology doesn't have them? The most depressing aspect of the expert hesitation is that the experts won't get the opportunity to help shape the language/platform.

I already find myself so effective using Flex that I would reach for it any time I'd otherwise consider Javascript. But the world would be a much better place if the experts continued to improve upon an already solid base.

Monday, June 23, 2008

Flex: Anonymous Classes as Test Stubs

In my previous post, Flex: The State of Testing, I stated that I use anonymous objects as stubs. This entry will show how to easily create stubs for methods and properties.

If you simply need a stub that has a property, the following code is all you'll need.

{state: "ready"}

The above code creates an object that responds with "ready" to the message state. Adding methods is also easy if you remember that you can create properties that store anonymous functions (just like Javascript). If you need the execute method to return true, the following stub should do the trick.

{state: "ready", execute: function(){ return true }}

That's really it. You can define methods and properties on anonymous objects as you please.
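
To put a stub like that in context, here's a rough sketch of how it might be handed to code under test. DocumentPresenter and its readyLabel method are made up for the example; the only real requirement is that the parameter receiving the stub is typed as Object or * so the anonymous object is accepted.

public function testReadyLabelUsesStubbedState():void {
    var stub:* = {state: "ready", execute: function():Boolean { return true; }};
    // readyLabel is assumed to read collaborator.state and return it
    assertEquals("ready", new DocumentPresenter().readyLabel(stub));
}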

Is it beautiful? Not really. I've considered a few other options that would allow you to define methods without the anonymous function, but it's never been so annoying that it was worth spending the time to get the implementation correct. I suspect anyone who put their mind to it could come up with a nice implementation in a short amount of time.

Flex: The State of Testing

Given my focus on testing, it's not surprising that the most common question I get about Flex is: Is it testable? Of course, the answer is yes; otherwise, I wouldn't be using it.

Out of the box, it's very similar to early JUnit. Like all XUnit ports, it's based on creating test classes and defining methods that are tests. There's setup, teardown, and all the usual features you'd expect.

There's a FlexUnit swc (library) that assists in creating a test swf file, which runs all the tests in the browser. There are a few blog entries that give details on how to generate an XML file from the test swf, so breaking a build or reporting results is possible. I haven't bothered to go that far, but others have successfully blazed that trail.
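
For reference, the typical hand-written runner for that test swf looks roughly like the sketch below. The class and method names are from memory of the standard FlexUnit 0.9 examples and may differ in your version, and SmartListCriteriaTest stands in for one of your own test classes; this suite-listing boilerplate is the kind of file the generation described later in this post produces automatically.

<?xml version="1.0" encoding="utf-8"?>
<!-- TestRunner.mxml: a minimal FlexUnit test application (a sketch, not the author's actual runner) -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:flexui="flexunit.flexui.*"
                creationComplete="onCreationComplete()">
    <mx:Script>
        <![CDATA[
            import flexunit.framework.TestSuite;

            private function onCreationComplete():void {
                testRunner.test = createSuite();
                testRunner.startTest();
            }

            private function createSuite():TestSuite {
                var suite:TestSuite = new TestSuite();
                suite.addTestSuite(SmartListCriteriaTest); // import omitted; one test case class per line
                return suite;
            }
        ]]>
    </mx:Script>
    <flexui:TestRunnerBase id="testRunner" width="100%" height="100%"/>
</mx:Application>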

In general, testing Flex is not great, but I rarely find myself concerned or unhappy. Here are a few reasons why:
  • There's little logic in my views anyway; the current app is a Flex front end to Rails resources.
  • Anything that's annoying I move out of my way by generating it with some Ruby scripts (for example, creating the file that lists all the test cases) -- more on this later.
  • Anonymous objects in Flex have been 'good enough' stubs.
If you only want a high-level opinion -- it's good enough, but not ideal or groundbreaking.

I'm not a fan of manually adding tests to my test suite, but, just like Paul Gross, we dynamically create ours, so it's not a problem. We didn't stop there, though: we also generate the package and class name; therefore, our test classes stay pretty clean. The code below is an example of an entire test file (the kind we actually work with).

public function testLimitSortOrderCriteriaXml():void {
    var smartListCriteria:* = new SmartListCriteria();
    smartListCriteria.addLimitCriteria({order: "an order"});
    assertEquals("an order", smartListCriteria.toXml().sort.order);
}

public function testLimitSortFieldCriteriaXml():void {
    var smartListCriteria:* = new SmartListCriteria();
    smartListCriteria.addLimitCriteria({field: "a field"});
    assertEquals("a field", smartListCriteria.toXml().sort.field);
}

Those methods wouldn't do much in isolation, but if you generate the surrounding (tedious) code, they work quite well. Here's the rake task that I use.

desc "Generate test files"
task :generate_tests do
  tests = Dir[File.dirname(__FILE__) + "/../../test/flex/tests/**/*Test.as"].each do |file_path|
    file_path = File.expand_path(file_path)
    package = File.basename(File.dirname(file_path))
    emit_file_path = (File.dirname(file_path) + "/" + File.basename(file_path, ".as") + "Emit.as").gsub(/tests/, "generated_tests")
    File.open(emit_file_path, "w") do |file|
      file << <<-eot
package generated_tests.#{package} {

  import flexunit.framework.TestCase;
  import flexunit.framework.TestSuite;
  import uk.co.company.utils.*;
  import uk.co.company.smartLists.*;
  import mx.controls.*;

  public class #{File.basename(emit_file_path, ".as")} extends TestCase {

#{File.readlines(file_path)}
  }
}
      eot
    end
  end
end

As you can see, I generate test files, which are later used as the source for the test swf. It's a few extra steps, but I never see those steps; I just write my test methods and run rake test:flex when I want to execute the tests. The one catch is that I can't run individual tests, which is terrible.

There are no mocking frameworks that I've seen, so that's kind of a bummer. In practice I haven't found myself reaching for a mock anyway, though that will probably depend on your style. Again, I don't have much behavior in my views, so I don't need rich testing tools. Of course, I'd love to have them if they were available, but I don't find their absence to be a deal breaker. I do reach for stubs fairly often, but anonymous objects seem to work fine for that scenario.

There is one huge win that I'll conclude with. When using HTML & Javascript you have to test in the browser to ensure that everything works. This isn't true of Flex, since there aren't browser issues. So I can test my application using FlexUnit and be confident that it just works. Removing the need for an (often slow and fragile) Selenium suite is almost enough to make me ditch HTML & Javascript forever.

Monday, June 16, 2008

Immaturity of Developer Testing

The ThoughtWorks UK AwayDay was last Saturday. You could over-simplify it as an internal conference with some focus on technology and extra emphasis on fun. At the last minute one of the presenters cancelled, so George Malamidis, Danilo Sato, and I put together a quick session -- Immaturity of Developer Testing.

It's no secret that I'm passionate about testing. The same is true of Danilo and George, and several of our colleagues. We thought it would be fun to get everyone in a room, argue a bit about testing, and then bring it all together by pointing out that answers are contextual and the current solutions aren't quite as mature as they are often portrayed. To encourage everyone to speak up and increase the level of honesty, we also brought a full bottle of scotch.

We put together 5 sections of content, but we only managed to make it through the first section in our time slot. I'll probably post the other 4 sections in subsequent blog posts, but this entry will focus on the high level topics from the talk and the ideas presented by the audience.

Everyone largely agreed that tests are generally treated as second-class citizens. We also noted that test technical debt is rarely addressed as diligently as application technical debt. In addition, problems with tests are often handled by creating band-aids, such as a test case subclass of your own that hides an underlying problem, testing frameworks that run tests in parallel, etc. To be clear, running tests in parallel is a good thing. However, if you have a long-running build because of underlying issues and you solve it by running the tests in parallel... that's a band-aid, not a solution. The poorly written tests may take 10 minutes right now. If you run the tests in parallel it might take 2 minutes today, but by the time you are back to 10 minutes you'll have roughly 5 times as many problematic tests. That's not a good position to be in. Don't hide problems with abstractions or hardware; tests are as important as application code.

Another (slightly controversial) topic was the goal of testing. George likes the goal of confidence. I love the confidence that tests give me, but I prefer to focus on Return On Investment (ROI). I think George and I agree in principle, but articulate it differently. We both think that as an industry we've lost a bit of focus. One hundred percent test coverage isn't a valuable goal. Instead it's important to test the code that provides the most business value. Test code must be maintained; therefore, you can't always afford to test everything. Even if you could, no automated test suite can ever replace exploratory testing. Often there are tests that are so problematic that it's not valuable to automate them.

The talk was built on the idea that context is king when talking about testing, but it quickly devolved into people advocating for their favorite frameworks or patterns. I ended up taking a side also, in an attempt to show that it's not as simple as right and wrong. I knew the point of view that some of the audience was taking, but I didn't get the impression that they were accepting the other point of view. We probably spent too much time on a few details; of course, the scotch probably had something to do with that.

I wish we could have gotten back on track, but we ended up running out of time. After the talk several people said they enjoyed it quite a bit, and a few people said they began to see the opposing points of view. I think it was a good thing overall, but it's also clear to me that some people still think there are absolute correct and incorrect answers... which is a shame.

Next up, pro and con lists for browser based testing tools, XUnit, anonymous tests, behavior driven development, and synthesized testing.

Thursday, June 12, 2008

Developer Testing and the Importance of Context

How is it we keep falling for the same trick? Why is it so hard to remember: there is no silver bullet.

I've spent a significant amount of my time over the past 3 years focusing on testing, and I've learned several lessons. Unfortunately, those lessons don't represent best practices or rules for better testing; they represent patterns for creating a readable, reliable, and performant test suite -- if and only if you work on the kind of projects I work on and you use tests the way I do.

And that's the killer. Context is still king.

I generally work on teams of 6-16 developers. More often than not I fix tests that I've never seen before. I never read test files to understand class responsibilities. I never generate documentation based on my tests. I do my best to write perfect tests so that my application runs perfectly, but the best tests I write, I never look at again. My tests have a thankless task: guide my system design and ensure that I don't introduce regression bugs, and that's it. I run the entire test suite every minute, on average.

I expect some of my readers work the way that I do, and in environments similar to mine; however, the vast majority probably don't. That means that a small minority can blindly follow my suggestions, but the majority of my readers will need to understand why I prefer those patterns and if they apply to their work environment.

Perhaps an example is appropriate. Which of these tests would you prefer to find when the build fails because of it?

test "when attribute is not nil or empty, then valid is true" do
  validation = Validatable::ValidatesPresenceOf.new stub, :name
  assert_equal true, validation.valid?(stub(:name=>"book"))
end

test "when attribute is not nil or empty, then valid is true" do
  assert_equal true, @validation.valid?(@stub)
end

It's actually a trick question; the context is too important to ignore when composing an answer. If this test lived in a project where I was the sole author and expected maintainer, then the latter is probably the better solution, because I would know where to find the creation of @validation. However, on a large team where it's more likely that I'll never see this test until it's broken, there's a great argument for keeping all the necessary logic within the test itself.

The same test could be written with or without a test name.

test "when attribute is not nil or empty, then valid is true" do
  validation = Validatable::ValidatesPresenceOf.new stub, :name
  assert_equal true, validation.valid?(stub(:name=>"book"))
end

expect Validatable::ValidatesPresenceOf.new(stub, :name).to.be.valid?(stub(:name=>"book"))

Again, the context is critical in deciding which approach to use on your project. The second test is one line, but it provides very little understanding of why it exists. You can resolve this issue by adding a comment or (what I would prefer) by changing the class to have a more fluent interface that explains why as well as how. However, both of these solutions make it hard to easily scan the file for an understanding of ValidatesPresenceOf or generate documentation based on the tests. Are those things important to you?

Dan North and I agree more than we disagree, but we have very different styles of testing. We also use tests in different ways, but we both have the same goal in mind -- use tests to create reliable, readable, and performant software. I believe striving for reliable, readable, and performant applications and tests is a good goal to have, and there are several ways to get there. Your best bet is to understand the patterns that work for me, understand the patterns that work for Dan, and understand the patterns that work for anyone else who is passionate about developer testing. You'll find that some of their approaches are in direct conflict. This isn't because one pattern is superior to another in isolation; it's because one pattern is superior to another in context.

There's also another factor worth mentioning. Innovation around testing is still happening at a rapid pace. It seems as though there's a new testing or mocking framework appearing on a weekly basis. I suspect this is probably representative of the future of testing -- testing frameworks targeted at specific contexts. As of right now you may write your unit tests and functional tests using the same framework; however, in the future you may prefer Synthesized Testing when focusing on developer tests and RSpec Story Runner for acceptance tests. It's also possible that the newest features of XUnit.net, JMock or Mockito will give you a better way to model your domain. These, and the other testing frameworks, are evolving at a rapid pace because developer testing patterns are still immature and are being adapted to the contexts in which they are used. In the future you may not use the same tool for several different types of tasks -- and that's probably a good thing.

It all comes back to context. The best advice anyone can give you is to consider yours and take the patterns that should help the most... and then adapt as your context changes.

Tuesday, June 10, 2008

Flex: Objects in Views

Imagine the following requirement:
The application needs a combo box that has the following options:
  • delete
  • mark as spam
  • bounce
When delete is selected, the current email (the one being viewed) needs to be moved from the inbox to the trash.

When mark as spam is selected, the current email should be tagged as spam.

When bounce is selected, the server should send a reject message to the server where the email originated.
Assume you are using the web. The traditional approach is to create a drop-down with unique values that can be used on the server side (generally in a case statement). The unique values are strings, which represent keys, which are used to determine the correct course of action. This approach works and is familiar to most web developers, but it's not the most elegant solution.

Flex offers you a better solution. The Flex ComboBox allows you to set the dataProvider to an array of anonymous objects which can contain pretty much whatever you need. Even if you don't know Flex and ActionScript, the following code should still be readable, and interesting.

function init() {
    comboBox.dataProvider = [{label: "delete", command: DeleteCommand},
                             {label: "mark as spam", command: MarkAsSpamCommand},
                             {label: "bounce", command: BounceCommand}];
    // the listener accepts the event argument the framework passes to it
    comboBox.addEventListener("change", function(event:*):void {
        new comboBox.selectedItem.command().execute();
    });
}

Alternatively, you can skip the command classes entirely and store plain functions on the dataProvider items:

function init() {
    // 'delete' is a reserved word in ActionScript, so the handler is named deleteEmail
    comboBox.dataProvider = [{label: "delete", execute: deleteEmail},
                             {label: "mark as spam", execute: markAsSpam},
                             {label: "bounce", execute: bounce}];
    comboBox.addEventListener("change", function(event:*):void {
        comboBox.selectedItem.execute();
    });
}

function deleteEmail():void {
    // delete implementation
}

function markAsSpam():void {
    // mark as spam implementation
}

function bounce():void {
    // bounce implementation
}

This is one of the things I really like about Flex. Since I'm not tied to HTML and strings, I can put whatever I like in the view. The first example keeps all the logic out of the view and simply creates a new command and executes it immediately. It's nice and clean without worrying about converting from strings to classes. The second solution relies on functions to handle processing. In this case you could be doing something as simple as showing or hiding components in the view (not something you'd need a class for).

Either way, I'm working with first class concepts, instead of string representations of first class concepts. The result is cleaner code that's easier to work with.

Tuesday, June 03, 2008

ActionScript: The difference between Object and * (an asterisk)

If you've ever wondered what the difference is between the following two statements, you aren't alone.

var result:Object = sendRequest();
var result:* = sendRequest();

It's fairly hard to Google for the explanation, but Subhash Chandra Gupta recently pointed me to a good example.

The full article can be found in the Flex 3 Help.

The difference is that * (an asterisk) signifies that the variable is untyped and can hold any type of value, while typing something as Object will require you to cast if you are compiling in strict mode. Here's a greatly simplified example to demonstrate.

function returnOne():Object {
    return 1;
}
var one:Number = returnOne(); // compiler error in strict mode

function returnTwo():* {
    return 2;
}
var two:Number = returnTwo(); // no compiler error in strict mode
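
And for completeness, here's what the cast mentioned above looks like if you stick with Object -- returnThree is just an illustrative name:

function returnThree():Object {
    return 3;
}

// an explicit conversion satisfies the strict mode compiler
var three:Number = Number(returnThree());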