Tuesday, December 29, 2009

Presenting Lessons I've Learned

Up until a few years ago I'd never done any type of public speaking. I'm outspoken among friends, but generally shy around strangers. However, some opportunities presented themselves and I decided to take the leap into the world of presenting. I thought it might be helpful to document some lessons I've learned. If you decide to take the leap into presenting, I hope these ideas make your journey a bit easier than mine.
  • Get Help - I took a class at Speakeasy (NYC). The class that I took forced me to stand in front of a small group of strangers, present on any topic I liked, and have the entire thing recorded. I learned about things that I was doing wrong, but I also got to see what mistakes the other people in the class were making. It was the single largest thing that improved my public speaking.
  • Practice - the last public talk I gave was at QCon SF 2008. In the weeks leading up to that talk I rehearsed the presentation at least 25 times. There are so many reasons a presentation can go wrong; you'll want to make sure you have the content down cold. You'll lose the audience's trust if at any time you look as though you don't have the content down 100%.
  • Don't Script Jokes - at QCon London 2008, I was feeling nervous and chatting with Jim Webber. Jokingly, I said "maybe I'll open with: My name is Jay Fields, and yes, I'm American, but I guess you already knew that from my weight problem". Jim thought it was "brilliant". So I opened with the joke. But, it didn't come out smooth or well-timed; it came out dry and forced. No one laughed, and I got to start out my presentation with awkward silence. If you're naturally funny, it's fine to improv a joke in the middle of the presentation; otherwise, I suggest sticking to the content.
  • Know Your Audience - Audiences will react to your content in different ways based on several factors, and it's important to consider those factors when putting together your content. For example, a local Rails User Group may laugh at your joke about DBAs. However, at QCon there are likely to be DBAs, CTOs, and other audience members who may not find your DBA humor amusing. It's also valuable to consider language barriers. In Europe, where my audiences were often very mixed, jokes never seemed to amuse the audience in my talks or any that I attended. Maybe the humor didn't translate well, or maybe I was moving too quickly. Either way, I made a mental note to stick to the content when my audience was likely to speak English as a second language.
  • Look Natural - At Speakeasy I learned that "arms at your sides" is how to look natural to the audience. It's extremely uncomfortable and feels unnatural to me, but I have the recorded video to show that I'm wrong. The problem is, if you are always waving your arms around, or hiding them behind your back, you distract the audience. That's not to say you should never move your arms, but do it less often. I tend to point or gesture way too often, so whenever I notice myself doing it, I put my arms down and focus on keeping them there for a bit.
  • Face Forward - When you turn your back on the audience, you lose them. Especially if you are talking with your back to them. Instead, take small, backward steps and always face the entire audience.
  • Pictures - pictures are really the way to go. If you put text on the screen, people will read the text and tune you out. Some presenters are amazing enough that they get away with no slides at all. I don't suggest starting there. Technical conference-goers are used to slides. However, you can stick to pictures for your slides. If you have pictures that support your ideas, you can have slides while still forcing the audience to pay attention to you for content delivery.
  • Short Text - If you must use text, don't use sentences, paragraphs, definitions, or anything else lengthy. A few words, as few as possible, is the way to go. If the audience is reading, they aren't listening to you.
  • No Bullets - If you must use text, avoid bullets. Instead, show one line at a time and hide or shade the other lines.
  • Code Is Acceptable - Some ideas are more easily expressed with a snippet of code. Don't avoid code when code is the best choice. Instead, when you bring the code up, give the audience 30 seconds (or whatever is appropriate) to digest the code, and then begin to discuss it. Just remember, as long as the code is on the screen, there will be people reading, and not paying attention to you.
  • No Distractions - Distractions can ruin a presentation. Excessive transitions, morphing text, blinking text, etc. all take away from the message that you are trying to express. I remember seeing a Flex presentation at RailsConf where the presenters showed their Flex Twitter client. It was really pretty, and kept popping up tweets from conference attendees. Putting it up was awesome; leaving it up was the worst possible choice. I can't remember anything they said after they put up the application. I tuned them out for the remainder of the talk, and read all the tweets that kept popping up. I didn't mean to, but I was drawn to the shiny objects. After the talk I asked a few friends if the presentation was any good. They had no idea; we were all entranced by the Twitter client.
  • Start Small, Build Up - My wife is the first to hear (suffer through) any presentation that I put together. I practice it a few times, then present it to her, then practice a few more times, then move on to a slightly larger venue. A User Group or some peers at work are good audiences for a new talk. After you present to 10-20 people, you should feel pretty confident about giving the same presentation to 100-200 people.
  • Be Original - If you use a template provided with PowerPoint or Keynote, it's likely that someone else at the conference will be using the same template.
  • Be Yourself - In my presentations I almost always swear and make some kind of sarcastic remark. That's how I act among friends, and when I act like myself in presentations, people tend to accept that what I'm saying is what I believe, not what I'm trying to pitch.
  • Record Coding - Don't live code unless you've practiced it 100 times, know how to deal with all possible problems, and are Justin Gehtland. Okay, I'm (sort-of) kidding about having to be Justin. However, the reality is that live coding is really, really hard. Often, you can express the same thing with a recorded coding session, and there's little to no chance that things will go wrong. Justin has acting, teaching, and presentation training. He's also ridiculously smart. Those things combined mean he can carry a live coding session even when things go very wrong. If you have the same background, go for it. Otherwise, stick with the recorded coding session, which is much more likely to succeed.
  • Questions - Pause for questions a few times during your presentation. It allows you to add color to ideas that you may not have clearly expressed. It also gives the audience the chance to see that you really know what you are talking about. For me, it also helps to relax and have a conversation, instead of simply lecturing.
  • Breathe - You know your content, the audience doesn't. Chances are you are going too fast. The simple rule of thumb is, the audience is always at least 5 seconds behind whatever you are saying. If you take the time to breathe or take a sip of water, you give them the opportunity to catch up.
  • Relax - The best presentation ratings I've ever gotten were when I gave a presentation entirely hungover. I thought I was going to be terrible. But, I was too hungover to be nervous, and I gave a straightforward, natural presentation on the ideas. I'm not advocating that you get drunk the night before your presentation, but do take steps to relax, if you know how. For me, I like to have friends in the audience, a drink about an hour before the presentation and a drink right after. It's my ritual and it helps ensure that I'm as relaxed as possible.
Almost everything I learned I got from Neal Ford, Jim Webber, and Dan North. Thanks for the ideas, gentlemen. If I left anything out, it would be cool to see additional lessons that you've learned throughout the years.

Update:
Steve Vinoski said...
I'd add the following:
  • Always repeat any questions asked of you before answering them. This is important not only for the audience in front of you, but also especially for any audience viewing a recorded session at a later time.
  • Don't be afraid to answer "I don't know" if someone asks you a question you don't know the answer to. The audience would rather hear honesty than some made-up BS. Presumably you possess specialized knowledge or wisdom, otherwise you wouldn't be up there speaking, but that doesn't mean you know everything, and frankly the audience doesn't expect you to.
  • Ask the audience questions. This helps keep them engaged. Remember, your talk is really more about them than it is about you, so gauging the audience and adjusting accordingly can help maximize the value of your message to them.
  • Should you encounter an audience member who wants to challenge you and argue with you, just politely decline to engage by saying you'll be happy to discuss the issue with them after the talk. Back-and-forth arguments with an audience member lose and annoy most of the rest of the audience almost immediately, and since you're probably not repeating every statement the arguer makes, anyone watching a recording of the argument hears only half of it and you lose them immediately.
  • Above all, respect your audience and they'll respect you. Except in extremely unusual circumstances, they want you to succeed because that way they'll get the most out of your talk. If you're a new or nervous speaker, keeping that in mind can help put you at ease.

Tuesday, November 03, 2009

Polyglot World

At QCon 2008 Steve Vinoski told me he uses at least 4 languages pretty much every day. At the time I thought 4 seemed like a lot of languages to use. Are we ready for a world where programmers need to know so many languages?

If you think about building a web application, you probably need to know a server-side language, HTML, Javascript, CSS, SQL, etc. Of course, there's no easy way to draw a line and say that those are, or are not, all languages. I'm not sure the distinction matters; what does matter is having effective tools to get your job done. Maybe 4 languages isn't surprising.

I've worked on projects where C# and SQL were all we used; however, I think those days are coming to an end.

As an example, let's start with books. I'm currently reviewing Martin Fowler's upcoming DSL book. For his examples, Martin chose Java and C# where possible, and Ruby where a dynamic language was necessary. I think his language choices are pragmatic and wise. It's easy to agree and not give the idea another thought. However, his choice is a departure from his traditional usage of Java and C# exclusively.

Martin didn't add a new language because he changed his mind about dynamic languages, he added a new language because our industry changed its mind about dynamic languages. Dynamic languages are now an acceptable choice for many projects, and an appropriate language addition for authors looking to appeal to the widest audience.

If Martin ever publishes another version of Refactoring, I expect it will have examples in several different languages. Refactoring: Ruby Edition contains several refactorings that are impossible to do in Java and C#. I have no idea what languages will be popular at the time of another Refactoring release; however, I think we will see a diverse set of languages and refactorings that target specific languages.

However, the polyglot movement can be found in the industry even more so than in books. Twitter is known to use both Ruby and Scala. Google uses C++, Java, Python, and Javascript. At DRW Trading we use any language that will help us get to market faster. Any given day, working solely on my project, I could be working with Java, C#, Clojure, Ruby or other even more interesting options. Steve's "4 language" statement doesn't surprise me at all these days.

You can also look at the multiple languages on the CLR and the JVM as further proof that as an industry we are willing to look to multiple languages to solve individual problems. The interoperability allows us to use the languages and paradigms we prefer for specific tasks, while working toward a greater solution. From personal experience, using Java and Clojure together has been a big productivity boost on several occasions.

As I already mentioned, in the web world you often need to master several languages to be effective. As a young developer, learning each language is probably a daunting task. However, I think having several targeted languages is part of the reason that web application development has been so successful. Targeted languages often give you the power and simplicity that allow you to get your job done without thinking about overly complex abstractions.

People often forget that "one language to rule them all" also implies that the language will attempt to normalize everything. However, there are obviously efficiency gains to be had if you choose languages that play to the strengths of a paradigm or platform. The web proves this by having languages that target and solve specific problems. I wouldn't want to write CSS or Javascript in another language, even if it allowed me to solve all my problems with only 1 language. I prefer languages that increase my effectiveness to languages that reduce my learning requirements.

Ruby took a bold and productive step by introducing features that worked exclusively on Linux. As a result, several tasks are very simple to do on Linux and impossible on Windows. I applaud the choice. It makes my life easier when I'm on Linux, and I know I have to find another solution when I'm working on Windows. I'll take those options over some poor abstraction that feels painful on both platforms.

And, that's where I see our industry moving: toward languages targeted at specific problems and run on specific platforms. The languages will be easier to create and use. There's been a movement in our industry towards simplicity for some time. This is just another logical step in that direction.

The days of one language for all problems and platforms are dwindling, which is nice. I prefer the polyglot world.

Wednesday, October 21, 2009

Refactoring: Ruby Edition available.

Refactoring: Ruby Edition is available (and In Stock) on amazon.com.



Sorry it took so long; I hope it is worth the wait.

Thursday, September 10, 2009

Pressure, Expressed in Initial Development Time

def Initial Development Time: In software development projects, initial development time (IDT) is the length of time it takes from the project's first line of code until the business derives notable value from it.
I've done plenty of projects in my career, some with an IDT of a few months and some with an IDT of a year or more. Based on those projects I've decided that I like the following equation to express the pressure felt by a team at any given moment during the IDT.
pressure = Fibonacci(current month of IDT)

note: This equation assumes all other variables are normal. Obviously a team that must finish a project in a month or suffer dire consequences will feel a much different amount of pressure.
During the first month, the team feels 0 (which represents almost no) pressure. There's no legacy code, few broken builds, no requirements issues. During the second and third months, the team feels 1 (which represents more, but still very little) pressure. The first mistakes and hurdles have been found. The architecture is being pushed.

The following months follow the Fibonacci sequence. At month four, the team feels a pressure of 2. At month five, the team feels a pressure of 3. And so on.

The sequence isn't meant to be out of 100, 1000, or any other number. Its purpose is to show the progression of pressure as the months of IDT pass. In an ideal world, a manager would understand that a team under extreme pressure will underperform, and the manager would do everything in his or her power to reduce the IDT.
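If you prefer code to prose, here's a tiny, purely illustrative Java sketch of the progression; the month-to-pressure mapping is taken straight from the description above, and nothing about it is meant to be precise.

// Purely illustrative: print the pressure felt in each month of IDT.
public class PressureTable {
    static int fibonacci(int n) {
        return n <= 1 ? n : fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        for (int month = 1; month <= 12; month++) {
            // month 1 -> 0, months 2 and 3 -> 1, month 4 -> 2, month 5 -> 3, ...
            System.out.println("Month " + month + ": pressure " + fibonacci(month - 1));
        }
    }
}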

Consequences
Pressure can be good in moderation. Many Agile teams have iteration point goals which pressure the team to get a reasonable amount of work done per iteration. Unfortunately, pressure also tends to have the boiling frog effect.

Pressure often leads to compromises on quality. Far too often I've heard "let's just get this done, and then we'll clean it up." This is a good strategy if you are planning on cleaning immediately after completion (red/green/refactor). However, if you plan on cleaning it up "in the future", that step (the majority of the time) is forgotten.

Fibonacci works well for this (quality related) equation also.
% of code that "sucks" = min(80, Fibonacci(current month of IDT))
Most people won't admit that the code they are currently writing is poor, which helps explain why the percentage rarely reaches over 80%. Also, there's often some code that was written early and never needed to be modified. Late in a project, looking at that code may be the only thing that keeps you sane.
sidenote: 50% of the time "this code sucks" can be directly translated to "I didn't write this code". On a project with many months of IDT you are bound to experience some turnover, thus the amount of code that "sucks" is destined to go up. And, even if the original code didn't suck, the modifications done by the person who only half-understands the code probably do suck.
Innovation also seems to suffer on projects with many months of IDT. A typical (well functioning) project team tends to spend 20-30% of their time experimenting with new ideas that will make them more productive in the future. When innovations are found the dividends often greatly exceed the original time investment. Even when no innovations are found, the team often benefits from the experience gained.

Unfortunately, as the months of IDT pass, people begin to forget the value of innovation. They seem to visualize a path to the finish and charge down that path full speed with no regard for hurdles or environment changes. They are often unable to see the value in any time spent that doesn't already have "proven" value. Very soon, you begin to hear: "let's just get this done, and then we'll look at better solutions."

Between poorly written code and a lack of innovation, the chances of dragging a code base out of darkness dwindle quickly as the IDT months drag on.

Lies
Many Agile believers will tell you that the problems with pressure can be alleviated with Milestones. This is patently false. A solution isn't valuable to a business until it provides notable business value. You may be 13 iterations into the project with 2 milestones behind you, but if the business isn't deriving value, you haven't done anything more than show them partially working software. Partially working software is great for building trust, but it alleviates zero pressure with respect to IDT.

Options
The only real remedy to the negative consequences of a long IDT is to create a shorter IDT. There truly is no alternative.

Unfortunately, some projects truly do take many months to deliver business value. I've found that some pressure can be alleviated if the team recognizes and adjusts to the pressure.

The 3 most helpful things I've found are:
  • Weekly (scheduled) 1 on 1 discussions between the team lead and each member of the team.
  • Take vacation. One person out for a week isn't nearly as catastrophic as a team full of burnt out people.
  • Set aside, and always use, experimentation time. Innovations can give you a big boost that the project will definitely need in the long run.
Thanks to Lance Walton for feedback, and for the phrase "Initial Development Time".

Thursday, August 20, 2009

Staying Current: A Software Developer's Responsibility

I have a personal hatred for weekend conferences*. To me, a weekend conference ensures that I'll be "working" for 12 straight days.

I understand that opinion isn't universal.

Some people have problems getting time "off" to attend conferences. These situations feel like a fundamental misunderstanding of a software developer's responsibilities. Part of your (software developing) job is staying up on current technologies. That means doing some research during your day.

(almost directly stolen from Ward on technical debt)
If you spend your entire day coding and never look at new things, you accrue productivity debt. In the short term (say the last week of a release), it makes sense to take on a little debt. However, in the long term, assuming little or no payment, the interest (where interest equals the gap between your skills and the current solutions) will render you an NZPP (Net-Zero Producing Programmer). In a typical organization you can coast as an NZPP for around 6 months and slowly transition to an NNPP (Net-Negative Producing Programmer).

It is your responsibility not to become an NZPP (or NNPP). Most talented software developers refuse to work with NZPPs. At the point that you become an NZPP, you usually have to declare bankruptcy (with regard to software development). You generally have two choices: take a much lower paying job where you can learn new things or move into another role. If you want to be a software developer, neither of these outcomes is desirable.

Luckily, you have the power to avoid becoming an NZPP. Most employers will happily buy you technical books and send you to technical conferences. In my opinion, whether or not you took advantage of these benefits should be noted on your performance review. Not staying current as a software developer, especially when the opportunity is offered to you, is software malpractice.

I once created a list of things I look for in potential team-mates.
  • Have you tried Test Driven Development? Can you name something you liked and something you disliked?
  • What language(s) that are gaining popularity, but not yet mainstream, have you written Hello World in?
  • Do you read books or blogs looking for new ideas at least (on average) once every two weeks?
  • Do you at least attempt to learn a new language every year?
  • Have you ever run a code coverage or cyclomatic complexity tool against your codebase?
A commenter said something along the lines of
Not everyone has the personal free time to dedicate to doing all of these things
And, that is the fundamental flaw. Employees (and even some employers) seem to think that these are activities that should be done in your off time. I couldn't disagree more. These are things that a responsible developer needs to do as part of their job, and thus they can be done during work hours.

20% time isn't something Google invented; it's just something they named, formalized, and made popular. The activity itself is something good software developers have been doing for years. I applaud Google for making it a standard, thus ensuring that its employees always have the opportunity to stay current. However, your company doesn't need to standardize on 20% time for you to stay current.

It's your responsibility to make time in your day to read a book or a blog.

You should also take advantage of a company sponsored trip to a conference. If you've attended conferences before and derived little value, I highly suggest the QCon conferences and JAOO.

Once you start doing research as part of your job you'll find that conferences are just like work, except the focus is 100% on research. And, it's not something you want (or should have to) spend your personal time on, it's just another productive day of doing what you have a responsibility to do.


* Which is why Josh and I run SpeakerConf Tuesday-Thursday. You can travel to, attend and travel home without missing a weekend day.

Monday, August 17, 2009

Macros Facilitate Expressive Code

Someone once asked me if I thought Clojure was more expressive than even Ruby. I didn't have enough information to form an opinion then, and I still don't now. However, I recently noticed something that led me to believe the answer could actually be yes.

I was looking through the code of clojure.test on Friday and I noticed something interesting. In clojure.test, the form(s) passed to the "is" macro are wrapped by a try/catch. It caught my eye because I often want to do the same thing in other languages, and usually I have to settle for much less elegant solutions.

Here's a bit of example code to help create some context:
(deftest a-test
  (is (= 1 (some-function-that-throws-an-exception)))
  (println "this code still executes"))
For this example to work you'll have to ignore the fact that you probably don't want this behavior. In practice I prefer my tests to abort on the first failing assertion; however, in this blog entry I'm focusing on what's happening, not what I theoretically prefer.

In the example I call a function the same way I would anywhere else, and the framework has full control over what happens if my function throws an exception. This is accomplished when the "is" macro takes the forms and manipulates them into something similar to the original code, but with additional capabilities.

This particular example struck me as one where macros allow you to write only what you want, and the macro adds the additional behavior that you desire. However, the key is, you don't have to do anything special to get this additional behavior.

Consider trying to do the same thing in Ruby. In Test::Unit you would need an assert method that took a block.

require 'test/unit'

# Yields for an [expected, actual] pair; a raised exception is reported
# rather than aborting the rest of the test.
def safe_assert_equal
  result = yield
  assert_equal result.first, result.last
rescue Exception => e
  puts e
end

class Testable < Test::Unit::TestCase
  def test_something
    safe_assert_equal { [1, 2] }
  end
end

The additional syntax isn't drastically invasive; however, you are forcing extra syntax on the test author and requiring an understanding of why it's necessary.

Of course, in Java things would be even less expressive. The most likely solution would be to put the assertEquals in a Runnable, but I'd be interested in hearing other ideas. Regardless of the solution, it would obviously be invasive and take away from the readability of the test.
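For the curious, here's a rough sketch of what the Runnable version might look like (the helper and all of its names are hypothetical, not an existing API); even at this size, the extra ceremony is obvious.

// Hypothetical helper: wrap the assertion in a Runnable so the framework
// controls what happens when it throws.
public class SafeAssert {
    public static void safely(Runnable assertion) {
        try {
            assertion.run();
        } catch (Throwable t) { // Throwable, so assertion failures are caught too
            System.out.println(t); // report, but let the test keep running
        }
    }
}

// usage inside a test method:
// SafeAssert.safely(new Runnable() {
//     public void run() {
//         assertEquals(1, someFunctionThatThrowsAnException());
//     }
// });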

Being able to only say what is necessary is powerful. Stuart Halloway likes to talk about Essence and Ceremony, where Essence is the ability to say only what you want, and Ceremony is all the additional things you are required to say.

Macros seem to be a powerful tool for those looking to write Ceremony-free code.

Tuesday, July 07, 2009

More Trust, Less Cardwall

Last weekend, at the Hacker B&B, I mentioned to Jason Rudolph that my current team has no cardwall. He was a bit surprised and asked what we do have.

We track what needs to be done in 3 stacks: Eventually, This Release, and Tech.

Our stakeholder gives us requirements all the time. We also think of things that need to be done on a fairly regular basis. All requirements are captured on an index card with about 5 to 6 words. These cards are not the entire story (pardon the pun); instead, they are a placeholder for a conversation. That's fairly standard for the projects that I'm a part of.

New cards are put in Eventually, unless they need to be done for This Release. When a card is done, it's put in a Done stack. Our stakeholder goes through the Done cards every few days and discards them after a quick once-over.

That's it for requirements. It's a very slim process. We estimate at the release level, not the card level.

Some cards are purely technical. We put those in the Tech stack and the team prioritizes as they wish. In general we work on Tech cards when no one is available to pair. However, sometimes a Tech card is the highest priority and it can get picked up instead of a This Release card.

I mentioned to Jason that I believe we can be successful with this process because of the trust that our stakeholder has in us. As a consultant, I always felt like a vendor instead of a partner. Now that I'm full-time, we're all on the same team. When you're on the same team you don't seem to need as much ceremony and information radiation. This definitely isn't universal, but it's been my experience since I joined DRW.

I definitely prefer the slimmed down process, but I only see it working in an environment with a very high level of trust.

Update: Credit due to Chris Zuehlke for leading the fight for the slimmed down approach. We are more productive thanks to his efforts.

Thursday, June 25, 2009

Programmer Confidence and Arrogance

At SpeakerConf 2009, one of the speakers' wives asked me: why is it that most speakers are confident, and some are simply arrogant? The question was also in the context of "software is nerdy, how can you speakers be so cocky". I pondered for a minute and replied: when no one knows what's "correct", people with confidence generally win the discussion.

Imagine the early days of medicine. Three different doctors give you three different surgery options. There's not enough experience in the industry to show which is the correct choice. Who do you trust? Probably the doctor who managed to be the most confident without being overly aggressive or cocky.

As I've said before, we're still figuring this stuff out. I constantly try to improve my processes for delivering software. I share those processes via speaking and blogging. However, I'm definitely aware that I could be doing it entirely wrong.

In general, I'm wary of developers who speak in absolutes. Most solutions I've seen are contextual, and even in context they tend to be fragile. We simply don't have the heuristics to determine what true best practices (for our field) are.

When pondering the original question I remembered when I wrote about Despised Software Luminaries. At the time I blamed passion; however, I'm guessing confidence and arrogance probably also weigh heavily on the source of animosity.

There's a direct correlation between being a luminary and your compensation package. Therefore, luminaries are enticed to gain as much market share as possible. Your luminary status is determined by the speed at which your ideas are adopted. And, without absolute proof of correctness, the speed at which your ideas are adopted can be largely based on your confidence. I'm sure some luminaries see this situation and walk the line between confidence and arrogance.

Of course, not all luminaries are in it for compensation. I truly doubt Martin Fowler does what he does for money. But, the people who are looking to take market share from Martin may be motivated by a compensation package. Therefore, it's pretty hard to escape the effects of confidence and arrogance.

The confidence and arrogance discussion is also interesting if you've ever met a luminary who you found to be completely incompetent. Perhaps they truly don't know what they're talking about, and they've just been confident enough to fool the majority of people they've met so far.

Wednesday, June 24, 2009

Java: Method Chain with Snippet

I've noticed a pattern pop up a few times in my Java code in the past 6 months. Maybe it's a decent pattern, or maybe I only have a hammer.

The problem I'm trying to solve is setting some global state, running a test, and ensuring that the global state is set back to its original (or correct) value. My usual solution to this problem is to remove global state, but not all global states are created equal. The two global states I've been unsuccessful at removing are the current time and system properties.

In my previous post I described how I freeze time using the Joda library. The example code for freezing time was the first time I used a Method Chain with a Snippet. At the time I thought it was an ugly solution, but the best I could come up with.

A few months later I was testing some code that set and read from the system properties. My first tests set the properties and didn't clean up after themselves. This quickly caused trouble, and I found myself turning to Method Chain with Snippet again.

Here's some example code where I verify that setupDir doesn't overwrite a default property:

@Test
public void shouldNotOverrideDir() {
    new Temporarily().setProperty("a.dir", "was.preset").when(new Snippet() {{
        new Main().setupDir();
        assertEquals("was.preset", System.getProperty("a.dir"));
    }});
}

And, here's the code for the Temporarily class

import java.util.HashMap;
import java.util.Map;

public class Temporarily {
    private Map<String, String> properties = new HashMap<String, String>();

    public void when(Snippet snippet) {
        // restore any values that setProperty captured before overwriting them
        for (Map.Entry<String, String> entry : properties.entrySet()) {
            System.setProperty(entry.getKey(), entry.getValue());
        }
    }

    public Temporarily setProperty(String propertyName, String propertyValue) {
        if (System.getProperty(propertyName) != null) {
            properties.put(propertyName, System.getProperty(propertyName));
        }
        System.setProperty(propertyName, propertyValue);
        return this;
    }
}

The code works by setting the desired state for the test, chaining the state cleanup method, and passing the test code as a Snippet to the state cleanup method. The code exploits the fact that Java will execute the first method, then the argument to the chained method, then the chained method.

For the previous example, the 'setProperty' method is executed, then the Snippet is constructed and the initializer is immediately executed, then the 'when' method is executed. The Snippet argument isn't used within the when method; therefore, no state needs to be captured in the Snippet's initializer.

This pattern seems to work well whenever you need to set some state before and after a test runs. However, as I previously mentioned, it's much better if you can simply remove the state dependency from your test.

Tuesday, June 23, 2009

Freezing Joda Time

Once upon a time Mark Needham wrote about freezing Joda Time. Mark gives all the important details for freezing time (which is often helpful for testing), but I came up with some additional code that I like to add on top of his example.

Two things bother me about Mark's example. First of all, I always like the last line of my test to be the assertion. It's not a law, but it is a guideline I like to follow. Secondly, I don't like having to remember that I need to reset the time back to following the system clock.

I came up with the following idea. It's definitely a poor man's closure, but it does the job for me.
    
@Test
public void shouldFreezeTime() {
    Freeze.timeAt("2008-09-04").thawAfter(new Snippet() {{
        assertEquals(new DateTime(2008, 9, 4, 1, 0, 0, 0), new DateTime());
    }});
}

The Freeze class is very simple:

import org.joda.time.DateTimeUtils;

public class Freeze {

    public static Freeze timeAt(String dateTimeString) {
        // JodaDateTime is a local helper that parses the string into a Joda DateTime
        DateTimeUtils.setCurrentMillisFixed(JodaDateTime.create(dateTimeString).getMillis());
        return new Freeze();
    }

    public void thawAfter(Snippet snippet) {
        DateTimeUtils.setCurrentMillisSystem();
    }
}

The Snippet class is even simpler:

public class Snippet {}

Using this code I can keep my assertions as close to the end of the test method as possible, and it's not possible to forget to reset the time back to the system clock.

Wednesday, June 10, 2009

Developer Testing: Welcome to the Beta Test

In March of 2009 I gave a talk at SpeakerConf about developer testing. The presentation is available as 'desktop' and 'iPhone' m4v files. Unfortunately, the Q&A session is inaudible, so I cut the end short.

Monday, June 08, 2009

Mockito non-Hamcrest any matcher

These days I'm using Mockito for my behavior based tests. I like Mockito's integration with Hamcrest, but I don't always like the viral matcher requirement. In particular, if I have a method that takes 3 arguments, I don't like the fact that if I use a matcher for one argument I have to use a matcher for all 3. For example, in the following verification I don't care about the callback instance, but I do care about the timeout and the async flag.

verify(channel).subscribe(any(Callback.class), eq(100), eq(true))

I was toying with some code the other day and it occurred to me that I should be able to write my own any method that achieves what I'm looking for without requiring my other arguments to be matchers.

The code below is what I've started using as an alternative.

public static <T> T any(final Class<T> clazz) {
    MethodInterceptor advice = new MethodInterceptor() {
        public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy) throws Throwable {
            if (method.getName().equals("equals")) {
                // equals is invoked with the actual argument; match on its type
                return clazz.isInstance(args[0]);
            }
            return null;
        }
    };

    return new ProxyCreator().imposterise(advice, clazz);
}

Using my any implementation the first example code can be written like the example below.

verify(channel).subscribe(any(Callback.class), 100, true)

My implementation relies on classes from cglib and spruice; however, you could copy the necessary class from spruice very easily. Here are the referenced classes:
  • net.sf.cglib.proxy.MethodInterceptor
  • net.sf.cglib.proxy.MethodProxy
  • org.spruice.proxy.ProxyCreator
There may be limitations of this implementation, but it's been working fine for me. Please let me know if you spot a potential improvement.

Wednesday, May 27, 2009

Calling Clojure from Java

Calling Clojure from Java is easy, if you know which classes are important.

On my current project I make all my Clojure files resources, load them, and call the functions directly. The following example shows Clojure printing the argument it's given.

; printer.clj
(ns printer)

(defn print-string [arg]
  (println arg))

// Java calling code
RT.loadResourceScript("printer.clj");
RT.var("printer", "print-string").invoke("hello world");

There are a few things worth noting about the example: RT.var takes the namespace name and the function name. The Var returned by RT.var has an invoke method that allows you to pass any number of Objects. The invoke method also returns an Object, which allows you to return values from Clojure where necessary.
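For example, here's a hedged sketch of getting a value back out of Clojure; it assumes printer.clj also defined a function that returns something, which the original file above does not:

// Hypothetical: assumes printer.clj also contains
//   (defn shout [s] (.toUpperCase s))
RT.loadResourceScript("printer.clj");
Object result = RT.var("printer", "shout").invoke("hello world");
System.out.println(result); // prints HELLO WORLD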

It's also worth noting that the Clojure/Java interop is very, very good. You could pass a Java object to Clojure, make changes to it in Clojure, and return it back to Java. Of course, you might not even need to return it to Java since the instance referenced in Clojure would be the same instance referenced in Java.

Tuesday, March 31, 2009

Kill Your Darlings

When I worked with Zak Tamsen, one of his favorite (software development) sayings was: Kill Your Darlings. The idea was simple: don't get too attached to code.

I'm not sure why so many developers become so attached to code they've written. Perhaps developers have the same attachment to code that artists have to their paintings or music. Or, perhaps they view their legacy code as job security. Whatever the reason, developers seem to have some illogical desire to extend the life of code they've produced.

Of course, the desire to make legacy code live on isn't universal. I tend to live in the opposite world, where I can't delete code fast enough. As much as I'd like to believe that people who share my views are evolved in some way, I know it isn't true. At least in my case, I like to delete code because I have ADD. Plain and simple.

Someone recently asked me how I deal with developers who are over-attached to their code. I've only found one successful way: create a finished solution that is simpler, and then share it with the original developer. Unfortunately, this path doesn't exactly promote collaboration, so it can definitely lead to other problems.

George Malamidis taught me something about code attachment a few years ago: You always gain by allowing someone to show you an alternative solution. If someone wants to solve a problem in a different way, there are several gains to be had. If their way is inferior, you have an opportunity to mentor a team-mate. If their way is equally elegant, you've gained another solution, or point of view that may be superior in the future. If their way is superior you learn something new and the codebase improves. In exchange for these gains you only need to give up time. Time is valuable, but it's also well spent on improving the ability of a team-mate or your personal ability.

Michael Feathers has also written on this topic, specifically focusing on frameworks. In Stunting a Framework, Michael discusses creating small focused frameworks and then letting them go. I think Michael really nailed it with that entry; it's definitely worth a quick read.

I think killing your darlings extends beyond codebases and frameworks to languages themselves. At SpeakerConf 2009, I floated the idea that we should more actively seek to kill languages. Perhaps, after 3 versions of a language, it's time for that language to be retired. Imagine what we could create if the resources dedicated to Java were instead focused on creating a successor to Java. Think of all the time that would be saved if backwards compatibility became a non-issue.

I can imagine the horror of developers who have written and used frameworks for Java. What would we do without all those frameworks? The reality is, we'd port those frameworks and improve them. There are thousands of developers who are dying to port an existing framework to a new language. And, frameworks should be stunted anyway, if you agree with Michael (which I do).

We are already evolving. New languages are appearing all around us. Frameworks are born and killed at a rapid pace. However, the attachment to code, frameworks, and languages only slows our maturing process. Be aware of, and support, progress. Kill your darlings.

Wednesday, March 18, 2009

Retrospective Trust Level

Retrospectives can be complicated meetings. Done correctly, they can provide immense value. Done poorly, they can be a show that provides negative value. A well run retrospective requires more than just going through the motions. Several things contribute to a successful retrospective; however, in my experience the key ingredient is trust.

I expect Fred George (all the way on the right, 2nd from the top) would probably agree with me. Fred is the only person I've ever met that would start each retrospective by asking everyone to write down, on a scale of 1 to 5, where their trust level is at.

A retrospective without trust is pretty worthless, so measuring trust at the beginning of the meeting definitely makes sense. If the trust level is ever below the acceptable level (assuming 5 is complete trust, anything below a 4 is potentially a problem) then the meeting doesn't proceed until a solution is found for the lack of trust.

Of course, if trust is low, you might not give a true trust level. This problem can fairly easily be addressed by having a retrospective facilitator that is in no way invested in the project. The facilitator can collect the trust measurements in an anonymous manner that protects the innocent team members.

If you're finding that your retrospectives aren't providing as much value as they should be, you might want to measure the trust level. You may find that people are afraid to talk about the bigger problems.

Tuesday, March 17, 2009

Continual Retrospective

I'm a huge fan of retrospectives. When consulting I found retrospectives to be absolutely required. An enormous amount of value was derived from expressing concerns and discussing the expected and actual impacts. However, I've also worked on a few highly functioning teams that never seemed to get the same value from retrospectives.

Highly functioning teams tend to address issues as soon as they are discovered, which is far better, but removes some of the value that is usually derived from retrospectives.

I don't like ad-hoc retrospectives, I prefer a routinely scheduled meeting. I favor the scheduled version because often it doesn't seem like a retrospective is needed and it's delayed too long. Delaying valuable meetings definitely doesn't seem like a good plan.

The problem is, on highly functioning teams we'd often end up in the retrospective with only 1 or 2 issues to talk about. Eventually the retrospectives would seem superfluous and be removed from the calendar. I've always been uneasy with disregarding something that's proven so valuable in the past, but you can't argue with removing waste. (where waste = loss of time)

On highly functioning teams, I've always worried that smaller issues were falling through the cracks and costing us efficiency. But, retrospectives seemed too expensive a tool to identify the smaller issues.

A few years ago I worked on a team that had an "issues" white board. Anything that annoyed you could be put on the issues board. Any time you ran into an issue, you would add a new issue, or put a + next to the existing issue if the issue had already been recorded.

The issues board gave us great visibility into things that bothered the team. Adding plusses gave us a good gauge to see which issues were the most annoying. If something was annoying and encountered often, it would quickly gather several plusses. However, if something was annoying but extremely rarely encountered, it might not make sense to invest the effort necessary to address the issue.

As an example, if a test has a race condition that causes the build to fail 5 times a week, it's probably going to attract plusses quickly and signal the team that the race condition needs to be addressed. Conversely, if the race condition causes the build to fail once a year, and it would take substantial effort to fix, it's not likely to get '+ support'. And, any issue without + support is probably not worth addressing. Usually, issues with little + support are removed after it appears that the ROI isn't positive on addressing the issue. If the issue pops back up it can easily be added back to the issues board.

The first time I used the issues board it was in conjunction with retrospectives; however, these days I find the issues board to be a great replacement for retrospectives on highly functioning teams. I view the issues board as a continual retrospective that can be modified at any time. You also get the added benefit of providing a real time view into what currently slows or stops progress on the team.

Replacing retrospectives with a continual retrospective isn't something I'd recommend under normal circumstances. Like I originally said, I find retrospectives to usually be extremely valuable. However, if the value of retrospectives seems to be negative when compared with the value of time, then a continual retrospective is probably better than no retrospective at all.

Wednesday, February 04, 2009

Thoughts on Developer Testing

This morning I read Joel: From Podcast 38 and it reminded me how immature developers are when it comes to testing. In the entry Joel says:
a lot of people write to me, after reading The Joel Test, to say, "You should have a 13th thing on here: Unit Testing, 100% unit tests of all your code."
At that point my interest is already piqued. Unit Testing 100% of your code is a terrible goal and I'm wondering where Joel is going to go with the entry. Overall I like the entry (which is really a transcribed discussion), but two things in the entry left me feeling uneasy.
  • Joel doesn't come out and say it, but I got the impression he's ready to throw the baby out with the bath water. Unit testing 100% of your code is a terrible goal, but that doesn't mean unit testing is a bad idea. Unit testing is very helpful, when done in a way that provides a positive return on investment (ROI).
  • Jeff hits it dead on when he says:
    ...what matters is what you deliver to the customer...
    Unfortunately, I think he's missing one reality: Often, teams don't know what will make them more effective at delivering.
I think the underlying problem is: People don't know why they are doing the things they do.

A Painful Path
Say you read Unit Testing Tips: Write Maintainable Unit Tests That Will Save You Time And Tears and decide that Roy has shown you the light. You're going to write all your tests with Roy's suggestions in mind. You get the entire team to read Roy's article and everyone adopts the patterns.

All's well until you start accidentally breaking tests that someone else wrote and you can't figure out why. It turns out that some object created in the setup method is causing unexpected failures after your 'minor' change created an unexpected side-effect. So, now you've been burned by setup and you remember the blog entry by Jim Newkirk where he discussed Why you should not use SetUp and TearDown in NUnit. Shit.

You do more research on setup and stumble upon Inline Setup. You can entirely relate and go on a mission to switch all the tests to xUnit.net, since xUnit.net removes the concept of setup entirely.

Everything looks good initially, but then a few constructors start needing more dependencies. Every test creates its own instance of an object; you moved the object creation out of the setup and into each individual test. So now every test that creates that object needs to be updated. It becomes painful every time you add an argument to a constructor. Shit. Again.
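To make that churn concrete, here's a minimal JUnit sketch (the classes are entirely hypothetical stand-ins) of the duplication that makes each new constructor argument painful:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OrderValidatorTest {
    // trivial stand-ins, purely for illustration
    static class AuditLog {}
    static class OrderValidator {
        OrderValidator(AuditLog log) {}
        boolean isValid(int quantity) { return quantity > 0; }
    }

    @Test
    public void acceptsPositiveQuantities() {
        OrderValidator validator = new OrderValidator(new AuditLog()); // inline setup
        assertTrue(validator.isValid(1));
    }

    @Test
    public void rejectsNegativeQuantities() {
        // the same construction is repeated in every test, so adding a
        // constructor argument means touching every one of these lines
        OrderValidator validator = new OrderValidator(new AuditLog());
        assertFalse(validator.isValid(-1));
    }
}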

The Source of Your Pain
The problem is, you never asked yourself why. Why are you writing tests in the first place? Each testing practice you've chosen, what value is it providing you?

Your intentions were good. You want to write better software, so you followed some reasonable advice. But, now your life sucks. Your tests aren't providing a positive ROI, and if you keep going down this path you'll inevitably conclude that testing is stupid and it should be abandoned.

Industry Experts
Unfortunately, you can't write better software by blindly following the dogma of 'industry experts'.

First of all, I'm not even sure we have any industry experts on developer testing. Rarely do I find consistently valuable advice about testing. Relevance, who employs some of the best developers in the world, used to put 100% code coverage in their contracts. Today, that's gone, and you can find Stu discussing How To Fail With 100% Code Coverage. ObjectMother, which was once praised as brilliant, has now been widely replaced by Test Data Builders. I've definitely written my fair share of stupid ideas. And, the examples go on and on.

We're still figuring this stuff out. All of us.

Enlightenment
There may not be experts on developer testing, but there are good ideas around specific contexts. Recognizing that there are smart people with contextually valuable ideas about testing is very liberating. Suddenly you don't need to look for the testing silver-bullet, instead you have various patterns available (some conflicting) that may or may not provide you value based on your working context.

Life would be a lot easier if someone could direct you to the patterns that will work best for you; unfortunately we're not at that level of maturity. It's true that if you pick patterns that don't work well for your context, you definitely won't see positive ROI from testing in the short term. But, you will have gained experience that you can use in the future to be more effective.

It's helpful to remember that there aren't testing silver-bullets; that way you won't get led down the wrong path when you see someone recommending 100% code coverage or other drastic and often dogmatic approaches to developer testing.

Today's Landscape
Today's testing patterns are like beta software. The patterns have been tested internally, but are rarely proven in the wild. As such, the patterns will sometimes work given the right context, and other times they will shit the bed.

I focus pretty heavily on testing and I've definitely seen my fair share of test pain. I once joined a team that spent 75% of their time writing tests and 25% of their time delivering features. Not one member of the team was happy with the situation, but the business demanded massively unmaintainable Fit tests.

Of course, we didn't start out spending 75% of our time writing Fit tests. As the project grew in size, so did the effort needed to maintain the Fit tests. That kind of problem creeps up on a team. You start by spending 30% of your time writing tests, but before you know it, the tests are an unmaintainable mess. This is where I think Jeff's comments, with regard to writing tests that enable delivery, fall a bit short. Early on, Fit provided positive ROI. However, eventually, Fit's ROI turned negative. Unfortunately, by then the business demanded a Fit test for every feature delivered. We dug ourselves a hole we couldn't get out of.

The problem wasn't the tool. It was how the process relied on the Fit tests. The developers were required to write and maintain their functional tests using Fit, simply because Fit provided a pretty, business readable output. We should have simply created a nice looking output for our NUnit tests instead. Using Fit hurt, because we were doing it wrong.

The current lack of maturity around developer testing makes it hard to make the right choice when picking testing tools and practices. However, the only way to improve is to keep innovating and maturing the current solutions.

If It Hurts, You're Doing It Wrong
Doing it right is hard. The first step is understanding why you use the patterns you've chosen. I've written before about the importance of context. I can explain, in detail, my reasons for every pattern I use while testing. I've found that having motivating factors for each testing pattern choice is critical for ensuring that testing doesn't hurt.

Being pragmatic about testing patterns also helps. Sometimes your favorite testing pattern won't fit your current project. You'll have to let it go and move on. For example, on my current Java project each test method has a descriptive name. I maintain that (like methods and classes) some tests are descriptive enough that a name is superfluous, but since JUnit doesn't allow me to create anonymous test methods I take the path of least resistance. I could write my own Java testing framework and convince the team to use it, but it would probably hurt. The most productive way to test Java applications is with JUnit, and if I did anything else, I'd be doing it wrong.

I can think of countless examples of people doing it wrong and dismissing the value of a contextually effective testing pattern. The biggest example is fragile mocking. If your mocks are constantly, unexpectedly failing, you're doing something wrong. It's likely that your tests suffer from High Implementation Specification. Your tests might be improved by replacing some mocks with stubs. Or, it's possible that your domain model could be written in a superior way that allowed more state based testing. There's no single right answer, because your context determines the best choice.
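As a sketch of the difference (Mockito, with entirely hypothetical names): the first test below verifies the conversation with the collaborator and breaks under harmless refactoring; the second uses the same mock purely as a stub and asserts on the outcome instead.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PricerTest {
    // hypothetical collaborator and subject, purely for illustration
    interface RateSource { double rateFor(String symbol); }

    static class Pricer {
        private final RateSource rates;
        Pricer(RateSource rates) { this.rates = rates; }
        double price(String symbol, int quantity) {
            return rates.rateFor(symbol) * quantity;
        }
    }

    @Test
    public void implementationSpecificVersion() {
        RateSource rates = mock(RateSource.class);
        when(rates.rateFor("GOOG")).thenReturn(10.0);
        new Pricer(rates).price("GOOG", 3);
        // pins down exactly how the collaborator is used; refactor price()
        // to cache or batch lookups and this verification breaks
        verify(rates).rateFor("GOOG");
    }

    @Test
    public void stateBasedVersion() {
        RateSource rates = mock(RateSource.class); // used purely as a stub
        when(rates.rateFor("GOOG")).thenReturn(10.0);
        // asserts on the outcome, not the conversation
        assertEquals(30.0, new Pricer(rates).price("GOOG", 3), 0.001);
    }
}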

Another common pain point in testing is duplicate code. People go to great lengths to remove duplication, often at the expense of readability. Setup methods, contexts, and helper methods are all band-aids for larger problems. The result of these band-aids is tests that are painful to maintain. However, there are other options. In the sensationally named entry Duplicate Code in Your Tests I list 3 techniques that I've found to be vastly superior to setup, contexts and helper methods. If those techniques work for you, that's great. If they don't, don't just shove your trash in setup and call it a day. Look for your own testing innovations that the rest of us may benefit from.

If something hurts, don't look for a solution that hurts slightly less, find something that is a joy to work with. And, share it with the rest of us.

Tests Should Make You More Effective
What characterizes something as 'effective' can vary widely based on your context.

Some software must be correct or people die. This software obviously requires thorough testing. Other software systems are large and need to evolve at a fairly rapid pace. Delivering at a rapid pace while adding features almost always requires a fairly comprehensive test suite, to ensure that regression bugs don't slip in.

Conversely, some software is internal and not mission critical. In that case, unhandled exceptions aren't really a big deal and testing is clearly not as high a priority. Other systems are small and rewritten on a fairly consistent basis, thus spending time on thorough testing is likely a waste. If a system is small, short lived, or less-important, a few high level tests are probably all you'll really need.

All of these example environments, and every other type of environment, share one common trait: you should always look at your context and see what kind of tests and what level of testing will make you more effective.

Tests Are Tools
The tests are really nothing more than a means to an end. You don't need tests for the sake of having tests, you need malleable software, bullet-proof software, internal software, or some other type of software. Testing is simply another tool that you can use to decrease the amount of time it takes to get your job done.

Testing can help you-
  • Design
  • Protect against regression
  • Achieve sign-off
  • Increase customer interaction
  • Document the system
  • Refactor confidently
  • Ensure the system works correctly
  • ...
Conclusion
When asking how and what you should test, start by thinking about the goal of your project. Once you understand your goal, select the tests that will help you achieve it. Different goals will definitely warrant different testing patterns. If you start using a specific testing pattern and it hurts, you're probably using a pattern you don't need, or you've implemented the pattern incorrectly. Remember, we're all still figuring this out, so there aren't really patterns that are right, just patterns that are right in a given context.

[Thanks to Jack Bolles, Nat Pryce, Mike Mason, Dan Bodart, Carlos Villela, Martin Fowler, and Darren Hobbs for feedback on this entry]

Wednesday, January 21, 2009

Questions To Ask an Interviewer

If you've ever read tips on interviewing, then you know it's a good idea to have questions ready to ask someone who has just interviewed you. If you're not good at remembering questions under pressure, write down a few and take the note with you.

Most of my important questions are answered in the interview: What does your software process look like, what tools do you use, etc. However, I have a few questions that don't usually come up during the normal course of an interview.
  • At what level are technology decisions made? Do the teams decide what tools and languages are used, or is it the architect, directors, the CTO or someone else? Assuming the team doesn't make the decision, what happens if the team disagrees with the decision maker?
  • What kind of hardware are developers given, and who decided that this was the ideal setup? If I want to reformat my Windows box and run Ubuntu, do I have that freedom?
  • How much freedom am I given to customize my work-space? If my teammates and I want to convert our cubes into a common area with pairing stations, what kind of hurdles are we likely to encounter?
  • How does the organization chart look? If there are 2 levels between me and the CTO, do I need to follow the chain of command, or am I able to go directly to the CTO if I feel it's appropriate? What about the CEO, am I able to get 10 minutes of the CEO's time?
  • What don't you like about working here?
The last one is really my favorite. People actually tend to be pretty honest about what they'd like to change at their organizations.

Obviously, the answers are going to vary greatly based on the type of organization you're looking to join. If you're interviewing at Google, it's probably not easy to get on the CTO's or the CEO's schedule. So, I don't think there are right or wrong answers, but in context the answers can help guide whether the organization is a good fit for you.

Wednesday, January 14, 2009

The Fowler Effect

ThoughtWorks does its best to attract smart developers. It's no easy task. How do you convince the smartest developers in the world to join a company that requires 100% travel (in the US, anyway) and become consultants? It's a tough sell. I've always believed that people join because of the other people they'll be working with. And, of the ThoughtWorks employees that people want to work with, Martin Fowler is generally at the top.

I, like so many other people, found ThoughtWorks because of Martin Fowler. I followed a link from his website, read about the company, and decided I wanted to give it a shot. After interviewing I did a bit more research about ThoughtWorks. I was definitely put off by the travel and the thought of being a consultant, but I found blogs by Jim Newkirk, Mike Mason, and several other ThoughtWorks employees. They were saying the things I wanted to hear and they seemed like people I wanted to work with. I definitely joined ThoughtWorks for the people, but Martin was the original reason I applied.

Not everyone joins ThoughtWorks because of Martin, but a large majority of people do. Employing Martin Fowler guarantees ThoughtWorks enjoys a steady stream of talented applicants. I've always jokingly referred to this as the Fowler Effect.

I wonder if other companies could benefit from employing a luminary.

The first obvious question is: what would the luminary do? I expect the answer would vary based on the luminary.

In Martin's case, he doesn't do a lot of project work, but he is very involved in ThoughtWorks and its projects. Martin actively participates in public discussions, visits projects, offers advice, and solicits input all the time. I'm sure employing Martin also helps ThoughtWorks sell projects. The cost-benefit analysis is pretty easy for ThoughtWorks and Martin, but I'm not sure how that translates to non-consulting organizations.

If your recruiting budget is the size of a luminary's salary, it might make more sense to hire a luminary who embodies the type of team you want to build. That luminary will probably be able to bring friends with similar philosophies and attract other candidates who are on the same page. In that case the math is fairly simple: you consider the luminary's salary part of the recruiting budget.

Other luminaries may be interested in splitting their time between project work and luminary activities. Depending on the knowledge that the luminary brings, 50% of their time might provide enough value when combined with their recruiting contributions to justify their salary.

Luminaries can also justify their employment with their network. Luminaries are often connected to people who are looking for feedback on bleeding edge technology. For example, a luminary could gain early access to something such as Microsoft's upcoming DSL solutions, MagLev, or some other type of game changing software. Getting access to game changing software before your competition could yield great benefits.

Universities can also use luminaries to attract students the way organizations use luminaries to attract employees.
As an example, many universities "employ" well known luminaries to lecture for them, yet those luminaries don't base themselves out of their campus 100% of their time, coming in to teach, perhaps one class a year. -- Pat Kua
It's clear that ThoughtWorks benefits from the Fowler Effect, but I think other organizations could also benefit more than they expect by employing a luminary. Luminaries not only bring networks, publicity, and experience, but they also attract other talented developers. The net result of employing a luminary and all the people they attract could be the difference between being good and being great.

Tuesday, January 13, 2009

Creating Objects in Java Unit Tests

Most Java unit tests consist of a Class Under Test (CUT), and possibly dependencies or collaborators. The CUT, dependencies and collaborators need to be created somewhere. Some people create each object using 'new' (vanilla construction), others use patterns such as Test Data Builders or Object Mother. Sometimes dependencies and collaborators are instances of concrete classes and sometimes it makes more sense to use Test Doubles. This entry is about the patterns I've found most helpful for creating objects within my tests.

In January of 2006, Martin Fowler wrote a quick blog entry that included definitions for the various types of test doubles. I've found working with those definitions to be very helpful.

Test Doubles: Mocks and Stubs
Mockito gives you the ability to easily create mocks and stubs. More precisely, I use Mockito 'mocks' for both mocking and stubbing. Mockito's mocks are not strict: they do not throw an exception when an 'unexpected' method is called. Instead, Mockito records all method calls and allows you to verify them at the end of your test. Any method calls that are not verified are simply ignored. The ability to ignore method calls you don't care about makes Mockito perfect for stubbing as well.
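A minimal sketch of both uses (the CreditCardProcessingGateway interface is a hypothetical stand-in; the Mockito calls themselves are standard):

import static org.mockito.Mockito.*;

// Stubbing: tell the mock what to return; unrelated calls are ignored.
CreditCardProcessingGateway gateway = mock(CreditCardProcessingGateway.class);
when(gateway.process("4111111111111111")).thenReturn(true);

// ... exercise the class under test with the gateway ...

// Mocking: verify only the interactions you care about, at the end.
verify(gateway).process("4111111111111111");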

Test Doubles: Dummies
I didn't find many cases where I needed a dummy. A dummy is really nothing more than a traditional mock (one that throws exceptions on 'unexpected' method calls) with no expected method calls. In cases where I wanted to ensure that nothing was called on an object, I usually used the verifyZeroInteractions method on Mockito mocks. However, there are exceptional cases where I don't want to assert that no methods were called, but I would still like to be alerted if one is. This is more of a warning scenario than an actual failure.

On my current project we ended up rolling our own dummies where necessary and overriding (or implementing) each method with code that would throw a new Exception. This was fairly easy to do in IntelliJ; however, I'm looking forward to the next version of Mockito, which should allow for easier dummy creation.
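A sketch of a hand-rolled dummy, assuming the same hypothetical CreditCardProcessingGateway interface as above; every method throws, so any unexpected call surfaces immediately:

// Dummy: never expected to be used; any call blows up loudly.
public class DummyCreditCardProcessingGateway implements CreditCardProcessingGateway {
    public boolean process(String cardNumber) {
        throw new UnsupportedOperationException("dummy should never be called");
    }
}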

Test Doubles: Fakes
My current project does make use of Fakes. We define a fake as an implementation that actually works, but is targeted at a single problem. For example, if your class depended on a Jetlang fiber, you could pass in a fake Jetlang fiber that synchronously executed a command as soon as it was given one. The fake fiber wouldn't allow you to schedule tasks, but that's okay: it's designed to process synchronous requests and that's it.

We don't have a large number of fakes, but they can be a superior alternative to creating multiple mocks that behave in the same way. If I need a CreditCardProcessingGateway to return a result once, it's good to use a mock. However, if I need a CreditCardProcessingGateway to consistently return true when given a Luhn-valid credit card number (or false otherwise), a fake can be a superior option.
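A sketch of such a fake (same hypothetical interface; the validity check is the standard Luhn mod-10 algorithm):

// Fake: consistently answers true for Luhn-valid numbers, false otherwise.
public class FakeCreditCardProcessingGateway implements CreditCardProcessingGateway {
    public boolean process(String cardNumber) {
        int sum = 0;
        boolean doubleDigit = false;
        // Walk right to left, doubling every second digit (Luhn mod-10).
        for (int i = cardNumber.length() - 1; i >= 0; i--) {
            int digit = Character.digit(cardNumber.charAt(i), 10);
            if (doubleDigit) {
                digit *= 2;
                if (digit > 9) {
                    digit -= 9;
                }
            }
            sum += digit;
            doubleDigit = !doubleDigit;
        }
        return sum % 10 == 0;
    }
}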

Concrete Classes
I'm a big fan of Nat Pryce's Test Data Builders for creating concrete classes. I use test data builders to create the majority of dependencies and collaborators. Test data builders also allow me to easily drop in a test double where necessary. The following example code shows how I'll use a test data builder to easily create a car object.
aNew().car()
    .with(mock(Engine.class))
    .with(fake().radio().fm_only())
    .with(aNew().powerWindows())
    .build();
The (contrived) example demonstrates 4 different concepts:
  • A car can easily be built with a mock engine
  • A car can easily be built with a fake fm radio
  • A car can easily be built with a power windows concrete implementation
  • All other dependencies will be sensible defaults
Nat's original entry on the topic states that each builder should have 'sensible defaults'. This left things a bit open to interpretation, so we tried various defaults. In the end it made the most sense to have all test data builders use other test data builders as defaults, or null. We never use test doubles as defaults. In practice this is not painful in any way, since you can easily drop in your own mock or fake in a specific test that requires it.

The builders are easy to work with because you know the default dependencies are concrete, instead of having to look at the code to determine if the dependencies are concrete, mocks, or fakes. This convention makes writing and modifying tests much faster.

You may have also noticed the aNew() and fake() methods from the previous example. The aNew() method returns a DomainObjectBuilder class and the fake() method returns a Faker class. These convenience methods can be statically imported. The implementations of these classes are very simple. Given a domain object Radio, the DomainObjectBuilder would have a method defined similar to the example below.
public RadioBuilder radio() {
    return RadioBuilder.create();
}
This allows you to statically import the aNew method and then have access to all the test data builders in a code-completion-friendly manner. Keeping the defaults in the create method of each builder ensures that any shared builders come packaged with their defaults. You could also create a no-arg constructor, but I prefer each builder to have only one constructor that contains all the dependencies necessary for creating the actual concrete class.

The following code shows how all these things work together.

// RadioTest.java
public class RadioTest {
    @Test
    public void shouldBeOn() {
        Radio radio = aNew().radio().build();
        radio.turnOn();
        assertTrue(radio.isOn());
    }
    // .. other tests...
}

// DomainObjectBuilder.java
public class DomainObjectBuilder {
    public static DomainObjectBuilder aNew() {
        return new DomainObjectBuilder();
    }

    public RadioBuilder radio() {
        return RadioBuilder.create();
    }
}

// RadioBuilder.java
public class RadioBuilder {
    private int buttons;
    private CDPlayer cdPlayer;
    private MP3Player mp3Player;

    public static RadioBuilder create() {
        return new RadioBuilder(4,
            CDPlayerBuilder.create().build(),
            MP3PlayerBuilder.create().build());
    }

    public RadioBuilder(int buttons, CDPlayer cdPlayer, MP3Player mp3Player) {
        this.buttons = buttons;
        this.cdPlayer = cdPlayer;
        this.mp3Player = mp3Player;
    }

    public RadioBuilder withButtons(int buttons) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public RadioBuilder with(CDPlayer cdPlayer) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public RadioBuilder with(MP3Player mp3Player) {
        return new RadioBuilder(buttons, cdPlayer, mp3Player);
    }

    public Radio build() {
        return new Radio(buttons, cdPlayer, mp3Player);
    }
}

As you can see from the example, it's easy to create a radio with sensible defaults, but you can also replace dependencies where necessary using the 'with' methods.

Since Java gives me method overloading based on types, I always name my methods 'with' when I can (as the example shows). This doesn't always work if, for example, you have two different properties of the same type. This usually happens with built-in types, and in those cases I create methods such as withButtons, withUpperLimit, or withLowerLimit.

The one other habit I've gotten into is using builders to create all objects within my tests, even the Class Under Test. This results in more maintainable tests. If you use the Class Under Test's constructor explicitly within your tests and you later add a dependency, you'll end up having to change every line that creates a Class Under Test instance. However, if you use a builder you may not need to change anything, and if you do, it will probably only be for a subset of the tests. The sketch below shows the difference.
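A sketch of the difference, reusing the Radio example (the added Clock dependency is hypothetical):

// Direct construction: if Radio gains a new Clock dependency, every test
// containing this line has to change.
Radio radio = new Radio(4, cdPlayer, mp3Player);

// Builder construction: only RadioBuilder.create() learns about Clock;
// tests written like this are untouched.
Radio radio = aNew().radio().build();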

Conclusion
I'm a big fan of Test Data Builders and Test Doubles. Combining the two concepts has resulted in being able to write tests faster and maintain them more easily. These ideas can be incrementally added as well, which is a nice bonus for someone looking to add this style to an existing codebase.

Tuesday, January 06, 2009

The Cost of Net Negative Producing Programmers

It's common to compare ourselves to doctors, lawyers, and other highly-paid professionals. Unfortunately, almost every comparison breaks down dramatically, very quickly.

When doctors screw up (massively) they get sued. When (permanently employed) programmers screw up they get let go, and they go find a new job, usually with more responsibility and a pay raise. There are no ramifications for software malpractice.*

Successful lawyers put in years to learn their craft, then they put in years trying to become partner. Eventually they get to make the firm-defining decisions, but only after maturing their abilities and proving themselves. In our industry you need no formal education. If you show the slightest sign of competency you'll quickly be given the keys to the kingdom. Architect promotions are not hard to come by.

I had an 'architect' title with no college degree and only 3 years of experience. At that point I'd never heard of unit testing, refactoring, or design patterns. In those days I was picking up information from O'Reilly In a Nutshell books and MSDN. At the same time I was leading a team building software for the US government (via contracts the company had won, not employed directly by them). I was massively under-skilled, and yet, there I was writing software that troops would need to stay alive.**

I wish my experience were isolated, but while I was consulting for 3.5 years I saw countless examples of exactly the same story.

I know the argument: demand is so high, we don't have another choice. I reject this argument on the basis that most good programmers spend the majority of their time fixing problems created by terrible programmers.

Good programmers fix problems created by terrible programmers in various ways. They can directly address problems by rewriting poorly written custom code. The less obvious way good programmers address poorly written code is when they write custom code because all the tools a company could potentially buy instead of building are terrible. Would so many companies write their own time tracking, project management, bug tracking, expense tracking, and other internal tools if they had reasonable commercial options?

The argument that demand is too high completely ignores the opportunity cost of working with NNPPs.

The next common argument: We don't have that many NNPPs. Really? Write a few blog entries, attend a few conferences, do some consulting. I think you might change your mind. And, remember, the people commenting on blog entries or attending conferences are the ones who actually care about what they are doing. I'm horrified to think what the less interested NNPPs produce.

Here's a gem from a comment on my last blog post.
I honestly think I can do about 3000 lines of good code in a day... -anonymous
The commenter actually thinks writing 3000 lines of code a day is a good thing.

If you read through the comments you'll find another commenter who doesn't understand the difference between open classes and inheritance, but overall the majority of comments are well-thought-out, reasonable responses. Several people were able to craft logical responses without being emotionally attached to Java. That gave me some hope that things were a bit better than I generally picture them. But then I checked out the DZone score of the entry.

Now, maybe the post just isn't well written or educational in any way; that would be fine. But when I read the comments, I'm back to being disgusted by our industry. In this day and age, is it really reasonable to think that Java doesn't have limitations? I would say no. Java is a great tool for certain tasks, but there are plenty of things to dislike about it. I wouldn't want to work with people too blind to notice that.

Another common statement is: In every industry you have people who don't care about their jobs. I don't think that's a good comparison either. Bad doctors are sued until they can't afford malpractice insurance. Lawyers, very publicly, lose cases and are fired. In those industries, if you aren't contributing towards moving things forward, you're quickly exited.

Professional sports is an industry that probably has very few professionals that don't care about their job. If you aren't good enough, you don't make it, period. In basketball, there's no position for someone who is only good enough to dribble. If you're good enough, you're paid well. If you aren't, you don't make it. It's that simple.

So what is the cost of NNPPs? I'd say there are several ways to answer that question.

The first obvious cost is the opportunity cost companies incur when they can't provide quality software to their customers. This can translate to significantly lower revenue due to lost or cancelled subscriptions or licenses. It can be the difference between the success and the failure of a start-up. For businesses, I would say the cost is epic. Is it any surprise that technology companies are some of the most volatile stocks?

There are other costs as well. When software fails, people don't get the aid they need. That aid can be money, hospital care, legal direction, and many other life-altering things. Poorly designed software can cause death, and yet that is rarely considered by programmers.

I once heard that in Great Britain's MOD, if you design software for a plane, you go up in the test plane when the software is beta-tested. If all programmers were held to that level of accountability, how many do you think would still be in our field? How many would you want to collaborate with before you went up in the plane together?

Of course, we don't all write life-threatening software, but does that give us an excuse for lowering our colleague-quality requirements? Picture the things we could do if we didn't spend most of our time making up for terrible programmers and you'll know why I'm passionate about the topic.

Terrible programmers also slow us down as an industry. When I talk about Open Classes, people are terrified. They always say: that's fine on your team, but we could never work with a language that has Open Classes. Is that a problem with Open Classes, or a problem with the team? I've worked on several teams, large and small, that used Open Classes diligently, and I can't remember a single problem we ever had with them. Quite the opposite: the teams were often clearly more effective, because they were talented and the language let them solve problems in the most effective way.

Java and C# are not silver bullets. The languages are good solutions to a certain class of problems; using them for problems that could be better solved with a different language stagnates the growth of those other languages. The longer great programmers use a less effective tool for the job, the less time they have to work with and mature the languages that are more suitable. As a result, our entire industry loses out.

There's also a cost to your future colleagues. There's a big difference between an NNPP and someone new to the industry. Someone new to the industry benefits greatly from a mentor, but what if the mentor is an NNPP? Some NNPPs do small-scale damage in isolated components of applications. But the most insidious NNPPs are the architects whose ideas belong on The Daily WTF. These NNPP architects build entire armies of NNPPs around them and single-handedly waste millions, if not billions, of dollars annually. And, potentially worse, they ruin, or at least drastically hurt, the careers of eager-to-learn teammates.

The Cost of NNPPs is high enough that it's become my soapbox issue. But, truthfully, I'm not saying anything really new, so is there any hope for our industry?

I do think there are things we can do to help move the industry in the right direction. Good programmers can refuse to work with bad programmers. That might mean moving to an organization where that's a goal, or making it a goal of your current organization. Providing negative feedback directly to an NNPP teammate is always hard, but I believe the ROI justifies the action. It's also helpful to provide that feedback to your manager, so the manager knows your opinion of your teammates' skill levels. You can also suggest to managers that employees who refuse to take advantage of training budgets should be looked at closely. Lastly, you could suggest basing developer compensation on the success of the project. A royalties model would be really interesting.

And, if you have a blog, you could write your own entry expressing your opinions on the topic.

* The exception being freelance developers, who are the minority, at least here in the US.
** Thank God, the government never ended up using the software we delivered.