Over 18 months ago I wrote Year Five, an experience report I never imagined I would write. I closed the blog entry by saying I looked forward to writing about Year Six. A year and a half later, I'm still having a hard time deciding what (if anything) I should write. My writer's block isn't the result of the Remote Work experiment failing. Quite the opposite: the success of the Remote Work experiment has helped shape a team I'm very proud to be a part of, and yet I find myself unable to declare victory.
"How can you work effectively with remote teammates" has become the most common question I hear when meeting up with old colleagues for coffee. Clearly people are interested in the topic. At the same time, I prefer not to write about half-baked ideas (these days), thus my apprehension in documenting my approach.
This entry is an experience report, nothing more, nothing less. I'd be very skeptical of anyone providing recipes or best practices around remote work. Those working remotely today are breaking traditional workplace rules. Some are succeeding, most are failing, and I don't know of anyone with solid general advice. What follows are merely my observations.
My team became remote by accident. My boss and I were a 2 person co-located team in NYC, he quit, they gave me his job, and my best option to fill my previous role lived in Chicago. David Chelimsky was willing to be in NYC 2 weeks out of the month, and I was willing to be in Chicago 1 week out of the month. It was effectively co-location, and our team ran that way for a couple months.
Sometime in month three I started to feel like having David in NYC 2 weeks a month was unnecessary, and (surprisingly to me) he'd independently come to the same conclusion. From that point forward we alternated on traveling 1 week a month - thus we worked remotely for 3 weeks out of 4. We saw no drop off in our productivity or camaraderie, and traveling less definitely improved our overall happiness. I'm not sure how it would have worked out if I'd been forced to try remote work; however, arriving there organically was surprisingly painless.
Eventually the team grew and DRW hired John Hume, who lives in Austin, Texas. I believe it's worth noting that we each lived in a different city. I've always believed that a team needs to either be completely remote or completely co-located. The team is up to 5 people at this point, and I've actively gone out of my way to ensure no two people work out of the same place on an ongoing basis.
When you join the team you'll have to start on a 6 month contract. The contract period gives both you and me the opportunity to figure out if you're a good fit. There are plenty of brilliant people who I wouldn't work with: I'm looking for people with compatible opinions on software who are able to flourish in the environment we've created.
Remote disagreements are hard to resolve. We can't get a beer and talk it out. I can't read your body language (or other subtle signs) and see something needs to change. Thus, it's not enough to be talented; you'll also need to be a philosophical match. The philosophical ideas are easy to agree on when everyone's looking to start a new endeavor, but best intentions don't always equal an ideal working environment. The 6 month contract ensures both parties know what we're getting into long before we discuss the idea of full time employment.
My team is stretched through various timezones, sometimes from London to Los Angeles. This leaves two options: working odd hours, or finding people who can take on larger tasks and don't require constant contact. My team chose the latter path, and we've found no notable impact on our ability to collaborate even when our hours overlap as little as 2 hours a day. This choice is another reason the 6 month contract is critical: some people want more than 2 hours of contact a day. That's neither good nor bad, nor is it easy to predict. If someone ends up needing more contact than they find they're getting, it's better for all parties if everyone goes their separate ways after 6 months.
I've found that my opinion on pair-programming and co-location is constantly evolving. In more than 2 years of working remotely, I've probably done less than 4 hours of remote pair-programming. I mention this because some people believe remote pair-programming is an essential ingredient to successful remote work. I am not one of those people.
That said, I try to see my teammates as often as is reasonable, and we do often pair-program when we are co-located. At this point, I see every team member quarterly, and the entire team spends a week together every 6 months. We've found this frequency to be a solid balance between keeping relationships strong and keeping travel to a minimum.
The above schedule works for ongoing collaboration; however, the beginning of a work relationship is almost the exact opposite. When a new contractor joins the team, they travel and co-locate as much as possible. A recent team member spent his first week in Austin, his second in Chicago, and his third in London. By the end of those 3 weeks he'd seen the codebase from 3 different perspectives and spent a week (mostly pairing) with every member of the (at that time, 4 person) team. That much travel is a high up-front cost, but it helps immensely with learning both the codebase and the team you'll be working with.
Communication is what I consider to be the hardest part of remote work. I haven't found an easy, general solution, thus I often find myself duplicating effort to ensure teammates can consume data in their preferred format. A few teammates prefer video chat each time we're on the phone, a few teammates despise video chat. A few teammates like the wiki as a backlog, a few haven't ever edited the wiki (as far as I know). Some prefer strict usage of email/chat/phone for async-unimportant/async-important/sync-urgent, others tend to use one of those 3 for all communication. There hasn't been one tool that I would recommend; instead I think it's much more valuable to note that people prefer different approaches, and it's the job of the team lead to communicate with the team members in the way that they prefer, not the other way around. The only rule I try to apply universally: I end as many conversations as I can with "Is there anything I can do to make your life better?" If you constantly ask that question, it should be (often painfully) obvious what you need to change to continue to improve things for the team as a whole.
The question of hardware often comes up as well: what should a company provide, and what should an individual? My approach: take the cost of a 30" monitor, any laptop, a tablet, a smart phone, and anything else they'd want, then average it out over 2 years. I think you'll find the amount of money is so trivial that you'd be a fool not to buy them whatever they want (and that your company loses money every time you waste your time talking about such a small expenditure).
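To put illustrative numbers on it (not actual prices): a $1,200 monitor, a $2,500 laptop, an $800 tablet, and an $800 phone come to roughly $5,300, or about $220 a month averaged over 2 years - likely less than the cost of the meeting spent debating the purchase.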
That's more or less it. I would summarize it like so: I want to create a team that people want to be a part of for at least the next 10 years. That begins by finding people who are a great fit; not everyone will be, and we'll learn valuable lessons from those people as well. We start the relationship out right, spending many hours together getting to know each other and getting to know the ins & outs of the project. From that point on we'll see if your preferred working style (hours, communication needs, etc.) fits well with the team. We'll already know if you'll be happy on the team long before either of us has to commit to any long term working relationship. From there, as long as I remember that I work for the team, not the other way around, everyone should continue to be happy and effective.
Sunday, December 28, 2014
Wednesday, December 17, 2014
Working Effectively with Unit Tests Official Launch
Today marks the official release of Working Effectively with Unit Tests. The book is available in various formats:
- DRM free pdf, epub, & mobi (Kindle) at http://leanpub.com/wewut
- Softcover at http://amzn.com/1503242706
- Kindle edition at http://amzn.com/B00QS2HXUO
As for the softcover edition, I had offers from a few major publishers, but in the end none of them would allow me to continue to sell on leanpub at the same time. I strongly considered caving to the demands of the major publishers, but ultimately the ability to create a high quality softcover and make it available on Amazon was too tempting to pass up.
The feedback has been almost universally positive - the reviews are quite solid on goodreads (http://review.wewut.com). I believe the book provides specific, concise direction for effective Unit Testing, and I hope it helps increase the quality of the unit tests found in the wild.
If you'd like to try before you buy, there's a sample available in pdf format or on the web.
Monday, August 25, 2014
The Case for Buying Technical Books
In the past few months I've seen more than a few articles encouraging programmers to write books. Each article provides at least a bit of good advice, and proceeds to conclude with the same idea:
You should write a book to build your brand.
I find this conclusion accurate and extremely disappointing. If the overwhelming reason to write a book is brand building, then the pool of potential authors is restricted to people who would benefit from brand building (and people who don't value their time).
How Did We Get Here?
The Internet, obviously. Practically everyone knows how to download any movie, song, or book at no cost. Opinions on "illegal downloading" range from opposition to pride. I'm not particularly interested in discussing those opinions; however, I believe it's worth observing the impact of the combination of ability and desire to acquire content without compensating the creator.
“Books aren't written - they're rewritten...” -- Michael Crichton
If you've never written a book, you may not be aware of the colossal effort it takes to write a mediocre book. When it's all said and done, it can take well over an hour of effort per page. Great books, such as Java Concurrency in Practice, require an even greater level of attention to detail, and cost even more time to create. Brian Goetz estimates that it took them approximately 2,400 hours to create JCiP. If we also knew their royalty structure and the number of copies sold, we'd be able to calculate the hourly rate for writing a high quality book.
It turns out, one of the recent articles encouraging writing gives you royalty numbers and a hint on how many copies a quality book might sell.
Royalties for print should start at 18% of net revenues to the publisher. (Expect that figure to be around $10-20, so you're only making a few dollars on each sale.)...Selling 10 thousand copies of a print tech book these days is a solid success and should be compensated accordingly. -- Obie Fernandez
Let's assume JCiP was more than a solid success and sold 20K copies (doubling Obie's "solid success" benchmark). Assuming they negotiated royalties well, that would mean making $40,000 - thus the hourly rate for writing JCiP would be under $17 per hour.
Clearly I've made a few assumptions, but I believe all of them are based on sound logic. As long as you work 8 hours a day, 5 days a week, for 50 weeks and write a modern classic, you'll make around $34,000 per year. Anecdotal evidence among my author friends who've yet to write a modern classic is worse: the hourly rate is less than minimum wage.
The royalty structures combined with lessening sales create an environment where writing a book for (royalty) profit isn't a reasonable use of your time. As a result, the majority of today's authors are either consultants or unknown programmers. Established, non-consultant programmers gain little from the brand building aspect of writing a book, and likely make far more than $34,000 a year at their full-time jobs - why would they take on a poorly paying second job?
Around 2005 it became fairly easy to download, for free, practically any book. It might be coincidence that 10 of 13 of these Must-Read books were written prior to 2005. Despite the possibility, I don't believe it's a coincidence. Rather, I believe that at one time it paid to create a best selling technical book, and people with various backgrounds took up the challenge.
Nice Assumption Filled History Lesson, What's Your Point?
My point is fairly simple. If you, like me, are tired of having to choose between books written decades ago and books written by those with at least a slightly ulterior motive, buy some books. Does your company have a book buying policy? If you aren't spending your entire book budget, why not? It costs you nothing to buy a book and give it to a teammate, and every royalty penny reminds an author that someone cares about all of those hours writing and rewriting.
Even if your company doesn't have a book budget, ask yourself if you'd rather your next book about Java be written by a consultant you've never heard of or Java's language architect. The average technical book costs little compared to life's other expenses, and buying a technical book is investing in your profession twice. You stand to gain knowledge both from today's book purchase and a potential future book written by the same author - a future book that may never be written given the current financial incentives.
If you're a CTO, Director or Manager, why aren't you constantly buying books for the developers you work with? They could probably use your advice on which books will best guide their careers.
Makes Sense, What Should I Buy?
There are several good books now available on leanpub - where the authors are paid significantly higher royalties. If you want to support authors you should always start there. From there I would own at least a copy of Chad's (previously referenced) Must-Read books. I'd also buy Chad's Passionate Programmer. Finally, you can't go wrong working your way through this list: Clojure Bookshelf.
Wednesday, July 16, 2014
Solitary Unit Test
Originally found in Working Effectively with Unit Tests
It’s common to unit test at the class level. The Foo class will have an associated FooTests class. Solitary Unit Tests follow two additional constraints:
- Never cross boundaries
- The Class Under Test should be the only concrete class found in a test.
In the same entry, Bill also defines a boundary as: ”... or even an ordinary class if that class is ‘outside’ the area your [sic] trying to work with or are responsible for”. Bill’s recommendation is a good one, but I find it too vague. Bill’s statement fails to give concrete advice on where to draw the line. My second constraint is a concrete (and admittedly restrictive) version of Bill’s recommendation. The concept of constraining a unit test such that ‘the Class Under Test should be the only concrete class found in a test’ sounds extreme, but it’s actually not that drastic if you assume a few things.
- You’re using a framework that allows you to easily stub most concrete classes
- This constraint does not apply to any primitive or class that has a literal (e.g. int, Integer, String, etc)
- You’re using some type of automated refactoring tool.
Solitary Unit Test can be defined as:
Solitary Unit Testing is an activity by which methods of a class or functions of a namespace are tested to determine if they are fit for use. The tests used to determine if a class or namespace is functional should isolate the class or namespace under test by stubbing all collaboration with additional classes and namespaces.
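The book's examples are in Java, but the constraint is easy to picture in any language that makes stubbing cheap. Below is a minimal sketch in Clojure, using entirely hypothetical namespaces: the namespace under test is the only real code exercised, its collaborator is stubbed to a literal, and no boundary is crossed.

(ns store.invoice-test
  (:require [clojure.test :refer [deftest is]]
            [store.invoice :as invoice] ; the namespace under test (hypothetical)
            [store.tax :as tax]))       ; a collaborator, stubbed below (hypothetical)

(deftest total-includes-stubbed-tax
  ;; Stub the collaboration so only store.invoice's own logic runs, and
  ;; expect a literal; this assumes invoice/total adds the tax returned
  ;; by the collaborator to the subtotal.
  (with-redefs [tax/for-invoice (constantly 10)]
    (is (= 110 (invoice/total {:subtotal 100 :state "NY"})))))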
Monday, June 30, 2014
Working Effectively with Unit Tests Rough Draft Complete
I finally put the finishing touches on the rough draft of Working Effectively with Unit Tests. It's been an interesting journey thus far, and I'm hoping the attention to detail I've put into the rough draft will translate into an enjoyable read.
What I did poorly: I'd written the book's sample before I ever put it on leanpub. Before a book is published you can collect contact and price information from those who are interested. However, once you publish and begin selling, you no longer have the ability to collect the previously mentioned information. I published and began selling my book immediately - and forfeited my chance to collect that information.
What I did well: I published early and often. I can't say enough nice things about leanpub. I've gotten tons of feedback on example style, writing style, typos, and content. One reader's suggestion to switch to Kevlin Henney's Java formatting style made my book enjoyable to read on a Kindle. I had twitter followers apologizing for "being pedantic and pointing out typos", and I couldn't have been happier to get the feedback. Each typo I fix makes the book more enjoyable for everyone. If you're going to write a book, get it on leanpub asap and start interacting with your audience.
What I learned from Refactoring: Ruby Edition (RRE): RRE contains errors, far too many errors. I vowed to find a better way this time around, and I'm very happy with the results. Every example test in the book can be run, and uses classes also shown in the book. However, writing about tests is a bit tricky: sometimes "failure" is the outcome you're looking to document. Therefore, I couldn't simply write tests for everything. Instead I piped the output to files and used them as example output in the book, but also as verification that what failed once continued to fail in the future (and vice versa). WEwUT has a script that runs every test from the book and overwrites the output files. If the output files are unchanged, I know all the passing examples are still correctly passing, and all the failing examples are still correctly failing. In a way, git diff became my test suite output. I'm confident in all the code found in WEwUT, and happy to be able to say it's all "tested".
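Something along these lines captures the shape of that script. It's purely illustrative - the commands and file names are made up - but the idea is just: run each example, overwrite its recorded output, and let git diff render the verdict.

(ns wewut.verify-examples
  (:require [clojure.java.shell :as shell]))

(def examples
  ;; Hypothetical entries: the command that runs an example and the file
  ;; its recorded output lives in.
  [{:cmd ["java" "org.junit.runner.JUnitCore" "CustomerTest"]
    :out-file "output/customer-test.txt"}])

(defn record-output! [{:keys [cmd out-file]}]
  ;; Capture stdout and stderr, pass or fail, and overwrite the checked-in copy.
  (let [{:keys [out err]} (apply shell/sh cmd)]
    (spit out-file (str out err))))

(defn -main [& _]
  (run! record-output! examples)
  ;; A clean diff means passing examples still pass and failing examples
  ;; still fail exactly as documented.
  (println (:out (shell/sh "git" "diff" "--stat" "output"))))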
What's unclear: Using leanpub was great, but I'm not really sure how to get the word out any further at this point. I set up a goodreads.com page and many friends have been kind enough to tweet about it, but I don't really have any other ideas at this point. I've reached out to a few publishers to see about creating a paperback, and I suspect a print version will increase interest. Still, I can't help thinking there's something else I should be doing between now and paperback launch.
What's next: The rough draft is 100% complete, but I expect to continue to get feedback over the next month or so. As long as the feedback is coming in, I'll be doing updates and publishing new versions.
If you've already bought the book, thank you for the support. It takes 10 seconds to get a pdf of any book you want these days, and I can't thank you enough for monetarily supporting all the effort I've put into WEwUT. If you haven't bought the book, you're welcome to give the sample a read for free. I hope you'll find it enjoyable, and I would gladly accept any feedback you're willing to provide.
Wednesday, May 21, 2014
Working Effectively with Unit Tests
Unit Testing has moved from fringe to mainstream, which is a great thing. Unfortunately, as a side effect developers are creating mountains of unmaintainable tests. I've been fighting the maintenance battle pretty aggressively for years, and I've decided to write a book that captures what I believe is the most effective way to test.
From the Preface
The book does touch on some theory and definition, but the main purpose is to show you how to take tests that are causing you pain and turn them into tests that you're happy to work with.
Over a dozen years ago I read Refactoring for the first time; it immediately became my bible. While Refactoring isn’t about testing, it explicitly states: If you want to refactor, the essential precondition is having solid tests. At that time, if Refactoring deemed it necessary, I unquestionably complied. That was the beginning of my quest to create productive unit tests.
Throughout the 12+ years that followed reading Refactoring I made many mistakes, learned countless lessons, and developed a set of guidelines that I believe make unit testing a productive use of programmer time. This book provides a single place to examine those mistakes, pass on the lessons learned, and provide direction for those that want to test in a way that I’ve found to be the most productive.
For example, the book demonstrates how to go from...
looping test with many (built elsewhere) collaborators
.. to individual tests that expect literals, limit scope, explicitly define collaborators, and focus on readability
.. to fine-grained tests that focus on testing a single responsibility, are resistant to cascading failures, and provide no friction for those practicing ruthless Refactoring.
As of right now, you can read the first 2 chapters for free at https://leanpub.com/wewut/read
I'm currently ~25% done with the book, and it's available now for $14.99. My plan is to raise the price to $19.99 when I'm 50% done, and $24.99 when I'm 75% done. Leanpub offers my book with a 100% Happiness Guarantee: Within 45 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks. Therefore, if you find the above or the free sample interesting, you might want to buy it now and save a few bucks.
Buy Now here: https://leanpub.com/wewut
Monday, May 19, 2014
Weighing in on Long Live Testing
DHH recently wrote a provocative piece that gave some views into how he does and doesn't test these days. While I don't think I agree with him completely, I applaud his willingness to speak out against TDD dogma. I've written publicly about not buying the pair-programming dogma, but I hadn't previously been brave enough to admit that I no longer TDD the vast majority of the time.
The truth is, I haven't been dogmatic about TDD in quite some time. Over 6 years ago I was on a ThoughtWorks project where I couldn't think of a single good reason to TDD the code I was working on. To be honest, there weren't really any reasons that motivated me to write tests at all. We were working on a fairly simple, internal application. They wanted software as fast as they could possibly get it, and didn't care if it crashed fairly often. We kept everything simple, manually tested new features through the UI, and kept our customers very happy.
There were plenty of reasons that we could have written tests. Reasons that I expect people will want to yell at me right now. To me, that's actually the interesting, and missing part, of the latest debate on TDD. I don't see people asking: Why are we writing this test? Is TDD good or bad? That depends; TDD is just a tool, and often the individual is the determining factor when it comes to how effective a tool is. If we start asking "Why?", it's possible to see how TDD could be good for some people, and bad for DHH.
I've been quietly writing a book on Working Effectively with Unit Tests, and I'll have to admit that it was really, really hard not to jump into the conversation with some of the content I've recently written. Specifically, I think this paragraph from the Preface could go a long way to helping people understand an opposing argument.
I don't actually know what motivates DHH to test, but if we assumed he cares about validating the system, preventing future regressions, and enabling refactoring (exclusively) then there truly is no reason to TDD. That doesn't mean you shouldn't; it just means, given what he values and how he works, TDD isn't valuable to him. Of course, conversely, if you value immediate feedback, problems in small pieces, and tests as clients that shape design, TDD is probably invaluable to you.
Why Test?
The answer was easy for me: Refactoring told me to. Unfortunately, doing something strictly because someone or something told you to is possibly the worst approach you could take. The more time I invested in testing, the more I found myself returning to the question: Why am I writing this test?
There are many motivators for creating a test or several tests:
- validating the system
- immediate feedback that things work as expected
- prevent future regressions
- increase code-coverage
- enable refactoring of legacy codebase
- document the behavior of the system
- your manager told you to
- Test Driven Development
- improved design
- breaking a problem up into smaller pieces
- defining the "simplest thing that could possibly work"
- customer approval
- ping pong pair-programming
Some of the above motivators are healthy in the right context, others are indicators of larger problems. Before writing any test, I would recommend deciding which of the above are motivating you to write a test. If you first understand why you're writing a test, you'll have a much better chance of writing a test that is maintainable and will make you more productive in the long run.
Once you start looking at tests while considering the motivator, you may find you have tests that aren't actually making you more productive. For example, you may have a test that increases code-coverage, but provides no other value. If your team requires 100% code-coverage, then the test provides value. However, if your team has abandoned the (in my opinion harmful) goal of 100% code-coverage, then you're in a position to perform my favorite refactoring: delete.
I find myself doing both. Different development activities often require different tools; i.e., depending on what I'm doing, different motivators apply, and the tests I write change (hopefully) appropriately.
To be honest, if you look at your tests in the context of the motivators above, that's probably all you need to help you determine whether or not your tests are making you more or less effective. However, if you want more info on what I'm describing, you can pick up the earliest version of my upcoming book. (cheaply, with a full refund guarantee)
Monday, January 27, 2014
REPL Driven Development
When I describe my current workflow I use the TLA RDD, which is short for REPL Driven Development. I've been using REPL Driven Development for all of my production work for a while now, and I find it to be the most effective workflow I've ever used. RDD differs greatly from any workflow I've used in the past, and (despite my belief that it's superior) I've often had trouble concisely describing what makes the workflow so productive. This entry is an attempt to describe what I consider RDD to be, and to demonstrate why I find it the most effective way to work.
RDD Cycle
First, I'd like to address the TLA RDD. I use the term RDD because I'm relying on the REPL to drive my development. More specifically, when I'm developing, I create an s-expression that I believe will solve my problem at hand. Once I'm satisfied with my s-expression, I send that s-expression to the REPL for immediate evaluation. The result of sending an s-expression can either be a value that I manually inspect, or it can be a change to a running application. Either way, I'll look at the result, determine if the problem is solved, and repeat the process of crafting an s-expression, sending it to the REPL, and evaluating the result.
If that isn't clear, hopefully the video below demonstrates what I'm talking about.
If you're unfamiliar with RDD, the previous video might leave you wondering: What's so impressive about RDD? To answer that question, I think it's worth making explicit what the video is: an example of a running application that needs to change, a change taking place, and verification that the application runs as desired. The video demonstrates change and verification; what makes RDD so effective to me is what's missing: (a) restarting the application, (b) running something other than the application to verify behavior, and (c) moving out of the source to execute arbitrary code. Eliminating those 3 steps allows me to focus on what's important, writing and running code that will be executed in production.
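If the video doesn't come through, a tiny sketch captures the flavor of the cycle; the handler and the shape of the request are made up purely for illustration. The point is that the edit, the evaluation, and the verification all happen against the live application, without leaving the source buffer.

;; A hypothetical handler, already loaded into the running application.
(defn greeting-handler [request]
  {:status 200 :body "Hello"})

;; Edit the defn in place and send the new s-expression to the REPL; the
;; next request picks up the new behavior - no restart, no separate test
;; run, no leaving the source file.
(defn greeting-handler [request]
  {:status 200 :body (str "Hello, " (get-in request [:params :name]))})

;; Arbitrary expressions can be sent the same way to inspect a result
;; immediately.
(greeting-handler {:params {:name "world"}})
;; => {:status 200, :body "Hello, world"}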
Feedback
I've found that, while writing software, getting feedback is the single largest time thief. Specifically, there are two types of feedback that I want to get as quickly as possible: (1) Is my application doing what I believe it is? (2) What does this arbitrary code return when executed? I believe the above video demonstrates how RDD can significantly reduce the time needed to answer both of those questions.
In my career I've spent significant time writing applications in C#, Ruby, & Java. While working in C# and Java, if I wanted to make and verify (in the application) any non-trivial change to an application, I would need to stop the application, rebuild/recompile, & restart the application. I found the slowness of this feedback loop to be unacceptable, and wholeheartedly embraced tools such as NUnit and JUnit.
I've never been as enamored with TDD as some of my peers; regardless, I absolutely endorsed it. The Design aspect of TDD was never that enticing to me, but tests did allow me to get feedback at a significantly superior pace. Tests also provide another benefit while working with C# & Java: They're the poorest man's REPL. Need to execute some arbitrary code? Write a test, that you know you're going to immediately delete, and execute away. Of course, tests have other pros and cons. At this moment I'm limiting my discussion around tests to the context of rapid feedback, but I'll address TDD & RDD later in this entry.
Ruby provided a more effective workflow (technically, Rails provided a more effective workflow). Rails applications I worked on were similar to my RDD experience: I was able to make changes to a running application, refresh a webpage and see the result of the new behavior. Ruby also provided a REPL, but I always ran the REPL external to my editor (I knew of no other option). This workflow was the closest, in terms of efficiency, that I've ever felt to what I have with RDD; however, there are some minor differences that do add up to an inferior experience: (a) having to switch out of a source file to execute arbitrary code is an unnecessary nuisance and (b) refreshing a webpage destroys any client side state that you've built up. I have no idea if Ruby now has editor & repl integration, if it does, then it's likely on par with the experience I have now.
Semantics
- It's important to distinguish between two meanings of "REPL" - one is a window that you type forms into for immediate evaluation; the other is the process that sits behind it and which you can interact with from not only REPL windows but also from editor windows, debugger windows, the program's user interface, etc.
- It's important to distinguish between REPL-based development and REPL-driven development:
  - REPL-based development doesn't impose an order on what you do. It can be used with TDD or without TDD. It can be used with top-down, bottom-up, outside-in and inside-out approaches, and mixtures of them.
  - REPL-driven development seems to be about "noodling in the REPL window" and later moving things across to editor buffers (and so source files) as and when you are happy with things. I think it's fair to say that this is REPL-based development using a series of mini-spikes. I think people are using this with a bottom-up approach, but I suspect it can be used with other approaches too.
-- Simon Katz
I like Simon's description, but I don't believe that we need to break things down to two different TLAs. Quite simply, (sadly) I don't think enough people are developing in this way, and the additional specification causes a bit of confusion among people who aren't familiar with RDD. However, Simon's description is so spot on I felt the need to describe why I'm choosing to ignore his classifications.
RDD & TDD
RDD and TDD are not in direct conflict with each other. As Simon notes above, you can do TDD backed by a REPL. Many popular testing frameworks have editor specific libraries that provide immediate feedback through REPL interaction.
When working on a feature, the short term goal is to have it working in the application as fast as possible. Arbitrary execution, live changes, and only writing what you need are 3 things that can help you complete that short term goal as fast as possible. The video above is the best example I have of how you go from a feature request to software that does what you want in the smallest amount of time. In the video, I only leave the buffer to verify that the application works as intended. If the short term goal was the only goal, RDD without writing tests would likely be the solution. However, we all know that there are many other goals in software. Good design is obviously important. If you think tests give you better design, then you should probably mix both TDD & RDD. Preventing regression is also important, and that can be accomplished by writing tests after you have a working feature that you're satisfied with. Regression tests are great for giving confidence that a feature works as intended and will continue to in the future.
REPL Driven Development doesn't need to replace your current workflow, it can also be used to extend your existing TDD workflow.