Thursday, June 25, 2015

Drop Books

The vast majority of books I purchase are for my own enjoyment, but not all of them. There are a few books that I buy over and over, and drop on the desks of friends and colleagues. These books, all technical, are books that I think most programmers will benefit from reading. I call these books "Drop Books"; I drop them and never expect them to be returned.

My main motivation for dropping books is to spread what I think are great ideas. Specifically, I'm always happy to spread the ideas found in the following books:

I know a few of my friends buy Drop Books as well. Spreading solid ideas and supporting authors seems like a win/win to me; hopefully more and more people will begin to do the same.

Monday, April 06, 2015

Unit Testing Points of View, Probably

Michael Feathers, Brian Marick, and I are collaborating to create a new book: Unit Testing Points of View ... probably.


In 2014 Martin Fowler provided Technical Review for Working Effectively with Unit Tests. As part of his feedback he said something along the lines of: I'm glad you wrote this book, and I'll likely write a bliki entry noting what I agree with and detailing what I would do differently. I'm still looking forward to that entry, and I think the idea is worth extending beyond Martin.

Unit testing is now mainstream, has tons of entry level books, and has a great reference book. The result is a widely accepted idea that you can quickly and easily adopt; unfortunately, I've found little in the way of documented pattern trade-offs. I believe that combination leads to a lot of waste. To help avoid some of that waste, I'd like to see more written about why you may want to choose one Unit Testing style over another. That was the inspiration and my goal for Working Effectively with Unit Tests. Growing Object-Oriented Software is another great example of a book that documents both the hows and whys of unit testing. After that, if you're looking for tradeoff guidance... good luck.

Is this a problem? I think so. Without discussion of tradeoffs, experience reports, and concrete direction you end up with hours wasted on bad tests and proclamations of TDD's death. The report of TDD's death was an exaggeration, and the largest clue was the implication that TDD and Unit Testing were synonymous. To this day, Unit Testing is still largely misunderstood.

What could be

Working Effectively with Unit Tests starts with one set of opinionated Unit Tests and evolves to the types of Unit Tests that I find more productive. There's no reason this evolution couldn't be extended by other authors. This is the vision we have for Unit Testing Points of View. Michael, Brian, and I began by selecting a common domain model. The first few chapters will detail how, and why, I would test the common domain model. After I've expressed what motivates my tests, Brian or Michael will evolve the tests to a style they find superior. After whoever goes second (Brian or Michael) finishes, the other will continue the evolution. The book will note where we agree, but the majority of the discussion will occur around where our styles differ and what aspect of software development we've chosen to emphasize by using an alternative approach.

Today, most teams fail repeatedly with Unit Tests, leading to (at best) significant wasted time and (at worst) abandoning the idea with almost nothing gained. We aim to provide tradeoff guidance that will help teams select a Unit Testing approach that best fits their context.

As I said above: There's no reason this evolution couldn't be extended by other authors. In fact, that's our long term hope. Ideally, Brian, Michael and I write volume one of Unit Testing Points of View. Volume two could be written by Kevlin Henney, Steve Freeman, and Roy Osherove - or anyone else who has interest in the series. Of course, given the original inspiration, we're all hoping Martin Fowler clears some time in his schedule to take part in the series.

Why "Probably"?

Writing a book is almost always a shot in the dark. Martin, Michael, Brian, and I all think this is a book worth writing (and a series worth starting), but we have no idea if people actually desire such a book. In the past you took the leap expecting failure and praying for success. Michael, Brian, and I believe there's a better way: leanpub interest lists (here's Unit Testing Points of View's). We're looking for 15,000 people to express interest (by providing their email addresses). I'm writing the first 3 chapters, and they'll be made available when we cross the 15k mark. At that point, Michael and Brian will know that the interest is real, and that their contributions need to be written. If we never cross the 15k mark then we know such a book isn't desired, and we shouldn't waste our time.

If you'd like to see this project happen, please do sign up on the interest list and encourage others to as well (here's a tweet to RT, if you'd like to help).

Sunday, March 15, 2015

My Answers for Microservices Awkward Questions

Earlier this year, Ade published Awkward questions for those boarding the microservices bandwagon. I think the list is pretty solid, and (with a small push from Ade) I decided to write concise details on my experience.

I think it's reasonable to start with a little context building. When I started working on the application I'm primarily responsible for, microservices were very much on the fringe. Fred George was already giving (great) presentations on the topic, but the idea had gained neither momentum nor hype. I never set out to write "microservices"; I set out to write a few small projects (codebases) that collaborated to provide a (single) solid user experience.

I pushed to move the team to small codebases after becoming frustrated with a monolith I was a part of building and a monolith I ended up inheriting. I have no idea if several small codebases are the right choice for the majority, but I find they help me write more maintainable software. Practically everything is a tradeoff in software development; when people ask me what the best aspect of a small services approach is, I always respond: I find accidental coupling to be the largest productivity drain on projects with monolithic codebases. Independent codebases reduce what can be reasonably accidentally coupled.

As I said, I never set out to create microservices; I set out to reduce accidental coupling. 3 years later, it seems I have around 3 years of experience working with microservices (according to the wikipedia definition). I have nothing to do with microservices advocacy, and you shouldn't see this entry as confirmation or condemnation of a microservices approach. The entry is an experience report, nothing more, nothing less.

On to Ade's questions (in bold).

Why isn't this a library?:
It is. Each of our services compiles to a jar that can be run independently, but could just as easily be used as a library.

What heuristics do you use to decide when to build (or extract) a service versus building (or extracting) a library?:
Something that's strictly a library provides no business value on its own.

How do you plan to deploy your microservices?:
Shell scripts copy jars around, and we have a web UI that provides easy deployment via a browser on your laptop, phone, or anywhere else.

What is your deployable unit?:
A jar.

Will you be deploying each microservice in isolation or deploying the set of microservices needed to implement some business functionality?:
In isolation. We have a strict rule that no feature should require deploying separate services at the same time. If a new feature requires deploying new versions of two services, one of them must support old and new behavior - that one can be deployed first. After it's run in production for a day the other service can be deployed. Once the second service has run in production for a day the backwards compatibility for the first service can be safely removed.
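The rule above amounts to making one side tolerant of both message shapes during the rollout window. As a minimal sketch (the field names and schemas here are hypothetical illustrations, not our actual messages), a consumer that tolerates an old flat schema and a new nested one might look like:

```python
def parse_order(msg):
    """Accept both the old flat schema and the new nested schema.

    This side is deployed first during a staged rollout; once the other
    service has run on the new schema in production for a day, the
    fallback branch can be safely deleted.
    """
    if "order" in msg:
        # new schema: {"order": {"symbol": ..., "qty": ...}}
        order = msg["order"]
        return order["symbol"], order["qty"]
    # old schema: {"symbol": ..., "qty": ...}
    return msg["symbol"], msg["qty"]
```

Because either version of the producer can run against this consumer, neither service ever needs to be deployed in lock-step with the other.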

Are you capable of deploying different instances (where an instance may represent multiple processes on multiple machines) of the same microservice with different configurations?:
If I'm understanding your question correctly, absolutely. We currently run a different instance for each trading desk.

Is it acceptable for another team to take your code and spin up another instance of your microservice?:

Can team A use team B's microservice or are they only used within rather than between teams?:
Anyone can request a supported instance or fork individual services. If a supported instance is requested the requestor has essentially become a customer, a peer of the trading desks, and will be charged the appropriate costs allocated to each customer. Forking a service is free, but comes with zero guarantees. You fork it, you own it.

Do you have consumer contracts for your microservices or is it the consumer's responsibility to keep up with the changes to your API?:
It is the consumer's responsibility.

Is each microservice a snowflake or are there common conventions?:
There are common conventions, and yet, each is also a snowflake. There are common libraries that we use, and patterns that we follow, but many services require slightly different behavior. A service owner (or primary) is free to make whatever decision best fits the problem space.

How are these conventions enforced?:
Each service has a primary, who is responsible for ensuring consistency (more here). Before you become a primary you'll have spent a significant amount of time on the team; thus you'll be equipped with the knowledge required to apply conventions appropriately.

How are these conventions documented?:
They are documented strictly by the code.

What's involved in supporting these conventions?:
When you first join the team you pair exclusively for around 4 weeks. During those 4 weeks you drive the entire time. By pairing and being the primary driver, you're forced to become comfortable working within the various codebases, and you have complete access to someone who knows the history of the overall project. Following those 4 weeks you're free to request a pair whenever you want. You're unlikely to become a service owner until you've been on the team for at least 6 months.

Are there common libraries that help with supporting these conventions?:
A few, but not many to be honest. Logging, testing, and some common functions - that's about it.

How do you plan to monitor your microservices?:
We use proprietary monitoring software managed by another team. We also have front line support that is on call and knows who to contact when they encounter issues they cannot solve on their own.

How do you plan to trace the interactions between different microservices in a production environment?:
We log as much as is reasonable. User generated interactions are infrequent enough that every single one is logged. Other interactions, such as snapshots of large pieces of data, or data that is updated 10 or more times per second, are generally not logged. We have a test environment where we can bring the system up and mirror prod, so interactions we miss in prod can (likely) be reproduced in dev if necessary. A few of our systems are also experimenting with a logging system that logs every interaction shape, and a sample per shape. For example, if the message {foo: {bar:4, baz:45, cat:["dog", "elephant"]}} is sent, the message and the shape {foo: {bar:Int, baz:Int, cat:[String]}} will be logged. If that shape previously existed, neither the shape nor the message will be logged. In practice, logging what you can and having an environment to reproduce what you can't log is all you need 95% of the time.
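The shape-and-sample idea can be sketched in a few lines. This is my illustration of the technique, not the logging system we actually run:

```python
import json

def shape(value):
    """Reduce a message to its shape: field names plus value types."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        return [shape(value[0])] if value else []
    return type(value).__name__

_seen_shapes = set()

def log_if_new_shape(message, log=print):
    """Log a message (and its shape) only the first time that shape appears."""
    key = json.dumps(shape(message))
    if key not in _seen_shapes:
        _seen_shapes.add(key)
        log("shape=%s sample=%s" % (key, json.dumps(message)))
```

Every subsequent message with an already-seen shape is dropped, so the log stays small while still containing one concrete example of every message variety observed.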

What constitutes a production-ready microservice in your environment?:
Something that provides any value whatsoever, and has been hardened enough that it should create zero production issues under normal working circumstances.

What does the smallest possible deployable microservice look like in your environment?:
A single webpage that displays (readonly) data it's collecting from various other services.

I'm happy to share more of my experiences with a microservice architecture, if people find that helpful. If you'd like elaboration on a question above or you'd like an additional question answered, please leave a comment.

Monday, February 23, 2015

Experience Report: Weak Code Ownership

In 2006 Martin Fowler wrote about Code Ownership. It's a quick read; I'd recommend checking it out if you've never seen it. At the time I was working at ThoughtWorks; I remember thinking "Clearly Strong makes no sense and I have no idea what scenario would make Weak reasonable". 8 years later, I find myself advocating for Weak Code Ownership within my team.

Collective Code Ownership (CCO) served me well between 2005 and 2009. Given the make-up of the teams that I was a part of I found it to be the most effective way to deliver software. Around 2009 I joined a team that eventually grew to around 9 people, all very senior developers. The team practiced Collective Code Ownership. Everyone on the team was very talented, but that didn't translate to constant agreement. In fact, we disagreed far more often than I thought we should. That experience drove me to write about the importance of Compatible Opinions. I still believe in the importance of compatible opinions, but I now wonder if the team wouldn't have been more effective (despite incompatible opinions) if we had adopted Weak Code Ownership.

The 2009 project heavily shaped my approach to developing software. I suspect I'm not the only one who (at one time) believed: if we get a team full of massively talented people we can do anything. It turns out, it's not nearly that easy. Too many cooks in the kitchen is the obvious concern, and it does come up. However, the much larger problem is that talented people work in vastly different ways. Some meticulously refactor in small steps, others make wide reaching and large changes. Some prefer one language to rule them all, others are comfortable switching between 12-15 different languages in the same day. Monolithic vs separated codebases. Inherited vs duplicated config. It goes on and on. You try to optimize for everyone, to ensure everyone is maximally effective. Pretty quickly you run into this situation-
If you optimize everything, you will always be unhappy. --Donald Knuth
Looking back, I believe our "Collective Code Ownership" degraded to "Last in Wins". People began to cluster around shared opinions as the team grew. Inevitably the components of the project splintered and work included constant angling to ensure you only worked in areas designed to match your personal style. Perhaps that wasn't such a bad thing, but never formalizing the splinters led to constant discussion around responsibility and design. Those discussions always felt like waste to me.

I eventually left that team, and I left with the feeling that as a team we'd been ignoring Diffusion of responsibility (DOR), Bystander effect (BE), and Crowd psychology (CP). In my opinion the combination of (exclusively) talented, senior developers and collective code ownership led to suboptimal productivity. The belief that "CCO works and we're just doing it wrong" made things worse. It seemed like our solution was always: we just need to do better. You could write this team off as not as talented as I describe, but you'd be mistaken. All of the developers on the team had great success before the project, and (after that team split up) 8 out of 9 have gone on to lead very successful teams.

Several years later I found myself leading a team that was beginning to grow. We had just expanded to a 4 person team, and I could see the high level problems beginning to appear. Consistency had begun to slip, and bugs had begun to appear in places where code was "correct" at the micro level, but the system didn't work as expected at the macro level. There are a few ways to manage these issues; pair programming was an obvious solution, but I believe it would have introduced a different class of problems. I believe switching to pair programming (exclusively) would actually have been net negative for the team. I've written about giving up on exclusive pair programming, so I'll leave it at that. The issues we were facing felt vaguely familiar, and I started to wonder if they stemmed from DOR, BE, and CP. Collective code ownership allowed developers to pop in and out of codebases, making small changes to enable features, but where was the person looking at the big picture? "It's everyone's responsibility" wasn't an answer I was comfortable with.

I found myself looking for another solution that would
  • encourage consistency
  • give equal importance to macro and micro quality
  • address the other issues caused by DOR, BE, and CP
We already had our project split in a microservices style, and I proposed to the team that we move to what I called Primary Code Ownership. Primary Code Ownership can be described as-
  • A codebase has a "primary", the person who's responsible for that process in production. You could use the word "owner", but I don't think it conveys what I'm looking for. The team still owns the code, but when problems occur the responsibility falls on the primary first.
  • The primary drives the architecture of the codebase. The primary gets final say on architectural decisions within a codebase.
  • A primary may commit to a codebase's master without code review.
  • Any commit to master from a non-primary must come via a Pull Request, and only primaries can merge (or give permission for the non-primary to merge).
  • note: in emergency circumstances, anyone can commit to any codebase.
  • Primaries can change at any time. If a primary feels that another team member is better suited to become the primary they can propose a switch.
  • Rotation of primary responsibility occurs naturally as people contribute to different codebases. If you seek to become a primary of a codebase, all you need to do is focus your efforts on features that require changing said codebase. If you're doing the majority of the work in that codebase, you should become the primary in a reasonable amount of time.
The motivation for this approach is that a primary will commit keeping their vision in mind. Non-primaries can continue to commit to the same codebase, and are given the confidence that if they make a consistency or macro mistake the primary will catch it during Pull Request review.

We've been working with this approach for about a year now and I've been happy with the results. We've had 1 major bug in 12 months and our architecture has remained consistent despite losing 1 team member and gaining 2 more (team size is 5 now). It's probably worth mentioning that my team is entirely remote as well, making those results even more impressive (imo).

A nice side effect of this approach is eased on-boarding of new team-members. After an initial stretch of about a month of co-located pair programming, a new team member is free to work on whatever they want. They can work knowing that their changes will be reviewed by someone with deep understanding of the necessary changes. The rest of the team is confident that the newbie's changes shouldn't cause prod issues, and won't be merged until they're consistent with existing conventions. Another nice side effect of this process (pointed out by Dan Bodart) is that some people are more inclined to polish their code if they know it's going to be featured in a pull request.

It's not (and never is) all roses. Primaries do complain at times about the context switches required to merge pull requests. I haven't found a solution to this issue, yet. In the beginning people complained about having to create branches and create pull requests. However, as the team got comfortable with this workflow, the pain seemed to disappear. Pull requests are often fairly small; the issues do not mirror the complaints against feature branches.

While reviewing this entry Jake McCrary and others pointed out that while context switching is annoying, there are many benefits that also come from the pull request review process. I agree with this observation, and I'm always pleased when I see discussion (comments) occur on a pull request. Perhaps I'm over optimistic, but I always see the discussion as evidence that the team is learning from each other and we are further advancing on our shared goals.

12 months in, I'm happy with the results.

Unsurprisingly, there are other people using and/or experimenting with similar approaches. Paul Nasrat noted the similarities with this approach and the maintainer model of Linux; Scott Robinson pointed me at OWNERS Files; Romilly Cocking noted the similarities with the underlying model used in Envy/Developer (pre Java).

Thursday, January 08, 2015

Preview Arbitrary HTML

I'm a big fan of Github's gist for sharing code. It's fantastic for quickly putting something online to talk over with someone else. I've often found myself wishing for something that allowed me to share rendered HTML in the same way.

For example, a gist of the HTML for this blog entry can be seen here. If I want someone to give me feedback on the rendered output, that gist isn't very helpful.

It turns out, it's really easy to see that HTML rendered: switch the file extension to md. Here's the same gist with an md extension. This all works due to the way Github renders markdown gists, and markdown supporting inline HTML.

I wouldn't use this trick to create any data you'd like to have online in the long term, but it's great for sharing short lived HTML.

Wednesday, January 07, 2015

LTR Org Chart

Most traditional organizations have an Org Chart that looks similar to the following:

Org Charts displayed in this way emphasize the chain of command. A quick look at a node in this Org Chart will let you know who someone reports to and who reports to them. The chart gives a clear indication of responsibility; it also gives an impression of "who's in charge" and how far away you are from the top.

Several years ago I joined ThoughtWorks (TW). TW (at least, when I was there) boasted of their Flat Org Chart and its advantages.
A flat organization (also known as horizontal organization or delayering) is an organization that has an organizational structure with few or no levels of middle management between staff and executives. wikipedia
A Flat Org Chart may look something like the following image.

I've previously written on how joining TW was the single best thing for my career. There were many things I loved about TW - the Flat Org Chart was not one of those things.

The (previously linked) wikipedia article discusses the positive aspects of Flat Organizations - there are several I would agree with. What I find generally missing from the discussion is responsibility. If I make a mistake, who, other than me, could have helped me avoid that mistake? If I need help, whose responsibility is it to ensure I get the help I need? If I'm unhappy and plan to quit, who's going to understand my value and determine what an appropriate counter offer looks like?

In a flat organization, this is me:

Reporting to, and thus having access to, the person who reports directly to the CEO is great. At the same time, over a hundred other people report to the same person. Would anyone honestly argue that this one individual can guide, mentor, and properly evaluate over a hundred people?

Whenever I bring up responsibility people quickly point out that: those are everyone's issues and it's everyone's responsibility. I generally point these people at Diffusion of responsibility, Bystander effect, and Crowd psychology. Personally, I'd love to see the result of, following a resignation, "everyone" determining an individual's value and producing a counter offer.

I'm thankful for the experience I gained working in a Flat Organization. I'll admit that there's something nice about being one manager away from the CEO on an Org Chart. That said, it was the access to a high level manager and the CEO that I found to be the largest benefit of working in a Flat Organization.

I eventually left TW - a move that was partially motivated by the issues I described above. When I began looking for a new job, organizational structure didn't factor into my decision making process, but I did look for something I called Awesome All the Way Up.
Awesome All the Way Up: Given an Org Chart, do you consider the CEO and every person between you and the CEO to be awesome?
I believed then, and I believe now that you'll always be best off if you put yourself in Awesome All the Way Up situations. When I left TW, I only considered 2 companies - they were the only two I could find that offered Awesome All the Way Up. Everyone will likely have their own definition of "awesome". My definition of "awesome", in this context, is: above all, someone I can trust; someone whose vision I believe in; someone I will always have access to.

One of the two previously referenced companies became my next (and current) employer: DRW Trading. DRW could have a traditional Org Chart, and I would sit towards the "bottom" of it. (note: this is an approximation, not an actual DRW Org Chart.)

Personally, I wouldn't have any complaints with this Org Chart. Every person between me and DRW himself plays an important and specific role. Since I'm in an Awesome All the Way Up situation I have access to each of those people. If I have a CSO problem, I go to the CSO; if I have a CTO problem I go to the CTO; you get the idea. I strongly prefer to know whose responsibility it is when I identify what I consider to be a problem.

You may have noticed I said DRW could have an Org Chart similar to what's above. The phrasing was intentional - if you ask our CSO Derek we have an upside down Org Chart. I'm not sure which of those articles best reflects his opinion. He once showed me a picture similar to the one below and said "part of being a leader is understanding that the shit still rolls downhill, but our hill is different".

Derek and I haven't spoken extensively on the upside down Org Chart, but I assume he's a fan of Servant Leadership. As an employee I (obviously) appreciate the upside down Org Chart and Derek's approach to leadership. As a team lead I find the upside down Org Chart to be a great reminder that I need to put the needs of my teammates first. As someone sitting above Derek on the upside down Org Chart I find the idea of my shit (and the shit of all of my "peers") rolling down to Derek to be... unpleasant.

I was once out with one of the partners from DRW and he introduced me to a friend of his (John). John asked how I knew the DRW partner, I said: he's my boss. The DRW partner interrupted and said: that's not true, we're teammates - I can't do what he does and he can't do what I do, we work together. The partner from this story isn't in my "chain of command", but I find the same team-first attitude present in each person that is.

My discomfort with my shit rolling down on anyone combined with the team-first attitude I've encountered led me to the idea of the Left-to-Right Org Chart (or LTR Org Chart). Below you'll find what would be my current LTR Org Chart.

The LTR Org Chart reminds me to be a Servant Leader while also emphasizing the teammates I can count on to help me out when I encounter issues.

It's probably not practical to generate an LTR Org Chart for every individual in your organization, but it certainly wouldn't be hard to create software to present an LTR Org Chart on a website. It's possible the subtle difference between upside down Org Charts and LTR Org Charts isn't worth the effort required to build the software. I wouldn't mind finding out.
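The software itself would be simple. A minimal sketch (using a made-up reporting map, not DRW's actual structure) that renders one person's LTR chain:

```python
def ltr_chain(reports_to, person):
    """Walk the reports-to map from a person up to the top of the
    organization and render the chain left to right."""
    chain = [person]
    while person in reports_to:
        person = reports_to[person]
        chain.append(person)
    return " -> ".join(chain)

# hypothetical reporting data
reports_to = {"me": "team lead", "team lead": "CTO", "CTO": "CEO"}
```

Generating a chart for every individual is then just a loop over everyone in the map, which is why I suspect presenting LTR Org Charts on a website would be little effort.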

Tuesday, January 06, 2015

Making Remote Work: Tools

I recently wrote about my experiences working on a remote team. Within that blog entry you can find a more verbose version of the following text:
Communication is what I consider to be the hardest part of remote work. I haven't found an easy, general solution. A few teammates prefer video chat, others despise it. A few teammates like the wiki as a backlog, a few haven't ever edited the wiki. Some prefer strict usage of email/chat/phone for async-unimportant/async-important/sync-urgent, others tend to use one of those 3 for all communication.
As you can tell, we have several different communication tools. When writing, I generally prefer to include concrete examples. This blog entry will list each tool referenced above. However, I cannot emphasize enough that: this list is a snapshot of what we're using, not a recommended set of tools.

app: Github
usage: We use many of the features of Github; however, the two features that help facilitate remote work are (a) pull requests with inline comments and (b) compare. A pull request with inline comments has (thus far) been the most productive way to asynchronously discuss specific pieces of code. Almost all non-trivial commits will eventually end up in a pull request that's reviewed by at least one other team member. We've found compare view to be the best solution for distilling changes for a teammate with limited context.

app: Hipchat
usage: We have 3 hipchat rooms: work, social, support. It should be pretty obvious what we use each room for. The primary driver for splitting into 3 rooms is keeping noise down. Most team members look at the chat history for work and support, reading anything that happened since the last time they were logged in. Social tends to be more verbose, often off-topic, and never required reading for keeping up with what the team is up to.

app: Cisco Jabber
usage: Within the team, we primarily use Cisco Jabber for video calls; however Cisco Jabber is also a great way for people within DRW offices to reach anyone on my team without having to know their location. Cisco Jabber is significantly better than asking people to remember to call your cell, or forwarding your desk phone to your cell - it provides you 1 number that anyone can reach you at, regardless of your physical location. There's not much to say about the video capabilities, they're there, they work well. Cisco Jabber also provides desktop sharing, which we use occasionally for "remote pair-programming".

app: Confluence
usage: Our backlog resides on a Confluence wiki; it's a single page with about 150 lines. The backlog is split into 3 sections: Milestones, Now, and Soon. There are generally 3-5 Milestones, which list (as sub-bullets) their dependencies. A dependency is a reference to a line item that will live in Now or Soon. Now is the list of highest priority tasks - the things we need to get done right away. Soon is the list of things that are urgent but not important, or important but not urgent. Both the Now and Soon lists contain placeholders for conversations, each placeholder around 1-2 lines. Below you'll find a contrived, sample backlog.
Milestones:
  • Deploy to Billy Ray Valentine
    • (market data 1)
    • (execution 1)
  • Automated Order Matching (stakeholder: Mortimer and Randolph Duke)
    • (execution 2)
    • (reporting 1)
Now:
  • Market data
    • (1) support pork bellies
    • support orange juice
  • Execution
    • (1) support pork bellies
    • (2) internally match incoming orders from customers
Soon:
  • Market data
    • support coffee
    • support wheat
  • Reporting
    • (1) Commission summary
note: some things in Now need to be done immediately, but do not support an upcoming milestone. These things are incremental changes for previously met milestones.
Obviously we use email and other tools as well, but I can't think of any remote specific usage patterns that are worth sharing.

As I previously mentioned, each member of the team uses each of these tools in their own way. None of these tools are ideal for every member of the team, and I believe a good team lead helps ensure each team member is only required to use the tools they find most helpful.