“80% of men never noticed anything”

Brief thoughts on this presentation:

  • This is a great presentation.
  • The use of Kirk/Spock slash art is inspired.
  • The idea of 60kloc generated mostly by newbie programmers scares the crap out of me.
  • It sounds like the women-dominated projects aren’t just doing a better job attracting women – they are doing a better job attracting non/novice-programmers, period. That’s pretty remarkable.
  • This is awesome for OSS projects. I don’t know how I could ever apply that to my workplace, though – the only company I’ve ever worked for that had the schedule/budget slack to accommodate bringing unskilled programmers up to speed was a giant government contractor. Small agile companies can’t afford to spend much time on education.
  • Very good point about the OSS developer market not being a zero-sum game; although I’m not sure how many male devs are actually worried about losing their job to a woman.
  • I’m adding Infotropism to my reading list.

Recursively Symbolize Keys

Given a YAML stream that looks like this:

    drinks:
      martini:
        garnish: olive
      gibson:
        garnish: onion

The Ruby-fied version looks like this:

{"drinks"=>{"gibson"=>{"garnish"=>"onion"}, "martini"=>{"garnish"=>"olive"}}}

But we don’t like string keys, we like symbol keys. A number of libraries exist to extend Hash with interchangeable string- or symbol-based indexing. But adding a library dependency seems like overkill for symbolizing some keys. Here’s a quick recursive string-to-symbol key converter:

def symbolize_keys(hash)
  hash.inject({}){|result, (key, value)|
    new_key = case key
              when String then key.to_sym
              else key
              end
    new_value = case value
                when Hash then symbolize_keys(value)
                else value
                end
    result[new_key] = new_value
    result
  }
end

Note the block parameters to Hash#inject.

  hash.inject({}){|result, (key, value)|

The block arguments would normally be a hash (the accumulator) and a two-element array of [key, value]. Putting parentheses around the last two parameter names causes the second argument to be destructured (“splatted”) into its component parts, which are assigned to key and value separately.
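For clarity, here’s the same destructuring at work on a throwaway hash (unrelated to the YAML example above):

```ruby
# Without parentheses the block receives the raw [key, value] array:
{:a => 1, :b => 2}.inject([]) { |acc, pair| acc << pair }
# => [[:a, 1], [:b, 2]]

# With parentheses the array is destructured into its parts:
{:a => 1, :b => 2}.inject([]) { |acc, (key, value)| acc << "#{key}=#{value}" }
# => ["a=1", "b=2"]
```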

Let’s take a look at the output.

puts "Symbolized hash:"
puts symbolize_keys(YAML.load(yaml)).inspect
Symbolized hash:
{:drinks=>{:martini=>{:garnish=>"olive"}, :gibson=>{:garnish=>"onion"}}}

BarCamp Baltimore 2009

I had a fantastic time at BarCamp Baltimore today. I’m so glad I got my butt
out the door (with Stacey’s help) early on a Saturday morning to attend. There
was some great energy going on there. Got to see some friends; put some faces
to names; and met a lot of cool new people. The best part was definitely the
conversations at lunch and between sessions.

My sole contribution was to suggest a “How do we get more women into software”
topic, which was developed by the organisers into “Education / Diversity”. As
I’d hoped, it spawned a pretty vigorous discussion, and also as hoped, I spent
most of the time listening. Unfortunately there was a lot of handwaving about
fixing the school system which I felt was pretty pie-in-the-sky. But there were
also some good practical steps suggested.

My concrete takeaways from the session were:

  • Mentoring, mentoring, mentoring. Find some way, such as after-school
    programs, to catch girls and minority kids young. Pass on the passion for
    tech before they get locked into gender roles and social conditioning about
    what they can and can’t do.
  • For hiring and broadening tech groups, reach out to women’s groups and
    minority organisations like the National Society of Black Engineers.

Unfortunately, one of my other takeaways from the session was that one reason
women aren’t as prominent as men in technology is that male geeks like to hear
themselves talk, sometimes at the expense of the women in the room. Even when
the women in the room might be the ones with the most to say about the topic at
hand.

The other standout session for me was the HackerSpace one. I was excited to
learn about HackerSpace and I hope the nascent Baltimore group succeeds. It’s
good to see old-school tinkering and software/hardware hacks going on.

All in all a wonderful way to spend the morning and afternoon. I came away
energised and excited about the explosion of cross-disciplinary creativity going
on in this time and place. I’m looking forward to more events like it. Many
thanks to Dave Troy and everyone else for making it happen!

Playing Grown-Up: The Rails Maturity Model

When I first heard about the RMM I thought it was a joke. Then I thought it was a terrible idea. Then Obie assured us all that it wasn’t about certification, and I thought about it for a while, and decided that it was still a terrible idea. Here’s why.

Let’s start by taking the RMM’s namesake, CMM[I], seriously. Taking CMM[I] seriously is also a terrible idea, but indulge me for the sake of exposition.

I was once tangentially involved in the effort to bring a large organization to CMMI level 3. In the process I became more familiar with Maturity Models than I ever wanted to be. The first thing you learn about CMM[I] is that it’s not about specific practices. You will never see “waterfall development” or “cubicle officing” specified in CMM[I] literature. The creators of CMM[I] went to great lengths to keep it free of practice recommendations.

CMM[I] is a process meta-model: it’s a process about processes. The core idea of CMM[I] is that of learning from experience and incorporating that knowledge back into your processes – whatever they may be. The essence of the coveted CMMI Level 5 certification is simply this: An organization which continuously improves itself by gauging the effectiveness of processes and adjusting them accordingly.

So if the RMM seeks to impress the kind of novelty-wary big corporate customers who put stock in things like CMM[I] and 6Sigma, it has failed right off the bat. Anyone who knows anything about CMM[I] will take one look at RMM and laugh at those silly Rails kids who think maturity equates to using certain processes. “Go back to your sandbox” they’ll say. “Come back when you’re older”.

Now let’s stop talking about CMM[I] the lofty ideal, and start talking about CMM[I] the cynical reality. The real purpose of CMM[I] is to guarantee a reliable income for Carnegie Mellon, and to solidify the position of large established corporations. The amount of paperwork and expense involved in getting CMM[I] certified is difficult to convey to anyone who hasn’t seen it first-hand.

Only large organizations have the kind of slack necessary to take the productivity hit a CMM[I] program inflicts and keep chugging along. And the sad truth is that the people on the ground have no idea what it’s all about; just that they suddenly have more paperwork to fill out and a lot of dull training sessions to go to where they learn to spit back what the auditors want to hear.

“But wait” you say. “RMM isn’t about certification!”. True. And perhaps that will save it from the excesses of paperwork and naked profiteering that characterize the CMM[I]. But the fact remains that RMM is and will continue to be defined by the major players in the Rails arena. If it gains traction, companies that are just getting started will face the choice of getting things done and risking discrimination by a market that knows only “RMM=good”;
or spending their time on processes instead of deliverable code. In the eyes of customers the onus will be on the “non-compliant” shops to explain why they don’t practice pair-programming, or co-location, etc. Not only will this place an undue burden on small, young companies; but as others have pointed out, it does not create a very fertile environment for process innovation. Which, ironically, makes it diametrically opposed to the intent of the original CMM.

I respect Obie a lot, but the RMM project appears like nothing so much as a group of children playing grown-up by aping the dress and mannerisms of stodgy, button-down businessmen. A high-power executive observing might be amused by their antics, but would scoff at their failure to grasp the reasons behind the trappings they have adopted. And anyone who has been through the corporate machine and come out the other side would ask them why a bunch of happy kids could possibly want to imitate the very worst aspects of being an adult.

In summary, I have three things to say to Obie and anyone else who thinks the RMM is a good idea.

  1. It’s a trap!
  2. Look, if nothing else, ditch the name. It will become a source of derision for anyone familiar with CMM; a source of confusion for anyone who hasn’t heard of CMM; and potentially a source of lawsuits if Carnegie Mellon catches wind of it.
  3. If you really care about your customers, help them talk to each other.

UPDATE: Anyone who thinks that the potential for an organisation like RMM to lock in outdated practices is merely academic at this point should have a look at the current Practice #3: Everyone Together. This is an incredibly backward value to be espousing in 2009, and, again, favors established companies over ultra-lean organisations which prioritise finding the talent wherever it is over building up a brick-and-mortar presence.

Rehabilitating the professional rock star

I’ve been following the “Rails pr0n star” scandal, probably too closely for my own good, and making my share of grumpy twitter comments. Let me just get the standing and being counted out of the way first:

I found the slides in question entertaining and cleverly put together, but inappropriate for their context. But moreover, I found the reaction of DHH and other prominent members of the Rails community distressing and distasteful. It shows a lack of maturity to be unable to distinguish between sensitivity and censorship. I’ve been involved with Ruby since years before Rails came along, and this is emphatically not the warm, humble, encouraging Ruby community I know and love.

That said, I want to talk about something else. I want to talk about a couple of terms that I feel have been unfairly besmirched as this mess developed.

The first term is “Rock Star”. Nearly every blog post I’ve read about how sucky the Rails community is has put the blame on the “rock star” stereotype. I don’t think this is fair to rock stars.

What programmer has not, at one time or another, committed a righteous act of transcendent coding and been seized with the urge to run down the hall screaming “WHO RULES? THAT’S RIGHT I RULE, BABY! FUCK YEAH!” and then do an air-guitar solo on top of their desk? Come on, you all know what I’m talking about.

In decades past when a Joe Hacker got this feeling it was quickly quashed by the realisation that he was a mere geek, a permanent social pariah, and that such displays of blatant self-confidence were simply not for him. So he’d tamp it down, open another Mt. Dew, and slink back into his cave.

But something happened at the dawn of the 21st century. Geek started to become cool. Hollywood started making blockbusters out of comic books. Programmers started writing programs for the web that made ordinary people happy and got them laid, rather than just irritating the living hell out of them at their day job. That made programmers kind of cool. And the programmer demographic itself started changing. Suddenly it was possible to go to a users group meeting and talk to people who not only dug obscure programming languages and D&D, but also playing the guitar and snowboarding. The fact that you wrote code for fun stopped being something to hide from potential dates, and started being something that might actually help score you a date.

Joe Hacker looked out over the rim of his cubicle and realized “hey! I am pretty awesome! And it’s OK to feel good about that!” And then he moved out of his parents’ basement.

Being a programming rock star is about empowerment. It’s about pride. It’s about knowing that you’re not just another cog in a corporate wheel or an academic wanker, but an artisan with the power to change people’s lives with nothing but information. It’s about hearing “this is gonna be a great party – be sure and bring your laptop”. It’s about embracing the electricity of that “eureka” moment and sharing it with other people who understand the feeling.

But rock stars are supposed to be “bad boys”, right? Sure; but chest-beating about being “edgy” does not make you a bad-ass.  If anyone knows about being a bad-boy rock star in the Rubyverse it’s Giles; and he wasn’t impressed.  Neither were Zed or Obie.  To quote Reg Braithwaite:

Porn is *not* edgy. Walking into Oracle’s Head Office and shitting on their conference table is edgy.

Matt’s talk wasn’t rock star behaviour, and the defensiveness and posturing that followed it were even less so.  It was just plain old garden-variety immaturity.

The second term I want to defend is “professional”. DHH doesn’t seem to think much of it:

Professional to me is facade, fake sincerity, political correctness, not offending anyone, and everything else that makes life lifeless

Spend some time with any real-life career rock star – the kind with groupies and a dozen guitars – and I’ll wager you’ll find that whatever else they are, they are a deeply professional musician. Professionalism is orthogonal to “edginess”. I know edgy people. I mean really edgy people. People who play with fire and get suspended from hooks for fun. People who do things in private to other people for both business and pleasure, things which are beyond the scope of this technical blog to describe. These people are nothing if not professional about the things they do. They have careful rules about when and where. They are profoundly mindful of boundaries, and painstakingly sensitive. They have to be. It’s the same for any practitioner of an extreme sport – beyond the gung-ho, devil-may-care veneer, you will almost always find someone with a finely honed sense of where to draw the line, of what their limitations are, and most importantly, the ability to listen.

There are always exceptions, of course. In any given “fringe” community there’s always a few irresponsible characters. They are the ones who wind up getting ostracised from the group because their cavalier attitude threatens everyone else’s enjoyment and/or safety.

So I don’t think that word “professional” means what he thinks it means. I think what he really means is “corporate”, which is a whole different animal.

As for me, I still aspire to be both professional and a rock star, both in code and in music. And I don’t think there’s anything wrong with that.

UPDATE: I will moderate any more comments that try to argue the case that people overreacted to the slides. That debate is both irrelevant and over. To quote Martin Fowler:

At this point there’s an important principle. I can’t choose whether someone is offended by my actions. I can choose whether I care. The nub is that whatever the presenter may think, people were offended – both in the talk and those who saw the slides later. It doesn’t matter whether or not you think the slides were pornographic. The question is does the presenter, and the wider community, care that women feel disturbed, uncomfortable, marginalized and a little scared.

This is a post about the meanings of the terms “rock star” and “professional”. Anyone who wants to debate the meaning of the term “porn” is welcome to take it up with Justice Potter Stewart.


One of my goals has long been to be paid to work on a product that I can be personally enthusiastic about. I’m happy to announce that that goal has been realised. At the beginning of this week I started working at Devver. I’m incredibly excited about this job. When the founders first explained to me what it is they are doing, I immediately thought “I’d buy that!”. It makes me very happy to have the opportunity to work on something that I can really get behind. Not to mention working with a couple of sharp, fun guys whom I’m already learning a lot from.

I’m in Boulder with the Devver team until Tuesday, hanging out with Ben and Dan, getting to know how they work, getting up to speed on the codebase, and ironing out our remote collaboration toolchain. After that I’ll be working remotely from home in Pennsylvania.

I’m looking forward to a lot of hard, interesting work over the coming weeks as I get ramped up. This job promises to be a challenging and rewarding experience. I can’t wait to see what Devver and I can accomplish together!

Ubuntu, Emacs, and Fonts

Ubuntu is a wonderful development environment in many ways, but let’s not beat around the bush: fonts in Linux have always been a disaster. It’s not as bad as it used to be; these days Ubuntu ships with some nice-looking fonts by default and apps mostly use them out of the box. Things get hinky fast, though, if you step off the beaten path. Like, say, if you want to install your own user fonts.

As it turns out, it’s trivially easy to install your own TrueType fonts in Ubuntu; it’s just not at all obvious how. Here’s the secret: simply copy the .ttf file into a directory called “.fonts” in your home directory. You can create the directory if it doesn’t already exist. Next time you start a program the font should be available.

Naturally Emacs has to throw a wrench in the works. Emacs font-handling on Linux can charitably be described as “eccentric” and more bluntly as “schizophrenic”. There are about a half-dozen different ways to specify fonts. The most obvious place is a menu item titled “Set Font/Fontset” under the “Options” menu. As far as I can tell this item is placed in the UI strictly as a diversion; it pops up a bafflingly organized menu of fonts which bears no relation to any other list of fonts on the system, and which are all hideously ugly.

If you are persistent you will eventually discover the “set-default-font” function, which is what you really want. You will type “M-x set-default-font” and then hit TAB to see a list of completions – a hojillion X11 font spec strings. After frustratedly scrolling around for a while you’ll punch up xfontsel and discover that the font you are looking for is listed under the “unknown” foundry, or something equally unpredictable. Sensing victory close at hand, you’ll type in the whole font spec (you do know the ISO designation for your native character set, right?) and hit RET.

And then Emacs will spit out an error about it being an “undefined font”.

The completion list, as it turns out, is just another clever ruse. Emacs actually has its own syntax for specifying fonts. I don’t claim to understand this syntax. What I do know is that entries of the form [font name]-[font size in points] seem to work nicely.

So, to summarize, if you want to try out a new TrueType font (for instance, Anonymous) in Emacs, here are the steps:

  1. Put the .ttf file in ~/.fonts, creating the directory if needed.
  2. Type M-x set-default-font RET "Anonymous-10" (without the quotes)
  3. Enjoy your new font.

UPDATE: Emacs informs me that set-default-font is actually deprecated and I should be using set-frame-font instead. Also, if you want to persist this configuration, the best way to do it appears to be adding (font . "Anonymous-10") to default-frame-alist. The Emacs documentation recommends using your ~/.Xresources file for this instead, but in my experience getting X Resources to “take” is something of a crap shoot.

UPDATE 2: If you like the Inconsolata font, do not install the ttf-inconsolata package on Ubuntu 8.10 (or lower). It is broken, and it will override your .fonts version of Inconsolata with its nasty brokenness.

UPDATE 3: As of Ubuntu 12.10, I can now report that the newer “fonts-inconsolata” package seems to work fine.

Go Fetch

I’m a fan of the #fetch method in Ruby. I’ve noticed that other Rubyists don’t use it as much as I do, so I thought I’d write a little bit about why I like it so much.

First of all, in case you’ve forgotten, #fetch is a method implemented on both Array and Hash, as well as some other Hash-like classes (like the built-in ENV global). It’s a near-synonym for the subscript operator (#[]). #fetch differs from the square brackets in how it handles missing elements:

  h = {:foo => 1, :bar=> 2}
  h[:buz] # => nil
  h.fetch(:buz) # => IndexError: key not found
  h.fetch(:buz){|k| k.to_s * 3} # => "buzbuzbuz"

The simplest use of #fetch is as a “bouncer” to ensure that the given key exists in a hash (or array). This can eliminate confusing NoMethodErrors later in the code:

  color = options[:color]
  rgb  = RGB_VALUES[color]
  red = rgb >> 32 # => undefined method `>>' for nil:NilClass (NoMethodError)

In the preceding code you have to trace back a few steps to determine where that nil is coming from. You could surround your code with nil-checks and AndAnd-style conditional calls – or you could just use #fetch:

  color = options.fetch(:color) # => IndexError: key not found
  # ...

Here we’ve caught the missing value at the point where it was first referenced.

You can use the optional block argument to #fetch to either return an alternate value, or to take some arbitrary action when a value is missing. This latter use is handy for raising more informative errors:

  color = options.fetch(:color) { raise "You must supply a :color option!" }
  # ...

Another common use case is default values. These are often handled with the || operator:

  verbose = options['verbose'] || false

But this has the problem that the case where the element is missing, and the case where the element is set to nil or false, are handled interchangeably. This is often what you want; but if you make it your default it will eventually bite you in a case where false is a legitimate value, distinct from nil. I find that #fetch is both more precise and better expresses your intention to provide a default:

  verbose = options.fetch('verbose'){ false }

In my code I try to remember to use #fetch unless I am reasonably sure that the Array or Hash dereference can’t fail, or I know that a nil value is acceptable by the code that will use the resulting value.
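Worth noting, too, that #fetch accepts a plain second argument as the default. The block form still has the edge when computing the default is expensive, since the block only runs on a miss:

```ruby
options = {'verbose' => false}

options.fetch('verbose', true)     # => false   (key present, default ignored)
options.fetch('color', 'black')    # => "black" (key missing, default returned)
options.fetch('color') { 'black' } # => "black" (block only runs on a miss)
```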

Writing Facades on Legacy Code

h2. The Setup

Let’s say we have a system in production which manages college financial aid departments. Over the years it has accumulated a fairly complex object model for handling the many kinds of forms a financial aid department has to generate and process. It’s not the most elegant and well-factored model in the world, but it gets the job done.

Here are a few of the classes involved in the forms component of the system:

  class Form < ActiveRecord::Base
    has_many :form_revisions
    # ...
  end

  class FormRevision < ActiveRecord::Base
    has_many :form_sections
    has_many :form_signoffs
    # ...
  end

  class FormSection < ActiveRecord::Base
    has_many :form_questions
    # ...
  end

  class FormQuestion < ActiveRecord::Base
    # ...
  end

  class FormSignoff < ActiveRecord::Base
    # ...
  end

h2. The Problem

One particular subsystem deals with student applications for financial aid. All of the application forms have a common essential format: a section listing acceptance criteria that must be met in order to qualify; a section listing exclusion criteria which might disqualify the student; and a field for the financial aid counselor to sign off that the student filled out the form correctly.

Currently, whenever an administrator clicks the “new application form” button, the controller code which creates the new form does something like this:

  form = Form.create!(:name => form_name)
  first_version = FormRevision.create!(:form => form, :version => 1)
  acceptance_section = FormSection.create!(:name => "Acceptance Criteria")
  rejection_section = FormSection.create!(:name => "Rejection Criteria")
  first_version.form_sections << acceptance_section
  first_version.form_sections << rejection_section
  first_version.form_signoffs.create!(:name => 'Counselor')

The process for adding a new acceptance or rejection criterion is similarly tedious:

  form = Form.find_by_name(name)
  form_version = form.current_version
  section = form_version.sections.find_by_name('Acceptance Criteria')
  section.form_questions.create!(:text => question_text)

The tests for this logic (which is all contained in controllers) have to duplicate all of this setup in order to exercise the controllers with realistic data. Lately the devs have taken to using Factory Girl to make the setup easier, but it’s still a duplication, and it seems like there’s always some little detail of how the application code assembles a form that differs from how the test code does it. The other day one of the devs tried to debug one of these differences by manually assembling forms in the console, but he quickly got frustrated by all the steps necessary to get the form “just right”.

h2. Facade Methods

Clearly, there is an opportunity for simplification here. One option is to add some facade methods to the Form class which encapsulate the complexity of building application forms:

  class Form
    def self.create_application_form!
      # ...
    end

    def add_criterion(question_text, kind=:acceptance)
      # ...
    end
  end

However, in our hypothetical financial aid system the Form class is already over 1000 lines long, and the developers have decided to draw a Picard Line on it. Not only that, but application forms are only one of many different kinds of forms that the system manages. If the Form class were to contain specialized code for every type of Form it manages, it would grow unmanageably large.

Subclassing is a possibility. But this system doesn’t use Rails Single Table Inheritance, so even if you saved an ApplicationForm it would come back as a plain Form next time you loaded it, and Ruby doesn’t provide any convenient way for us to downcast it to the correct type.

h2. Wrapper Facade

This is a situation where a Wrapper Facade may be called for.

(Note: I got the term from this paper by Doug Schmidt. I believe the pattern described here follows the spirit, if not the letter, of that work.)

It could look something like this:

  require 'delegate'

  class ApplicationForm < DelegateClass(Form)
    def self.create!(form_name)
      form = Form.create!(:name => form_name)
      first_version = FormRevision.create!(:form => form, :version => 1)
      acceptance_section = FormSection.create!(:name => "Acceptance Criteria")
      rejection_section = FormSection.create!(:name => "Rejection Criteria")
      first_version.form_sections << acceptance_section
      first_version.form_sections << rejection_section
      first_version.form_signoffs.create!(:name => 'Counselor')
      new(form) # wrap the new Form so callers get an ApplicationForm
    end

    def add_criterion(question_text, kind=:acceptance)
      form_version = current_version # delegated to the underlying Form
      section_name =
        (kind == :acceptance) ? 'Acceptance Criteria' : 'Rejection Criteria'
      section = form_version.sections.find_by_name(section_name)
      section.form_questions.create!(:text => question_text)
    end
  end

Here we use the Ruby ‘delegate’ standard library to define a wrapper class which will delegate all undefined calls to an underlying Form instance.

Using the wrapper is straightforward, if slightly more verbose than using methods directly on the Form class:

  # To create a form:
  form = ApplicationForm.create!("Aid Application")
  form.add_criterion("Do you own an Escalade?", :rejection)

  # To modify a form:
  form = ApplicationForm.new(Form.find(form_id))
  form.add_criterion("Can you count to ten without using your fingers?",
                     :acceptance)

h2. Advantages Over Other Approaches

Using a delegate class confers several useful advantages. For instance, it is very easy to construct unit tests that test just the functionality in the wrapper facade by mocking out the underlying Form instance.

  describe ApplicationForm do
    before :each do
      @form = stub("form")
      @it   = ApplicationForm.new(@form)
    end

    it "should be able to add eligibility sections to the form" do
      # etc...
    end
  end

Using a wrapper instead of extending the @Form@ instance with a module means we can selectively override methods in the Form class if needed. Below, we override the @Form#name@ method with our own which appends some text to the name:

  require 'delegate'
  class ApplicationForm < DelegateClass(Form)
    # ...
    def name
      __getobj__.name + " (Aid Application)"
    end
  end

Using a delegate also gives us our own namespace “sandbox” to play in. Any instance variables we use in implementing @ApplicationForm@ will be kept separate from the @Form@ instance variables, so we don’t have to worry about naming clashes.

And, of course, if we ever decide that some or all of the code in the wrapper does belong in the @Form@ class, it is simple enough to move it over.

h2. Conclusion

To sum up, the Wrapper Facade is a useful tool to keep in your toolbox for situations where you want to simplify a particular scenario for a class, without adding any code to the class itself.

Smart Requires in Ruby

I have a lot of Ruby RSpec files that start out with a line something like this:

 require File.join(File.dirname(__FILE__), %w[.. spec_helper])

“This is boilerplate” thought I one day, “my editor should insert this line for me!” But there’s a problem: the line must change depending on how deep in the directory hierarchy the file is found. E.g.:

 require File.join(File.dirname(__FILE__), %w[.. .. .. spec_helper])

This is Ruby, though. Surely there is a concise way to dynamically locate a file in a parent directory? As it turns out, there is. Here’s my solution:

  require 'pathname'
  require Pathname(__FILE__).ascend{|d| h=d+'spec_helper.rb'; break h if h.file?}

OK, it’s two lines instead of one. But the advantage is, now I can insert those two lines into my standard editor template for *_spec.rb files. And it’ll Just Work so long as there is a spec_helper.rb somewhere in the file’s parent directories.

How it works:

  • Pathname#ascend iterates backwards up a file path, successively removing path elements.
  • Pathname#+ joins two path elements using the path separator character (e.g. /).
  • break is called with an argument, causing the block to return the matching path.
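To make the traversal concrete, here’s what #ascend yields for a made-up path:

```ruby
require 'pathname'

# ascend yields the path itself, then each successively shorter ancestor:
Pathname("/home/user/project/spec/models").ascend.map(&:to_s)
# => ["/home/user/project/spec/models",
#     "/home/user/project/spec",
#     "/home/user/project",
#     "/home/user",
#     "/home",
#     "/"]
```

Because break returns its argument from the block, the whole expression evaluates to the first spec_helper.rb found on the way up (or nil if there isn’t one, in which case require will complain loudly).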