Dr. Ben Maughan writes:

At the moment I am rewriting some LaTeX notes into org mode to use in lecture slides. This involves several repetitive tasks, like converting a section heading like this

\subsection{Object on vertical spring}

into this

** Object on vertical spring

Whenever I come across a problem like this, my first inclination is always to write a regular expression replacement for it.

A regular expression solution would likely be concise. It would be elegant. It would neatly state the abstract transform that needs to be performed, rather than getting bogged down in details of transformation. A clean, beautiful, stateless function from input to output.

And by definition, a regular expression replacement solution would have a well-defined model of valid input data. Only lines that match the pattern would be touched.
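For concreteness, the kind of replacement I have in mind might look like this in Python — an illustrative sketch only, since in Emacs you'd express the same pattern interactively with something like replace-regexp:

```python
import re

def sec_to_star(line):
    """Turn a LaTeX \\subsection heading into an org-mode heading.

    Lines that don't match the pattern pass through untouched --
    that's the well-defined input model mentioned above.
    """
    return re.sub(r'\\subsection\{([^}]*)\}', r'** \1', line)

print(sec_to_star(r'\subsection{Object on vertical spring}'))
# → ** Object on vertical spring
```

Note how the whole transform is a single stateless expression: pattern in, replacement out.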

Like I said, a regular expression is always my first thought. But then I’ll work on the regex for a while, and start to get frustrated. There’s always some aspect that’s just a little bit tricky to get right. Maybe I’ll get the transform to work right on one line, but then fail on the next, because of a slight difference I hadn’t taken into account.

Minutes will tick by, and eventually I’ll decide I’m wasting time, throw it away, and just do the editing manually.

Or, on a good day, when I’ve had just the right amount of coffee, I will instead remember that macros exist. Macros are the subject of Maughan’s article.

The trick to making a good macro is to make it as general as possible, like searching to move to a character instead of just moving the cursor. In this case I did the following:

  1. Start with the cursor somewhere on the line containing the subsection and hit C-x ( to start the macro recording
  2. C-a to go to the start of the line
  3. C-SPC to set the mark
  4. C-s { to search forward to the “{” character
  5. RET to exit the search
  6. C-d to delete the region
  7. Type “** ” to add my org style heading
  8. C-e to move to the end of the line
  9. BACKSPACE to get rid of the last “}”
  10. C-x ) to end the recording
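To make the contrast with the regex concrete, here is roughly what those ten steps do, written out as a procedure in Python — a sketch of the macro's behavior on a single line, with point and mark as ordinary variables (borrowed names, not a real Emacs API):

```python
def sec_to_star_macro(line):
    # A sketch of the macro's steps applied to one line of text.
    point = 0                          # C-a: go to the start of the line
    mark = point                       # C-SPC: set the mark
    point = line.index('{') + 1        # C-s { RET: search forward past "{"
                                       # (raises ValueError if there is no "{",
                                       #  much as the macro would stop and ask)
    line = line[:mark] + line[point:]  # C-d: delete the region
    line = '** ' + line                # type "** "
    line = line[:-1]                   # C-e, BACKSPACE: drop the trailing "}"
    return line

print(sec_to_star_macro(r'\subsection{Object on vertical spring}'))
# → ** Object on vertical spring
```

Every step mutates some state — the cursor, the mark, or the line itself — which is exactly the character of the macro it mimics.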

Now I can replay my macro with C-x e, but I know I’ll need this again many times in the future, so I use M-x name-last-kbd-macro and enter a name for the macro (e.g. bjm/sec-to-star).

If I ask Emacs to show me an editable version of Maughan’s macro, I see this:

C-a         ;; move-beginning-of-line
C-SPC       ;; set-mark-command
C-s         ;; isearch-forward
{           ;; self-insert-command
RET         ;; newline
C-d         ;; delete-char
**          ;; self-insert-command * 2
SPC         ;; self-insert-command
C-e         ;; move-end-of-line
DEL         ;; backward-delete-char-untabify
This is the antithesis of a pattern-matching, functional-style solution. This is imperative code. It’s a procedure.

Let’s list some of the negatives of the procedural style:

  • It reveals nothing about the high-level transformation being performed. You can’t look at that procedure definition and get any sense of what it’s for.
  • It’s almost certainly longer than a pattern-replacement solution.
  • It implies state: the “point” and “mark” variables that Emacs uses to track cursor and selection position, as well as the mutable data of the buffer itself.
  • It has no clear statement of the acceptable inputs. It might start working on a line and then break halfway through.

Now let’s talk about some of the strengths of the procedural approach:

  • It is extraordinarily easy to arrive at using hands-on trial and error.
  • The hands-on manipulation becomes the definition, rather than forcing the writer to first identify the transforms, then mentally convert them into a transformation language.
  • It has a fair amount of robustness built-in: by using actions like “go to the next open bracket”, it’s likely to work on a variety of inputs without any specific effort on the part of the programmer.
  • It can get part of the work done and then fail and ask for help, instead of rejecting input that fails to match the pattern.
  • It lends itself to a compelling human-oriented visualization: a cursor, moving around text and adding and deleting characters. In other words, it can tell its own story.
  • You can edit it without thinking too hard. You don’t have to hold a whole pattern in your head. You can just advance through the story until you get to the point where something different needs to happen, and add, delete, or edit lines at that point.
  • As the transforms become more elaborate, a regex-transformational approach will eventually hit a wall where regex is no longer a sufficiently powerful model, and the whole thing has to be rewritten. There’s no such inflection point with procedural code.

Time after time, the pattern-matching, functional, transformational approach is the first that appeals to me. And time after time it becomes a frustrating time-sink process of formalising the problem. And time after time, I then turn to the macro approach and just get shit done.

The procedural solution strikes me as being at the “novice” level on the Dreyfus Model of Skill Acquisition. We tell the computer: do this sequence of steps. If something goes wrong, call me.

By contrast, more “formal” solutions strike me as an attempt to jump straight to the “competent” or even “proficient” level: here is an abstract model of the problem. Get it done.

One problem with this, at least looking at it from an anthropomorphic point of view, is that this isn’t how knowledge transfer normally works. People work up to the point of advanced beginner, then competent, then proficient by doing the steps, and gradually intuiting the relations between them, understanding which parts are constant and which parts vary, and then gaining a holistic model of the problem.

Of course, we make it work with computers. We do all the hard steps of modeling the problem, of gaining that level-three comprehension, and then freeze-dry that model and give it to the computer.

But this imposes an artificially high “first step”: witness me trying, and failing, to get a regex solution working in a short period of time before reverting to the “dumb” solution of writing a procedural macro through trial and error.

And I worry about the scalability of this approach, as we have to do the hard work of modeling the problem for every last little piece of an application. And then re-modeling when our understanding turns out to be flawed.

This is one reason I’m not convinced that fleeing the procedural paradigm as fast as possible is the best approach for programming languages. I fear that by assuming that a problem must always be modeled before being addressed, we’re setting ourselves up for the exhausting assumption that we have to be the ones doing the modeling.

(And I think there might be a tiny bit of elitism there, as well: so long as someone has to model the problem before telling the computer how to solve it, we’ll always have jobs.)

This is also why I worry a little about a movement toward static systems. The interactive process described above works because Emacs is a dynamic lisp machine. A machine which can both do a thing and reflect on and record the fact that it is doing a thing, and then explain what it did, and then take a modified version of that explanation and do that instead.

I’ve recently realized that I’m one of those nutjobs who wants to democratize programming. And I think in order for that to happen, we need computing systems which are dynamic, but which moreover are comfortable sitting down at level 1 of the Dreyfus model. Systems that can watch us do things, and then repeat back what we told them. Systems that can go through the rote steps, and ask for help when something doesn’t go as expected.

Systems that have a gradual, even gradient from manual to automated.

And then, gradually, systems that can start to come up with their own models of the problem based on those procedures. And revise those models when the procedures change. Maybe they can come up with internal, efficient, elegant transformational solutions which accomplish the same task. But always with the procedure to fall back on when the model falls apart. And the users to fall back on when the procedure falls apart.

Now, there are some false dichotomies that come up when we talk about procedural/functional, formal/informal. For instance: there’s no reason that stateful, destructive procedures can’t be built on top of persistent immutable data structures. The bugbear of statefulness needn’t haunt every discussion of imperative coding.

But anyway, getting back to the point at hand: there is an inescapable pragmatism to imperative, procedural code that mutates data (at least locally). There is a powerful convenience to it. And I think that convenience is a signal of a deeper dichotomy between how we show things to other people, vs. how we [think we should] explain things to computers. And for that reason, I’m nervous about discarding the procedural model.

P.S.: I’m going to flagrantly exploit the popularity of this post to say: if you like software thinky-thoughts such as this one, you might also enjoy my newsletter!

Published by Avdi Grimm

15 Comments

  1. I guess there is a reason why, in memory, you always put the information about where the data ends right into the model or at the beginning of the data itself. A curly bracket at the end kinda forces you to analyze the underlying structure fully, the macro being a heuristic shortcut approach that might well fail under not-so-normal conditions. You have to go to the end to know where the end is. Silly, isn’t it?

    But we prefer to create data structures and file representations of them, where you never really know what to expect next. It’s a bag full of surprises, which is nice for easily creating those structures in the first place, but shifts all the effort to reading/updating time. It would be so much nicer if the complexity were shifted back to the time of designing, creating, and deleting these structures. Those operations are much rarer and less time-critical than reading and updating, imho.

    Just imagine, a subsection wouldn’t allow any other } on the same line. Or having an indicator at the { that tells you exactly how many characters will follow until the subsection header ends. That would make it predictable. It would make pattern matching easy. Just as it would shorten the macro approach immensely.

  2. […] The Inescapable Pragmatism of Procedures // Lobsters […]

  3. A real emacs user/programmer would realize he’s just described the input for an emacs regex generator. You obviously need to add a key command that starts taking keystrokes, processes/compiles them into the regex you were not a good enough programmer to write, and then applies that regex instead of running the macro until it fails.

    OK – let me pull my tongue out of my cheek – it’s pretty thoroughly wedged.

    Yeah, I used to do this exact thing when I used emacs.

    Now that I [mostly] use Atom my approach to this problem is different. I highlight the identifying text, duplicate my cursor until I have N copies of it at all the appropriate spots, then I just do the edit that I need to happen at all those locations. It’s pretty cool.

    Additional advantages:
    * If your matching regex was wrong initially, you get to see the incorrectly cursor’d block before you edit
    * If part of your process edits the wrong thing/incorrectly, again you get to see it as it is happening

  4. Aren’t OO and FP merely different strategies for organizing procedures, anyway? OO organizes around the most commonly-used parameter among groups of procedures and FP organizes around little, composable fragments of the procedures. Awfully pragmatic, no? 🙂

    • No, I think they’re fundamentally different — depending on how you define your fundamentals, of course.

      FP tries to define functions which have inputs and outputs but no (or limited) side effects.

      OO tries to define functions which operate on state held in the object. (I tend to agree with Kent Pitman who said that OO is fundamentally about identity, not encapsulation or polymorphism.)

  5. @Kurt, you can of course do this in emacs as well: https://github.com/magnars/multiple-cursors.el

  6. The very reason why I love C#: it has no opinion on how things ought to be done.

    want functional? LINQ to objects
    want objects? can do
    want lambdas? can do
    want procedures? can do
    want patterns? planned for next version

    It’s all about getting the job done, with or without style.

  7. You obviously aren’t advocating tossing out functional patterns, but rather recognizing that procedural programming has advantages, one being its approachability for newcomers, and also for veterans dealing with unfamiliar terrain. Couldn’t agree more.

    One thought, though, occurred to me: Have you not provided an example of functional programming at its worst (e.g. regex), and procedural at its best (e.g. record and playback)?

    It may be the type of projects I work on, but I typically find functional paradigms – on balance – offer both the shortest solutions AND the least amount of mind juggling.

    Thoughts?

  8. It’s old – even its sequel is old – but you might enjoy “Watch What I Do: Programming By Demonstration.” You might enjoy it a lot.

    http://acypher.com/wwid/

    PBD seems to have faded, but it aimed to use AI to implement the kind of systems you describe: “Systems that can watch us do things, and then repeat back what we told them. Systems that can go through the rote steps, and ask for help when something doesn’t go as expected.”

    It looks like Allen Cypher and others put out a related book this decade: https://www.elsevier.com/books/no-code-required/cypher/978-0-12-381541-5

  9. This is a great way of thinking about it.

    Another example I’d put forth is the shell. I build up Unix command lines by typing “cat foo.txt”, then “cat foo.txt | grep blah”, then “cat foo.txt | grep blah | sed blarg”, and so on until I’m done. I’d probably never be able to write a whole command line correctly on the first try, but I can string together a bunch of tiny pieces and check the result of each one.

  10. I like the thoughts in the later parts of this article – indeed, I have been left wondering more than once why we don’t all just write Haskell or Scala the way they are sometimes advertised.

    The initial example strikes me as odd, however. When I think “manipulating LaTeX source code and transforming it into some other representation”, I don’t jump to regexes. The first thing I’d consider would be a proper (context-free) parser. Now I don’t know about the quality of available LaTeX parsers, and it obviously depends on the amount of source code you have to convert in order to justify whether you want to learn that parser’s interface, but there should – in principle – be little objection to the fact that programming and markup languages are generally not regular, and as such a regex is, purely from a modeling point of view, exactly the wrong tool to throw at it. It’s a bit like insisting that every mathematical function is linear – we do it often in physics or engineering, to approximate a result because it’s easier, and similarly, a regex might yield a useful result in some cases, but it’s not technically the right thing.

    I always cringe a little when I see people throwing regexes at HTML. That’s not what they’re good for.

    So, in a sense, the example you’re giving is not the best one. The regex-based method is inferior not because modelling is not suited for this task, but because the model was inappropriate. That’s not to say that the arguments for using a more procedural approach are all invalid; one might be unfamiliar or uncomfortable with formal language hierarchies or it’s possible that there are no good, easy to use Latex parsers out there. However, I would contend that given that the problem is big (i.e. enough source code) and important enough using a parser would be the proper way to do it.

    So while I would agree that in many cases we don’t have a full model of the problem we’re trying to solve (and that’s why the Haskell approach possibly? fails? – I know some Haskell advocates would probably argue this and point out that one can prototype in Haskell as well – I don’t know how well that works.), this example might not be the best fit because the problem (essentially, cross-compilation) has been fully solved already.

    Sorry for going off on a tangent; pretty much enjoyed the post. 🙂

    • A formal grammar may be more correct, but it’s even more “indirect”. Take the time I’ve hurled down the toilet using regexps for little problems like this, multiply it by ten, and you have the time I’d waste building a formal grammar for it.

      So I’d say your example reinforces my point. It’s not that we don’t always have a formal model. It’s that when you think “hey, there’s a formal model for this problem somewhere”, now you have two problems. More importantly, the point is that you can solve problems quickly and incrementally WITHOUT formal models using automated direct manipulation, and that’s the secret power of “inferior”, “informal” procedures.

      • Your argument is completely correct when you have to build a formal grammar from scratch. However, in the case of LaTeX we should assume that this has already been done, because obviously the code has to be parsed before it is compiled to postscript (or PDF, or whatever). The only question that remains is how easy it would be to pull out that parser and integrate it into your workflow. For editing 2 or 3 documents that are not too long, this would be too much effort, I agree (if we were talking about HTML, however, not so. With Nokogiri at least I am comfortable enough to be able to do this in a quick script fairly easily). If you have hundreds of long documents, it’s the way I’d choose.

        So from my point of view, I’d rank the easiness of these strategies like this:

        using a formal model that has been built and tested is easier than
        hand-rolling / prototyping your own approximation, which is easier than
        building your formal model from scratch

        But I don’t think we’re essentially in disagreement here. I just find that sometimes people overlook the fact that a problem has already been solved somewhere and reinvent the wheel all over again, where it usually, IMHO, pays more dividends to use a proven solution.

