Code And Cocktails

Initial Emacs Setup for React/ReactNative

| Comments

(Cross posted to the Cyrus Innovation Blog)

It looks likely that I’ll be doing some ReactNative work soon, so I took some time to start setting up my Emacs environment. All the relevant setup can be found in the init-react.el file in my GitHub dotfiles repository. That file is likely to change, so the previous link may not match the code below; the code below matches the initial version (59e7728) specifically.

Hereinbelow I will add more annotations to those already found in that file, along with snippets of the code.

The first thing that needed to be set up was a mode for React code. React code files can mix JavaScript with HTML markup, and it does not appear that js-mode (the built-in JavaScript mode) handles that. After a short bout of Googling it looks like web-mode can handle it, and after some brief testing it does appear to work reasonably well. The React/ReactNative community does not appear to have settled on either *.js or *.jsx as the extension for code files, and since web-mode handles plain JavaScript just fine I chose to use it in both cases.

(add-to-list 'auto-mode-alist '("\\.jsx?$" . web-mode))

The next thing I wanted to set up was a linter. I thought this especially important as React uses an ES6 dialect of JavaScript which I am not entirely familiar with yet, and a linter can help me “do the right thing”. On the suggestion of a coworker (who has done some React work) I chose ESLint with the Airbnb configuration settings. These defaults prompted me to standardize on two spaces for indentation. (I also set js-indent-level, since js-mode is still in use for JSON files.)

  (setq web-mode-markup-indent-offset 2
        web-mode-css-indent-offset 2
        web-mode-code-indent-offset 2)
  (setq js-indent-level 2)

Setting up the linter to run via flycheck took a small amount of work since I don’t like to install project specific tools globally. (I know that this is contrary to current mores, but I have been tripped up by global vs. local installations before so I shy away from them when I can.)

First I needed to integrate NVM with Emacs so that Emacs could run ESLint at all.

(require 'nvm)
(nvm-use (caar (last (nvm--installed-versions))))

(The choice to use the last version found is totally arbitrary. If and when I get more versions of Node.js on my machine I’ll have to make a more careful choice.)

Next I hooked into projectile to look for a locally installed ESLint and use it if found. The projectile-after-switch-project-hook functions are called after Projectile has switched directories to the project so one can simply check the project for the desired file.

(add-hook 'projectile-after-switch-project-hook 'mjs/setup-local-eslint)

(defun mjs/setup-local-eslint ()
  "If ESLint is found in the node_modules directory, use that for flycheck."
  (interactive)
  (let ((local-eslint (expand-file-name "./node_modules/.bin/eslint")))
    (setq flycheck-javascript-eslint-executable
          (and (file-exists-p local-eslint) local-eslint))))

(Note: the function is interactive because I found that at least a few times I was looking at a JavaScript file which I had come to not via projectile. Making it interactive lets me run it manually in the rare cases I need to.)

Flycheck’s ESLint integration is limited to only certain modes, and web-mode is not one of them, so I needed to add it to the white-list of modes:

  (with-eval-after-load 'flycheck
    (push 'web-mode (flycheck-checker-get 'javascript-eslint 'modes)))

With that I can easily write React/ReactNative code with all the bells and whistles I like.

Later I will add support for building and testing I’m sure. But first I need to determine what building & testing in a ReactNative environment will even look like.

Software Life Span

| Comments

(Cross posted to the Cyrus Innovation Blog)

Software maintenance is an extremely important but highly neglected activity.

…says Boehm1 in the midst of a long paper about the current and possible future state of Software Engineering at the end of 1976. I think this statement is just as true today as it was almost 40 years ago when Boehm wrote it.

Boehm defines Software Maintenance as “the process of modifying existing operational software while leaving its primary functions intact” (which sounds a lot like what we now call refactoring). He states that maintenance has three main ‘functions’, which imply certain needs:

  • Understanding the existing software: This implies the need for…well-structured and well-formatted code
  • Modifying the existing software: This implies the need for software, …, and data structures which are easy to expand and which minimize side effects of changes…
  • Revalidating the modified software: This implies the need for software structures which facilitate selective retest, and aids for making retest more thorough and efficient.

We have the tools today to do these things. They are things like modular design, the SOLID principles, and TDD. We have learned ways to write well-structured code which minimize side effects of changes and which facilitate retest. We’ve written languages and libraries which help us do these things.

And yet maintenance is still a problem.

Perhaps we as an industry focus only on the “cool” stuff, or believe/hope that “someone else” will maintain it, or focus on immediate customer needs without considering long-term ones. The customers themselves may not consider the long-term costs.

As professionals we need to consider the life-span of the code we are writing, and write it accordingly. If a civil engineer is asked to build a bridge that need only last for a single crossing, it will be quite different than if the bridge needed to last for decades (or longer). We need to think in the same way.

Currently we often write software as if it only needs to work for today. No thought is given to tomorrow.

We have the tools to write software that can live as long as it needs to. But we need to treat the life-span as a requirement.

  1. Boehm, B. W. “Software Engineering.” Classics in Software Engineering. Ed. Edward Nash Yourdon. New York: YOURDON Press, 1979. pp. 325-361. Print. 

Is Your Retro Done?

| Comments

(Cross posted to the Cyrus Innovation Blog)

Retrospective meetings are an important part of any sort of “Agile” process. I have a rule about when a retrospective meeting is over.

  1. Everyone that wants to talk has talked.
  2. There is at least one action item.
  3. There is at least one volunteer for each action item.

A good retrospective meeting can help a team achieve what it needs and wants. Often, however, these meetings tend toward directionless discussion and complaining. While sometimes a ‘complaining’ retrospective can be good and cathartic, these should not be the usual meeting.

Recently a client’s retrospective meetings had fallen by the wayside. Even when they had them they were mostly at the complaining end of the spectrum. We had helped them get back into regular retrospectives, weekly actually, but they were still not very good.

At the end of one retrospective the team member playing the facilitator role tried to wrap the long ramble up by asking “Retro Complete?”. I cut off the murmured agreements with another question: “Do we have some action items?”.

A retrospective meeting should result in action items. These might be a new story/ticket/card, a task, a new process, or an experiment to try. Without these the meeting was just discussion.

Also important is that the action items have volunteers. If no one wants to do or ‘champion’ an action item then find out why. Perhaps the action item just isn’t that important to the team; then drop it. Perhaps it is too big and amorphous; then break it down to a single next action. Or perhaps it is scary; if this is the case then this scariness is something for more discussion.

In the end the team discussed what actions to take on the topics they had discussed and created a short list. They even had a volunteer for at least one. Still not the most successful retrospective, but better than the last. Hopefully they’ll keep this habit and their retros will become even more effective.

Reminder: Agile Is Risk Management

| Comments

In the References section of “Waltzing with Bears” (DeMarco & Lister 2003) there is a note on “Planning Extreme Programming” (Beck & Fowler 2001) which says “When viewed as a set of [Risk Management] strategies, XP makes all kinds of sense.” This made me review how XP (or Agile more generally) is a risk management technique.

The incremental approach of XP reduces risk of late delivery or wrong delivery. The demo, planning and retrospective meetings seem to be an implicit risk analysis/mitigation exercise. It might be beneficial to make this more explicit.

One place where XP doesn’t line up with DeMarco & Lister’s thoughts on risk management is their advice that there should be sizable up-front design and estimation. XP eschews this, arguing that the cost of up-front design & estimation is higher than the risk they mitigate. That seems a reasonable risk vs. cost choice. Adding some explicit up-front brainstorming should be sufficient to cover the problem of missing large-impact risks. Furthermore, the iterative nature of the methodology allows for a just-in-time approach to the costs and risks.

The book also contains a quote from Tom Gilb who said (paraphrased) ‘Be ready to pack up whatever you’ve got any given morning and deliver it by close of day’. This fits well with the idea in XP that the result of every iteration should be deliverable and producing value. While a one-day iteration, as implied by the quote, is perhaps too extreme for many teams, the exercise of determining what it would take to deliver value in shorter and shorter iterations is worthwhile.

Bootstrapping a Testing Library

| Comments


What’s a person to do when one prefers developing in a TDD style and starts a project where everything not directly in the language must be written from scratch? Why, write a testing library of course! But without a testing library, how can one write this testing library in a TDD fashion? Herein is how I went about doing it.

(Full disclosure: this is not the first time I’ve tried this sort of thing1. But I had forgotten I had done it before until I was writing this article.)


“If you wish to make an apple pie from scratch, you must first invent the universe.” – Carl Sagan

I have decided to have a crazy project wherein I’ll write an application in Common Lisp and only Common Lisp. That is to say, no other libraries. I have allowed myself to use extensions provided by the implementation, SBCL2, so luckily I won’t need to write my own socket and threading extensions.

This is a crazy and stupid idea. But the point of the project is to give me a place to play around with things I don’t normally deal with and in a language that I like to play with.

Test Driving a Testing Library

Where to start?

Common Lisp has no built-in testing library, but it does have assert3. If you have assert you can write a testing library. The problem is writing the first test. I puzzled over it a little and decided I’d need a function which would take an expression to evaluate (the test) and would return some data structure indicating the results of the test. Given that, here is the first test:

(assert (assoc :failure (test::collect-test-results (assert nil))))

So this is asserting that the return value of collect-test-results when applied to (assert nil) will be an alist4 which will have a cons cell whose car is :failure.

This test directly implies the following test:

(assert (not (assoc :failure (test::collect-test-results (assert t)))))

That is to say: if the expression does not raise an exception there will not be such an element in the returned alist.

After implementing a macro which lets those tests pass I wrote some more tests to round out what I thought would be a useful implementation of this base function of the library. I now had these tests to bootstrap my library.

(assert (assoc :failure (test::collect-test-results (assert nil))))
(assert (not (assoc :failure (test::collect-test-results (assert t)))))
(assert (equal (assoc :value (test::collect-test-results 'foo)) '(:value . foo)))
(assert (assoc :duration (test::collect-test-results 'foo)))
(assert (assoc :start (test::collect-test-results 'foo)))
(assert (assoc :end (test::collect-test-results 'foo)))
(format t "~A...PASSED~&" 'test::collect-test-results)

With this I felt that I had enough testing to make me feel confident in my little implementation. It wasn’t perfect - but good enough for me to now use this function to test other parts of my library.
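The same bootstrapping move works in any language with a bare assert. As an illustration only (the function name is borrowed from the Lisp, but the dict shape and the thunk argument are my own simplifications), a rough Python analogue:

```python
import time

def collect_test_results(thunk):
    """Run thunk, returning a dict of results (a stand-in for the Lisp alist)."""
    results = {}
    results["start"] = time.time()
    try:
        results["value"] = thunk()
    except AssertionError as err:
        # A failed assert shows up as a :failure-style entry.
        results["failure"] = err
    results["end"] = time.time()
    results["duration"] = results["end"] - results["start"]
    return results

def failing():
    assert False

def passing():
    assert True

# Bootstrapping: bare asserts are enough to test the test helper itself.
assert "failure" in collect_test_results(failing)
assert "failure" not in collect_test_results(passing)
assert collect_test_results(lambda: "foo")["value"] == "foo"
assert "duration" in collect_test_results(lambda: "foo")
print("collect_test_results...PASSED")
```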

How to write a test

With collect-test-results I had a way of writing and running individual simple tests. It wasn’t a very convenient thing to use, but it let me write tests for deftest, which would let me define tests. I started by writing the sort of test I wanted to be able to write:

(deftest a-simple-failing-test
    "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))

Through a few tests, written using collect-test-results, I determined that deftest would intern a symbol with the same name as its first argument and bind that symbol’s function property to a function which, when called, would evaluate the body of the deftest via collect-test-results. These symbols are put into a list which can be retrieved from the library. Furthermore, defining a test with the same name as an existing test does not create a duplicate test.

It might be clearest to just show you how these tests (and an extracted helper function) ended up:

(defmacro assert-no-failure (&body assertion)
  (let ((failure (gensym "fail")))
    `(let ((,failure (assoc :failure (test::collect-test-results
                                       (assert ,@assertion)))))
       (assert (not ,failure) () (format nil "~A" ,failure)))))

(deftest a-simple-failing-test
    "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))

(let ((test-list (test::test-list)))
  (assert-no-failure (equal test-list '(a-simple-failing-test)))
  (assert-no-failure (fboundp (car test-list)))
  (assert-no-failure (assoc :failure (funcall (car test-list))))
  (assert-no-failure (string= "This is a very simple test which fails"
                              (documentation (car test-list) 'function)))
  (assert-no-failure (equal (assoc :test-name (funcall (car test-list)))
                            '(:test-name . test-test::a-simple-failing-test))))

(deftest a-simple-failing-test
    "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))

(assert-no-failure (= (length (test::test-list)) 1))

(deftest a-simple-passing-test
    "This is a very simple test which passes"
  (assert (= 4 (+ 2 2))))

(assert-no-failure (= (length (test::test-list)) 2))

(format t "~A...PASSED~&" 'test:deftest)

Running tests

Now that we can define tests and evaluate them, all that is left is a convenient way to run the defined tests. Three quick tests on the output of run-all-tests were proof enough for me that it would execute each test, report which ones failed, and print a count of passes and fails to *standard-output* (the tests that the call to run-all-tests would run were the ones defined above, one passing and one failing):

(let ((*standard-output* (make-string-output-stream)))
  (test:run-all-tests)
  (let ((output (get-output-stream-string *standard-output*)))
    (assert-no-failure (search "A-SIMPLE-FAILING-TEST...FAILED." output))
    (assert-no-failure (search "PASSED: 1" output))
    (assert-no-failure (search "FAILED: 1" output))))
(format t "~A...PASSED~&" 'test:run-all-tests)


At this point my testing library has two main entry points: deftest and run-all-tests. To create them, I first used assert to test drive the creation of a lower-level function, collect-test-results, which I then used to test drive deftest, which I then used to test drive run-all-tests.

Next Steps and Thoughts

Now that I have this testing library I can use it to test drive the rest of the application I will write. I’m sure along the way I’ll be extending this library as I find new requirements for it. I’ll probably also be writing some assertion library to make the tests more expressive.

The resulting tests5 and code6 are in my GitHub repository for this project: yakshave7.

Clojure Project 'Reloaded' Pattern Gotcha

| Comments


Ensure that user.clj does not contain dependencies (even transitive) upon code that needs to be compiled. This file is loaded well before Leiningen has gotten far enough to compile files.

Problem Description

In my latest learning project with Clojure I decided to try Stuart Sierra’s Reloaded project pattern1. I liked the idea of this project pattern because it promised to make the REPL driven development on a web application smoother. Things were going smoothly as I worked on simple dummy pages for my little application. However when I connected the data file parsing code with its custom exception to the application I started getting compilation and class not found errors related to the custom exception. The reason for this is a rather simple one - but was difficult for me to find information on or figure out.

First, an aside about custom exceptions. It seems that in the Clojure community custom exceptions are avoided2; either one of the built-in Java exceptions, ex-info3 or Slingshot4 is used instead. However, it had been my favored approach to create a domain-specific exception for my application’s needs, so I stumbled forward and learned how to do it. It was relatively simple using the :gen-class option of ns and the :aot feature of Leiningen. And it worked well… until I connected the code using the exception with the web application that was implemented in the reloaded pattern.

A big part of the reloaded pattern is the use of functions in user.clj to start and stop the system. This file is loaded by Clojure whenever it is started5 so it is perfect for functionality wanted in a REPL. This file is kept in a directory which is only included in the class path when the dev profile is used (the repl and compile tasks use the dev profile by default).

Everything is fine until the code in user.clj depends upon (even transitively) code which must be compiled (such as custom exceptions). Then we hit an annoying chicken-and-egg problem6 wherein Leiningen, when trying to compile (or launch the REPL), naturally starts Clojure, which in turn loads user.clj, which in turn depends upon code that needs to be compiled. The error that is reported says that the compiled class cannot be found.

The Confusion

This error led me to first think that my custom exception was not written properly and thus wasn’t being compiled, then I thought it was a problem of the :aot feature in Leiningen and interaction with the dev profile. But the problem was more fundamental. It was just my sort of luck that kept me from finding the answer until I spent hours debugging and researching. Now it is easy to find several reports of this problem7. It is not a Leiningen problem, not a Clojure problem, not a reloaded pattern problem, but an annoyingly unfortunate interaction between them.

The Workaround

Luckily, once the problem was identified, there is a relatively easy workaround. I took the parts of user.clj which depended upon the custom exception and moved them to a new file, reloaded.clj, which is then loaded when the REPL starts by using the :repl-options configuration in project.clj. :repl-options has an :init configuration which can contain an expression to be evaluated when the REPL is starting. I set it to (load "reloaded").
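In project.clj the relevant configuration ends up looking roughly like this (a sketch; the dev directory name is from my setup and may differ in yours):

```clojure
;; project.clj (fragment, inside defproject)
:profiles {:dev {:source-paths ["dev"]}}  ; user.clj and reloaded.clj live in dev/
:repl-options {:init (load "reloaded")}   ; evaluated as the REPL starts
```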

Mocking Class Constructor in Python With Mockito

| Comments

In the current code I’m working on we once in a while want to mock the constructor of a class. It is only really needed because we have a few classes which sadly do heavy lifting in their constructors.

The common wisdom on the team was “you can’t do that”, but it bugged me and I eventually googled the right things and found out how to do it. The moment I did it myself I realized that it was obvious. Let me explain.

In Python Mockito the standard form of stubbing is done with the when function, operating upon an object:

when(obj).method(arg1, arg2).thenReturn(5)

This states that when obj.method is called with arguments arg1 and arg2 then return 5. (There are other things to do besides thenReturn but for purposes of this discussion that is enough).

What about if you want to mock a constructor? The naïve approach is to try:

when(Klass).__init__(arg1, arg2).thenReturn(fakeKlassInstance)

But that doesn’t work. It will return an error equivalent to “NoneType does not have an attribute thenReturn”.

Making it work is pretty simple. Let’s say that Klass is defined in a module called klass. Then we can do the following:

import klass
when(klass).Klass(arg1, arg2).thenReturn(fakeKlassInstance)

It works because a module is an object on which the classes are methods! It was so obvious when I saw it work. The clues were right in front of my face: calling the constructor of a class is as simple as using the class name as if it were a function. That’s because it is! I’d like to blame too many years of Java & C#, where “importing” is more about telling the linker where to find things than about creating objects that can be manipulated.

So to sum up: a class is a method on a module which returns an instance of that class. When you import a module you are creating a variable of that name bound to an object which represents that module.
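In fact the trick needs no mocking library at all; a stub like the one above boils down to rebinding the class attribute on the module object (a sketch of the principle, not of Mockito’s actual implementation):

```python
import sys

class Foo(object):
    def __init__(self):
        self.x = 5

# A module is an object; its classes are attributes on it.
this_module = sys.modules[__name__]
real_foo = this_module.Foo

# Rebind the attribute: "calling the constructor" now calls our fake.
this_module.Foo = lambda: "fake instance"
assert Foo() == "fake instance"

# Restore the real class, as a mocking library's unstub would.
this_module.Foo = real_foo
assert Foo().x == 5
```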

Here is a test file which can be used to play with this idea. Note it uses the fact that the name of the current module is bound to a variable called __name__ and that sys.modules is a hash of all modules.

import sys
import mockito

class Foo(object):
    def __init__(self):
        self.x = 5

    def method(self):
        return 10

f = Foo()
print f.method()

# This will not work and throws an error:
# mockito.when(Foo).__init__(mockito.any()).thenReturn('blah')

# This works: stub Foo as a "method" of the current module.
mockito.when(sys.modules[__name__]).Foo().thenReturn('fake instance')
print Foo()

mockito.unstub()
print Foo().x

The Cocktail Pattern Language

| Comments


The idea of a “Pattern Language” is well known and well received in the Software community. Since the work of the Gang of Four1 it has helped in the communication of solutions and the discovery of new ones. Cocktails have a pattern language which is well recognized by professionals but not by consumers. In this post I hope to make the Cocktail Pattern Language visible and useful to cocktail consumers and amateur bartenders.

What is a cocktail?

To begin with, a “cocktail” is defined, classically, as the mixture of a “spirit”, sugar, water, and bitters. This is less a recipe for a specific drink than a pattern for all possible cocktails. While the modern concept of a cocktail has expanded from this origin, it is still a good starting place for this discussion, and a fundamental example of a cocktail pattern.

What is a pattern?

The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. – Christopher Alexander2

Basically by “pattern” I mean a name for a particular way of doing something, a solution to a problem. A pattern does not dictate the only way to do something but shows the “shape” of the solution. The specifics of the situation will always cause variations.

Examples of Patterns

The easiest way to describe what cocktail patterns are is to give examples.

The Manhattan:

The recipe for a Manhattan is 2 parts Rye (or Bourbon), 1 part Sweet Vermouth, and Angostura bitters. The recipe contains a main spirit: Rye; bitters: Angostura; and sweet: Vermouth (which also adds more kick and a bit more bitterness).

The simplest variant is The Rob Roy: replace the Rye with Scotch. Next simplest are the Dry and Perfect variants: Dry Vermouth instead of Sweet, or equal parts of the two, respectively.

A more distant variant is The Toronto: 2 oz. Rye, .25 oz. Fernet Branca, .25 tsp. sugar, and Angostura bitters. While on the surface it doesn’t sound like the same drink, it is if you break it down. The main spirit is still Rye; the bitter and extra kick are added by the Fernet and Angostura, while the sweet is provided directly by sugar.

Thus the pattern of a Manhattan is a whiskey paired with bitters and sweet.

The Martini

One cocktail which is (in)famous for its variants is the Martini. The pattern of a Martini is simply Gin (or Vodka) and Vermouth. Martinis can be Sweet, Perfect or Dry. The amount of vermouth can vary depending on the taste of the drinker. Different garnishes can be used, additional ingredients can be added as desired.

One variant is The Bronx. It is a Perfect Gin Martini with the addition of Orange Juice: 6 parts Gin, 3 parts Sweet Vermouth, 2 parts Dry Vermouth, and 3 parts Orange Juice.

Another variant is The Vesper Martini which is essentially a Dry Gin Martini: 3 oz. Gin, 1 oz Vodka, .5 oz Lillet Blanc3. It replaces the Vermouth with Lillet (similar to Vermouth in that it is a fortified wine with herbal ingredients).

One last variant is The Martinez which is basically a Sweet Martini: 2 oz Gin, .75 oz Sweet Vermouth, .25 oz Maraschino Liqueur and Angostura bitters. Here the Sweet Vermouth is augmented with Maraschino and a bit of bitters is added.

The Sour

A final pattern I’ll mention is the Sour. Drinks in this pattern are the Whiskey Sour, the Gimlet, and the Daiquiri. The pattern here is a spirit along with a good amount of citrus juice (often Lime) and some sugar.

Why it is useful

The usefulness of speaking of patterns is to have a common terminology for discussing the situations designers already see over and over.4

Patterns in cocktails can be very useful to the bartender because it helps them choose the next drink for the patron. The patron may, as I do, say “I like Manhattans” or “I’d like something like a Daiquiri” and the bartender can rattle off a series of variants upon that pattern, or even invent their own.

Also the patterns overlap. For instance a Manhattan and a Sweet Martini have the same basic pattern: 2 parts spirit to 1 part Sweet Vermouth. So a person who likes Manhattans may slide into Sweet Martinis and thus explore a whole new pattern of cocktails.

Thus the patterns of cocktails, as in Architecture and Programming, allow one to have a language to talk about the problem and possible solutions to it and to find new areas of discovery.

  1. Gamma, Helm, Johnson, and Vlissides. Design Patterns. Reading, MA: Addison-Wesley, 1995. Print. 

  2. Alexander, Christopher. A Pattern Language. New York: Oxford UP, 1977. Print.

  3. Due to recipe changes the current Lillet Blanc is sweeter and less bitter than previous versions. A better ingredient would be Cocchi Americano. 

  4. “Cocktail.” Wikipedia. Wikimedia Foundation, 29 Oct. 2014. Web. 01 Nov. 2014.

To Estimate Bugs or Not: The Definitive Answer

| Comments

I was recently re-reading the XP books and in Extreme Programming Installed1 I came upon a section where the authors say that bugs should be estimated and planned into iterations like other stories. I disagreed with that idea, thinking about how ‘opinion had changed’ on that topic since the book was written and that ‘that’s not how I was taught to do it’.

Then I wondered why I thought that, just who had taught me that bugs should not be estimated. At first I assumed it was Jim Shore from The Art of Agile2, but a quick look in the book3 shows that he says to estimate bug stories. That led me to do some looking around the Internet4 and discuss this issue with my colleagues at Cyrus Innovation5.

I now believe I have the definitive answer to the question of whether or not to estimate bugs.

(First a clarification: I am not talking about defects found during an iteration in, or caused by, stories being worked on during that iteration. That just means that story is not done. Don’t take credit for it and create a bug card. Just accept that the story is not done.)

And now for that definitive answer:

It depends.

Some teams might find it useful to give the Customer the ability to prioritize bugs along with other work. By estimating the bugs the Customer has all the data they need to make their trade-offs. That being said: estimating bugs can be very difficult since there is often more than the usual share of unknown unknowns; thus the estimate will be less reliable than normal.

Some teams might find it useful to estimate bugs so that planning an iteration is easier. If a team knows it can do 14 points of work, then 14 points of work can be taken off the top of the backlog; maybe they are all bugs, some bugs, or no bugs. It doesn’t matter: it is 14 points. If bugs are not estimated, then maybe the team can do some bugs and how many more points? Shrug.

Some teams might find that estimating bugs allows the number of bugs to be hidden among the work being done. They want to make sure that bugs are always visible, so they do not estimate them; that way bugs always detract from their velocity. Estimating bugs makes them a bit more normal and thus less visible.

So ultimately, as with any such question, the answer depends upon the needs and desires of the team. Do what works for you, and periodically question why and even try a different way.

(Drink Pairing: I paired the writing of this post with a simple Daiquiri6.)

  3. My copy contains the note: “To Mark - Here’s to sucking less! Jim Shore” 

  4. Specifically the World Wide Web section of the Internet. 


  6. 1.5 oz. Rum, .75 oz. Lime Juice, .25 oz. Simple Syrup. Shake over ice and strain. 

What I Have (Re)-learned From SICP

| Comments

After about two years of slowly working through Structure and Interpretation of Computer Programs (SICP)1 I finally completed it. I didn’t do every exercise but I did many of them2.

The History

I first worked through SICP back in 1989 as part of my class at Worcester Polytechnic Institute (where I received my BSCS in 1992). It is a book that I remember with awe. I remember being, along with most of the class, mystified by this dumb language Scheme. It was so unlike anything we were used to. I was familiar with BASIC and Pascal and some very basic Bourne Shell scripting. Scheme was alien. Most of us thought it crazy. But then, somewhere about half-way through the class, it clicked; suddenly I got it. Scheme, and Lisp by association, was amazing. The REPL development, the flexibility, the minimal and regular syntax gave it great power. And then… then… we started on the chapter about the Meta-circular Evaluator3: writing an evaluator of Scheme in Scheme. It was so simple, so clear4.

This was a Freshman class, and the rest of my time at WPI I played around with Lisp, sometimes doing assignments in it, playing with my Emacs initialization files, that sort of thing. Never got serious. And then I joined the work force in 1990 with a summer job doing C/Motif work at a startup. And then after college a C job connecting their Windows app to a particular printer for graphs, and then a C/Motif job on a news ticker for a stock market application. Oh and then I learned5 object-oriented programming with C++… I forgot the lessons I learned in SICP.

The Return

I went back to SICP expecting to be reminded of a few things. What I found was that I was taught things I never remember being taught. Things which I so wish I had grokked at the time. Things which I obviously never understood and then forgot.

A few of the things I learned way back when and then forgot were:

  • Judge a programming language by the means of abstraction and the means of composition.
  • The power/utility of REPL development.
  • Lazy streams to deal with infinities are not scary.
  • Functional Programming.
  • Object-Oriented Programming.
  • The Problem of State.

The Reaction

This is a Freshman-level textbook, and all of this was taught as early as the mid-eighties. Looking back I feel like I was caught up in a great forgetting. There are things we (as Software Engineers6) knew and then forgot, or chose to forget/overlook.

Now I see people going back to old papers/books and re-learning/discovering what we already knew but forgot/ignored. It is like a wider version of Greenspun’s tenth rule7. We had something and then had to put it aside because it didn’t work in our “reality”, but now we come back and “discover” it all over again.

OMG Garbage Collection8 OMG interactive language shells (aka REPL) OMG dynamic languages OMG functional programming OMG OO is about message passing.

What happened, were we asleep?

It would be one thing if we acted like these are in fact old ideas that we are just now realizing are good/possible - but we act like they are new. Reality has caught up to what the 1960s thought was possible9.

The Back to the Future

One last thing I (re)learned: The fun of learning and working through problems. And that leads to what is yet to come.

I’m going to start looking at the old texts of our practice. The ancients knew things that we have forgotten. I will see what I can learn from them. I will strive to avoid Argument from Antiquity however, that is always a danger with this sort of thing.

I can only see so far because I stand upon the shoulders of Giants10.

  4. It was in this class I created the joke idea of ‘God code’: the stereotypical 3-line Lisp function - first line defines the function, second line is a null check, third line recurses. And shit got done. 

  5. learned 

  6. Software Engineer, Programmer, Coder, Hacker whatever you want to call us/ourselves. 

  7. “Any sufficiently complicated C or FORTRAN program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” 


  10. with apologies to Newton.