# The Cocktail Pattern Language

## Abstract

The idea of a “Pattern Language” is well known and well received in the software community. Since the work of the Gang of Four1 it has helped in the communication of solutions and the discovery of new ones. Cocktails have a pattern language which is well recognized by professionals but not by consumers. In this post I hope to make the Cocktail Pattern Language visible and useful to cocktail consumers and amateur bartenders.

## What is a cocktail?

To begin with, a “cocktail” is classically defined as the mixture of a spirit, sugar, water, and bitters. This is less a recipe for a specific drink than a pattern for all possible cocktails. While the modern concept of a cocktail has expanded from this origin, it is still a good starting place for this discussion, and a fundamental example of a cocktail pattern.

## What is a pattern?

The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. – Christopher Alexander2

Basically by “pattern” I mean a name for a particular way of doing something, a solution to a problem. A pattern does not dictate the only way to do something but shows the “shape” of the solution. The specifics of the situation will always cause variations.

### Examples of Patterns

The easiest way to describe what cocktail patterns are is to give examples.

#### The Manhattan

The recipe for a Manhattan is 2 parts Rye (or Bourbon), 1 part Sweet Vermouth, and a dash of Angostura bitters. The recipe contains a main spirit (Rye), bitters (Angostura), and a sweetener (the Vermouth, which also adds more kick and a bit more bitterness).

The simplest variant is the Rob Roy: replace the Rye with Scotch. Next simplest are the Dry and Perfect variants: Dry Vermouth instead of Sweet, or equal parts of the two, respectively.

A more distant variant is the Toronto: 2 oz. Rye, .25 oz. Fernet Branca, .25 tsp. sugar, Angostura bitters. While on the surface it doesn’t sound like the same drink, it is if you break it down. The main spirit is still Rye. The bitterness and extra kick are added by the Fernet and Angostura, while the sweetness is provided directly by sugar.

Thus the pattern of a Manhattan is a whiskey paired with bitters and a sweetener.

#### The Martini

One cocktail which is (in)famous for its variants is the Martini. The pattern of a Martini is simply Gin (or Vodka) and Vermouth. Martinis can be Sweet, Perfect, or Dry. The amount of vermouth can vary depending on the taste of the drinker, different garnishes can be used, and additional ingredients can be added as desired.

One variant is The Bronx. It is a Perfect Gin Martini with the addition of Orange Juice: 6 parts Gin, 3 parts Sweet Vermouth, 2 parts Dry Vermouth, and 3 parts Orange Juice.

Another variant is The Vesper Martini, which is essentially a Dry Gin Martini: 3 oz. Gin, 1 oz. Vodka, .5 oz. Lillet Blanc3. It replaces the Vermouth with Lillet (similar to Vermouth in that it is a fortified wine with herbal ingredients).

One last variant is The Martinez, which is basically a Sweet Martini: 2 oz. Gin, .75 oz. Sweet Vermouth, .25 oz. Maraschino Liqueur, and Angostura bitters. Here the Sweet Vermouth is augmented with Maraschino and a bit of bitters is added.

#### The Sour

A final pattern I’ll mention is the Sour. Drinks in this pattern include the Whiskey Sour, the Gimlet, and the Daiquiri. The pattern here is a spirit along with a good amount of citrus juice (often Lime) and some sugar.

## Why it is useful

The usefulness of speaking of patterns is to have a common terminology for discussing the situations designers already see over and over.4

Patterns in cocktails can be very useful to bartenders because they help in choosing the next drink for a patron. The patron may, as I do, say “I like Manhattans” or “I’d like something like a Daiquiri,” and the bartender can rattle off a series of variants on that pattern, or even invent their own.

The patterns also overlap. For instance, a Manhattan and a Sweet Martini share the same basic pattern: 2 parts spirit to 1 part Sweet Vermouth. So a person who likes Manhattans may slide into Sweet Martinis and thus explore a whole new pattern of cocktails.

Thus the patterns of cocktails, as in architecture and programming, give one a language for talking about the problem and its possible solutions, and for finding new areas of discovery.

1. Gamma, Helm, Johnson, and Vlissides. Design Patterns. Reading, MA: Addison-Wesley, 1995. Print.

2. Alexander, Christopher. A Pattern Language. New York: Oxford UP, 1977. Print.

3. Due to recipe changes the current Lillet Blanc is sweeter and less bitter than previous versions. A better ingredient would be Cocchi Americano.

4. “Cocktail.” Wikipedia. Wikimedia Foundation, 29 Oct. 2014. Web. 01 Nov. 2014. http://en.wikipedia.org/wiki/Cocktail.

# To Estimate Bugs or Not: The Definitive Answer

I was recently re-reading the XP books and in Extreme Programming Installed1 I came upon a section where the authors say that bugs should be estimated and planned into iterations like other stories. I disagreed with that idea, thinking about how ‘opinion had changed’ on that topic since the book was written and that ‘that’s not how I was taught to do it’.

Then I wondered why I thought that, just who had taught me that bugs should not be estimated. At first I assumed it was Jim Shore from The Art of Agile2, but a quick look in the book3 shows that he says to estimate bug stories. That led me to do some looking around the Internet4 and discuss this issue with my colleagues at Cyrus Innovation5.

I now believe I have the definitive answer to the question of whether or not to estimate bugs.

(First, a clarification: I am not talking about defects found in, or caused by, stories being worked on during the current iteration. Such a defect just means that story is not done. Don’t take credit for the story and create a bug card; just accept that the story is not done.)

And now for that definitive answer:

It depends.

Some teams might find it useful to give the Customer the ability to prioritize bugs along with other work. By estimating the bugs, the Customer has all the data they need to make their trade-offs. That being said: estimating bugs can be very difficult, since there is often more than the usual share of unknown unknowns; thus the estimates will be less reliable than normal.

Some teams might find it useful to estimate bugs so that planning an iteration is easier. If a team knows it can do 14 points of work, then 14 points of work can be taken off the top of the backlog; whether that is all bugs, some bugs, or no bugs doesn’t matter: it is 14 points. If bugs are not estimated, then how many bugs can the team take on, and how many more points of stories? Shrug.

Some teams might find that estimating bugs allows the number of bugs to be hidden among the rest of the work being done. These teams want to make sure that bugs are always visible, so they do not estimate them, ensuring that bugs always detract from their velocity. Estimating bugs makes them a bit more normal and thus less visible.

So ultimately, as with any such question, the answer depends upon the needs and desires of the team. Do what works for you, and periodically question why and even try a different way.

(Drink Pairing: I paired the writing of this post with a simple Daiquiri6.)


1. http://www.amazon.com/Extreme-Programming-Installed-Ron-Jeffries/dp/0201708426/

2. http://www.jamesshore.com/Agile-Book/

3. My copy contains the note: “To Mark - Here’s to sucking less! Jim Shore”

4. Specifically the World Wide Web section of the Internet.

5. http://www.cyrusinnovation.com/

6. 1.5 oz. Rum, .75 oz. Lime Juice, .25 oz. Simple Syrup. Shake over ice and strain.

# What I Have (Re)-learned From SICP

After about two years of slowly working through Structure and Interpretation of Computer Programs (SICP)1 I finally completed it. I didn’t do every exercise but I did many of them2.

## The History

I first worked through SICP back in 1989 as part of my class at Worcester Polytechnic Institute (where I received my BSCS in 1992). It is a book that I remember with awe. I remember being, along with most of the class, mystified by this dumb language Scheme. It was so unlike anything we were used to. I was familiar with BASIC and Pascal and some very basic Bourne Shell scripting. Scheme was alien. Most of us thought it crazy. But then, somewhere about half-way through the class, it clicked: suddenly I got it. Scheme, and Lisp by association, was amazing. The REPL development, the flexibility, the minimal and regular syntax gave it great power. And then… then… we started on the chapter about the Meta-circular Evaluator3: writing an evaluator for Scheme in Scheme. It was so simple, so clear4.

This was a freshman class, and for the rest of my time at WPI I played around with Lisp: sometimes doing assignments in it, playing with my Emacs initialization files, that sort of thing. I never got serious. Then I joined the work force in 1990 with a summer job doing C/Motif work at a startup. After college came a C job connecting a company’s Windows app to a particular printer for graphs, and then a C/Motif job on a news ticker for a stock market application. Oh, and then I learned5 object-oriented programming with C++… I forgot the lessons I learned in SICP.

## The Return

I went back to SICP expecting to be reminded of a few things. What I found was that I had been taught things I never remember being taught. Things which I so wish I had grokked at the time. Things which I obviously never understood and then forgot.

A few of the things I learned way back when and then forgot were:

• Judge a programming language by its means of abstraction and its means of composition.
• The power/utility of REPL development.
• Lazy streams to deal with infinities are not scary.
• Functional Programming.
• Object-Oriented Programming.
• The Problem of State.

## The Reaction

This is a freshman-level textbook, and all of this was taught as early as the mid-eighties. Looking back I feel like I was caught up in a great forgetting. There are things we (as Software Engineers6) knew and then forgot, or chose to forget/overlook.

Now I see people going back to old papers/books and re-learning/discovering what we already knew but forgot/ignored. It is like a wider version of Greenspun’s tenth rule7. We had something and then had to put it aside because it didn’t work in our “reality”, but now we come back and “discover” it all over again.

OMG Garbage Collection8! OMG interactive language shells (aka REPLs)! OMG dynamic languages! OMG functional programming! OMG OO is about message passing!

What happened, were we asleep?

It would be one thing if we acted like these are in fact old ideas that we are just now realizing are good/possible, but we act like they are new. Reality has caught up to what the 1960s thought was possible9.

## The Back to the Future

One last thing I (re)learned: The fun of learning and working through problems. And that leads to what is yet to come.

I’m going to start looking at the old texts of our practice. The ancients knew things that we have forgotten. I will see what I can learn from them. I will strive to avoid Argument from Antiquity, however; that is always a danger with this sort of thing.

I can only see so far because I stand upon the shoulders of Giants10.

1. http://mitpress.mit.edu/sicp/

2. https://github.com/verdammelt/sicp

3. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1

4. It was in this class I created the joke idea of ‘God code’: the stereotypical 3-line Lisp function, where the first line defines the function, the second line is a null check, and the third line recurses. And shit got done.

5. learned

6. Software Engineer, Programmer, Coder, Hacker whatever you want to call us/ourselves.

7. “Any sufficiently complicated C or FORTRAN program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”

8. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-33.html#%_sec_5.3

9. http://www-formal.stanford.edu/jmc/recursive.pdf

10. with apologies to Newton.

# Thoughts About the Domain Model

The CHECKS Pattern Language paper (Cunningham 1994) has some good ideas about what it calls “Information Integrity”: ensuring that inputs are parsed/validated/handled well. But I also found that it makes some good statements about the domain model in general. I’m going to write about a few of those quotes here.

Your domain model must express the “logic” of the business in its richest and often illogical detail. Every clause of every statement should be motivated by some business fact of life.

A program will always have an unfortunate amount of code in it, but an important chunk of it is the domain model: the part that is the reason for the application. Not the code that does the jQuery jiggery-pokery, not the mallocs and frees. When the business rules change, this is where the change will be made. Keep it clean and uncluttered with non-domain details. If there are details in it which do not correspond to the domain logic, there will be confusion when talking about the domain with the experts. If the code doesn’t express the domain the way the domain experts talk about it, or is incomplete, there will be confusion again.

In your domain models you are chartered to express business logic with no more complexity than originally conceived or currently expressed.

Domain logic can be complicated to begin with. It doesn’t help that you are not an expert in the domain of (e.g.) Etruscan Snoods, yet you need to write a program to work with them. You talk with the experts and translate their expertise into a program. Don’t make it more complicated than it already is. Keep it simple and to the point so it is easy to talk about with the domain experts. If it is not easy to discuss what the code does with the domain experts, then adding new features or finding/fixing problems will be more difficult.
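
As a toy sketch of what this looks like in code (the domain, the names, and the 30-day rule below are all invented for illustration), the domain model should read the way the expert states the rule:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Our invented expert's rule: "A snood is sellable when it has been
# glazed and is no more than 30 days out of the kiln."
@dataclass
class Snood:
    glazed: bool
    fired_on: date

    def is_sellable(self, on_date: date) -> bool:
        # States the rule in the expert's own terms, clause by clause.
        return self.glazed and (on_date - self.fired_on) <= timedelta(days=30)
```

The expert, looking over one’s shoulder, can check `is_sellable` against the spoken rule clause by clause; the same logic phrased in terms of database rows and flag columns could not be checked that way.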

A person reaches through a program’s interface to manipulate the domain model.

This is an important thought - the user thinks and talks about the domain with a certain language, pattern and style; make sure your program talks their language. If the program’s interface doesn’t match the user’s language then the user will have trouble using it, or using it correctly. Data should be labeled with the right words, operations with the correct terms, and permissible inputs and output formats must match expected conventions.

In closing, the paper goes on to present good patterns for dealing with inputs and the domain model, but I think it is the quotes above that really stand out as things to think about.

# Serving Coffee With Express

A few weeks ago I was playing around with a new stack: Node-Express-Angular, all in CoffeeScript1. I found it quite easy to have the Express server compile my CoffeeScript files and then return the JavaScript to the browser. Last week I found myself in a position at work where I wanted a little test page on which to hang the Angular directives I was working on, so I quickly used the same trick.

I’m not saying this trick is bullet-proof, or even a necessarily good idea for production. But it seems like a good thing for testing and perhaps for getting a project off the ground with some prototyping. It also serves as a small example of how to write an Express Middleware2.

With that caveat I will present you with the code and explanation of it3.
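
A sketch of the kind of server in question (the module names and the surrounding app setup are assumptions; only the middleware matters here, and it sits on the lines 12-18 referenced below):

```coffeescript
express = require 'express'
fs      = require 'fs'
coffee  = require 'coffee-script'

app = express()

app.use express.static "#{__dirname}/public"

app.get '/', (request, response) ->
  response.sendfile "#{__dirname}/public/index.html"

app.use '/client', (request, response, next) ->               # line 12
  filename = "#{__dirname}/../client#{request.path}"          # line 13
  fs.readFile filename, encoding: 'utf8', (error, data) ->    # line 14
    if error then next()                                      # line 15
    else                                                      # line 16
      response.set 'Content-Type', 'text/javascript'          # line 17
      response.send coffee.compile data                       # line 18

app.listen 3000
```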

The part that concerns us is lines 12-18. This section tells Express that if a request comes in with a path starting with /client, we want to do some special processing on it.

• Line 13 determines the path to the CoffeeScript file (which is in the ../client directory relative to our current directory). Of note here is that request.path does not contain the /client part of the path and __dirname is the directory of the currently executing file.

• Line 14 sets up the reading of this CoffeeScript file. It is important to specify the encoding, otherwise the CoffeeScript compiler will throw an error.

• If there is an error reading the file (line 15), such as the file not existing, then we tell Express to run whatever rule is next (if no other rule matches, Express will eventually send a 404 for us).

• If we can read the file, we use CoffeeScript to compile it (line 18) and send the resulting JavaScript back as the response (lines 16-18). Of note here is that we set the content type to text/javascript (line 17) to make sure the browser does the right thing. (It doesn’t appear necessary with Chrome at least, but it is probably best to do it.)

Congratulations, we’ve just written an Express Middleware!

1. It is my current opinion that CoffeeScript is better than writing directly in JavaScript. It saves the developer from making some very basic stupid mistakes with JavaScript and also removes some of the syntactic noise.

2. http://expressjs.com/4x/api.html#middleware

3. The toy app bootstrapping kit I mentioned at the top of this post is in my expressular-kit. The code here is not a direct copy of the code there in app/server/app.coffee.

# Only One Way of Doing Things…

When there is only one way of doing things, it is easier to modify and reuse code. When code is reused, programs are easier to change and most importantly, shrink. When a program shrinks, its construction and maintenance requires fewer people, which allows for more opportunities for reuse to be found. Consistency [aids] understanding.

– Programming as Experience: The Inspiration of Self (1995)

## Reuse

I like the meaning of ‘reuse’ in this quote. This is not the ‘reuse’ promised by early OO proponents, who promised1 that one would be able to create objects which would be reused from one project to another. This is the reuse which means that code is reused within a project. This is the reuse which I believe to be attainable.

It is like an inward spiral: the more reuse of code, the fewer ways to do a thing there will be, which leads eventually to only one way.

The sentence about fewer people is a bit odd, but I feel it does make sense. The fewer people on the team, the more likely there will be only one way to do things, which means there is by definition more reuse.

## Consistency

Perhaps more important than reuse though is consistency. If code is consistent I have found it to be easier to reason about, because reasonable assumptions can be made when reading it. Being able to make reasonable assumptions, and have those borne out, is very important to me when reading code.

## Conciseness and Understanding

And it follows that with reuse and consistency one will get conciseness. But does that necessarily lead to understanding? That feels right to me, though I have seen some concise code which is hard to understand.

References:

• Smith, Randall B. and Ungar, David. “Programming as Experience: The Inspiration of Self”. 1995.

1. Or perhaps that is not what they said, but what we heard.

# TDD in Common Lisp: Recursive Yak-shaving

While avoiding family this Christmas day I started trying to figure out a way to do TDD in Common Lisp (as one does…). That led to SLIME1 and QuickLisp2 and lisp-unit3 and ASDF4 (&c. &c.).

## …it’s yaks the whole way down.

The hardest part for me was finding a simple way to load my code and run the tests. ASDF ships with asdf:test-system defined, but it does nothing by default5. So I did some digging around, saw some others’ solutions, and synthesized my own version.

Here is an example ASDF system definition for a Game of Life6 implementation.
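
Something like the following (the metadata, directory, and file names are assumptions; the interesting bits are the :depends-on line, the :components list, and the test-op plumbing at the end):

```lisp
(asdf:defsystem :gol
  :description "Conway's Game of Life."
  :version "0.0.1"
  :author "Mark"
  :license "Public Domain"
  :depends-on (:lisp-unit)                          ; line 6
  :components ((:module "src"
                :serial t
                :components ((:file "package")
                             (:file "gol")))
               (:module "test"
                :serial t
                :components ((:file "package")
                             (:file "gol-test"))))
  :in-order-to ((test-op (load-op :gol))))

;; Run the lisp-unit tests, with full reporting, when test-op is performed.
(defmethod asdf:perform ((op asdf:test-op)
                         (system (eql (asdf:find-system :gol))))
  (progv (list (intern "*PRINT-ERRORS*" :lisp-unit)
               (intern "*PRINT-FAILURES*" :lisp-unit))
      '(t t)
    (funcall (intern "RUN-TESTS" :lisp-unit) :all :gol-test)))
```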

## My God… it’s full of yaks

I’m just going to let that sink in for a bit while I get some coffee.

Ok, now that I’m back I will explain each part. The first interesting bit is on line 6:
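
In a reconstructed sketch of the definition (the lisp-unit dependency is the one thing the surrounding text pins down), that line reads:

```lisp
  :depends-on (:lisp-unit)
```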

This line tells ASDF that it should find and load a system called lisp-unit because our system depends upon it. This is where QuickLisp makes things simple, since that is its job, and it does it well.

Next is the definition of our components:
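
Reconstructed (the directory and file names are assumptions), the component list looks something like:

```lisp
  :components ((:module "src"
                :serial t
                :components ((:file "package")
                             (:file "gol")))
               (:module "test"
                :serial t
                :components ((:file "package")
                             (:file "gol-test"))))
```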

I’ve decided that my project should have two main directories: one to contain all the production source code and one for all the test source code. Each is in its own package, with the :gol-test package :use-ing the :gol package. (:serial t, by the way, tells ASDF that each item in the component list depends upon the one before it.)

Now the hairier parts which tell ASDF to run our tests:
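
Again a reconstruction, assuming lisp-unit’s run-tests function and its *print-errors*/*print-failures* variables:

```lisp
  ;; ...the end of the defsystem form:
  :in-order-to ((test-op (load-op :gol))))

(defmethod asdf:perform ((op asdf:test-op)
                         (system (eql (asdf:find-system :gol))))
  (progv (list (intern "*PRINT-ERRORS*" :lisp-unit)
               (intern "*PRINT-FAILURES*" :lisp-unit))
      '(t t)
    (funcall (intern "RUN-TESTS" :lisp-unit) :all :gol-test)))
```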

The first line is simply stating that in order to do test-op ASDF needs to perform load-op on the :gol system. This way every time we run the tests we’ll reload the system if needed.

Now… this next bit probably shows how rusty my Lisp is; maybe there is an easier way. Basically we want to run (lisp-unit:run-tests :all :gol-test) with lisp-unit:*print-errors* and lisp-unit:*print-failures* both bound to t (they default to nil, and that doesn’t give enough info IMNSHO). However, when Lisp is reading and evaluating this code, lisp-unit has probably not yet been loaded, so we have to be crafty: find the symbols by name in the package and then funcall. Binding the variables was trickier, but I stumbled upon progv7 and that seems to be just what I needed.

progv creates new dynamic bindings for the symbols in the list which is its first argument. These symbols are determined at runtime, which means we can use intern to find or create them in the right package8. progv binds these symbols to the values in the list which is its second argument and then executes its body in an environment which contains these new bindings.

With this work done I can now type in (asdf:test-system 'gol) into my REPL and it loads and runs my tests and gives me useful output of those tests. I might go so far as to bind that to a very short function name, or even a key in Emacs.

## There are still more yaks to shave.

What is left to be determined is if this is the best way to do things.

Some documentation/blogs I read led me to believe that putting the tests in the same system as the production code is verboten (or at least a no-no). I tried having the tests in their own system definition (depending upon the production code), but then ASDF & QuickLisp both strongly pushed me to put that definition in its own .asd file9, which seemed awkward to me. I personally have no problem shipping tests with the production code, and I believe that if needed all the symbols of the test package could be unintern-ed before an image was saved.

I’ll play around with this setup and see how things go.

1. http://www.common-lisp.net/project/slime/

2. http://www.quicklisp.org/

3. https://github.com/OdonataResearchLLC/lisp-unit

4. http://common-lisp.net/project/asdf/

5. http://common-lisp.net/project/asdf/asdf/Predefined-operations-of-ASDF.html#test_002dop

6. Really? You don’t know what this is already and you had to look in the footnotes? Go look it up on Wikipedia already!

7. http://www.lispworks.com/documentation/HyperSpec/Body/s_progv.htm#progv

8. Perhaps I should use find-symbol instead so that I don’t create symbols that the package doesn’t export. I will experiment with that.

9. Since they look up the definition by looking for a file with the same name as the system.

# (How It Ought To Be Done)

IMNSHO1.

Being able to read source code2 is very important. So how does one do it well?

Just like reading any other material, one must first decide why one is reading. Is one debugging a problem or trying to gain specific knowledge? Or is one reading for general knowledge or for pleasure? One takes different approaches depending upon the desired goal.

The first step is to find an interesting spot to start.

### For Debugging (or to Gain Specific Knowledge)

Start by skimming the code; grep3 and tags4 tables are useful for this. This will help one find the right, or at least a likely, entry point. There is a lot of instinct5 involved in this step. One gets better at it the more one needs to do it6.

### To Get General Knowledge (or for Pleasure)

Code is for reading7, so read it. Skim the code (including the tests8). Look for something interesting; get the lay of the land, the shape of the code. Note the Dramatis Personæ. Perhaps the code seems to be about widgets and frobbing; oh, and here’s something about twiddling that looks interesting.

Pick whichever of these is most intriguing (e.g. one wonders what happens when a widget is frobbed) and use that as one’s entry point. This is a matter of personal taste/fetish9.

### Now that One has One’s Entry Point…

Now that one is staring a chunk of code (hopefully a small chunk) in the face, how does one read it?

Top Down. It is as simple as that10.

Read the first line of code; what does it say? Believe it. That can be risky, but at this point one must do it. Then what does the next line say? And so on. Follow function calls11 if they seem interesting. Skip over things that are not, or do not seem, relevant. If one’s interest is in frobbing, ignore code about tweaking or twiddling. Perhaps one makes a note that frobbing seems to involve tweaking or twiddling; but one must know about frobbing now, so those are not relevant. This too is very tied to instinct.

It is this point here which I think is most important. One must gain a sense of what is relevant when reading code. One cannot read every code path and think that will lead to understanding. That would be like reading the dictionary and assuming one knows the language. This is where I have seen programmers fail in their reading.

This is also a very good reason why naming is so important.

1. It has occurred to me that my style of reading code is not the same as some others’. I have been told by a person I think highly of that they feel my way is the/a right way. I have finally gotten around to writing it up as a quick blog post.

2. Is it time for us to stop calling it ‘code’ given its implications of obfuscation? I think it is still appropriate as long as we remember that the definition is not about obfuscation per se but about using a set of symbols to mean other words/letters/symbols. This is, IMNSHO, exactly what we are doing.

3. Or its ilk (e.g. ack)

4. New fangled IDEs keep the equivalent of tags tables.

5. Gut feeling, hunch, guess work, randomness, luck &c.

6. Especially when at the client site, trying to figure out why the trades are not going through. Take my word on it.

7. Most often code is for reading (and evaluating/compiling) by the computer; but also for oneself and one’s fellow humans. (Apologies for being speciesist.)

8. Why do we keep referring to code and test? Tests are code.

9. Of course it is not that simple.

10. And other sorts of papered over goto statements.

11. Perhaps a post/rant for another day.

# SCNA 2013 - Effortful vs. Automatic Brain

## Experts Don’t Know what they Know

This year’s SCNA conference had several talks which touched upon the idea of experts having implicit knowledge. That is to say: they know things but can’t necessarily explain what they know, or why. Ken Auer and Dave Thomas touched upon it; it was also brought up during the panel on Software Quality (e.g. Corey Haines’ amazing reference to Chicken Sexing). But I felt that Brian Marick’s talk took the idea and then added to it.

## Bright vs. Dull Cows

Brian started with this implicit-knowledge idea, giving his wife as an example. His wife (a veterinary professor) teaches her students to determine whether a cow is ‘bright’ or ‘dull’ (a subjective diagnosis). Her students pronounce, day after day, whether cows are dull or bright, and she corrects them. They usually ask how she could tell, and she tells them something (drooping ears, nose-cleaning, etc.). But actually these specific clues are not really the way to tell: those who can diagnose a cow as bright or dull cannot explain why.

This anecdote seems to show that the implicit knowledge of the expert is something that can be trained. When discussing this idea after the talk, Corey Haines mentioned a conference session Michael Feathers gave in which he had the attendees make quick good/bad judgment calls on snippets of code. He then tallied those up and found that some snippets had a large amount of agreement as to their good-/bad-ness. The attendees then discussed the snippets and tried to determine why. In light of Brian’s anecdote, however, it seems they might never find out why; but maybe they could train themselves by looking at a lot of code and being corrected when they get the judgment wrong.

(Brian hopes to use this sort of idea to be able to train himself to have a nearly physical reaction to bad code. While that sounds cool, I worry about a ‘Ludovico Technique’ for code.)

## Effortful vs. Automatic

Brian then talked about two types of thinking: Effortful and Automatic. Effortful is the sort you use when doing non-trivial arithmetic, or parking in a tight spot. Automatic is when you do simple math like 2+2, or easy driving where you are sort of on autopilot. When you use the Effortful way, it actually takes effort: it is slower and seems only possible to do serially. Automatic thinking is faster, takes little effort, and seems to be parallelizable.

If you don’t have something in Automatic then your brain shifts to Effortful; sort of a cache miss. If you do something enough, or become good enough at it, you can put stuff into your Automatic cache.

## Switching to Automatic

So how can this be done? While Brian offers the usual ‘practice, practice, practice’, he adds to it an interesting idea about ‘Set and Setting’. ‘Set and Setting’ is an idea from Timothy Leary about why drugs such as LSD seem to have very different effects in different people. The mindset (Set) of the person (which includes things like rituals which build up that mindset) and the Setting (environment) have a strong influence on the experiences of that person.

By being aware of which mindsets (and how to enter them) and what settings are most beneficial for programming (and even different types of programming) he feels he can move some programming tasks into his Automatic thinking area.

## Conclusion

It seems to me that the idea of ‘Automatic’ and ‘Effortful’ adds to the idea of the expert’s implicit knowledge. Perhaps the expert’s knowledge is just an expression of Automatic thinking?

I’m interested in trying out the practice of being more aware of my own ‘Set and Setting’ with respect to code.

# Quick Start for Compojure/ClojureScript

I am once again jumping into the Clojure world and trying to learn a bit of it. I am doing so via a small project called defclink (source). I am also attempting to get back into blogging, so I thought I’d combine the two into this short post.

## Goals of my project

My goal for this project is to learn some Clojure by building a webapp. I decided to also use ClojureScript since it seemed like an interesting idea. I wanted to keep the number of libraries and frameworks low, to maximize the Clojure code I would need to write and thus the Clojure learned. That said, I allowed myself the luxury of the Compojure library (with Ring under it) to handle the HTTP protocol stuff.

I’ll start by explaining how to get a brand new Compojure project.

## How to set up a Compojure project

Starting a Compojure project is very simple:

lein new compojure awesome-thing


This will create:

README.md    project.clj  resources/   src/         test/

./resources:
public/

./resources/public:

./src:
awesome_thing/

./src/awesome_thing:
handler.clj

./test:
awesome_thing/

./test/awesome_thing:
test/

./test/awesome_thing/test:
handler.clj


and furthermore will set up your project.clj file to pull in Compojure.

It sets up simple tests: that a request to “/” will result in a success page containing “Hello World”, and that a request to a non-existent page will result in a 404. There is a defined set of routes to accomplish this.
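
For reference, the generated src/awesome_thing/handler.clj looks something like this (quoted from memory of the template, so details may differ between versions):

```clojure
(ns awesome-thing.handler
  (:require [compojure.core :refer :all]
            [compojure.handler :as handler]
            [compojure.route :as route]))

;; "/" returns a 200 with "Hello World"; anything else falls
;; through to the 404.
(defroutes app-routes
  (GET "/" [] "Hello World")
  (route/not-found "Not Found"))

;; Wrap the routes with the standard site middleware (params,
;; sessions, etc.).
(def app
  (handler/site app-routes))
```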

Running the server at this point is easy:

lein ring server


and then go to localhost:3000 to see the awesome_thing.

### Heroku?

At this point you probably want to push this awesome thing to Heroku to start getting VC funding. This is easy to do (the pushing part, anyway).

First create your Heroku app in the usual manner (at the time of this writing you don’t even need to choose the cedar stack, but check the docs to make sure your stack is Clojure-compatible).

Now create a Procfile with the following contents (replacing defclink.main with your project’s main namespace):

web: lein with-profile production trampoline run -m defclink.main $PORT


Now you should be able to push to Heroku and sit back waiting for the funding to roll in.