Code And Cocktails

What I Have (Re)-learned From SICP


After about two years of slowly working through Structure and Interpretation of Computer Programs (SICP)1 I finally completed it. I didn’t do every exercise but I did many of them2.

The History

I first worked through SICP back in 1989 as part of my class at Worcester Polytechnic Institute (where I received my BSCS in 1992). It is a book that I remember with awe. I remember being, along with most of the class, mystified by this dumb language Scheme. It was so unlike anything we were used to. I was familiar with BASIC and Pascal and some very basic Bourne Shell scripting. Scheme was alien. Most of us thought it crazy. But then, somewhere about half-way through the class, it clicked: suddenly I got it. Scheme, and Lisp by association, was amazing. The REPL development, the flexibility, the lack and regularity of syntax gave it great power. And then… then… we started on the chapter about the Meta-circular Evaluator3: writing an evaluator of Scheme in Scheme. It was so simple, so clear4.

This was a Freshman class, and for the rest of my time at WPI I played around with Lisp, sometimes doing assignments in it, playing with my Emacs initialization files, that sort of thing. Never got serious. And then I joined the work force in 1990 with a summer job doing C/Motif work at a startup. And then after college came a C job connecting a company's Windows app to a particular printer for graphs, and then a C/Motif job on a news ticker for a stock market application. Oh and then I learned5 object-oriented programming with C++… I forgot the lessons I learned in SICP.

The Return

I went back to SICP expecting to be reminded of a few things. What I found was that I was taught things I never remember being taught. Things which I so wish I had grokked at the time. Things which I obviously never understood and then forgot.

A few of the things I learned way back when and then forgot were:

  • Judge a programming language by the means of abstraction and the means of composition.
  • The power/utility of REPL development.
  • Lazy streams to deal with infinities are not scary.
  • Functional Programming.
  • Object-Oriented Programming.
  • The Problem of State.

The Reaction

This is a Freshman-level textbook, and all of this was taught as early as the mid-eighties. Looking back I feel like I was caught up in a great forgetting. There are things we (as Software Engineers6) knew and then forgot, or chose to forget/overlook.

Now I see people going back to old papers/books and re-learning/discovering what we already knew but forgot/ignored. It is like a wider version of Greenspun’s tenth rule7. We had something and then had to put it aside because it didn’t work in our “reality”, but now we come back and “discover” it all over again.

OMG Garbage Collection8! OMG interactive language shells (aka REPL)! OMG dynamic languages! OMG functional programming! OMG OO is about message passing!

What happened, were we asleep?

It would be one thing if we acted like these are in fact old ideas that we are just now realizing are good/possible - but we act like they are new. Reality has caught up to what the 1960s thought was possible9.

The Back to the Future

One last thing I (re)learned: The fun of learning and working through problems. And that leads to what is yet to come.

I’m going to start looking at the old texts of our practice. The ancients knew things that we have forgotten. I will see what I can learn from them. I will strive to avoid Argument from Antiquity, however; that is always a danger with this sort of thing.

I can only see so far because I stand upon the shoulders of Giants10.


  1. http://mitpress.mit.edu/sicp/

  2. https://github.com/verdammelt/sicp

  3. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1

  4. It was in this class I created the joke idea of ‘God code’: the stereotypical 3-line Lisp function, where the first line defines the function, the second line is a null check, and the third line recurses. And shit got done.

  5. learned

  6. Software Engineer, Programmer, Coder, Hacker whatever you want to call us/ourselves.

  7. “Any sufficiently complicated C or FORTRAN program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”

  8. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-33.html#%_sec_5.3

  9. http://www-formal.stanford.edu/jmc/recursive.pdf

  10. with apologies to Newton.

Thoughts About the Domain Model


The CHECKS Pattern Language paper (Cunningham 1994) has some good ideas about what they call “Information Integrity”, ensuring that inputs are parsed/validated/handled well. But I also found that it made some good statements about the domain model in general. I’m going to write about a few of these quotes here.

Your domain model must express the “logic” of the business in its richest and often illogical detail. Every clause of every statement should be motivated by some business fact of life.

A program will always have an unfortunate amount of code in it, but an important chunk of it is the domain model: the part that is the reason for the application. Not the code that does the jQuery jiggery-pokery, not the mallocs and frees. When the business rules change, this is where the change will be made. Keep it clean and uncluttered with non-domain details. If there are details in it which do not correspond to the domain logic then there will be confusion when talking about the domain with the experts. If the code doesn’t express the domain the way the domain experts talk about it, or it is incomplete, then there will be confusion again.
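
To make that concrete, here is a minimal sketch in Clojure of a domain rule expressed purely in domain terms. The rule itself (“orders over $100 ship free unless they contain a hazardous item”) and the data names are invented for illustration; they are not from the paper.

;; Hypothetical business rule: orders over $100 ship free,
;; unless any item in the order is hazardous.
;; Every clause below corresponds to a clause of that sentence;
;; nothing about HTTP, persistence, or logging sneaks in.
(defn free-shipping? [order]
  (and (> (:total order) 100)
       (not-any? :hazardous? (:items order))))

When the shipping rule changes, this function is the only thing that needs to change, and a domain expert could follow it if read aloud.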

In your domain models you are chartered to express business logic with no more complexity than originally conceived or currently expressed.

Domain logic can be complicated to begin with. It doesn’t help that you are not an expert in the domain of (e.g.) Etruscan Snoods, but you need to write a program to work with them. You talk with the experts and translate their expertise into a program. Don’t make it more complicated than it already is. Keep it simple and to the point so it is easy to talk about with the domain experts. If it is not easy to discuss what it does with the domain experts then adding new features or finding/fixing problems with it will be more difficult.

A person reaches through a program’s interface to manipulate the domain model.

This is an important thought - the user thinks and talks about the domain with a certain language, pattern and style; make sure your program talks their language. If the program’s interface doesn’t match the user’s language then the user will have trouble using it, or using it correctly. Data should be labeled with the right words, operations with the correct terms, and permissible inputs and output formats must match expected conventions.

In closing, the paper discusses good patterns for dealing with inputs and the domain model, but I think it is the above quotes that really stand out as good things to think about.

Serving Coffee With Express


A few weeks ago I was playing around with a new stack: Node-Express-Angular all in CoffeeScript1. I found it quite easy to have the Express server compile my CoffeeScript files and then return the JavaScript to the browser. I then found myself last week in a position at work where I wanted a little test page on which to hang the Angular directives I was working on, so I quickly used the same trick.

I’m not saying this trick is bullet-proof, or even a necessarily good idea for production. But it seems like a good thing for testing and perhaps for getting a project off the ground with some prototyping. It also serves as a small example of how to write an Express Middleware2.

With that caveat I will present you with the code and explanation of it3.

express = require 'express'
coffee = require 'coffee-script'
fs = require 'fs'
path = require 'path'

app = express()

app.configure ->
  app.set 'port', process.env.PORT ? 3000

app.use express.logger()

app.use '/client', (request, response, next) ->
  coffeeFile = path.join __dirname, "../client", request.path
  fs.readFile coffeeFile, "utf-8", (err, data) ->
    return next() if err?
    response
      .contentType('text/javascript')
      .send coffee.compile data

app.listen app.get('port'), ->
  console.log 'listening on port %d', app.get('port')

The part that concerns us is the app.use '/client' handler in the middle of the listing. This section tells Express that if a request comes in with a path starting with /client then we want to do some special processing on it.

  • The path.join call determines the path to the CoffeeScript file (which is in the ../client directory relative to our current directory). Of note here is that request.path does not contain the /client part of the path and __dirname is the directory of the currently executing file.

  • The fs.readFile call sets up the reading of that CoffeeScript file. It is important to specify the encoding, otherwise the CoffeeScript compile will throw an error.

  • If there is an error reading the file, such as the file not existing, then we tell Express to run whatever the next rule is (eventually, if no other rules match, Express will send a 404 for us).

  • If we can read the file then we compile it with coffee.compile and send the result back as the response. (Of note here is that we set the content type to text/javascript to make sure that the browser does the right thing (it doesn’t appear necessary with Chrome at least, but it is probably best to do it).)

Congratulations, we’ve just written an Express Middleware!


  1. It is my current opinion that CoffeeScript is better than writing directly in JavaScript. It saves the developer from making some very basic stupid mistakes with JavaScript and also removes some of the syntactic noise.

  2. http://expressjs.com/4x/api.html#middleware

  3. The toy app bootstrapping kit I mentioned at the top of this post is in my expressular-kit. The code here is not a direct copy of the code there in app/server/app.coffee.

Only One Way of Doing Things…


When there is only one way of doing things, it is easier to
modify and reuse code. When code is reused, programs are easier
to change and most importantly, shrink. When a program shrinks
its construction and maintenance requires fewer people which
allows for more opportunities for reuse to be found. Consistency
leads to reuse, reuse leads to conciseness, conciseness leads to
understanding.

Programming as Experience: The Inspiration of Self (1995)

Reuse

I like the meaning of ‘reuse’ in this quote. This is not the ‘reuse’ promised by early OO proponents. They promised1 that one would be able to create objects which would be reused from one project to another. The reuse in this quote means that code is reused within a project. This is the reuse which I believe to be attainable.

It is like an inward spiral: the more code is reused, the fewer ways there will be to do a thing, which eventually leads to only one way.

The sentence about fewer people is a bit odd, but I feel it does make sense. The fewer people on the team, the more likely there will be only one way to do things, which means there is by definition more reuse.

Consistency

Perhaps more important than reuse though is consistency. If code is consistent I have found it to be easier to reason about, because reasonable assumptions can be made when reading it. Being able to make reasonable assumptions, and have those borne out, is very important to me when reading code.

Conciseness and Understanding

And it follows that with reuse and consistency one will get conciseness. But does that necessarily lead to understanding? That is something I feel is right, but I have seen some concise code which is hard to understand.


References:

  • “Programming as Experience: The Inspiration of Self”, 1995, Smith, Randall B and Ungar, David

  1. Or perhaps that is not what they said, but what we heard.

TDD in Common Lisp: Recursive Yak-shaving


While avoiding family this Christmas Day I started trying to figure out a way for me to do TDD in Common Lisp (as one does…). That led to SLIME1 and QuickLisp2 and lisp-unit3 and asdf4 (&c. &c).

…it’s yaks the whole way down.

The hardest part for me was to find a simple way to load my code and run the tests. ASDF ships with asdf:test-system defined but it does nothing by default5. So I did some digging around, saw some others’ solutions, and synthesized my version.

Here is an example ASDF system definition for a Game of Life6 implementation.

(asdf:defsystem #:gol
  :description "Conway's Game of Life in Lisp"
  :author "Mark Simpson <verdammelt@gmail.com>"
  :version "0.0.0"
  :depends-on ("lisp-unit")
  :components ((:module "src"
                :serial t
                :components ((:file "package")
                             (:file "gol")))
               (:module "test"
                :serial t
                :depends-on ("src")
                :components ((:file "package")
                             (:file "gol-test"))))
  :in-order-to ((test-op (load-op :gol)))
  :perform (test-op (o c)
             (progv
                 (list (intern "*PRINT-ERRORS*" :lisp-unit)
                       (intern "*PRINT-FAILURES*" :lisp-unit))
                 '(t t)
               (funcall (find-symbol "RUN-TESTS" :lisp-unit)
                        :all :gol-test))))

My God… it’s full of yaks

I’m just going to let that sink in for a bit while I get some coffee.

Ok, now that I’m back I will explain each part. The first interesting bit is the :depends-on clause:

depends-on
  :depends-on ("lisp-unit")

This line tells ASDF that it should find and load a system called lisp-unit because our system depends upon it. This is where QuickLisp makes things simple since that is its job and it does a fine one at that.

Next is the definition of our components:

Defining the Components
  :components ((:module "src"
                :serial t
                :components ((:file "package")
                             (:file "gol")))
               (:module "test"
                :serial t
                :depends-on ("src")
                :components ((:file "package")
                             (:file "gol-test"))))

I’ve decided that my project should have two main directories, one to contain all the production source code and one for all the test source code. Each will be in its own package, with the :gol-test package :use-ing the :gol package. :serial t, by the way, tells ASDF that each item in the component list depends upon the one before it.

Now the hairier parts which tell ASDF to run our tests:

Telling ASDF How to Test Our System
  :in-order-to ((test-op (load-op :gol)))
  :perform (test-op (o c)
             (progv
                 (list (intern "*PRINT-ERRORS*" :lisp-unit)
                       (intern "*PRINT-FAILURES*" :lisp-unit))
                 '(t t)
               (funcall (find-symbol "RUN-TESTS" :lisp-unit)
                        :all :gol-test))))

The first line is simply stating that in order to do test-op ASDF needs to perform load-op on the :gol system. This way every time we run the tests we’ll reload the system if needed.

Now… this next bit probably shows how rusty my Lisp is. Maybe there is an easier way. Basically we want to run (lisp-unit:run-tests :all :gol-test) with lisp-unit:*print-errors* and lisp-unit:*print-failures* both bound to t (they default to nil and that doesn’t give enough info IMNSHO). However when Lisp is reading and evaluating this code lisp-unit has probably not yet been loaded - so we have to be crafty and find the symbol by name in the package then funcall it. Binding the variables was trickier but I stumbled upon progv7 and that seems to be just what I needed.

progv creates new dynamic bindings for the symbols in the list which is its first argument. These symbols are determined at runtime, which means we can use intern to find or create them in the right package8. progv binds these symbols to the values in the list which is its second argument and then executes its body forms in an environment which contains these new bindings.

With this work done I can now type (asdf:test-system 'gol) into my REPL and it loads and runs my tests and gives me useful output from those tests. I might go so far as to bind that to a very short function name, or even a key in Emacs.

There are still more yaks to shave.

What is left to be determined is if this is the best way to do things.

Some documentation/blogs I read led me to believe that putting the tests in the same system with the production code is verboten (or at least a no-no). I tried having the tests be in their own system definition (depending upon the production code) but then ASDF & QuickLisp both strongly pushed me to have that definition in its own .asd file9, which seemed awkward to me. I personally have no problem shipping tests with the production code, and I believe that all the symbols of the test package could be unintern-ed before an image was saved, if desired.

I’ll play around with this setup and see how things go.


  1. http://www.common-lisp.net/project/slime/

  2. http://www.quicklisp.org/

  3. https://github.com/OdonataResearchLLC/lisp-unit

  4. http://common-lisp.net/project/asdf/

  5. http://common-lisp.net/project/asdf/asdf/Predefined-operations-of-ASDF.html#test_002dop

  6. Really? You don’t know what this is already and you had to look in the footnotes? Go look it up on Wikipedia already!

  7. http://www.lispworks.com/documentation/HyperSpec/Body/s_progv.htm#progv

  8. Perhaps I should use find-symbol instead so that I don’t create symbols that the package doesn’t export. I will experiment with that.

  9. since they look up the definition by looking for a file with the same name as the system

On Reading Code


(How It Ought To Be Done)

IMNSHO1.

Being able to read source code2 is very important. So how does one do it well?

Just like reading any other material, first decide why one is reading it. Why is one reading the listing? Is one debugging a problem or trying to gain specific knowledge? Or is one reading for general knowledge or for pleasure? One takes different approaches depending upon the desired goal.

The first step is to find an interesting spot to start.

For Debugging (or to Gain Specific Knowledge)

Start by skimming the code; grep3 and tags4 tables are useful for this. This will help one find the right, or at least a likely, entry point. There is a lot of instinct5 involved in this step. One gets better at it the more one needs to do it6.

To Get General Knowledge (or for Pleasure)

Code is for reading7 so read it. Skim the code (including the tests8). Look for something interesting, get the lay of the land, the shape of the code. Note the Dramatis Personæ. Perhaps the code seems to be about widgets and frobbing, oh and here’s something about twiddling that looks interesting.

Pick one of these that is most intriguing (e.g. One wonders what happens when a widget is frobbed.) and use that as one’s entry point. This is a matter of personal taste/fetish9.

Now that One has One’s Entry Point…

Now that one is staring a chunk of code in the face (hopefully a small chunk), how does one read it?

Top Down. It is as simple as that10.

Read the first line of code: what does it say? Believe it. That can be risky - but at this point one must do it. Then what does the next line say, and so on. Follow function calls11 if they seem interesting. Skip over things that are not interesting or do not seem relevant. If one’s interest is in frobbing, ignore code about tweaking or twiddling. Perhaps one makes a note that frobbing seems to involve tweaking or twiddling; but one must know about frobbing now, so those are not relevant. This is also very tied to instinct.

It is this point here which I think is most important. One must gain a sense of what is relevant when reading code. One cannot read every code path and think that will lead to understanding. That would be like reading the dictionary and assuming one knows the language. This is where I have seen programmers fail in their reading.

This is a very good reason why naming is so important12.


  1. It has occurred to me that my style of reading code is not the same as some others’. I have been told by a person I think highly of that they feel my way is the/a right way. I have finally gotten around to writing it up as a quick blog post.

  2. Is it time for us to stop calling it ‘code’ given its implications of obfuscation? I think it is still appropriate as long as we remember that the definition is not about obfuscation per se but about using a set of symbols to mean other words/letters/symbols. This is, IMNSHO, exactly what we are doing.

  3. Or its ilk (e.g. ack)

  4. New fangled IDEs keep the equivalent of tags tables.

  5. Gut feeling, hunch, guess work, randomness, luck &c.

  6. Especially when at the client site, trying to figure out why the trades are not going through. Take my word on it.

  7. Most often code is for reading (and evaluating/compiling) by the computer; but also for oneself and one’s fellow humans. (Apologies for being speciest.)

  8. Why do we keep referring to code and test? Tests are code.

  9. Go ahead.

  10. Of course it is not that simple.

  11. And other sorts of papered over goto statements.

  12. Perhaps a post/rant for another day.

SCNA 2013 - Effortful vs. Automatic Brain


Experts Don’t Know what they Know

This year’s SCNA Conference seemed to have several talks which touched upon the idea of experts having implicit knowledge. That is to say: they know things but can’t necessarily explain what they know, or why. Ken Auer and Dave Thomas touched upon it; during the panel on Software Quality it was also brought up (e.g. Corey Haines’ amazing reference to Chicken Sexing). But I felt that Brian Marick’s talk took the idea and then added to it.

Bright vs. Dull Cows

Brian started talking about this implicit knowledge idea by giving his wife as an example. His wife (a veterinary professor) teaches her students to determine if a cow is ‘bright’ or ‘dull’ (a subjective diagnosis). Her students specify, day after day, if cows are dull or bright. She then corrects them. They usually ask how she could tell and she tells them something (drooping ears, nose-cleaning, etc.). But actually these specific clues are not really the way to tell. Those who can diagnose a cow as bright or dull cannot explain why.

This anecdote seems to show that the implicit knowledge of the expert is something that can be trained. When discussing this idea after the talk, Corey Haines mentioned a conference session Michael Feathers gave in which he had the attendees give quick good/bad judgment calls on snippets of code. He then tallied those up and found that some snippets had a large amount of agreement as to their good-/bad-ness. They then discussed and tried to determine why. In light of Brian’s anecdote however it seems that they might never find out why, but maybe they could train themselves by looking at a lot of code and being corrected when they get the judgment wrong.

(Brian hopes to use this sort of idea to be able to train himself to have a nearly physical reaction to bad code. While that sounds cool, I worry about a ‘Ludovico Technique’ for code.)

Effortful vs. Automatic

Brian then talked about two types of thinking: Effortful and Automatic. Effortful is the sort you use when you need to do non-trivial arithmetic, or park in a tight spot. Automatic is when you do simple math like 2+2 or easy driving where you are sort of on autopilot. When you use the Effortful way, it actually takes effort. It is slower and it seems to be only possible to do it serially. Automatic thinking is faster, takes little effort, and seems to be parallelizable.

If you don’t have something in Automatic then your brain shifts to Effortful; sort of a cache miss. If you do something enough, or become good enough at it, you can put stuff into your Automatic cache.

Switching to Automatic

So how can this be done? While Brian offers the usual ‘practice, practice, practice’, he adds to it an interesting idea about ‘Set and Setting’. ‘Set and Setting’ is an idea from Timothy Leary about why drugs such as LSD seem to have very different effects in different people. The mindset (Set) of the person (which includes things like rituals which build up that mindset) and the Setting (environment) have a strong influence on the experiences of that person.

By being aware of which mindsets (and how to enter them) and what settings are most beneficial for programming (and even different types of programming) he feels he can move some programming tasks into his Automatic thinking area.

Conclusion

It seems to me that the idea of ‘Automatic’ and ‘Effortful’ adds to the idea of the expert’s implicit knowledge. Perhaps the expert’s knowledge is just an expression of Automatic thinking?

I’m interested in trying out the practice of being more aware of my own ‘Set and Setting’ with respect to code.

Quick Start for Compojure/Clojurescript


I am once again jumping into the Clojure world and trying to learn a bit of it. I am doing so via a small project called defclink (source). I am also attempting to get back into blogging. So I thought I’d combine the two items into this short post.

Goals of my project.

My goal for this project is to learn some Clojure by building a webapp. I decided to also use Clojurescript since it seemed like an interesting idea. I wanted to keep the number of libraries and frameworks low to maximize the Clojure code I would need to write, thus learning Clojure. That being said, I allowed myself the luxury of the Compojure library (with Ring under it) to handle the HTTP protocol stuffs.

I’ll start by explaining how to get a brand new Compojure project.

How to set up a Compojure project

Starting a Compojure project is very simple:

lein new compojure awesome-thing

This will create:

README.md    project.clj  resources/   src/         test/

./resources:
public/

./resources/public:

./src:
awesome_thing/

./src/awesome_thing:
handler.clj

./test:
awesome_thing/

./test/awesome_thing:
test/

./test/awesome_thing/test:
handler.clj

and furthermore will set up your project.clj file to pull in compojure.
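
For reference, the generated project.clj looks roughly like the following; the exact version numbers (and some keys) depend on the template at the time you run it, so treat this as a sketch rather than a verbatim copy. The important parts are the compojure dependency, the lein-ring plugin, and the :ring entry pointing at the handler.

(defproject awesome-thing "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.5.1"]   ; versions here are illustrative
                 [compojure "1.1.5"]]
  :plugins [[lein-ring "0.8.5"]]                 ; provides the `lein ring ...` tasks
  :ring {:handler awesome-thing.handler/app})    ; the handler `lein ring server` will serve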

It sets up simple tests verifying that going to “/” will result in a success page with “Hello World” and that going to a non-existent page will result in a 404. There is a defined set of routes to accomplish this.
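
Those routes live in the generated src/awesome_thing/handler.clj, which looks something like this (again, the details vary a bit with the template version):

(ns awesome-thing.handler
  (:use compojure.core)
  (:require [compojure.handler :as handler]
            [compojure.route :as route]))

(defroutes app-routes
  (GET "/" [] "Hello World")      ; "/" succeeds with "Hello World"
  (route/not-found "Not Found"))  ; anything else is a 404

(def app
  (handler/site app-routes))      ; wrap the routes in the standard site middleware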

To run the server at this point it is easy to do:

lein ring server

and then go to localhost:3000 to see the awesome_thing.

Heroku?

At this point you probably want to push this awesome thing to Heroku to start getting VC funding. This is easy to do (the pushing part).

First create your Heroku app in the usual manner (at the time of this writing you don’t even need to choose the cedar stack, but check the docs to make sure your stack is Clojure compatible).

Now create a Procfile with the following contents (defclink.main here is the main namespace from my own project; substitute your app’s entry point):

web: lein with-profile production trampoline run -m defclink.main $PORT

Now you should be able to push to Heroku and sit back waiting for the funding to roll in.

Learning When to Give Up


“When you find yourself in a hole, quit digging” – Will Rogers

Sometimes I wonder what the important skills for a developer are. I now believe that one of the important ones (perhaps the most important) is knowing how/when to give up and start again. This contradicts another important skill/trait for a developer to have: tenacity. But I think they are balancing forces, with giving up being slightly more important. Let me explain.

Often when working on a problem, it takes longer and is more complicated and difficult than expected. At this point many files have been modified and it looks like there are many more to modify; there are plenty of broken tests, maybe an unknown quantity of them. There are two options: keep going, or give up.

This is the point at which I like to remind myself (and my coworkers) of Will Rogers’ quote. Another thing I do is ask “Are we digging a tunnel, or a hole?” If we think we are digging a tunnel then we can give ourselves a time limit for further digging; if it pays off, great! Otherwise we need to climb out of the hole.

I used to believe, quite strongly, that tenacity was more important. To persevere through the mountain of outstanding changes and get it done was admirable. I now believe that giving up is the better option. What changed my mind? I now see more benefit in taking the knowledge acquired from the work done and reapplying it to the problem from a fresh start. Powering through the situation is deciding that the current way is the only way; starting again is deciding that there is a different way, armed with the knowledge that the current way has given you (which is at least that the current way isn’t working out).


As a footnote I want to add that I also think giving up in another way is important. When working on something collaboratively (such as when pairing) one can find oneself at odds with another developer about how to proceed. I believe that, unless the idea is very obviously bad, stopping the arguing/discussing and doing something is better than spending more time discussing. Especially when there is source code involved - any code is better than no code. Any code can then be changed into better code.

Cooking Chicken Soup


In the interest of breaking my current blog-writing block I’ll record for posterity my ‘recipe’ for chicken soup.

First you need to eat some chicken so you have bones. I usually get rotisserie chickens for another recipe (Skillet Chicken Pot Pie) and then save the bones in the freezer. I use two carcasses for this recipe.

So take the bones and entomb them in a big pot with some carrots, celery, garlic and onion. Don’t worry about chopping up the vegetables or even peeling them - this step is making stock.

Look at the bones!

(The clever ones among you will note that I didn’t tell you how much of these things to put in. I am doing this in honor of my mother on this Mother’s Day.)

Now put in enough water to drown it all…

Now with water

and bring it to a boil.

Starting to Simmer

Let it boil for as long as you can stand to. You want it to get all the good tastes out of the chicken bones and vegetables. I usually let it go for 2-4 hours and stir it occasionally. I don’t let it boil down more than half-way however.

At this point it should be filling up your house with a very delicious aroma.

Boiling down

After boiling it down you need to strain out all the solid bits and throw them away. I have not included a picture of this because it looks rather unpleasant and might put you off the recipe. But take my word for it - this bit of unpleasantness will be worth it.

What you have now is chicken stock. Do with it what you will!

I usually choose to turn right around and create soup. I now put in carrots, celery, onion, garlic, maybe peas. These, since you will be eating them should be peeled, cleaned, etc.

Now making it into soup

I usually, because I am lazy, add a pasta to this soup, but potato or rice could be used instead.

Bon Appetit! Julia Child wields Mjolnir to defeat the Midgard Serpent