Code And Cocktails

Life Is Short, and the Craft Is Long


Hippocrates noted that there is so much to learn of his craft, but only a short life to learn it in1. Chaucer paraphrased it as ‘Life is so short, and the craft takes so long to learn’2.

While I can totally relate to that idea these days, I didn’t always. At one point I was a foolish person who thought they were good at what they did and just slid along, month after month, year upon year, not improving. That changed several years ago when I was introduced to a bunch of new (to me) ideas and techniques; the scales fell from my eyes and I realized how little I knew.

Sometimes I feel I know so little, that my skills as a developer are limited. I know that is not entirely true; it is my self-deprecation rearing its ugly head. But it is a frequent feeling.

There is so much to learn and one’s life is so short; why then should I even bother? I enjoy programming and, as part of my character, I prefer to do the best job I can. Thus I am left with no choice but to do everything I can to improve my skills.

There is so much to learn and one’s life is so short; how then can I move forward? I have been focusing on the basics (such as simple design and clean code) - these will be skills applicable to any programming language I may use or project I may be on. From this base I can expand into areas which are less familiar to me (such as functional and logic programming) - these will broaden the mind and help clarify existing skills in the familiar areas.


  1. Aphorismi 

  2. Parlement of Foules 

Make Same Things Similar, Different Things Dissimilar


Abstract

Whilst fixing some bugs in data file parsing code I found that the bugs lay in the fact that the two types of data files were more similar than not, but the code said they were more dissimilar than similar. Refactoring to make the same things look the same fixed at least part of the bugs on its own.

A Story

On my current project we have some code that reads data files and puts the data into the database. The files come in different formats and layouts, but there are really only two different kinds of files. I have always maintained that the processing of these two kinds is more alike than different: how to read an Excel file with the data laid out just so, and how to put that data into the database, is the same for both; whereas how to decide whether a line in the file is data or something to ignore, or how to split the data into larger logical chunks (required before putting it into the database), may well differ between the two.

I was very involved in the creation of the code for the reading of the first type of data file; but not very involved for the second. Recently when we shook out some bugs in the parsing of the second type of file I found myself in the code for both types of files fixing the bugs.

I found that the code implied that the two files were more different than I had imagined. Code that I thought would be similar, or even exactly the same, was not. This difference could be seen at multiple levels, from the naming of things up to the factoring of the code into methods and classes.

As I reviewed the bugs I had to fix, I saw that they were telling me the second type of file was in fact not so different from the first. As far as the user was concerned, many of the “rules” of parsing were exactly the same.

The first thing I did was make the code for the second type of file much more similar to that for the first. This involved renaming and refactoring. Once things were much more the same, I extracted the sameness into the base class (which already existed). This refactoring alone fixed at least parts of the bugs. All that was left was some tweaking of the code left behind, which defined the difference.
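
The shape of that refactoring can be sketched roughly like this; the class and method names are hypothetical, not from the actual project. The shared flow lives in the base class, and each file type keeps only what is genuinely different:

```ruby
# Hypothetical sketch of the refactoring described above; all names are
# invented, not from the real project.
class DataFileParser
  # The shared flow: keep the data rows, normalize them for the database.
  def parse(rows)
    rows.select { |row| data_row?(row) }.map { |row| normalize(row) }
  end

  # Identical for both file types, so it lives in the base class.
  def normalize(row)
    row.map { |cell| cell.to_s.strip }
  end

  # The genuine difference: each file type decides what counts as data.
  def data_row?(row)
    raise NotImplementedError
  end
end

class FirstFormatParser < DataFileParser
  def data_row?(row)
    !row.empty? && row.first != "#" # e.g. skip comment lines
  end
end

class SecondFormatParser < DataFileParser
  def data_row?(row)
    !row.empty? && row.first.to_s =~ /\A\d/ # e.g. data lines start with a digit
  end
end
```

Once the code is shaped like this, the sameness is visible at a glance, and a parsing rule shared by both file types has exactly one place to live.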

Conclusion

This work really showed me the importance of keeping consistency in the code. Things which are the same should be named the same and look the same. Things which are different should NOT be named the same or look the same; they must be kept dissimilar. The compiler doesn’t care, of course; but I think it helps the programmer.

Not keeping to this ‘rule’ will, I think, lead to bugs where things which should be handled in the same or similar manner mistakenly are not, because they appear to later developers to be different. Or, even more simply, because the commonality was never noticed and so never extracted.

The way to keep this in mind is to ask not just whether the code is factored well or has good naming, but also whether the code you wrote is similar or dissimilar to the other code around it - and whether that is correct or not.

Cocktail: Bijou


I visited Brick and Mortar before Thanksgiving for a Sazerac and to let Lisa know that I’d be in on Monday with the Code-and-Cocktails group. She pointed to a cocktail on their menu she thought I might like. Misty came over after my Sazerac was done and I ordered it; since it had Chartreuse in it, I asked her what differences, if any, there were between Green and Yellow Chartreuse. It turns out there are a bunch, and Misty was quite happy to tell me about them. I asked if I could have a taste of each to compare and she obliged.

I’m bad at describing tastes, but I can say that Yellow Chartreuse is sweeter and not as strong; the Green is a stronger liqueur, less sweet and more herbal.

After this experiment I decided I needed to get some and add more drinks to my home repertoire. After buying a bottle I pinged Misty on Twitter and asked for a recommendation for a drink combining gin, rye, or vermouth (my staples) with the Chartreuse. She pointed me at the Bijou. The recipe she gave me was: equal parts gin, Chartreuse, and sweet vermouth (I did 1 oz each), a dash of orange bitters, and a lemon-oil garnish.

The taste was very complicated and very good. Chartreuse is something I am going to continue to explore.

Recipe: Mama Simpson's Kartoffelsalat


My Mother’s Kartoffelsalat (potato salad) recipe is a German-style potato salad, not one with mayonnaise. This is the type I grew up with, and I think the other type is just wrong.

One day many years ago I asked my mother for the recipe and she gave it to me. The trouble was, she didn’t have any measurements. It was all done by how much seemed good. I present the recipe to you in the same way.

  • Potatoes. Boiled until they are cooked through - but not mushy. Cut up potatoes if some are much bigger than others (so they cook evenly). Then sliced into approx 1/4” slices. Peeling is optional.
  • Oil
  • Vinegar
  • Salt & Pepper
  • Onion - chopped
  • Garlic - minced
  • (optional) Bacon - cut into little pieces and cooked crisp.
  • (optional) Cucumber - sliced very thin.

I usually make it with 5lbs of potatoes (because I am bringing it to Thanksgiving dinner for example). For that much potato 1-2 onions will be good and something like a tablespoon of garlic (hard to have too much for me).

Slice the potatoes while still hot, add the garlic and onion, then put in the oil and vinegar - make sure the potatoes are well coated. I like it to be a bit sour, so I make sure there is a bit more vinegar than oil.

Salt and pepper to taste.

It can be served warm after cooking, or cold, or reheated in a microwave, or fried up in a pan (Papa did that one time when Mama was in the hospital and I was still pretty young).

Enjoy.

GOOS Review Ch 14 & 15


(Just a quick post about chapters 14 & 15 of GOOS to get myself back into the swing of things after a while…)

A few points hit me during these two chapters - things which I know, but which perhaps are not entirely second nature to me yet (even after all these years).

  • In chapter 14 new features were added with little fuss - the relatively clean design allowed for this.
  • Chapter 15 showed doing refactorings bit by bit - it kept the tests red for a while, but they got done. Just because it will take a while doesn’t mean you shouldn’t do it.
  • Chapter 15 also saw the developers realize that continuing down the path they were on was going to be pretty annoying/boring. They took this to mean that they were doing something wrong. They stopped, thought about it, and found a better design which made their needed changes easier.

These are three important things to internalize:

  • Good clean design leads to ease in implementing new features.
  • A refactoring might be hard - but that is not a reason not to do it.
  • If the change you need to do is hard / annoying - step back and figure out why (see the first point).

These are all things I know, but in the heat of the moment tend to forget. I need to really internalize them; then I can choose to ignore them in certain circumstances, but I’ll do it explicitly, knowing the pros and cons.

Also, as a side note, there were some good points where OO design skills shone through - for example, Java enums which quickly became feature-ful objects (Column and SniperState) and led to cleaner code elsewhere.

Delivery vs. Code Quality: Any Corners Safe to Cut?


The balancing act between delivery and code quality has been on my mind lately. It seems to me that there are times when delivery trumps code quality; but sacrificing code quality for delivery speed results in slower delivery of future features.

(‘Code Quality’ here is defined as ‘good design’, ‘easy to change’, ‘easy to understand’. This is not about quality as in free-of-bugs: that is not a corner I will cut. I am also not suggesting writing shit.)

Are there any corners which are safe to cut? I have a feeling there is no good answer; but is there a right answer?

I suppose, as a bottom line, the best thing to do is to keep the corner-cutting explicit and visible to all people involved (including non-developers) so that the eventual slowdown will not be a surprise.

Or is this just an invalid assumption on my part? Is there any time that delivery can trump code quality? I know that everyone says that keeping code quality high is the only way to go quickly (@unclebob’s pithy: “The only way to go fast is to go well”). But that seems to me to be about the long term; I am talking about short-term speed gains.

This is something I’m going to continue to think about; I’d love to hear your thoughts on this.

Having a Conversation With Your Code


Just sat in on a talk by Cory Foy entitled “When Code Cries: Listening to What Your Code is Saying”.

(Warning: there will be anthropomorphizing of source code in this post)

One thing I wanted to pull out of it is the idea of listening to your code. Really listening. Like it was a person.

I don’t think it is just listening to code - but a conversation with code. The programmer is talking about what they need, and the code is telling the programmer what it needs. It is a give and take like any conversation. If one side of the conversation tries to dominate, or ignores the other, the outcome will not be successful. Like any conversation the failure may not be immediate but later when misunderstandings between the parties of the conversation cause problems.

Listening is not easy however. To really listen you have to:

  • Decide to listen
  • Listen for the whole message
  • Let go of your personal agenda
  • Be patient
  • Be curious
  • Test for understanding

I personally have problems with listening patiently for the whole message. I tend to jump in, trying to add things from my own agenda, saying things starting with “So what you’re saying is…” instead of just waiting to find out what is being said. With code this comes out when I jump too quickly to refactorings, or to thinking that my coding task is done - because I’m not listening to the code.

Having a conversation with code is not easy either. Mike Clement (@mdclement) likened it to talking with his 18-month-old child: they can’t talk yet, but they are trying to communicate. One needs to be patient to get the message.

I am working on being a better listener, with people, but I can see that I need to work on those skills for my code as well.

Recipe: Washabinaro Chili


The following is the recipe for “Washabinaros Chili”; we found it on All Recipes. It serves 8.

The startling parts of the recipe, which always attract people’s attention when I describe it, are the wasabi paste, the beer, and the coffee.

The recipe is not difficult but takes time. The prepping can take 20-30 minutes, and the total cooking time is 3 hours. However it is worth the time.

Ingredients

  • 4 tbsp vegetable oil, divided
  • 2 onions, chopped
  • 4 cloves garlic, minced
  • 1 lb ground beef
  • 3/4 lb spicy Italian sausage, casing removed
  • 14.5 oz peeled/diced tomatoes with juice
  • 12 oz dark beer
  • 1 cup strong brewed coffee
  • 12 oz tomato paste
  • 14 oz beef broth
  • 1/4 cup chili powder
  • 1/4 cup brown sugar
  • 1 tbsp ground cumin
  • 1 tsp dried oregano
  • 1 tsp ground coriander
  • 1 tsp cayenne pepper
  • 1 tsp salt
  • 1 tbsp wasabi paste
  • 45 oz kidney beans
  • 2 Anaheim chili peppers, chopped
  • 1 Serrano chili pepper, chopped
  • 1 Habanero pepper, sliced

Instructions

  1. Place 2 tbsp oil in a large pot over medium heat
  2. Cook & stir onions, garlic, beef & sausage until meats are tender
  3. Pour in tomatoes, coffee, beer, tomato paste & broth
  4. Season with chili powder, cumin, sugar, oregano, cayenne, coriander, salt & wasabi.
  5. Stir in 1/3 of the beans, bring to a boil, reduce heat, cover & simmer.
  6. In a large skillet over medium heat, heat the remaining oil.
  7. Cook the chilies in the oil until just tender, 5-10 min.
  8. Add peppers to pot and simmer 2 hours.
  9. Add remaining beans & cook for additional 45 minutes.

Notes

We use mostly Chipotle chili powder in this recipe; it really adds a great taste and kick.

Also pick a good dark beer, a really ‘chewy’ stout. The last time I made it I used Babayaga Stout by Pretty Things. Of course enjoy the rest of the bottle while cooking.

Test Latex Post


I was prompted by Steven Proctor’s post about using LaTeX on WordPress to see if it could be done on Octopress. Of course it can!

A quick googling turned up a post on Eason’s Blog about doing just this. His post links to another, which is in Chinese, but the code snippets are all you need. Note that Eason’s post points out that config.yml needs to be updated - something left out of the other.

(Also a note: leave a blank line above the $$ starting a multi-line block for best results.)
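
As an illustration of that note, a display block in a post source might look like this (the equation is only a placeholder, not the one originally rendered here):

```latex
Some introductory sentence, then a blank line before the block:

$$
e^{i\pi} + 1 = 0
$$
```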

And now for a test

Looks pretty good.

I think Knuth would be pleased.


On Implementing a Lisp


0. Background

About a year ago I read McCarthy’s paper “Recursive Functions of Symbolic Expressions and Their Computation by Machine” and thought about implementing the Lisp described in it in Ruby (a language I was learning at the time). The project languished for quite some time, until just recently I went ahead and implemented the language specification laid out in that paper.

The project is yarlisp; its name means “Yet Another Ruby Lisp”, given an assumption that I could not possibly be the first person to do this sort of thing, mixed with the idea that “Ruby is an acceptable Lisp” and even a bit of “Talk Like a Pirate Day”.

My goal in this project is to minimize the amount I need to implement in my ‘assembly language’ (Ruby). There is an idea that Lisp needs only seven primitive operations implemented in its assembly language, and everything else can be implemented in the Lisp itself. (Obviously the seven primitives only cover the symbol-processing parts; things like numbers, IO, etc. really need to be implemented in the underlying language.) I have not entirely succeeded in this so far - in the beginning I was more interested in getting the functionality implemented; I think I can refactor towards more purity.

1. Current state of the project

I have implemented ATOM, EQ, CAR, CDR, CONS, COND, QUOTE and EVAL all in Ruby. The features of Ruby I have used are subroutine definition, conditionals, boolean true/false, equality and raising exceptions (for undefined behavior).

The implementation of EVAL required some more helper functions due to their recursive definitions; these too are written in Ruby, as I had not yet found a way to implement subroutine definition - especially not recursive subroutine definition.

ATOMs are symbols (e.g. :foo); CONS cells are arrays of length two (e.g. [:a, [:b, :NIL]]). Anything not an array is an ATOM. All public methods of the Yarlisp module are named in ALL CAPS; these are the Lisp primitives. Any helper methods are implemented as inner methods and are named in lowercase. If it is not in ALL CAPS, it is not a method available in my Lisp language.
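
That representation can be sketched in a few lines of Ruby. This is a simplified illustration of the conventions described above, not the actual yarlisp source:

```ruby
# Simplified sketch, not the real yarlisp code: atoms are Ruby symbols,
# cons cells are two-element arrays, and the primitives return the Lisp
# booleans :T and :NIL.
module Yarlisp
  def self.ATOM(x)
    x.is_a?(Array) ? :NIL : :T # anything that is not a cons cell is an atom
  end

  def self.EQ(a, b)
    a == b ? :T : :NIL
  end

  def self.CONS(a, d)
    [a, d]
  end

  def self.CAR(cell)
    cell[0]
  end

  def self.CDR(cell)
    cell[1]
  end
end
```

So `Yarlisp.CONS(:a, Yarlisp.CONS(:b, :NIL))` builds exactly the nested two-element arrays shown above.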

Currently the only sexp syntax available is CONS cells; the list shorthand style of sexp is not yet available. This means I need to write lists like [:a, [:b, :NIL]] instead of [:a, :b]. This has become very annoying, very quickly.
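
The shorthand itself is mechanical to expand; a hypothetical helper (not part of the paper’s specification, and not yet in yarlisp) could convert it:

```ruby
# Hypothetical helper: expand the flat list shorthand into nested cons
# cells, so [:a, :b] becomes [:a, [:b, :NIL]].
def list_to_cons(items)
  # Fold from the right: start with :NIL and cons each element on front.
  items.reverse.reduce(:NIL) { |cdr, car| [car, cdr] }
end
```

With that, `list_to_cons([:a, :b, :c])` yields `[:a, [:b, [:c, :NIL]]]`, sparing me from writing the nesting by hand.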

One oddity of my implementation is wherever I call a Lisp primitive I do it in an oddly Lisp looking way (odd to see in a Ruby file). For example, when the CAR of the expression passed to EVAL is :CDR it returns: (EVAL (CAR args), env). While it makes the code look odd for Ruby - it brings out the Lispyness of it. I will continue with this as an aesthetic in the code I write in this project.

Because of my choice of Ruby as my ‘assembly language’ I have not needed to implement memory allocation or garbage collection.

2. Thoughts

I found this a fun little exercise and want to continue with it. I think I will also implement Lisp in other languages. It turned out not to be very hard, and it was something that could easily be broken up into pieces by the different primitives and clauses in EVAL.

Now that I see all of EVAL implemented I think I can see a way to reimplement more of the language in Lisp itself by using the LABEL and LAMBDA Lisp functions. Also there are things like :NIL and true/false which I could implement by having EVAL append bindings to the environment it is passed. That way there will be default bindings for these things. Or perhaps I’ll just leave that for the caller of EVAL to do.
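
One hedged way to picture that default-bindings idea (names and shapes here are illustrative only, not yarlisp code): since the environment is an association list built out of cons cells, EVAL could simply cons a few standard pairs onto whatever environment the caller supplies.

```ruby
# Illustrative only: prepend default bindings onto a caller-supplied
# environment, which is an association list of [name, value] pairs in
# cons-cell form. Here :NIL and :T are bound to themselves.
def with_default_bindings(env)
  [[:NIL, :NIL], [[:T, :T], env]]
end
```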

I ran into an odd issue while finishing my implementation of LABEL. There was a feeling of double evaluation of the arguments to the method I was defining. Since my goal at the time was to implement the spec as written, I went along with it. However, I believe there is a superfluous call to EVAL in the definition as given in the paper. I want to hunt that down and fix it.

What I am struck with overall is the cleanliness and sparseness of it. The simple, regular syntax and the association list led to no surprises during implementation. Each step made sense, although LABEL and LAMBDA took longer for me to understand - partly/mostly because the M-expr syntax used in the paper, along with the formatting of the paper, made that section of the specification harder to read. Reviewing my implementation, I can see the specification very clearly in it; the definition of Lisp is very clear in its implementation.

I hope to continue working on this project - and others like it. I’ll continue to write about my adventures with it.

3. Next steps

The next phase of this project will include the following:

  1. Adding the list style sexp shorthand - all these CONS cells are getting hard to write out.
  2. Refactoring to higher purity of the language - implementing more of the language in itself - less in Ruby
  3. Implementing a reader - which will need to be in Ruby. Although with the more comfortable sexp syntax this may not be so necessary.
  4. I want to find a more Lisp way of handling the undefined cases - to remove the raising of exceptions from Ruby code. This is an extension of the purity issue above.

The very next step for me to take is to implement a real example method in my Lisp. Even though I do not have numbers, I may implement a Fibonacci or factorial method using Church numerals. At the very least I will implement the FF method presented in the paper.