So you have an “Agile Board”(TM), congratulations! If you are like any
of the people I’ve worked with recently you’ve got lots of columns to
keep track of all the possible states a story card could be in.
Back away from the board…
You should start with very few columns. I think four should be plenty:
Backlog is the place to keep all your new ideas. Make sure the next
priorities are at the top and ready to go, as you go down the list
don’t worry about the rest too much, they can stay vague.
Current is for the things you are planning on doing this
iteration. Use the Goldilocks principle. Don’t agree to do too much
or too little. Just the Right(TM) amount.
Doing is for things that are being worked on RIGHT NOW. A good
number of items here is a function of the number of people on your
team and how they collaborate (pairing, mobbing, solo if you must). If
there are too few or too many items there is a problem - discuss
it. If something stays here for days that is the sign of a problem -
discuss it.
Done is for keeping track of the valuable things you are delivering
as a team. Rejoice. Throw them away after sufficient rejoicing
(sometime after the next iteration starts is a good time).
What about “Ready for QA” or “Ready for Deploy”, you ask? I’d ask: why
isn’t QA and/or Deploy part of your definition of done?
What about “Blocked”? OK, this one might be useful. But a red sticky
(or equivalent for non-physical boards) on the card is probably
enough. Moving it to another column makes it less visible, and a card
being blocked is a problem and we want problems to be visible.
Of course it is your process not mine. Use the columns you
need. But know why you need them, and feel free to add and remove
them if after discussion you realize the board is not serving you.
When I think of how “Craftspeople” do their work I think of
tools. Good tools. They pick good tools because they know that it is
easier to get the job done when you have the right tool, and it is
well made. They don’t use a tool that is the wrong size for them, the
tool fits their hand. They may also modify their tools to better fit
their needs.
Sometimes a craftsperson will make entirely new tools for their
work. It might be a special jig for cutting the same type of cut in a
lot of lumber, or for drilling the same sort of hole. Sometimes these
special jigs even come into common use, such as a door lock
installation jig, or a mitre box.
Because a craftsperson uses their own tools, and special tools, they
might seem to have a handicap when they don’t have those tools. But
unless the job cannot be done except with a particular tool (it is
very difficult to cut a piece of wood with a hammer), they can still
get the job done (albeit perhaps a bit slower).
This is because they know how to do the job; the tools are just how
they go about it.
Some people think that the idiom “A poor craftsman blames his tools”
implies that tools are not important. It does not! It means that a
craftsperson knows that the failure to do a good job is not the fault
of the tool, but the fault of the craftsperson: their skill, or their
choice of tool.
Do you pick the right tool for the job? Do you change it to fit your
hand? Do you make special tools for the work?
Recently I learned some Android programming by writing a simple app
for a client. It was a great opportunity to learn the platform and how
“easy” it is to write an app. I ran into one ‘gotcha’ that I thought
might be valuable to others.
One feature that was needed was a swipeable carousel of YouTube
videos. Google provides some widgets for showing YouTube videos on an
Android device, and the YouTubePlayerFragment was an (almost) perfect
fit for my needs1. Also the ViewPager was just the thing for creating
the swipeable list of items. It was easy enough to create a subclass
of the ViewPageAdapter
which knew the list of videos and created YouTubePlayerFragments as
needed (actually a subclass whose job was to handle the initialization
of the YouTubePlayerFragment).
While this was easy to code - it was not so easy to make it actually
work.
Trying to play videos resulted in a cryptic message about the player
not playing because it was not visible on the screen. The coordinates
in the error message made it seem like the object was way to the left
of the visible screen. That was the first clue. It was perplexing
though since the player was quite obviously right there on the screen.
Some debugging gave me the second clue I needed. When I pressed play
on the player on the screen, multiple players were firing events
saying that they were playing. Multiple players?
Reading2 into the documentation of ViewPager some more told me that
it will request multiple views from the ViewPageAdapter, so that other
views are “ready to go”. But why did they all respond when I clicked
on one of them?
More debugging did not solve the mystery but solidified my hypothesis:
The YouTubePlayer and/or YouTubePlayerFragment has state shared
between all their instances. That is the only explanation that would
fit the observed behavior.
So I needed a way to ensure that only one YouTubePlayer was in play at
a time. The ViewPager documentation says you can change the number of
other pages that will be created. Changing that did not work for me -
at least one other view was always created. That left me with ensuring
that only one player was initialized.
I tried various event listeners but found that none of them fit the
need. Sometimes I would get an event firing on both the active and
the inactive player, and it was not possible to tell the difference.
Finally I found one thing that did seem consistent and usable:
setUserVisibleHint. It was called on the fragment with a true
value when that fragment was the one shown to the user and was called
with false when it was not. So I made sure my fragment was not
initialized until it got told that it was visible; and then released
it when it was no longer visible.
Except for the supremely annoying fact that the YouTube player
widgets DO NOT WORK on emulators. So I had to do all this work
with a physical device tethered to my machine. Like a savage. ↩
Hereinbelow I will add more annotation to those already found in that
file along with snippets of code.
The first thing that needed to be set up was a mode for React code.
After a short Googling it looks like
web-mode is a mode that can handle it. After
some brief testing it does appear to work reasonably. It appears that
the React/ReactNative community has not decided between *.js and
*.jsx as the extension for the code files, and since web-mode can
handle either, I associated it with both.
The next thing I wanted to get set up was a linter. I thought this
would be helpful since React is a technology I am not entirely
familiar with yet and a linter can help me “do the right thing”. With
a suggestion from a coworker (who has done some React work) I chose
ESLint with the default configuration settings. These defaults
prompted me to standardize on two spaces for indentation. (I also set
js-indent-level, as js-mode is still in use for JSON files.)
Setting up the linter to run via flycheck
took a small amount of work since I don’t like to install project
specific tools globally. (I know that this is contrary to current
mores, but I have been tripped up by global vs. local installations
before so I shy away from them when I can.)
First I needed to integrate NVM with Emacs so that Emacs could run
ESLint at all.
(The choice to use the last version found is totally arbitrary. If and
when I get more versions of Node.js on my machine I’ll have to make a
more careful choice.)
Next I hooked into projectile to look
for a locally installed ESLint and use it if found. The
projectile-after-switch-project-hook functions are called after
Projectile has switched directories to the project so one can simply
check the project for the desired file.
(Note: the function is interactive because I found at least a few
cases where the hook was not run by projectile. Making it interactive
lets me use it manually in the rare case I need to.)
Flycheck’s ESLint integration is limited to only certain modes, and
web-mode is not one of them, so I needed to add it to the
white-list of modes.
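A minimal sketch of that addition, assuming Flycheck’s standard
flycheck-add-mode function:

```elisp
;; Allow the javascript-eslint checker to run in web-mode buffers.
(with-eval-after-load 'flycheck
  (flycheck-add-mode 'javascript-eslint 'web-mode))
```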
Software maintenance is an extremely important but highly neglected
activity…
…says Boehm1 in the midst of a long paper about the current and
possible future state of Software Engineering at the end of 1976. I
think this statement is just as true today as it was almost 40 years
ago when Boehm wrote it.
Boehm defines Software Maintenance as “the process of modifying
existing operational software while leaving its primary functions
intact.” (which sounds a lot like what we now call Refactoring). He
states that Maintenance has three main ‘functions’ which imply certain
needs:
Understanding the existing software: This implies the need
for…well-structured and well-formatted code
Modifying the existing software: This implies the need for software,
…, and data structures which are easy to expand and which minimize
side effects of changes…
Revalidating the modified software: This implies the need for
software structures which facilitate selective retest, and aids for
making retest more thorough and efficient.
We have the tools today to do these things. They are things like
modular design, the SOLID principles, and TDD. We have learned ways
to write well-structured code which minimize side effects of changes
and which facilitate retest. We’ve written languages and libraries
which help us do these things.
And yet maintenance is still a problem.
Perhaps we as an industry are focusing only on the “cool” stuff. Or
believing/hoping that “someone else” will maintain it. Or focusing on
immediate customer needs and not considering long term ones. The
customers themselves may not consider the long term costs.
As professionals we need to consider the life-span of the code we are
writing, and write it accordingly. If a civil engineer is asked to build
a bridge that need only last for a single crossing it will be quite
different than if the bridge needed to last for decades (or longer).
We need to think in the same way.
Currently we often write software as if it only needs to work for
today. No thought is given to tomorrow.
We have the tools to write software that can live as long as it needs
to. But we need to treat the life-span as a requirement.
Boehm, B. W. “Software Engineering” Classics in Software
Engineering. Ed. Edward Nash Yourdon. New York: YOURDON
Press, 1979. pp325-361. Print. ↩
Retrospective meetings are an important part of any sort of “Agile”
process. I have a rule for when a retrospective meeting is over:
Everyone that wants to talk has talked.
There is at least one action item.
There is at least one volunteer for each action item.
A good retrospective meeting can help a team achieve what they need
and want. Often, however, these meetings tend toward directionless
discussion and complaining. While sometimes a ‘complaining’
retrospective can be good and cathartic, these should not be the usual
case.
Recently a client’s retrospective meetings had fallen by the wayside.
Even when they had them they were mostly at the complaining end of the
spectrum. We had helped them get back into regular retrospectives,
weekly actually, but they were still not very good.
At the end of one retrospective the team member playing the
facilitator role tried to wrap the long ramble up by asking “Retro
Complete?”. I cut off the murmured agreements with another question:
“Do we have some action items?”.
A retrospective meeting should result in action items. These might be
a new story/ticket/card, a task, a new process, or an experiment to
try. Without these the meeting was just discussion.
Also important is that the action items have volunteers. If no one
wants to do or ‘champion’ an action item then find out why. Perhaps
the action item just isn’t that important to the team; then drop it.
Perhaps it is too big and amorphous; then break it down to a single
next action. Or perhaps it is scary; if this is the case then
this scariness is something for more discussion.
In the end the team discussed what actions to take on the topics they
had discussed and created a short list. They even had a volunteer for at
least one. Still not the most successful retrospective, but better
than the last. Hopefully they’ll keep this habit and their retros will
become even more effective.
In the References section of “Waltzing with Bears” (DeMarco
& Lister 2003), there is a note on “Planning Extreme Programming”
(Beck & Fowler 2001) which says “When viewed as a set of
[Risk Management] strategies, XP makes all kinds of sense.” This made
me review how XP (or Agile more generally) is a risk management
strategy.
The incremental approach of XP reduces risk of late delivery or wrong
delivery. The demo, planning and retrospective meetings seem to be an
implicit risk analysis/mitigation exercise. It might be beneficial to
make this more explicit.
One place where XP doesn’t line up with DeMarco & Lister’s thoughts on
risk management is their advice that there should be sizable up-front
design and estimation. XP eschews this. XP argues that the cost of
up-front design & estimation is higher than the risk that they
mitigate. It seems a reasonable risk vs. cost choice. Adding some
explicit up-front brainstorming should be sufficient to cover the
problem of missing large-impact risks. Furthermore the iterative
nature of the methodology allows for a just-in-time approach to the
costs and risks.
The book also contains a quote from Tom Gilb who said (paraphrased)
‘Be ready to pack up whatever you’ve got any given morning and
deliver it by close of day’. This is very fitting with the idea in XP
that the result of every iteration should be deliverable and producing
value. While a one-day iteration, as implied in the quote, is perhaps
too extreme for many teams, the exercise of determining what it would
take to deliver value in shorter and shorter iterations is valuable.
What’s a person to do when one prefers developing in a TDD style and
decides to start a project where everything not directly in the
language must be written from scratch? Why, write a testing library
of course! But…
without a testing library how can one write this testing library in a
TDD fashion? Herein is how I went about doing it.
(Full disclosure: this is not the first time I’ve tried this sort of
thing1. But I had forgotten I had done it before until I was
writing this article.)
“If you wish to make an apple pie from scratch, you must first
invent the universe.” – Carl Sagan
I have decided to have a crazy project wherein I’ll write an
application in Common Lisp and only Common Lisp. That is to say, no
other libraries. I have allowed myself to use extensions provided by
the implementation, SBCL2, so luckily I won’t need to write my
own socket and threading extensions.
This is a crazy and stupid idea. But the point of the project is to
give me a place to play around with things I don’t normally deal with
and in a language that I like to play with.
Test Driving a Testing Library
Where to start?
Common Lisp has no built-in testing library, but it does have
assert3. If you have assert you can write a testing library.
The problem is writing the first test. I puzzled over it a little and
decided I’d need a function which would take an expression to evaluate
(the test) and would return some data structure which would indicate
the results of the test. Given that, here is the first test:
That is to say: if the expression does not raise an exception there
will not be such an element in the returned alist.
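The Lisp test itself isn’t reproduced here, but the contract it pinned
down can be sketched in Python (the names and structure here are my
own, not the library’s):

```python
def collect_test_results(thunk):
    """Evaluate a zero-argument test body; report any failure as data."""
    results = {}
    try:
        thunk()
    except AssertionError as exc:
        # A failed assertion becomes an entry in the result, not a crash.
        results["failure"] = repr(exc)
    return results

def failing():
    assert 5 == 2 + 2

def passing():
    assert 4 == 2 + 2

assert "failure" in collect_test_results(failing)
assert "failure" not in collect_test_results(passing)
```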
After implementing a macro which lets those tests pass I wrote some
more tests to round out what I thought would be a useful implementation
of this base function of the library. I now had these tests to
bootstrap my library.
With this I felt that I had enough testing to make me feel confident
in my little implementation. It wasn’t perfect - but good enough for
me to now use this function to test other parts of my library.
How to write a test
With collect-test-results I had a way to write and run
individual simple tests. But it wasn’t a very convenient thing to use.
What it let me do now was write tests for deftest which would let
me define tests. I started by writing the sort of test I wanted to
be able to write:

(deftest a-simple-failing-test
  "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))
Through a few tests, written using collect-test-results I determined
that deftest would intern a symbol with the same name as the first
argument of the deftest, bind its function property to a function
which when called would, by calling collect-test-results, evaluate
the body of the deftest. These symbols were put into a list which
could be retrieved from the library. Furthermore, defining a test with
the same name as an existing test does not create a duplicate test.
It might be clearest to just show you how these tests (and an
extracted helper function) ended up:
(defmacro assert-no-failure (&body assertion)
  (let ((failure (gensym "fail")))
    `(let ((,failure (assoc :failure
                            (test::collect-test-results (assert ,@assertion)))))
       (assert (not ,failure) () (format nil "~A" ,failure)))))

(deftest a-simple-failing-test
  "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))

(let ((test-list (test::test-list)))
  (assert-no-failure (equal test-list '(a-simple-failing-test)))
  (assert-no-failure (fboundp (car test-list)))
  (assert-no-failure (assoc :failure (funcall (car test-list))))
  (assert-no-failure (string= "This is a very simple test which fails"
                              (documentation (car test-list) 'function)))
  (assert-no-failure (equal (assoc :test-name (funcall (car test-list)))
                            '(:test-name . test-test::a-simple-failing-test))))

(deftest a-simple-failing-test
  "This is a very simple test which fails"
  (assert (= 5 (+ 2 2))))

(assert-no-failure (= (length (test::test-list)) 1))

(deftest a-simple-passing-test
  "This is a very simple test which passes"
  (assert (= 4 (+ 2 2))))

(assert-no-failure (= (length (test::test-list)) 2))

(format t "~A...PASSED~&" 'test:deftest)
Now that we can define tests and evaluate them, all that is left is to
have a convenient way to run the defined tests. Three quick tests on
the output of run-all-tests were proof enough for me (the tests that the
call to run-all-tests would run were the ones defined above, one
passing and one failing) that it would execute each test, report
which ones failed, and print a count of passes and fails to
*standard-output*:
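As a language-neutral sketch (Python, with invented names mirroring
the Lisp), the behavior being tested looks roughly like:

```python
def run_all_tests(tests):
    """Run each test, report failures, and print a pass/fail count."""
    passed, failed = 0, 0
    for name, runner in tests.items():
        try:
            runner()
            passed += 1
        except AssertionError:
            failed += 1
            print(f"{name}...FAILED")
    print(f"{passed} passed, {failed} failed")
    return passed, failed

def a_simple_failing_test():
    assert 5 == 2 + 2

def a_simple_passing_test():
    assert 4 == 2 + 2

counts = run_all_tests({"a-simple-failing-test": a_simple_failing_test,
                        "a-simple-passing-test": a_simple_passing_test})
assert counts == (1, 1)
```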
At this point my testing library has two main entry points: deftest
and run-all-tests. To create them, I first used assert to test
drive the creation on a lower-level function collect-test-results,
which I then used to test drive deftest, which I then used to test
drive run-all-tests.
Next Steps and Thoughts
Now that I have this testing library I can use it to test drive the
rest of the application I will write. I’m sure along the way I’ll be
extending this library as I find new requirements for it. I’ll
probably also be writing some assertion library to make the tests
more expressive.
The resulting tests5 and code6 are in my GitHub repository for this
project.
Ensure that user.clj does not contain dependencies (even transitive)
upon code that needs to be compiled. This file is loaded well before
Leiningen has started up enough to compile files.
In my latest learning project with Clojure I decided to try Stuart
Sierra’s Reloaded project pattern1. I liked the idea of this
project pattern because it promised to make the REPL-driven
development on a web application smoother. Things were going smoothly
as I worked on simple dummy pages for my little application. However
when I connected the data file parsing code with its custom exception
to the application I started getting compilation and class not found
errors related to the custom exception. The reason for this is a
rather simple one - but was difficult for me to find information on or
to diagnose.
First an aside about custom exceptions. It seems that in the Clojure
community custom exceptions are avoided2. Either one of the
built-in Java exceptions, ex-info3 or Slingshot4 is used
instead. However it had been my favored approach to create a
domain-specific exception for my application’s needs, so I stumbled
forward and learned how to do it. It was relatively simple by using
the :gen-class option of ns and the :aot feature of Leiningen.
And it worked well… until I connected the code using the exception
with the web application that was implemented in the reloaded pattern.
A big part of the reloaded pattern is the use of functions in
user.clj to start and stop the system. This file is loaded by
Clojure whenever it is started5 so it is perfect for functionality
wanted in a REPL. This file is kept in a directory which is only
included in the class path when the dev profile is used (the repl
and compile tasks use the dev profile by default).
Everything is fine until the code in user.clj depends upon (even
transitively) code which must be compiled (such as custom exceptions).
Then we hit an annoying chicken-and-egg problem6 wherein Leiningen
when trying to compile (or launch the REPL) naturally starts Clojure,
which in turn loads user.clj which in turn depends upon code that
needs to be compiled. The error that is reported says that the
compiled class cannot be found.
This error led me to first think that my custom exception was not
written properly and thus wasn’t being compiled, then I thought it was
a problem of the :aot feature in Leiningen and interaction with the
dev profile. But the problem was more fundamental. It was just my
sort of luck that kept me from finding the answer until I spent hours
debugging and researching. Now it is easy to find several reports of
this problem7. It is not a Leiningen problem, not a Clojure
problem, not a reloaded pattern problem, but an annoyingly unfortunate
interaction between them.
Luckily, once the problem was identified, there is a relatively easy
workaround. I took the parts of user.clj which depended upon the
custom exception and moved those to a new file reloaded.clj which is
then loaded when the REPL starts by using the :repl-options
configuration in project.clj. :repl-options has a :init
configuration which can contain an expression which is evaluated when
the REPL is starting. I set it to (load "reloaded").
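A minimal sketch of that configuration (the profile layout here is an
assumption; only the :repl-options line is essential):

```clojure
;; project.clj (fragment)
:profiles {:dev {:source-paths ["dev"]}}
;; Evaluate dev/reloaded.clj once the REPL is up, instead of via user.clj.
:repl-options {:init (load "reloaded")}
```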
In the current code I’m working on we every once in a while want to
mock the constructor of a class. It is only really needed because we
have a few classes which sadly do heavy-lifting in their constructors.
The common wisdom on the team was “you can’t do that”, but it bugged
me and I eventually googled the right things and found out how to do
it. The moment I did it myself I realized that it was obvious. Let me
explain.
In [Python Mockito](https://code.google.com/p/mockito-python/) the
standard form of stubbing is done with the when function, operating
upon an object:
This states that when obj.method is called with arguments arg1
and arg2 then return 5. (There are other things to do besides
thenReturn but for purposes of this discussion that is enough).
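If mockito-python isn’t at hand, the same stubbing shape can be
imitated with the standard library’s unittest.mock (this is an
analogue of the idea, not Mockito’s API):

```python
from unittest import mock

obj = mock.Mock()
# Analogue of: when(obj).method(arg1, arg2).thenReturn(5)
obj.method.return_value = 5

assert obj.method("arg1", "arg2") == 5
```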
What about if you want to mock a constructor? The naïve approach of
stubbing __init__ on the class does not work; stubbing Foo on the
module that defines it does.
It works because a module is an object on which the classes are
methods! It was so obvious when I saw it work. The clues were right in
front of my face: calling the constructor of a class is as simple as
using the class name as if it were a function. That’s because it
is! I’d like to blame too many years of Java & C# where “importing”
is more about telling the linker where to find things than about
creating objects that can be manipulated.
So to sum up: a class is a method on a module which returns an
instance of that class. When you import a module you are creating a
variable of that name bound to an object which represents that module.
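That claim is easy to check with nothing but the standard library:
replace the class attribute on the module object and calls through the
bare name change behavior (Foo here is a stand-in class of my own):

```python
import sys

class Foo:
    def __init__(self):
        self.x = 5

this_module = sys.modules[__name__]
assert this_module.Foo is Foo     # the class is just an attribute of the module

original = this_module.Foo
this_module.Foo = lambda: "blah"  # "mock the constructor"
assert Foo() == "blah"            # the bare name now resolves to the stub

this_module.Foo = original        # restore the real class
assert Foo().x == 5
```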
Here is a test file which can be used to play with this idea. Note it
uses the fact that the name of the current module is bound to a
variable called __name__ and that sys.modules is a hash of all
loaded modules.
import sys
import mockito

class Foo(object):
    def __init__(self):
        self.x = 5

    def method(self):
        return 10

f = Foo()
print f.method()
mockito.when(f).method().thenReturn('blah')
print f.method()

# This will not work and throw an error.
# print Foo()
# mockito.when(Foo).__init__(mockito.any()).thenReturn('blah')
# print Foo()

print Foo()
mockito.when(sys.modules[__name__]).Foo().thenReturn('blah')
print Foo()