I’m beginning to learn a little Elixir & Phoenix and I ran into a case
where I wished I had a Mix task for something. Specifically, I wanted to
run npm scripts with mix so I’d only have one command to run, instead
of both mix and npm, for my toy Phoenix project.
Writing a Mix task is reasonably straightforward with only a few
steps:
1. If you want to create a mix task called “echo”, create a module
   called Mix.Tasks.Echo. The task name seen in mix help is based
   upon the name of the module minus Mix.Tasks. (The module needs to
   be called Mix.Tasks.Something, otherwise mix will not see it.)
2. Add use Mix.Task to this module.
3. Write a public function called run. It has the type signature
   run([binary]) :: any, meaning it will get a list of strings (the
   command line arguments) and can return anything.
4. Add a @shortdoc. This will be used as the text in mix help.
   Without this your task will not appear in mix help but will still
   be usable.
5. Optionally add a @moduledoc. This will be used if you run mix
   help YOURTASK.
You can put this module wherever you want in lib, but typically you
would put it into lib/mix/tasks.
That’s it.
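Putting those steps together, a minimal “echo” task might look like
this (a sketch, not taken from mix_npm):

    defmodule Mix.Tasks.Echo do
      use Mix.Task

      @shortdoc "Echoes its arguments back to the console"
      @moduledoc "Echoes the given command line arguments."

      def run(args) do
        Mix.shell().info(Enum.join(args, " "))
      end
    end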
An interesting thing I found was that step #2 is not strictly needed.
The Mix.Task behaviour is what defines @shortdoc, though, so without
use Mix.Task you cannot use @shortdoc to add the task to the mix help
output.
Since I was creating a mix task for use in a build, I needed to make
sure that if the task was not successful mix would return an error
code, so that the shell could see the error and fail the build. At
first I assumed that the return value of the task was how that would
be done; however, I didn’t find much documentation about this. I
experimented with some likely return values like :error or {:error,
"something"}, but that had no effect; it always returned a zero exit
status to the shell. Ultimately I chose to raise an error when the
task didn’t work, and that definitely caused a non-zero exit status.
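For example (a sketch, not the actual mix_npm implementation), the
run function of such a task could shell out to npm and raise on a
non-zero exit:

    def run(args) do
      {output, exit_status} = System.cmd("npm", ["run" | args])
      Mix.shell().info(output)

      # Returning :error here changes nothing; raising is what makes mix
      # itself exit with a non-zero status the shell can see.
      if exit_status != 0 do
        Mix.raise("npm exited with status #{exit_status}")
      end
    end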
If you want to see the end result of this experimentation you can
check out my first ever hex package:
mix-npm. The source can be found
in the GitHub repository:
verdammelt/mix_npm.
My company, Cyrus Innovation, has started an Apprenticeship program.
This program involves bringing on people who are brand new to
programming and pairing them with a senior developer to train them in
good practices. About a month ago I ‘acquired’ an Apprentice:
Paige.
I’ve never done this sort of thing before and I quickly came to
realize that the relationship wasn’t just Master & Apprentice but
really Apprentice-Master & Apprentice. I am not only learning as I go
what it means to have an Apprentice but also learning from my
Apprentice.
Even more than usual during pair-programming I feel I need to
explain myself. Not only explaining how to do something in the
programming language, but how to do it with our tools, and how we’ll
do something in the context of our team. Add to that explaining the
why of it all as well. This exercise helps me remember and
reconsider why I do things the way I do and why they are important.
This also brings to light good practices which I have chosen not to
follow in the current situation. Perhaps there are reasons; perhaps
those reasons are good; sometimes they are not. By having to explain
the trade-offs and reconsider the bad reasons I am learning why I
make these decisions.
Also, by embodying some good behaviors, she is teaching me some
tricks about how to learn. She is a careful, attentive listener,
always asking questions to get clarification. She also tests her
knowledge by sharing what she does know (even if she is not sure),
which either adds to the collective knowledge or gets corrected.
So I hope she is learning something by pair-programming with me; but I
know I’m learning something from her.
So you have an “Agile Board”(TM), congratulations! If you are like
any of the people I’ve worked with recently you’ve got lots of columns
to keep track of all the possible states a story card could be in.
Back away from the board…
You should start with very few columns. I think four should be plenty:
Backlog
Current
Doing
Done
Backlog is the place to keep all your new ideas. Make sure the next
priorities are at the top and ready to go; as you go down the list,
don’t worry about the rest too much, they can stay vague.
Current is for the things you are planning on doing this
iteration. Use the Goldilocks principle. Don’t agree to do too much
or too little. Just the Right(TM) amount.
Doing is for things that are being worked on RIGHT NOW. A good
number of items here is a function of the number of people on your
team and how they collaborate (pairing, mobbing, solo if you must). If
there are too few or too many items there is a problem - discuss
it. If something stays here for days that is the sign of a problem -
discuss it.
Done is for keeping track of the valuable things you are delivering
as a team. Rejoice. Throw them away after sufficient rejoicing
(sometime after the next iteration starts is a good time).
What about Ready for QA? or Ready for Deploy? you ask. I’d ask why
QA and/or Deploy isn’t part of your definition of done.
What about Blocked? OK, this one might be useful. But a red sticky
(or equivalent for non-physical boards) on the card is probably
enough. Moving it to another column makes it less visible, and a card
being blocked is a problem, and we want problems to be visible.
Of course it is your process not mine. Use the columns you
need. But know why you need them, and feel free to add and remove
them if after discussion you realize the board is not serving you
anymore.
When I think of how “Craftspeople” do their work I think of
tools. Good tools. They pick good tools because they know that it is
easier to get the job done when you have the right tool, and it is
well made. They don’t use a tool that is the wrong size for them, the
tool fits their hand. They may also modify their tools to better fit
their hand.
Sometimes a craftsperson will make entirely new tools for their
work. It might be a special jig for cutting the same type of cut in a
lot of lumber, or for drilling the same sort of hole. Sometimes these
special jigs even come into common use, such as a door lock
installation jig, or a mitre box.
Because a craftsperson uses their own tools, and special tools, they
might seem to have a handicap when they don’t have those tools. But
unless the job cannot be done except with a particular tool (it is
very difficult to cut a piece of wood with a hammer), they can still
get the job done (albeit perhaps a bit slower).
This is because they know how to do the job; the tools are just how
they go about it.
Some people think that the idiom “A poor craftsman blames his tools”
implies that tools are not important. It does not! It means that a
craftsperson knows that the failure to do a good job is not the fault
of the tool, but the fault of the craftsperson: their skill, or their
choice of tool.
Do you pick the right tool for the job? Do you change it to fit your
hand? Do you make special tools for the work?
Recently I learned some Android programming by writing a simple app
for a client. It was a great opportunity to learn the platform and how
“easy” it is to write an app. I ran into one ‘gotcha’ that I thought
might be valuable to others.
One feature that was needed was a swipeable carousel of YouTube
videos. Google provides some widgets for showing YouTube videos on an
Android device and
YouTubePlayerFragment
was an (almost) perfect fit for my needs1. Also
ViewPager
was just the thing for creating the swipeable list of items. It was
easy enough to create a subclass of
FragmentPagerAdapter
which knew the list of videos and created YouTubePlayerFragments as
needed (actually a subclass whose job was to handle the initialization
of the YouTubePlayerFragment).
While this was easy to code - it was not so easy to make it actually
work.
Trying to play videos resulted in a cryptic message about the player
not playing because it was not visible on the screen. The coordinates
in the error message made it seem like the object was way to the left
of the visible screen. That was the first clue. It was perplexing
though since the player was quite obviously right there on the
screen.
Some debugging gave me the second clue I needed. When I pressed play
on the player on the screen, multiple players were firing events
saying that they were playing. Multiple players?
Reading2 into the documentation of ViewPager some more told me that
it will request multiple views from its PagerAdapter, so that other
views are “ready to go”. But why did they all respond when I clicked
on one of them?
More debugging did not solve the mystery but solidified my hypothesis:
The YouTubePlayer and/or YouTubePlayerFragment has state shared
between all their instances. That is the only explanation that would
fit the observed behavior.
So I needed a way to ensure that only one YouTubePlayer was in play at
a time. The ViewPager documentation says you can change the number of
other pages that will be created. Changing that did not work for me -
at least one other view was always created. That left me with ensuring
that only one player was initialized.
I tried various event listeners but found that none of them fit the
need. Sometimes I would get an event firing both on the active and the
inactive viewer and it was not possible to tell the difference.
Finally I found one thing that did seem consistent and usable:
setUserVisibleHint. It was called on the fragment with a true
value when that fragment was the one shown to the user and was called
with false when it was not. So I made sure my fragment was not
initialized until it got told that it was visible; and then released
it when it was no longer visible.
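In rough code, the approach looks something like this (a sketch of
the idea, not the client’s actual code; the fragment subclass name and
the key handling are made up for illustration):

    import com.google.android.youtube.player.YouTubeInitializationResult;
    import com.google.android.youtube.player.YouTubePlayer;
    import com.google.android.youtube.player.YouTubePlayerFragment;

    // Only the page the user can actually see holds an initialized player.
    public class VideoPageFragment extends YouTubePlayerFragment
            implements YouTubePlayer.OnInitializedListener {

        private static final String DEVELOPER_KEY = "YOUR_API_KEY";
        private YouTubePlayer player;

        @Override
        public void setUserVisibleHint(boolean isVisibleToUser) {
            super.setUserVisibleHint(isVisibleToUser);
            if (isVisibleToUser) {
                // Initialize only once this page becomes the visible one.
                initialize(DEVELOPER_KEY, this);
            } else if (player != null) {
                // Release the player as soon as this page is swiped away.
                player.release();
                player = null;
            }
        }

        @Override
        public void onInitializationSuccess(YouTubePlayer.Provider provider,
                                            YouTubePlayer youTubePlayer,
                                            boolean wasRestored) {
            player = youTubePlayer;
        }

        @Override
        public void onInitializationFailure(YouTubePlayer.Provider provider,
                                            YouTubeInitializationResult error) {
            // Log or surface the error as appropriate.
        }
    }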
Except for the supremely annoying fact that the YouTube player
widgets DO NOT WORK on emulators. So I had to do all this work
with a physical device tethered to my machine. Like a savage. ↩
It looks likely that I’ll be doing some ReactNative work soon so I
took some time to start setting up my Emacs environment. All my
relevant setup can be found in the
init-react.el file in my GitHub dotfiles repository.
This is likely to change so the previous link may not match the code
below. The code below matches specifically the
initial version (59e7728).
Below I will add more annotation to what is already found in that
file, along with snippets of code.
The first thing that needed to be set up was a mode for React code.
React code files can mix JavaScript with HTML markup and it does not
appear that js-mode (the built-in JavaScript mode) handles that.
After a short bit of Googling it looks like
web-mode is a mode that can handle it. After
some brief testing it does appear to work reasonably well. It appears
that the React/ReactNative community has not settled on either *.js or
*.jsx as the extension for the code files, and since web-mode
appears to handle JavaScript just fine I chose to use it in all cases.
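Wiring that up is just a matter of pointing auto-mode-alist at
web-mode for both extensions (a sketch, assuming web-mode is already
installed and on the load path):

    ;; Use web-mode for both plain .js and .jsx React source files.
    (require 'web-mode)
    (add-to-list 'auto-mode-alist '("\\.jsx?\\'" . web-mode))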
The next thing I wanted to get set up was a linter. I thought this
especially important as React uses an ES6 dialect of JavaScript which I
am not entirely familiar with yet and a linter can help me “do the
right thing”. With a suggestion from a coworker (who has done some
React work) I chose ESLint with the
AirBnB
configuration settings. These defaults prompted me to standardize on
two spaces for indentation. (I still set js-indent-level too, as
js-mode remains in use for JSON files.)
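Something like this (a sketch; web-mode-code-indent-offset is the
web-mode setting that corresponds to js-indent-level):

    ;; Match the two-space AirBnB convention in both modes.
    (setq-default js-indent-level 2
                  web-mode-code-indent-offset 2)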
Setting up the linter to run via flycheck
took a small amount of work since I don’t like to install project
specific tools globally. (I know that this is contrary to current
mores, but I have been tripped up by global vs. local installations
before so I shy away from them when I can.)
First I needed to integrate NVM with Emacs so that Emacs could run
ESLint at all.
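That amounted to loading the nvm package and activating a version (a
sketch, assuming the nvm.el package and its nvm-use and
nvm--installed-versions functions):

    ;; Make the node/eslint binaries managed by NVM visible to Emacs.
    (require 'nvm)
    (nvm-use (caar (last (nvm--installed-versions))))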
(The choice to use the last version found is totally arbitrary. If and
when I get more versions of Node.js on my machine I’ll have to make a
more careful choice.)
Next I hooked into projectile to look
for a locally installed ESLint and use it if found. The
projectile-after-switch-project-hook functions are called after
Projectile has switched directories to the project so one can simply
check the project for the desired file.
    (add-hook 'projectile-after-switch-project-hook 'mjs/setup-local-eslint)

    (defun mjs/setup-local-eslint ()
      "If ESLint found in node_modules directory - use that for flycheck.
    Intended for use in PROJECTILE-AFTER-SWITCH-PROJECT-HOOK."
      (interactive)
      (let ((local-eslint (expand-file-name "./node_modules/.bin/eslint")))
        (setq flycheck-javascript-eslint-executable
              (and (file-exists-p local-eslint) local-eslint))))
(Note: the function is interactive because I found at least a few
times I was looking at a JavaScript file which I had come to not via
projectile. Making it interactive lets me use it manually in the
rare case I need to.)
Flycheck’s ESLint integration is limited to only certain modes, and
web-mode is not one of them, so I needed to add web-mode to the
white-list of modes.
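Flycheck exposes flycheck-add-mode for this; the call looks roughly
like so (a sketch):

    ;; Allow the javascript-eslint checker to run in web-mode buffers.
    (flycheck-add-mode 'javascript-eslint 'web-mode)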
With that I can easily write React/ReactNative code with all the bells
and whistles I like.
Later I will add support for building and testing I’m sure. But first
I need to determine what building & testing in a ReactNative
environment will even look like.
Software maintenance is an extremely important but highly neglected
activity.
…says Boehm1 in the midst of a long paper about the current and
possible future state of Software Engineering at the end of 1976. I
think this statement is just as true today as it was almost 40 years
ago when Boehm wrote it.
Boehm defines Software Maintenance as “the process of modifying
existing operational software while leaving its primary functions
intact.” (which sounds a lot like what we now call Refactoring). He
states that Maintenance has three main ‘functions’ which imply certain
needs:
Understanding the existing software: This implies the need
for…well-structured and well-formatted code
Modifying the existing software: This implies the need for software,
…, and data structures which are easy to expand and which minimize
side effects of changes…
Revalidating the modified software: This implies the need for
software structures which facilitate selective retest, and aids for
making retest more thorough and efficient.
We have the tools today to do these things. They are things like
modular design, the SOLID principles, and TDD. We have learned ways
to write well-structured code which minimize side effects of changes
and which facilitate retest. We’ve written languages and libraries
which help us do these things.
And yet maintenance is still a problem.
Perhaps we as an industry focus only on the “cool” stuff. Or
believe/hope that “someone else” will maintain it. Or focus on the
immediate customer need without considering long-term needs. The
customers themselves may not consider the long-term costs.
As professionals we need to consider the life-span of the code we are
writing, and write it accordingly. If a civil engineer is asked to build
a bridge that need only last for a single crossing, it will be quite
different than if the bridge needs to last for decades (or longer).
We need to think in the same way.
Currently we often write software as if it only needs to work for
today. No thought is given to tomorrow.
We have the tools to write software that can live as long as it needs
to. But we need to treat the life-span as a requirement.
Boehm, B. W. “Software Engineering” Classics in Software
Engineering. Ed. Edward Nash Yourdon. New York: YOURDON
Press, 1979. pp325-361. Print. ↩
Retrospective meetings are an important part of any sort of “Agile”
process. I have a rule about when a retrospective meeting is over:
Everyone who wants to talk has talked.
There is at least one action item.
There is at least one volunteer for each action item.
A good retrospective meeting can help a team achieve what they need
and want. Often, however, these meetings tend toward directionless
discussion and complaining. While sometimes a ‘complaining’
retrospective can be good and cathartic, these should not be the usual
meeting.
Recently a client’s retrospective meetings had fallen by the wayside.
Even when they had them they were mostly at the complaining end of the
spectrum. We had helped them get back into regular retrospectives,
weekly actually, but they were still not very good.
At the end of one retrospective the team member playing the
facilitator role tried to wrap the long ramble up by asking “Retro
Complete?”. I cut off the murmured agreements with another question:
“Do we have some action items?”.
A retrospective meeting should result in action items. These might be
a new story/ticket/card, a task, a new process, or an experiment to
try. Without these the meeting was just discussion.
Also important is that the action items have volunteers. If no one
wants to do or ‘champion’ an action item then find out why. Perhaps
the action item just isn’t that important to the team; then drop it.
Perhaps it is too big and amorphous; then break it down to a single
next action. Or perhaps it is scary; if this is the case then
this scariness is something for more discussion.
In the end the team worked out what actions to take on the topics they
had discussed and created a short list. They even had a volunteer for at
least one. Still not the most successful retrospective, but better
than the last. Hopefully they’ll keep this habit and their retros will
become even more effective.
In the References section of “Waltzing with Bears” (DeMarco
& Lister 2003), there is a note on “Planning Extreme Programming”
(Beck & Fowler 2001) which says “When viewed as a set of
[Risk Management] strategies, XP makes all kinds of sense.” This made
me review how XP (or Agile more generally) is a risk management
technique.
The incremental approach of XP reduces risk of late delivery or wrong
delivery. The demo, planning and retrospective meetings seem to be an
implicit risk analysis/mitigation exercise. It might be beneficial to
make this more explicit.
One place where XP doesn’t line up with DeMarco & Lister’s thoughts on
risk management is their advice that there should be sizable up-front
design and estimation. XP eschews this. XP argues that the cost of
up-front design & estimation is higher than the risk that it
mitigates. That seems a reasonable risk vs. cost choice. Adding some
explicit up-front brainstorming should be sufficient to cover the
problem of missing large-impact risks. Furthermore the iterative
nature of the methodology allows for a just-in-time approach to the
costs and risks.
The book also contains a quote from Tom Gilb who said (paraphrased)
‘Be ready to pack up whatever you’ve got any given morning and
deliver it by close of day’. This is very fitting with the idea in XP
that the result of every iteration should be deliverable and producing
value. While a one-day iteration, as implied in the quote, is perhaps
too extreme for many teams, the exercise of determining what it would
take to deliver value in shorter and shorter iterations is valuable.
What’s a person to do when one prefers developing in a TDD style and
decides to start a project where anything not directly in the language
must be written from scratch? Why, write a testing library of course!
But… without a testing library how can one write this testing library
in a TDD fashion? Herein is how I went about doing it.
(Full disclosure: this is not the first time I’ve tried this sort of
thing1. But I had forgotten I had done it before until I was
writing this article.)
Why?
“If you wish to make an apple pie from scratch, you must first
invent the universe.” – Carl Sagan
I have decided to have a crazy project wherein I’ll write an
application in Common Lisp and only Common Lisp. That is to say, no
other libraries. I have allowed myself to use extensions provided by
the implementation, SBCL2, so luckily I won’t need to write my
own socket and threading extensions.
This is a crazy and stupid idea. But the point of the project is to
give me a place to play around with things I don’t normally deal with
and in a language that I like to play with.
Test Driving a Testing Library
Where to start?
Common Lisp has no built-in testing library, but it does have
assert3. If you have assert you can write a testing library.
The problem is writing the first test. I puzzled over it a little and
decided I’d need a function which would take an expression to evaluate
(the test) and would return some data structure which would indicate
the results of the test.
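Given that, the first test looks roughly like this (a sketch; the
test:: package prefix is an assumption here, matching the later
snippets):

    ;; collect-test-results wrapped around a failing assert should yield an
    ;; alist containing a :failure entry.
    (let ((results (test::collect-test-results (assert nil))))
      (assert (assoc :failure results)))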
So this is asserting that the return value of collect-test-results
when applied to (assert nil) will be an alist4 which will
have a cons cell whose car is :failure.
That is to say: if the expression does not raise an exception there
will not be such an element in the returned alist.
After implementing a macro which lets that test pass I wrote some
more tests to round out what I thought would be a useful implementation
of this base function of the library. I now had a small set of tests
to bootstrap my library.
With this I felt that I had enough testing to make me feel confident
in my little implementation. It wasn’t perfect - but good enough for
me to now use this function to test other parts of my library.
How to write a test
With collect-test-results I had a way of writing and running
individual simple tests. But it wasn’t a very convenient thing to use.
What it did let me do, though, was write tests for deftest, which
would let me define tests. I started by writing the sort of test I
wanted to write:
    (deftest a-simple-failing-test
      "This is a very simple test which fails"
      (assert (= 5 (+ 2 2))))
Through a few tests, written using collect-test-results, I determined
that deftest would intern a symbol with the same name as the first
argument of the deftest and bind its function property to a function
which, when called, would evaluate the body of the deftest by calling
collect-test-results. These symbols were put into a list which
could be retrieved from the library. Furthermore, defining a test with
the same name as an existing test does not create a duplicate test.
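A sketch of what such a deftest might expand into (not the original
implementation; the *tests* variable and the shape of the expansion
are assumptions based on the behaviour described above):

    ;; Tests are tracked by symbol; pushnew keeps redefinitions from
    ;; creating duplicates.
    (defvar *tests* '())

    (defmacro deftest (name docstring &body body)
      `(progn
         (setf (symbol-function ',name)
               (lambda () (collect-test-results (progn ,@body))))
         (setf (documentation ',name 'function) ,docstring)
         (pushnew ',name *tests*)
         ',name))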
It might be clearest to just show you how these tests (and an
extracted helper function) ended up:
    (defmacro assert-no-failure (&body assertion)
      (let ((failure (gensym "fail")))
        `(let ((,failure (assoc :failure
                                (test::collect-test-results (assert ,@assertion)))))
           (assert (not ,failure) () (format nil "~A" ,failure)))))

    (deftest a-simple-failing-test
      "This is a very simple test which fails"
      (assert (= 5 (+ 2 2))))

    (let ((test-list (test::test-list)))
      (assert-no-failure (equal test-list '(a-simple-failing-test)))
      (assert-no-failure (fboundp (car test-list)))
      (assert-no-failure (assoc :failure (funcall (car test-list))))
      (assert-no-failure (string= "This is a very simple test which fails"
                                  (documentation (car test-list) 'function)))
      (assert-no-failure (equal (assoc :test-name (funcall (car test-list)))
                                '(:test-name . test-test::a-simple-failing-test))))

    (deftest a-simple-failing-test
      "This is a very simple test which fails"
      (assert (= 5 (+ 2 2))))

    (assert-no-failure (= (length (test::test-list)) 1))

    (deftest a-simple-passing-test
      "This is a very simple test which passes"
      (assert (= 4 (+ 2 2))))

    (assert-no-failure (= (length (test::test-list)) 2))

    (format t "~A...PASSED~&" 'test:deftest)
Running tests
Now that we can define tests and evaluate them, all that is left is
a convenient way to run the defined tests. Three quick tests on the
output of run-all-tests were proof enough for me (the tests that the
call to run-all-tests would run were the ones defined above, one
passing and one failing) that it would execute each test, report which
ones failed, and print a count of passes and failures to *standard-output*.
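Those checks amounted to capturing the report and asserting on its
contents, roughly like this (a sketch, not the original tests; the
test: package prefix is an assumption):

    ;; Capture what run-all-tests prints and make sure the failing test
    ;; shows up in the report.
    (let ((report (with-output-to-string (*standard-output*)
                    (test:run-all-tests))))
      (assert (search "A-SIMPLE-FAILING-TEST" (string-upcase report))))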
At this point my testing library has two main entry points: deftest
and run-all-tests. To create them, I first used assert to test
drive the creation of a lower-level function, collect-test-results,
which I then used to test drive deftest, which I then used to test
drive run-all-tests.
Next Steps and Thoughts
Now that I have this testing library I can use it to test drive the
rest of the application I will write. I’m sure along the way I’ll be
extending this library as I find new requirements for it. I’ll
probably also be writing some assertion library to make the tests more
expressive.
The resulting tests5 and code6 are in my GitHub repository for this
project: yakshave7.