Documentation Driven Development

I’m by no means the first to propose this approach: I first heard the phrase “Readme Driven Development” from Tom Preston-Werner in 2010, and in that post he referenced Documentation Driven Development:

It’s important to distinguish Readme Driven Development from Documentation Driven Development. RDD could be considered a subset or limited version of DDD.

A quick google(v.) gives me various results for “documentation driven development”:

…and that’s just from the first page of results!

So let’s be clear: I’m not claiming to have invented anything here; I’m just distilling the various sources into my own thoughts.

I recently got around to reading The Year Without Pants by Scott Berkun, which details his actually-more-than-a-year working for Automattic. When he was describing a general workflow for updating WordPress.com, this caught my attention:

Write a launch announcement and a support page. Most features are announced to the world after they go live on WordPress.com . But long before launch, a draft launch announcement is written. This sounds strange. How can you write an announcement for something that doesn’t exist? The point is that if you can’t imagine a compellingly simple explanation for customers, then you don’t really understand why the feature is worth building. Writing the announcement first is a forcing function. You’re forced to question if your idea is more exciting for you as the maker than it will be for your customer. If it is, rethink the idea or pick a different one.

This reminded me of Tom Preston-Werner’s approach, and set me thinking about the problem again.

A README file, launch announcement, or release note is user-facing, but users aren’t our only audience. If writing those things first helps us distil our thoughts about what we are going to deliver to our users, then doing the same for our commit messages – or, taking it to the extreme, our comments – will help us stay focused too. This can be particularly relevant when fixing bugs/issues/defects, as you will generally go into them with a clear idea of what you are going to do to address them.

Of course, just as TDD isn’t always practical – e.g., during exploratory spikes – DDD can’t always be used either. Nor should your documentation be set in stone: good documentation lives and breathes alongside the code.

Using Travis CI for testing Django projects

A couple of months1 ago, in my post about using tox with Django projects, I mentioned using Travis CI as well as tox for testing.

There’s plenty of documentation out there for Python modules and Django applications, but not so much guidance for testing a complete Django project. I can only speculate that this is because testing projects normally involves more moving parts (e.g., databases), whereas applications are supposed to be self-contained.

A good example here is the cookiecutter templates of Daniel Greenfeld – one of the authors of Two Scoops of Django. His djangopackage template contains a .travis.yml file, yet his Django project template doesn’t. Since many consider these templates best practice, and Two Scoops a Django “bible”, perhaps I’m wrong to want to use CI on a Django project?

Well (naturally) I don’t think I am, so here is how to do it.

Travis CI has a whole bunch of services you can use in your tests. The obvious ones are there, e.g., PostgreSQL, MySQL, and Memcached. For a more complete system, there’s also Redis, RabbitMQ, and ElasticSearch. Using these you can build a pretty complete set of integration tests using the same components you will use in a production environment.
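
For example, enabling several of these is just a matter of listing them. (The service names below are from memory of the Travis CI docs at the time – check the current docs before relying on them.)

services:
- postgresql
- redis-server
- rabbitmq
- elasticsearch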

For the purposes of testing capomastro2, we only need a PostgreSQL database3. The first step is to say we want PostgreSQL available during tests:

services:
- postgresql

Now we need to create the database, using the postgres user provided by Travis CI:

  psql -c 'create database capomastro;' -U postgres

The next part is to configure our Django project to use this database. Fortunately our project already provides a sample local_settings.py that is configured for connecting to PostgreSQL on localhost without a password, so all we need to do is modify this file to use the same postgres user:

  cp capomastro/local_settings.py.example capomastro/local_settings.py
  sed -i -e 's/getpass.getuser()/"postgres"/g' capomastro/local_settings.py
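
For context, the relevant part of local_settings.py.example looks something like the sketch below – this is illustrative, not the exact file from the capomastro repository:

  import getpass

  DATABASES = {
      'default': {
          # PostgreSQL via psycopg2, the usual backend for this era of Django
          'ENGINE': 'django.db.backends.postgresql_psycopg2',
          'NAME': 'capomastro',
          # Default to the current OS user for local development; the sed
          # command above rewrites this to "postgres" on Travis CI.
          'USER': getpass.getuser(),
          'HOST': 'localhost',
      }
  }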

Finally we can call the Django syncdb (with migrate, because like everyone else, we use South) to set up our database:

  python manage.py syncdb --migrate --noinput

All of this is done in the before_script hook in Travis CI:

before_script:
- cp capomastro/local_settings.py.example capomastro/local_settings.py
- psql -c 'create database capomastro;' -U postgres
- sed -i -e 's/getpass.getuser()/"postgres"/g' capomastro/local_settings.py
- python manage.py syncdb --migrate --noinput

We can now execute the full test suite for the project, using the same database we use in development and production!

For reference, here is the complete .travis.yml file:

language: python
services:
- postgresql
python:
- 2.7
install:
- pip install -r dev-requirements.txt
before_script:
- cp capomastro/local_settings.py.example capomastro/local_settings.py
- psql -c 'create database capomastro;' -U postgres
- sed -i -e 's/getpass.getuser()/"postgres"/g' capomastro/local_settings.py
- python manage.py syncdb --migrate --noinput
script:
- python manage.py test


  1. Wow, this post has been sitting in drafts for quite a while! 
  2. “Capomastro” is Italian for “master builder”. 
  3. A lot of people deploy against PostgreSQL, but develop and test against SQLite for speed and convenience. This will eventually bite them. 

Using tox with Django projects

Today I was adding tox and Travis CI support to a Django project, and I ran into a problem: our project doesn’t have a setup.py. Of course I could have added one, but since by convention we don’t package our Django projects (Django applications are a different story) – instead we use virtualenv and pip requirements files – I wanted to see if I could make tox work without changing our project.

Turns out it is quite easy: just add the following three directives to your tox.ini.

In your [tox] section tell tox not to run setup.py:

skipsdist = True

In your [testenv] section make tox install your requirements (see the tox documentation on deps for more details):

deps = -r{toxinidir}/dev-requirements.txt
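
For illustration, dev-requirements.txt for a project like this might contain something along these lines (hypothetical contents – yours will differ):

  Django>=1.6,<1.7
  South
  psycopg2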

Finally, also in your [testenv] section, tell tox how to run your tests:

commands = python manage.py test

Now you can run tox, and your tests should run!
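
To try it out (assuming tox isn’t already installed):

  pip install tox
  tox          # run every environment in envlist
  tox -e py27  # or just a single environment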

For reference, here is the complete (albeit minimal) tox.ini file I used:

[tox]
envlist = py27
skipsdist = True

[testenv]
deps = -r{toxinidir}/dev-requirements.txt
setenv =
    PYTHONPATH = {toxinidir}
commands = python manage.py test
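
As an aside, this combines nicely with Travis CI: rather than duplicating the test commands, you can have Travis delegate to tox. A minimal sketch (one possible approach, not necessarily what we committed):

language: python
python:
- 2.7
install:
- pip install tox
script:
- tox

You trade a slightly slower Travis build (tox builds its own virtualenvs) for identical environments locally and in CI.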

HP Chromebook 14 + DigitalOcean (and Ubuntu) = Productivity

Although I still use my desktop replacement (i.e., little-to-no battery life) for a good chunk of my work, recent additions to my setup have resulted in some improvements that I thought others might be interested in.

For Christmas just gone, my wonderful wife Suzanne – and my equally wonderful children, but let’s face it, it was her money not theirs! – bought me an HP Chromebook 14. When Chromebooks were first announced I was dismissive of them, thinking that at best they would be cheap laptops to install Ubuntu on. However, over the last year my attitude changed, and I came to realise that at least 70% of my time is spent in some browser or other, and of the other 30% most is spent in a terminal or Sublime Text. This realisation, combined with the improvements Intel’s Haswell chipset brought to battery life, made me reconsider my position and start seriously looking at a Chromebook as a second machine for the couch/coffee shop/travel.

I initially focussed on the HP Chromebook 11 and while the ARM architecture didn’t put me off, the 2GB RAM did. When I found the Chromebook 14 with a larger screen, 4GB RAM and Haswell chipset, I dropped enough subtle hints and Suzanne got the message. 🙂

So Christmas Day came and I finally got my hands on it! First impressions were very favourable: this neither looks nor feels like a £249 device. ChromeOS was exactly what I was expecting, and generally gets out of my way. The keyboard is superb, and I would compare it in quality to that of my late MacBook Pro. Battery life is equally superb, and I’m easily getting 8+ hours at a time.

Chrome – and ChromeOS – is not without limitations though, and although a new breed of in-browser environments such as Codebox, Koding, Nitrous.io, and Cloud9 are giving more options for developers, what I really want is a terminal. Enter Secure Shell from Google – SSH in your browser (with public key authentication). This lets me connect to any box of my choosing, and although I could have just connected back to my desk-bound laptop, I would still be limited to my barely-deserves-the-name-broadband ADSL connection.

So, with my Chromebook and SSH client in place, DigitalOcean was my next port of call, using their painless web interface to create an Ubuntu-based droplet. Command line interfaces are incredibly powerful, and despite claims to the contrary, most developers spend most of their time in them1. There are a plethora of tools to improve your productivity, and my three must-haves are:

With this droplet I can do pretty much anything I need that ChromeOS doesn’t provide, and connect through to the many other droplets, linodes, EC2 nodes, OpenStack nodes and other servers I use personally and professionally.

In some other posts I’ll expand on how I use (and – equally importantly – how I secure) my DigitalOcean droplets, and which “apps” I use with Chrome.


  1. The fact that I now spend most of my time in the browser and not on the command-line shows you that I’ve settled into my role as an engineering manager! 🙂