Makefiles and Docker for Local Development

I’ve spent a good part of my career automating project setup for local development and CI. Projects are like snowflakes: no two are alike. I prefer tools that let me adapt to each project’s requirements, and I’ve settled on a mix of Makefiles, Docker, and a sprinkle of Bash scripts. This post focuses on how I design Makefiles for projects. A Makefile ends up becoming an interface for interacting with a project, and who doesn’t like consistent interfaces?

1. The Interface

make build      # Build everything to run the project
make up         # Bring up the development environment (usually docker-compose up)
make enter      # Enter a shell development environment
make start      # Run the development server
make down       # Stop the external services
make clean      # Shut down and remove services
make test       # Run the test suite
make lint       # Run linting tests on the project
make release    # Generate release artifacts for the project
make deploy     # Deploy the project

These are the high-level commands you’ll want to cover. Most developer tooling exposes a similar set of commands for interacting with projects. The point is to capture the important bits of a project’s workflow. There are two big benefits here.

  1. Project onboarding is faster.

If you work on a bunch of different projects, this is a boon. If you only work on one project, it means you have fewer questions to answer. In short, it’s nice.

  2. Your CI can run the same commands as developers.

This comes pretty easily if you use Docker-in-Docker images in your CI pipeline. Think of it as integration testing for your local development setup. Keeping what developers run consistent with what the CI runs has helped keep my projects from breaking.
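As a rough sketch, most of the interface ends up being thin wrappers around docker compose. The service name app and the script path below are assumptions, not prescriptions (and recipe lines need real tabs):

```make
.PHONY: build up enter start down clean test lint

build:
	docker compose build

up: build
	docker compose up -d

enter: up
	docker compose exec app bash

start:                       # run inside the container, after `make enter`
	./scripts/start.sh

down:
	docker compose stop      # stop external services, keep them around

clean:
	docker compose down      # stop and remove everything

test:
	docker compose exec app pytest
```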

2. The Commands

2.1 make build

The goal is to grab everything you need for the project: fetch static files, create any required directories, build a Docker image. In the end you’ll have an immutable artifact. I say immutable artifact because while all my use cases involve containers, yours could be binaries.
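A sketch of what that might look like; the image name and registry are placeholders, and tagging by git hash is one way to make the artifact traceable:

```make
IMAGE   := registry.example.com/myproject   # placeholder image name
GIT_SHA := $(shell git rev-parse --short HEAD)

build:
	mkdir -p build/static                    # any required directories
	docker build -t $(IMAGE):$(GIT_SHA) .    # the immutable artifact
```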

2.2 make up

Analogous to docker compose up in most cases. This command usually depends on make build, since you need an image to bring up the environment. This will create the development environment as well as all the external services.

Sometimes you won’t want to bring up all the containers by default. Maybe some containers take a long time to build and aren’t part of the regular workflow. I like to optimize for the common case so that devs can get up and running faster. You can use Compose profiles to control which containers come up by default.
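A compose.yaml fragment showing the idea; the worker service and the slow profile name are made up for illustration:

```yaml
services:
  app:
    build: .
  postgres:
    image: postgres:16
  worker:                # slow to build, rarely needed
    build: .
    profiles: ["slow"]   # skipped by a plain `docker compose up`
```

Running docker compose --profile slow up pulls the worker in only when you actually need it.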

2.3 make enter

Maybe the most controversial command/decision in the interface. I don’t like the project automatically starting when I run make up. Running make enter jumps into the development container, similar to source ./venv/bin/activate in Python.

Yes, you can install remote debuggers. It doesn’t matter. Developers are used to having the process in the foreground for development. They’re used to running debuggers in the foreground, and they’ll want to launch REPLs. I choose to trade full automation for a familiar process and fewer interruptions from questions.

Over-automating this decision has led to frustrated developers every time I’ve seen it implemented. Give the developers what they want. They’ll be happier, you’ll get fewer questions, everyone wins.

2.4 make start

The command to run the service inside the container.

  1. make enter
  2. make start

This usually doesn’t call the command itself, but a shell script. The shell script can configure what it needs on a per-environment basis, which lets you override functionality with environment variables. Now you may ask: how do you run this in the CI?

Have you tried to write a Bash script that interactively jumps into Docker and runs a command? I did; I got most of the way there with expect, but it was jaaaaaanky. Instead I create a make start_ci target that just runs the start script from outside the container. If someone knows a better way of doing this, I’d love to hear it. Yes, it duplicates the command, but duplication isn’t always bad.
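A sketch of the pair, assuming the script lives at scripts/start.sh and the dev service is called app:

```make
start:              # run from inside the container, after `make enter`
	./scripts/start.sh

start_ci:           # same script, launched from the outside for CI
	docker compose exec -T app ./scripts/start.sh
```

The -T flag disables TTY allocation, which CI runners usually don’t have.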

2.5 make down

I don’t usually use this command much, but others have given feedback that they enjoy it. This just shuts down the external services (postgres, redis, w/e) without removing them. Helpful in situations where you have static ports colliding.

2.6 make clean

Used the way C projects use clean: shuts down the development container and extra services, then removes them. Iterating on projects becomes a loop of make clean && make up and then doing what I need to. You’ll want to make sure you’re using Docker’s image caching effectively to speed this up. You can also reduce the context sent to the Docker daemon with a .dockerignore file.
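A typical .dockerignore to keep the build context small; the entries are examples, trim them to your project:

```
.git
.venv
node_modules
**/__pycache__
build/
*.log
```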

2.7 make test

Run the test suite; for me it’s typically pytest. This command almost always has a make test_ci counterpart. The beauty of this is that you can use Docker-in-Docker images and do something like:

make build
make up_ci
sleep 5
make test_ci
make clean

In your CI pipeline to integration-test the services. This keeps your project in sync on many different levels. Developers can run tests locally while developing instead of having to wait for the CI, and their local workflows get checked on each CI run. Who doesn’t love consistency?
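One wrinkle: the sleep 5 above is a guess, and it will occasionally lose the race. A small polling helper is more reliable. This is a sketch; the pg_isready check in the comment assumes a postgres service, which may not match your stack:

```shell
#!/bin/sh
# wait_for: retry a command until it succeeds or we run out of attempts.
# Usage: wait_for <attempts> <delay_seconds> <command...>
wait_for() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "gave up waiting for: $*" >&2
  return 1
}

# In CI, instead of `sleep 5` (service and readiness command are assumptions):
# wait_for 30 1 docker compose exec -T postgres pg_isready
wait_for 5 0 true
```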

2.8 make lint

Run the linting, nothing special. This helps when jumping between languages, if you aren’t familiar with each one’s linting process. I prefer having this command in the Makefile so the CI makes sure it works.

2.9 make release

This will generate a release for your project. With Docker, this means pushing your image to the container registry of your choice. Maybe you’re using Artifactory, or pushing to a private devpi server. Heck, you could even zip the code up and store it in an S3 bucket keyed by git hash. The point is that you should aim to decouple your release process from your deploy process.
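With containers, a release target might look like this; the registry and image name are placeholders:

```make
IMAGE   := registry.example.com/myproject   # placeholder registry/name
GIT_SHA := $(shell git rev-parse --short HEAD)

release:
	docker build -t $(IMAGE):$(GIT_SHA) .   # artifact keyed by git hash
	docker push $(IMAGE):$(GIT_SHA)
```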

2.10 make deploy

Select one of your releases and put it into production. The reason that you separate release and deploy is so you don’t have to juggle git checkouts. It also guarantees that the correct source is being used from environment to environment. You want to avoid developers deploying from their machines. It’s harder to troubleshoot and much harder to automate in the long run.
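One way to sketch a deploy that picks an existing release by tag; the host, image name, and remote command are all made up for illustration:

```make
IMAGE := registry.example.com/myproject     # placeholder registry/name

# Usage: make deploy TAG=<git-sha>
deploy:
	@test -n "$(TAG)" || { echo "usage: make deploy TAG=<git-sha>"; exit 1; }
	ssh deploy@prod.example.com "docker pull $(IMAGE):$(TAG) && docker compose up -d"
```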

3. Conclusion

Scripts are documentation that you can execute. I’m pretty sure I read that somewhere in Continuous Delivery, and it’s stuck with me ever since. Instead of adding steps to the docs, try some automation out. You may be surprised how far it will take you.