07 Mar 2017, 18:00

Crafting perfect Java Docker build flow


What is the bare minimum you need to build, test, and run your Java application in a Docker container?

The recipe: create a separate Docker image for each step and optimize the way you run them.

Duke and Container


I started working with Java in 1998, and for a long time, it was my main programming language. It was a long love–hate relationship.

During my career, I wrote a lot of code in Java. Despite that, I don’t think Java is usually the right choice for writing microservices that run in Docker containers.

But sometimes you have to work with Java. Maybe Java is your favorite language and you do not want to learn a new one, or you have legacy code that you need to maintain, or your company decided on Java and you have no other option.

Whatever reason you have to marry Java with Docker, you better do it properly.

In this post, I will show you how to create an effective Java-Docker build pipeline to consistently produce small, efficient, and secure Docker images.

Be careful

There are plenty of “Docker for Java developers” tutorials out there that unintentionally encourage some Docker bad practices.

For example:

For the current demo project, the first two tutorials took around 15 minutes (first build) and produced images of 1.3GB each.

Do yourself a favor and do not follow these tutorials!

What should you know about Docker?

Developers new to Docker are often tempted to think of it as just another VM. Instead, think of Docker as a “child process”. The files and packages needed for an entire VM are different from those needed by just another process running on a dev machine. Docker is even better than a child process because it allows better isolation and environmental control.

If you’re new to Docker, I suggest reading the Understanding Docker article. Docker isn’t so complex that a developer can’t understand how it works.

Dockerizing Java application

What files need to be included in a Java Application’s Docker image?

Since Docker containers are just isolated processes, your Java Docker image should only contain the files required to run your application.

What are these files?

It starts with a Java Runtime Environment (JRE). The JRE is a software package that has everything required to run a Java program. It includes an implementation of the Java Virtual Machine (JVM) and an implementation of the Java Class Library.

I recommend using OpenJDK JRE. OpenJDK is licensed under GPL with Classpath Exception. The Classpath Exception part is important. This license allows using OpenJDK with any software of any license, not just the GPL. In particular, you can use OpenJDK in proprietary software without disclosing your code.

Before using Oracle’s JDK/JRE, please read the following post: “Running Java on Docker? You’re Breaking the Law.”

Since it’s rare for Java applications to be developed using only the standard library, you most likely need to also add 3rd party Java libraries. Then add the application compiled bytecode as plain Java Class files or packaged into JAR archives. And, if you are using native code, you will need to add corresponding native libraries/packages too.

Choosing a base Docker image for Java Application

In order to choose the base Docker image, you need to answer the following questions:

  • What native packages do you need for your Java application?
  • Should you choose Ubuntu or Debian as your base image?
  • What is your strategy for patching security holes, including packages you are not using at all?
  • Do you mind paying extra (money and time) for network traffic and storage of unused files?

Some might say: “but, if all your images share the same Docker layers, you only download them just once, right?”

That’s true in theory, but reality is often very different.

Usually, you have lots of different images: some built recently, others built a long time ago, and others pulled from DockerHub. All these images do not share the same base image or version. You need to invest a lot of time to align them to share the same base image, and then keep them up-to-date.

Some might say: “but, who cares about image size? we download them just once and run forever”.

Docker image size is actually very important.

The size has an impact on …

  • network latency - need to transfer Docker image over the web
  • storage - need to store all these bits somewhere
  • service availability and elasticity - when using a Docker scheduler, like Kubernetes, Swarm, DC/OS or other (scheduler can move containers between hosts)
  • security - do you really, I mean really need the libpng package with all its CVE vulnerabilities for your Java application?
  • development agility - small Docker images == faster build time and faster deployment

Without being careful, Java Docker images tend to grow to enormous sizes. I’ve seen 3GB Java images, where the real code and required JAR libraries only take around 150MB.

Consider using the Alpine Linux image, which is only 5MB, as a base Docker image. Lots of “Official Docker images” have an Alpine-based flavor.

Note: many, but not all, Linux packages have versions compiled against the musl libc C runtime library. Sometimes you need a package compiled against glibc (the GNU C runtime library). The frolvlad/alpine-glibc image is based on the Alpine Linux image and contains glibc, which enables proprietary projects compiled against glibc (e.g. OracleJDK, Anaconda) to work on Alpine.

Choosing the right Java Application server

Frequently, you also need to expose some kind of interface to reach your Java application running in a Docker container.

When you deploy Java applications with Docker containers, the default Java deployment model changes.

Originally, Java server-side deployment assumed that you had a pre-configured Java Web Server (Tomcat, WebLogic, JBoss, or other), and that you deployed a WAR (Web Archive) packaged Java application to this server, running it alongside other applications deployed on the same server.

Lots of tools are developed around this concept, allowing you to update running applications without stopping the Java Application server, route traffic to the new application, resolve possible class loading conflicts and more.

With Docker-based deployments, you do not need these tools anymore; you don’t even need the fat “enterprise-ready” Java Application servers. The only thing you need is a stable and scalable network server that can serve your API over HTTP/TCP or another protocol of your choice. Search Google for “embedded Java server” and take the one you like most.

For this demo, I forked Spring Boot’s REST example and modified it a bit. The demo uses Spring Boot with an embedded Tomcat server. Here is my fork on GitHub repository (blog branch).

Building a Java Application Docker image

In order to run this demo, I need to create a Docker image with JRE, the compiled and packaged Java application, and all 3rd party libraries.

Here is the Dockerfile I used to build my Docker image. This demo Docker image is based on slim Alpine Linux with OpenJDK JRE and contains the application WAR file with all dependencies embedded into it. It’s just the bare minimum required to run the demo application.

# Base Alpine Linux based image with OpenJDK JRE only
FROM openjdk:8-jre-alpine

# copy application WAR (with libraries inside)
COPY target/spring-boot-*.war /app.war

# specify default command
CMD ["/usr/bin/java", "-jar", "-Dspring.profiles.active=test", "/app.war"]

To build the Docker image, run the following command:

$ docker build -t blog/sbdemo:latest .

Running the docker history command on the created Docker image lets you see all the layers that make up this image:

$ docker history blog/sbdemo:latest

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
16d5236aa7c8        About an hour ago   /bin/sh -c #(nop)  CMD ["/usr/bin/java" "-...   0 B
e1bbd125efc4        About an hour ago   /bin/sh -c #(nop) COPY file:1af38329f6f390...   61.8 MB
d85b17c6762e        2 months ago        /bin/sh -c set -x  && apk add --no-cache  ...   103 MB
<missing>           2 months ago        /bin/sh -c #(nop)  ENV JAVA_ALPINE_VERSION...   0 B
<missing>           2 months ago        /bin/sh -c #(nop)  ENV JAVA_VERSION=8u111       0 B
<missing>           2 months ago        /bin/sh -c #(nop)  ENV PATH=/usr/local/sbi...   0 B
<missing>           2 months ago        /bin/sh -c #(nop)  ENV JAVA_HOME=/usr/lib/...   0 B
<missing>           2 months ago        /bin/sh -c {   echo '#!/bin/sh';   echo 's...   87 B
<missing>           2 months ago        /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0 B
<missing>           2 months ago        /bin/sh -c #(nop) ADD file:eeed5f514a35d18...   4.8 MB

The three main layers are:

  • 4.8MB Alpine Linux layer
  • 103MB OpenJDK JRE layer
  • 61.8MB application WAR file
Running the Java Application Docker container

In order to run the demo application, run the following command:

$ docker run -d --name demo-default -p 8090:8090 -p 8091:8091 blog/sbdemo:latest

Let’s check that the application is up and running (I’m using the httpie tool here):

$ http http://localhost:8091/info

HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 09 Mar 2017 14:43:28 GMT
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked

{
    "build": {
        "artifact": "${project.artifactId}",
        "description": "boot-example default description",
        "name": "spring-boot-rest-example",
        "version": "0.1"
    }
}
Setting Docker container memory constraints

One thing you need to know about Java process memory allocation is that in reality it consumes more physical memory than specified with the -Xmx JVM option. The -Xmx option specifies only the maximum Java heap size. But the Java process is a regular Linux process, and what is interesting is how much actual physical memory this process consumes.

Or in other words - what is the Resident Set Size (RSS) value for running a Java process?

Theoretically, in the case of a Java application, a required RSS size can be calculated by:

RSS = Heap size + MetaSpace + OffHeap size

where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.

There is a very good post on this topic: Analyzing java memory usage in a Docker container by Mikhail Krestjaninoff.

When using the --memory option in docker run make sure the limit is larger (at least twice) than what you specify for -Xmx.
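As a quick sanity check, that rule of thumb can be turned into a couple of lines of shell (a sketch; the 256MB heap and the blog/sbdemo image are assumptions taken from this demo):

```shell
# sketch: derive a container memory limit from the planned -Xmx value,
# reserving at least 2x heap for Metaspace, thread stacks and other off-heap usage
xmx_mb=256                      # assumed heap size for the demo app
limit_mb=$((xmx_mb * 2))        # rule of thumb: limit >= 2x heap
echo "--memory=${limit_mb}m"    # prints: --memory=512m
# then run the container with both values, e.g.:
# docker run -d --memory=${limit_mb}m blog/sbdemo:latest \
#   /usr/bin/java -Xmx${xmx_mb}m -jar /app.war
```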

Offtopic: Using OOM Killer instead of GC

There is an interesting JDK Enhancement Proposal (JEP) by Aleksey Shipilev: Epsilon GC (http://openjdk.java.net/jeps/8174901). This JEP proposes to develop a GC that only handles memory allocation, but does not implement any actual memory reclamation mechanism.

This GC, combined with --restart (Docker restart policy) should theoretically allow supporting “Extremely short lived jobs” implemented in Java.

For ultra-performance-sensitive applications, where developers are conscious about memory allocations or want to create completely garbage-free applications, a GC cycle may be considered an implementation bug that wastes cycles for no good reason. In such a use case, it could be better to let the OOM Killer (Out of Memory killer) kill the process and use a Docker restart policy to restart it.

Anyway, Epsilon GC is not available yet, so for the moment it’s just an interesting theoretical use case.

Building Java applications with Builder container

As you probably noticed, in the previous step I did not explain how I created the application WAR file.

Of course, there is a Maven project file, pom.xml, which most Java developers should be familiar with. But in order to actually build, you need to install the same Java build tools (JDK and Maven) on every machine where you build the application. You need the same versions, the same repositories, and the same configurations. While it’s possible, managing different projects that rely on different tools, versions, configurations, and development environments can quickly become a nightmare.

What if you also want to run a build on a clean machine that does not have Java or Maven installed? What should you do?

Java Builder Container

Docker can help here too. With Docker, you can create and share portable development and build environments. The idea is to create a special Builder Docker image, that contains all tools you need to properly build your Java application, e.g.: JDK, Ant, Maven, Gradle, SBT or others.

To create a really useful Builder Docker image, you need to know well how your Java build tools work and how docker build invalidates its build cache. Without proper design, you will end up with ineffective and slow builds.

Running Maven in Docker

Java development is hard to imagine without extra build tools. There are multiple Java build tools out there, but most of them share similar concepts and serve the same targets: resolving cumbersome package dependencies and running different build tasks, such as compile, lint, test, package, and deploy.

While most of these tools were created nearly a generation ago, they are still very popular and widely used by Java developers.

In this post, I will use Maven, but the same approach can be applied to Gradle, SBT, and other less popular Java Build tools.

It’s important to learn how your Java build tool works and how it can be tuned. Apply this knowledge when creating a Builder Docker image and when deciding how to run the Builder Docker container.

Maven uses the project-level pom.xml file to resolve project dependencies. It downloads missing JAR files from private and public Maven repositories, and caches these files for future builds. Thus, the next time you run your build, it won’t download anything if your dependencies have not changed.

Official Maven Docker image: should you use it?

The Maven team provides official Docker images. There are multiple images (under different tags) that allow you to select one that fits your needs. Take a deeper look at the Dockerfile files and the mvn-entrypoint.sh shell scripts when selecting a Maven image to use.

There are two flavors of official Maven Docker images: regular images (JDK version, Maven version, and Linux distro) and onbuild images.

What is the official Maven image good for?

The official Maven image does a good job containerizing the Maven tool itself. The image contains a specific JDK and Maven version. Using such an image, you can run a Maven build on any machine without installing a JDK and Maven.

Example: running mvn clean install on local folder

$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/app -w /usr/src/app maven:3.2-jdk-7 mvn clean install

The Maven local repository, for official Maven images, is placed inside a Docker data volume. That means all downloaded dependencies are not part of the image and will disappear once the Maven container is destroyed. If you do not want to download all dependencies on every build, mount the Maven repository Docker volume to some persistent storage (at least a local folder on the Docker host).

Example: running mvn clean install on local folder with properly mounted Maven local repository

$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/app -v "$HOME"/.m2:/root/.m2 -w /usr/src/app maven:3.2-jdk-7 mvn clean install

Now, let’s take a look at onbuild Maven Docker images.

What is Maven onbuild image?

The Maven onbuild Docker image exists to “simplify” a developer’s life, allowing them to skip writing a Dockerfile. Actually, the developer still writes a Dockerfile, but it’s usually enough to have a single line in it:

FROM maven:<versions>-onbuild

Looking into onbuild Dockerfile on the GitHub repository …

FROM maven:<version>

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ONBUILD ADD . /usr/src/app
ONBUILD RUN mvn install

… you can see several Dockerfile commands with the ONBUILD prefix. ONBUILD tells Docker to postpone the execution of these build commands until you build a new image that inherits from the current image.

In our example, two build commands will be executed when you build an application Dockerfile created FROM maven:<version>-onbuild:

  • Add current folder (all files, if you are not using .dockerignore) to the new Docker image
  • Run mvn install Maven target

The onbuild Maven Docker image is not as useful as the previous image.

First of all, it copies everything from the current directory, so do not use it without a properly configured .dockerignore file.

Then, think: what kind of image you are trying to build?

The new image, created from onbuild Maven Docker image, includes JDK, Maven, application code (and potentially all files from current directory), and all files produced by Maven install phase (compiled, tested and packaged app; plus lots of build junk files you do not really need).

So, this Docker image contains everything, but, for some strange reason, does not contain a local Maven repository. I have no idea why the Maven team created this image.

Recommendation: Do not use Maven onbuild images!

If you just want to use Maven tool, use non-onbuild image.

If you want to create proper Builder image, I will show you how to do this later in this post.

Where to keep Maven cache?

The official Maven Docker image chooses to keep the Maven cache folder outside of the container, exposing it as a Docker data volume using the VOLUME /root/.m2 instruction in the Dockerfile. A Docker data volume is a directory within one or more containers that bypasses the Docker Union File System; in simple words, it’s not part of the Docker image.

What you should know about Docker data volumes:

  • Volumes are initialized when a container is created.
  • Data volumes can be shared and reused among containers.
  • Changes to a data volume are made directly to the mounted endpoint (usually some directory on host, but can be some storage device too)
  • Changes to a data volume will not be included when you update an image or persist Docker container.
  • Data volumes persist even if the container itself is deleted.

So, in order to reuse Maven cache between different builds, mount a Maven cache data volume to some persistent storage (for example, a local directory on the Docker host).

$ docker run -it --rm --volume "$PWD"/pom.xml:/usr/src/app/pom.xml --volume "$HOME"/.m2:/root/.m2 maven:3-jdk-8-alpine mvn install

The command above runs the official Maven Docker image (Maven 3 and OpenJDK 8), mounts the project pom.xml file into the working directory, and mounts the "$HOME"/.m2 folder as the Maven cache data volume. Maven running inside this Docker container will download all required JAR files into the host’s local folder $HOME/.m2. The next time you create a new Maven Docker container for the same pom.xml file and the same cache mount, Maven will reuse the cache and download only missing or updated JAR files.
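To avoid retyping that long docker run command, it can be wrapped in a tiny shell function (a sketch; the mvnd name and the default image tag are my own choices, not an official tool):

```shell
# sketch: run any Maven goal inside a container, reusing the host's
# ~/.m2 folder as the Maven cache (mvnd is a hypothetical helper name)
mvnd() {
  docker run -it --rm \
    -v "$PWD":/usr/src/app \
    -v "$HOME"/.m2:/root/.m2 \
    -w /usr/src/app \
    "${MAVEN_IMAGE:-maven:3-jdk-8-alpine}" mvn "$@"
}
# usage: mvnd clean install
```

Setting the MAVEN_IMAGE variable lets you swap in a different JDK/Maven combination without touching the function.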

Maven Builder Docker image

First, let’s try to formulate what a Builder Docker image is and what it should contain.

A Builder is a Docker image that contains everything needed to create a reproducible build on any machine and at any point in time.

So, what should it contain?

  • Linux shell and some tools - I prefer Alpine Linux
  • JDK (version) - for the javac compiler
  • Maven (version) - Java build tool
  • Application source code and pom.xml file/s - it’s the application code SNAPSHOT at specific point of time; just code, no need to include a .git repository or other files
  • Project dependencies (Maven local repository) - all POM and JAR files you need to build and test the Java application, at any time, even offline, even if a library disappears from the web

The Builder image captures code, dependencies, and tools at a specific point of time and stores them inside a Docker image. The Builder container can be used to create the application “binaries” on any machine, at any time and even without internet connection (or with poor connection).

Here is the sample Dockerfile for my demo Builder:

FROM openjdk:8-jdk-alpine

# ----
# Install Maven
RUN apk add --no-cache curl tar bash

ARG MAVEN_VERSION=3.3.9
ARG USER_HOME_DIR="/root"

RUN mkdir -p /usr/share/maven && \
  curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && \
  ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
# speed up Maven JVM a bit
ENV MAVEN_OPTS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"

ENTRYPOINT ["/usr/bin/mvn"]

# ----
# Install project dependencies and keep sources 

# make source folder
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# install maven dependency packages (keep in image)
COPY pom.xml /usr/src/app
RUN mvn -T 1C install && rm -rf target

# copy other source files (keep in image)
COPY src /usr/src/app/src

Let’s go over this Dockerfile and I will try to explain the reasoning behind each command.

  • FROM: openjdk:8-jdk-alpine - select and freeze JDK version: OpenJDK 8 and Linux Alpine
  • Install Maven
    • ARG ... - Use build arguments to allow overriding Maven version and local repository location (MAVEN_VERSION and USER_HOME_DIR) with docker build --build-arg ...
    • RUN mkdir -p ... curl ... tar ... - Download and install (untar and ln -s) Apache Maven
    • Speed up Maven JVM a bit: MAVEN_OPTS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1", read the following post
  • RUN mvn -T 1C install && rm -rf target - Download project dependencies:
    • Copy the project pom.xml file, run mvn install, and remove the build artifacts (as far as I know, there is no Maven command that downloads dependencies without installing)
    • This Docker image layer will be rebuilt only when project’s pom.xml file changes
  • COPY src /usr/src/app/src - copy project source files (source, tests, and resources)

Note: if you are using Maven Surefire plugin and want to have all dependencies for the offline build, make sure to lock down Surefire test provider.
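A pom.xml fragment that pins the Surefire test provider might look like this (a sketch; the plugin and provider versions here are assumptions, pick the ones matching your project):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version> <!-- assumed version -->
  <dependencies>
    <!-- lock down the test provider, so an offline 'mvn -o test'
         does not try to resolve it at test time -->
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit4</artifactId>
      <version>2.19.1</version> <!-- assumed; keep equal to the plugin version -->
    </dependency>
  </dependencies>
</plugin>
```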

When you build a new Builder version, I suggest you use the --cache-from option, passing the previous Builder image to it. This will allow you to reuse any unmodified Docker layers and avoid obsolete downloads most of the time (if pom.xml did not change and you did not decide to upgrade Maven or JDK).

$ # pull latest (or specific version) builder image
$ docker pull myrep/mvn-builder:latest
$ # build new builder
$ docker build -t myrep/mvn-builder:latest --cache-from myrep/mvn-builder:latest .

Use the Builder container to run tests:

$ # run tests - test results are saved into $PWD/target/surefire-reports
$ docker run -it --rm -v "$PWD"/target:/usr/src/app/target myrep/mvn-builder -T 1C -o test

Use the Builder container to create the application WAR:

$ # create application WAR file (skip tests) - $PWD/target/spring-boot-rest-example-0.3.0.war
$ docker run -it --rm -v "$PWD"/target:/usr/src/app/target myrep/mvn-builder package -T 1C -o -Dmaven.test.skip=true


Take a look at the images below:

REPOSITORY      TAG     IMAGE ID     CREATED        SIZE
sbdemo/run      latest  6f432638aa60 7 minutes ago  143 MB
sbdemo/tutorial 1       669333d13d71 12 minutes ago 1.28 GB
sbdemo/tutorial 2       38634e4d9d5e 3 hours ago    1.26 GB
sbdemo/builder  mvn     2d325a403c5f 5 days ago     263 MB

  • sbdemo/run:latest - Docker image for demo runtime: Alpine, OpenJDK JRE only, demo WAR
  • sbdemo/builder:mvn - Builder Docker image: Alpine, OpenJDK 8, Maven 3, code, dependency
  • sbdemo/tutorial:1 - Docker image created following first tutorial (just for reference)
  • sbdemo/tutorial:2 - Docker image created following second tutorial (just for reference)

Bonus: Build flow automation

In this section, I will show how to use Docker build flow automation service to automate and orchestrate all steps from this post.

Build Pipeline Steps

I’m going to use the Codefresh.io Docker CI/CD service (the company I work for) to create a Builder Docker image for Maven, run tests, create the application WAR, build the application Docker image, and push it to DockerHub.

The Codefresh automation flow YAML (also called a pipeline) is pretty straightforward:

  • it contains an ordered list of steps
  • each step can be of type:
    • build - for the docker build command
    • push - for docker push
    • composition - for creating an environment, specified with docker-compose
    • freestyle (default if not specified) - for the docker run command
  • the /codefresh/volume/ data volume (git clone and files generated by steps) is mounted into each step
  • the current working directory for each step is set to /codefresh/volume/ by default (can be changed)

For detailed description and other examples, take a look at the documentation.

For my demo flow, I’ve created the following automation steps:

  1. mvn_builder - create Maven Builder Docker image
  2. mv_test - execute tests in Builder container, place test results into /codefresh/volume/target/surefire-reports/ data volume folder
  3. mv_package - create application WAR file, place created file into /codefresh/volume/target/ data volume folder
  4. build_image - build application Docker image with JRE and application WAR file
  5. push_image - tag and push the application Docker image to DockerHub

Here is the full Codefresh YAML:

version: '1.0'

steps:

  mvn_builder:
    type: build
    description: create Maven builder image
    dockerfile: Dockerfile.build
    image_name: <put_your_repo_here>/mvn-builder

  mv_test:
    description: run unit tests
    image: ${{mvn_builder}}
    commands:
      - mvn -T 1C -o test

  mv_package:
    description: package application and dependencies into WAR
    image: ${{mvn_builder}}
    commands:
      - mvn package -T 1C -o -Dmaven.test.skip=true

  build_image:
    type: build
    description: create Docker image with application WAR
    dockerfile: Dockerfile
    working_directory: ${{main_clone}}/target
    image_name: <put_your_repo_here>/sbdemo

  push_image:
    type: push
    description: push application image to DockerHub
    candidate: '${{build_image}}'
    tag: '${{CF_BRANCH}}'
    credentials:
      # set docker registry credentials in project configuration
      username: '${{DOCKER_USER}}'
      password: '${{DOCKER_PASS}}'

I hope you find this post useful. I look forward to your comments and any questions you have.

_This is a working draft version. The final post version is published at Codefresh Blog on March 22, 2017._

02 Jan 2017, 18:00

Everyday hacks for Docker

In this post, I’ve decided to share some useful commands and tools I frequently use when working with the amazing Docker technology. There is no particular order or “coolness level” to these “hacks”. I will try to present the use case and how a specific command or tool helps me in my work.

Docker Hacks

Cleaning up

Working with Docker for some time, you start to accumulate development junk: unused volumes, networks, exited containers and unused images.

One command to “rule them all”

$ docker system prune

prune is a very useful command (it also works for the volume and network sub-commands), but it’s only available since Docker 1.13. If you are using an older Docker version, the following commands can help you replace the prune command.

Remove dangling volumes

Dangling volumes are volumes not in use by any container. To remove them, combine two commands: first, list the volume IDs of dangling volumes, and then remove them.

$ docker volume rm $(docker volume ls -q -f "dangling=true")

Remove exited containers

The same principle works here too: first, list the containers (IDs only) you want to remove (with a filter) and then remove them (consider rm -f to force removal).

$ docker rm $(docker ps -q -f "status=exited")

Remove dangling images

Dangling images are untagged Docker images that are leaves of the image tree (not intermediate layers).

$ docker rmi $(docker images -q -f "dangling=true")
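The three cleanup commands above can be combined into a small pre-1.13 cleanup script (a sketch; the emptiness guards are my addition, since docker rm and friends fail when called with no arguments):

```shell
#!/bin/sh
# sketch: a pre-1.13 stand-in for 'docker system prune';
# each list is checked first, because 'docker rm' and friends
# fail when called with an empty argument list

exited=$(docker ps -q -f "status=exited" || true)
if [ -n "$exited" ]; then docker rm $exited; fi

dangling_volumes=$(docker volume ls -q -f "dangling=true" || true)
if [ -n "$dangling_volumes" ]; then docker volume rm $dangling_volumes; fi

dangling_images=$(docker images -q -f "dangling=true" || true)
if [ -n "$dangling_images" ]; then docker rmi $dangling_images; fi
```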

Autoremove interactive containers

When you run a new interactive container and want to avoid typing an rm command after it exits, use the --rm option. The container will then be automatically destroyed when you exit it.

$ docker run -it --rm alpine sh

Inspect Docker resources

jq - jq is a lightweight and flexible command-line JSON processor. It is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.

The docker info and docker inspect commands can produce output in JSON format. Combine these commands with the jq processor.

Pretty JSON and jq processing

# show whole Docker info
$ docker info --format "{{json .}}" | jq .

# show Plugins only
$ docker info --format "{{json .Plugins}}" | jq .

# list IP addresses for all containers connected to 'bridge' network
$ docker network inspect bridge -f '{{json .Containers}}' | jq '.[] | {cont: .Name, ip: .IPv4Address}'

Watching containers lifecycle

Sometimes you want to see containers being started and exited when you run some Docker commands or try different restart policies. The watch command combined with docker ps can be pretty useful here. The docker stats command, even with the --format option, is not useful for this use case, since it does not show the same info as the docker ps command.

Display a table with ‘ID Image Status’ for active containers and refresh it every 2 seconds

$ watch -n 2 'docker ps --format "table {{.ID}}\t {{.Image}}\t {{.Status}}"'

Enter into host/container Namespace

Sometimes you want to connect to the Docker host. The ssh command is the default option, but it may be unavailable, due to security settings or firewall rules, or undocumented (try to find out how to ssh into the Docker for Mac VM).

nsenter, by Jérôme Petazzoni, is a small and very useful tool for the above cases. nsenter allows you to enter namespaces. I like to use the minimalistic (580 kB) walkerlee/nsenter Docker image.

Enter into Docker host

Use --pid=host to enter the Docker host’s namespaces:

# get a shell into Docker host
$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh

Enter into ANY container

It’s also possible to enter into any container with nsenter and --pid=container:[id OR name]. But in most cases, it’s better to use standard docker exec command. The main difference is that nsenter doesn’t enter the cgroups, and therefore evades resource limitations (can be useful for debugging).

# get a shell into 'redis' container namespace
$ docker run --rm -it --privileged --pid=container:redis walkerlee/nsenter -t 1 -m -u -i -n sh

Heredoc Docker container

Suppose you want to get some tool as a Docker image, but you do not want to search for a suitable image or create a new Dockerfile (there is no need to keep it for future use, for example). Sometimes storing a Docker image definition in a file looks like overkill - you need to decide how to edit, store, and share this Dockerfile. Sometimes it’s better just to have a single-line command that you can copy, share, embed into a shell script, or turn into a command alias. So, when you want to create a new ad-hoc container with a single command, try the Heredoc approach.

Create Alpine based container with ‘htop’ tool

$ docker build -t htop - << EOF
FROM alpine
RUN apk --no-cache add htop
EOF

Docker command completion

Docker CLI syntax is very rich and constantly growing, adding new commands and new options. It’s hard to remember every possible command and option, so having nice command completion in a terminal is a must-have.

Command completion is a kind of terminal plugin that lets you auto-complete or auto-suggest what to type next by hitting the tab key. Docker command completion works both for commands and options. The Docker team has prepared command completion for the docker, docker-machine, and docker-compose commands, both for Bash and Zsh.

If you are using a Mac and Homebrew, then installing Docker command completion is pretty straightforward.

# Tap homebrew/completions to gain access to these
$ brew tap homebrew/completions

# Install completions for docker suite
$ brew install docker-completion
$ brew install docker-compose-completion
$ brew install docker-machine-completion

For non-Mac installs, read the official Docker documentation: docker engine, docker-compose and docker-machine

Start containers automatically

When you are running a process in a Docker container, it may fail for multiple reasons. Sometimes rerunning the failed container is enough to fix the failure. If you are using a Docker orchestration engine, like Swarm or Kubernetes, the failed service will be restarted automatically. But if you are using plain Docker and want to restart a container, based on the exit code of the container’s main process or always (regardless of the exit code), Docker provides a very helpful option for the docker run command: --restart.

Restart always

Run the redis container with a restart policy of always, so that if the container exits, Docker will restart it.

$ docker run --restart=always redis

Restart container on failure

Run the redis container with a restart policy of on-failure and a maximum restart count of 10.

$ docker run --restart=on-failure:10 redis

Network tricks

There are cases when you want to create a new container and connect it to an already existing network stack. It can be the Docker host network or another container’s network. This can be pretty useful for debugging and auditing network issues. The docker run --network/--net option supports this use case.

Use Docker host network stack

$ docker run --net=host ...

The new container will attach to the same network interfaces as the Docker host.

Use another container’s network stack

$ docker run --net=container:<name|id> ...

The new container will attach to the same network interfaces as the other container. The target container can be specified by id or name.

Attachable overlay network

With the Docker engine running in swarm mode, you can create a multi-host overlay network on a manager node. When you create a new swarm service, you can attach it to the previously created overlay network.

Sometimes, to inspect the network configuration or debug network issues, you want to attach a new Docker container, filled with different network tools, to an existing overlay network, and to do this with the docker run command rather than creating a new “debug service”.

Docker 1.13 brings a new option to the docker network create command: --attachable. The --attachable option enables manual container attachment.

# create an attachable overlay network
$ docker network create --driver overlay --attachable mynet
# create net-tools container and attach it to mynet overlay network
$ docker run -it --rm --net=mynet net-tools sh

This is a working draft version. The final post version is published at Codefresh Blog on January 5, 2017.

18 Dec 2016, 14:00

Deploy Docker Compose (v3) to Swarm (mode) Cluster

Disclaimer: all code snippets below work only with Docker 1.13+


Docker 1.13 simplifies deployment of a composed application to a swarm (mode) cluster. And you can do it without creating a new dab (Distributed Application Bundle) file, but just using the familiar and well-known docker-compose.yml syntax (with some additions) and the --compose-file option.

Compose to Swarm

Swarm cluster

Docker Engine 1.12 introduced a new swarm mode for natively managing a cluster of Docker Engines, called a swarm. Docker swarm mode implements the Raft consensus algorithm and no longer requires an external key-value store, such as Consul or etcd.

If you want to run a swarm cluster on a developer’s machine, there are several options.

The first, and most widely known, option is to use the docker-machine tool with some virtual driver (VirtualBox, Parallels or other).

But in this post I will use another approach: the docker-in-docker Docker image with Docker for Mac; see more details in my Docker Swarm cluster with docker-in-docker on MacOS post.

Docker Registry mirror

When you deploy a new service on a local swarm cluster, I recommend setting up a local Docker registry mirror and running all swarm nodes with the --registry-mirror option, pointing to the local Docker registry. By running a local Docker registry mirror, you can keep most of the redundant image fetch traffic on your local network and speed up service deployment.
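Besides the --registry-mirror command-line flag, the mirror can be configured in the daemon’s configuration file. A minimal sketch, assuming a mirror registry listening on localhost:5000 (the address is an assumption for this example):

```json
{
  "registry-mirrors": ["http://localhost:5000"]
}
```

Place this in /etc/docker/daemon.json on each node and restart the Docker daemon for it to take effect.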

Docker Swarm cluster bootstrap script

I’ve prepared a shell script to bootstrap 4 nodes swarm cluster with Docker registry mirror and very nice swarm visualizer application.

The script initializes the docker engine as a swarm master, then starts 3 new docker-in-docker containers and joins them to the swarm cluster as worker nodes. All worker nodes run with the --registry-mirror option.

Deploy multi-container application - the “old” way

Docker Compose is a tool (and deployment specification format) for defining and running composed multi-container Docker applications. Before Docker 1.12, you could use the docker-compose tool to deploy such applications to a swarm cluster. With the 1.12 release, this is no longer possible: docker-compose can deploy your application only on a single Docker host.

In order to deploy it to a swarm cluster, you need to create a special deployment specification file (also known as a Distributed Application Bundle) in dab format (see more here).

The way to create this file is to run the docker-compose bundle command. The output of this command is a JSON file that describes the multi-container composed application with Docker images referenced by @sha256 instead of tags. Currently, the dab file format does not support multiple settings from docker-compose.yml and does not allow using options supported by the docker service create command.

Such a pity: the dab bundle format looks promising, but currently it is practically useless (at least in Docker 1.12).

Deploy multi-container application - the “new” way

With Docker 1.13, the “new” way to deploy a multi-container composed application is to use docker-compose.yml again (hurrah!). Kudos to Docker team!

Note: you do not need the docker-compose tool, only a yaml file in docker-compose format (version: "3")

$ docker deploy --compose-file docker-compose.yml myapp

Docker compose v3 (version: "3")

So, what’s new in docker compose version 3?

First, I suggest you take a deeper look at the docker-compose schema. It is an extension of the well-known docker-compose format.

Note: docker-compose tool (ver. 1.9.0) does not support docker-compose.yaml version: "3" yet.

The most visible change is around swarm service deployment. Now you can specify all options supported by the docker service create/update commands:

  • number of service replicas (or global service)
  • service labels
  • hard and soft limits for service (container) CPU and memory
  • service restart policy
  • service rolling update policy
  • deployment placement constraints link

Docker compose v3 example

I’ve created a “new” compose file (v3) for the classic “Cats vs. Dogs” example. This example application contains 5 services with the following deployment configurations:

  1. voting-app - a Python webapp which lets you vote between two options; requires redis
  2. redis - Redis queue which collects new votes; deployed on swarm manager node
  3. worker - a .NET worker which consumes votes and stores them in db;
    • # of replicas: 2 replicas
    • hard limit: max 25% CPU and 512MB memory
    • soft limit: max 25% CPU and 256MB memory
    • placement: on swarm worker nodes only
    • restart policy: restart on-failure, with 5 seconds delay, up to 3 attempts
    • update policy: one by one, with 10 seconds delay and 0.3 failure rate to tolerate during the update
  4. db - Postgres database backed by a Docker volume; deployed on swarm manager node
  5. result-app - a Node.js webapp which shows the results of the voting in real time; 2 replicas, deployed on swarm worker nodes
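As an illustration, here is a sketch of what the worker service’s deploy section could look like in v3 syntax (the image name and network name are assumptions for this example, not the exact values from my compose file):

```yaml
version: "3"
services:
  worker:
    # image and network names are assumptions for illustration
    image: docker/example-voting-app-worker
    networks:
      - voteapp
    deploy:
      replicas: 2
      resources:
        limits:              # hard limit: 25% CPU, 512MB memory
          cpus: "0.25"
          memory: 512M
        reservations:        # soft limit: 25% CPU, 256MB memory
          cpus: "0.25"
          memory: 256M
      placement:
        constraints: [node.role == worker]
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1       # one replica at a time
        delay: 10s
        max_failure_ratio: 0.3
networks:
  voteapp:
```

Everything under the deploy key is ignored by the docker-compose tool and only takes effect when the file is deployed to a swarm.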

Run the docker deploy --compose-file docker-compose.yml myapp command to deploy my version of the “Cats vs. Dogs” application on a swarm cluster.

Hope you find this post useful. I look forward to your comments and any questions you have.

This is a working draft version. The final post version is published at Codefresh Blog on December 25, 2016.