03 Jun 2017, 18:00

Debugging a remote Node.js application running in a Docker container


Suppose you want to debug a Node.js application that is already running on a remote machine inside a Docker container, and you would like to do it without modifying the command arguments (to enable debug mode) or opening the remote Node.js debugger agent port to the whole world.

I bet you didn’t know that this is possible, let alone how to do it.

I encourage you to continue reading this post if you are eager to learn some cool new stuff.

The TodoMVC demo application

I’m using a fork of the TodoMVC Node.js application (by Gleb Bahmutov) as the demo application for this blog post. Feel free to clone and play with this repository.

Here is the Dockerfile I’ve added to the TodoMVC application; it allows running the application inside a Docker container.

FROM alpine:3.5

# install node
RUN apk add --no-cache nodejs-current tini

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Build-time argument to set NODE_ENV ('production' by default)
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV:-production}

# install npm packages: clean obsolete files
COPY package.json /usr/src/app/
RUN npm config set depth 0 && \
    npm install && \
    npm cache clean && \
    rm -rf /tmp/*

# copy source files
COPY . /usr/src/app


# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]

CMD [ "npm", "start" ]

# add VCS labels for code sync and nice reports
ARG VCS_REF="local"
ARG VCS_URL="local"
LABEL org.label-schema.vcs-ref=$VCS_REF \
      org.label-schema.vcs-url=$VCS_URL

Building and running TodoMVC in a Docker container

To build a new Docker image for the TodoMVC application, run the docker build command.

$ # build Docker image; set VCS_REF to the current HEAD commit (short) and VCS_URL to the origin url
$ docker build -t local/todomvc \
    --build-arg VCS_REF=`git rev-parse --short HEAD` \
    --build-arg VCS_URL=`git config --get remote.origin.url` .
$ # run TodoMVC in a Docker container
$ docker run -d -p 3000:3000 --name todomvc local/todomvc node src/start.js
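
A quick sanity check (my own addition, not part of the original flow) to confirm the container is up and serving:

$ # the TodoMVC application should respond on port 3000
$ curl -sSf http://localhost:3000 > /dev/null && echo "TodoMVC is up"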

The Plan

Final goal: I would like to be able to attach a Node.js debugger to a Node.js application that is already up and running inside a Docker container on a remote host machine in the AWS cloud, without modifying the application, the container, or the container configuration, and without restarting it with additional debug flags. Imagine that the application is running and some problem is happening right now: I want to connect to it with a debugger and start looking at the problem.

So, I need a plan: a step-by-step flow that will help me achieve the final goal.

Let’s start with exploring the inventory. On the server (an AWS EC2 VM), I have a Node.js application running inside a Docker container. On the client (my laptop), I have an IDE (Visual Studio Code, in my case), the Node.js application code (git pull/clone) and a Node.js debugger.

So, here is my plan:

  1. Set the already running application to debug mode
  2. Expose a new Node.js debugger agent port to enable remote debugging in a secure way
  3. Synchronize client and server code: both should be on the same commit in the git tree
  4. Attach a local Node.js debugger to the Node.js debugger agent port on the remote server, and do it in a secure way
  5. And, if everything works, I should be able to perform regular debugging tasks, like setting breakpoints, inspecting variables, pausing execution and others

Debug Node in Docker

Step 1: set the already running Node.js application to debug mode

The V8 debugger can be enabled and accessed either by starting Node with the --debug command-line flag or by signaling an existing Node process with SIGUSR1. (Node API documentation)

Cool! So, in order to switch on the Node.js debugger agent, I just need to send the SIGUSR1 signal to the Node.js process of the TodoMVC application. Remember, it’s running inside a Docker container. So what command can I use to send process signals to an application running in a Docker container?

docker kill is my choice! Despite its name, this command does not necessarily “kill” the PID 1 process running in a Docker container; it sends a Unix signal to it (SIGKILL by default).

Setting TodoMVC into debug mode

So, all I need to do is send SIGUSR1 to my TodoMVC application running inside the todomvc Docker container.

There are two ways to do this:

  1. Use the docker kill --signal command to send SIGUSR1 to the PID 1 process running inside the Docker container; if it’s a “proper” init process (one that forwards signals correctly, like tini), this will work
  2. Or execute kill -s SIGUSR1 inside the already running Docker container, sending SIGUSR1 to the main Node.js process.
$ # send SIGUSR1 with docker kill (if using proper init process)
$ docker kill --signal SIGUSR1 todomvc 
$ # OR run kill command for node process inside todomvc container
$ docker exec -it todomvc sh -c 'kill -s SIGUSR1 $(pidof -s node)'

Let’s verify that the Node.js application has been switched into debug mode.

$ docker logs todomvc

TodoMVC server listening at http://:::3000
emitting 2 todos
server has new 2 todos
GET / 200 31.439 ms - 3241
GET /app.css 304 4.907 ms - -
Starting debugger agent.
Debugger listening on

As you can see, the Node.js debugger agent has started, but it can accept connections only from localhost; see the last output line: Debugger listening on

Step 2: expose the Node.js debugger port

In order to attach a remote Node.js debugger to a Node.js application running in debug mode, I need to:

  1. Allow connections to the debugger agent from any (or a specific) IP or IP range
  2. Expose the Node.js debugger agent port outside of the Docker container

But how do you do this when the application is already running in a Docker container, the Node.js debugger agent is ready to talk only to a debugger on the same machine, and the agent port is not accessible from outside of the Docker container?

Of course, it’s possible to start every Node.js Docker container with the debugger port exposed and connections allowed from any IP (using the --debug-port and --debug Node.js flags), but we are not looking for easy ways :).

It’s not a good idea from a security point of view (it allows unprotected access to the Node.js debugger). Also, if I restart an already running application with debug flags, I’m going to lose the current execution context and may not be able to reproduce the problem I wanted to debug.

I need a better solution!

Unfortunately, Docker does not allow exposing an additional port of an already running Docker container. So, I need to somehow connect to the running container’s network and expose a new port for the Node.js debugger agent.

Also, it is not possible to tell the Node.js debugger agent to accept connections from different IP addresses once the Node.js process has been started.

Both of the above problems can be solved with the help of a small Linux utility called socat (SOcket CAT). It is much like netcat, but with security in mind (for example, it supports chrooting) and works over various protocols and data channels: files, pipes, devices, TCP sockets, Unix sockets, a SOCKS4 client, proxy CONNECT, SSL, etc.

From the socat man page:

> socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

Exactly, what I need!
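
As a quick illustration of what socat does (a standalone example of mine, unrelated to the TodoMVC setup), this command relays every connection made to local port 8080 to a service listening on port 3000:

$ # bidirectional relay: localhost:8080 -> localhost:3000 (one fork per connection)
$ socat TCP-LISTEN:8080,fork TCP:localhost:3000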

So, here is the plan: I will run a new Docker container with the socat utility on board and configure Node.js debugger port forwarding for the todomvc container. Here is the socat.Dockerfile:


FROM alpine:3.5
RUN apk add --no-cache socat
CMD socat -h

Building the socat Docker image

$ docker build -t local/socat - < socat.Dockerfile
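
To verify the image, run it with no arguments; the default CMD simply prints the socat help text:

$ # should print socat usage information
$ docker run --rm local/socat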

Allow connections to the Node.js debugger agent from any IP

I need to run a “sidecar” socat container in the same network namespace as the todomvc container and define the port forwarding.

$ # define local port forwarding
$ docker run -d --name socat-nid --network=container:todomvc local/socat socat TCP-LISTEN:4848,fork TCP:

Now any traffic that arrives at port 4848 will be routed to the Node.js debugger agent listening on Port 4848 can accept traffic from any IP. It’s possible to restrict connections to the socat listening port to a given IP range by adding the range=<IP range> option, as shown below.
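
For example, to accept connections only from the network (a hypothetical range; adjust it to your setup), the same forwarder would look like this:

$ # restrict the listening port to a specific client IP range
$ docker run -d --name socat-nid --network=container:todomvc local/socat socat TCP-LISTEN:4848,fork,range= TCP: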

Exposing Node.js debugger port from Docker container

First, get the IP of the todomvc Docker container.

$ # get IP of todomvc container
$ TODOMVC_IP=$(docker inspect -f "{{.NetworkSettings.IPAddress}}" todomvc)

Then, configure port forwarding to the “sidecar” socat port we defined previously, which runs in the same network as the todomvc container.

$ # run socat container to expose Node.js debugger agent port forwarder
$ docker run -d -p 5858:5858 --name socat local/socat socat TCP-LISTEN:5858,fork TCP:${TODOMVC_IP}:4848

Any traffic that arrives at port 5858 on the Docker host will be forwarded first to the socat port 4848 and then to the Node.js debugger agent running inside the todomvc Docker container.
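
Before going remote, you can sanity-check the whole forwarding chain from the Docker host; on a successful connection, the legacy V8 debugger agent replies with a short greeting header (nc flags vary between netcat flavors, so treat this as a sketch):

$ # expect headers like "Type: connect" and "V8-Version: ..." in the reply
$ nc -w 3 localhost 5858 < /dev/null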

Exposing Node.js debugger port for remote access

In most cases, I would like to debug an application running on a remote machine (AWS EC2 instance, for example). I also do not want to expose a Node.js debugger agent port unprotected to the whole world.

One possible and working solution is to use SSH tunneling to access this port.

$ # Open SSH Tunnel to gain access to servers port 5858. Set `SSH_KEY_FILE` to ssh key location or add it to ssh-agent
$ #
$ # open an ssh tunnel, send it to the bg, and wait 20 seconds for connections
$ # once all connections are closed after 20 seconds then close the tunnel
$ ssh -i ${SSH_KEY_FILE} -f -o ExitOnForwardFailure=yes -L 5858:localhost:5858 ec2_user@some.ec2.host.com sleep 20

Now all traffic to localhost:5858 will be tunneled over SSH to the remote Docker host and, after some socat forwarding, will reach the Node.js debugger agent running inside the todomvc container.
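
Before configuring the IDE, the tunnel can be verified end to end with Node's built-in command-line debugger client (node debug <host>:<port>, available in the Node.js 6/7 era this post targets):

$ # attach the Node.js CLI debugger to the remote agent through the SSH tunnel
$ node debug localhost:5858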

Step 3: Synchronizing on the same code commit

In order to debug a remote application, you need to make sure that the code in your IDE is the same code that is running on the remote server.

I will try to automate this step too. Remember the LABEL instruction I used in the TodoMVC Dockerfile?

These labels help me identify the git repository and commit for the application Docker image:

  1. org.label-schema.vcs-ref - contains the short SHA of the HEAD commit
  2. org.label-schema.vcs-url - contains the application git repository url (usable for clone/pull)

I’m using the [Label Schema Convention](http://label-schema.org/rc1/), since I really like it and find it useful, but you can select any other convention too.

This approach allows me, for any properly labeled Docker image, to identify the application code repository and the exact commit it was created from.

$ # get git repository url form Docker image
$ GIT_URL=$(docker inspect local/todomvc | jq -r '.[].ContainerConfig.Labels."org.label-schema.vcs-url"')
$ # get git commit from Docker image
$ GIT_COMMIT=$(docker inspect local/todomvc | jq -r '.[].ContainerConfig.Labels."org.label-schema.vcs-ref"')
$ # clone git repository, if needed
$ git clone $GIT_URL
$ # set HEAD to same commit as server
$ git checkout $GIT_COMMIT

Now both my local development environment and the remote application are on the same git commit. And I can finally start to debug my code!

Step 4: Attaching local Node.js debugger to debugger agent port

To start debugging, I need to configure my IDE. In my case, it’s Visual Studio Code, and I need to add a new launch configuration.

This launch configuration specifies the remote debugger address and port to attach to, as well as the remote location of the application source files, which must be in sync with the local files (see the previous step).

{
    // For more information about Node.js debug attributes, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "attach",
            "name": "Debug Remote Docker",
            "address": "",
            "port": 5858,
            "localRoot": "${workspaceRoot}/",
            "remoteRoot": "/usr/src/app/"
        }
    ]
}

And finally, I’ve met my goal: I’m able to attach a Node.js debugger to a Node.js application that is already up and running in a Docker container on a remote machine.

It was a long journey to find the proper solution, but once found, the process does not look complex at all. Now, whenever I meet a new problem in our environment, I can easily attach the Node.js debugger to the running application and start exploring the problem. Nice, isn’t it?

I’ve recorded a short video to demonstrate all the steps and show that everything works exactly as described in this post.

Hope you find this post useful. I look forward to your comments and any questions you have.

*This is a working draft version. The final post version is published at Codefresh Blog on June 6, 2017.*

25 Apr 2017, 18:00

Create lean Node.js image with Docker multi-stage build


Starting with Docker 17.05, you can create a single Dockerfile that builds multiple helper images with compilers, tools, and tests, and uses files from those images to produce the final Docker image.

Multi-stage Docker Build

The “core principle” of Dockerfile

Docker can build images by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains a list of all the commands needed to build a new Docker image. The syntax of Dockerfile is pretty simple and the Docker team tries to keep it intact between Docker engine releases.

The core principle is very simple: 1 Dockerfile -> 1 Docker Image.

This principle works just fine for basic use cases, where you just need to demonstrate Docker capabilities or put some “static” content into a Docker image.

Once you advance with Docker and want to create secure and lean Docker images, a single Dockerfile is not enough.

People who insist on following the above principle find themselves with slow Docker builds, huge Docker images (several GB in size), slow deployment times and lots of CVE violations embedded in those images.

The Docker Build Container pattern

Docker Pattern: The Build Container

The basic idea behind the Build Container pattern is simple:

Create additional Docker images with the required tools (compilers, linters, testing tools) and use these images to produce a lean, secure and production-ready Docker image.


An example of the Build Container pattern for a typical Node.js application:

  1. Derive FROM a Node base image (for example node:6.10-alpine) with node and npm installed (Dockerfile.build)
  2. Add package.json
  3. Install all node modules from dependencies and devDependencies
  4. Copy application code
  5. Run compilers, code coverage, linters, code analysis and testing tools
  6. Create the production Docker image; derive FROM the same or another Node base image
  7. Install only the node modules required at runtime (npm install --only=production)
  8. Expose PORT and define a default CMD (the command to run your application)
  9. Push the production image to a Docker registry
This flow assumes that you are using two or more separate Dockerfiles and a shell script or CI flow tool to orchestrate all the steps above (see the orchestration sketch after the two Dockerfiles below).


I use a fork of the Let’s Chat Node.js application.

Builder Docker image with eslint, mocha and gulp

FROM alpine:3.5
# install node 
RUN apk add --no-cache nodejs
# set working directory
WORKDIR /root/chat
# copy project file
COPY package.json .
# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install
# copy app files
COPY . .
# run linter, setup and tests
CMD npm run lint && npm run setup && npm run test

Production Docker image with ‘production’ node modules only

FROM alpine:3.5
# install node
RUN apk add --no-cache nodejs tini
# set working directory
WORKDIR /root/chat
# copy project file
COPY package.json .
# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install --only=production && \
    npm cache clean
# copy app files
COPY . .
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# application server port (Let's Chat listens on 5000 by default)
EXPOSE 5000
# default run command
CMD npm run start
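
This flow needs a bit of glue to run end to end. Here is a minimal orchestration sketch (my own example, not from the original post), assuming the builder Dockerfile above is saved as Dockerfile.build and the production one as Dockerfile:

#!/bin/sh
set -e

# 1. build the builder image with compilers, linters and test tools
docker build -t local/chat:build -f Dockerfile.build .

# 2. run linter, setup and tests inside the builder container;
#    a non-zero exit code aborts the flow (because of 'set -e')
docker run --rm local/chat:build

# 3. build the lean production image only if the tests passed
docker build -t local/chat:latest .

# 4. optionally push the production image to a Docker registry
# docker push local/chat:latest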

What is Docker multi-stage build?

Docker 17.05 extends the Dockerfile syntax to support multi-stage builds by enhancing two instructions: FROM and COPY.

A multi-stage build allows using multiple FROM commands in the same Dockerfile. The last FROM command produces the final Docker image; all the other images are intermediate (no final image is produced from them, but all layers are cached).

The FROM syntax also supports the AS keyword. Use the AS keyword to give the current image a logical name, and reference it later by that name.

To copy files from an intermediate image, use COPY --from=<image_AS_name|image_number>, where numbering starts at 0 (but it’s better to use logical names via the AS keyword).

Creating a multi-stage Dockerfile for Node.js application

The Dockerfile below makes the Build Container pattern obsolete, allowing you to achieve the same result with a single file.

# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-npm tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .

# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production 
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install

# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN  npm run lint && npm run setup && npm run test

# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
CMD npm run start

The above Dockerfile creates three intermediate Docker images and a single release Docker image (the final FROM).

  1. The first image, FROM alpine:3.5 AS base - a base Node image with node, npm, tini (the init app) and package.json
  2. The second image, FROM base AS dependencies - contains all node modules, from both dependencies and devDependencies, plus a separate copy of the production-only dependencies for the final image
  3. The third image, FROM dependencies AS test - runs linters, setup and tests (with mocha); if this command fails, no final image is produced
  4. The final image, FROM base AS release - a base Node image with the application code and all node modules from dependencies
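
Docker 17.05 also adds a --target flag to docker build, which builds the Dockerfile only up to the named stage. This is handy, for example, to build just the test stage (lint, setup and tests run at build time):

$ # build only up to the 'test' stage
$ docker build --target test -t local/chat:test .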

Try Docker multi-stage build today

In order to try the Docker multi-stage build, you need to get Docker 17.05, which is going to be released in May and is currently available on the beta channel.

So, you have two options:

  1. Use the beta channel to get Docker 17.05
  2. Run a dind container (Docker-in-Docker)

Running Docker-in-Docker 17.05 (beta)

Running Docker 17.05 (beta) in a Docker container (--privileged is required):

$ docker run -d --rm --privileged -p 23751:2375 --name dind docker:17.05.0-ce-dind --storage-driver overlay2

Try the multi-stage build. Add --host=tcp://localhost:23751 to every Docker command, or set the DOCKER_HOST environment variable.

$ # using --host
$ docker --host=tcp://localhost:23751 build -t local/chat:multi-stage .

$ # OR: setting DOCKER_HOST
$ export DOCKER_HOST=tcp://localhost:23751
$ docker build -t local/chat:multi-stage .


With the Docker multi-stage build feature, it’s possible to implement an advanced Docker image build pipeline using a single Dockerfile. Kudos to the Docker team!

Hope you find this post useful. I look forward to your comments and any questions you have.

_This is a working draft version. The final post version is published at Codefresh Blog on April 24, 2017._