Docker for Node.js in Production
My checklist for running Node.js applications in Docker in a production environment

1. Choose the Correct Base Image
It’s essential to choose the right base Docker image for your Node.js application. You should always try to use the official Docker images — as they have excellent documentation, use best practices, and are designed for most common use cases.
There are still a lot of images to choose from if you look at the official Node.js Docker images. I always choose the smallest image that can run my Node.js app. Currently, if you want to run your Node.js app on 64-bit Linux, you have the following options:
- node:stretch (latest) → 344.36 MB
- node:stretch-slim → 65.29 MB
- node:lts-buster-slim → 57.57 MB
- node:lts-buster → 321.14 MB
- node:13.5.0-alpine3.11 → 37.35 MB
The Node.js Docker team, which is responsible for maintaining the Node.js images, based the stretch and buster images on Debian, and the alpine images on Alpine Linux, a minimal Linux distribution with a hardened kernel. My suggestion would be that if your app works on Alpine, you should use an Alpine image as your base image, as this will probably result in the smallest image size.
2. Use a Nonroot Container User
By default, your app runs inside the container as root. However, root in a container is not the same as root on the host; users within Docker containers are still restricted. But to decrease the attack surface even further, you’d want to run the container as an unprivileged user wherever possible.
Most of the official images already create a nonroot user in their Docker image. See, for example, the Alpine Dockerfile, which already creates a group called node and a user called node for you to use. If you base your Dockerfile upon Alpine, you can use this node user to run your app as a nonroot user. See the Dockerfile below for an example of using the node user.
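Below is a minimal sketch of such a Dockerfile, consistent with the description that follows; the /home/node/app folder and the server.js entry point are placeholders for your own application.

```dockerfile
FROM node:12-alpine

# Create the app directory up front so its owner can be set to the node user
# while we are still running as root.
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app

# WORKDIR would also create the folder, but offers no way to set its owner.
WORKDIR /home/node/app

COPY --chown=node:node package*.json ./

# Everything from here on runs as the unprivileged node user.
USER node

RUN npm install

# Copy the application source into the working folder.
COPY --chown=node:node . .

CMD ["node", "server.js"]
```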
In the first line, I use the node:12-alpine image as the base image. The RUN statement that follows creates the directory for my app. I explicitly set the owner of that new directory to the node user because at that point I’m still running as root. Note that the WORKDIR instruction also creates the folder, but there’s no way to set the owner of the directory; the WORKDIR instruction only sets the working directory.
With the USER instruction, I then switch to the node user. The USER instruction sets the user name (or UID) to use when running the image. After the Dockerfile sets the user, I execute npm install to install the dependencies of my app; this command runs as the node user. Next, I copy the source of my application to the working folder. In the last line, the application starts using the CMD instruction. The CMD instruction provides the defaults for executing the container.
3. Starting and Stopping Your Node.js App
It’s crucial to start and stop your Node.js app inside the Docker container in the correct way. In the Dockerfile shown before, I start the Node.js app using the CMD instruction. When you start your Node.js app with the exec form of the CMD instruction (the JSON-array notation), the node process receives signals from the operating system, such as SIGINT and SIGTERM, so your application can handle them and shut down gracefully.
If you ignore these signals and you stop your container, Docker will wait for 10 seconds (the default time-out) for your app to respond. If your app has not responded by then, Docker kills the node process and stops the container. So the first thing you want to do is respond to the SIGINT and SIGTERM signals and gracefully shut down your application.
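A minimal sketch of such signal handling is shown below; in a real application the shutdown function would also close servers and clean up connections.

```js
// React to the signals Docker sends when stopping the container.
process.on('SIGINT', () => shutdown('SIGINT'));
process.on('SIGTERM', () => shutdown('SIGTERM'));

function shutdown(signal) {
  console.log(`Received ${signal}, shutting down gracefully`);
  // Stop your server and clean up outstanding connections here.
  process.exit(0);
}
```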
The code sample above shows how you can implement responding to the SIGINT and SIGTERM signals. In this example, I exit the process by calling process.exit(), but this is also the place for stopping your server and cleaning up the outstanding connections.
If you don’t have access to the source code or do not want to change the source code of an application, there are two different options to shut down your app. The first is using --init; this flag indicates to Docker that an init process should be used as PID 1 in the container. You use it like this when starting your container:
docker run --init -d yournodeappimage
Your container will then directly react to Ctrl-C or Docker stop commands.
Another, more permanent option is to add tini to your Dockerfile and include it in your image. This is also what Docker does in the background when you use the --init flag. You have to install tini in your image and use ENTRYPOINT to start it and wrap the CMD; see the example below.
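Below is a sketch of such a Dockerfile; it installs tini from the Alpine package repository and uses it as PID 1 to forward signals to the node process. The server.js entry point is a placeholder.

```dockerfile
FROM node:12-alpine

# tini is available as a regular Alpine package.
RUN apk add --no-cache tini

WORKDIR /home/node/app
COPY --chown=node:node . .
USER node

# Run tini as PID 1 so it forwards signals to the node process started by CMD.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
```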
Gracefully closing your outstanding HTTP connections
The last thing to add to enable gracefully stopping your Node.js application is handling your outstanding HTTP requests and no longer accepting new ones. This allows updates without downtime when using a container orchestrator, such as Docker Swarm or Kubernetes.
Instead of developing your own, there are some libraries that can handle this for you. The one I use most often is Stoppable — this library stops accepting new connections and closes existing, idle connections (including keep-alives) without killing requests that are in flight.
You have to wrap creating your HTTP server with the stoppable constructor, as you can see below.
const server = stoppable(http.createServer(handler))
Finally, the shutdown function, as shown previously, is extended by closing the server via the stop method that Stoppable adds to the server.
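A sketch of the extended shutdown is shown below; the handler function and the port are placeholders for your own application.

```js
const http = require('http');
const stoppable = require('stoppable');

function handler(req, res) {
  res.end('ok');
}

// Wrap the server so idle keep-alive connections can be closed on shutdown.
const server = stoppable(http.createServer(handler));
server.listen(process.env.HTTP_PORT || 3000);

function shutdown(signal) {
  console.log(`Received ${signal}, closing server`);
  // stop() refuses new connections, lets in-flight requests finish,
  // then invokes the callback.
  server.stop(() => process.exit(0));
}

process.on('SIGINT', () => shutdown('SIGINT'));
process.on('SIGTERM', () => shutdown('SIGTERM'));
```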
4. Health checks
The HEALTHCHECK instruction inside your Dockerfile tells Docker how to validate that a container is still working. This can detect cases such as a web server that’s stuck in an infinite loop and unable to handle new connections, even though the server process is still running.
You have to implement the functionality to perform the health check yourself, which makes sense, as Docker doesn’t know whether your app is functioning correctly. I usually add a separate route to my server, specifically for handling health requests.
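For example, here is a sketch of a dedicated health route on a plain Node.js HTTP server; the /health path and the port are assumptions.

```js
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    // Report application health; add deeper checks (database, queues) here as needed.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok' }));
  }

  // Regular request handling goes here.
  res.writeHead(200);
  res.end('Hello world');
});

server.listen(process.env.HTTP_PORT || 3000);
```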
I add a separate small Node.js application that performs a GET request to the health endpoint. This little application is used by the HEALTHCHECK instruction.
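A sketch of such a healthcheck.js is shown below; the port and the /health path are assumptions and must match your application.

```js
// healthcheck.js - exits with 0 when the app answers with 200, 1 otherwise.
const http = require('http');

const options = {
  host: 'localhost',
  port: process.env.HTTP_PORT || 3000,
  path: '/health',
  timeout: 2000,
};

const request = http.request(options, (res) => {
  process.exit(res.statusCode === 200 ? 0 : 1);
});

request.on('error', () => process.exit(1));
request.on('timeout', () => {
  request.destroy();
  process.exit(1);
});
request.end();
```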
In the Dockerfile, I use this small application to perform the health check.
HEALTHCHECK --interval=21s --timeout=3s --start-period=10s CMD node healthcheck.js
5. Logging from your Node.js application
Logging from a Node.js application running in a Docker container is simple: log to stdout or stderr, depending on the situation. The rationale behind this is to let something else be responsible for handling logging, which makes sense as Docker containers are mostly used in microservice architectures where responsibilities are distributed among multiple services.
I wouldn’t recommend using console.log or console.error directly from your Node.js application. Instead, I’d write a wrapper or use a low-overhead logging framework, such as Pino. Pino writes newline-delimited JSON to stdout by default, providing structured logging.
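A minimal sketch of logging with Pino; it assumes Pino has been added as a dependency, and the logged fields are placeholders.

```js
const pino = require('pino');

// Pino writes newline-delimited JSON to stdout by default.
const logger = pino({ level: process.env.LOG_LEVEL || 'info' });

logger.info({ port: 3000 }, 'server listening');
logger.warn('disk space is running low');
```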
6. Configuration Using Environment Variables
I suspect you already use a specific configuration object that structures and centralizes all the configuration options for your application, such as the following:
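For example, a sketch of such a configuration object; the individual settings are placeholders.

```js
// config.js - centralizes all configuration options for the application.
module.exports = {
  httpPort: 3000,
  logLevel: 'info',
  databaseUrl: 'mongodb://localhost:27017/myapp',
};
```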
Such an object neatly defines and centralizes all the configuration options for your application. The problem with this approach when running inside a Docker container is that all configuration is expected to be done through environment variables.
I usually solve this by letting an environment variable override the default value in the config object, e.g. httpPort: process.env.HTTP_PORT || 3000. For example:
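The same sketch, with environment variables overriding the defaults; the variable names are placeholders.

```js
// config.js - environment variables override the built-in defaults.
module.exports = {
  httpPort: process.env.HTTP_PORT || 3000,
  logLevel: process.env.LOG_LEVEL || 'info',
  databaseUrl: process.env.DATABASE_URL || 'mongodb://localhost:27017/myapp',
};
```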