Docker is a software platform designed to make it easier to create, deploy, and run applications using containers, and it is one of the most in-demand technologies in modern development, so we run almost all of our projects through it. Our business task was to run as many Docker containers as possible on an embedded device (a Raspberry Pi). The problem is that standard Docker images are usually quite big (~600 MB), and the average size of our Node Docker images was reaching over 700 MB. So today I would like to share our experience of optimizing Node.js Docker images and go through all the steps, from the starting point of our optimization to the point where we are now.
Steps to Node.js Docker image optimization we will go through:
1. Use a smaller base image.
2. Install only production dependencies.
3. Get rid of the process manager inside the container.
4. Bundle the app into a single file with NCC.
5. Use a multi-stage Docker build.
The first thing that comes to mind when we are not thinking about optimization is using a standard image. So let's find out how to reduce the Docker image size in Node.js. Here is an example of the Dockerfile we were using:
FROM node:10
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "run", "start"]
A build from this Dockerfile looks acceptable, but there is still much work to do. Here we have ~100 MB of Node modules and 600 MB for the base image alone, so the total size of each of our images was ~700 MB. Yes, we had a layered Node.js image, so the base was shared, but 600 MB was still unacceptable for us.
Take a smaller initial Node image. They are easy to find on Docker Hub, and there are a few small-sized ones. Alpine is the best choice for a base image: it is the smallest one (only ~70 MB). We tried it, but it did not work for us because of the processor architecture of the target platform: at that time, the Alpine-based Node image did not support armv7. So we decided to use the slim image (~100 MB), keeping in mind that we could still build a new image on the base of Alpine Linux for armv7 ourselves.
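Before committing to a base image, you can check which architectures a tag actually supports; a quick sketch (docker manifest may require the experimental CLI features to be enabled on older Docker versions):

# list the platforms the node:10-alpine tag is published for
docker manifest inspect node:10-alpine | grep architecture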
After this step, our Dockerfile looked like:
FROM node:10-slim
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "run", "start"]
So the difference now is 100 MB for the base image and ~100 MB of node_modules for each image. The next improvement is to install only production dependencies, skipping devDependencies:
FROM node:10-slim
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["npm", "run", "start"]
After this small improvement, the node_modules delta was down to ~60 MB.
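You can estimate this delta locally before building the image by comparing the size of node_modules with and without devDependencies; a rough sketch, run from the project root:

# full install, including devDependencies
rm -rf node_modules && npm install && du -sh node_modules
# production-only install
rm -rf node_modules && npm install --production && du -sh node_modules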
A process manager is a tool that controls the application lifecycle and monitors running services to maintain your project's operability; PM2, StrongLoop Process Manager, and forever are among the most popular ones for Node.js.
When we do not use Docker, we use PM2 to run our app in production. The great thing about Docker is that you don't need a process manager inside the container; you can use the Docker restart policy instead:
docker run --restart on-failure <your image>
Docker also provides other restart policies: --restart always, --restart unless-stopped, and --restart no (the default).
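For example, a minimal sketch (the image and container names here are hypothetical):

# restart the container automatically unless it is explicitly stopped
docker run -d --restart unless-stopped --name my-node-app my-node-image
# verify which restart policy is configured
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' my-node-app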
What did we get from it? Getting rid of PM2 removed the pm2 package from node_modules and saved the ~30 MB of RAM overhead that PM2 added to each running container.
So the Dockerfile is:
FROM node:10-slim
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["node", "app.js"]
NCC is a simple CLI for compiling a Node.js app into a single file together with all its dependencies. Moreover, it is easy to use:
npm i -g @zeit/ncc
mkdir dist
ncc build app.js -o dist
The last command builds your app and stores it in the dist/index.js file.
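To confirm the bundle is really self-contained, you can run it directly and check its size; a quick check, assuming app.js is your entry point as above:

# run the bundle on its own and check its size
node dist/index.js
du -h dist/index.js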
Now, let’s get back to Docker.
Every time you use RUN, COPY, or ADD in your Dockerfile, Docker creates a new layer, which directly increases the size of the build, and caches it. So if you do:
FROM node:10-slim
WORKDIR /app
COPY . .
RUN npm install --production
RUN npm install -g @zeit/ncc
RUN ncc build app.js -o dist
RUN rm -rf node_modules
CMD ["node", "dist/index.js"]
your image will still be roughly the size it was with node_modules, because the layers that created node_modules remain in the image even after RUN rm -rf node_modules. Docker will also cache all the layers, so if you build another image via this Dockerfile, it will reuse the files and node_modules from the old one.
To prevent caching, use:
docker build -t myimage . --no-cache
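To see where the megabytes actually live, docker history prints every layer of an image together with its size, which makes a leftover node_modules layer easy to spot:

docker history myimage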
To prevent the size from increasing, we have 3 options:
1. Build locally and copy.
ncc build app.js -o dist
rm -rf node_modules
FROM node:10-slim
WORKDIR /app
COPY dist/index.js .
CMD ["node", "index.js"]
2. Run everything in one command.
FROM node:10-slim
WORKDIR /app
COPY . .
RUN npm install --production && \
    npm install -g @zeit/ncc && \
    ncc build app.js -o dist && \
    rm -rf node_modules
CMD ["node", "dist/index.js"]
This also works, but you need to rm -rf node_modules locally before each build, and besides the index.js file, the image will still contain the unused app source files.
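A .dockerignore file spares you the manual cleanup: anything listed in it is excluded from the build context, so COPY . . never picks up your local node_modules. A minimal sketch:

# .dockerignore: keep local junk out of the build context
node_modules
npm-debug.log
.git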
3. Use multi-stage Docker builds, which is the best option!
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artefacts from one stage to another, leaving behind everything you don’t want in the final image.
After using NCC with multi-stage builds, our largest build was 2.5 MB, where before it was 70 MB. We've finally got the smallest image.
FROM node:10-slim as builder
WORKDIR /app
COPY . .
RUN npm install --production
RUN npm install -g @zeit/ncc
RUN ncc build app.js -o dist

FROM node:10-slim
WORKDIR /app
COPY --from=builder /app/dist/index.js .
CMD ["node", "index.js"]
Small TIP:
We didn't use any binary dependencies for our applications, so we could build our images on x86 and run them on armv7. If you use binary dependencies and your target platform's processor architecture is different, build your images on a machine with the same architecture, or use a virtual machine that emulates the processor architecture you need.
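On recent Docker versions, Buildx with QEMU emulation is another option for cross-building; a sketch, assuming buildx and binfmt/QEMU support are installed on the build machine (the tag is hypothetical):

# cross-build the image for armv7 on an x86 machine
docker buildx build --platform linux/arm/v7 -t myimage:armv7 .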
I hope you have learned at least some useful information from this post, and in the end I want to share our final results:
- Delta for our images before any optimisation: ~100 MB per image (on top of a ~600 MB base image).
- Delta after all the steps: ~2.5 MB per image (on top of a ~100 MB slim base image).
As I mentioned at the beginning, our goal was to run as many containers as possible. For example, to run 7 containers we initially needed ~1.3 GB of SSD for images on start and ~30 MB of RAM overhead for each container because of PM2; now it is only 117.5 MB of SSD and 210 MB less RAM usage.