How to use Docker and Docker Compose with NodeJS (NodeJS + Mongodb + Elasticsearch)

Krunal Shah

Nov 06, 2019

10 min read · Last Updated Jan 27, 2021

Docker is a tool designed to make it easier to create, deploy, and run applications using containers. Docker is a bit like a virtual machine, but rather than creating a whole virtual operating system, it lets applications share the same Linux kernel as the host they are running on and only requires applications to ship with things not already present on the host. This gives a significant performance boost and reduces the size of the application.

Advantages

1. Continuous Integration Efficiency

  • Docker enables you to build a container image and use that same image across every step of the deployment process.

2. Compatibility and Maintainability

  • Eliminate the “it works on my machine” problem once and for all. One of the benefits that the entire team will appreciate is parity. Parity, in terms of Docker, means that your images run the same no matter which server or whose laptop they are running on.

3. Standardization

  • Docker containers ensure consistency across multiple developments and release cycles, standardizing your environment.
  • Docker provides repeatable development, build, test, and production environments. Standardizing service infrastructure across the entire pipeline lets every team member work in a production-parity environment.

Prerequisite

  • Docker
  • Docker-compose
  • An operating system that supports Docker (for Windows, the Pro edition is required). You can verify the Docker and docker-compose installations with the commands below.
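
A quick way to confirm both tools are installed and available on your PATH (the reported versions will differ on your machine):

docker --version
docker-compose --version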

Let us begin by Dockerizing MongoDB and Elasticsearch along with Node.js.

I have created a sample app that integrates Node.js, MongoDB, and Elasticsearch.

Dockerfile:

FROM node:12

WORKDIR /app

COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]

RUN npm install --production --silent && mv node_modules ../

COPY . .

EXPOSE 3010

RUN npm install -g nodemon

CMD npm start

Let’s see what each of the above instructions actually does:

FROM node:12

Sets the base image to build on. FROM must be the first instruction in a Dockerfile.

WORKDIR /app

Sets the working directory for any ADD, COPY, CMD, ENTRYPOINT, or RUN instructions that follow it in the Dockerfile.

COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]

Copies files or folders from the source on the host into the destination path in the image's filesystem.

RUN npm install --production --silent && mv node_modules ../

Execute any commands on top of the current image as a new layer and commit the results.

COPY . .

Copies everything from the current directory on the host into the working directory inside the image.

EXPOSE 3010

Documents the network port the container listens on at runtime. Note that EXPOSE does not actually publish the port; that is done with -p when running the container.

RUN npm install -g nodemon

Installs nodemon globally inside the image.

CMD npm start

Provide defaults for an executing container. If an executable is not specified, then ENTRYPOINT must be specified as well. There can only be one CMD instruction in a Dockerfile

How to make a Docker Image?

docker build -t sample-app . 

*Here the trailing . tells Docker to use the current directory as the build context.

This is a command for building a docker image for a sample app that I have created.

-t tags the image with a name (and optionally a version, in the name:tag format) so you can refer to it later.
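
For example, you can tag the same build with both an explicit version and latest and then list it; the tag names here are just illustrative:

docker build -t sample-app:1.0 -t sample-app:latest .
docker images sample-app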

When you build the image for the first time, you will see the base image layers being downloaded from Docker Hub and your local files being copied into the image.

Here is the sample output of building the image from the Dockerfile above:

Step 1/8 : FROM node:12
---> b074182f4154
Step 2/8 : WORKDIR /app
---> Using cache
---> e07060a5ab32
Step 3/8 : COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
---> Using cache
---> fd19d59b39e9
Step 4/8 : RUN npm install --production --silent && mv node_modules ../
---> Using cache
---> b2aa67ffc9c2
Step 5/8 : COPY . .
---> c05330c66aa3
Step 6/8 : EXPOSE 3000
---> Running in a5d196effe33
Removing intermediate container a5d196effe33
---> 94fc592e18f8
Step 7/8 : RUN npm install -g nodemon
---> Running in 34debda3811d
/usr/local/bin/nodemon -> /usr/local/lib/node_modules/nodemon/bin/nodemon.js

> nodemon@1.19.1 postinstall /usr/local/lib/node_modules/nodemon
> node bin/postinstall || exit 0

Love nodemon? You can now support the project via the open collective:
> https://opencollective.com/nodemon/donate

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.9 (node_modules/nodemon/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.9: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

+ nodemon@1.19.1
added 221 packages from 128 contributors in 18.789s
Removing intermediate container 34debda3811d
---> ffd667c80190
Step 8/8 : CMD npm start
---> Running in a9d44a256c2c
Removing intermediate container a9d44a256c2c
---> 5b2751381d7d
Successfully built 5b2751381d7d
Successfully tagged sample-app:latest

Now that we have built the image, let's see how to run it.

docker run -p 3000:3000 sample-app

Here we are running the image we built earlier, using the name "sample-app" that we gave it at build time.

In the command above, -p maps a host port to a container port. It binds port 3000 of the container to TCP port 3000 on 127.0.0.1 (localhost) of the host machine.

We can also pass -d to run the container in detached mode, as shown below.
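
A minimal sketch of a detached run, with a container name added so the logs are easy to find afterwards (the --name value is just an example):

docker run -d -p 3000:3000 --name sample-app-container sample-app
docker ps
docker logs -f sample-app-container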

Let's see the output of running the image:

node ./bin/www

Elasticsearch INFO: 2019-08-29T10:18:59Z
Adding connection to http://localhost:9200/

MongoDB connection error: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1054:14) {
name: 'MongoNetworkError',
errorLabels: [Array],
[Symbol(mongoErrorContextSymbol)]: {}
}]
npm ERR! code ELIFECYCLE
npm ERR! errno 255
npm ERR! mongoelastic@0.0.0 start: `node ./bin/www`
npm ERR! Exit status 255
npm ERR!
npm ERR! Failed at the mongoelastic@0.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2019-08-29T10_18_59_116Z-debug.log

Oops! It seems the Dockerfile we made is not enough on its own for what we want to achieve.

Why is the Dockerfile failing?

Our app needs both MongoDB and Elasticsearch, but the container built from our Dockerfile has no MongoDB or Elasticsearch instance it can reach.

As you can see in the error logs above, the app throws an error while connecting to MongoDB. To solve this, I had to dig deeper into Docker.

After exploring, I ran into docker-compose, which helps in defining and running multi-container Docker applications. With a single command, you create and start all the services from your configuration. I faced plenty of failures but finally succeeded in putting together a docker-compose file that fits our requirement.

Below is my docker-compose file, which runs MongoDB and Elasticsearch alongside the Node.js backend.

version: "3"
services:
backend:
    container_name: nodejs
    restart: always
    build: ./
    ports:
    - "3010:3010"
    volumes:
    - .:/app
    - ./error.log:/usr/src/app/error.log
    links:
    - mongo
    - elasticsearch
mongo:
    container_name: mongo
    image: mongo
    ports:
    - "27017:27017"
    volumes:
    - ./data:/data/db
elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
    environment:
    - node.name=es01
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
    -  esdata:/usr/share/elasticsearch/data
    ports:
    - "9200:9200"    
volumes:
esdata:
    driver: local

Let me explain what each part of the docker-compose file does.

version: "3"

version describes which Docker Compose file format you are going to use; there are several versions to choose from. Here I am using version 3.

services:

services declares the containers that make up the application, each under a unique name. In our compose file we have three services: backend, mongo, and elasticsearch.

container_name:

This sets a custom container name; it is not a mandatory field in docker-compose. I have added it so that, after running docker-compose, the containers show up with recognizable names and I can easily follow what each one is doing.

restart: always

restart defines the container's restart policy. With always, Docker automatically restarts the container whenever it stops, for example after a crash or a daemon restart.

build: ./

build declares the build context, i.e. the directory containing the Dockerfile and source files that Docker should use to build the backend image.

ports:
  - "3010:3010"
ports:
  - "27017:27017"

ports maps host ports to container ports. In our example, the Node.js backend runs on 3010 and MongoDB runs on 27017, which is its default port (Elasticsearch is likewise published on its default port, 9200).

volumes:
    - .:/app
    - ./error.log:/usr/src/app/error.log
volumes:
    - ./data:/data/db
volumes:
    -  esdata:/usr/share/elasticsearch/data

volumes persist data generated inside the container onto the host machine (or into a named volume), so that we can still access it after the container is destroyed.

Here in our app, we are storing error logs, MongoDB data, and elasticsearch data.

 links:
    - mongo
    - elasticsearch

links creates a bridge between our Node.js backend and the MongoDB and Elasticsearch containers so that all three can communicate. Compose also places the services on a shared network, so they can reach each other by their service names, as shown below.
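
For illustration, here is a minimal sketch of how the Node.js app can connect to the other services by their Compose service names; it assumes the mongoose and @elastic/elasticsearch packages, and the database name "sample" is hypothetical (the actual repository may use different clients and names):

// Sketch only: inside the Compose network, other services are reachable
// by their service names ("mongo", "elasticsearch") instead of localhost.
const mongoose = require('mongoose');
const { Client } = require('@elastic/elasticsearch');

// "sample" is a hypothetical database name used for illustration
mongoose
  .connect('mongodb://mongo:27017/sample', { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.error('MongoDB connection error:', err));

// The Elasticsearch service name from docker-compose.yml becomes the hostname
const esClient = new Client({ node: 'http://elasticsearch:9200' });

module.exports = { esClient };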

environment:
    - node.name=es01
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

Elasticsearch uses node.name as a human-readable identifier for a particular instance of Elasticsearch so it is included in the response of many APIs.

discovery.type specifies whether Elasticsearch should form a multi-node cluster; here I am using single-node, so it does not try to discover other nodes.

ES_JAVA_OPTS=-Xms512m -Xmx512m sets the JVM heap size for Elasticsearch. It is also recommended to set a memory limit for the container.

Now let’s see how to run the docker-compose file. The traditional command for running docker-compose is

docker-compose up

Or, to run in detached mode, we can add -d at the end of the command above.

docker-compose up -d

But in my case, I needed compose files per environment (development, test/QA, and production), so I have three compose files in the project.

For development

docker-compose -f docker-compose.dev.yml build

You can skip the build command: if no image is found when you execute the command shown below, Compose automatically builds the image from the Dockerfile.

docker-compose -f docker-compose.dev.yml up -d
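
The exact contents of these files depend on the project, but as a rough, hypothetical sketch, a development compose file might override the backend to mount the source tree and run under nodemon for live reload (the command and paths here are assumptions, not the repository's actual files):

version: "3"
services:
  backend:
    build: ./
    command: nodemon ./bin/www
    ports:
      - "3010:3010"
    volumes:
      - .:/app
    links:
      - mongo
      - elasticsearch
  mongo:
    image: mongo
    ports:
      - "27017:27017"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
    environment:
      - discovery.type=single-node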

I generally start by building the images with docker-compose and then running it. When you build for the first time, it can take a couple of minutes. Once the build is complete, you can see output like the one below.

mongo uses an image, skipping
elasticsearch uses an image, skipping
Building backend
Step 1/8 : FROM node:12
---> b074182f4154
Step 2/8 : WORKDIR /usr/src/app
---> Using cache
---> 51f2e238ec24
Step 3/8 : COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
---> 1cba60d1b0d1
Step 4/8 : RUN npm install --production --silent && mv node_modules ../
---> Running in 38ba6b4206ad
added 470 packages from 332 contributors and audited 6850 packages in 15.205s
found 55 vulnerabilities (9 low, 4 moderate, 40 high, 2 critical)
run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container 38ba6b4206ad
---> cfbd4706cd3c
Step 5/8 : COPY . .
---> 3469281650a7
Step 6/8 : EXPOSE 3000
---> Running in 97aed101c9b1
Removing intermediate container 97aed101c9b1
---> 66cd1898a59b
Step 7/8 : RUN npm install -g nodemon
---> Running in 469c04f242cd
/usr/local/bin/nodemon -> /usr/local/lib/node_modules/nodemon/bin/nodemon.js

> nodemon@1.19.4 postinstall /usr/local/lib/node_modules/nodemon
> node bin/postinstall || exit 0

Love nodemon? You can now support the project via the open collective:
> https://opencollective.com/nodemon/donate

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.9 (node_modules/nodemon/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.9: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

+ nodemon@1.19.4
added 221 packages from 128 contributors in 27.31s
Removing intermediate container 469c04f242cd
---> db4710afc11b
Step 8/8 : CMD npm start
---> Running in 5b5a8fce0e08
Removing intermediate container 5b5a8fce0e08
---> 6f8fbf6e5989
Successfully built 6f8fbf6e5989
Successfully tagged express-mongo-elasticsearch-1_backend:latest

If you read the Dockerfile build output earlier, you will notice this is quite similar. You can also run your docker-compose file with -d so that it runs in the background.

Here is the output when the containers start in the background.

Starting elasticsearch   ... done
Starting mongo           ... done
Starting nodejs          ... done

If you want to check logs in detached mode, you can use the following command:

docker logs [container name]
e.g. docker logs nodejs
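
If you started everything through Compose, you can also tail one service, or all of them at once:

docker-compose logs -f backend
docker-compose logs -f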

Let's check whether the app is working. For that, I have made a sample status API that reports the health of our app.

The health check API is: http://localhost:3000/v1/status

And if you want to check whether Elasticsearch is running, you can use this URL: http://localhost:9200. For MongoDB: http://localhost:27017

{"mongo":"Connected to server.","elastic":"Connected to server.","version":"1.0"}

Now that you have seen how Docker and docker-compose work with Node.js, go ahead and use them in your own applications.

If you want the complete source code of the example showcased above, you can check it out from the link below.

https://github.com/Abhilashtrt/express-mongo-elasticsearch.git
