It Works on My Machine, So Deployment Should Be the Same, Right?

This article is part of Proyek Perangkat Lunak (PPL), a Software Engineering Project course. Any information written here may or may not be accurate. Please use it wisely and let me know if there is any wrong information in it.

If you are reading this article, I assume you already know what deployment is. If not, I'll try to explain it briefly.

What is Deployment?

Deployment in software and web development means pushing changes or updates from one deployment environment to another. When setting up a website, you will always have your live website, which is called the live environment or production environment. By deploying our app or website, we expect it to be accessible to all users around the world via the internet. But sometimes, when we try to deploy our website or application to the internet, it does not work as expected, even though it works perfectly fine locally or in development. We need something that gives developers and servers the same environment, so the app works well on both sides. Here comes Docker to help you.

What is Docker?

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. Docker is composed of the following elements:

  • a daemon, which is used to build, run, and manage the containers;
  • a high-level API, which allows the user to communicate with the daemon;
  • and a CLI, the interface we use to make all of this available.

Some Docker terminology:

  • Docker Image: A Docker image is a single file that comprises all the dependencies, configuration files, and source code required to run an application. It is a sort of read-only snapshot or template used by containers to run applications. A Docker image also contains a specific startup command that is used when a container starts. For instance, if the image is for a Node.js web app, it will contain a base image for Node.js, a package.json file, node_modules, source code, and a startup command like npm start or node index.js.
  • Container: A Docker image can't run an application on its own, as it is only a read-only file. To run the application inside a Docker image, we need a runtime environment, which is called a container. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. A container is a running process along with a subset of physical resources (on your system) that are allocated to that process specifically. A Docker container has many advantages, such as portability, security, and quick and easy configuration. You can deploy Docker containers on any physical or virtual machine, and even in the cloud.
  • Dockerfile: A Dockerfile is a way to create your own custom Docker images. It is a file or script that contains a set of instructions to build an image. Usually, the first line specifies the base image, followed by steps for copying the dependency list, installing dependencies, copying the source code, and setting a startup command.
  • Docker Build: It simply means building or creating a Docker image from a Dockerfile.
  • Docker Compose: It is a tool used for developing multi-container applications. For example, if the application has multiple containers, such as a front-end container, a back-end container, and a database container, docker-compose will build and run that application. Containers are isolated and can't communicate with other containers directly, but docker-compose automatically connects all the containers over a bridge network.
  • Base Image: A base image is a sort of an initial starting point or initial set of programs you can use to further customize your images. You can consider it as a kind of an operating system.
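To make these terms concrete, here is a sketch of the typical lifecycle on the command line. The image name my-app and the container name are illustrative assumptions, not names from our project, and the commands require a running Docker daemon:

```shell
# Build a read-only image from the Dockerfile in the current directory.
# -t gives the image a name (tag) so we can refer to it later.
docker build -t my-app .

# Start a container: a running process created from that image.
docker run -d --name my-app-container my-app

# List running containers, then inspect this container's logs.
docker ps
docker logs my-app-container
```

Notice how each command maps to a term above: the Dockerfile plus a base image produce an image, and running an image produces a container.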

Why Use Docker?

1. Consistent & Isolated Environment

The very first advantage of Docker is that it provides you with a consistent and isolated environment. It isolates and segregates your apps and resources in such a way that each container can access all the resources it needs in an isolated manner, that is, without disturbing or depending on another container.

2. Rapid Application Deployment

Docker indeed speeds up the application deployment process to a great extent. It efficiently organizes the entire development lifecycle by providing a standardized working environment to developers. Docker creates a container for every individual process, and Docker apps do not boot into a full OS, which saves a lot of time.

3. Ensures Scalability & Flexibility

Docker gives you a great deal of scalability and flexibility. Thanks to the consistent environment, Docker images can easily be deployed across multiple servers. For instance, if you need to do an upgrade during a release of the application, you can conveniently make the changes in Docker containers, test them, and roll out new containers.

4. Better Portability

Another great advantage of Docker is portability! Applications created with Docker containers are immensely portable. Docker containers can run on any platform, whether it is Amazon EC2, Google Cloud Platform, VirtualBox, a Rackspace server, or anywhere else, as long as the host OS supports Docker.

How We Implemented It in PPL

We actually use Docker on both the backend and the frontend, but in this example I'll show how we implemented it on the frontend side.

First, we need a Dockerfile in the root of the project directory. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. This is our Dockerfile configuration on the frontend:

FROM node:14-alpine AS builder
# ENVIRONMENT must be supplied at build time (e.g. --build-arg ENVIRONMENT=production),
# otherwise the COPY of .env.$ENVIRONMENT below would resolve to an empty name.
ARG ENVIRONMENT
WORKDIR /app
COPY package.json .
COPY yarn.lock .
RUN yarn install --frozen-lockfile
COPY . .
COPY .env.$ENVIRONMENT .env
RUN yarn build

FROM node:14-alpine AS runner
# PORT must also be supplied at build time (e.g. --build-arg PORT=3000).
ARG PORT
WORKDIR /app
COPY --from=builder /app/.env ./.env
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
EXPOSE $PORT
CMD ["yarn", "start"]

FROM sets our base image; you can think of it like an operating system. The WORKDIR command defines the working directory of the Docker container; any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction that follows it is executed in that directory. COPY copies files from our computer into the container. EXPOSE documents the port the container listens on, so we can reach it from outside the container. And the last one, CMD, is the command used to run our app inside the container. After that, we use docker build to create an image of our app:

docker build -t <image name> .
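Because our Dockerfile reads $ENVIRONMENT and $PORT, the build usually needs build arguments as well as a tag. The concrete values below (a frontend tag, a staging environment file, port 3000) are assumptions for illustration; substitute your own:

```shell
# Tag the image and pass values for the ARGs the Dockerfile expects.
docker build \
  --build-arg ENVIRONMENT=staging \
  --build-arg PORT=3000 \
  -t frontend:latest .
```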

If the image is successfully created, we can run it with:

docker run -d <image id>

We use -d to detach our app, i.e., run it in the background. Hooray, you have now successfully created your own container and run it on your server. Don't forget to add certain configurations on your server so it can be accessed through the internet, for example exposing your VPS port to the internet.
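To actually reach the app from outside, the container port also has to be published on the host with -p. The numbers and names below are assumptions (a Next.js app listening on port 3000, an image tagged frontend:latest):

```shell
# Map host port 80 to container port 3000 so the app is reachable
# at http://<server-ip>/ once the VPS firewall allows port 80.
docker run -d -p 80:3000 --name frontend frontend:latest

# Verify the container is up and check the application logs.
docker ps --filter name=frontend
docker logs frontend
```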

Final Words

Docker helps a lot, especially in deployment: it gives us the same conditions and environment across machines. Learning Docker can be complex, but it will help you in many ways.

Thanks for reading and don’t forget to be grateful!
