Docker: Container to Save Your Life

Fathinah Asma Izzati
9 min read · Nov 4, 2021


Docker with its iconic blue whale logo

According to IBM — “Containerization involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure.”

There are some common pains when building a software project:

  • The app only works on your local machine and is complicated to deploy
  • Dependencies don’t match your teammates’ environments because of different OSes

Docker comes to the rescue by providing the following tools:

  • Develop and run the application inside an isolated environment (container) that matches your final deployment environment.
  • Put your application inside a single file (image) along with all its dependencies and necessary deployment configurations.
  • And share that image through a central server (registry) that is accessible by anyone with proper authorization.

Now, your teammates can download the image and run the application as it is without much hassle, and even deploy it, given that the Docker image comes with a production configuration. That is containerization: “Putting your applications inside a self-contained package making it portable and reproducible across various environments.” And Docker is one of the platforms that implements it.

Hello World in Docker

Let’s jump to an example before going into the more detailed parts. Below, I run the hello-world image, which creates a container from a simple program that prints the lines “Hello from Docker!…” and then exits.
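If you run it yourself, the terminal output looks roughly like this (a sketch; the exact digest and wording vary by Docker version):

```
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```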

The docker run <image_name> command creates and starts a container. What is the difference between an image and a container? In short, an image is a read-only template (a stack of filesystem layers plus configuration) from which the Docker daemon creates containers.

We’ll learn about many Docker commands in a later section. docker ps -a is the command to list all containers. Since we had just executed the hello-world image, it appeared on the first line. In the output, a container named vigilant_sutherland (a random name generated by Docker) was run with the container ID fca682374439 using the hello-world image and has Exited (0) 54 minutes ago, where the (0) exit code means no error was produced during the runtime of the container.

The container creation process is explained in the diagram with the following steps:

  1. You execute docker run hello-world command where hello-world is the name of an image.
  2. Docker client reaches out to the daemon, tells it to get the hello-world image and run a container from that.
  3. Docker daemon looks for the image within your local repository and realizes that it’s not there, hence the Unable to find image ‘hello-world:latest’ locally line gets printed on your terminal.
  4. The daemon then reaches out to the default public registry which is Docker Hub and pulls in the latest copy of the hello-world image, indicated by the latest: Pulling from library/hello-world line in your terminal.
  5. Docker daemon then creates a new container from the freshly pulled image.
  6. Finally Docker daemon runs the container created using the hello-world image outputting the wall of text on your terminal.

Docker vs Virtual Machine

Docker and virtual machines (VMs) are both software for virtualizing your hardware, and both isolate applications from the host environment. However, Docker often outperforms VMs. Why so?

  • Docker is a lot lighter because containers share the host’s kernel, so there is a much shorter chain of communication between the application and the host OS.
  • Thanks to this light footprint, many containers can be run simultaneously with Docker Compose (we’ll touch on this in the implementation section).

VMs are created and managed by hypervisors such as Oracle VM VirtualBox. The hypervisor sits between the guest OS and the host OS. “Applications running inside a virtual machine communicate with the guest operating system, which talks to the hypervisor, which in turn talks to the host operating system to allocate necessary resources from the physical infrastructure to the running application.” This is a long chain and can easily cause resource, memory, and computation overhead on the guest OS.

Meanwhile with Docker, the guest OS isn’t used and it is solely the work of the Host OS + Docker Container to be able to run any application in any kind of environment. As a result of eliminating the entire guest operating system layer, containers are much lighter and less resource-hogging than traditional virtual machines.

Docker Fundamental Components

Container

“A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings”

Container Manipulation Basics

The syntax for most Docker commands is as follows:

docker <object-type> <command> <options>
  • object-type indicates the type of Docker object you’ll be manipulating. This can be a container, image, network or volume object.
  • command indicates the task to be carried out by the daemon, e.g. the run command.
  • options can be any valid parameter that can override the default behavior of the command, e.g. the --publish option for port mapping.

Let’s delve into some examples.

  1. Running an Image and Naming the Container
docker container run --name hello-dock-container <image name>

Running a Docker image actually runs two commands at once: docker container create and docker container start. To name the container, you may add --name <container name>; otherwise, Docker will assign your container a random name.
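The equivalence can be seen by running the two underlying commands by hand (a sketch using the fhsinchy/hello-dock image that appears later in this article):

```
$ docker container create --name hello-dock-container fhsinchy/hello-dock
$ docker container start hello-dock-container
```

docker container create prepares the container without starting it; docker container start then launches it in the background and prints its name.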

2. Detached Mode and Publishing Ports

docker container run --detach --publish 8080:80 fhsinchy/hello-dock

Closing the terminal window also stops the running container. To keep a container running in the background, you can include the --detach flag.

--publish 8080:80 means that any request sent to port 8080 of your host system will be forwarded to port 80 inside the container. To access the application in your browser, visit http://127.0.0.1:8080.
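You can check the mapping from the terminal as well (assuming the detached container above is running):

```
$ curl http://127.0.0.1:8080
```

curl fetches the hello-dock front page through the published host port; it is the same page the browser shows.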

3. Listing Containers

docker container ls lists the containers that are currently running; add the --all flag to include stopped ones as well:

docker container ls --all
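Its output is a table similar to the following (a sketch with some columns omitted for width; your IDs, timestamps, and generated names will differ):

```
CONTAINER ID   IMAGE                 STATUS                     PORTS                  NAMES
9f21cb777058   fhsinchy/hello-dock   Up 5 minutes               0.0.0.0:8080->80/tcp   hello-dock-container
fca682374439   hello-world           Exited (0) 54 minutes ago                         vigilant_sutherland
```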

4. Stopping/Killing a Running Container

docker container stop hello-dock-container

docker container stop sends a SIGTERM, giving the container a chance to shut down gracefully; docker container kill sends a SIGKILL and terminates it immediately.

5. Restarting Containers

To restart an inactive (stopped) container, use this:

docker container start <container identifier>

To reboot a running container, use this:

docker container restart hello-dock-container-2

6. Removing a Container

docker container rm <container identifier>

7. Running Containers in Interactive Mode

docker container run -it node

The -i (interactive) flag keeps STDIN open and -t allocates a pseudo-terminal, dropping you straight into the Node.js REPL inside the container.

8. Executing Commands Inside a Container

I checked how “my-secret” is represented in base64 on my local machine. Then I ran the same command inside a busybox container, passing sh -c "echo -n my-secret | base64" to specify the command I want to run inside that container’s shell.
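The local half of the comparison looks like this:

```shell
# Encode the string "my-secret" as base64 (-n keeps echo from adding a newline)
echo -n my-secret | base64
# -> bXktc2VjcmV0
```

Inside the container the equivalent is docker container run --rm busybox sh -c "echo -n my-secret | base64" (the --rm flag, which removes the container after it exits, is my addition); both print the identical string, showing the container’s shell behaves just like the host’s.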

Image

“Images are multi-layered self-contained files that act as the template for creating containers. They are like a frozen, read-only copy of a container. Images can be exchanged through registries.” Containers are just images in running state. When you obtain an image from the internet and run a container using that, you essentially create another temporary writable layer on top of the previous read-only ones.

How images are stored on your disk inside the Docker app

Registry

An image registry is a centralized place where you can upload your images and can also download images created by others. Docker Hub is the default public registry for Docker. You can share any public images on Docker Hub for free. People around the world will be able to download them and use them freely. Here is the Docker Cheat Sheet to sum up the section.
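Uploading your own image to Docker Hub is a matter of tagging it with your username and pushing (a sketch; <username> is a placeholder for your Docker Hub account):

```
$ docker login
$ docker image tag hello-dock <username>/hello-dock:latest
$ docker image push <username>/hello-dock:latest
```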

Docker Cheat Sheet to Help You Start

How I Implemented Docker In My Project

A Dockerfile is created to build an image. Instead of assembling the environment by hand with a long series of commands, Docker reads the instructions in the Dockerfile and builds the image with a single docker build command.

The script commands the following:

  • The FROM instruction sets python as the base, so our application — Walkiddie — can run inside a container with python environment installed.
  • WORKDIR: Sets the working directory inside the Docker container
  • ENV: Sets up some configuration for the Python-based container
  • ENV Proxy: Since we use Fasilkom’s VM, we need to set up a proxy to enable the VMs to connect to the internet.
  • RUN: apk add is Alpine’s equivalent of apt-get on Ubuntu. It installs all the system dependencies our app needs, and pip install then installs the Python packages listed in requirements.txt into the container.
  • COPY ./entrypoint.sh . : Copy entrypoint.sh to the container. We use this script to wait until the Postgres DB is up, collect all static files, and run the Django database migrations.
  • COPY . .: Copy the remaining project files to the container.
  • ENTRYPOINT: Execute the entrypoint.sh script.
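The original screenshot of the file is not reproduced here, but based on the steps above the Dockerfile is shaped roughly like this (a sketch; the exact Python version, proxy address, and package list are assumptions):

```Dockerfile
# Base image with a Python environment (Alpine variant, hence apk)
FROM python:3.9-alpine

# Working directory inside the container
WORKDIR /usr/src/app

# Python configuration
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Proxy so the campus VMs can reach the internet (address is a placeholder)
ENV http_proxy http://proxy.example:8080
ENV https_proxy http://proxy.example:8080

# System dependencies, then the Python requirements
RUN apk add --no-cache postgresql-dev gcc musl-dev
COPY requirements.txt .
RUN pip install -r requirements.txt

# Startup script: waits for Postgres, collects static files, runs migrations
COPY ./entrypoint.sh .
RUN chmod +x entrypoint.sh

# Remaining project files
COPY . .

ENTRYPOINT ["./entrypoint.sh"]
```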

According to the Docker documentation — “Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.”

A docker-compose.yml file is created to run multiple services at once. Our team has 3 different services: Nginx, Backend, and Database. Instead of writing many commands for each service, we use Docker Compose to manage them all together.

  • Every docker-compose.yaml file starts by defining the file version. You can look up the latest version in the Compose file reference.
  • The services block holds the definitions for each of the services or containers in the application. nginx ,backend and db are the three services that comprise this project.
  • The nginx block defines a new service in the application and holds the necessary information to start the container. Every service requires either a pre-built image or a Dockerfile to run a container. We’re using the official Nginx image, setting the container to always restart, setting the app and Docker ports, setting the volumes, and setting which containers it depends on in order to run.
  • For the db service we're using the official PostgreSQL image.
  • Unlike the db and nginx services, a pre-built image for the backend service doesn’t exist. Hence, we set the current directory as the build context so Compose builds the image from our Dockerfile and runs the project files inside the container.
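Putting the pieces together, the compose file looks roughly like this (a sketch; the service names follow the article, but ports, volume names, and environment values are assumptions):

```yaml
version: "3.8"

services:
  nginx:
    image: nginx:latest            # official pre-built image
    restart: always
    ports:
      - "80:80"                    # host port -> container port
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - static_volume:/app/static
    depends_on:
      - backend

  backend:
    build: .                       # no pre-built image: build from the Dockerfile here
    volumes:
      - static_volume:/app/static
    depends_on:
      - db

  db:
    image: postgres:13             # official PostgreSQL image
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: walkiddie
      POSTGRES_USER: walkiddie
      POSTGRES_PASSWORD: secret    # placeholder

volumes:
  static_volume:
  postgres_data:
```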

Lastly, run this command and your app is good to go:

# To build and run docker-compose.yml
docker-compose up -d --build
# To shut down the containers (-v also removes the volumes)
docker-compose down -v

My Thoughts

Docker has allowed our team to collaborate remotely across laptops with different OSes (some of us use Mac, Linux, and Windows) without much hassle, compared to Node Package Manager, where we experienced dependency conflicts and incompatibilities with certain versions. With Docker, our team develops the app on our own computers and brings it to production in a pretty straightforward manner.
