Docker Simplified

 Apr 17, 2017 · DevOps, Docker, NodeJS


In this article I will try to simplify the concept of Docker as much as I can: identify the benefits of Docker and why it is important, clarify the differences between Docker containers and virtual machines, and explain what exactly happens in each step of creating images and containers. I hope you will enjoy it.

 

Introduction

 

Docker has gained enough of a reputation in the development world to motivate everybody to wonder what it is and how they can use it.

I still remember when I started using Docker: I was a little bit confused, especially about the difference between it and the concept of virtualization or virtual machines.

To understand the concept, let’s have a look at how we as developers usually set up our web development environment. Imagine we are going to build a website in NodeJS using the Express framework, so I will run the following after making sure that I have installed nodejs, npm, express, and express-generator:

Installing NodeJS differs according to the OS: if we have a Mac we can use brew, if we have Ubuntu we should use apt-get, if we have a Red Hat based Linux we would use the “yum” package manager, and finally on Windows a packaged installer. It is painful, especially if you have several developers, each with a different OS or even a preferred flavor of OS, to set up a development environment for each one. Let’s have a look:

For Mac lovers, we should execute the following to get it installed (check this article for more details):

 

brew update 
brew install node

For CentOS:

sudo yum update
sudo yum install nodejs npm --enablerepo=epel

For Ubuntu:

sudo apt-get update 
sudo apt-get install -y nodejs

For more info about NodeJS installation on different kinds of operating systems, please visit this page.

Now let’s create a simple hello world NodeJS website on our machine and assume that this is our system under development. Using the following commands we should have a website up and running on port 3000; go and browse http://localhost:3000 now:

 

npm install express express-generator -g
express mySite --view=ejs
cd mySite
npm install
npm start

 

However, while the installation and configuration steps look somewhat similar, it is still painful when you are developing a big system with a lot of configuration, or when problems later arise on Windows that never show up on Ubuntu, which forces you to debug issues or maintain a different configuration for Windows than for Ubuntu. So we need consistency not only in the development environment but also in staging and production, to avoid the burden of fixing configurations or issues that might arise because of different operating systems. That opens the gate to introduce and welcome Docker, where we can build and ship containers to different environments.

 

How many times have you faced problems in the staging environment which didn’t happen in your development or test environment, and maybe taken days to figure them out? How many times have you faced failures or crashes during a live deployment to production after a new release, due to missing configuration or even missing service packs of a specific software or service that your system depends on? All of those issues can be solved using Docker, as it keeps all of your environments consistent; think about the concept of shipping your environment as a container, containing your application, to the staging or even the production environment. This will be clarified in detail as we go deeper through the article.

 

You should keep in mind that Docker doesn’t create virtual machines, nor does it have any virtualization layer like a hypervisor. It is a collection of filesystem layers sorted in a stack; we will simplify this concept as we go through this article.
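A quick way to convince yourself that there is no hypervisor involved (a small sketch, assuming you are on a Linux host with Docker installed and the “ubuntu” image available): a container runs directly on the host’s kernel, so printing the kernel version inside a container gives the same result as on the host.

uname -r                         # kernel version on the host
docker run --rm ubuntu uname -r  # the same kernel version, printed from inside a container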

 

Docker Components:

 

You should know that Docker has the following main parts to understand:

  • Docker Hub (registry)
  • Image
  • Container
  • Daemon (engine)
  • Client

 

 

Docker Hub (Registry):

 

Think of it as exactly like GitHub repositories, where we can save all of our “images”. Each time you want to build a container, you pull the image to your local system, save it on your hard drive, and then create an instance of that image called a “container”. Don’t worry, all of these concepts will become clearer as we go through.
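As a hedged sketch of the registry workflow (the account name “myuser” and the image name “myimage” are just examples, not from this article), pulling an image and publishing your own looks like this:

docker pull ubuntu                 # download the image layers from Docker Hub
docker tag ubuntu myuser/myimage   # give the image a name under your own Docker Hub account
docker login                       # authenticate against the registry
docker push myuser/myimage         # upload it so teammates can pull it later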

Docker Images and Containers (Layers concept)

 

Each Docker image contains a list of filesystem layers that are READ ONLY; each layer only represents the filesystem differences from the parent image it was built from. Those layers are stacked on top of each other.
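If you want to see those read-only layers for an image you already have locally, docker history lists them (the exact output depends on how the image was built):

docker pull ubuntu
docker history ubuntu   # lists the read-only layers that make up the image, newest on top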

When you download an image from Docker Hub, you download all of the layers that build up or represent that image onto your hard drive. You cannot modify any files of this image; however, when you run a new instance of that image (I mean a container), you create a new layer which is writable, on top of the read-only layers of that image. This writable layer is called the container layer, and it actually holds not only new file(s) but also any changes you make to the files stored in the read-only layers (the image layers).

 

In other words when you create a new container from an image, you add a new layer (WRITABLE) on top of the underlying image layers. This layer is often called the container layer. All changes made to the running container – such as writing new files, modifying existing files, and deleting files – are written to this thin writable container layer.

If you’re a developer, you can simply think of images and containers as a class and an object from an object-oriented perspective, where we instantiate a container from an image each time we run a new container. Each container (object) has its own data state or fields.

 

The major difference between an image and a container is the top writable layer that allows the container to perform all types of writes on top of the image. Each container has its own writable container layer, which means multiple containers can share access to the same underlying image layer(s) and still have their own separate, isolated data state. The diagram below shows multiple containers sharing the same underlying layer (ubuntu).
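Here is a minimal sketch of that isolation (the container names app-a and app-b are just examples): both containers are created from the same “ubuntu” image, but a file written in one lives only in that container’s writable layer and is invisible to the other.

docker run --name app-a ubuntu /bin/bash -c "echo hello > /root/a.txt"
docker run --name app-b ubuntu ls /root   # a.txt is not listed here
docker diff app-a                         # shows /root/a.txt added in app-a's writable layer only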

 

 

Let’s describe it in a different way, by example:

I will add a figure after each step to make it clearer for you:

 

Let’s imagine we pulled an image named “ubuntu”; we will now have on our hard drive the read-only layer of ubuntu, as below.

docker pull ubuntu

Now if we run a new container, notice the new container layer added on top of the ubuntu layer:

docker run -ti ubuntu  /bin/bash

 

 

 

Any changes you make to files from the ubuntu layer do not change the original files in that layer; instead, all of these changes are recorded in the top layer (which is writable), possibly keeping links to the original files in the read-only layer (similar to the concept of symbolic links in Linux or file shortcuts in Windows). This is the copy-on-write strategy.

Let’s install nodejs (all changes will be recorded or persisted in the top layer)

 

apt-get update -y
apt-get install -y nodejs npm
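You can actually see what has been recorded in that writable layer. From the host, docker diff lists every path that was added (A) or changed (C) compared to the read-only image layers underneath (replace containerId with the ID shown by docker ps):

docker ps                  # find the running container's ID
docker diff containerId    # every added (A) or changed (C) path lives only in the writable layer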

 

 

In coding, when we are done with a stable release we usually commit our code to the repository; it is the concept of a baseline. The same concept applies here: we can actually convert our current container into an image (like a stable release) which can later be used or pulled by teammates, or maybe even publicly. When doing so, our container layer (writable), which sits on top of the current image layers (read-only), is saved into a new image, merging all of the layers as read-only ones.

docker commit containerId imageName

 

In other words, it will build a new image exactly like the “ubuntu” image, but now it contains 2 image layers (both read only).
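As a hedged sketch of the full round trip (I am naming the committed image “nodejs” only because that is the name used later in this article):

docker ps                         # find the ID of the container where we installed nodejs
docker commit containerId nodejs  # freeze its writable layer into a new read-only image layer
docker history nodejs             # the new image now shows that extra layer on top of ubuntu's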

 

Now, when we run a new container from that new image, we are again creating a new writable container layer on top of both layers:

 

 

Let’s suppose we have a simple nodejs application inside the current working directory:

docker run -ti -p 80:80 -v $(pwd):/var/www -w /var/www nodejs npm start

 

What does this command actually do?

  • It creates a new container layer on top of the image layers
  • It exposes port 80 and maps it to our host’s port 80:   -p 80:80
  • It injects or maps the current working directory into the container at the path “/var/www”; in other words, our code will be available inside the container under /var/www, the same concept as mounting in VirtualBox or virtual machines:   -v $(pwd):/var/www
  • It changes the working directory inside the container to “/var/www”:   -w /var/www
  • It runs a container of the image “imageName”, which is “nodejs” in our example.
  • After the container starts, it invokes npm start

If we run the same command again, we get a new, different container layer on top of the same image layers, the same concept that we discussed before: sharing the same base image layer(s), but each container having its own isolated container layer. So you might have 2 applications running on the same host, yet both of them are isolated; any vulnerability in app1 won’t affect app2 in any way, and vice versa. Notice also that app1 has a link to db1 and app2 has a link to its own database db2; in real life these shouldn’t be on the same machine, but this is only to clarify the concept of linking and isolation of containers (a sketch of running the two apps follows below).
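As a hedged sketch (the names app1 and app2, the host ports, and the host paths are just examples, and I am assuming each app listens on port 3000 inside its container), two isolated containers could be started from the same “nodejs” image on one host like this:

docker run -d --name app1 -p 8080:3000 -v $(pwd)/app1:/var/www -w /var/www nodejs npm start
docker run -d --name app2 -p 8081:3000 -v $(pwd)/app2:/var/www -w /var/www nodejs npm start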

 

 

But how do we connect them together? Simply by linking them. Each container has its own unique name, and by default all containers are isolated from each other and cannot communicate unless you explicitly allow it. We can use the --link option to link a container to another container by name. Below is an example with 2 mysql containers: the first one creates a new mysql container (as a server) serving as the database engine, with its own name specified by the --name option.

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root --name mydatabase mysql

 

While the second container will be used only as a client to test and connect to our database:

 

docker run -ti --rm --link mydatabase:mydatabase mysql /bin/bash
root@<container-id>:/# mysql -h mydatabase -uroot -proot -P 3306
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 53
Server version: 5.7.18 MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| shopify            |
| sys                |
+--------------------+
5 rows in set (0.01 sec)
mysql>

 

Docker Daemon and Client:

 

Simply put, the Docker daemon is responsible for building images, pulling or pushing images, running new containers, stopping containers, and assigning CPU cores and memory to containers. Your client, which is actually the docker commands you’re writing, communicates with the engine through APIs to get the job done for you.
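You can see this client/daemon split for yourself; each of the following commands reports on both sides separately:

docker version   # prints a Client section and a Server (engine/daemon) section
docker info      # prints daemon-side details such as the number of containers and images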

 

 

Simple Practice:

 

One final thing to do as practice: let’s redo the scenario I introduced in the Introduction section, but with Docker, using the same directory we created before for our simple Express site. I will assume that we no longer even have nodejs or npm on our host machine; everything is included and packaged inside our Docker container (nodejs, npm, and express).

 

docker pull node
docker run -t -d -p 80:3000 -v $(pwd):/var/www -w "/var/www" node npm start

 

Now browse http://localhost, and you should see your site working.

 

In the first line, we pulled an official image of NodeJS named “node”. The second line is responsible for running a new container with nodejs and mounting the current working directory as a volume inside the container, which allows us to run “npm start” easily inside the container. npm start should be executed after changing the working directory to “/var/www”, which is why we used the “-w” option, which sets the working directory. Below is a short description of each option:

-d: detached mode, for running the container in the background

-w: for specifying the working directory inside the container when it starts.

-v: for mapping (mounting) a directory from the host machine into the container

-p: for mapping a port from the host machine to the container; in other words, it is like port forwarding in virtual machines. In our example we specified port 80 on the host to be mapped to port 3000 inside the container (that makes http://localhost:80 point to port 3000 inside the container).
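Since the container was started in detached mode, you can check on it and shut it down from the host with the usual commands (replace containerId with the ID shown by docker ps):

docker ps                 # list running containers and their port mappings
docker logs containerId   # see the npm start output from inside the container
docker stop containerId   # stop the container
docker rm containerId     # remove it once you no longer need it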

 

 

Summary:

 

An image is the base from which you create instances, which are named containers. Images can be downloaded locally from Docker Hub, which can be considered the same concept as a remote repository, exactly like GitHub but for the Docker world. The Docker engine is the underlying controlling layer that manages images and containers, while the client is what we use to send requests to the engine to manage those images and containers.

