Most Common Docker Commands

Apr 20, 2017 | DevOps, Docker, Linux



In this article I try to list the most commonly used Docker commands, with examples, that you will need on a daily basis. I will cover search, images, containers, and Docker machines. This article assumes you have at least a basic understanding of Docker and how it works; if not, please read Docker Simplified first.

 

 

Docker Machines:

 

Simply described, a Docker machine is Docker installed in a virtual machine; in other words, Docker Machine is a tool that lets you install the Docker engine on a virtual machine or host. In practice this is helpful for separating our work: we might have one machine for web development that groups all images and containers related only to the web layer (like Node.js, Express, Varnish Cache, Nginx, HAProxy, etc.), while a different machine holds everything related to persistence engines (like Redis, MySQL, and MongoDB). We might even have separate Docker machines for different environments (development, testing, staging, and production).

Hint: You can create a dockerized machine or server on Linux, Mac, Windows, or even in the cloud (e.g. AWS or DigitalOcean).
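As a small sketch, assuming the VirtualBox driver is installed locally, you choose where the machine gets created with the --driver option (cloud providers such as AWS and DigitalOcean have their own drivers that take additional credential flags):

docker-machine create --driver virtualbox web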

 

Difference between docker engine and docker machine:

Docker Engine is Docker itself: a client-server application made up of the Docker daemon, a REST API that serves as the interface to the daemon, and the CLI commands that call that API.

A Docker machine is a virtual host with the Docker engine already installed inside it. These virtual machines can be local (on the same physical machine) or remote (when used to provision dockerized hosts in the cloud).
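You can see this client-server split directly from the CLI: docker version prints both the client and the daemon (server) versions, and docker info reports details about the running daemon.

docker version
docker info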

List all docker machines:

docker-machine ls

Create a new docker machine:

docker-machine create machineName
e.g.
docker-machine create web

Start docker machine:

docker-machine start machineName

Stop docker machine:

docker-machine stop machineName

Point or connect the Docker client to a machine named "web":

docker-machine env web
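Note that this command only prints the environment variables to set; to actually point your shell's Docker client at the "web" machine, evaluate its output (the command prints this hint itself):

eval "$(docker-machine env web)"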

List the IP address of a specific machine by name:

docker-machine ip machineName

Display the status of a specific docker machine by name:

docker-machine status machineName

Images

Use the following command if you need to search for an image by name in the registry; you could use hub.docker.com or Kitematic instead if you prefer a GUI.

docker search imageName
e.g.
docker search hadoop

To pull or download an image locally from Docker Hub for later use or running, use docker pull like the following:

docker pull imageName
e.g.
docker pull node

List all Docker images stored locally on your machine; an image is like a blueprint or template from which we can run multiple containers or instances:

docker image ls  
OR
docker images  

List all running containers:

docker container ls 
OR
docker ps

HINTS:

  • If you want to list all containers, including the ones with status "Exited", you can use the option "-a", which stands for "all".
  • If you want to list the latest created container (in any state, up and running or exited), you can use "-l", which stands for "latest".
  • If you want to list the n last created containers, you can use the option "-n" or "--last", for example docker ps --last 3 (these options are combined in the examples after this list).
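Putting these together (and adding "-q", which prints only the container IDs and will come in handy later for cleanup commands):

docker ps -a         # all containers, including exited ones
docker ps -l         # the most recently created container
docker ps --last 3   # the three most recently created containers
docker ps -aq        # only the IDs of all containers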

Containers:

Run a new container from a specific image. Use -i for interactive mode and -t to attach a terminal (TTY) to the container, which gives you features like tab completion and proper formatting. Please notice that /bin/bash is the program that is going to run inside the container once it starts.

docker run -t -i imageName /programName
e.g.
docker run -i -t ubuntu /bin/bash

Notice that if you run the following command, which displays the currently active processes, there are only two of them. This may look strange to newcomers: the container runs only its main process, "/bin/bash", plus the "ps" command itself. Another very important thing to notice is that if you exit the container's main process, the container exits automatically.

ps -ef     -> inside the vm (or the container itself)
exit       -> to exit from interactive mode (from container to the host machine), exiting the main process will exit the container itself

If you want to exit without having the container to exit, press Ctrl+P then Ctrl+Q

Also notice that by default, if you set no memory limit, the processes in the container can use as much memory and swap as they need. To specify a memory limit for the container use the "-m" option, and to control how much CPU the container gets relative to other containers use the "-c" (--cpu-shares) option, where 0 means the default weight. The following example leaves the CPU shares at their default and allocates only 300 megabytes of memory to the container.

docker run -ti -m 300M -c 0 ubuntu /bin/bash
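If your Docker version is 1.13 or newer and you want to cap the number of CPUs rather than adjust the relative weight, there is also a --cpus option; a small sketch:

docker run -ti -m 300M --cpus 1.5 ubuntu /bin/bash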

Look at the following example, which might be obvious but is not very practical: the sleep command suspends execution for a specific amount of time (here "infinity"), and -d daemonizes the container so it runs in the background.

docker run -d imageName sleep infinity

Practically, we need to name our containers according to internal naming standards and conventions, so we can use the --name option:

docker run --name=containerName imageName /command

If we need the container to share the host's network exactly, we can use the option --net=host, while --publish 80:80 or -p 80:80 keeps Docker on its own network but exposes only port 80 to the host machine.

Run a new instance or container from "imageName", using -p for port mappings between host and guest:

docker run -d -p 9060:9060 imageName

To attach to the container or to open its process that was run in background:

docker attach containerId 

As described earlier, to detach yourself from a container without exiting it, simply use the shortcut Ctrl+P then Ctrl+Q.

To remove a container (you can get the container ID from "docker ps"); note that you cannot remove a running container unless you pass "-f", which stops and removes it:

docker rm containerId | containerName
e.g.
docker rm hungry_wing
docker rm c8e

Hint: "c8e" is just the first 3 characters of the container ID; any unique prefix is enough.

To delete or physically remove all containers that have exited:

docker rm `docker ps -a | grep Exited | cut -d ' ' -f 1`
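An equivalent sketch that avoids grep by using docker ps's built-in status filter together with -q:

docker rm $(docker ps -aq -f status=exited)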

 

To start your container and have it removed automatically once its main process has stopped, instead of removing it manually, use the --rm option when running the container. The benefit may not be obvious, but a simple example is a one-off process that we need to run in a dockerized way, with access limited to only its linked containers.

docker run --rm

Run a container that sleeps for 5 seconds, then exits and is removed entirely:

docker run --rm -ti ubuntu sleep 5 
docker run --rm -ti ubuntu bash -c 'sleep 3; echo done'

To list the top running processes inside a container:

docker top containerId | containerName

To remove all containers:

docker rm -f $(docker ps -a -q)

List the logs of a specific container:

docker logs containerName | containerId

To kill all processes inside a container, after which it is safe to run docker rm:

docker kill containerName

To pause container:

docker pause containerName

To stop a container; once a container is stopped, all its resources (like CPU and memory) are released, except for the space used by its filesystem layer on the hard drive.

docker stop containerID|containerName
OR
docker container stop containerID|containerName

To start or restart container:

docker start containerId | containerName
docker restart containerId | containerName

To pause or unpause a container; remember the difference from "docker stop", which releases the resources assigned to the container, whereas pausing does not release them:

docker pause containerId | containerName
docker unpause containerId | containerName

To print the configuration of a container, like mounts or volumes, port mappings, etc.:

docker inspect container
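docker inspect prints a large JSON document; if you only need one field, you can extract it with a Go template via --format. For example, a small sketch that prints a container's IP address on the default bridge network:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' containerName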

To commit or snapshot a container as an image for later use; this is the opposite of "docker run", taking a container and converting its current state into an image. Typically, after doing the base configuration of a web server, you can save it as an image for later use instead of reconfiguring it every time you spin up a new container.

docker commit containerId|containerName imageTagName 

To tag or give a new name to an image (docker tag works on images, not containers):

docker tag imageId|imageName tagName
docker tag debian:sid msolimanz/test-image:v99.9

To copy files or directories from a container to the host machine or vice versa:

docker cp container:/path /host/path 
docker cp /host/path container:/path
e.g.
docker cp $HOME/mydata/mydata.xml my_solr:/opt/solr/mydata.xml

To execute a command inside a container without needing to attach to it:

docker exec containerId|containerName touch file

Run bash in interactive mode inside a container; if the container stops, this session will end abruptly:

docker exec -ti container /bin/bash

A real example involves creating a new core for Solr; if you are familiar with it, the following commands will make better sense:

docker exec -it --user=solr my_solr bin/solr create_core -c gettingstarted
docker cp $HOME/mydata/mydata.xml my_solr:/opt/solr/mydata.xml
docker exec -it --user=solr my_solr bin/post -c gettingstarted mydata.xml

To see how a container has changed from its base image:

docker diff containerId | containerName

 

Linking Containers:

To link 2 containers so that they can talk to each other, you should use the --link option; check the real example below:

docker run -t -d --name container1 ubuntu nc -l 12345
docker run -t -i --name container2 --link container1:containerAliasNameInsideContainer2 ubuntu /bin/bash 
ping containerAliasNameInsideContainer2
nc containerAliasNameInsideContainer2 12345
ENTER: hello from container2
exit

--volumes-from can be used to get shared volumes from a specific container.

Now run the following to see what we sent from container2 to container1; you should see "hello from container2":

docker logs container1

Volumes:

A "data volume" is a special type of directory in a container that can be shared and reused among containers. Updates to the image do not affect data volumes; in other words, data volumes persist even if the container is deleted or removed entirely.

Mounting volumes is used in real life to hook your code into containers. Simply create volumes using "-v", specifying the first part as the path on the host machine and the second as the directory path inside the container. The following example mounts the host file or directory "/path/in/host" onto the file or directory "/path/in/container" inside the container:

docker run -v /path/in/host:/path/in/container -t -i ubuntu /bin/bash

The following example maps the current working directory of the host machine to "/var/www" inside the container, changes the working directory inside the container to "/var/www" using the "-w" option, and finally runs "npm start" to start the Node.js application:

docker run -p 8080:3000 -v $(pwd):/var/www -w "/var/www" node npm start

Please notice these options and their usage: "-w" changes the working directory inside the container, "-p" sets up port forwarding or mapping between the host and the container (in other words, it exposes ports to the host machine), and "-v" creates a volume, mounting directories from the host machine inside the container.
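As a small related sketch, you can append :ro to a volume mapping to mount it read-only inside the container, which is useful when the container should never modify the host data:

docker run -v /path/in/host:/path/in/container:ro -t -i ubuntu /bin/bash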

 

docker run --name my_solr -d -p 8983:8983 -t -v $HOME/mydata:/opt/solr/mydata solr

For databases, this approach lets us use one Docker image and connect it to either the test or the live database.

On the host machine we keep the database files themselves: live has its own database file and test has its own database file. We will try to simplify the concept using an SQLite database:

Install the SQLite database engine and create a directory for the database files:

apt-get install sqlite
mkdir -p /var/dbs

Now we will create 2 database files to emulate 2 different environments (one for test and the other for live).

Test database:

sqlite /var/dbs/test
create table mydata(name text);
insert into mydata values('test');

Live database:

sqlite /var/dbs/live
create table mydata(name text);
insert into mydata values('live');

 

Now we are going to use Docker to mount the test database:

docker run -v /var/dbs/test:/var/db -t -i sqlite /bin/bash
sqlite /var/db
select * from mydata;
=> should display "test"

 

Using Docker to mount the live database:

docker run -v /var/dbs/live:/var/db -t -i sqlite /bin/bash
sqlite /var/db
select * from mydata;
=> should display "live"

Hint: --volumes-from containerName|containerId can be specified to add all volume shares from a specific container to the new container.

Adding a volume to a running container is not possible; however, there is a workaround: commit the container to an image, then run that image again with the new volumes you need to mount or map:

docker commit containerId newImageName
docker run -ti -v /path/in/host:/path/in/container newImageName /bin/bash

 

Docker Hub

This is where Docker images are stored; it works somewhat like a repository for Docker images, with pulling and pushing similar to Git repositories.

Upload images to docker hub:

Logging in to Docker Hub is the first step before you can push images to it or remove them from it:

docker login

To push the changes you have made to an image to Docker Hub, so it can be pulled again later with docker pull (useful if you want to treat your infrastructure as code):

docker push image
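The image has to be tagged with your Docker Hub username before the push will be accepted; a minimal sketch (the repository name myuser/myimage is hypothetical):

docker tag myimage myuser/myimage:1.0
docker push myuser/myimage:1.0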

To remove an image from Docker Hub there is no Docker CLI command; you delete the repository (or individual tags) from the Docker Hub web interface instead.

Dockerfile:

A Dockerfile lets you save the commands you want Docker to execute. Instead of running commands inline, we save them in a file for later use; from such a file we build an image, and from that image we can spin up new containers or instances. An example is given below:

#image to start building on
FROM ubuntu:14.04
#identifies the maintainer of the package
MAINTAINER [email protected]
#Run the command in the container
RUN echo "hello world" > /etc/hello.txt
#identifies the command that should be run when running the image as a container (like provisioning in vagrant)
CMD ["cat", "/etc/hello.txt"]

 

To build an image from the Dockerfile, we run docker build and can then start new containers or instances from that image; -t simply tags it with a simple name to be used as the image name.

After making any changes to the Dockerfile, you can build the same file again; Docker smartly detects only the changed instructions and runs them, while all unmodified instructions are served from the cache. You should understand that Docker uses filesystem layers for this.
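Because each instruction produces a cached layer, the order of instructions matters. Here is a hedged sketch, assuming a Node.js application (the file names are illustrative): copying the dependency manifest and installing dependencies before copying the rest of the code means a source-only change does not invalidate the npm install layer.

#image to start building on
FROM node:6
WORKDIR /var/www
#copy only the dependency manifest first so this layer stays cached
COPY package.json .
RUN npm install
#copy the rest of the source; only this layer and later ones rebuild when the code changes
COPY . .
CMD ["npm", "start"]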

From the newly built image, you can now easily run or create a new container, exactly as shown below:

docker build -t sample .
docker run sample

Hint: as described earlier, Docker serves unmodified instructions from the cache when building filesystem layers; to override this behavior and build the image without using any caching at all, use the --no-cache option:

docker build --no-cache -t sample .

 

Docker Orchestration Using docker-compose

Returning to the same SQLite example I described before, but this time with docker-compose:

 

Install python-pip, use it to install docker-compose, and create a new file named "docker-compose.yml":

apt-get install python-pip
pip install docker-compose
touch docker-compose.yml && vi docker-compose.yml

Edit the file and add the following content:

one:
  image: sqlite
  command: nc -l 12345
two:
  image: sqlite
  command: /bin/bash -c "sleep 3 && socat FILE:/etc/issue TCP:one:12345"
  links:
    - one:one
  ports:
    - "123:123"
  volumes:
    - /var/dbs/live:/opt/live

Run the following to orchestrate:

docker-compose up
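A few related docker-compose commands that are handy day to day (a sketch; run them from the directory containing docker-compose.yml):

docker-compose up -d    # start everything in the background
docker-compose logs     # see the combined logs of all services
docker-compose down     # stop and remove the containers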

 

Other well-known orchestration tools are listed below:

  1. Kubernetes
  2. Flynn
  3. Fleet
  4. Mesos

 

I hope this helps everyone; don't forget to leave your thoughts and/or comments.

