Notes on Docker: Up & Running by Sean P. Kane and Karl Matthias.

The most obvious impact of Docker, and the ease of use it brings to Linux containers, is the possibility of redefining the organizational boundaries between business, application development, and IT infrastructure teams. It allows “proper” ownership of the technology stack and processes, reducing handovers and the costly change coordination that comes with them.

Containers represent a way forward for the application development world, but it’s critical that we do not lose sight of the old as we bring in the new.

A few challenges in the container world: (i) Management, (ii) Security, and (iii) Certification.


Shipping software today is hard: you have to take care of new releases and bug fixes, and on top of that, developer on-boarding is its own process of setting up a machine to get the codebase working. Developers have to understand a lot of complexity about the environment they will be shipping the product into, and the ops team has to understand the internals of the software they ship, to make sure the version of a specific package used in the code works well in the environment the code gets deployed to.

Imagine the dev team has to upgrade a specific package version in order to benefit the runtime. This is a minor release that brings no advantage to the business, yet the deployment team has to work closely with the developers to understand the setup and bring it together. If the developer could just make the package upgrade internally, test the changes, and ship them, delivery time would be shortened. If the operations team could just upgrade the software on the host system without having to coordinate with multiple application developers, they could move faster.

Docker builds a layer of isolation into software that reduces the burden of communication in the world of humans.

Docker at a glance

This is how traditional deployment happens without Docker: [Figure: traditional deployment workflow]

It is not a productive process, since it takes a long time to get anything deployed to the production environment. Docker allows the responsibility of building the application image to be separated from the deployment and operation of the container. What this means is that the developer team can build the application with all its dependencies, test it, and ship the bundle to the ops team. From the outside, every image looks similar: the ops team just has to install the bundle, deploy it, and run it. This saves a huge amount of time and improves the developer workflow. The process now looks like this: [Figure: Docker deployment workflow]


Docker consists of: (i) Docker server - does the ongoing work of running and managing your containers, (ii) Docker client - tells the server what to do, (iii) Docker registry - stores docker images and the metadata about those images.
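The client/server split can be seen from two commands; both assume a docker daemon is reachable from the client:

```shell
docker version   # prints separate Client and Server version blocks
docker info      # asks the server for its state: containers, images, storage driver
```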


Docker client The docker command, used to control most of the docker workflow and talk to remote docker servers.

Docker server The docker command run in daemon mode. This turns a Linux system into a docker server that can have containers deployed, launched, and torn down via a remote client.

Docker images A docker image consists of one or more filesystem layers and some important metadata that together represent all the files required to run a dockerized application. A single docker image can be copied to multiple hosts.

Docker container A docker container is a linux container that has been instantiated from a docker image. A specific container can exist only once; however, you can create multiple containers from the same image.

Atomic host A small, finely tuned operating system image, like CoreOS or Project Atomic, that supports container hosting and atomic OS upgrades.

Working with images

Every docker container is based on an image. To launch a container, you either have to download a public image or create an image of your own. Every docker image consists of one or more filesystem layers that generally have a direct one-to-one mapping to the individual build steps used to create that image.
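This layer-per-build-step mapping can be inspected with docker history (the image name here is just an example):

```shell
docker history node:0.10
# Each row is one layer, created by one instruction in the image's Dockerfile.
```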

Anatomy of a Dockerfile

A typical Dockerfile that builds a container image for a Node.js application looks like this (the MAINTAINER contact details are illustrative):

# Pull the node image with tag 0.10 from the docker registry
FROM node:0.10

# Contact information of the Dockerfile's author; this becomes the Author field of the resulting image
MAINTAINER Anna Doe <anna@example.com>

# Optional metadata via key-value pairs that can later be used to search for and identify docker images and containers
LABEL "rating"="Five Stars" "Class"="First Class"

# Run all of the following instructions as the root user
USER root

# Shell variables that can be used during the build process to simplify the Dockerfile
ENV AP /data/app
ENV SCPATH /etc/supervisor/conf.d

# Install the required software dependencies
RUN apt-get -y update

# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor

# Copy the supervisor configuration from the local filesystem into the image
ADD ./supervisord/conf.d/* $SCPATH/

# Copy the application code
ADD *.js $AP/

# Change the working directory in the image before continuing the build
WORKDIR $AP

RUN npm install

# Launch the process that you want to run within the container
CMD ["supervisord", "-n"]

Filename and its contents:

  • Dockerfile - contains the list of instructions required to build the container image.
  • .dockerignore - lets you define files and directories that should not be included in the image.
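A minimal .dockerignore for a Node.js project might look like this (the entries are illustrative):

```
.git
node_modules
*.log
```

This keeps the build context small, which speeds up docker build.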

Building an image

Prereq: the docker server needs to be running and the client set up properly in order to build the image.

docker build -t sibisakthivel/node-test:latest . This follows the steps in the Dockerfile and builds the image with the tag given above.

Docker caches build steps and reuses layers from the cache when nothing has changed, to speed up the build. To overrule that, pass the --no-cache flag at build time: docker build --no-cache -t sibisakthivel/node-test:latest .

Running your image

docker run -d -p 8080:8080 sibisakthivel/node-test:latest This tells docker to create a running container in the background from the image, and to map port 8080 in the container to port 8080 on the docker host.

To verify that the container is running in the background, run

docker ps

you can simply run,

echo $DOCKER_HOST which will print the IP of the docker host; opening that IP on port 8080 in a browser lets you see the result.

Environment variable

If your application needs or expects any environment variables in order to run, docker allows you to provide them at run time using the -e flag.

first stop your running container by fetching its container id,

docker ps

this will return the container id in the output. then,

docker stop $containerid

now you can restart the container, passing the extra environment variable as an argument,

docker run -d -p 8080:8080 -e NAME="sibis" sibisakthivel/node-test:latest (note that -e must come before the image name; anything after the image name is passed as an argument to the container's command)

Custom base images

Base images are the lowest-level images that other docker images are built upon. Most often these images are based on minimal installs of Linux distributions like Ubuntu, Fedora, and CentOS, but they can be much smaller, containing a single statically compiled binary.

Storing the images

Once you build an image, you will want to store it somewhere so the docker hosts can access it in the future. You don't normally build images on the servers that run them; instead, you store the images somewhere and pull them onto the servers whenever you need them.

There are two main options for storing docker images: (i) Public registries, (ii) Private registries.

Public Registries Docker Hub provides options to store public images as well as private ones. Companies that depend heavily on public images have to pull the image layers over the internet, which can cause latency issues. It is important to use good image design, with thin layers that are easy to move around the internet.

Mirroring a registry

It is possible to set up a local registry in your network that mirrors images from the public registry, so that we don't need to pull images over the internet for every use. This is very useful for local development, where it can act as a cache for frequently used images.

  • Configuring the docker daemon

run the command,

docker -d --registry-mirror=http://${YOUR_REGISTRY_MIRROR_HOST}

Working with docker container

To create a docker container,

docker create --name "awesome-sibi" hello-world:latest When a name is not provided, docker randomly assigns one by combining an adjective with the name of a famous person. No two containers can have the same name; you need to remove the existing one if you want to create a container with the same name.

docker inspect,

docker inspect $containerid prints the detailed configuration and metadata associated with that container as JSON

Hostname Pass a hostname as an argument to docker run,

docker run --rm -ti --hostname="" ubuntu:latest /bin/bash

--rm - delete the container when it exits
-ti - allocate a TTY and keep STDIN open, telling docker this is an interactive session
/bin/bash - the executable that we want to run within the container

Starting a docker container For example, starting a redis container,

docker create -p 6379:6379 redis:latest this pulls the redis image from the internet and creates a container, printing the new hash created for it.

To list all the containers available,

docker ps -a

then start the container using,

docker start $containerid

To verify if its running,

docker ps

Auto-restarting a container

Production deployments need docker to restart a container if it fails. The --restart flag generally takes three kinds of values: (i) no - never restart on failure, (ii) always - always restart, (iii) on-failure:3 - restart up to 3 times before giving up.
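For example, running the earlier image with an on-failure policy (the image name is the one used above in these notes):

```shell
docker run -d --restart=on-failure:3 -p 8080:8080 sibisakthivel/node-test:latest
```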

Stopping a container

docker stop $containerid

Killing a container

A container sometimes misbehaves and will not stop; that is where the kill command is helpful,

docker kill $containerid

Pausing and unpausing the container

While performing operations like taking a snapshot or committing an image, we need to pause the container.

docker pause $containerid
docker unpause $containerid

Cleaning up containers and images

To list all the containers,

docker ps -a

To list all the images,

docker images

To delete the image and all associated file system,

docker rmi $imageid

PS: you cannot delete an image that is being used by a container; the attempt will throw an error. The container needs to be deleted first to remove the image.

To delete all the containers on your docker host,

docker rm $(docker ps -a -q)

To remove all the containers that are exited,

docker rm $(docker ps -a -q --filter 'status=exited')

To delete all the images on your docker host,

docker rmi $(docker images -q)

Downloading image updates

docker pull ubuntu:latest This fetches the latest ubuntu image; even if you downloaded it before, it will fetch any updates published since then.

Docker exec

Let's see how to get inside a container that is running in the background,

docker exec -t -i $containerid /bin/bash


Logging is a critical part of a production system. Docker provides out-of-the-box logging functionality: everything a container writes to stdout or stderr is sent to the docker daemon, which by default stores it in a JSON file per container.

docker logs $containerid will print the current logs of the container in the terminal (add -f to keep streaming them).

The actual JSON files for the container live on the docker server under /var/lib/docker/containers/$containerid/

Monitoring the docker

A docker container can be monitored by,

docker stats $containerid

docker also exposes these data in JSON format from its remote API under /stats.

curl -s http://localhost:2375/v1/containers/$containerid/stats

Docker events

The docker daemon internally generates an event stream around the container life cycle. You can tap into this stream to see what is happening to your containers. This is what the docker CLI uses; behind the scenes it is a long-lived HTTP call to the docker API that returns JSON blobs as events occur. For debugging purposes, it lets you see when a container died, even if it was later restarted.

docker events

The path to the production containers

The ideal production deployment of a container is simply to pull an image and run it.

Let’s see how a practical production deploy happens using docker. Suppose a company has a pool of production servers running docker daemons, with collections of applications deployed there. The workflow for a dockerized application follows:

  • A build is triggered by some outside means.
  • The build server kicks off a docker build.
  • The image is created on the local docker host and tagged with a build number or commit hash.
  • A container is configured to run the test suite based on the created image, and the results are captured.
  • If the build passes, the image is shipped to the registry.
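The steps above can be sketched as a simple build script (the registry host, image name, and test command are assumptions for illustration):

```shell
#!/bin/bash
set -e

TAG=$(git rev-parse --short HEAD)          # tag the image with the commit hash
IMAGE="registry.example.com/node-test:$TAG"

docker build -t "$IMAGE" .                 # the build server kicks off a docker build
docker run --rm "$IMAGE" npm test          # run the test suite inside a container
docker push "$IMAGE"                       # if the tests pass, ship to the registry
```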

Debugging containers

docker top $containerid lists the processes running inside the container.


Start with a simple deployment process in a dev setup, then move it to a VM to see how it benefits the team.

As teams become more confident with docker and its workflow, the realization often dawns that containers create an incredible abstraction layer between all their software components and the underlying software components. If handled correctly, organizations can move from handling the physical servers to deploy fleets of identical docker hosts that can then be used as a large pool of resources to dynamically deploy their applications to, with an ease that was never before so smooth.