Docker is one of the coolest pieces of tech to come out of the open source community in recent years, and it has quickly become one of the industry’s biggest topics for good reason. Docker took an old idea that has existed in the Linux community for years and commoditized it by adding a series of robust, easy-to-use tools as well as a number of value-added services around containers. The old idea, namely containerization, gives applications an isolated operating environment in which to run, away from all other applications on a given host. These applications have their own runtimes, libraries, and supporting software, which can vary from container to container without interfering with one another, all while sharing the same underlying kernel for lower-level system calls.
Containers themselves have a unique value proposition in that they enable developers to deploy applications in a consistent environment from development all the way through to production. The developer can provision a container image and deploy it to a number of infrastructure environments that are all different, but the developer doesn’t have to provision the environment to run the app, just the container. Docker’s ubiquity among container services means that a Docker image can be deployed on a dev box just as easily as it can be deployed in a production environment.
A Docker environment has three high-level components:
- Docker Client – The Docker Client is an executable that typically runs on a local PC and connects to a Docker Host over a network. A user interacts with the client, which in turn sends commands to a Docker Engine running on a Docker Host, which is typically a network server or a cloud-hosted environment, although it can be local as well. The Docker Client is part of Docker Toolbox, or a standalone copy can be downloaded too.
- Docker Engine – The Docker Engine runs on a host environment, which can be a local development machine, an on-premises server, or a cloud-based server. The Docker Engine receives commands from the Docker Client. It performs a number of tasks, including setting up networking on the host, building containers, running containers, managing Docker clusters, and sundry other tasks.
- Docker Hub – Docker Hub is the main repository for Docker images. The Docker Engine pulls images from Docker Hub into a local repository, either to create containers from directly or to use as base images for custom images built from Dockerfiles.
On Azure, one can set up a variety of Docker Hosts to run a Docker Engine.
- Docker Datacenter – Docker Datacenter is the latest offering from Docker. It is an out-of-the-box experience that sets up a scalable, managed, cloud-based containers-as-a-service offering. Once deployed, Docker Datacenter can be scaled to meet the demands of the application workloads. Docker Datacenter does, however, require a subscription from Docker to run. For more information about Docker Datacenter, check Docker’s website.
- Azure Container Services – Azure Container Services is another containers-as-a-service offering, but this one is from Microsoft. ACS offers tight integration with Azure through the Azure CLI and Azure Resource Manager templates. ACS is built on Azure VM Scale Sets that can be scaled up or down depending on load, and it offers DC/OS or Docker Swarm for container orchestration. Azure Container Services deployments use Microsoft-specific technologies within Azure, but once up and running, it is a full Docker experience. Check the resources and webinar here for a complete rundown.
- Containers on Windows Server – This option is intended more for smaller workloads running in Windows Containers, but it still uses the Docker tools to manage the containers. Windows Containers differ from Linux containers in that they run Windows workloads in a containerized environment, just like an on-premises solution except in the cloud.
- Docker on Ubuntu – Docker on Ubuntu uses a single VM with the Docker Engine installed on it. It is intended for smaller workloads that don’t need redundancy or scalability, but it still offers the full Docker experience.
- Do It Yourself – Azure’s flexibility also allows users to create their own Docker environment to suit their needs with Azure infrastructure components and virtual machines.
Of these options, Azure Container Services is a good starting place because it has tight integration with Azure and can scale out easily.
Once the environment is created and running, a user connects to it with the Docker Client. The Docker Client can then issue a build command and build an image on Azure remotely. Custom images are described by a file called a Dockerfile, which contains a series of instructions. The file is edited locally in the root of a target directory. When the build command is called, the Docker Client uploads the Dockerfile and the contents of the directory to the Docker Engine, and the instructions in the Dockerfile are carried out by the Docker Engine to build the image.
ex. docker build can be run by specifying the path to the Dockerfile, but typically it is run relative to the active directory in a shell environment.
docker build --tag name-of-image .
The “.” indicates that the Dockerfile is in the active directory.
The following is an overview of the instructions in the Dockerfile with examples.
FROM – Specifies the base image to create the new image from. Usually, the base image is hosted on Docker Hub, a repository of literally thousands of ready-to-use images for Docker containers.
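i.e. For illustration, a few FROM lines pulling well-known public images from Docker Hub might look like the following. Each line below is an alternative first line for a Dockerfile, not one file, and the tags shown are examples that may change over time:

```dockerfile
# Official Ubuntu base image, pinned to a specific release
FROM ubuntu:16.04

# Official nginx web server image, implicit latest tag
FROM nginx

# Microsoft-provided .NET Core image
FROM microsoft/dotnet
```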
ADD – Copies files into a container from the local client’s file system or from a URL.
ADD source destination
i.e. The first example downloads index.html from example.com into the /html folder. The second example copies all the content relative to the Dockerfile into the /approot folder.
ADD http://example.com/index.html /html
ADD . /approot
COPY – This instruction works like ADD, except it only copies files into a container from the local client’s file system.
COPY source destination
i.e. The following are some examples of how the COPY instruction might be used.
COPY filename /newfilename
COPY directory /directory
COPY . /directory
ARG – Sets variables for the life of the build process. The key is the variable that is used elsewhere in the build process, and a default value can be specified. The default can be overridden with the --build-arg flag when docker build is run (i.e. docker build --build-arg key=value).
ARG key=defaultvalue
i.e. The following sets up a build argument that is used by the ADD instruction.
ARG source=/app
ADD $source /directory
ENV – Sets environment variables for the build process AND for container creation. The values assigned with the ENV instruction are persisted in the image and can be overridden when docker run is called by using the -e flag (i.e. docker run -e KEY=value).
ENV KEY value
i.e. Many Linux applications use MySQL databases and read the host name for the database from an environment variable. If the MySQL database were running in the same container, the host would be localhost.
ENV MYSQL_HOST localhost
RUN – Runs a command during the image build process. This is often used to execute build scripts or install packages into the container with a package manager. RUN prepends whatever command is being run with the default shell for the image. For Windows, this is CMD, and for Linux this is /bin/sh -c.
RUN command parameters
i.e. On an Ubuntu or Debian image, the following tells the apt package manager to install nginx into the image. Running apt-get update first ensures the package lists exist in the fresh image, so the install can succeed.
RUN apt-get update && apt-get install -y nginx
SHELL – SHELL changes the default shell that subsequent instructions are run in to something other than the image default. For instance, on Windows the default shell is CMD, but sometimes a user may want to run PowerShell instead.
i.e. If the image needed to call a website on Windows, the shell could be switched to PowerShell and the request made with a RUN instruction.
SHELL ["powershell", "-command"]
RUN Invoke-WebRequest -Uri http://someurl
CMD – This specifies a command for the container to run on start-up. It can usually be overridden by specifying a command to run when the container is created from an image with the docker run command (i.e. docker run some-image /path/to/command/command).
CMD command parameter
i.e. If nginx is installed in a container, the following command will run whenever a container is created from an image containing nginx. The parameters tell nginx to run in the foreground, keeping the container operational. Otherwise, the container would complete execution and shut down.
CMD nginx -g 'daemon off;'
ENTRYPOINT – Specifies a command to be run on start-up, like CMD, but additionally allows the container to be used like an executable. There are two styles to use with this instruction. The shell style executes a command in a shell environment, so the command passed in is essentially prepended with /bin/sh -c. The exec style executes the command specified without the shell environment wrapped around it, and the parameters are still passed in.
ENTRYPOINT command parameter
i.e. On a Linux container, this will run the top command prepended by /bin/sh -c and display the running processes by memory usage.
ENTRYPOINT top -m
ENTRYPOINT ["command", "parameter"]
i.e. When running a Windows Nanoserver with .NET Core, the container needs to bootstrap the application by telling the container to run the dotnet command at startup and which file is the entrypoint for the application.
ENTRYPOINT ["dotnet", "docker-demo.dll"]
EXPOSE – Expose exposes a network port that can be mapped with the -p flag given to docker run (i.e. docker run -p externalport:exposedport).
i.e. If a webserver were running in the container, port 80 would need to be exposed.
EXPOSE 80
When the image is run as a container, the run command would map the exposed port:
docker run -p 80:80 some-image
External port 80 would be mapped to internal port 80 on the container.
WORKDIR – Sets the working directory for COPY, ADD, RUN, CMD, and ENTRYPOINT. These instructions are run relative to the path set by this instruction.
i.e. Setting the HTML root directory on a web server is a common use of WORKDIR because much of the work done during a build happens in that folder.
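As a sketch, serving static content with nginx might set the working directory to nginx’s HTML root. The path below is the conventional location in the official nginx image and is assumed here for illustration:

```dockerfile
# Subsequent relative paths resolve under the nginx HTML root
WORKDIR /usr/share/nginx/html

# Copies the build context into the working directory set above
COPY . .
```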
VOLUME – Sets up mountable volumes inside a container. When building an image, this instruction creates a placeholder within the image, and a volume is created whenever a container runs from the image.
i.e. If one wanted to create a mountable volume for logs, which is often the case for monitoring, one could specify the typical log directory on Linux as a mountable volume.
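A minimal sketch, assuming the standard Linux log location:

```dockerfile
# Create a mount point at the conventional Linux log directory;
# a monitoring tool on the host can mount and read these logs
VOLUME /var/log
```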
ONBUILD – This creates a deferred command that will be run whenever another image is built from the image being created.
ONBUILD RUN command
ONBUILD ADD source destination
i.e. If one were creating an image from a Debian or Ubuntu base image, whenever a subsequent image is built from it, the following command would be run to update the package manager.
ONBUILD RUN apt-get update -y
HEALTHCHECK – Runs a command at a specified interval in the container to check the health of the container. The command returns an exit code: 0 = healthy, 1 = unhealthy, 2 = reserved (do not use).
HEALTHCHECK --interval=Xs --timeout=Xs --retries=X CMD command parameters
i.e. If a Linux container wanted to check whether the webpage running in the container was still responding:
HEALTHCHECK --interval=30s --timeout=1s --retries=2 CMD curl -f http://localhost/ || exit 1
Here are a few simple examples of Dockerfiles:
FROM mysql
COPY airports.sql /docker-entrypoint-initdb.d/airports.sql
The base image, mysql, has a directive to look in the /docker-entrypoint-initdb.d folder for a .sql file and then execute the file on the database engine preinstalled in the image. This is a very simple and easy way to set up a MySQL database in a container.
This example uses a NodeJS image from Docker Hub to build out a container with a NodeJS app. The Dockerfile would be in the root folder alongside the package.json file. The build process looks for the package.json file, installs all the dependencies, and sets up the app to run based on its contents. It then exposes port 80 because the app is likely running Express.
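The Dockerfile described above might look like the following sketch. The node base image is a real Docker Hub image; the app’s use of an npm start script and of port 80 are assumptions for illustration:

```dockerfile
FROM node

# Work out of an app directory inside the image
WORKDIR /app

# Copy the dependency manifest and install dependencies first,
# so this layer can be cached between builds
COPY package.json /app
RUN npm install

# Copy the rest of the application source
COPY . /app

# The Express app is assumed to listen on port 80
EXPOSE 80

# Start the app via the package.json start script (assumed to exist)
CMD ["npm", "start"]
```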
COPY . .
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "docker-demo.dll"]
This example creates an image for a .NET Core app in the local directory. The COPY instruction copies all the content of the working directory to the image’s root, the environment variable tells ASP.NET Core where to serve the application, and port 80 is exposed to the outside network. The ENTRYPOINT then runs the app on start-up.