http://www.theregister.co.uk/2015/09/30/docker_how_to
Hello World
How to Docker
Docker is the name on the tip of many tongues at the moment. It is a containerisation
engine which allows you to package up an application along with all the
settings and software required to run it and deploy it to a server with
a minimum of fuss.
So where did this idea come from?
Shipping containers! Shipping containers have a defined size. No matter what they contain, the cranes used to move shipping containers about and the boats that load them know how to stack them. A shipping container is a standardised unit. Imagine if software was the same. Rather than needing to know how to set up a server to the exact specifications given by a program, you are given a container. The container engine knows how to move that container about and how to run it, no matter what is inside.
How is this different from virtual machines?
Well, virtual machines are also individual boxes, with all the requirements for some software inside them. However, they also emulate the hardware constraints (the ship and the crane from our docks). This can be useful if you want to change the CPU that the machine is running on, or restrict the amount of memory available to a machine, but it has overheads. These overheads mean that we are restricted in the number of virtual machines we can have on a single server.
Containers don't use hardware emulation; it's all about the software, so you can run more of them on a single machine. As well as that, Docker takes advantage of shared file systems. It builds up its containers in layers: the base layer is the operating system, and each layer on top holds the updates and file changes that follow.
If two containers can share some of this file system then they do so. This reduces the space required for each container, as it only holds what is different about its setup, rather than repeating the same information over and over.
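You can see these layers for yourself once Docker is installed (the ubuntu image is the one pulled in the hello-world example below):
docker history ubuntu   # lists every layer that makes up the image, along with its size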
So how does it work?
Step 1: Download Docker for your operating system.
It is based on Linux Containers (LXC), so if you are running Windows or OS X then you will need a wrapper for the program. Luckily, one is provided for you via 'docker-machine'. If you run Linux already, then you can use it natively.
Step 1.5: Start 'docker-machine' if required.
docker-machine create --driver virtualbox default
This will create a small Linux virtual machine using VirtualBox as its driver. There are other drivers for different virtualisation systems. Now we need to connect to this. List the system settings for your virtual machine with:
docker-machine env default
Then we connect to it!
Mac:
eval $(docker-machine env default)
Windows:
docker-machine env --shell=powershell default | Invoke-Expression
Step 2: Create your first container and say hello!
docker run ubuntu echo 'hello world'
That's it. It's that simple. Docker creates a container using ubuntu as its base layer and runs whichever commands you give it. In this case we used the echo program to say 'hello world'.
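If you would rather poke around inside the image than run a single command, an interactive shell works just as well:
docker run -it ubuntu bash   # -i keeps stdin open, -t gives you a terminal; type exit to leave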
Pre-defined containers
We could build up containers by running each command individually with the docker binary, but that would take a while, and it doesn't match the idea of 'automate everything' that has come from the sysadmin and devops world. The solution to this problem is the Dockerfile. A Dockerfile is a script which tells Docker how to build the container and what to run inside it.
A Dockerfile could be defined as:
from centos:centos6
run yum update -y
run yum install -y java-1.7.0-openjdk
run yum install -y java-1.7.0-openjdk-devel
copy hello.java /
run javac hello.java
This uses centos 6 as its base image, updates everything, installs Java, copies in a file and then compiles it. When we build and then run the container, the Java program will execute!
docker build -t containername . #build the container
docker run containername java hello #run the program!
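For the build to work there needs to be a hello.java file sitting next to the Dockerfile. A minimal sketch (the class is named hello so it matches the 'java hello' command above) could be created like this:
cat > hello.java <<'EOF'
public class hello {
    public static void main(String[] args) {
        System.out.println("hello from inside a container");
    }
}
EOF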
Dockerfiles are incredibly flexible and can handle copying in resources from other containers, the internet or your local system. Anything that you would run on the command line is prepended with the "run" statement. By defining containers in these files we can version control and test them, which leads to more reliable deployments.
Windows to another world
We can open windows into containers on our terms only, much like restricting the port numbers open on the firewall. For a mysql database we may want port 3306 open; we can define this in the Dockerfile with:
expose 3306
The exposed port can be mapped to any port on the host system, allowing each service to believe it is running on its default port, but without any clashes (after all, how many services use port 8080? Tomcat, GlassFish, Jenkins and Puppet, to name a few from my last few weeks!)
# docker run -p hostport:containerport containername
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pie mysql
docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=pie mysql
docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=pie mysql
Three little containers, all believing that they are running on port 3306, but with redirects being handled by docker on the host system.
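You can check what docker has mapped where at any time:
docker ps   # the PORTS column shows mappings such as 0.0.0.0:3307->3306/tcp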
We can even restrict access between containers without leaving ports open to the outside world, by linking them together.
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=pie mysql
docker run --link mysql:db apachephp
The mysql container has no ports open to the outside world. The apachephp container has a link to the mysql container and thinks of it locally as "db". The link between the two is secure; no one port scanning your server will be able to reach the mysql database!
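A quick way to see the link from the other side, temporarily overriding the container's normal command (apachephp is the same hypothetical image used above):
docker run --rm --link mysql:db apachephp cat /etc/hosts   # docker adds a "db" host entry pointing at the mysql container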
Composing containers
All looks good so far? Docker goes a step further when automating everything. With docker-compose we can define and start up many containers at once. This means our entire infrastructure is defined in readable files which can be version controlled and tested.
mysqldatabase:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: rootpassword
php:
  image: phpapache
  ports:
    - "80:80"
  links:
    - mysqldatabase:db
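Save that as docker-compose.yml and the whole stack comes up with a single command:
docker-compose up -d   # -d starts every container defined in the file in the background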
Containers are a powerful and easy way to manage your code deployment to servers. They are flexible enough to handle any set of requirements your software has, and they allow you to automate all the things!
After all, wouldn't it be nice if setting up a minecraft server was as easy as:
docker run -d -p=25565:25565 itzg/minecraft-server
®
Kat McIvor is Principal Technologist for DevOps at QA, the UK’s biggest provider of technical and business training.