Monday, April 20, 2015

Docker overlay network using Flannel

http://blog.shippable.com/docker-overlay-network-using-flannel


This is the next post in the series where I attempt to build a full multi-node Kubernetes cluster from scratch. You can find the previous post here, where I describe bringing up a two-node cluster without using an overlay network.

The first thing you need once you start scaling containers across different hosts is a consistent networking model, the primary requirement of which is to let two (or more) containers on different hosts talk to each other. Port forwarding might get you the same result with a handful of containers, but that approach gets out of control very quickly and leaves you wading through a port-forwarding mess. What we want instead is a network where each container on every host gets a unique IP address from a global namespace, so that all containers can talk to each other directly. This is one of the fundamental requirements of the Kubernetes networking model, as specified here.
Two tools that solve the problem and are trending right now are Flannel and Weave. This post will describe how to get the containers on two Docker hosts to talk to each other using Flannel.
Here is the link to the scripts if you prefer reading the code instead of a writeup.
Bootstrapping
As with the previous tutorial, we'll use two Vagrant hosts for this demo. You could just as well use any cloud provider, as long as the hosts can talk to each other.
Put the following Vagrantfile in any folder, say /home/kube/Vagrantfile.
# -*- mode: ruby -*-

# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.define "host1" do |hostone|
    hostone.vm.box = "trusty64"
    hostone.vm.network "private_network", ip: "192.168.33.10"
    hostone.vm.hostname = "host-one"
  end

  config.vm.define "host2" do |hosttwo|
    hosttwo.vm.box = "trusty64"
    hosttwo.vm.network "private_network", ip: "192.168.33.11"
    hosttwo.vm.hostname = "host-two"
  end

end
The following commands will bring up the two hosts and drop you into their respective shells.
[terminal-1] $ cd /home/kube
[terminal-1] $ vagrant box add https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box --name trusty64
[terminal-1] $ vagrant up host-one
[terminal-1] $ vagrant ssh host-one
 
[terminal-2] $ vagrant up host-two
[terminal-2] $ vagrant ssh host-two
You need to install Docker on both hosts, the instructions for which are available here.
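If Docker isn't already present, one quick way to get it on Ubuntu 14.04 (current at the time of writing) is the official install script; the package-based instructions linked above work just as well:

[terminal-1 (host-one)] $ wget -qO- https://get.docker.com/ | sh
[terminal-2 (host-two)] $ wget -qO- https://get.docker.com/ | sh

Optionally add the vagrant user to the docker group (sudo usermod -aG docker vagrant) so you don't have to prefix every docker command with sudo.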
As with the previous post, the focus is on providing a fully functional script first; the individual steps are explained below.
Flannel
Let's begin by setting up Flannel. To quote from the Flannel GitHub page:
flannel (originally rudder) is an overlay network that gives a subnet to each machine for use with Kubernetes.
Flannel has one additional requirement, etcd, which might make it less attractive in some cases, but not when using it with Kubernetes, since Kubernetes already uses etcd as its data store.
Installing etcd (host 1): This step can be skipped if etcd is already installed and listening on an available port on any of the hosts.
Set up the etcd server on host-one ONLY. The flannel daemon on host-two will read its configuration by connecting to the etcd server running on host-one (the etcd server could just as well run on any other host reachable from both). We don't need to start etcd manually, as the setup script takes care of that. Add the following to /etc/init/etcd.conf so that etcd can be managed by upstart:
description "Etcd service"
author "@jainvipin"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn

pre-start script
        # see also https://github.com/jainvipin/kubernetes-ubuntu-start
        ETCD=/opt/bin/$UPSTART_JOB
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        if [ -f $ETCD ]; then
                exit 0
        fi
    echo "$ETCD binary not found, exiting"
    exit 22
end script

script
        # modify these in /etc/default/$UPSTART_JOB (/etc/default/etcd)
        ETCD=/opt/bin/$UPSTART_JOB
        ETCD_OPTS=""
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        exec "$ETCD" $ETCD_OPTS
end script
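
The ETCD_OPTS variable above defaults to empty and is overridden by /etc/default/etcd if that file exists; the setup script generates it. If you are wiring things up by hand, a minimal version for this topology could look like the sketch below. The flag names assume an etcd 2.x binary (older 0.4.x releases use different flags), and the addresses match the Vagrant private network defined earlier.

# /etc/default/etcd on host-one -- example values only
ETCD_OPTS="--name host-one \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://0.0.0.0:4001 \
  --advertise-client-urls http://192.168.33.10:4001"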

Defining the subnet (host 1):
The line here uses the etcdctl utility to connect to the etcd server running on the master and insert the flannel network configuration. The flannel daemon running on each host connects to this etcd server and reads the key (coreos.com/network/config), which defines the network that the per-host subnets are carved out of. Since this is part of the setup script, you don't have to do anything here.
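For reference, the command boils down to something like this, run once on host-one (the network range is only an example; flannel then hands each host a /24 out of it by default):

[terminal-1 (host-one)] $ etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16" }'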
Configuring Flannel (both hosts):
We need to make sure flannel starts before docker does on every host, because the docker daemon has to be started with the subnet and MTU values that flannel allocates instead of its default bridge settings. Place the following file in /etc/init/flanneld.conf:
description "Flanneld service"
author "@ric03uec"
 
start on filesystem or runlevel [2345]
stop on runlevel [!2345]
 
respawn
 
pre-start script
        FLANNELD=/usr/bin/$UPSTART_JOB
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        if [ -f $FLANNELD ]; then
                exit 0
        fi
    echo "$FLANNELD binary not found, exiting"
    exit 22
end script
 
script
        # modify these in /etc/default/$UPSTART_JOB (/etc/default/flanneld)
        FLANNELD=/usr/bin/$UPSTART_JOB
        FLANNELD_OPTS=""
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        exec "$FLANNELD" $FLANNELD_OPTS
end script
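FLANNELD_OPTS, like ETCD_OPTS, comes from an environment file (/etc/default/flanneld) that the setup script presumably writes using the MASTER_IP value. A hand-written version for host-two might look like the sketch below. Flag names differ between flannel releases (older builds take -etcd-endpoint, newer ones -etcd-endpoints), so check flanneld --help for your binary; the -iface flag is worth setting on Vagrant so flannel uses the 192.168.33.x private network on eth1 rather than the NAT interface.

# /etc/default/flanneld on host-two -- example values only
FLANNELD_OPTS="-etcd-endpoints=http://192.168.33.10:4001 -iface=eth1"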
Configuring Docker (both hosts):
As mentioned above, the docker daemon should start after the flannel daemon. I've modified the default docker upstart config to add a dependency on flannel:
– The "start on started flanneld" and "stop on stopping flanneld" stanzas make sure docker starts and stops with the flannel daemon.
– The script section sources /var/run/flannel/subnet.env, the environment file exported by flannel, and inserts its values into DOCKER_OPTS. Specifically, the FLANNEL_SUBNET variable feeds docker's --bip flag and the FLANNEL_MTU variable feeds its --mtu flag (see the example subnet.env below).
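To make the second point concrete, the environment file flannel writes looks roughly like this (addresses are examples; the per-host subnet is allocated from the network stored in etcd):

# /var/run/flannel/subnet.env -- written by flanneld
FLANNEL_NETWORK=10.100.0.0/16
FLANNEL_SUBNET=10.100.63.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false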
As with the conf files above, the file below replaces /etc/init/docker.conf.
description "Docker daemon"

start on started flanneld
stop on stopping flanneld

limit nofile 524288 1048576
limit nproc 524288 1048576

respawn

pre-start script
        # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
        if grep -v '^#' /etc/fstab | grep -q cgroup \
                || [ ! -e /proc/cgroups ] \
                || [ ! -d /sys/fs/cgroup ]; then
                exit 0
        fi
        if ! mountpoint -q /sys/fs/cgroup; then
                mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
        fi
        (
                cd /sys/fs/cgroup
                for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
                        mkdir -p $sys
                        if ! mountpoint -q $sys; then
                                if ! mount -n -t cgroup -o $sys cgroup $sys; then
                                        rmdir $sys || true
                                fi
                        fi
                done
        )
end script
script
        # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
        DOCKER=/usr/bin/$UPSTART_JOB
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi

        if [ -f /var/run/flannel/subnet.env ]; then
        ## if flannel subnet env is present, then use it to define
        ## the subnet and MTU values
                . /var/run/flannel/subnet.env
                DOCKER_OPTS="$DOCKER_OPTS --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
        else
                echo "Flannel subnet not found, exiting..."
                exit 1
        fi

        exec "$DOCKER" -d $DOCKER_OPTS
end script

# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
                while ! [ -e /var/run/docker.sock ]; do
                        initctl status $UPSTART_JOB | grep -q "stop/" && exit 1
                        echo "Waiting for /var/run/docker.sock"
                        sleep 0.1
                done
                echo "/var/run/docker.sock is up"
        fi
end script
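Because of the "start on started flanneld" stanza, bringing the whole stack up by hand is just a matter of starting flannel; upstart then starts Docker once flannel has written its subnet file. The setup script presumably does the equivalent of roughly the following:

$ sudo stop docker || true            # stop any docker daemon started with the stock config
$ sudo start flanneld                 # docker is started automatically once flanneld is up
$ cat /var/run/flannel/subnet.env     # sanity check: flannel has allocated a subnet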
Running the script (both hosts):
Run the following commands, one on each host, after editing the NODE_IP and MASTER_IP environment variables in the setup.sh script:
[terminal-1 (host-one)] $ ./setup.sh master
[terminal-2 (host-two)] $ ./setup.sh slave
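Before testing containers, it's worth checking on each host that flannel and Docker picked up the expected addresses. With flannel's default UDP backend there is a flannel0 device, and the docker0 bridge should sit inside the flannel subnet:

$ ip -4 addr show flannel0    # an address from the flannel network
$ ip -4 addr show docker0     # should fall inside FLANNEL_SUBNET
$ ps -ef | grep docker        # the daemon should be running with --bip and --mtu set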
Testing:
Run the following commands in the order shown to test the setup. This uses the nc utility to check connectivity between two containers on different hosts: we run a listener inside a container on host-one and echo some text to it from a second container on host-two, using the first container's IP address (substitute it for <container-ip> below).
[terminal-1 (host-one)] $ container_id=$(docker run -d -p 5555:5555 ubuntu:14.04 nc -l 5555)
[terminal-1 (host-one)] $ docker inspect $container_id | grep IPAddress    # gives the IP address of the container
[terminal-1 (host-one)] $ docker logs -f $container_id

[terminal-2 (host-two)] $ docker run -it ubuntu:14.04 /bin/bash
[terminal-2 (container)] # echo 'Hello, it works !!' > source_file
[terminal-2 (container)] # nc <container-ip> 5555 < source_file

[terminal-1 (host-one)] $ Hello, it works!!
That’s it !!! Hope the setup works for you. Feel free to comment with suggestions and any bugs you find in the script.
