Tuesday, September 11, 2018

Understanding the State of Container Networking

http://www.enterprisenetworkingplanet.com/datacenter/understanding-the-state-of-container-networking.html

Containers have revolutionized the way applications are developed and deployed, but what about the network?

By Sean Michael Kerner | Posted Sep 4, 2018
 
Container networking is a fast-moving space with many different pieces. In a session at the Open Source Summit, Frederick Kautz, principal software engineer at Red Hat, outlined the state of container networking today and where it is headed in the future.


Containers have become increasingly popular in recent years, particularly Docker containers, but what exactly are containers?

Kautz explained that containers make use of the Linux kernel's ability to provide multiple isolated user-space areas. That isolation is enabled by two core elements: control groups (cgroups) and namespaces. Cgroups limit and isolate the resource usage of process groups, while namespaces partition key kernel structures for processes, hostnames, users, and networking.
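
To make the namespace side concrete, here is a minimal Go sketch (mine, not code from the talk) that starts a shell inside fresh network, hostname, and PID namespaces using the same kernel clone flags container runtimes rely on. It is Linux-only and needs root:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Start a shell in new network, UTS (hostname), and PID namespaces.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWNET | // isolated network stack
                syscall.CLONE_NEWUTS | // isolated hostname
                syscall.CLONE_NEWPID, // isolated process tree
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Running "ip link" inside that shell shows only a loopback device: a new network namespace starts with no connectivity at all, which is exactly the blank slate the container networking models below have to fill in.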

Container Networking Types


While there are different container technologies and orchestration systems, when it comes to networking, Kautz said there are really just four core networking primitives:

Bridge
Bridge mode attaches a container's networking to a specific Linux bridge, and every endpoint connected to that bridge receives the traffic sent across it (a minimal sketch of this appears after the list).

Host
Kautz explained that Host mode is essentially where the container uses the same networking space as the host. As such, whatever IP addresses the host has are shared with the containers.

Overlay
In an Overlay networking approach, a virtual network model sits on top of the underlay, the physical networking hardware.

Underlay
The Underlay approach makes direct use of the core fabric and hardware network.

To make matters somewhat more confusing, Kautz said that multiple container networking models are often used together, for example a bridge together with an overlay.
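
As a rough illustration of the bridge primitive (a sketch under my own assumptions, not code from the talk), the Go program below uses the github.com/vishvananda/netlink library to create a bridge and a veth pair and attach one end of the pair to the bridge. This is essentially the plumbing behind Docker's default bridge network; names such as demo-br0 are arbitrary, and the program needs root on Linux:

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Create a Linux bridge; every interface attached to it shares
        // the same layer-2 broadcast domain.
        br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "demo-br0"}}
        if err := netlink.LinkAdd(br); err != nil {
            log.Fatal(err)
        }

        // Create a veth pair; one end stays on the host bridge, while the
        // other end would be moved into a container's network namespace.
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "veth-host"},
            PeerName:  "veth-ctr",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }

        // Attach the host end to the bridge and bring both links up.
        if err := netlink.LinkSetMaster(veth, br); err != nil {
            log.Fatal(err)
        }
        if err := netlink.LinkSetUp(br); err != nil {
            log.Fatal(err)
        }
        if err := netlink.LinkSetUp(veth); err != nil {
            log.Fatal(err)
        }
    }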

Network Connections

Additionally, container networking models can benefit from MACVLAN and IPVLAN, which tie containers to specific MAC or IP addresses for additional isolation.
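
For flavor, here is a hypothetical Go sketch, again using the github.com/vishvananda/netlink library, that creates a MACVLAN sub-interface on top of a parent NIC (the name eth0 is an assumption and may differ on your host). Moving that interface into a container's namespace gives the container its own MAC address directly on the physical network:

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Look up the parent interface; "eth0" is a placeholder.
        parent, err := netlink.LinkByName("eth0")
        if err != nil {
            log.Fatal(err)
        }

        // Create a MACVLAN sub-interface with its own MAC address on
        // top of the parent NIC.
        mv := &netlink.Macvlan{
            LinkAttrs: netlink.LinkAttrs{
                Name:        "macvlan0",
                ParentIndex: parent.Attrs().Index,
            },
            Mode: netlink.MACVLAN_MODE_BRIDGE,
        }
        if err := netlink.LinkAdd(mv); err != nil {
            log.Fatal(err)
        }
    }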

Kautz added that SR-IOV (Single Root I/O Virtualization) is a hardware mechanism that ties a physical Network Interface Card (NIC) to containers, providing direct access.
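
On Linux, carving virtual functions (VFs) out of an SR-IOV-capable NIC is typically just a sysfs write. The short Go sketch below (an illustration, assuming a device named eth0 that supports SR-IOV) requests four VFs, each of which could then be handed to a container:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Ask the kernel to create four SR-IOV virtual functions on eth0.
        // Requires root and an SR-IOV-capable NIC; the device name is
        // an assumption.
        path := "/sys/class/net/eth0/device/sriov_numvfs"
        if err := os.WriteFile(path, []byte("4"), 0o644); err != nil {
            log.Fatal(err)
        }
    }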

SDNs

On top of the different container networking models are different approaches to Software-Defined Networking (SDN). For the management plane, there are functionally two core approaches at this point: the Container Networking Interface (CNI), which is what Kubernetes uses, and the libnetwork interface, which is used by Docker.

Kautz noted that with Docker recently announcing support for Kubernetes, it's likely that CNI support will follow as well.
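
For reference, a CNI network is described by a small JSON document that names the plugin to invoke. The snippet below is a minimal, illustrative bridge configuration in the format the CNI specification defines; the network name, bridge name, and subnet are placeholders:

    {
        "cniVersion": "0.3.1",
        "name": "demo-net",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16"
        }
    }

By default, Kubernetes reads files like this from /etc/cni/net.d/ and executes the matching plugin binary whenever a pod needs to be wired up.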

Among the different technologies for container networking today are:

Contiv - backed by Cisco and provides a VXLAN overlay model

Flannel/Calico - backed by Tigera, provides an overlay network between hosts and allocates a separate subnet per host.

Weave - backed by Weaveworks, uses a standard port number for containers

Contrail - backed by Juniper Networks and open sourced as the Tungsten Fabric project, provides policy support and gateway services.

OpenDaylight - open source effort that integrates with OpenStack Kuryr

OVN - open source effort that creates logical switches and routers.

Upcoming Efforts


While there are already multiple production-grade solutions for container networking, the technology continues to evolve. Among the newer approaches is the use of eBPF (extended Berkeley Packet Filter) for networking control, which is the approach taken by the Cilium open source project.

Additionally, there is an effort to use shared memory, rather than physical NICs, to help enable networking. Kautz also highlighted the emerging area of service mesh technology, in particular the Istio project, which is backed by Google. With a service mesh, networking is offloaded to the mesh, which provides load balancing, failure recovery, and service discovery, among other capabilities.
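
To give a flavor of what that offloading looks like, here is an illustrative Istio VirtualService manifest (the service names and weights are invented) that shifts 10 percent of traffic to a new version of a service; the routing decision lives in the mesh rather than in application code:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10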

Organizations today typically choose a single SDN approach that connects into the Kubernetes CNI, but that could change in the future thanks to the Multus CNI effort. With Multus CNI, multiple CNI plugins can be used, enabling multiple SDN technologies to run in a single Kubernetes cluster, as the sample configuration below illustrates.
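
As a sketch of how that looks in practice (illustrative values, not from the talk), Multus wraps an ordinary CNI configuration in a NetworkAttachmentDefinition custom resource, and a pod requests the additional network through an annotation:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-conf
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "type": "macvlan",
          "master": "eth0",
          "mode": "bridge",
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        }'
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]

The pod then comes up with its normal cluster interface from the default CNI plugin plus a second macvlan interface from the attachment definition.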

Sean Michael Kerner is a senior editor at EnterpriseNetworkingPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.
