One of the most exciting things to happen in the Linux world in the past few years is the emergence of containers — self-contained Linux environments that live inside another OS and provide a way to package and isolate applications. They're not quite virtual systems, since they rely on the host OS to operate, nor are they simply applications. Dan Walsh from Red Hat has said that on Linux, "everything is a container," reminding me of the days when people claimed that everything on Unix was a file. But the point has less to do with the guts of the OS and more to do with how containers work and how they differ from virtual systems in some very interesting and important ways.
To get some perspective on containers, I spoke with Joe Brockmeier, a senior evangelist at Red Hat. He suggested that we can think of containers as lightweight virtual machines, though he pointed out that this isn't technically correct. Container runtimes talk to the host's kernel and run applications out of tarballs. They provide a very convenient format for shipping applications — avoiding the pain associated with tracking down dependencies, compiling anything, or struggling with any sort of configuration. Instead, you get the end result you're looking for in one package — the container. It won't interfere with other applications you're running or require you to worry about configuration or work beyond the installation.
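To make that concrete, here is a rough sketch using Docker, one of the runtimes discussed later in this article; the image name, container name, and port are only illustrations:
Fetch and run a packaged web server in one step: sudo docker run -d --name demo-web -p 8080:80 nginx
Confirm that the application is running: sudo docker ps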
None of this is meant to imply that containers don't require work. The work required, however, falls on the organization or individuals building each container. Moving a legacy application into a container to run on its own can involve a lot of work and require a lot of expertise. It's just that none of that work gets passed on to the people installing it.
Is there a performance hit when using containers?
The likelihood of a performance hit associated with running a container is very small, especially compared with virtual systems. Containers run at speeds comparable to bare metal. Unless the container is flawed because someone upstream made mistakes in putting it together, you should not notice any performance loss.
What about security?
Linux containers offer a lot of advantages when it comes to system security — particularly because they provide a serious way to isolate applications from one another and from other running processes. With containers, you could run 20 different versions of Python at the same time, if you were so inclined, with no problems.
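As a quick sketch of that idea (assuming the official Python images are available from a registry, and using Docker as the runtime), two different Python versions can run side by side without conflicting:
Run one version in its own container: sudo docker run --rm python:3.9 python --version
Run another right alongside it: sudo docker run --rm python:3.12 python --version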
In addition, containers cannot see or be affected by other containers' network traffic. They simply can't interfere with other applications that are running on the system.
Containers allow applications to be moved around with all of the files they require, making it easy to move them from one environment to another — whether from testing to production or from production to a secondary/alternate site.
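As a hedged sketch of what that can look like in practice (the image and host names here are placeholders), an application image can be exported on one system, copied over, and loaded on another:
Export the image to a tarball: sudo docker save -o myapp.tar myapp:1.0
Copy it to the other system: scp myapp.tar prod-server:/tmp/
Load it on the target: sudo docker load -i /tmp/myapp.tar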
Where are containers heading?
Linux containers provide an extremely convenient way to ship applications and avoid a lot of the follow-up support that your customers might require if they were to run into problems setting them up and configuring them.
We're probably still just seeing the start of the application-as-container delivery wave as companies begin to recognize the advantages and jump deeply into container technology.
How to start using containers on Linux
Since containers are likely to become critical parts of our networks, this is a good time to investigate the various tools and models that are becoming available — from LXC to Docker and Kubernetes.
You can try out the commands for building LXC containers on one of your Linux systems. For example, using LXC, you can easily set up a container and get a feel for how it works and maintains its isolation. Here are some basic steps:
Install LXC: sudo apt-get install lxc
Create a container: sudo lxc-create -t fedora -n fed-01
List your containers: sudo lxc-ls
Start a container: sudo lxc-start -d -n fed-01
Get a console for your container: sudo lxc-console -n fed-01
More information on getting started with LXC is available at LinuxContainers.org.
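When you're done experimenting, you can tear the container down with the same toolset:
Stop the container: sudo lxc-stop -n fed-01
Remove it entirely: sudo lxc-destroy -n fed-01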
You can also look into some of the premier tools for containerization — such as Docker and Kubernetes. These two tools might at first seem to do the same thing, but they work at different layers in the stack and can in some ways actually work together.
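As a rough sketch of how those layers fit together (the image, registry, and deployment names here are hypothetical, and this assumes you have a working Kubernetes cluster with kubectl configured), Docker builds and publishes the application image, while Kubernetes decides where and how many copies of it run:
Build the image with Docker: sudo docker build -t registry.example.com/myapp:1.0 .
Push it to a registry: sudo docker push registry.example.com/myapp:1.0
Have Kubernetes run it: kubectl create deployment myapp --image=registry.example.com/myapp:1.0
Scale it out: kubectl scale deployment myapp --replicas=3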
Once you and your organization are deploying containers with enthusiasm (or maybe even before), you may find yourself looking into how to best manage a population of containers. Here's a reference on Container orchestration tools to get you started.