https://www.linux.com/learn/how-manage-logs-docker-environment-compose-and-elk
The question of log management has always been crucial in a well-managed web infrastructure. Well-managed logs will, of course, help you monitor and troubleshoot your applications, but they can also be a source of information about your users, or help you investigate security incidents.
In this tutorial, we are first going to discover the Docker Engine log management tools.
Then we are going to see how to stop using flat files and directly send our application logs to a centralized log collecting stack (ELK). This approach presents numerous advantages:
- Your machines' drives are not getting filled up, which can lead to service interruption.
- Centralized logs are much easier to search and back up.
Requirements for this tutorial
Install the latest Docker Toolbox to get access to the latest versions of Docker Engine, Docker Machine, and Docker Compose.
Discovering Docker Engine logging
Let's first create a machine on which we are going to run a few tests to showcase how Docker handles logs:
$ docker-machine create -d virtualbox testbed
$ eval $(docker-machine env testbed)
By default, Docker Engine captures all data sent to /dev/stdout and /dev/stderr and stores it in a file using its default json-file log driver.
Let's run a simple container outputting data to /dev/stdout:
$ docker run -d alpine /bin/sh -c 'echo "Hello stdout" > /dev/stdout'
3e9e2cbbbe6e237cc197d6f2277c234f10c379897b621150e2141c1d42135038
We can access the json file where those logs are stored with:
$ docker-machine ssh testbed sudo cat /var/lib/docker/containers/3e9e2cbbbe6e237cc197d6f2277c234f10c379897b621150e2141c1d42135038/3e9e2cbbbe6e237cc197d6f2277c234f10c379897b621150e2141c1d42135038-json.log
{"log":"Hello stdout\n","stream":"stdout","time":"2016-04-13T09:55:15.051698884Z"}
Or by using the simpler command:
$ docker logs 3e9e2cbbbe6e237cc197d6f2277c234f10c379897b621150e2141c1d42135038
Hello stdout
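Since the json-file driver writes one JSON object per line, those entries are easy to consume programmatically. Here's a minimal Python sketch, using the sample line shown above:

```python
import json

# One line as written by Docker's json-file log driver (sample from above).
raw = ('{"log":"Hello stdout\\n","stream":"stdout",'
       '"time":"2016-04-13T09:55:15.051698884Z"}')

entry = json.loads(raw)
message = entry["log"].rstrip("\n")  # the original application output
stream = entry["stream"]             # "stdout" or "stderr"
print(message, stream)
```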
It is worth noticing that the json-file driver adds some metadata to our logs, which is stripped out by the docker logs command.
Getting your apps' logs collected by the Docker daemon
So, as we've just experienced, in order to collect our application's logs, we simply need them to be written to /dev/stdout or /dev/stderr. Docker will timestamp this data and collect it. In most programming languages, a simple print or the use of a logging module should suffice to achieve this task. Existing software that's designed to write to static files can be tricked by cleverly symlinking those files to /dev/stdout and /dev/stderr.
That's exactly the solution used by the official nginx:alpine image, as we can see in its Dockerfile:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
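Before running the image, here's what the "simple print" approach mentioned above can look like in application code — an illustrative Python sketch, not something from the nginx image itself:

```python
import sys

def log(message):
    # Write one line to stdout; the Docker daemon captures it,
    # timestamps it, and hands it to the configured log driver.
    print(message, file=sys.stdout, flush=True)

log("Hello stdout")
```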
Let's run it to prove our point:
$ docker run -d --name nginx -p 80:80 nginx:alpine
Navigate to http://$(docker-machine ip testbed). For example, on a Mac:
$ open http://$(docker-machine ip testbed)
Monitor the logs with:
$ docker logs --tail=10 -f nginx
192.168.99.1 - - [13/Apr/2016:10:16:41 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36" "-"
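Lines like this keep nginx's "combined" log format even when relayed through Docker, so the individual fields can be extracted. A Python sketch (the regex below is an assumption about the combined format, trimmed to the leading fields):

```python
import re

# Leading fields of nginx's "combined" log format, as in the line above.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)'
)

line = ('192.168.99.1 - - [13/Apr/2016:10:16:41 +0000] '
        '"GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0" "-"')

m = LOG_RE.match(line)
print(m.group("ip"), m.group("request"), m.group("status"))
```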
Stop your nginx container with:
$ docker stop nginx
Now that we have a clearer understanding of how Docker manages logs by default, let's see how to stop writing to flat files and start sending our logs to a syslog listener.
Creating an ELK stack to collect logs
You could choose to use a hosted log collection service like Loggly, but why not use a Docker Compose playbook to create and host our own log collection stack? ELK, which stands for Elasticsearch + Logstash + Kibana, is one of the most popular solutions for collecting and searching logs. Here's how to set it up.
Clone my ELK repo with:
$ git clone git@github.com:MBuffenoir/elk.git
$ cd elk
Create a local machine and start ELK with:
$ docker-machine create -d virtualbox elk
$ docker-machine scp -r conf-files/ elk:
$ eval $(docker-machine env elk)
$ docker-compose up -d
Check the status of your ELK stack with:
$ docker-compose ps
[...]
elasticdata /docker-entrypoint.sh chow ... Exit 0
elasticsearch /docker-entrypoint.sh elas ... Up 9200/tcp, 9300/tcp
kibana /docker-entrypoint.sh kibana Up 5601/tcp
logstash /docker-entrypoint.sh -f / ... Up 0.0.0.0:5000->5000/udp
proxyelk nginx -g daemon off; Up 443/tcp, 0.0.0.0:5600->5600/tcp, 80/tcp
Getting Docker to send container (app) logs to a syslog listener
Let's start a new nginx container again, this time using the syslog driver:
$ export IP_LOGSTASH=$(docker-machine ip elk)
$ docker run -d --name nginx-with-syslog --log-driver=syslog --log-opt syslog-address=udp://$IP_LOGSTASH:5000 -p 80:80 nginx:alpine
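Under the hood, the syslog driver ships each log line to the listener as a UDP datagram. A rough Python sketch of the idea (the message Docker actually emits is richer, with timestamps and more metadata):

```python
import socket

def send_syslog(host, port, tag, message, priority=14):
    # Build a minimal RFC 3164-style line: <priority>tag: message.
    # priority 14 = facility "user" (1) * 8 + severity "info" (6).
    line = "<%d>%s: %s" % (priority, tag, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(line.encode("utf-8"), (host, port))
    finally:
        sock.close()

# Replace 127.0.0.1 with $(docker-machine ip elk) to reach the
# Logstash UDP input of the stack above.
send_syslog("127.0.0.1", 5000, "nginx-with-syslog", "Hello syslog")
```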
Navigate first to http://$(docker-machine ip elk) to access your nginx webserver.
Navigate then to https://$(docker-machine ip elk):5600 to access Kibana (5600 is the port the proxy publishes, as shown in the docker-compose ps output above).
The login is admin and the password is Kibana05 (see the comments at the top of the file /conf-files/proxy-conf/kibana-nginx.conf to change those credentials).
Click on the Create button to create your first index.
Click on the Discover tab. You should now see and search your nginx access logs in Kibana.
Compose file
If you're starting your containers using Compose and would like to use syslog to store your logs, add the following to your docker-compose.yml file:
webserver:
  image: nginx:alpine
  container_name: nginx-with-syslog
  ports:
    - "80:80"
  logging:
    driver: syslog
    options:
      syslog-address: "udp://$IP_LOGSTASH:5000"
      syslog-tag: "nginx-with-syslog"