Log Level as Field for Docker GELF Logging Driver

I want to get stdout logs from a docker container and push them to the ELK stack. So far, I know that Docker has a GELF logging driver.

However, I cannot figure out how to parse the log level (ERROR, WARNING, or DEBUG) out of the message and put it into a new field, e.g. log_level, in the log message before Docker sends it to ELK.

The log message should look something like this:

{
  "client": "127.0.0.1",
  "user": "frank",
  "timestamp": "2000-10-10 13:55:36 -0700",
  "method": "GET",
  "uri": "/apache_pb.gif",
  "protocol": "HTTP/1.0",
  "status": 200,
  "size": 2326,
  "message": "[ERROR] Error connecting to MongoDB",
  "_logLevel" : "ERROR"
}


Here "_logLevel": "ERROR" is what Docker should have added before sending to ELK.

Thanks.



1 answer


I think you are confusing what Docker does for you and what logstash (or potentially logspout) is used for. The Docker GELF driver adds the following fields: hostname, container id, container name, image id, image name, created (container creation time), and level (6 for stdout, 3 for stderr; not to be confused with your application's log level). These things are known to Docker. Docker has no idea about your user or client; those fields are not generated by the GELF driver or by Docker.
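
For illustration, a GELF record emitted by the driver for one stdout line might look roughly like this (a sketch with made-up values; the exact set of "_"-prefixed fields can vary with the Docker version):

{
  "version": "1.1",
  "host": "myhost",
  "short_message": "[ERROR] Error connecting to MongoDB",
  "timestamp": 1497038366.123,
  "level": 6,
  "_command": "/bin/sh -c …",
  "_container_id": "9e3c5d…",
  "_container_name": "my-app",
  "_created": "2017-06-09T20:39:26Z",
  "_image_id": "sha256:…",
  "_image_name": "my-app:latest",
  "_tag": ""
}

Note that there is no application log level here; "level" is only the stream-based value described above.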


To achieve what you want, you'll have to use the grok filter in logstash:

My messages are in this log format:

${date:format=yyyy-MM-dd HH:mm:ss.fff} | ${correId} | ${level} | ${callSite} | ${Message}

And I run logstash from docker-compose like this:



  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.1
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "12201:12201"
      - "12201:12201/udp"
    # Note: "$$" is how docker-compose escapes a literal "$", so the grok
    # pattern below ends in a "$" end-of-line anchor once compose expands it.
    entrypoint: logstash -e 'input { gelf { } }
                        filter {
                                grok {
                                    match => ["message", "%{SPACE}%{DATESTAMP:timestamp}%{SPACE}\|%{SPACE}%{DATA:correlation_Id}%{SPACE}\|%{SPACE}%{DATA:log_level}%{SPACE}\|%{SPACE}%{DATA:call_site}%{SPACE}\|%{SPACE}%{DATA:message}%{SPACE}$$"]
                                    overwrite => [ "message" ]
                                }
                                date {
                                    locale => "en"
                                    match => ["timestamp", "dd-MM-YYYY HH:mm:ss:SSS"]
                                    target => "@timestamp"
                                    remove_field => [ "timestamp" ]
                                }
                        }
                        output { stdout { } elasticsearch { hosts => ["http://elasticsearch:9200"] } }'


And here is how I start a container that delivers logs in that format (everything is identical on every line except the date):

docker run --log-driver=gelf --log-opt gelf-address=udp://0.0.0.0:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N" | xargs printf "%s %s | 51c489da-2ba7-466e-abe1-14c236de54c5 | INFO | HostingLoggerExtensions.RequestFinished    | Request finished in 35.1624ms 200 application/json; charset=utf-8 message end\n"; sleep 1 ; done'
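
After the grok and date filters run, the resulting event should carry fields roughly like this (a sketch based on the sample line above; the GELF fields added by Docker are omitted):

{
  "@timestamp": "…",
  "correlation_Id": "51c489da-2ba7-466e-abe1-14c236de54c5",
  "log_level": "INFO",
  "call_site": "HostingLoggerExtensions.RequestFinished",
  "message": "Request finished in 35.1624ms 200 application/json; charset=utf-8 message end"
}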


Hope this helps you get started. Make sure you start the containers that produce the logs after logstash is up.

You may also want to read the grok documentation for more information.
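
If your messages really look like the one in your question ("[ERROR] Error connecting to MongoDB"), a much simpler pattern should do. A minimal sketch, assuming the level always comes first in square brackets (LOGLEVEL is a stock grok pattern):

filter {
  grok {
    # capture the bracketed level into log_level, keep the rest of the line
    match => ["message", "\[%{LOGLEVEL:log_level}\]%{SPACE}%{GREEDYDATA:log_message}"]
  }
}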
