
Graylog in k8s / Rancher


You can use the fluentd-kubernetes-daemonset to send all k8s logs into graylog. While this captures almost everything, it has drawbacks:

  • produces a lot of log data
  • data is generally unstructured (you need to create extractors to use the data effectively)
  • only standard container (app) log data is captured (if an app logs to /app/logs rather than STDOUT/STDERR, it will not be captured by default)

It is tempting to use the fluentd-kubernetes-daemonset; however, I recommend against it for general use-cases: you either end up with a lot of log data that no one looks at, or you spend a lot of time creating custom extractors. Graylog works much better if you send in semi-structured gelf data, since it does not need to parse messages with extractors as heavily, which reduces the processing needed (it can handle more messages with fewer resources).
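To make the gelf point concrete, here is a minimal sketch of sending a semi-structured gelf message straight to a graylog GELF UDP input, using only the Python stdlib. The host/port and field names are illustrative, not from this setup:

```python
import json
import socket
import zlib

def gelf_message(host, short_message, **fields):
    """Build a minimal GELF 1.1 payload; custom fields get a '_' prefix."""
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    msg.update({"_" + k: v for k, v in fields.items()})
    return msg

def send_gelf_udp(msg, server="127.0.0.1", port=12201):
    # Graylog GELF UDP inputs accept zlib-compressed JSON datagrams.
    payload = zlib.compress(json.dumps(msg).encode("utf-8"))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (server, port))

# app_name/env arrive as first-class fields in graylog - no extractor needed.
send_gelf_udp(gelf_message("my-app", "user login ok", app_name="my-app", env="prod"))
```

Because fields like `_app_name` arrive pre-structured, graylog can stream and filter on them without any parsing work.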

Rancher does not yet provide the ability to log to graylog out of the box; however, there are options:

  • fluentd-kubernetes-daemonset
  • individual container logging
  • sidecar

Sidecar

Bring up a sidecar for each container that monitors the main container's logs and forwards what you need to graylog.

I'm not a fan of this, as it requires an additional container for every app (or a master log container, which defeats the purpose of microservices and creates a maintenance nightmare/bottleneck). If the sidecar simply watches a file and ships its data to graylog (say via syslog) without reformatting it into gelf and parsing out usable/needed fields, it would not add much complexity or cluster cost, but it is also not very useful: the fluentd daemonset already offers more than that out of the box.
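For reference, the "useful" version of a sidecar (tail a file, wrap each line in gelf with pre-set fields) is only a few dozen lines. This is a sketch under assumed names - the log path, app name, and graylog host are all placeholders:

```python
import json
import socket
import time

def follow(path):
    """Yield lines appended to a log file, like `tail -f` (a sketch;
    no rotation handling)."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

def to_gelf(line, app_name):
    """Wrap one raw log line in a GELF 1.1 payload."""
    return {
        "version": "1.1",
        "host": app_name,
        "short_message": line,
        "_app_name": app_name,  # pre-structured: no extractor needed in graylog
    }

def run(path, app_name, server=("graylog.example.com", 12201)):  # placeholder host
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for line in follow(path):
        sock.sendto(json.dumps(to_gelf(line, app_name)).encode("utf-8"), server)
```

Even so, you would be maintaining this per app (log formats differ), which is exactly the maintenance cost argued against above.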

Individual Container Logging

Each container is responsible for its own logging.

  • benefits
    • can customize logging per needs (nginx access/error logs, python app logs, etc.)
      • this allows easy handling of logs within graylog (if APP_NAME is already a field there is no need to extract it for every message)
  • drawbacks
    • need to implement logging on each container
      • may be difficult for some containers (e.g. old monolithic apps ported to docker)
    • app will likely require custom code (python apps require graylog pkg/config)
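As a sketch of what that custom code looks like for a python app, here is a stdlib-only gelf handler wired into the standard `logging` module. In practice you would more likely use a ready-made package; all names here (app name, host, port) are illustrative:

```python
import json
import logging
import socket

class GelfUdpHandler(logging.Handler):
    """Minimal sketch of a GELF-over-UDP logging handler (stdlib only)."""

    def __init__(self, app_name, host="127.0.0.1", port=12201):
        super().__init__()
        self.app_name = app_name
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def format_gelf(self, record):
        # APP_NAME arrives as a field, so graylog never has to extract it.
        return {
            "version": "1.1",
            "host": self.app_name,
            "short_message": record.getMessage(),
            "level": record.levelno,
            "_app_name": self.app_name,
            "_logger": record.name,
        }

    def emit(self, record):
        payload = json.dumps(self.format_gelf(record)).encode("utf-8")
        self.sock.sendto(payload, self.addr)

log = logging.getLogger("demo")
log.addHandler(GelfUdpHandler(app_name="my-python-app"))
log.warning("disk nearly full")
```

The win is on the graylog side: every message lands with `_app_name` and `_logger` already set, ready for streams and searches.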

fluentd-kubernetes-daemonset

The fluentd-kubernetes-daemonset is a fluentd daemonset which sends all logging data to graylog in gelf format.

  • benefits
    • simple/easy to set up
    • logs everything in k8s (no need to ensure those old apps can/are writing to graylog)
  • drawbacks
    • logs everything in k8s (you will need to use streams and filters extensively to separate log data)
    • while data is sent in gelf format, not all useful info is parsed into fields - you will likely need to write a lot of extractors
      • you can add labels to your containers and set up a general kubernetes (data) extractor - this makes labels into fields which can then be stream- or search-filtered

Create the service account, cluster role/binding and daemonset:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      # Enable tolerations if you want to run the daemonset on master nodes.
      # Recommended to disable on managed k8s.
      # tolerations:
      # - key:
      #   effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-graylog-1.1
        imagePullPolicy: IfNotPresent
        env:
          - name: FLUENT_GRAYLOG_HOST
            value: ""  # set to your graylog host
          - name: FLUENT_GRAYLOG_PORT
            value: "12201"
        resources:
          requests:
            cpu: 200m
            memory: 0.5Gi
          # Less memory leads to child process problems.
          limits:
            cpu: 1000m
            memory: 1Gi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF
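The labels-to-fields tip mentioned above can be sketched as a pod-template fragment in an app's own deployment (label names here are illustrative, not part of this setup):

```
# Labels on an app's pod template ride along with every log line the
# daemonset collects; a general kubernetes extractor in graylog can then
# turn them into searchable fields.
  template:
    metadata:
      labels:
        app: my-app          # becomes a field, usable in streams/filters
        team: platform
        environment: prod
```

Once these exist as fields, stream rules can route each app's logs without writing per-app extractors.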

Investigation Scraps

# Logging for dxc

## Needs
- collect access logs for dradux-site (how many hits this week/month/year)
- see error logs of any app
- see debug info for any app

## Questions
- do we need it for all apps or only some?

## Keys
- if we collect all, we clearly need ability to set retention period for each app

## Where to Put it
- in dxc: need db's for mongodb and elasticsearch and disk space for???
  - es data is what grows a lot...mongo and graylog itself should not be big
- rancher: does it have extra resources?

## Graylog Server Requirements
Initially it should not take a lot of cpu/memory; disk will likely be the critical component.
- cpu:
- memory:
- disk:


I want to spin up a local rancher and graylog instance, configure rancher to log to graylog, and test it all out. I want to know what sort of data and control we have over each pod's, namespace's, and project's logging info - this will show whether it's worth spinning up a graylog instance in dxc.

- can find no way to go from k8s to graylog directly
- can we go from k8s to (kafka|syslog|fluentd) to graylog?
  - no syslog
  - kafka seems workable (no external dependencies I think; uses zookeeper; can run persistence.enabled=false; how do we set a forwarder to send to graylog???)
  - HH
  • using graylog for centralized logs in k8s
  • fluentd-kubernetes-daemonset
  • dockerhub
  • graylog
  • nginx error log grok pattern
  • parsing access.log and error.logs using linux commands
  • graylog-contentpack-nginx