Is Your Logging a Mess? This One-Click Fix Will Make You a Hero

May 19, 2024

Intro

The demand for efficient, scalable logging and observability solutions has never been higher. With a focus on simplifying deployment, this guide offers a streamlined, one-click installation of the EFK and ECK stacks, so customer teams can quickly spin up the toolset without a big overhead on the deployment side.

A one-click installation of this stack is crucial in some cases, for example when you need to deliver the installation package to a customer in the simplest possible way. We have needed this in the past and still do today: a simple 'one-click' installation of the following components on a Kubernetes cluster:

  1. Elasticsearch
  2. Kibana
  3. Fluentd
  4. Correct multi-line filtering for Java logs (Fluentd)
  5. ECK (Elastic Cloud on Kubernetes) operator
  6. Built-in imported data-view & dashboards

We couldn’t find any installation that fits all these needs.

So we decided to create one → https://github.com/develeap/efk-stack

This project includes everything needed to install the EFK stack on a Kubernetes cluster.

Run it

Requirements

  1. Helm 3
  2. Kubernetes config file configured to the cluster
  3. Kubernetes cluster with volume management (a StorageClass capable of dynamic provisioning; see the quick checks below)
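
You can verify these prerequisites from your shell before installing. A minimal sketch; the StorageClass names you see will depend on your cluster:

# Check that Helm 3 is installed
helm version --short

# Check that kubectl points at the intended cluster
kubectl config current-context

# Check that the cluster can provision volumes dynamically
kubectl get storageclass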

Installation

To install the stack, run this simple command (for more installation details, please visit the Git repository):

./install.sh -n NAMESPACE -p YOURPWD
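
Once the script finishes, you can confirm that everything came up. A minimal sketch, assuming the components landed in the namespace you passed with -n (the Kibana service name below follows the usual ECK naming convention and may differ in your release):

# All Elasticsearch, Kibana, and Fluentd pods should reach Running/Ready
kubectl get pods -n NAMESPACE

# Port-forward Kibana locally and open http://localhost:5601
kubectl port-forward -n NAMESPACE svc/kibana-kb-http 5601:5601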

Configurations to notice

We implemented items 4 (Java multi-line log filtering) and 6 (imported data view) using configurations you should be aware of if you want to change them.

Java multi-line log collection and filtering

We are overriding the default fluentd config file using a Kubernetes ConfigMap: https://github.com/develeap/efk-stack/blob/main/efk-stack/templates/fluentd-configmap.yaml
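
For orientation, the override follows the standard pattern: the full fluent.conf is shipped in a ConfigMap and mounted into the Fluentd DaemonSet. A minimal sketch, with a hypothetical name (check the linked manifest for the actual one):

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config   # hypothetical name; see the repository manifest
data:
  fluent.conf: |
    # The full Fluentd pipeline goes here, including the
    # multi-line filter/match shown below.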

The section that catches Java multi-line logs is the following:

<filter **>
  @type concat
  key log
  multiline_start_regexp /(\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{0,3})/
  flush_interval 1
  timeout_label "@NORMAL"
</filter>
<match **>
  @type relabel
  @label @NORMAL
</match>

This Fluentd configuration snippet defines two main components: a filter and a match directive, each targeting all incoming events (as indicated by **). Here’s a breakdown of what each part does:

Filter Section

  • <filter **>: This line starts the filter directive, applying it to all incoming events regardless of their tag.
  • @type concat: Specifies the filter plugin to use. The concat plugin concatenates multiline messages into a single event. This is particularly useful for combining multi-line log messages that are part of the same event but were split across multiple lines (e.g., stack traces).
  • key log: This indicates the key within the event record that contains the string to be processed. The plugin will look for multiline messages under this key.
  • multiline_start_regexp /(\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{0,3})/: This regular expression matches the start of a multiline message. It looks for a timestamp at the beginning of a line, which typically signifies the start of a new log entry. The pattern matches a date-and-time format such as YYYY-MM-DD HH:MM:SS.mmm (see the example after this list).
  • flush_interval 1: This sets the interval, in seconds, at which the buffer will be flushed. In this case, it’s set to 1 second. This means the plugin will automatically flush concatenated messages if no new matching lines are found within this period.
  • timeout_label "@NORMAL": If the timeout occurs (defined by the flush_interval), the concatenated message is forwarded to the label @NORMAL. This label is used to determine the next step in the processing pipeline for these events.
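
To make this concrete, here is a hedged example (the exact layout depends on your application's log pattern). Given these raw lines:

2024-05-19 12:00:01.123 ERROR Request failed
java.lang.NullPointerException: user was null
    at com.example.App.handle(App.java:42)
2024-05-19 12:00:02.456 INFO Recovered

Only the first and last lines match multiline_start_regexp, so the filter concatenates the first three lines into a single event, keeping the stack trace attached to its log entry; the INFO line starts a new event.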

Match Section

  • <match **>: This starts the match directive, targeting all events regardless of their tag.
  • @type relabel: Specifies the output plugin to use. The relabel type is used for re-routing events to another label. This is a way to organize the flow of data without actually processing or outputting the data.
  • @label @NORMAL: This directs the matching events to the @NORMAL label. It's a reference to the label specified in the timeout_label of the filter section, and it is where you would define further processing for these events (a sketch of such a label section follows below).
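
For reference, that further processing lives inside a <label @NORMAL> section. A minimal sketch using the fluent-plugin-elasticsearch output; the host, credentials, and index settings below are placeholders, not the repository's actual values:

<label @NORMAL>
  <match **>
    # Ship the (now correctly concatenated) events to Elasticsearch
    @type elasticsearch
    host elasticsearch-es-http   # hypothetical ECK-style service name
    port 9200
    scheme https
    user elastic
    password YOURPWD
    logstash_format true
  </match>
</label>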

Built-in imported data-view & dashboards

We import a data view using a Kubernetes Job and a ConfigMap containing a script. In this project we do not import dashboards, but it is possible to do so using the same method.

For the ConfigMap script: https://github.com/develeap/efk-stack/blob/main/efk-stack/templates/kibana-data-configmap.yaml

For the Kubernetes job: https://github.com/develeap/efk-stack/blob/main/efk-stack/templates/kibana-import-job.yaml
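
At its core, the Job's script calls Kibana's HTTP API from inside the cluster. As a hedged illustration (the service URL, index pattern, and credentials here are placeholders; the repository's script may use different values), creating a data view looks roughly like this:

# Create a data view via Kibana's data views API
curl -X POST "http://kibana-kb-http:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u "elastic:YOURPWD" \
  -d '{"data_view": {"title": "logstash-*", "name": "Cluster logs"}}'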

For more details about this part, please read the following blog post about auto-loading objects into the Kibana-Elasticsearch stack.

We hope it’ll help you as it helped us. If you have any feature requests or ideas, please feel free to open a PR or contact us.

 
