Journalbeat Configuration
A lightweight shipping agent designed for systemd journals
Journalbeat is a lightweight, open source shipping agent that lets you ship log data from systemd journals stored on Linux operating systems to one or more destinations, including Logstash.
Please note that this beat is currently in an experimental phase and may be changed or removed in the future. This guide uses version 7.9.3 of Journalbeat and has been tested using Ubuntu.
Follow this step-by-step guide to ship logs from your system to Logit.io:
Step 1 - Install Journalbeat
Journalbeat is intended for Linux operating systems; no versions of Journalbeat exist for Windows or macOS.
The following curl commands download version 7.9.3 of Journalbeat, but you can install the latest version using the package manager specific to your OS, or by downloading the package manually.
deb (Debian/Ubuntu)
curl -L -O https://artifacts.elastic.co/downloads/beats/journalbeat/journalbeat-7.9.3-amd64.deb
sudo dpkg -i journalbeat-7.9.3-amd64.deb
rpm (Redhat/Centos/Fedora)
curl -L -O https://artifacts.elastic.co/downloads/beats/journalbeat/journalbeat-7.9.3-x86_64.rpm
sudo rpm -vi journalbeat-7.9.3-x86_64.rpm
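As an optional sanity check before configuring anything, you can confirm the package installed correctly using your package manager:
deb (Debian/Ubuntu)
sudo dpkg -s journalbeat
rpm (Redhat/Centos/Fedora)
rpm -q journalbeat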
Step 2 - Configure Journalbeat
Locate and open the journalbeat configuration file:
/etc/journalbeat/journalbeat.yml
The following example will send all local journal log data to Logstash. By specifying an empty array as the paths value, Journalbeat falls back to the default local journal location (the directory containing all locally persisted journals, typically /var/log/journal). This is a sensible default, but you can specify other paths to files or directories for Journalbeat to crawl if required (an example with custom paths follows the default configuration below).
journalbeat.inputs:
  # Paths that should be crawled and fetched (can be files or directories).
  # When setting a directory, all journals under it are merged.
  # When empty, Journalbeat reads from the local journal.
  - paths: []
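If you do want to point Journalbeat at specific journal files or directories, list them in the paths array. The locations below are purely illustrative examples, not defaults; replace them with your own:
journalbeat.inputs:
  # Illustrative paths only - replace with your own journal files or directories.
  # A directory entry merges all journals found beneath it.
  - paths:
      - "/var/log/journal"
      - "/var/log/custom/example.journal"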
Step 3 - Configure Output
We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the Elasticsearch output block, then add a Logstash output (a sketch follows the commented block below).
## Comment out elasticsearch output
#output.elasticsearch:
# hosts: ["localhost:9200"]
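With the Elasticsearch output disabled, add (or uncomment) a Logstash output block. The snippet below is a minimal sketch; your-logstash-host and your-ssl-port are placeholders for the Logstash endpoint details shown in your Logit.io Stack settings:
output.logstash:
  # Placeholder endpoint - use the Logstash host and SSL port for your Stack.
  hosts: ["your-logstash-host:your-ssl-port"]
  loadbalance: true
  ssl.enabled: true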
Step 4 - Validate Configuration
Let's check that the configuration file is syntactically correct by running Journalbeat directly in the terminal. If the file is invalid, Journalbeat will print an 'error loading config file' message with details on how to correct the problem.
deb/rpm
sudo journalbeat -e -c /etc/journalbeat/journalbeat.yml
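Alternatively, libbeat-based Beats of this generation ship a test config subcommand; assuming it is present in your Journalbeat build, the following should report whether the configuration parses cleanly:
deb/rpm
sudo journalbeat test config -c /etc/journalbeat/journalbeat.yml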
Step 5 - Start Journalbeat
Ok, time to start ingesting data!
deb/rpm
sudo systemctl enable journalbeat
sudo systemctl start journalbeat
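Once started, you can confirm the service is running and follow its recent output using standard systemd tooling:
sudo systemctl status journalbeat
sudo journalctl -u journalbeat -f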
Step 6 - Check Logit.io for your logs
You should now be able to view your data.
If you don't see logs, take a look at 'How to diagnose no data in Stack' below for common issues.
Step 7 - How to diagnose no data in Stack
If you don't see data appearing in your Stack after following the steps, visit the Help Centre guide for steps to diagnose no data appearing in your Stack, or chat to support now.
Step 8 - Journalbeat Overview
Journalbeat is a recent addition to Elastic’s Beats family that collects log entries from systemd journals. It is based on the libbeat framework and is expected to be repurposed from a standalone Beat into a Filebeat module in the near future.
Journalbeat monitors the journal locations you specify and collects logs for forwarding to Logstash and further processing in Elasticsearch. It is considered easier to use than better-known Beats such as Filebeat, but Elastic marks it as experimental, so it is subject to change.
Journalbeat can be used to ship logs from Kubernetes, but some users may not see their data reflected in Elasticsearch and may prefer Filebeat or Fluentd for a more seamless logging experience with wider community support.
If you require any further assistance using Journalbeat, or any other Elastic Beats, to ship your log events, we're here to help. Feel free to reach out to our support team via our dedicated Help Centre or live chat and we’ll be happy to assist.