
Send data via Kafka to your Logstash instance provided by Logit.io

Kafka

Collect and ship Kafka application logs to Logstash and Elasticsearch.

Filebeat is a lightweight shipper that enables you to send your Apache Kafka application logs to Logstash and Elasticsearch. Configure Filebeat using the pre-defined examples below to start sending and analysing your Apache Kafka application logs.

Step 1 - Install Filebeat

deb (Debian/Ubuntu/Mint)

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-amd64.deb
sudo dpkg -i filebeat-oss-7.15.1-amd64.deb

rpm (CentOS/RHEL/Fedora)

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-x86_64.rpm
sudo rpm -vi filebeat-oss-7.15.1-x86_64.rpm

macOS

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-darwin-x86_64.tar.gz
tar xzvf filebeat-oss-7.15.1-darwin-x86_64.tar.gz

Windows

  • Download and extract the Windows zip file.
  • Rename the filebeat-<version>-windows directory to Filebeat.
  • Open a PowerShell prompt as an Administrator.
  • Run the following to install as a Windows service:
.\install-service-filebeat.ps1
If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1.
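
To confirm the install before continuing, you can print the version (a quick check, shown for a deb/rpm install; run it from the extracted directory as ./filebeat or .\filebeat.exe for the archive installs):

filebeat version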

Step 2 - Enable the Kafka module

deb/rpm

sudo filebeat modules list
sudo filebeat modules enable kafka

macOS

cd <EXTRACTED_ARCHIVE>
./filebeat modules list
./filebeat modules enable kafka

Windows

cd <EXTRACTED_ARCHIVE>
.\filebeat.exe modules list
.\filebeat.exe modules enable kafka

Additional module configuration can be done using the per-module config files located in the modules.d folder. Most commonly this is used to read logs from a non-default location, as shown in the sketch after the default config below.

deb/rpm /etc/filebeat/modules.d/
mac/win <EXTRACTED_ARCHIVE>/modules.d/

- module: kafka
  # All logs
  log:
    enabled: true

    # Set custom paths for Kafka. If left empty,
    # Filebeat will look under /opt.
    #var.kafka_home:

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
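
To read logs from a non-default location, uncomment and set the vars above. A minimal sketch, assuming a hypothetical Kafka install under /usr/local/kafka (adjust the paths to your environment):

- module: kafka
  log:
    enabled: true
    # Hypothetical custom install location
    var.kafka_home: /usr/local/kafka
    # Hypothetical explicit log paths; these override the OS defaults
    var.paths:
      - /usr/local/kafka/logs/server.log
      - /usr/local/kafka/logs/controller.log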

Step 3 - Copy Configuration File

The configuration file below is pre-configured to send data to your Logit.io Stack via Logstash.

Copy the configuration file below and overwrite the contents of filebeat.yml.

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== Outputs ===================================
# ------------------------------ Logstash Output -------------------------------
No input available! Your stack is missing the required input for this data source. Talk to support to add the input.
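
Once the input has been added, this section is populated with your Stack's Logstash endpoint. For illustration only, a Filebeat Logstash output for a Logit.io Stack typically follows this shape (the host and port below are hypothetical placeholders, not a real endpoint; copy the actual values from your dashboard):

output.logstash:
  # Hypothetical placeholder endpoint; use the one shown in your Logit.io Stack settings
  hosts: ["your-stack-id-ls.logit.io:your-ssl-port"]
  loadbalance: true
  ssl.enabled: true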

Step 4 - Start Filebeat

Ok, time to start ingesting data!

deb/rpm

sudo systemctl enable filebeat
sudo systemctl start filebeat
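
To verify the service is running and follow its log output, the standard systemd commands work for deb/rpm installs:

sudo systemctl status filebeat
sudo journalctl -u filebeat -f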

macOS

./filebeat

Windows

PS C:\Program Files\Filebeat> Start-Service filebeat

Step 5 - How to diagnose no data in your Stack

If you don't see data appearing in your Stack after following the steps above, visit the Help Centre guide for steps to diagnose no data appearing in your Stack, or chat to support now.
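
A quick first check is to have Filebeat validate its configuration and its connection to the configured output (shown for deb/rpm; prefix with ./ from the extracted directory for the archive installs):

sudo filebeat test config
sudo filebeat test output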

Step 6 - Kafka dashboard

The Kafka module comes with predefined Kibana dashboards. To view your dashboards for any of your Logit.io stacks, launch Kibana and choose Dashboards.

(Screenshot: predefined Kibana dashboard)

Step 7 - Apache Kafka Logging Overview

Apache Kafka is a distributed streaming platform, written in Scala and Java, that is primarily used to build low-latency, real-time data streaming pipelines for applications and data lake engines.

Kafka lets users publish and subscribe to streams of records, decouple data producers from consumers, and sort aggregated data in chronological order for improved real-time processing. The platform is suited to processing many trillions of cross-system events per day, making it ideal as a big data solution.
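
To make the publish/subscribe model concrete, the console tools bundled with Kafka can produce and consume a stream of records on a topic. A quick sketch, assuming a broker on localhost:9092 and a hypothetical topic named test-events:

# Publish records to the topic (each line typed becomes a record)
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-events

# Subscribe and replay the stream from the beginning
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-events --from-beginning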

Kafka is one of the leading Apache projects and is used by enterprise-level businesses globally, including Uber, LinkedIn, Netflix and Twitter. Much of this infrastructure also uses Logstash, which works side by side with the platform, with Kafka acting as a buffer in front of Logstash for improved resilience.

The combined power of Elasticsearch, Logstash and Kibana forms the Elastic Stack, which can be used for efficient log analysis, as platform and Kafka broker logs contain vital information on the performance and overall health of your systems.

Our hosted Elastic Stack solution can help monitor and visualise Kafka logs and alert you to performance issues and broker degradation in real time. Logit.io's built-in Kibana can easily generate dashboards for capturing various Kafka log messages along with their severity counts.

If you need any assistance with analysing your Kafka logs (no matter if they're server, utils or state-change logs) we're here to help. Feel free to get in touch by contacting the Logit.io help team via chat and we'll be happy to help you start analysing your log data.
