
NGINX Logstash Configuration

Ship logs from NGINX to Logstash

Configure Filebeat to ship logs from an NGINX web server to Logstash and Elasticsearch.

Step 1 - Install Filebeat

deb (Debian/Ubuntu/Mint)

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-amd64.deb
sudo dpkg -i filebeat-oss-7.15.1-amd64.deb

rpm (CentOS/RHEL/Fedora)

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-x86_64.rpm
sudo rpm -vi filebeat-oss-7.15.1-x86_64.rpm


mac

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-darwin-x86_64.tar.gz
tar xzvf filebeat-oss-7.15.1-darwin-x86_64.tar.gz


win

  • Download the Filebeat Windows zip file from the official downloads page.
  • Extract the contents of the zip file into C:\Program Files.
  • Rename the filebeat-<version>-windows directory to Filebeat.
  • Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator). If you are running Windows XP, you may need to download and install PowerShell.
  • Run the following commands to install Filebeat as a Windows service:

cd 'C:\Program Files\Filebeat'
.\install-service-filebeat.ps1

If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1.

Don't see your system? Check out the official downloads page for more options (including 32-bit versions).

Step 2 - Locate Configuration File

deb/rpm /etc/filebeat/filebeat.yml
mac/win <EXTRACTED_ARCHIVE>/filebeat.yml

Step 3 - Enable the NGINX Module

There are several built-in Filebeat modules you can use. You will need to enable the nginx module.


deb/rpm

sudo filebeat modules list
sudo filebeat modules enable nginx

mac

./filebeat modules list
./filebeat modules enable nginx

win

.\filebeat.exe modules list
.\filebeat.exe modules enable nginx

Additional module configuration can be done using the per-module config files located in the modules.d folder; most commonly this is used to read logs from a non-default location.

deb/rpm /etc/filebeat/modules.d/
mac/win <EXTRACTED_ARCHIVE>/modules.d/

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/custom/path/to/logs"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/custom/path/to/logs"]

  # Ingress-nginx controller logs. This is disabled by default.
  # It could be used in Kubernetes environments to parse ingress-nginx logs.
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.

Step 4 - Configure Output

We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the Elasticsearch output block in filebeat.yml.

## Comment out the Elasticsearch output
#output.elasticsearch:
#  hosts: ["localhost:9200"]
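The Logstash output block itself is not shown on this page. A minimal sketch of what it typically looks like, assuming a hypothetical endpoint your-logstash-host.example.com listening on the default Beats port 5044 (replace both with the values provided for your own stack):

```yaml
# Hypothetical example - substitute the host and port
# provided for your own Logstash instance.
output.logstash:
  hosts: ["your-logstash-host.example.com:5044"]
  # Enable TLS if your Logstash endpoint requires it:
  #ssl.enabled: true
```

Only one output may be enabled at a time, which is why the Elasticsearch block above must stay commented out.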

Step 5 - Validate Configuration

Let's check that the configuration file is syntactically correct by running Filebeat directly in the terminal. If the file is invalid, Filebeat will print an error loading config file message with details on how to correct the problem.


deb/rpm

sudo filebeat -e -c /etc/filebeat/filebeat.yml

mac

./filebeat -e -c filebeat.yml

win

.\filebeat.exe -e -c filebeat.yml

Step 6 - Start Filebeat

Ok, time to start ingesting data!


deb/rpm

sudo systemctl enable filebeat
sudo systemctl start filebeat

mac

./filebeat

win

PS C:\Program Files\Filebeat> Start-Service filebeat

Step 7 - How to Diagnose No Data in Your Stack

If you don't see data appearing in your Stack after following these steps, visit the Help Centre guide on diagnosing no data appearing in your Stack, or contact support.

Step 8 - NGINX Dashboard

The NGINX module comes with predefined Kibana dashboards. To view your dashboards for any of your stacks, launch Kibana and choose Dashboards.

[Screenshot: predefined Kibana dashboard]

Step 9 - NGINX Logs Overview

NGINX is an open-source HTTP server and reverse proxy that was created by Igor Sysoev & released in 2004. It has gone on to power many of the web's highest-traffic sites (including Netflix, Google & WordPress), as it is a highly reliable server that enables businesses to scale their operations.

Viewing NGINX log files allows you to spot spikes in 5XX/4XX status codes affecting the performance of your applications, and lets your Dev teams drill down into the data to resolve errors. Analysing these at scale can rapidly drain your resources if your teams need to configure separate parsing, configuration, visualisation and reporting tools for a single large NGINX instance.
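As a concrete illustration of the kind of analysis described above, here is a short sketch (not part of the platform, and assuming NGINX's default combined log format) that tallies 4XX/5XX status codes from access-log lines using only Python's standard library:

```python
import re
from collections import Counter

# Matches the start of NGINX's default "combined" access log format:
# remote_addr - remote_user [time_local] "request" status body_bytes_sent ...
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def count_error_statuses(lines):
    """Count 4XX and 5XX status codes across access-log lines."""
    counts = Counter()
    for line in lines:
        match = LINE_RE.match(line)
        if match and match.group("status")[0] in "45":
            counts[match.group("status")] += 1
    return counts

sample = [
    '203.0.113.7 - - [10/Oct/2022:13:55:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"',
    '203.0.113.7 - - [10/Oct/2022:13:55:40 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/7.68.0"',
    '198.51.100.2 - - [10/Oct/2022:13:56:01 +0000] "POST /api HTTP/1.1" 502 157 "-" "curl/7.68.0"',
]
print(count_error_statuses(sample))  # one 404 and one 502; the 200 is ignored
```

In practice the Filebeat nginx module does this parsing for you at ingest time; a script like this is only useful for quick ad-hoc checks on raw log files.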

Many NGINX log analyzers can slow down the process of troubleshooting & increase time to resolution unnecessarily, as they often struggle to process large amounts of log data. Our log management platform is built on ELK and can easily process large amounts of NGINX server data for root cause analysis.

Our platform is built to scale with your infrastructure; once your data is migrated to your ELK Stack you'll be able to benefit from automatic parsing with Logstash and visualise your NGINX metrics in Kibana. Alert on errors and notify your teams of spikes in real time with our integrated alerting features, which can send notifications to a variety of destinations including Jira, Opsgenie, Slack, PagerDuty & Webhooks.

If you need any further assistance with sending your NGINX data to Logstash & Elasticsearch, we're here to help. Just get in touch with our support team via live chat & we'll be happy to assist.


© 2022 Ltd, All rights reserved.