Nginx Logstash Configuration
Ship logs from NGINX to Logstash
Configure Filebeat to ship logs from an NGINX web server to Logstash and Elasticsearch.
Step 1 - Install Filebeat
deb (Debian/Ubuntu/Mint)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-amd64.deb
sudo dpkg -i filebeat-oss-7.15.1-amd64.deb
rpm (CentOS/RHEL/Fedora)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-x86_64.rpm
sudo rpm -vi filebeat-oss-7.15.1-x86_64.rpm
macOS
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.15.1-darwin-x86_64.tar.gz
tar xzvf filebeat-oss-7.15.1-darwin-x86_64.tar.gz
Windows
- Download and extract the Windows zip file.
- Rename the filebeat-<version>-windows directory to Filebeat.
- Open a PowerShell prompt as an Administrator.
- Run the following to install Filebeat as a Windows service:
.\install-service-filebeat.ps1
If script execution is disabled on your system, set the execution policy for the current session first:
PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
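Once installed, a quick way to confirm Filebeat is available is to check the version it reports (the commands below assume the install locations used above):
# deb/rpm
filebeat version
# macOS (run from the extracted archive directory)
./filebeat version
# Windows (run from the extracted archive directory)
.\filebeat.exe version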
Step 2 - Enable the NGINX Module
There are several built-in Filebeat modules you can use. You will need to enable the nginx module.
deb/rpm
sudo filebeat modules list
sudo filebeat modules enable nginx
macOS
cd <EXTRACTED_ARCHIVE>
./filebeat modules list
./filebeat modules enable nginx
Windows
cd <EXTRACTED_ARCHIVE>
.\filebeat.exe modules list
.\filebeat.exe modules enable nginx
Additional module configuration can be done using the per-module config files located in the modules.d folder; most commonly this is used to read logs from a non-default location.
deb/rpm /etc/filebeat/modules.d/
mac/win <EXTRACTED_ARCHIVE>/modules.d/
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/custom/path/to/logs"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/custom/path/to/logs"]

  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
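When var.paths is left empty, the module falls back to the OS defaults; on most Linux package installs these are the standard /var/log/nginx locations. As an illustrative sketch only (adjust the paths to your own setup), explicitly pinning those locations would look like:
- module: nginx
  access:
    enabled: true
    # Typical location on Linux package installs (assumption - adjust to your setup)
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    # Typical location on Linux package installs (assumption - adjust to your setup)
    var.paths: ["/var/log/nginx/error.log*"]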
Step 3 - Copy Configuration File
The configuration file below is pre-configured to send data to your Logit.io Stack via Logstash.
Copy the configuration file below and overwrite the contents of filebeat.yml.
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== Outputs ===================================
# ------------------------------ Logstash Output -------------------------------
Note: your Stack-specific Logstash output settings are shown here when you view this guide from your Logit.io dashboard. If your stack is missing the required input for this data source, talk to support to add the input.
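As a rough illustration only, a Logstash output section in filebeat.yml typically takes the following shape; the host, port and SSL setting below are placeholders and must be replaced with the values shown in your Logit.io Stack settings:
output.logstash:
  # Placeholder endpoint - use the Logstash host and port from your Stack settings
  hosts: ["your-logstash-host.example.com:5044"]
  # Logit.io stacks normally require TLS; enable it if your Stack settings indicate so
  ssl.enabled: true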
Step 4 - Start Filebeat
Ok, time to start ingesting data!
deb/rpm
sudo systemctl enable filebeat
sudo systemctl start filebeat
macOS
./filebeat
Windows
PS C:\Program Files\Filebeat> Start-Service filebeat
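After starting, it is worth confirming the service is actually running (deb/rpm and Windows shown; on macOS Filebeat runs in the foreground, so you will see its output directly):
# deb/rpm
sudo systemctl status filebeat
# Windows (PowerShell)
Get-Service filebeat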
Step 5 - How to diagnose no data in your Stack
If you don't see data appearing in your Stack after following the steps above, visit the Help Centre guide for steps to diagnose no data appearing in your Stack, or chat to support now.
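A few first checks that often pinpoint the problem are validating the configuration, testing the connection to the configured output, and following Filebeat's own logs (deb/rpm commands shown; use ./filebeat or .\filebeat.exe for the test subcommands on macOS/Windows):
# Validate filebeat.yml
sudo filebeat test config
# Test connectivity to the configured Logstash output
sudo filebeat test output
# Follow the Filebeat service logs for errors
sudo journalctl -u filebeat -f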
Step 6 - NGINX dashboard
The NGINX module comes with predefined Kibana dashboards. To view your dashboards for any of your Logit.io stacks, launch Kibana and choose Dashboards.
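If the NGINX dashboards are not already present in Kibana, they can usually be loaded by Filebeat itself; this assumes setup.kibana in filebeat.yml points at your Stack's Kibana endpoint:
sudo filebeat setup --dashboards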
Step 7 - NGINX Logs Overview
NGINX is an open-source HTTP server and reverse proxy that was created by Igor Sysoev & released in 2004. It has gone on to power many of the web’s highest traffic sites (including Netflix, Google & WordPress) as it is a highly reliable server for enabling businesses to scale their operations.
Viewing NGINX log files can allow you to see spikes in 5XX/4XX status codes affecting the performance of your applications, and allow your Dev teams to drill down into the data to resolve errors. Analysing these at scale can rapidly drain your resources if your teams need to configure separate parsing, configuration, visualisation and reporting tools for a single large NGINX instance.
Many NGINX log analyzers can slow down the process of troubleshooting & increase time to resolution unnecessarily as they often struggle to process large amounts of log data. The Logit.io log management platform is built on ELK and can easily process large amounts of NGINX server data for root cause analysis.
Our platform is built to scale with your infrastructure, once data is migrated to your ELK Stack you’ll be able to benefit from automatic parsing with Logstash and visualise your NGINX metrics in Kibana. Alert on errors and notify your teams of spikes in real-time with our integrated alerting features that can send notifications to a variety of sources including Jira, Opsgenie, Slack, PagerDuty & Webhooks.
In case you need any further assistance with sending your NGINX data to Logstash & Elasticsearch we're here to help. Just get in touch with our support team via live chat & we'll be happy to assist.