Configure Filebeat to ship logs from an NGINX web server to Logstash and Elasticsearch.
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.8.1-amd64.deb
sudo dpkg -i filebeat-oss-7.8.1-amd64.deb
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.8.1-x86_64.rpm
sudo rpm -vi filebeat-oss-7.8.1-x86_64.rpm
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.8.1-darwin-x86_64.tar.gz
tar xzvf filebeat-oss-7.8.1-darwin-x86_64.tar.gz
- Download the Filebeat Windows zip file from the official downloads page.
- Extract the contents of the zip file into C:\Program Files.
- Rename the extracted filebeat-<version>-windows directory to Filebeat.
- Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator). If you are running Windows XP, you may need to download and install PowerShell.
- Run the following commands to install Filebeat as a Windows service:
cd 'C:\Program Files\Filebeat'
.\install-service-filebeat.ps1
If script execution is disabled on your system, you may need to set the execution policy for the current session before the script can run:
PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
There are several built-in Filebeat modules you can use. For this guide you will need to enable the nginx module.
sudo filebeat modules list
sudo filebeat modules enable nginx
cd <EXTRACTED_ARCHIVE>
./filebeat modules list
./filebeat modules enable nginx
cd <EXTRACTED_ARCHIVE>
.\filebeat.exe modules list
.\filebeat.exe modules enable nginx
Additional module configuration can be done using the per-module config files located in the modules.d folder; most commonly this is used to read logs from a non-default location.
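As a sketch, the nginx.yml file in modules.d might look like the following. The paths shown are assumptions — point var.paths at wherever your NGINX logs actually live, or leave the overrides out entirely to use the module defaults:

```yaml
# modules.d/nginx.yml — example only; the paths below are assumptions
- module: nginx
  access:
    enabled: true
    # Override only if your access logs are in a non-default location
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]
```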
We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the elasticsearch output block.
# Comment out the elasticsearch output
#output.elasticsearch:
#  hosts: ["localhost:9200"]
Uncomment and change the logstash output to match below.
output.logstash:
  hosts: ["your-logstash-host:your-ssl-port"]
  loadbalance: true
  ssl.enabled: true
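If you are running your own Logstash rather than a hosted stack, the receiving side needs a beats input listening on the port you configured above. A minimal sketch, assuming self-managed certificates (the port and cert paths here are hypothetical):

```
# Hypothetical /etc/logstash/conf.d/beats.conf — port and cert paths are assumptions
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```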
Let's check that the configuration file is syntactically correct by running Filebeat directly in the terminal.
If the file is invalid, Filebeat will print an "error loading config file" error message with details on how to correct the problem.
sudo filebeat -e -c /etc/filebeat/filebeat.yml
cd <EXTRACTED_ARCHIVE>
./filebeat -e -c filebeat.yml
cd <EXTRACTED_ARCHIVE>
.\filebeat.exe -e -c filebeat.yml
Ok, time to start ingesting data!
sudo systemctl enable filebeat
sudo systemctl start filebeat
PS C:\Program Files\Filebeat> Start-Service filebeat
NGINX is an open-source HTTP server and reverse proxy that was created by Igor Sysoev & released in 2004. It has gone on to power many of the web's highest-traffic sites (including Netflix & WordPress), as it is a highly reliable server that enables businesses to scale their operations.
Viewing NGINX log files allows you to see spikes in 5XX/4XX status codes affecting the performance of your applications, and lets your Dev teams drill down into the data to resolve errors. Analysing these at scale can rapidly drain your resources if your teams need to configure separate parsing, configuration, visualisation and reporting tools for a single large NGINX instance.
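As a quick illustration of the kind of question these logs answer, the sketch below buckets requests by status class straight from a combined-format access log. The sample lines stand in for a real /var/log/nginx/access.log, and the field position ($9) assumes the default combined log format:

```shell
# Sample combined-format lines (normally you would read the real access log)
cat > access.log <<'EOF'
127.0.0.1 - - [10/Oct/2020:13:55:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"
127.0.0.1 - - [10/Oct/2020:13:55:37 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/7.68.0"
127.0.0.1 - - [10/Oct/2020:13:55:38 +0000] "GET /api HTTP/1.1" 502 157 "-" "curl/7.68.0"
EOF

# Bucket requests by status class; in the combined format, $9 is the status code
awk '{ print substr($9, 1, 1) "xx" }' access.log | sort | uniq -c
```

This is the sort of aggregation the nginx module and Kibana do for you continuously, without ad-hoc scripting on each server.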
Many NGINX log analyzers can slow down the process of troubleshooting & increase time to resolution unnecessarily as they often struggle to process large amounts of log data. The Logit log management platform is built on ELK and can easily process large amounts of NGINX server data for root cause analysis.
Our platform is built to scale with your infrastructure. Once data is migrated to your ELK Stack, you'll be able to benefit from automatic parsing with Logstash and visualise your NGINX metrics in Kibana. Alert on errors and notify your teams of spikes in real-time with our integrated alerting features that can send notifications to a variety of sources including Jira, Slack, PagerDuty & Webhooks.
In case you need any further assistance with sending your NGINX data to Logstash & Elasticsearch we're here to help. Just get in touch with our support team via live chat & we'll be happy to assist.