Send data via HAProxy to your Logstash instance provided by Logit.io

Configure Filebeat to ship logs from HAProxy to Logstash and Elasticsearch.

Step 1 - Install Filebeat

deb (Debian/Ubuntu/Mint)

sudo apt-get install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main' | sudo tee /etc/apt/sources.list.d/beats.list

sudo apt-get update && sudo apt-get install filebeat

rpm (CentOS/RHEL/Fedora)

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md" | sudo tee /etc/yum.repos.d/elastic-beats.repo

sudo yum install filebeat

Step 2 - Locate Configuration File

deb/rpm /etc/filebeat/filebeat.yml

Step 3 - Setup HAProxy Configuration

deb (Debian/Ubuntu)

HAProxy generates logs in syslog format. On Debian and Ubuntu the haproxy package ships the rsyslog configuration needed to write a haproxy.log file, which we will then monitor using Filebeat. Confirm that both /etc/rsyslog.d/49-haproxy.conf and /var/log/haproxy.log exist. If you've recently installed HAProxy you may need to restart rsyslog so the additional HAProxy config file is loaded.
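A quick way to confirm both pieces are in place (paths as described above):

```shell
# deb/Ubuntu: confirm the shipped rsyslog drop-in and the log file exist
for f in /etc/rsyslog.d/49-haproxy.conf /var/log/haproxy.log; do
  [ -e "$f" ] && echo "found: $f" || echo "missing: $f"
done

# On a fresh haproxy install, restart rsyslog so the drop-in is picked up
sudo systemctl restart rsyslog
```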

rpm (Centos/RHEL)

The RPM haproxy package's default configuration sends its logs to a syslog daemon listening on localhost via UDP. We need to configure rsyslog to listen on localhost and write a haproxy.log file, which we will then monitor using Filebeat. Run the following commands and then restart rsyslog.

echo '#Rsyslog configuration to listen on localhost for HAProxy log messages 
#and write them to /var/log/haproxy.log
$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 127.0.0.1
local2.*    /var/log/haproxy.log' | sudo tee /etc/rsyslog.d/haproxy.conf

sudo systemctl restart rsyslog
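This only covers the rsyslog side; HAProxy itself must also be pointed at the local syslog socket. The stock RPM haproxy.cfg normally already contains a line like the following in its global section (shown as a sketch; the local2 facility must match the rsyslog rule above):

```
global
    log 127.0.0.1 local2
```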

There are several built-in Filebeat modules you can use. You will need to enable the haproxy module.

deb/rpm

sudo filebeat modules list
sudo filebeat modules enable haproxy

macOS

cd <EXTRACTED_ARCHIVE>
./filebeat modules list
./filebeat modules enable haproxy

Windows

cd <EXTRACTED_ARCHIVE>
.\filebeat.exe modules list
.\filebeat.exe modules enable haproxy

Additional module configuration can be done using the per-module config files located in the modules.d folder. For HAProxy we want to configure the module to read from file: uncomment and edit the var.input line so it reads

var.input: file

deb/rpm /etc/filebeat/modules.d/haproxy.yml mac/win <EXTRACTED_ARCHIVE>/modules.d/haproxy.yml
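After the edit, the enabled module file might look like the following minimal sketch (`var.paths` is optional; uncomment it only if your log file lives somewhere other than the default):

```yaml
- module: haproxy
  # Read HAProxy logs from file rather than the default syslog input
  log:
    enabled: true
    var.input: file
    # var.paths: ["/var/log/haproxy.log"]
```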

Step 4 - Confirm the Logging

Confirm the haproxy log file contains entries to process.

tail /var/log/haproxy.log

This should return the last 10 entries in the file. If you get nothing back, or the file is not found, check that HAProxy is running and whether rsyslog needs reloading.
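If the file is empty or missing, these checks (systemd assumed) usually pinpoint the culprit:

```shell
# Is HAProxy actually running?
sudo systemctl status haproxy

# Reload rsyslog in case the haproxy config file was added after it started
sudo systemctl restart rsyslog

# Generate a request through HAProxy to force a fresh log line
# (assumes a frontend listening on localhost:80 -- adjust to your bind address)
curl -s -o /dev/null http://localhost:80/ || true
```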

Step 5 - Configure output

We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the elasticsearch output block.

## Comment out elasticsearch output
#output.elasticsearch:
#hosts: ["localhost:9200"]

Uncomment and change the logstash output to match below.

output.logstash:
  hosts: ["your-logstash-host:your-port"]
  loadbalance: true
  ssl.enabled: true 

Step 6 - (Optional) Update Logstash Filters

All Logit stacks come pre-configured with popular Logstash filters. We would recommend that you add HAProxy specific filters if you don't already have them, to ensure enhanced dashboards and modules work correctly.

Edit your Logstash filters by choosing Stack > Settings > Logstash Filters


if [fileset][module] == "haproxy" {
  grok {
    match => {
      "message" => [
      "%{HAPROXY_DATE:[haproxy][request_date]} %{IPORHOST:[haproxy][source]} %{PROG:[haproxy][process_name]}(?:\[%{POSINT:[haproxy][pid]}\])?: %{GREEDYDATA} %{IPORHOST:[haproxy][client][ip]}:%{POSINT:[haproxy][client][port]} %{WORD} %{IPORHOST:[haproxy][destination][ip]}:%{POSINT:[haproxy][destination][port]} \(%{WORD:[haproxy][frontend_name]}/%{WORD:[haproxy][mode]}\)",
      "(%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid]:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port]:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{NUMBER:[haproxy][http][request][time_wait_ms]:int}/%{NUMBER:[haproxy][total_waiting_time_ms]:int}/%{NUMBER:[haproxy][connection_wait_time_ms]:int}/%{NUMBER:[haproxy][http][request][time_wait_without_data_ms]:int}/%{NUMBER:[haproxy][http][request][time_active_ms]:int} %{NUMBER:[haproxy][http][response][status_code]:int} %{NUMBER:[haproxy][bytes_read]:int} %{NOTSPACE:[haproxy][http][request][captured_cookie]} %{NOTSPACE:[haproxy][http][response][captured_cookie]} %{NOTSPACE:[haproxy][termination_state]} %{NUMBER:[haproxy][connections][active]:int}/%{NUMBER:[haproxy][connections][frontend]:int}/%{NUMBER:[haproxy][connections][backend]:int}/%{NUMBER:[haproxy][connections][server]:int}/%{NUMBER:[haproxy][connections][retries]:int} %{NUMBER:[haproxy][server_queue]:int}/%{NUMBER:[haproxy][backend_queue]:int} (\{%{DATA:[haproxy][http][request][captured_headers]}\} \{%{DATA:[haproxy][http][response][captured_headers]}\} |\{%{DATA}\} )?\"%{GREEDYDATA:[haproxy][http][request][raw_request_line]}\"",
      "(%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid]:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port]:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]}/%{NOTSPACE:[haproxy][bind_name]} %{GREEDYDATA:[haproxy][error_message]}",
      "%{HAPROXY_DATE} %{IPORHOST:[haproxy][source]} (%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid]:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port]:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{NUMBER:[haproxy][total_waiting_time_ms]:int}/%{NUMBER:[haproxy][connection_wait_time_ms]:int}/%{NUMBER:[haproxy][tcp][processing_time_ms]:int} %{NUMBER:[haproxy][bytes_read]:int} %{NOTSPACE:[haproxy][termination_state]} %{NUMBER:[haproxy][connections][active]:int}/%{NUMBER:[haproxy][connections][frontend]:int}/%{NUMBER:[haproxy][connections][backend]:int}/%{NUMBER:[haproxy][connections][server]:int}/%{NUMBER:[haproxy][connections][retries]:int} %{NUMBER:[haproxy][server_queue]:int}/%{NUMBER:[haproxy][backend_queue]:int}"
      ]
    }
    pattern_definitions => {
      "HAPROXY_DATE" => "(%{MONTHDAY}[/-]%{MONTH}[/-]%{YEAR}:%{HOUR}:%{MINUTE}:%{SECOND})|%{SYSLOGTIMESTAMP}"
      }
  }
  date {
    match => [
      "[haproxy][request_date]",
      "dd/MMM/yyyy:HH:mm:ss.SSS",
      "dd/MMM/yyyy:HH:mm:ss",
      "MMM dd HH:mm:ss"
    ]
    target => "@timestamp"
  }
  geoip {
    source => "[haproxy][client][ip]"
    target => "[haproxy][geoip]"
  }
}
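To make the second (HTTP-mode) pattern concrete, here is a hypothetical log line of the shape it matches, with two of the mapped fields pulled out by whitespace position:

```shell
# Hypothetical HAProxy HTTP-mode log line (all values invented for illustration)
line='haproxy[2834]: 10.0.0.5:51234 [20/Jun/2019:10:42:01.123] http-in backend-www/web1 2/0/1/41/44 200 1588 - - ---- 3/3/1/1/0 0/0 "GET /index.html HTTP/1.1"'

# Field 7 is the HTTP status code and field 8 the bytes read -- the grok
# filter maps these to [haproxy][http][response][status_code] and
# [haproxy][bytes_read] respectively
echo "$line" | awk '{print "status_code=" $7, "bytes_read=" $8}'
# prints: status_code=200 bytes_read=1588
```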

Step 7 - Validate configuration

Let's check the configuration file is syntactically correct.

deb/rpm

sudo filebeat -e -c /etc/filebeat/filebeat.yml
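Running with -e -c starts Filebeat in the foreground and surfaces config errors at startup. Filebeat also has dedicated test subcommands that check the config, and the connection to the Logstash output, without shipping anything:

```shell
# Parse and validate the configuration file
sudo filebeat test config -c /etc/filebeat/filebeat.yml

# Dial the configured Logstash output over TLS and report reachability
sudo filebeat test output -c /etc/filebeat/filebeat.yml
```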

Step 8 - Start filebeat

Ok, time to start ingesting data!

deb/rpm

sudo systemctl enable filebeat
sudo systemctl start filebeat
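Once started, you can confirm the service is healthy and watch it pick up the HAProxy log (systemd assumed; depending on your logging setup Filebeat may instead write its own log under /var/log/filebeat/):

```shell
sudo systemctl status filebeat

# Follow Filebeat's output; look for harvester lines mentioning haproxy.log
sudo journalctl -u filebeat -f
```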