
Send data from HAProxy to your Logstash instance provided by Logit.io

HAProxy

Ship logs from HAProxy to Logstash

Configure Filebeat to ship logs from HAProxy to Logstash and Elasticsearch.

Step 1 - Install Filebeat

deb (Debian/Ubuntu/Mint)

sudo apt-get install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main' | sudo tee /etc/apt/sources.list.d/beats.list

sudo apt-get update && sudo apt-get install filebeat

rpm (CentOS/RHEL/Fedora)

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md" | sudo tee /etc/yum.repos.d/elastic-beats.repo

sudo yum install filebeat
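
Once installed, you can confirm the version on either platform (a quick sanity check, assuming filebeat is on your PATH after the package install):

filebeat version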

Step 2 - Locate Configuration File

deb/rpm: /etc/filebeat/filebeat.yml

Step 3 - Setup HAProxy Configuration

deb (Debian/Ubuntu)

HAProxy generates logs in syslog format. On Debian and Ubuntu the haproxy package contains the required syslog configuration to generate a haproxy.log file, which we will then monitor using Filebeat. Confirm the existence of /etc/rsyslog.d/49-haproxy.conf and /var/log/haproxy.log. If you've recently installed HAProxy you may need to restart rsyslog so the additional HAProxy config file is loaded.
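
For example, a quick check might look like this (a minimal sketch using the Debian/Ubuntu package default paths mentioned above):

ls -l /etc/rsyslog.d/49-haproxy.conf /var/log/haproxy.log
sudo systemctl restart rsyslog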

rpm (CentOS/RHEL)

The RPM haproxy default configuration sends its logs to a syslog daemon listening on localhost via UDP. We need to configure rsyslog to listen on localhost and write a haproxy.log file, which we will then monitor using Filebeat. Run the following commands and then restart rsyslog.

echo '#Rsyslog configuration to listen on localhost for HAProxy log messages
#and write them to /var/log/haproxy.log
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local2.*    /var/log/haproxy.log' | sudo tee /etc/rsyslog.d/haproxy.conf

sudo systemctl restart rsyslog
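
To confirm rsyslog is now listening for HAProxy messages, you can check for an open UDP socket on port 514 (an illustrative check; the ss utility ships with modern CentOS/RHEL releases). Note this setup relies on the global log directive in /etc/haproxy/haproxy.cfg pointing at local2 on localhost, which is the package default:

sudo ss -ulnp | grep 514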

There are several built-in Filebeat modules you can use. You will need to enable the haproxy module.

deb/rpm

sudo filebeat modules list
sudo filebeat modules enable haproxy

macOS

cd <EXTRACTED_ARCHIVE>
./filebeat modules list
./filebeat modules enable haproxy

Windows

cd <EXTRACTED_ARCHIVE>
.\filebeat.exe modules list
.\filebeat.exe modules enable haproxy

Additional module configuration can be done using the per-module config files located in the modules.d folder. For haproxy we want to configure the module to read from file, so uncomment and edit the var.input line to read:

var.input: file

deb/rpm: /etc/filebeat/modules.d/haproxy.yml
mac/win: <EXTRACTED_ARCHIVE>/modules.d/haproxy.yml
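
After the edit, the log section of haproxy.yml should look something like the following (a minimal sketch based on the 6.x module layout; any other defaults can stay commented out):

- module: haproxy
  log:
    enabled: true
    var.input: file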

Step 4 - Confirm the Logging

Confirm the haproxy log file contains entries to process.

tail /var/log/haproxy.log

This should return the last 10 entries in the file. If you get nothing back, or the file is not found, check that HAProxy is running and whether rsyslog needs restarting.
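
If the file exists but is empty, you can generate an entry by sending a request through HAProxy and tailing again (an illustrative example; replace the port with whichever frontend your haproxy.cfg binds):

curl -s -o /dev/null http://localhost:80/
tail -n 1 /var/log/haproxy.log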

Step 5 - Configure output

We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the elasticsearch output block.

## Comment out elasticsearch output
#output.elasticsearch:
#hosts: ["localhost:9200"]

Uncomment and change the Logstash output to match the example below.

output.logstash:
  hosts: ["your-logstash-host:your-ssl-port"]
  loadbalance: true
  ssl.enabled: true 

Step 6 - (Optional) Update Logstash Filters

All Logit stacks come pre-configured with popular Logstash filters. We would recommend that you add HAProxy specific filters if you don't already have them, to ensure enhanced dashboards and modules work correctly.

Edit your Logstash filters by choosing Stack > Settings > Logstash Filters

if [fileset][module] == "haproxy" {
  grok {
    match => {
      "message" => [
      "%{HAPROXY_DATE:[haproxy][request_date]} %{IPORHOST:[haproxy][source]} %{PROG:[haproxy][process_name]}(?:\[%{POSINT:[haproxy][pid]}\])?: %{GREEDYDATA} %{IPORHOST:[haproxy][client][ip]}:%{POSINT:[haproxy][client][port]} %{WORD} %{IPORHOST:[haproxy][destination][ip]}:%{POSINT:[haproxy][destination][port]} \(%{WORD:[haproxy][frontend_name]}/%{WORD:[haproxy][mode]}\)",
      "(%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{NUMBER:[haproxy][http][request][time_wait_ms:int}/%{NUMBER:[haproxy][total_waiting_time_ms:int}/%{NUMBER:[haproxy][connection_wait_time_ms:int}/%{NUMBER:[haproxy][http][request][time_wait_without_data_ms:int}/%{NUMBER:[haproxy][http][request][time_active_ms:int} %{NUMBER:[haproxy][http][response][status_code:int} %{NUMBER:[haproxy][bytes_read:int} %{NOTSPACE:[haproxy][http][request][captured_cookie]} %{NOTSPACE:[haproxy][http][response][captured_cookie]} %{NOTSPACE:[haproxy][termination_state]} %{NUMBER:[haproxy][connections][active:int}/%{NUMBER:[haproxy][connections][frontend:int}/%{NUMBER:[haproxy][connections][backend:int}/%{NUMBER:[haproxy][connections][server:int}/%{NUMBER:[haproxy][connections][retries:int} %{NUMBER:[haproxy][server_queue:int}/%{NUMBER:[haproxy][backend_queue:int} (\{%{DATA:[haproxy][http][request][captured_headers]}\} \{%{DATA:[haproxy][http][response][captured_headers]}\} |\{%{DATA}\} )?\"%{GREEDYDATA:[haproxy][http][request][raw_request_line]}\"",
      "(%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]}/%{NOTSPACE:[haproxy][bind_name]} %{GREEDYDATA:[haproxy][error_message]}",
      "%{HAPROXY_DATE} %{IPORHOST:[haproxy][source]} (%{NOTSPACE:[haproxy][process_name]}\[%{NUMBER:[haproxy][pid:int}\]: )?%{IP:[haproxy][client][ip]}:%{NUMBER:[haproxy][client][port:int} \[%{NOTSPACE:[haproxy][request_date]}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{NUMBER:[haproxy][total_waiting_time_ms:int}/%{NUMBER:[haproxy][connection_wait_time_ms:int}/%{NUMBER:[haproxy][tcp][processing_time_ms:int} %{NUMBER:[haproxy][bytes_read:int} %{NOTSPACE:[haproxy][termination_state]} %{NUMBER:[haproxy][connections][active:int}/%{NUMBER:[haproxy][connections][frontend:int}/%{NUMBER:[haproxy][connections][backend:int}/%{NUMBER:[haproxy][connections][server:int}/%{NUMBER:[haproxy][connections][retries:int} %{NUMBER:[haproxy][server_queue:int}/%{NUMBER:[haproxy][backend_queue:int}"
      ]
    }
    pattern_definitions => {
      "HAPROXY_DATE" => "(%{MONTHDAY}[/-]%{MONTH}[/-]%{YEAR}:%{HOUR}:%{MINUTE}:%{SECOND})|%{SYSLOGTIMESTAMP}"
    }
  }
  date {
    match => [
      "[haproxy][request_date]",
      "dd/MMM/yyyy:HH:mm:ss.SSS",
      "dd/MMM/yyyy:HH:mm:ss",
      "MMM dd HH:mm:ss"
    ]
    target => "@timestamp"
  }
  geoip {
    source => "[haproxy][client][ip]"
    target => "[haproxy][geoip]"
  }
}
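
For reference, this is the style of HTTP-mode line the second grok pattern above is designed to match (an illustrative example with made-up values, following the standard HAProxy HTTP log format):

haproxy[2552]: 192.168.1.10:57317 [20/Dec/2018:22:20:00.045] main app/app1 0/0/1/0/1 200 205 - - ---- 1/1/0/0/0 0/0 "GET /index.html HTTP/1.1"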

Step 7 - Validate configuration

Let's check the configuration file is syntactically correct.

deb/rpm

sudo filebeat -e -c /etc/filebeat/filebeat.yml
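
Running Filebeat in the foreground with -e prints its logs to stderr, so configuration errors surface immediately; stop it with Ctrl-C once you have confirmed it starts cleanly. Filebeat 6.x can also check the configuration and the output connection without shipping any data (both subcommands exist in 6.x):

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml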

Step 8 - Start filebeat

Ok, time to start ingesting data!

deb/rpm

sudo systemctl enable filebeat
sudo systemctl start filebeat
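
To confirm the service started cleanly and is shipping, check its status and follow its logs (standard systemd commands, assuming a systemd-based distribution):

sudo systemctl status filebeat
sudo journalctl -u filebeat -f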

Step 9 - HAProxy Logs Overview

HAProxy (High Availability Proxy) is an open-source software load balancer for proxying HTTP & TCP based applications. As the tool offers high availability by default it is well suited for high traffic websites.

HAProxy is the de facto proxy server powering many of the web’s most popular sites & is often the default deployment in most cloud platforms. For many Linux distributions it is the reference load balancer recommended for container orchestration (e.g. Kubernetes).

HAProxy logs hold data on HTTP queries, error codes, how long a request took to send, whether it was queued and for how long, how long the TCP connection took to establish, as well as information on response size and cookies, among other valuable insights for reporting & security. These logs can be difficult to process for analysis at scale & so a log analyser will likely be required to process HAProxy logs efficiently.

Requests & traffic for HTTP & TCP based applications are spread across multiple servers when HAProxy is used. The proxy is well known for its flexibility & the tool’s logs can be used in a log management solution such as Logit for easy identification of critical issues within an application.

The Logit platform offers a complete solution for centralising your log files from multiple applications and servers and provides a HAProxy log analyser as standard. You can also use our Kibana integrations to visualise key server metrics from both frontend and backend applications for fast error resolution & troubleshooting.

Followed our HAProxy log configuration guide and are still encountering issues? We're here to help you get started. Feel free to reach out by contacting our support team by visiting our dedicated Help Centre or via live chat & we'll be happy to assist.
