
Send data from MySQL to your Logstash instance provided by Logit.io

MySQL Logs

Ship logs from MySQL to Logstash

Configure Filebeat to ship logs from MySQL to Logstash and Elasticsearch.

Step 1 - Install Filebeat

deb (Debian/Ubuntu/Mint)

sudo apt-get install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main' | sudo tee /etc/apt/sources.list.d/beats.list

sudo apt-get update && sudo apt-get install filebeat

rpm (CentOS/RHEL/Fedora)

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md" | sudo tee /etc/yum.repos.d/elastic-beats.repo

sudo yum install filebeat

Windows

  • Download the Filebeat Windows zip file from the official downloads page.
  • Extract the contents of the zip file into C:\Program Files.
  • Rename the filebeat-<version>-windows directory to Filebeat.
  • Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select Run As Administrator). If you are running Windows XP, you may need to download and install PowerShell.
  • Run the following commands to install Filebeat as a Windows service:
    cd 'C:\Program Files\Filebeat'
    .\install-service-filebeat.ps1
If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example:
    PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
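As a quick sanity check after installing, you can print the Filebeat version (deb/rpm first, then Windows run from the Filebeat directory):

filebeat version
.\filebeat.exe version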

Step 2 - Locate Configuration File

deb/rpm /etc/filebeat/filebeat.yml
Windows Open C:\Program Files\Filebeat\filebeat.yml
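For orientation, the default 6.x filebeat.yml wires in the modules.d directory roughly like this (you normally don't need to change it):

filebeat.config.modules:
  # Glob pattern for module configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable live reloading of module configs
  reload.enabled: false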

Step 3 - Enable the MySQL Module

There are several built-in Filebeat modules you can use. To enable the mysql module, run:

deb/rpm

filebeat modules list
filebeat modules enable mysql

Windows

.\filebeat.exe modules enable mysql

The default configured paths for MySQL logs are as follows.

/var/log/mysql/mysql.log

/var/log/mysql/mysql-slow.log

c:\programdata\MySQL\MySQL Server*\error.log*

c:\programdata\MySQL\MySQL Server*\mysql-slow.log*

Additional module configuration can be done using the per-module config files located in the modules.d folder; most commonly this would be to read logs from a non-default location, as in the example below.

deb/rpm /etc/filebeat/modules.d/
mac/win <EXTRACTED_ARCHIVE>/modules.d/
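For example, a hypothetical modules.d/mysql.yml that reads both filesets from non-default locations might look like this sketch (the paths are placeholders; point them at wherever your server actually writes its logs):

- module: mysql
  error:
    enabled: true
    # assumed custom location for the error log
    var.paths: ["/data/mysql/logs/error.log*"]
  slowlog:
    enabled: true
    # assumed custom location for the slow query log
    var.paths: ["/data/mysql/logs/mysql-slow.log*"]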

Step 4 - Configure Output

We'll be shipping to Logstash so that we have the option to run filters before the data is indexed.
Comment out the elasticsearch output block.

## Comment out elasticsearch output
#output.elasticsearch:
#  hosts: ["localhost:9200"]

Uncomment and change the Logstash output to match the example below.

output.logstash:
    hosts: ["your-logstash-host:your-ssl-port"]
    loadbalance: true
    ssl.enabled: true
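If your stack exposes more than one Logstash endpoint you can list them all; with loadbalance: true Filebeat distributes published events across every listed host. A hypothetical example (the hostnames and port are placeholders):

output.logstash:
    hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]
    loadbalance: true
    ssl.enabled: true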

Step 5 - (Optional) Update Logstash Filters

All Logit stacks come pre-configured with popular Logstash filters. We would recommend that you add MySQL-specific filters if you don't already have them, to ensure enhanced dashboards and modules work correctly.

Edit your Logstash filters by choosing Stack > Settings > Logstash Filters

if [fileset][module] == "mysql" {
  if [fileset][name] == "error" {
    grok {
      match => { "message" => ["%{LOCALDATETIME:[mysql][error][timestamp]} (\[%{DATA:[mysql][error][level]}\] )?%{GREEDYDATA:[mysql][error][message]}",
        "%{TIMESTAMP_ISO8601:[mysql][error][timestamp]} %{NUMBER:[mysql][error][thread_id]} \[%{DATA:[mysql][error][level]}\] %{GREEDYDATA:[mysql][error][message1]}",
        "%{GREEDYDATA:[mysql][error][message2]}"] }
      pattern_definitions => {
        "LOCALDATETIME" => "[0-9]+ %{TIME}"
      }
      remove_field => "message"
    }
    mutate {
      rename => { "[mysql][error][message1]" => "[mysql][error][message]" }
    }
    mutate {
      rename => { "[mysql][error][message2]" => "[mysql][error][message]" }
    }
    date {
      match => [ "[mysql][error][timestamp]", "ISO8601", "YYMMdd H:m:s" ]
      remove_field => "[mysql][error][time]"
    }
  }
  else if [fileset][name] == "slowlog" {
    grok {
      match => { "message" => ["^# User@Host: %{USER:[mysql][slowlog][user]}(\[[^\]]+\])? @ %{HOSTNAME:[mysql][slowlog][host]} \[(IP:[mysql][slowlog][ip])?\](\s*Id:\s* %{NUMBER:[mysql][slowlog][id]})?\n# Query_time: %{NUMBER:[mysql][slowlog][query_time][sec]}\s* Lock_time: %{NUMBER:[mysql][slowlog][lock_time][sec]}\s* Rows_sent: %{NUMBER:[mysql][slowlog][rows_sent]}\s* Rows_examined: %{NUMBER:[mysql][slowlog][rows_examined]}\n(SET timestamp=%{NUMBER:[mysql][slowlog][timestamp]};\n)?%{GREEDYMULTILINE:[mysql][slowlog][query]}"] }
      pattern_definitions => {
        "GREEDYMULTILINE" => "(.|\n)*"
      }
      remove_field => "message"
    }
    date {
      match => [ "[mysql][slowlog][timestamp]", "UNIX" ]
    }
    mutate {
      gsub => ["[mysql][slowlog][query]", "\n# Time: [0-9]+ [0-9][0-9]:[0-9][0-9]:[0-9][0-9](\\.[0-9]+)?$", ""]
    }
  }
}
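For reference, the patterns above expect entries roughly of the following shapes. These are hypothetical samples rather than output from a real server: the first line follows the MySQL 5.7+ error log format (ISO timestamp, thread id, level, message) and the remaining lines follow the standard slow query log format.

2021-06-01T12:00:00.000000Z 8 [Warning] Aborted connection 8 to db: 'app' user: 'appuser' host: 'localhost'

# User@Host: appuser[appuser] @ localhost []  Id:    42
# Query_time: 3.514000  Lock_time: 0.000120 Rows_sent: 10  Rows_examined: 50000
SET timestamp=1622554800;
SELECT * FROM orders WHERE status = 'pending';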

Step 6 - Validate Configuration

Let's check that the configuration file is syntactically correct.

Run the following from the directory containing your filebeat.yml:

filebeat -e -c filebeat.yml
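Filebeat 6.x also provides test subcommands that check the configuration and the connection to the configured output; on a deb/rpm install (where the config lives in /etc/filebeat) that looks like:

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml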

Step 7 - Start Filebeat

Ok, time to start ingesting data!

deb/rpm

sudo systemctl enable filebeat
sudo systemctl start filebeat

Windows

Start-Service filebeat
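To confirm Filebeat came up cleanly, check the service status and follow its logs (deb/rpm with systemd shown; on Windows, Get-Service filebeat reports the service state):

sudo systemctl status filebeat
sudo journalctl -u filebeat -f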

Step 8 - MySQL Logging Overview

MySQL is an open source relational database management system created by Michael Widenius in 1995. It runs across the majority of operating systems & is closely associated with its use for web applications.

MySQL powers some of the world’s highest traffic sites, including Facebook, YouTube & Pinterest.

MySQL is able to work within an operating system to organise data into multiple data tables and show which data types may be related to each other. This helps the user to easily structure their data.

When used in this way, relational databases can also test database integrity, manage users and create backups of vital data.

MySQL servers create numerous logs that you can use for troubleshooting and analysis; the most important include the slow query log, the general query log & the error log.

These logs default to a plain text format, which can quickly become tedious to parse and process when you are trying to spot functional problems, opportunities to improve performance and security issues.

Our built-in HA (high availability) MySQL log file analyser can be used to centralise your data, set up alerts to monitor your log data in real time & deliver metrics for Kibana visualisations & reports with ease.
