
Google Dataflow Metrics via Telegraf

Ship your Google Dataflow Metrics via Telegraf to your Logit.io Stack

Configure Telegraf to ship Google Dataflow metrics to your Logit.io stacks.


Follow this step-by-step guide to get metrics from your system to Logit.io:

Step 1 - Set credentials in GCP

Google Dataflow is a fully managed, serverless service within Google Cloud for building and executing data pipelines. It allows organizations to process and analyze large volumes of data in streaming or batch mode, offering scalability, fault tolerance, and ease of use.

  • Begin by heading over to the 'Project Selector' and select the specific project from which you wish to send metrics.
  • Progress to the 'Service Account Details' screen. Here, assign a distinct name to your service account and opt for 'Create and Continue'.
  • In the 'Grant This Service Account Access to Project' screen, grant the following roles: 'Compute Viewer', 'Monitoring Viewer', and 'Cloud Asset Viewer'.
  • Upon completion of the above, click 'Done'.
  • Now find and select your new service account in the 'Service Accounts for Project' list.
  • Move to the 'KEYS' section.
  • Navigate through Keys > Add Key > Create New Key, and specify 'JSON' as the key type.
  • Lastly, click on 'Create', and make sure to save your new key.
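
If you prefer the command line, the same service account setup can be sketched with the gcloud CLI. This is a minimal example rather than the only way to do it: the service account name, project ID and key file path below are placeholders, and only the 'Monitoring Viewer' binding is shown (repeat it for the other roles listed above).

# Create a service account for Telegraf to use (the name is an example)
gcloud iam service-accounts create telegraf-metrics \
  --project=<your-project-id> \
  --display-name="Telegraf metrics"

# Grant the Monitoring Viewer role (repeat for the other roles listed above)
gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:telegraf-metrics@<your-project-id>.iam.gserviceaccount.com" \
  --role="roles/monitoring.viewer"

# Create and download a JSON key for the service account
gcloud iam service-accounts keys create ./telegraf-gcp-key.json \
  --iam-account=telegraf-metrics@<your-project-id>.iam.gserviceaccount.com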

Now set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the downloaded key file

On the machine run:

export GOOGLE_APPLICATION_CREDENTIALS=<your-gcp-key>
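
Note that an export only lasts for the current shell session. If Telegraf will run as a systemd service (as it does with the Debian and RedHat packages installed in the next step), the variable also needs to be visible to the service itself. A minimal sketch, assuming the packaged unit reads /etc/default/telegraf as an environment file and that the key has been copied to /etc/telegraf/ (both are assumptions; adjust the paths to your setup):

# Make the credentials available to the telegraf service (example path)
echo 'GOOGLE_APPLICATION_CREDENTIALS=/etc/telegraf/telegraf-gcp-key.json' | sudo tee -a /etc/default/telegraf

# Once Telegraf is installed and configured, restart it to pick up the change
sudo systemctl restart telegraf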

Step 2 - Install Telegraf

This integration allows you to configure a Telegraf agent to send your metrics, in multiple formats, to Logit.io.

Telegraf is a flexible server agent equipped with plug-in support, useful for sending metrics and events from data sources like web servers, APIs, application logs, and cloud services.

To ship your metrics to Logit.io, we will add the relevant input plug-in and the outputs.http plug-in to your Telegraf configuration file.

Choose the install for your operating system below to get started:

Windows

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.19.2_windows_amd64.zip

Download and extract to: C:\Program Files\Logitio\telegraf\

Configuration file: C:\Program Files\Logitio\telegraf\
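
After extracting the files, Telegraf is normally registered as a Windows service before it is started in Step 5. A minimal sketch, assuming the extraction path above and a configuration file named telegraf.conf in that folder (the file name is an assumption):

# Register Telegraf as a Windows service pointing at your configuration file
"C:\Program Files\Logitio\telegraf\telegraf.exe" --service install --config "C:\Program Files\Logitio\telegraf\telegraf.conf"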

MacOS

brew install telegraf

Configuration file (x86_64 Intel): /usr/local/etc/telegraf.conf
Configuration file (ARM, Apple Silicon): /opt/homebrew/etc/telegraf.conf

Ubuntu/Debian

wget -q https://repos.influxdata.com/influxdata-archive_compat.key
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list

sudo apt-get update
sudo apt-get install telegraf

Configuration file: /etc/telegraf/telegraf.conf

RedHat and CentOS

cat <<EOF | sudo tee /etc/yum.repos.d/influxdata.repo
[influxdata]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/\$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
EOF

sudo yum install telegraf

Configuration file: /etc/telegraf/telegraf.conf

SLES & openSUSE

zypper ar -f obs://devel:languages:go/ go
zypper in telegraf

Configuration file: /etc/telegraf/telegraf.conf

FreeBSD/PC-BSD

sudo pkg install telegraf

Configuration file: /etc/telegraf/telegraf.conf

Read more about how to configure data scraping and configuration options for Telegraf

Step 3 - Configure the Telegraf input plugin

First, set up the input plug-in so that Telegraf can scrape Google Cloud metrics for your project. This can be accomplished by adding the following code to your configuration file:

# Gather timeseries from Google Cloud Platform v3 monitoring API
[[inputs.stackdriver]]
  ## GCP Project
  project = "<your-project-name>"

  ## Include timeseries that start with the given metric type.
  metric_type_prefix_include = [
    "dataflow.googleapis.com",
  ]

  ## Most metrics are updated no more than once per minute; it is recommended
  ## to override the agent level interval with a value of 1m or greater.
  interval = "1m"

Read more about how to configure data scraping and configuration options for the Stackdriver input plugin
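
The prefix above collects every Dataflow metric type in the project. If you only need a subset, the include list can be narrowed to specific metric types; the sketch below uses two job-level Dataflow metrics as examples and should be checked against the Google Cloud metrics list for your use case:

  ## Example: only collect element counts and system lag for Dataflow jobs
  metric_type_prefix_include = [
    "dataflow.googleapis.com/job/element_count",
    "dataflow.googleapis.com/job/system_lag",
  ]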

Step 4 - Configure the output plugin

With the input configured, you now need to set up the output plug-in to allow Telegraf to transmit your data to Logit.io in Prometheus format. This can be accomplished by adding the following code to your configuration file:

[[outputs.http]]
  
  url = "https://<your-metrics-username>:<your-metrics-password>@<your-metrics-stack-id>-vm.logit.io:0/api/v1/write"
  data_format = "prometheusremotewrite"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
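
Before starting the agent it can be worth running a one-off test. Telegraf's --test flag gathers from the configured inputs once and prints the metrics to stdout without sending anything to the outputs, which is a quick way to confirm the GCP credentials and project are working; adjust the configuration file path for your platform:

# Gather once from the configured inputs and print to stdout (no data is shipped)
telegraf --config /etc/telegraf/telegraf.conf --test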

Step 5 - Start Telegraf

Windows

telegraf.exe --service start

MacOS

telegraf --config telegraf.conf
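
Alternatively, if Telegraf was installed with Homebrew, it can be run as a background service (using the configuration file path from Step 2) rather than in the foreground:

brew services start telegraf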

Linux

sudo service telegraf start

For systemd installations:

systemctl start telegraf
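
Once started, it is worth confirming that the agent is healthy and watching its log output for authentication or output errors. A minimal sketch for systemd-based installations:

# Check that the service is running
sudo systemctl status telegraf

# Follow the agent's log output (Ctrl+C to stop)
sudo journalctl -u telegraf -f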

Step 6 - View your metrics

Data should now have been sent to your Stack.


If you don't see metrics, take a look at 'How to diagnose no data in Stack' below for help with common issues.

Step 7 - How to diagnose no data in Stack

If you don't see data appearing in your Stack after following the steps, visit the Help Centre guide for steps to diagnose no data appearing in your Stack, or chat to our support team now.

Step 8 - Telegraf Google Dataflow Metrics Overview

To ensure effective monitoring and analysis of Google Dataflow pipelines across distributed systems, implementing a robust metrics management solution is crucial. Telegraf, an open-source server agent, is widely recognized for its capability to collect and report metrics from various sources, including Dataflow pipelines, job durations, data processing rates, and other relevant metrics.

Telegraf provides a wide range of input plugins, enabling users to collect diverse metrics such as pipeline latency, element counts, and resource utilization. These metrics are essential for understanding the performance and efficiency of Google Dataflow.

For storing and analyzing these metrics, organizations can leverage Prometheus, an open-source monitoring and alerting toolkit known for its flexible query language and powerful data visualization capabilities. The process of transmitting Google Dataflow metrics from Telegraf to Prometheus involves configuring Telegraf to present metrics in Prometheus's format and instructing Prometheus to scrape these metrics from the Telegraf server.
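
As an illustration of that pattern (separate from the outputs.http configuration used for Logit.io above), Telegraf can expose its collected metrics on a local endpoint using the outputs.prometheus_client plugin, and Prometheus can then be pointed at that address in its scrape_configs. A minimal sketch; the port is an example:

# Expose collected metrics on http://<telegraf-host>:9273/metrics for Prometheus to scrape
[[outputs.prometheus_client]]
  listen = ":9273"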

Once the metrics are successfully integrated into Prometheus, organizations can perform comprehensive analysis and visualization using Grafana. Grafana, a leading open-source platform for monitoring and observability, seamlessly integrates with Prometheus, allowing users to create dynamic and interactive dashboards for in-depth exploration of the metrics data. This empowers organizations to gain valuable insights into Dataflow performance, data processing trends, and potential optimization opportunities.

If you need any further assistance with shipping your metrics data to Logit.io we're here to help you get started. Feel free to get in contact with our support team by sending us a message via live chat and we'll be happy to assist.
