Prometheus is one of the leading open-source metrics collection and alerting platforms for monitoring microservices architectures. Because Prometheus is such a popular choice in today's cloud computing landscape, this guide explains how Prometheus metrics can be leveraged for observability.
What Is Prometheus?
With Prometheus, you can easily monitor and alert on metrics generated by dynamic cloud environments such as Kubernetes. SoundCloud developed Prometheus in 2012 after finding that existing monitoring tools were not suited to dynamically scheduled, containerised workloads.
At its core, Prometheus is a time-series database built around a multidimensional data model, where each metric is identified by a name and a set of key-value labels. Prometheus is also known for being a flexible system that can collect metrics, run complex mathematical calculations over them, and display basic graphs.
Find out more in our guide explaining what Prometheus is.
What Are Prometheus Metrics?
Prometheus metrics, when harnessed from your infrastructure, can allow you to monitor the performance of your containers and clusters in a much more intuitive manner, particularly when they are used alongside other types of telemetry. As a result of centralising your Prometheus data together with your logs and traces, it is much easier to avoid operational silos and potential blind spots.
Prometheus metrics are ubiquitous in the Kubernetes ecosystem, and Prometheus itself is one of the most valuable tools available to regular Kubernetes users: without a dedicated metrics platform running alongside your containers, you will struggle to gain full observability. Additionally, Prometheus metrics are often created so engineers can monitor general availability signals and latency service level indicators (SLIs) in real time.
The Prometheus metrics format has surged in popularity over the last few years. This has led to the establishment of separate projects such as OpenMetrics, which is based on the original exposition format and aims to establish it as an industry standard.
There are also a wide variety of Cloud Native Computing Foundation (CNCF) projects that use the Prometheus metrics format as a means of exposing out-of-the-box metrics with ease.
Many tools (such as InfluxDB, OpenTSDB, and Graphite) ship with built-in support for the Prometheus metrics format, so they can usually detect and ingest this data automatically for you.
Directly instrumenting a system for Prometheus is not always practical and can be complicated and troublesome for engineers. In those cases, exporters, typically built with the Prometheus client libraries, can be used instead to expose metrics from external third-party systems in a format the Prometheus server can scrape.
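As a rough sketch of this pattern, the example below uses the official Python client library (prometheus_client) to expose a value read from an external system; the read_queue_depth() helper, the metric name, and port 8000 are hypothetical stand-ins for whatever third-party system you want to expose.

```python
# A minimal exporter sketch using the official Python client (prometheus_client).
# read_queue_depth() is a hypothetical stand-in for polling a third-party system.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge(
    "thirdparty_queue_depth",
    "Current number of jobs waiting in the third-party queue",
)

def read_queue_depth() -> int:
    # Placeholder: in a real exporter this would call the external system's API.
    return random.randint(0, 100)

if __name__ == "__main__":
    start_http_server(8000)  # exposes metrics at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)
```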
Prometheus metrics can be divided into four types. In the following sections, we cover each type in more detail.
How Many Prometheus Metrics Are There?
Prometheus supports four metrics types: gauges, counters, histograms, and summaries. Below we have provided a more detailed explanation of each of these.
Gauge
A gauge metric represents a single numerical value that can go up or down, giving a snapshot of the current state rather than a cumulative total. By observing a gauge over time, you can determine whether that state is constant or changing. CPU and memory utilization, queue size, temperature, and the number of pods are all examples of gauge metrics.
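A minimal sketch of a gauge using the Python client library; the metric name in_flight_requests is purely illustrative.

```python
from prometheus_client import Gauge

# A gauge can move up and down freely.
IN_FLIGHT = Gauge("in_flight_requests", "Requests currently being handled")

IN_FLIGHT.inc()    # a request starts
IN_FLIGHT.dec()    # a request finishes
IN_FLIGHT.set(42)  # or set it to an absolute value read from elsewhere
```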
Counter
Counter metrics measure values that only ever increase over time. If the process restarts, the counter's value is reset to zero; otherwise, a counter never goes down. The rate() function can be used to calculate how quickly a counter is changing over time, typically by comparing its values at two timestamps.
Counter metrics are used to analyse the increasing count of an event, and when combined with the rate() function they show how many events occur per second. The number of API requests served and the number of errors encountered are typical examples of counter metrics.
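A brief sketch of a counter, again with the Python client library; the http_requests_total metric and the PromQL query in the comment are illustrative.

```python
from prometheus_client import Counter

# Counters only ever go up (they reset to zero when the process restarts).
REQUESTS = Counter("http_requests_total", "Total HTTP requests served", ["method"])

REQUESTS.labels(method="GET").inc()    # one more GET request
REQUESTS.labels(method="POST").inc(3)  # counters can also be incremented by N

# In PromQL, rate() turns the raw counter into a per-second rate, e.g.:
#   rate(http_requests_total[5m])
```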
Histogram
A histogram samples observations and counts them in predefined buckets, so the distribution of a value across many events can be recorded. Alongside the buckets, Prometheus also exposes a running sum and a total count of all observations.
In instances where exact values are not necessary, histograms can be useful for recording values that will be used to calculate percentiles and averages. Response size and request duration are examples of metrics that can be represented by histograms.
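A minimal histogram sketch with the Python client library; the bucket boundaries shown are assumptions that would normally be tuned to the latencies you expect.

```python
from prometheus_client import Histogram

# Observations are counted into cumulative, predefined buckets.
REQUEST_DURATION = Histogram(
    "http_request_duration_seconds",
    "HTTP request duration in seconds",
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0),
)

REQUEST_DURATION.observe(0.42)  # record a single request that took 0.42s

# Percentiles are then estimated at query time in PromQL, e.g. the 95th percentile:
#   histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
```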
Summaries
Unlike histograms, summary metrics calculate quantiles on the client side and expose them directly from the application, along with a count and sum of observations. Because these pre-computed quantiles cannot be meaningfully aggregated across multiple sources, summaries are less flexible than histograms when you need a combined view of several instances or time windows.
Summaries are useful when you need accurate quantiles but do not know the range of values in advance, and so cannot define sensible buckets. Request latency and response size are typical examples of summary metrics.
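A short summary sketch with the Python client library; note that this particular client exposes only a count and sum for summaries (client-side quantiles are left to other client libraries), and the metric name is illustrative.

```python
import time

from prometheus_client import Summary

REQUEST_TIME = Summary("request_processing_seconds", "Time spent processing a request")

@REQUEST_TIME.time()  # records the duration of each call as an observation
def process_request():
    time.sleep(0.1)   # placeholder for real work

process_request()
# The Python client exposes request_processing_seconds_count and _sum;
# quantile support varies between client libraries.
```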
How Do Prometheus Metrics Work?
To collect metric data, Prometheus scrapes HTTP endpoints exposed by its targets. These endpoints present their metrics in a plaintext exposition format, whether the Prometheus server scrapes the target directly or the metrics are pushed to a push gateway for Prometheus to collect.
The scraped samples are stored locally as time-series data. Using Prometheus' API, users can then query these metrics for real-time dashboard visualisation in services such as Grafana or OpenSearch Dashboards.
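As an illustration of querying this API from code, the sketch below calls a Prometheus server's HTTP query endpoint with the requests library; the server address localhost:9090 and the example PromQL query are assumptions.

```python
import requests

# Query the Prometheus HTTP API (assumes a server running at localhost:9090).
PROMETHEUS_URL = "http://localhost:9090/api/v1/query"

response = requests.get(
    PROMETHEUS_URL,
    params={"query": "rate(http_requests_total[5m])"},
)
response.raise_for_status()

# Each result carries the metric's labels and its latest [timestamp, value] pair.
for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"])
```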
Where Are Prometheus Metrics Stored?
By default, Prometheus stores its time-series data on disk under the directory specified by the --storage.tsdb.path flag. In the default configuration, this data folder is located at ./data.
Prometheus Metrics Example
For the most effective ways to use Prometheus metrics, including a number of examples of how they have been used in real-world use cases, see our guide to Prometheus dashboards.
For users who wish to explore how metrics management can help solve their issues, Logit.io provides fully managed hosted Prometheus as part of its platform. If you would like to explore this use case in more detail, you can sign up for a free 14-day trial of Logit.io.
If you found this article informative, you might want to take a look at our previous article on Prometheus tools or our article all about distributed tracing.