Metrics filter plugin
Aggregates metrics (meters, timers, and percentiles) over a rolling window of events and emits the summary as its own event. Useful for producing rate/latency metrics from raw log streams.
- Package: logstash-filter-metrics
- Coverage source: default/bundled
- Official catalog entry: Yes
Plugin overview
The metrics plugin runs in the Logstash filter stage. It counts and times events as they pass through and periodically emits a new event containing the aggregated metrics.
Typical use cases
- Measure event throughput (counts plus 1/5/15-minute rates) from raw log streams.
- Track latency distributions (min, max, mean, percentiles) from a numeric field such as a response time.
- Feed dashboards and alerting rules with rate and latency summaries instead of raw events.
Input and output behavior
- Flow: counts and times matching events, then emits a separate summary event every flush_interval seconds.
- Input: works on events that match your surrounding if conditions.
- Output: original events pass through unchanged; the aggregated metrics arrive as a new event created on each flush.
- Important options: clear_interval, flush_interval, ignore_older_than, meter, timer.
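Because the summary arrives as a separate event, pipelines typically tag it in the filter and branch on that tag in the output stage. A minimal sketch (the metric tag name and the stdout output are illustrative choices, not defaults of the plugin):

```text
filter {
  metrics {
    meter => [ "events" ]
    add_tag => [ "metric" ]        # tags only the flushed summary event
  }
}
output {
  # route the periodic summaries separately from the raw events
  if "metric" in [tags] {
    stdout { codec => rubydebug }  # illustrative destination
  }
}
```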
Options
Required
- No required plugin-specific options.
Optional
- clear_interval (type: number; default: -1) — Seconds after which the internal metric state is cleared; -1 means never clear, so metrics accumulate for the pipeline's lifetime.
- flush_interval (type: number; default: 5) — Seconds between summary emissions; should be a multiple of 5.
- ignore_older_than (type: number; default: 0) — Ignore events whose @timestamp is older than this many seconds; 0 disables the check.
- meter (type: array; default: []) — List of meter names; a counter and moving-average rates are maintained per meter.
- percentiles (type: array; default: [1, 5, 10, 90, 95, 99, 100]) — Percentiles to report on timers (for example [50, 95, 99]).
- rates (type: array; default: [1, 5, 15]) — Rates to report, in minutes (for example [1, 5, 15]).
- timer (type: hash; default: {}) — Map of timer name to a numeric source field; timers report min, max, mean, standard deviation, and the configured percentiles.
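On each flush, the meter and timer names become nested fields on the emitted summary event. For a single meter named events, the summary would carry roughly the following structure (field names follow the plugin's count/rate_Xm convention; the values shown are illustrative):

```text
{
  "events" => {
    "count"    => 1234,   # events seen since the last clear
    "rate_1m"  => 20.5,   # 1-minute moving-average rate
    "rate_5m"  => 18.2,
    "rate_15m" => 17.9
  }
}
```

Timers additionally report min, max, mean, stddev, and a pXX field for each configured percentile under the timer's name.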
Example configuration
filter {
metrics {
meter => [ "events.ingested", "events.parse_failures" ]
timer => { "http.response.time_ms" => "[http][response][time_ms]" }
rates => [ 1, 5 ]
percentiles => [ 50, 95, 99 ]
flush_interval => 60
add_tag => [ "pipeline_metrics" ]
}
}
Common options configuration
All Logstash filter plugins support these shared options:
- add_field (type: hash; default: {}) — Adds fields when the filter succeeds. Supports dynamic field names and values.
- add_tag (type: array; default: []) — Adds one or more tags when the filter succeeds.
- enable_metric (type: boolean; default: true) — Enables or disables metric collection for this plugin instance.
- id (type: string; default: none) — Sets an explicit plugin instance ID for monitoring and troubleshooting.
- periodic_flush (type: boolean; default: false) — Calls the filter flush method at regular intervals.
- remove_field (type: array; default: []) — Removes fields when the filter succeeds. Supports dynamic field names.
- remove_tag (type: array; default: []) — Removes tags when the filter succeeds.
filter {
metrics {
add_field => { "pipeline_stage" => "parsed" }
add_tag => ["parsed", "logstash_filter"]
enable_metric => true
id => "my_filter_instance"
periodic_flush => false
remove_field => ["tmp_field"]
remove_tag => ["temporary"]
}
}
Apply in Logit.io
- Open your stack in Logit.io and navigate to Logstash Pipelines.
- In the filter { ... } section, add a metrics block.
- Save your pipeline changes, then restart the Logstash pipeline if prompted.
- Send sample events and verify that the flushed metric summary events appear in OpenSearch Dashboards.
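Putting the steps above together, a minimal end-to-end pipeline might look like the following sketch (the beats input port and the stdout output are placeholder assumptions for your stack, not requirements of the plugin):

```text
input {
  beats { port => 5044 }            # placeholder input
}
filter {
  metrics {
    meter => [ "events" ]
    flush_interval => 60
    add_tag => [ "pipeline_metrics" ]
  }
}
output {
  if "pipeline_metrics" in [tags] {
    stdout { codec => rubydebug }   # inspect the flushed summaries
  }
}
```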
Validation checklist
- Confirm the metrics block compiles without syntax errors.
- Verify the expected metric fields (counts, rates, percentiles) appear in the flushed summary events.
- Verify unexpected fields are not removed unless explicitly configured.
- Confirm tags added on success/failure align with your alerting and routing rules.
Troubleshooting
- If no metric events are emitted, verify that your filter condition (if ...) matches incoming events and that at least one flush_interval has elapsed.
- If the pipeline fails to start, validate braces/quotes and retry with a minimal filter block.
- If throughput drops, reduce expensive operations and test with representative sample volume.
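When a pipeline fails to start, the configuration can be checked from the Logstash installation directory before restarting; the config file path below is an assumption:

```text
# exits non-zero and prints the error if the pipeline configuration is invalid
bin/logstash --config.test_and_exit -f /path/to/pipeline.conf
```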
References
- GitHub package: logstash-filter-metrics
- Canonical catalog: /log-management/ingestion-pipeline/logstash-filters-reference