An introduction to alerting
Alerting and notifications
Proactive alerting helps you catch production errors, unusual log volumes, and security-relevant events before they escalate. On Logit.io you configure ElastAlert 2 rules as YAML from the dashboard, test them, and apply updates when you are ready. Notifications can go to email, chat tools, on-call systems, webhooks, and many other destinations.
You express what to match using the same OpenSearch query patterns you use elsewhere; rules run on managed infrastructure—you do not configure cluster endpoints inside the rule file.
Automate enable/disable and rule CRUD with the Developer API.
Primary workflow (YAML editor)
- Enable alerting under Alerting & Notifications for your Logs stack.
- Create or open a rule and edit the YAML.
- Run Test only to check parsing and query results.
- Choose Update when the rule is ready.
See How do I create a new alerting rule? for step-by-step UI guidance.
Subject, body, and context
- Subject & body — titles, message text, Python and Jinja formatting, multiple destinations.
- Context & links — top field counts, include, and OpenSearch Discover URLs.
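As a sketch of how context options look in a rule, the snippet below shows top counts and extra fields being added to the alert body (the field names are illustrative, and the Discover URL option may require additional stack-specific settings):

```yaml
# Illustrative context options — field names are examples only
include: ["host.name", "message"]   # extra document fields copied into the alert body
top_count_keys: ["host.name"]       # report the most frequent values of this field
top_count_number: 5                 # how many top values to include
generate_kibana_discover_url: true  # attach a Discover link (where configured)
```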
Rule types (ElastAlert 2)
Each rule sets type: to one of the built-in rule types. Every type has its own required fields and options:
| Type | Summary |
|---|---|
| any | Alert on every document that matches the filter. |
| blacklist | Match when a field value is in a deny list. |
| whitelist | Match when a field value is not in an allow list. |
| change | Match when a field changes between events sharing a query_key. |
| frequency | Match when at least num_events occur inside timeframe. |
| spike | Match when volume in the current window spikes or drops vs the previous window. |
| flatline | Match when there are fewer than threshold events in timeframe. |
| new_term | Match when a new value appears in a monitored field. |
| cardinality | Match on unique-count of a field above/below limits. |
| metric_aggregation | Match when a metric (avg, max, sum, etc.) crosses a threshold. |
| spike_aggregation | Match when a metric spikes or drops between windows. |
| percentage_match | Match when a subset of documents exceeds or falls below a percentage of the whole. |
Browse all guides under Rule types.
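As a sketch of how a type's required fields fit together, a spike rule might look like this (thresholds, index pattern, and query are illustrative):

```yaml
name: Error volume spike example   # illustrative values throughout
type: spike
index: "*-*"
timeframe:
  hours: 1
spike_height: 3        # current window must hold 3x the events of the previous window
spike_type: up         # alert on increases only (up, down, or both)
threshold_cur: 100     # require at least 100 events in the current window
filter:
- query:
    query_string:
      query: "log.level:error"
```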
Destinations
Set alert: to one or more destinations (for example email, slack, or custom webhooks via post / post2). Each destination has its own configuration keys. Browse Destinations for a full list with examples.
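For example, a rule can fan out to more than one destination at once; the snippet below is a sketch with placeholder values (the email address and webhook URL are hypothetical):

```yaml
# Illustrative destination settings — address and URL are placeholders
alert:
- "email"
- "slack"
email:
- "[email protected]"
slack_webhook_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```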
Other useful features
- OpenSearch Dashboards links in alert content (where configured).
- Top counts and field values in alert bodies.
- Aggregation windows and scheduled digest-style sending (aggregation).
- Per-key bucketing with query_key to reduce noise.
- realert to control how often the same rule can fire.
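These noise controls are often combined; a minimal sketch (the field name and intervals are illustrative):

```yaml
# Illustrative noise controls for a rule
query_key: host.name   # track and alert per host value
realert:
  minutes: 30          # suppress repeat alerts for the same host for 30 minutes
aggregation:
  minutes: 10          # batch matches into one digest every 10 minutes
```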
Example frequency rule (managed stack)

```yaml
name: High error rate example
type: frequency
index: "*-*"
num_events: 50
timeframe:
  minutes: 15
filter:
- query:
    query_string:
      query: "log.level:error OR level:error"
alert:
- "email"
email:
- "[email protected]"
```

ElastAlert execution logs in your stack
Logit.io runs ElastAlert 2 for you, but execution output is still written into your Logs stack (the elastalert index). That helps you audit rule behaviour, see test runs, and debug cases where matches occur but no notification arrives.
To view those messages in the dashboard, open Diagnostic logs for your stack at /logs-settings/logfile (stack logs). Use your usual Logs search or Discover workflow against the elastalert data alongside application logs.
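For example, to find runs where a rule matched but no notification went out, you might search the ElastAlert output with a query such as the following (the field names rule_name and alert_sent are assumptions based on ElastAlert 2 status documents and may differ by version):

```
rule_name:"High error rate example" AND alert_sent:false
```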
OpenSearch Alerting (optional)
If you prefer OpenSearch Alerting (monitors in OpenSearch Dashboards) instead of YAML rules, see OpenSearch Alerting.
Best getting started guides
Start with these guides in order:
- Validate your rule first
  Confirm YAML structure, required fields, and destination settings before troubleshooting behaviour.
- Send alerts for common scenarios
  Use ready-to-adapt examples for spike, flatline, and per-host/application alerting patterns.
- Set a log-volume threshold warning
  Warn early when ingestion approaches your stack limit using a frequency rule.
- Configure high CPU notifications (OpenSearch Alerting)
  Build a monitor in OpenSearch Dashboards when you prefer monitor-based alerts.
More task-focused walkthroughs are available under Guides, including HTML in email alerts.
Frequently asked questions
Why is my rule not getting any hits?
If tests show 0 hits, confirm the index pattern (for example *-* or your Beat prefix) matches documents in Discover. Temporarily simplify or remove filter to see whether any documents are seen at all.
Typical filter using query string syntax:

```yaml
filter:
- query:
    query_string:
      query: "foo:bar AND baz:abc*"
```

If you use a term query on an analysed field, the token in the index may differ from the raw string (for example, use a .keyword field if your mapping provides one).
I got hits, why did I not get an alert?
Behaviour depends on type:. For any, each hit can be a match. For frequency, num_events must occur inside timeframe. For spike and flatline, windowing and thresholds must be satisfied.
Example frequency snippet (no cluster settings):

```yaml
name: Log frequency rule example
type: frequency
index: "*-*"
num_events: 30
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: "*"
alert:
- "email"
email:
- "[email protected]"
```

If you see matches but 0 alerts sent, check aggregation settings, silence / realert, and rule logs for messages such as Ignoring match for silenced rule.
Why only one alert when I expected several?
realert sets the minimum time between alerts for the same rule. To allow every match:
```yaml
realert:
  minutes: 0
```

How can I reduce duplicate alerts?
Raise realert or combine with query_key so duplicates are scoped per field value:
```yaml
realert:
  hours: 8
query_key: user
```

Send a digest on a schedule
```yaml
aggregation:
  schedule: "2 4 * * mon,fri"
```

Custom timestamp field
If events do not use @timestamp, set:
```yaml
timestamp_field: log-time
timestamp_type: iso
```

You can also tune query_delay and buffer_time for late-arriving data.