ES|QL Language Reference for Dashboard Creation¶
This guide provides essential ES|QL (Elasticsearch Query Language) syntax for LLMs creating Kibana dashboards. ES|QL is a piped query language distinct from SQL.
ES|QL vs SQL: Key Differences¶
ES|QL is not SQL. Common mistakes:
| SQL Syntax (Wrong) | ES|QL Syntax (Correct) |
| ------------------ | --------------------- |
| SELECT * FROM logs | FROM logs-* |
| SELECT COUNT(*) FROM logs WHERE status = 200 | FROM logs-* \| WHERE status == 200 \| STATS COUNT(*) |
| SELECT * FROM logs ORDER BY @timestamp DESC LIMIT 10 | FROM logs-* \| SORT @timestamp DESC \| LIMIT 10 |
| SELECT host, COUNT(*) FROM logs GROUP BY host | FROM logs-* \| STATS COUNT(*) BY host |
| SELECT AVG(response_time) AS avg_time FROM logs | FROM logs-* \| STATS avg_time = AVG(response_time) |
| SELECT * FROM logs WHERE message LIKE '%error%' | FROM logs-* \| WHERE message LIKE "*error*" |
Key differences:
- Piped syntax: commands flow left-to-right with `|` (pipe) operators
- Equality: use `==`, not `=`, for comparison
- No SELECT: use `KEEP` to select columns, or just aggregate directly
- No GROUP BY clause: use `BY` within `STATS`
- Wildcards: use `*` in patterns, not `%`
- Index patterns: FROM uses Elasticsearch index patterns (e.g., `logs-*`)
Syntax Rules¶
Case sensitivity:
- Keywords and function names are case-insensitive: `FROM`, `from`, `WHERE`, `where` all work
- Field names are case-sensitive: `host.name` ≠ `Host.Name`
- String comparisons are case-sensitive: `"Germany"` ≠ `"germany"`
String literals use double quotes, not single quotes:
# Correct
WHERE country == "Germany"
# Wrong - single quotes not supported
WHERE country == 'Germany'
Always name computed columns to avoid awkward backtick references later:
# Bad - creates column named "height * 3.281"
| EVAL height * 3.281
| STATS MEDIAN(`height * 3.281`) # Must quote expression
# Good - explicit name
| EVAL height_feet = height * 3.281
| STATS MEDIAN(height_feet)
Source Commands¶
Every ES|QL query starts with a source command:
FROM¶
Retrieves data from Elasticsearch indices, data streams, or aliases.
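A minimal example (the `logs-*` index pattern is illustrative; FROM also accepts comma-separated patterns):

```esql
FROM logs-*
| LIMIT 10
```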
ROW¶
Creates inline data for testing:
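For instance, to emit a single row with literal values (the field names and values here are illustrative; square brackets create a multivalued field):

```esql
ROW message = "test", count = 1, tags = ["a", "b"]
```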
SHOW¶
Returns deployment metadata:
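For example, to return the deployment's build date, hash, and version:

```esql
SHOW INFO
```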
TS (Time Series)¶
Optimized source command for time series data streams. Available in Elasticsearch 9.2+ (tech preview).
Use TS only when your target stack supports it. For dashboards that must remain broadly compatible, prefer FROM. If a dashboard intentionally depends on TS, set minimum_kibana_version: "9.2.0" on the dashboard so the requirement is explicit.
TS my_metrics
| WHERE @timestamp >= NOW() - 1 day
| STATS SUM(RATE(requests)) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend), host
Processing Commands¶
Processing commands transform data. Chain them with |:
WHERE¶
Filters rows by condition:
FROM logs-*
| WHERE status == 200
| WHERE response_time > 1000
| WHERE host.name LIKE "prod-*"
| WHERE event.category IN ("authentication", "network")
STATS¶
Aggregates data with optional grouping:
# Simple aggregation
FROM logs-*
| STATS total = COUNT(*)
# Grouped aggregation
FROM logs-*
| STATS count = COUNT(*) BY host.name
# Multiple aggregations
FROM logs-*
| STATS
total = COUNT(*),
avg_time = AVG(response_time),
max_time = MAX(response_time)
BY service.name
# Conditional aggregation (inline WHERE - powerful feature)
FROM logs-*
| STATS
total = COUNT(*),
errors = COUNT(*) WHERE status >= 500,
slow_requests = COUNT(*) WHERE response_time > 1000
BY service.name
# Time bucketing (dynamic - recommended)
FROM logs-*
| STATS event_count = COUNT(*) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend)
| SORT time_bucket ASC
EVAL¶
Creates or modifies columns:
FROM logs-*
| EVAL response_time_ms = response_time * 1000
| EVAL status_category = CASE(
status >= 500, "error",
status >= 400, "client_error",
status >= 300, "redirect",
"success"
)
| EVAL is_slow = response_time > 1000
KEEP¶
Selects and orders columns:
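A minimal sketch (KEEP also accepts wildcards such as `host.*`; the field names are illustrative):

```esql
FROM logs-*
| KEEP @timestamp, host.name, message
```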
DROP¶
Removes columns:
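For example (the dropped field names are illustrative):

```esql
FROM logs-*
| DROP event.original, tags
```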
RENAME¶
Renames columns:
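For example, using the `old AS new` form (field names are illustrative):

```esql
FROM logs-*
| RENAME response_time AS duration_ms
```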
SORT¶
Orders results:
FROM logs-*
| SORT @timestamp DESC
| SORT status ASC, response_time DESC
| SORT host.name ASC NULLS LAST
LIMIT¶
Restricts row count (default is 1000):
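For example, to return only the 100 most recent rows:

```esql
FROM logs-*
| SORT @timestamp DESC
| LIMIT 100
```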
DISSECT¶
Extracts fields from strings using delimiter patterns:
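A small sketch (the log line format and field names are illustrative; the last key captures the remainder of the string):

```esql
// parse a delimiter-separated log line into fields
ROW message = "2024-05-01 ERROR disk full"
| DISSECT message "%{date} %{level} %{description}"
```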
GROK¶
Extracts fields using regex patterns:
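A small sketch using standard grok pattern names (the log format is illustrative; the `:int` suffix converts the captured value):

```esql
// extract typed fields from a semi-structured string
ROW message = "192.168.1.1 GET /index.html 200"
| GROK message "%{IP:client_ip} %{WORD:method} %{URIPATHPARAM:path} %{NUMBER:status:int}"
```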
MV_EXPAND¶
Expands multi-valued fields into separate rows:
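For example (field names and values are illustrative):

```esql
// one row with a multivalued field becomes one row per value
ROW host = "h1", tags = ["web", "prod"]
| MV_EXPAND tags
```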
ENRICH¶
Adds data from enrichment policies:
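A sketch assuming an enrich policy named `geo_policy` keyed on `client.ip` (the policy and enrich field names are hypothetical):

```esql
FROM logs-*
| ENRICH geo_policy ON client.ip WITH geo.city_name, geo.country_name
```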
LOOKUP JOIN¶
Joins with a lookup index (requires index.mode: lookup on the lookup index):
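A sketch assuming a lookup index named `users` (created with `index.mode: lookup`) that shares a `user.id` field with the source data:

```esql
FROM logs-*
| LOOKUP JOIN users ON user.id
| SORT @timestamp DESC   // joins can reorder rows, so sort afterwards
```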
Limitations: Single shard only (max ~2B docs), no wildcards in index name, can reorder rows (always SORT after join).
FORK¶
Splits processing into multiple branches and combines results. Useful for combining metrics from different conditions or fields:
TS metrics-*
| FORK (
WHERE container.cpu.usage.kernelmode IS NOT NULL
| STATS cpu_rate = AVG(RATE(container.cpu.usage.kernelmode))
BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend), container.id
| EVAL cpu_mode = "kernelmode"
)
(
WHERE container.cpu.usage.usermode IS NOT NULL
| STATS cpu_rate = AVG(RATE(container.cpu.usage.usermode))
BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend), container.id
| EVAL cpu_mode = "usermode"
)
| STATS combined_rate = AVG(cpu_rate) BY time_bucket, cpu_mode
Key points:
- Each branch processes independently
- Branches must produce compatible schemas (same columns)
- Use `STATS` within branches when using `RATE()` (RATE can only be used in STATS)
- Combine results after FORK with additional processing commands
Aggregation Functions¶
Use within STATS:
| Function | Description | Example |
|---|---|---|
| `COUNT(*)` | Count all rows | `STATS total = COUNT(*)` |
| `COUNT(field)` | Count non-null values | `STATS errors = COUNT(error.code)` |
| `COUNT_DISTINCT(field)` | Unique value count | `STATS unique_users = COUNT_DISTINCT(user.id)` |
| `SUM(field)` | Sum numeric values | `STATS total_bytes = SUM(bytes)` |
| `AVG(field)` | Average value | `STATS avg_time = AVG(response_time)` |
| `MIN(field)` | Minimum value | `STATS min_time = MIN(response_time)` |
| `MAX(field)` | Maximum value | `STATS max_time = MAX(response_time)` |
| `MEDIAN(field)` | Median value | `STATS median_time = MEDIAN(response_time)` |
| `PERCENTILE(field, n)` | Nth percentile | `STATS p95 = PERCENTILE(response_time, 95.0)` |
| `STD_DEV(field)` | Standard deviation | `STATS stddev = STD_DEV(response_time)` |
| `VARIANCE(field)` | Variance | `STATS var = VARIANCE(response_time)` |
| `VALUES(field)` | All unique values | `STATS hosts = VALUES(host.name)` |
| `TOP(field, n, order)` | Top N values | `STATS top_hosts = TOP(host.name, 5, "desc")` |
Time Series Functions¶
For use with the TS source command (Elasticsearch 9.2+):
| Function | Description | Example |
|---|---|---|
| `RATE(field)` | Per-second rate of counter increase | `STATS SUM(RATE(requests))` |
| `IRATE(field)` | Instant rate (last two points) | `STATS SUM(IRATE(requests))` |
Important: RATE() and IRATE() can only be used within STATS commands (or other aggregate functions). They cannot be used directly in EVAL or WHERE clauses:
# CORRECT - RATE() inside STATS
TS metrics-*
| STATS request_rate = SUM(RATE(requests))
# CORRECT - RATE() inside STATS within FORK branch
TS metrics-*
| FORK (
WHERE field1 IS NOT NULL
| STATS rate1 = AVG(RATE(field1))
)
(
WHERE field2 IS NOT NULL
| STATS rate2 = AVG(RATE(field2))
)
# WRONG - RATE() cannot be used in EVAL
TS metrics-*
| EVAL rate = RATE(requests) # Error: RATE can only be used in STATS
Additional time series functions:

| Function | Description | Example |
|---|---|---|
| `DELTA(field)` | Absolute change of gauge | `STATS SUM(DELTA(temperature))` |
| `IDELTA(field)` | Instant delta (last two points) | `STATS SUM(IDELTA(gauge))` |
| `INCREASE(field)` | Absolute increase of counter | `STATS SUM(INCREASE(total_bytes))` |
| `AVG_OVER_TIME(field)` | Average over time window | `STATS MAX(AVG_OVER_TIME(cpu))` |
| `MAX_OVER_TIME(field)` | Maximum over time window | `STATS MAX(MAX_OVER_TIME(memory))` |
| `MIN_OVER_TIME(field)` | Minimum over time window | `STATS MIN(MIN_OVER_TIME(latency))` |
| `SUM_OVER_TIME(field)` | Sum over time window | `STATS SUM(SUM_OVER_TIME(bytes))` |
| `COUNT_OVER_TIME(field)` | Count over time window | `STATS SUM(COUNT_OVER_TIME(events))` |
| `FIRST_OVER_TIME(field)` | Earliest value by timestamp | `STATS MAX(FIRST_OVER_TIME(value))` |
| `LAST_OVER_TIME(field)` | Latest value by timestamp | `STATS MAX(LAST_OVER_TIME(value))` |
Important: All *_OVER_TIME() functions must be wrapped in another aggregate function like AVG(), MAX(), MIN(), or SUM(). They cannot be used directly:
# CORRECT - AVG_OVER_TIME wrapped in MAX()
TS metrics-*
| STATS avg_cpu = MAX(AVG_OVER_TIME(system.cpu.utilization))
# CORRECT - LAST_OVER_TIME wrapped in MAX()
TS metrics-*
| STATS connections = MAX(LAST_OVER_TIME(postgresql.backends))
# WRONG - *_OVER_TIME() must be wrapped in aggregate
TS metrics-*
| STATS cpu = AVG_OVER_TIME(system.cpu.utilization) # Error: must be wrapped
Choosing the Right Gauge Aggregation¶
For gauge metrics (point-in-time values like connections, memory, counts), choose based on what question you're answering:
| Use Case | Function | When to Use |
|---|---|---|
| Current state | `LAST_OVER_TIME()` | Connection counts, thread counts, buffer sizes, queue depths ("what is the value now?") |
| Typical value | `AVG_OVER_TIME()` | CPU utilization, memory percentage, load averages ("what is the typical value?") |
| Peak detection | `MAX_OVER_TIME()` | Finding spikes in latency, memory high-water marks |
| Minimum threshold | `MIN_OVER_TIME()` | Detecting drops in available resources |
Recommended patterns:
# Current active connections (use LAST_OVER_TIME for current state)
TS metrics-*
| STATS connections = MAX(LAST_OVER_TIME(postgresql.backends))
# Average CPU over time (use AVG_OVER_TIME for typical value)
TS metrics-*
| STATS avg_cpu = MAX(AVG_OVER_TIME(system.cpu.utilization))
# Peak memory usage (use MAX_OVER_TIME for high-water mark)
TS metrics-*
| STATS peak_memory = MAX(MAX_OVER_TIME(process.memory.usage))
Note: When aggregating across multiple time series (e.g., multiple hosts), wrap with MAX(), SUM(), or AVG() as appropriate for your use case.
Dynamic Time Bucketing (Required)¶
Always use the 4-parameter BUCKET() syntax for time series charts so they scale with the user's selected time range. BUCKET() works with both FROM and TS queries:
# With FROM queries
FROM logs-*
| STATS event_count = COUNT(*) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend)
| SORT time_bucket ASC
# With TS queries
TS metrics-*
| STATS rate = SUM(RATE(requests)) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend)
| SORT time_bucket ASC
| Parameter | Description |
|---|---|
| `@timestamp` | The timestamp field to bucket |
| `20` | Target number of buckets (20-50 recommended) |
| `?_tstart` | Kibana time range start parameter (auto-populated) |
| `?_tend` | Kibana time range end parameter (auto-populated) |
This ensures visualizations remain readable whether the user views 5 minutes or 1 year of data. The ?_tstart and ?_tend parameters are automatically populated by Kibana based on the dashboard time picker.
Why dynamic bucketing is essential:
- Fixed intervals like `BUCKET(@timestamp, 1 minute)` create 10,080 data points for 1 week
- Fixed intervals like `BUCKET(@timestamp, 5 minutes)` create 2,016 data points for 1 week
- `BUCKET(@timestamp, 20, ?_tstart, ?_tend)` creates ~20 data points regardless of time range
Note: TBUCKET(interval) is an alias for BUCKET(@timestamp, interval) but should be avoided because it uses fixed intervals that don't scale with the dashboard time range.
Common Functions¶
String Functions¶
| Function | Description | Example |
|---|---|---|
| `LENGTH(s)` | String length | `EVAL len = LENGTH(message)` |
| `SUBSTRING(s, start, len)` | Extract substring | `EVAL prefix = SUBSTRING(host, 0, 4)` |
| `CONCAT(s1, s2, ...)` | Concatenate strings | `EVAL full = CONCAT(first, " ", last)` |
| `TO_LOWER(s)` | Lowercase | `EVAL lower = TO_LOWER(name)` |
| `TO_UPPER(s)` | Uppercase | `EVAL upper = TO_UPPER(name)` |
| `TRIM(s)` | Remove whitespace | `EVAL clean = TRIM(input)` |
| `REPLACE(s, old, new)` | Replace substring | `EVAL fixed = REPLACE(msg, "err", "error")` |
| `SPLIT(s, delim)` | Split into array | `EVAL parts = SPLIT(path, "/")` |
| `MV_FIRST(array)` | First element of array | `EVAL first = MV_FIRST(parts)` |
| `MV_LAST(array)` | Last element of array | `EVAL last = MV_LAST(parts)` |
Array access: ES|QL doesn't support bracket indexing like array[0]. Use MV_FIRST() and MV_LAST() to access array elements:
# Split image name and extract name/version
FROM metrics-*
| EVAL image_parts = SPLIT(container.image.name, ":")
| EVAL image_name = MV_FIRST(image_parts)
| EVAL image_version = CASE(MV_FIRST(image_parts) == MV_LAST(image_parts), "latest", MV_LAST(image_parts))
Date/Time Functions¶
| Function | Description | Example |
|---|---|---|
| `NOW()` | Current timestamp | `WHERE @timestamp > NOW() - 1 hour` |
| `DATE_EXTRACT(unit, date)` | Extract date part | `EVAL hour = DATE_EXTRACT("hour", @timestamp)` |
| `DATE_TRUNC(interval, date)` | Truncate to interval | `EVAL day = DATE_TRUNC(1 day, @timestamp)` |
| `DATE_DIFF(unit, d1, d2)` | Difference between dates | `EVAL age_days = DATE_DIFF("day", created, NOW())` |
| `DATE_FORMAT(fmt, date)` | Format date as string | `EVAL formatted = DATE_FORMAT("yyyy-MM-dd", @timestamp)` |
| `DATE_PARSE(fmt, s)` | Parse string to date | `EVAL parsed = DATE_PARSE("yyyy-MM-dd", date_str)` |
| `BUCKET(field, count, start, end)` | Dynamic bucketing for date/numeric fields | `STATS count = COUNT(*) BY BUCKET(@timestamp, 20, ?_tstart, ?_tend)` |
Numeric Functions¶
| Function | Description | Example |
|---|---|---|
| `ABS(n)` | Absolute value | `EVAL abs_val = ABS(change)` |
| `CEIL(n)` | Round up | `EVAL ceiling = CEIL(value)` |
| `FLOOR(n)` | Round down | `EVAL floored = FLOOR(value)` |
| `ROUND(n, decimals)` | Round to decimals | `EVAL rounded = ROUND(avg, 2)` |
| `POW(base, exp)` | Power | `EVAL squared = POW(x, 2)` |
| `SQRT(n)` | Square root | `EVAL root = SQRT(variance)` |
| `LOG10(n)` | Base-10 logarithm | `EVAL log = LOG10(value)` |
Conditional Functions¶
| Function | Description | Example |
|---|---|---|
| `CASE(cond1, val1, ...)` | Conditional logic | `EVAL status = CASE(code >= 500, "error", code >= 400, "warn", "ok")` |
| `COALESCE(v1, v2, ...)` | First non-null | `EVAL name = COALESCE(display_name, username, "unknown")` |
| `GREATEST(v1, v2, ...)` | Maximum value | `EVAL max = GREATEST(a, b, c)` |
| `LEAST(v1, v2, ...)` | Minimum value | `EVAL min = LEAST(a, b, c)` |
Type Conversion Functions¶
| Function | Description | Example |
|---|---|---|
| `TO_STRING(v)` | Convert to string | `EVAL str = TO_STRING(status_code)` |
| `TO_INTEGER(v)` | Convert to integer | `EVAL num = TO_INTEGER(count_str)` |
| `TO_DOUBLE(v)` | Convert to double | `EVAL dbl = TO_DOUBLE(value)` |
| `TO_BOOLEAN(v)` | Convert to boolean | `EVAL flag = TO_BOOLEAN(enabled)` |
| `TO_DATETIME(v)` | Convert to datetime | `EVAL ts = TO_DATETIME(timestamp_str)` |
| `TO_IP(v)` | Convert to IP | `EVAL ip = TO_IP(ip_string)` |
Operators¶
Comparison Operators¶
| Operator | Description | Example |
|---|---|---|
| `==` | Equal | `WHERE status == 200` |
| `!=` | Not equal | `WHERE status != 500` |
| `<` | Less than | `WHERE response_time < 100` |
| `<=` | Less than or equal | `WHERE count <= 10` |
| `>` | Greater than | `WHERE bytes > 1000` |
| `>=` | Greater than or equal | `WHERE score >= 0.5` |
| `LIKE` | Pattern match (wildcards) | `WHERE host LIKE "prod-*"` |
| `RLIKE` | Regex match | `WHERE message RLIKE "error.*timeout"` |
| `IN` | In list | `WHERE status IN (200, 201, 204)` |
| `IS NULL` | Null check | `WHERE error IS NULL` |
| `IS NOT NULL` | Not null check | `WHERE response IS NOT NULL` |
Logical Operators¶
| Operator | Description | Example |
|---|---|---|
| `AND` | Logical AND | `WHERE status == 200 AND response_time < 100` |
| `OR` | Logical OR | `WHERE status == 500 OR status == 503` |
| `NOT` | Logical NOT | `WHERE NOT host LIKE "test-*"` |
Arithmetic Operators¶
| Operator | Description | Example |
|---|---|---|
| `+` | Addition | `EVAL total = sent + received` |
| `-` | Subtraction | `EVAL diff = end - start` |
| `*` | Multiplication | `EVAL bytes = kb * 1024` |
| `/` | Division | `EVAL rate = count / duration` |
| `%` | Modulo | `EVAL remainder = value % 10` |
Dashboard Query Patterns¶
Metric Panel Queries¶
Single-value metrics:
# Total count
FROM logs-*
| STATS total_events = COUNT(*)
# Average with breakdown
FROM logs-*
| STATS avg_response = AVG(response_time), p95 = PERCENTILE(response_time, 95.0) BY service.name
Time Series Charts¶
# Events over time (adaptive bucketing - recommended)
FROM logs-*
| STATS event_count = COUNT(*) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend)
| SORT time_bucket ASC
# Events by category over time (adaptive bucketing)
FROM logs-*
| STATS event_count = COUNT(*) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend), event.category
| SORT time_bucket ASC
# Bytes transferred over time with breakdown
FROM logs-*
| STATS total_bytes = SUM(bytes) BY time_bucket = BUCKET(@timestamp, 20, ?_tstart, ?_tend), host.name
| SORT time_bucket ASC
Pie/Donut Charts¶
# Top 10 event categories
FROM logs-*
| STATS count = COUNT(*) BY event.category
| SORT count DESC
| LIMIT 10
# Browser distribution
FROM logs-*
| STATS users = COUNT_DISTINCT(user.id) BY user_agent.name
| SORT users DESC
| LIMIT 5
Bar Charts¶
# Top hosts by request count
FROM logs-*
| STATS requests = COUNT(*) BY host.name
| SORT requests DESC
| LIMIT 10
# Error codes distribution
FROM logs-*
| WHERE status >= 400
| STATS errors = COUNT(*) BY status
| SORT errors DESC
Data Tables¶
# Recent errors with context
FROM logs-*
| WHERE log.level == "error"
| KEEP @timestamp, host.name, message, error.type
| SORT @timestamp DESC
| LIMIT 100
# Top talkers summary
FROM logs-*
| STATS
total_requests = COUNT(*),
total_bytes = SUM(bytes),
avg_response = AVG(response_time)
BY source.ip
| SORT total_requests DESC
| LIMIT 20
Common Mistakes to Avoid¶
Automatic Detection
Many of these mistakes are automatically detected by the dashboard linter.
Run kb-dashboard-lint on your YAML files to catch issues early.
- **Using SQL syntax**: ES|QL is not SQL. No SELECT, FROM comes first, use `|` pipes. Lint rule: `esql-sql-syntax`
- **Wrong equality operator**: use `==` for comparison, not `=`. Lint rule: `esql-sql-syntax`
- **Missing pipes**: commands must be separated by `|`.
- **GROUP BY in wrong place**: use `BY` within `STATS`, not as a separate clause.
- **Wrong wildcard character**: use `*` in LIKE patterns, not `%`. Lint rule: `esql-sql-syntax`
- **Forgetting SORT for time series**: add `SORT time_bucket ASC` for charts.
- **Using BUCKET as a standalone command**: `BUCKET()` must be used within `STATS ... BY` or `EVAL`, not as a separate command.
- **Case sensitivity**: field names are case-sensitive.
- **Missing time filter**: add `WHERE @timestamp >= NOW() - 1 day` for performance. Lint rule: `esql-where-clause`
- **Assuming default order**: always use an explicit SORT for predictable results.
- **Using window functions**: ES|QL has no `ROW_NUMBER() OVER (PARTITION BY ...)`. Use `VALUES()` + `MV_SORT()` + `MV_FIRST()`/`MV_LAST()` for latest-per-group patterns.
- **Hardcoded time buckets**: always use dynamic sizing `BUCKET(@timestamp, 20, ?_tstart, ?_tend)` for both FROM and TS queries so visualizations scale with the time range. Avoid fixed intervals like `BUCKET(@timestamp, 1 minute)` or `TBUCKET(5 minutes)` as they create too many data points for long time ranges. Lint rule: `esql-dynamic-time-bucket`
- **Multivalue fields return NULL**: most functions silently return NULL on multivalue fields. Use `MV_EXPAND` first.
- **Type mismatch across indices**: the same field with different types across indices causes errors. Fix with explicit conversion.
- **Late filtering wastes resources**: filter on indexed fields immediately after FROM for Lucene pushdown.
- **Type conversions fail silently**: failed conversions return NULL with warnings, not errors. Check results.
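The last few items can be sketched in one query, assuming a `logs-*` stream whose `status` field has different types across indices (the field names are illustrative):

```esql
// filter indexed fields immediately after FROM for Lucene pushdown
FROM logs-*
| WHERE @timestamp >= NOW() - 1 day
// convert explicitly when a field's type varies across indices;
// failed conversions return NULL (with a warning), so check for NULLs
| EVAL status_int = TO_INTEGER(status)
| WHERE status_int IS NOT NULL
| STATS errors = COUNT(*) WHERE status_int >= 500
```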
OpenTelemetry Data Patterns¶
When querying OpenTelemetry metrics data, follow these patterns:
Field Path Conventions¶
OTel data uses specific field path patterns:
# Metric values - use metric name directly
TS metrics-*
| WHERE apache.requests IS NOT NULL
# Metric attributes - always prefix with "attributes."
TS metrics-*
| WHERE attributes.state == "running"
# Resource attributes - use full path
TS metrics-*
| WHERE resource.attributes.service.name == "my-service"
# Filter by data stream
TS metrics-*
| WHERE data_stream.dataset == "apachereceiver.otel"
Counter vs Gauge Metrics¶
Counter metrics (cumulative, always increasing) must be aggregated with RATE() when using the TS source command:
# CORRECT - Use RATE() for counters
TS metrics-*
| STATS request_rate = SUM(RATE(apache.requests))
# WRONG - MAX() on counter gives cumulative total
FROM metrics-*
| STATS requests = MAX(apache.requests)
Gauge metrics (point-in-time values) use *_OVER_TIME() functions when using the TS source command. Choose based on your use case:

| Gauge Type | Function | Examples |
|---|---|---|
| Current state | `LAST_OVER_TIME()` | Connection counts, buffer sizes, queue depths |
| Typical value | `AVG_OVER_TIME()` | CPU utilization, memory percentage |
# Current value - use LAST_OVER_TIME for "what is it now?"
TS metrics-*
| STATS connections = MAX(LAST_OVER_TIME(mysql.connections))
# Average over time - use AVG_OVER_TIME for "what is the typical value?"
TS metrics-*
| STATS avg_cpu = MAX(AVG_OVER_TIME(system.cpu.utilization))
Escaping Special Field Names¶
Field names with numeric suffixes require backticks.
Lint rule: esql-field-escaping
# WRONG - Parser error
WHERE apache.load.1 IS NOT NULL
# CORRECT - Backtick escape
WHERE `apache.load.1` IS NOT NULL
| STATS load = AVG(AVG_OVER_TIME(`apache.load.1`))
Dimensional Queries¶
Filter or group by metric dimensions using the attributes prefix:
# Filter by dimension value
TS metrics-*
| WHERE attributes.operation == "insert"
| STATS ops = SUM(RATE(mysql.operations))
# Group by dimension
TS metrics-*
| STATS ops = SUM(RATE(mysql.operations)) BY operation = attributes.operation
For comprehensive OTel dashboard guidance, see Creating Dashboards from OTel Receivers.
Key Limitations¶
| Limitation | Detail |
|---|---|
| Row limit | Default 1,000 rows, max 10,000 (output only, not docs processed) |
| Timeout | 30-second default regardless of Kibana settings |
| Timezone | ES|QL only supports UTC |
| Nested fields | Not returned at all |
| Subqueries | Not supported—use ENRICH or LOOKUP JOIN |
| BUCKET gaps | Does not create empty buckets for missing time intervals |
| date_nanos | Partial support—cast to datetime for BUCKET, DATE_FORMAT, DATE_PARSE |
Quick Reference: Pipeline Order¶
Optimal command ordering for performance:
FROM logs-* -- 1. Source command
| WHERE @timestamp > NOW() - 1h -- 2. Filter indexed fields (Lucene pushdown)
| KEEP @timestamp, status, message -- 3. Drop unused columns early
| EVAL status_group = status / 100 -- 4. Compute new columns
| DISSECT message "%{method} %{path}" -- 5. Parse unstructured data
| ENRICH geo_policy ON client.ip -- 6. Add reference data
| LOOKUP JOIN users ON user.id -- 7. Join with lookup data
| WHERE status_group == 5 -- 8. Filter on computed values
| STATS count = COUNT(*) BY path -- 9. Aggregate
| SORT count DESC -- 10. Sort (after aggregation)
| LIMIT 100 -- 11. Cap output rows
Additional Resources¶
- ESQL Panel Configuration - Dashboard panel setup
- ES|QL Query Reuse with YAML Anchors - Query patterns
- Queries Configuration - Query format options
- Elastic ES|QL Reference - Official documentation