
Quick Start

FastForward is a research project exploring how to build a fast log forwarder with Rust. This guide gets you from zero to a working pipeline in about 15 minutes.

Requires the Rust stable toolchain (1.89+).

Terminal window
git clone https://github.com/strawgate/fastforward.git
cd fastforward
cargo build --release -p ffwd
sudo cp target/release/ff /usr/local/bin/

Verify it works:

Terminal window
ff --version
ff --help

Or start from a generated config:

Terminal window
ff init # creates ffwd.yaml with a basic file → stdout pipeline
ff wizard # interactive: choose a use-case preset or build your own

First, generate synthetic JSON log lines; you'll then build a pipeline that prints them to your terminal.

Terminal window
ff generate-json 10000 logs.json

This writes 10,000 JSON log lines to logs.json, each with fields like level, message, status, duration_ms, and service.
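Each line is a single JSON object (newline-delimited JSON). A generated line looks roughly like this (field values are illustrative):

```json
{"level":"INFO","message":"request handled GET /api/v1/users/10000","status":200,"duration_ms":1,"service":"myapp","request_id":"..."}
```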

config.yaml
pipelines:
  default:
    inputs:
      - type: file
        path: logs.json
        format: json
    outputs:
      - type: stdout
        format: console
Terminal window
ff run --config config.yaml

You’ll see colored output for every log line:

ff v0.x.x (abc1234 2026-04-01, release)
ready: default
ff running (1 pipeline(s))
10:30:00.000Z INFO request handled GET /api/v1/users/10000 duration_ms=1 request_id=... service=myapp status=200
10:30:00.000Z ERROR request handled GET /health/10021 duration_ms=40 request_id=... service=myapp status=503
...

FastForward parsed every JSON line, detected field types automatically, built Arrow RecordBatches, and printed them in a human-readable format. All 10,000 lines streamed through in under a second.
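FastForward's internals aren't exposed here, but the type-detection step can be sketched in a few lines of Python (a toy illustration of the idea, not FastForward code; the field values are made up):

```python
import json

def detect_types(lines):
    """Infer a column type for each field across NDJSON lines,
    the way a forwarder must before building columnar batches."""
    schema = {}
    for line in lines:
        for key, value in json.loads(line).items():
            t = ("int" if isinstance(value, int)
                 else "float" if isinstance(value, float)
                 else "string")
            prev = schema.get(key)
            if prev is None or prev == t:
                schema[key] = t
            elif {prev, t} == {"int", "float"}:
                # Widen int -> float when both appear in the same column.
                schema[key] = "float"
            else:
                # Mixed incompatible types fall back to string.
                schema[key] = "string"
    return schema

lines = [
    '{"level":"INFO","status":200,"duration_ms":1,"service":"myapp"}',
    '{"level":"ERROR","status":503,"duration_ms":40.5,"service":"myapp"}',
]
print(detect_types(lines))
# {'level': 'string', 'status': 'int', 'duration_ms': 'float', 'service': 'string'}
```

The real pipeline does this per batch and materializes the result as an Arrow schema rather than a dict.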

For one-off command-line use, keep a destination-only config and pipe data into ff. ff send reads from stdin, drains every configured output, and exits.

destination.yaml
output:
  type: stdout
  format: console
Terminal window
cat logs.json | ff send --config destination.yaml --format json --service checkout

Bare ff also enters send mode when stdin is piped, using the normal config search order. Set FFWD_CONFIG when you want an explicit destination config:

Terminal window
cat logs.json | FFWD_CONFIG=destination.yaml ff --format json --service checkout

Add a SQL transform to keep only what matters. Every batch of parsed records becomes a DataFusion SQL table named logs.

Terminal window
ff generate-json 10000 logs.json # regenerate — FastForward tracks file positions
config.yaml
pipelines:
  default:
    inputs:
      - type: file
        path: logs.json
        format: json
    transform: |
      SELECT level, message, status, duration_ms
      FROM logs
      WHERE level = 'ERROR' AND duration_ms > 50
    outputs:
      - type: stdout
        format: console
Terminal window
ff run --config config.yaml

Now you see only errors with slow durations. Only the columns from the SELECT appear — everything else was filtered before reaching the output. In production, this means you only ship the logs you care about.

Full SQL works: JOIN, GROUP BY, HAVING, subqueries, window functions. Built-in UDFs: regexp_extract(), grok(), json(), json_int(), json_float(), geo_lookup(). See SQL Transforms for the complete reference.
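As a taste of what a richer transform can look like, here is a plain-SQL aggregation over the same logs table (standard SQL only; see SQL Transforms for the UDF signatures):

```sql
-- Per-service error summary: error volume and average latency,
-- keeping only services with a meaningful error count.
SELECT service,
       count(*)         AS error_count,
       avg(duration_ms) AS avg_duration_ms
FROM logs
WHERE level = 'ERROR'
GROUP BY service
HAVING count(*) > 100
ORDER BY error_count DESC
```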


FastForward sends OTLP protobuf to an OpenTelemetry Collector, Grafana Alloy, or any OTLP-compatible backend.

Test locally with FastForward’s built-in devour receiver:

Terminal window
ff devour --mode otlp --listen 127.0.0.1:4318 &

You can also blast generated data into a destination sink to test throughput:

Terminal window
ff blast --destination otlp --endpoint http://127.0.0.1:4318/v1/logs
config.yaml
pipelines:
  default:
    inputs:
      - type: file
        path: logs.json
        format: json
    transform: |
      SELECT level, message, status, duration_ms
      FROM logs
      WHERE level IN ('ERROR', 'WARN')
    outputs:
      - type: otlp
        endpoint: http://127.0.0.1:4318/v1/logs
        compression: zstd
Terminal window
ff generate-json 10000 logs.json
ff run --config config.yaml

To ship to a real collector:

output:
  type: otlp
  endpoint: https://otel-collector:4318/v1/logs
  compression: zstd

Works out of the box with OpenTelemetry Collector, Grafana Alloy, or any OTLP-compatible backend.


This config tails Kubernetes pod logs in CRI format, filters by severity, and ships to OTLP with monitoring enabled.

pipeline.yaml
pipelines:
  default:
    inputs:
      - type: file
        path: /var/log/pods/**/*.log
        format: cri
    transform: |
      SELECT * FROM logs WHERE level IN ('ERROR', 'WARN')
    outputs:
      - type: otlp
        endpoint: https://otel-collector:4318/v1/logs
        compression: zstd
server:
  diagnostics: 0.0.0.0:9090
  log_level: info

What’s different from the earlier stages:

  • format: cri — Kubernetes container runtime format. FastForward strips the CRI prefix and reassembles multi-line logs via the P partial flag.
  • path: /var/log/pods/**/*.log — Recursive glob. FastForward discovers new files automatically and handles rotation.
  • server.diagnostics — Exposes health checks, metrics, and pipeline status.
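For reference, a CRI log line is a timestamp, a stream name, a partial/full flag, and the message. A long line split by the container runtime looks like this (content illustrative):

```text
2026-04-01T10:30:00.123456789Z stdout P {"level":"ERROR","message":"request fai
2026-04-01T10:30:00.123456790Z stdout F led","status":503}
```

FastForward joins the P line with the following F line before parsing, so the pipeline sees one complete JSON record.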
Terminal window
ff validate --config pipeline.yaml
ff dry-run --config pipeline.yaml

validate catches YAML errors. dry-run goes further and compiles the SQL against the Arrow schema, catching column name typos and type mismatches.

Terminal window
curl -s http://localhost:9090/live # liveness probe
curl -s http://localhost:9090/ready # readiness probe
curl -s http://localhost:9090/admin/v1/status | jq . # full status

See Monitoring & Diagnostics for the complete endpoint reference and alerting guidance.

You can also run several pipelines in one process; each gets its own name under pipelines:

pipelines:
  errors:
    inputs:
      - type: file
        path: /var/log/pods/**/*.log
        format: cri
    transform: |
      SELECT * FROM logs
      WHERE level IN ('ERROR', 'WARN')
    outputs:
      - type: otlp
        endpoint: https://otel-collector:4318/v1/logs
        compression: zstd
  all-logs:
    inputs:
      - type: file
        path: /var/log/pods/**/*.log
        format: cri
    outputs:
      - type: stdout
        format: json
server:
  diagnostics: 0.0.0.0:9090

  • Understand the pipeline internals → Pipeline Explorer (interactive)
  • Learn the full SQL transform syntax → SQL Transforms
  • See all YAML config options → YAML Reference
  • Deploy to Kubernetes → Kubernetes DaemonSet
  • Debug a problem → Troubleshooting