Kubernetes Deployment
This page is the Kubernetes-specific production deployment guide for FastForward. For standalone container usage, see Docker deployment.
DaemonSet
A DaemonSet is the recommended way to deploy FastForward in a Kubernetes cluster. Each
node runs one FastForward pod that reads container logs from /var/log on the host.
A ready-to-use manifest is provided at deploy/daemonset.yml.
Minimal DaemonSet
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: collectors
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ffwd
  namespace: collectors
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ffwd-config
  namespace: collectors
data:
  config.yaml: |
    input:
      type: file
      path: /var/log/pods/**/*.log
      format: cri

    transform: |
      SELECT * FROM logs WHERE level != 'DEBUG'

    output:
      type: otlp
      endpoint: ${OTEL_ENDPOINT}
      protocol: grpc
      compression: zstd

    server:
      diagnostics: 0.0.0.0:9090
      log_level: info
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ffwd
  namespace: collectors
  labels:
    app: ffwd
spec:
  selector:
    matchLabels:
      app: ffwd
  template:
    metadata:
      labels:
        app: ffwd
    spec:
      serviceAccountName: ffwd
      tolerations:
        - operator: Exists   # run on all nodes including control-plane
      containers:
        - name: ffwd
          image: ghcr.io/strawgate/fastforward:latest
          imagePullPolicy: IfNotPresent
          args:
            - run
            - --config
            - /etc/ffwd/config.yaml
          env:
            - name: OTEL_ENDPOINT
              value: http://otel-collector.monitoring.svc.cluster.local:4317
          ports:
            - name: diagnostics
              containerPort: 9090
          resources:
            requests:
              cpu: "250m"
              memory: "128Mi"
            limits:
              cpu: "1"
              memory: "512Mi"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: config
              mountPath: /etc/ffwd
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: config
          configMap:
            name: ffwd-config
```

Apply it:
```bash
kubectl apply -f deploy/daemonset.yml
kubectl -n collectors rollout status daemonset/ffwd
```

Validate rollout
```bash
# Pod health
kubectl -n collectors get pods -l app=ffwd -o wide

# Runtime logs
kubectl -n collectors logs daemonset/ffwd --tail=100

# Diagnostics endpoint (port-forward one pod)
POD=$(kubectl -n collectors get pods -l app=ffwd -o jsonpath='{.items[0].metadata.name}')
kubectl -n collectors port-forward "$POD" 9090:9090 &   # background it, or use a second terminal
curl -s http://localhost:9090/admin/v1/status | jq .
```

Rollback
If a new deployment causes dropped logs or sustained output errors, revert quickly:
```bash
# Roll back DaemonSet to previous revision
kubectl -n collectors rollout undo daemonset/ffwd

# Verify rollback completion
kubectl -n collectors rollout status daemonset/ffwd

# Confirm forwarding resumes
kubectl -n collectors logs daemonset/ffwd --tail=100
```
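Note that rollout undo only restores a previous pod template: if every revision uses the latest tag with IfNotPresent, a rollback may keep running the same cached image. Pinning a version tag makes revisions meaningfully different. A minimal sketch, assuming a published version tag (the tag below is a placeholder, not a real release):

```yaml
# Sketch: pin a specific image tag so `rollout undo` reverts to a known-good version.
# "v1.2.3" is hypothetical; use a tag that actually exists in your registry.
containers:
  - name: ffwd
    image: ghcr.io/strawgate/fastforward:v1.2.3
    imagePullPolicy: IfNotPresent
```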
Kubernetes metadata enrichment

Use the k8s_path enrichment table to attach namespace, pod, and container labels to
every log record:
```yaml
input:
  type: file
  path: /var/log/pods/**/*.log
  format: cri
  source_metadata: ecs

enrichment:
  - type: k8s_path
    table_name: k8s

transform: |
  SELECT l.level, l.message, k.namespace, k.pod_name, k.container_name
  FROM logs l
  LEFT JOIN k8s k ON l."file.path" = k.log_path_prefix
```

Namespace filtering
To collect logs only from specific namespaces, keep the k8s_path enrichment from the previous example and filter in the transform:
```yaml
input:
  type: file
  path: /var/log/pods/**/*.log
  format: cri
  source_metadata: ecs

transform: |
  SELECT l.*, k.namespace, k.pod_name, k.container_name
  FROM logs l
  LEFT JOIN k8s k ON l."file.path" = k.log_path_prefix
  WHERE k.namespace IN ('production', 'staging')
```

Scraping the diagnostics endpoint with Prometheus
Expose port 9090 in the pod spec to make the diagnostics API reachable from within the cluster.
To scrape /admin/v1/status, configure a Prometheus adapter (such as
json_exporter) that converts the JSON response into Prometheus metrics, or
query the endpoint directly in your monitoring stack.
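As a rough sketch of the adapter approach, a json_exporter instance can be probed by Prometheus with the FastForward status URL passed as the target. Everything below is illustrative: the exporter address, module name, and the Service in front of the DaemonSet are assumptions, not part of FastForward itself.

```yaml
# Sketch: Prometheus scrape job that asks json_exporter to fetch the status JSON.
# Assumes json_exporter runs as "json-exporter" in the monitoring namespace with
# a module named "ffwd", and that a Service exposes the FastForward pods on 9090.
scrape_configs:
  - job_name: ffwd-status
    metrics_path: /probe
    params:
      module: [ffwd]
    static_configs:
      - targets:
          - http://ffwd.collectors.svc.cluster.local:9090/admin/v1/status
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: json-exporter.monitoring.svc.cluster.local:7979
```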
OTLP collector integration
FastForward sends log records as OTLP protobuf. Any OpenTelemetry-compatible collector can receive them.
OpenTelemetry Collector
Add an otlp receiver to your collector config and enable the logs pipeline:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: normal
  otlphttp/loki:
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug, otlphttp/loki]
```

Point FastForward at the collector:
```yaml
output:
  type: otlp
  endpoint: http://otel-collector:4317
  protocol: grpc
  compression: zstd
```

Grafana Alloy / Agent
Grafana Alloy can receive OTLP logs and forward them to a logs backend such as Loki. Configure an
otelcol.receiver.otlp component and connect it to your exporter pipeline.
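A minimal sketch of that wiring in Alloy's configuration syntax is shown below; the Loki URL and the "default" component labels are assumptions to adapt to your environment.

```alloy
// Sketch: receive OTLP logs from FastForward and forward them to Loki.
// Endpoint addresses and component labels are placeholders.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    logs = [otelcol.exporter.loki.default.input]
  }
}

otelcol.exporter.loki "default" {
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```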
Resource sizing guidelines
FastForward is designed to process logs on a single CPU core. The pipeline runs as a set of blocking OS threads: one per input plus shared coordinator threads.
Baseline
| Scenario | CPU | Memory |
|---|---|---|
| Quiet node (< 1 MB/s) | 50 m | 64 Mi |
| Typical node (1–10 MB/s) | 250 m | 128 Mi |
| High-throughput node (> 10 MB/s) | 500 m – 1 CPU | 256 Mi |
Memory breakdown
| Component | Typical size |
|---|---|
| Arrow RecordBatch (per batch, 8 KB read) | ~512 Ki |
| DataFusion query plan | ~4 Mi |
| OTLP request buffer | ~2 Mi |
| Enrichment tables | < 1 Mi |
| Per-pipeline overhead | ~16 Mi |
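Summing those components, a single pipeline carries roughly 23 Mi of fixed overhead (16 + 4 + 2 + 1), plus about 0.5 Mi per in-flight RecordBatch, so the 128 Mi baseline request leaves comfortable headroom. Treat this as a rough estimate rather than a guarantee.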
Tuning tips
- Reduce memory: avoid enabling input line capture (line_field) unless needed; it stores the full JSON line and accounts for up to 65 % of table memory.
- Reduce CPU: use a WHERE clause in the transform to drop unwanted records early.
- Multiple pipelines: each pipeline occupies its own thread. Add CPU budget proportionally.
Kubernetes resource example
```yaml
resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
```

Set the limits higher than the requests so FastForward can burst during log spikes without being CPU-throttled or OOM-killed.
Validating before deploy
Use validate to parse and validate the config without starting the pipeline:
```bash
ff validate --config config.yaml
```

Use dry-run to build all pipeline objects without starting them (catches errors
such as SQL syntax issues):

```bash
ff dry-run --config config.yaml
```

Both commands exit 0 on success and print an error to stderr on failure.
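The same check can also run inside the cluster before forwarding starts. A minimal sketch, added under the DaemonSet's pod spec alongside containers and assuming the image's entrypoint is the ff binary (as the DaemonSet's args suggest):

```yaml
# Sketch: validate the mounted config before the forwarding container starts.
# If validation fails, the init container exits non-zero and the pod never forwards logs.
initContainers:
  - name: validate-config
    image: ghcr.io/strawgate/fastforward:latest
    args:
      - validate
      - --config
      - /etc/ffwd/config.yaml
    volumeMounts:
      - name: config
        mountPath: /etc/ffwd
        readOnly: true
```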