
CLI Commands

The FastForward CLI binary is installed as ff. It provides several commands for running, testing, and debugging log pipelines.

ff run

Starts the FastForward pipeline and runs continuously until stopped.

ff run --config pipeline.yaml

The --config flag is optional. FastForward will automatically discover configuration files in this order:

  1. --config <path>
  2. $FFWD_CONFIG
  3. ./ffwd.yaml
  4. ~/.config/ffwd/config.yaml
  5. /etc/ffwd/config.yaml
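The lookup order above can be sketched as a first-match search. This Python sketch only illustrates the documented precedence; FastForward's actual implementation may differ:

```python
import os

def discover_config(cli_path=None, env=os.environ):
    """Return the first config path that exists, following the
    documented lookup order. Illustrative sketch only."""
    candidates = [
        cli_path,                                          # 1. --config <path>
        env.get("FFWD_CONFIG"),                            # 2. $FFWD_CONFIG
        "./ffwd.yaml",                                     # 3. working directory
        os.path.expanduser("~/.config/ffwd/config.yaml"),  # 4. per-user config
        "/etc/ffwd/config.yaml",                           # 5. system-wide config
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```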

ff validate

Parses the YAML configuration file and validates it against the schema without starting any pipeline components.

ff validate --config pipeline.yaml

On success, validate exits with code 0 and prints:

ready: default
config ok: 1 pipeline(s)

ff dry-run

Builds all pipeline objects (inputs, SQL transforms, and outputs) without starting active IO workers. dry-run runs the same validation path as validate: both build full pipelines and compile SQL queries against the Arrow schema to catch column-name typos or type mismatches. The only difference is the success label: dry run ok instead of config ok.

ff dry-run --config pipeline.yaml

On success, prints:

ready: default
dry run ok: 1 pipeline(s)

ff send

Reads from standard input until EOF, processes it through the configured pipeline, drains the output buffers, and exits. Designed for piped shell workflows.

cat app.log | ff send --config destination.yaml --format json

Option        Description
--config      Destination YAML config file
--format      Input format: auto, cri, json, raw (default: auto)
--service     Set service.name on emitted records
--resource    Add or override a resource attribute, repeatable: --resource key=value

See Inputs for stdin and piping patterns.
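The repeatable --resource key=value pattern is a common CLI convention. The sketch below is hypothetical, using Python's argparse rather than ff's actual parser, to show how such flags accumulate into an attribute map:

```python
import argparse

# Hypothetical parser sketch; the flag semantics mirror the table
# above, but ff's real argument handling may differ.
parser = argparse.ArgumentParser(prog="resource-sketch")
parser.add_argument("--resource", action="append", default=[],
                    metavar="KEY=VALUE")

args = parser.parse_args([
    "--resource", "env=prod",
    "--resource", "region=eu-west-1",
])

# Building a dict means a later flag overrides an earlier one for the
# same key, matching typical "add or override" semantics.
resources = dict(item.split("=", 1) for item in args.resource)
print(resources)  # {'env': 'prod', 'region': 'eu-west-1'}
```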

ff generate-json

Generates synthetic JSON log lines for testing. Creates a file of stable, parseable records that exercise all column types.

ff generate-json 10000 logs.json

This creates 10,000 JSON log lines with fields: timestamp, level, message, duration_ms, request_id, service, status.

Argument       Description
NUM_LINES      Number of log lines to generate
OUTPUT_FILE    Path to write the generated file
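To see what records of that shape look like without the CLI, here is a rough Python equivalent; the field names come from the docs above, but the exact value formats are assumptions for illustration:

```python
import json
import random
import uuid
from datetime import datetime, timezone

LEVELS = ["DEBUG", "INFO", "WARN", "ERROR"]

def synthetic_line(i):
    # Field names match the generate-json docs; the value formats
    # here are illustrative, not ff's exact output.
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": random.choice(LEVELS),
        "message": f"request {i} handled",
        "duration_ms": round(random.uniform(0.1, 250.0), 3),
        "request_id": str(uuid.uuid4()),
        "service": "checkout",
        "status": random.choice([200, 404, 500]),
    })

# Equivalent in spirit to `ff generate-json 100 logs.json`:
with open("logs.json", "w") as out:
    for i in range(100):
        out.write(synthetic_line(i) + "\n")
```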

ff blast

Generates synthetic log data and blasts it into a destination sink to test throughput and network performance. This command uses the same core pipeline runtime but bypasses reading from disk or the network.

ff blast --destination otlp --endpoint http://127.0.0.1:4318/v1/logs

Option                 Description
--destination          Sink type (required in non-interactive mode, prompted otherwise): otlp (alias: elasticsearch_otlp), elasticsearch (alias: elasticsearch_bulk), loki, arrow_ipc, udp, tcp, null
--endpoint             Destination URL/address (required for all destinations except null)
--workers              Worker threads (default: 2)
--batch-lines          Lines per batch (default: 5000)
--duration-secs        Stop automatically after N seconds (default: run until stopped)
--auth-bearer-token    Bearer token auth header
--auth-header          Extra header, repeatable: --auth-header 'Authorization=ApiKey xyz'
--diagnostics-addr     Diagnostics server bind address (default: 127.0.0.1:0)

ff devour

Runs a built-in receiver that accepts incoming log traffic and drops it. Use this to isolate sender performance or exercise upstream clients without a real backend.

ff devour # defaults to OTLP mode on 127.0.0.1:4318
ff devour --mode tcp --listen 127.0.0.1:15140
ff devour --mode elasticsearch_bulk --listen 127.0.0.1:9200

Unlike blast, devour does not require --destination — it accepts multiple built-in receiver modes.

Option                Description
--mode                Receiver type: otlp (default), http, elasticsearch_bulk, tcp, udp
--listen              Bind address (default depends on mode; OTLP → 127.0.0.1:4318)
--duration-secs       Stop automatically after N seconds (default: run until stopped)
--diagnostics-addr    Diagnostics server bind address (default: 127.0.0.1:0)

ff blackhole

A convenience alias for devour --mode otlp. An optional positional argument overrides the default bind address.

ff blackhole # listens on 127.0.0.1:4318
ff blackhole 0.0.0.0:14318 # custom bind address

blackhole and devour share the same OTLP-mode defaults. Use whichever reads more naturally in your workflow.
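Conceptually, a devour-style receiver in tcp mode is just a socket that reads and discards. Below is a minimal Python sketch of that idea only; it is not FastForward's implementation, which also speaks OTLP and Elasticsearch bulk:

```python
import socket
import threading

def drop_sink(host="127.0.0.1", port=0):
    """Accept TCP connections and discard every byte received.
    Illustrates the idea behind `ff devour --mode tcp`; port=0 asks
    the OS for a free port, like the 127.0.0.1:0 diagnostics default."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):  # read and drop
                    pass

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # (host, chosen_port)
```

Pointing any TCP sender at the returned address measures sender-side throughput with zero backend work on the receiving end.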

ff effective-config

Validates the config, expands environment variable placeholders, redacts secrets, and prints the final YAML. Use this to inspect how your config looks after expansion before deploying.

ff effective-config --config pipeline.yaml
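The expand-then-redact behavior can be pictured with a small Python sketch. The ${VAR} placeholder syntax and the key-based redaction rule below are assumptions for illustration, not ff's documented semantics:

```python
import os
import re

# Assumed heuristic: any key containing one of these words is a secret.
SECRET_KEYS = re.compile(r"(token|password|secret|key)", re.I)

def expand_and_redact(yaml_text, env=os.environ):
    """Illustrative sketch of the two steps effective-config performs:
    expand ${VAR} placeholders, then redact values on secret-looking
    keys. Not FastForward's actual rules."""
    expanded = re.sub(r"\$\{(\w+)\}",
                      lambda m: env.get(m.group(1), ""), yaml_text)
    redacted = []
    for line in expanded.splitlines():
        if ":" in line and SECRET_KEYS.search(line.split(":", 1)[0]):
            redacted.append(line.split(":", 1)[0] + ": <redacted>")
        else:
            redacted.append(line)
    return "\n".join(redacted)

print(expand_and_redact("endpoint: ${OTLP_URL}\napi_token: abc123",
                        env={"OTLP_URL": "http://localhost:4318"}))
```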

ff init

Creates a starter ffwd.yaml in the current directory with a basic file → stdout pipeline.

ff init

The generated config includes comments pointing to the next steps. init refuses to overwrite an existing file.

ff wizard

An interactive, step-by-step config builder. Choose a use-case preset or assemble your own input and output.

ff wizard

wizard requires an interactive terminal (stdin and stdout must both be TTYs). It validates the generated config before writing, so bad column names or type mismatches are caught before you run.
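The TTY requirement amounts to a precondition check like the following Python sketch (an illustration of the check, not wizard's actual code):

```python
import sys

def require_interactive_terminal():
    # wizard-style precondition: both stdin and stdout must be TTYs
    # before any interactive prompting starts.
    if not (sys.stdin.isatty() and sys.stdout.isatty()):
        raise SystemExit("an interactive terminal is required")
```

This is why `echo | ff wizard` or running wizard with redirected output fails up front instead of hanging on a prompt.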

ff completions

Prints shell completion scripts.

ff completions bash
ff completions zsh
ff completions fish
ff completions elvish
ff completions powershell
ff completions nushell

Redirect output into the appropriate location for your shell, e.g.:

ff completions bash >> ~/.bashrc

All pipeline commands (run, validate, dry-run, effective-config) accept the same YAML config format:

pipelines:
  default:
    inputs:
      - type: file
        path: ./logs.json
        format: json
    transform: |
      SELECT * FROM logs WHERE level = 'ERROR'
    outputs:
      - type: stdout
        format: console

server:
  diagnostics: 127.0.0.1:9090

log_level: info

See the YAML configuration reference for all available input, transform, and output options.

ff --version # Print version and exit
ff --help # Show top-level help and exit

Run ff <command> --help for subcommand-specific options.