# Getting started
This guide walks through Station from first install to a production-ready setup with persistence, recurring jobs, multi-step pipelines, and lifecycle observers.
## Prerequisites
- Node.js 18 or later
- A package manager (pnpm, npm, or yarn)
- A TypeScript project configured for ES modules (`"type": "module"` in your package.json)
## 1. Install
```sh
pnpm add station-signal
```

`station-signal` re-exports `z` from Zod, so there is no need to install Zod separately.
## 2. Define a signal
A signal is a named, type-safe background job definition. It declares an input schema, execution constraints, and a handler function using a builder pattern. Signals are defined in their own files so the runner can auto-discover them.
```ts
// signals/send-email.ts
import { signal, z } from "station-signal";

export const sendEmail = signal("sendEmail")
  .input(
    z.object({
      to: z.string(),
      subject: z.string(),
      body: z.string(),
    }),
  )
  .timeout(30_000)
  .retries(2)
  .run(async (input) => {
    console.log(`Sending email to ${input.to}`);
    // Your email sending logic here
  });
```

### Builder methods
| Method | Description |
|---|---|
| `.input(schema)` | Zod schema for the job payload. Every `.trigger()` call is validated against this schema; if validation fails, the run never starts. |
| `.timeout(30_000)` | Maximum execution time in milliseconds. If the handler exceeds this duration, the run is killed and marked as timed out. Default: `300_000` (5 minutes). |
| `.retries(2)` | Number of retry attempts after the initial failure. A value of 2 means 3 total attempts (1 initial + 2 retries). Default: `0` (no retries). |
| `.run(handler)` | The handler function. Receives the validated input and runs in an isolated child process spawned by the runner. |
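The retry rule (1 initial attempt plus *n* retries) can be sketched as a plain wrapper. `runWithRetries` is illustrative only, not part of the station-signal API:

```ts
// Hypothetical sketch: retries(n) means 1 initial attempt + n retries.
async function runWithRetries<T>(
  handler: () => Promise<T>,
  retries: number,
): Promise<T> {
  let lastError: unknown;
  // Total attempts = 1 + retries.
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await handler();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Example: a handler that fails twice, then succeeds on the third attempt.
let attempts = 0;
const result = await runWithRetries(async () => {
  attempts++;
  if (attempts < 3) throw new Error("transient failure");
  return "ok";
}, 2);
```

With `retries(2)`, the failing handler above still succeeds on its third and final attempt.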
## 3. Create the runner
The runner is the process that polls for due jobs and spawns child processes to execute them. Point it at a directory of signal files and call start().
```ts
// runner.ts
import path from "node:path";
import { SignalRunner } from "station-signal";

const runner = new SignalRunner({
  signalsDir: path.join(import.meta.dirname, "signals"),
});

runner.start();
```

| Option | Description |
|---|---|
| `signalsDir` | Path to a directory of signal files. The runner auto-discovers every `.ts` or `.js` file that exports a signal and registers it at startup. |
| `runner.start()` | Begins the poll loop. The runner checks for due entries every second by default; configurable via the `pollIntervalMs` option (in milliseconds). |
By default, the runner uses an in-memory adapter. All jobs are lost on restart. See step 5 below for production-grade persistence.
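Conceptually, each poll tick collects the queued runs whose due time has passed. The sketch below is a hypothetical model of that loop over an in-memory queue; none of these names come from station-signal:

```ts
// Illustrative model of one poll tick over an in-memory queue.
interface QueuedRun {
  id: string;
  signalName: string;
  runAt: number; // epoch milliseconds when the run becomes due
}

const queue: QueuedRun[] = [];

// One tick: take every run that is due and remove it from the queue.
function pollTick(now: number): QueuedRun[] {
  const due = queue.filter((run) => run.runAt <= now);
  for (const run of due) queue.splice(queue.indexOf(run), 1);
  return due;
}

queue.push({ id: "r1", signalName: "sendEmail", runAt: 1_000 });
queue.push({ id: "r2", signalName: "sendEmail", runAt: 5_000 });

const due = pollTick(2_000); // only r1 is due at t=2000; r2 stays queued
```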
## 4. Trigger a signal
```ts
import { sendEmail } from "./signals/send-email.js";

const runId = await sendEmail.trigger({
  to: "user@example.com",
  subject: "Welcome",
  body: "Thanks for signing up.",
});

console.log(`Enqueued run: ${runId}`);
```

| Behavior | Detail |
|---|---|
| Validation | `.trigger()` validates the input against the Zod schema before enqueuing. Invalid input throws immediately. |
| Return value | Returns a run ID (string) immediately. The call does not wait for execution. |
| Execution | The runner picks up the job on its next poll tick and spawns a child process to run the handler. |
The `.js` extension in the import path is required for ESM resolution, even when your source files are `.ts`.
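The trigger contract (validate first, enqueue, return an id without waiting) can be modeled in a few lines. Everything below is a hypothetical sketch, not station-signal internals:

```ts
// Sketch of the trigger contract: validate, enqueue, return an id immediately.
type Job = { id: string; payload: { to: string } };

const pending: Job[] = [];
let nextId = 0;

function trigger(payload: { to: string }): string {
  // Validation failure throws before anything is enqueued.
  if (!payload.to.includes("@")) throw new Error("invalid payload");
  const id = `run_${++nextId}`;
  pending.push({ id, payload });
  return id; // returned immediately; execution happens later, in the runner
}

const runId = trigger({ to: "user@example.com" });
```

Note that a validation error leaves the queue untouched, so callers can safely catch and report it.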
## 5. Add persistence (SQLite)
The default in-memory adapter loses all jobs on process restart. For anything beyond local development, use the SQLite adapter.
```sh
pnpm add station-adapter-sqlite
```

If pnpm blocks the `better-sqlite3` build script, add `{ "pnpm": { "onlyBuiltDependencies": ["better-sqlite3"] } }` to your package.json and re-run `pnpm install`. See Adapters for details.

```ts
// runner.ts
import path from "node:path";
import { SignalRunner } from "station-signal";
import { SqliteAdapter } from "station-adapter-sqlite";

const runner = new SignalRunner({
  signalsDir: path.join(import.meta.dirname, "signals"),
  adapter: new SqliteAdapter({
    dbPath: path.join(import.meta.dirname, "jobs.db"),
  }),
});

runner.start();
```

| Detail | Description |
|---|---|
| Engine | Uses `better-sqlite3` under the hood with WAL mode enabled for concurrent reads. |
| Setup | Tables and indexes are created automatically on first run. No migrations needed. |
| Database file | Created at the path you provide. Use an absolute path to avoid ambiguity. |
### Shared adapter for separate processes
When triggers happen in a different process than the runner (common in web servers), both processes need an adapter pointing at the same database. Use the `configure()` function to set a global default.
```ts
// config.ts
import { configure } from "station-signal";
import { SqliteAdapter } from "station-adapter-sqlite";

configure({
  adapter: new SqliteAdapter({ dbPath: "./jobs.db" }),
});
```

Import the config module before any signal imports in your trigger process:
```ts
// In your web server or trigger process
import "./config.js"; // Run configure() first
import { sendEmail } from "./signals/send-email.js";

await sendEmail.trigger({
  to: "user@example.com",
  subject: "Order confirmation",
  body: "Your order has been placed.",
});
```

## 6. Recurring signals
Signals can run on a fixed interval. The runner handles scheduling, re-enqueuing, and retry logic automatically.
```ts
// signals/health-check.ts
import { signal } from "station-signal";

export const healthCheck = signal("healthCheck")
  .every("5m")
  .run(async () => {
    const res = await fetch("https://api.example.com/health");
    if (!res.ok) throw new Error(`Health check failed: ${res.status}`);
  });
```

| Behavior | Detail |
|---|---|
| Intervals | `.every()` accepts interval strings: `"30s"`, `"5m"`, `"1h"`, `"1d"`. |
| Scheduling | The runner automatically schedules the first execution at startup and re-enqueues after each completion. |
| Input | No input schema is needed for recurring signals. If your recurring signal requires input, chain `.withInput(data)` to provide a default payload. |
| Failures | If a recurring signal fails, retry rules apply. After all attempts are exhausted, it re-enqueues for the next interval. |
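The interval strings above follow a simple `<number><unit>` shape. A hypothetical parser (not part of station-signal) shows how they map to milliseconds:

```ts
// Illustrative: map interval strings like "30s" / "5m" / "1h" / "1d" to ms.
const UNIT_MS: Record<string, number> = {
  s: 1_000,       // seconds
  m: 60_000,      // minutes
  h: 3_600_000,   // hours
  d: 86_400_000,  // days
};

function parseInterval(interval: string): number {
  const match = /^(\d+)([smhd])$/.exec(interval);
  if (!match) throw new Error(`Invalid interval: ${interval}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}
```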
## 7. Multi-step signals
For pipelines where each stage transforms data for the next, use steps instead of a single handler.
```ts
// signals/process-order.ts
import { signal, z } from "station-signal";

export const processOrder = signal("processOrder")
  .input(z.object({ orderId: z.string(), amount: z.number() }))
  .step("validate", async (input) => {
    if (input.amount <= 0) throw new Error("Invalid amount");
    return { ...input, validated: true };
  })
  .step("charge", async (prev) => {
    const chargeId = await payments.charge(prev.amount);
    return { orderId: prev.orderId, chargeId };
  })
  .step("notify", async (prev) => {
    await notify(`Order ${prev.orderId} charged: ${prev.chargeId}`);
  })
  .build();
```

| Behavior | Detail |
|---|---|
| Data flow | Each `.step()` receives the return value of the previous step as its input. The first step receives the validated signal input. |
| Execution | Steps run sequentially within a single child process. |
| Failure | If any step throws, the entire run fails and retries from the beginning (if retries are configured). |
| Finalization | Use `.build()` instead of `.run()` when defining steps. |
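The data flow in the table above amounts to threading a value through a list of async functions. A minimal sketch, with names that are purely illustrative:

```ts
// Illustrative step-chaining: each step receives the previous step's return.
type Step = (input: any) => Promise<any>;

async function runSteps(initial: unknown, steps: Step[]): Promise<unknown> {
  let value: unknown = initial;
  for (const step of steps) {
    // A throw here fails the whole run; a retry restarts from step one.
    value = await step(value);
  }
  return value;
}

// Two-step pipeline: double the amount, then format a summary string.
const out = await runSteps({ amount: 2 }, [
  async (input) => ({ ...input, doubled: input.amount * 2 }),
  async (prev) => `total: ${prev.doubled}`,
]);
```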
## 8. Subscribers
Subscribers observe the signal lifecycle. Use them for logging, metrics, alerting, or any side effect that should not live inside a handler.
```ts
import { SignalRunner, ConsoleSubscriber } from "station-signal";

const runner = new SignalRunner({
  signalsDir: "./signals",
  subscribers: [
    new ConsoleSubscriber(), // Built-in: logs all events to stdout
    {
      onRunStarted({ run }) {
        metrics.increment("signal.started", { name: run.signalName });
      },
      onRunCompleted({ run }) {
        metrics.increment("signal.completed", { name: run.signalName });
      },
      onRunFailed({ run, error }) {
        alerting.send(`Signal ${run.signalName} failed: ${error}`);
      },
    },
  ],
});
```

| Event | Description |
|---|---|
| `onRunDispatched` | A run was picked up from the queue and dispatched for execution. |
| `onRunStarted` | A child process began executing the handler. |
| `onRunCompleted` | The handler finished successfully. |
| `onRunFailed` | The handler threw an error (after all retries were exhausted). |
| `onRunRetry` | A failed run is being retried. |
| `onRunTimeout` | The handler exceeded its timeout and was killed. |
All subscriber methods are optional; implement only the events you care about. `ConsoleSubscriber` is a built-in subscriber that logs every event to stdout.
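Optional subscriber methods suggest a dispatch that calls a hook only on subscribers that implement it. A hypothetical sketch (the `emit` helper is not part of station-signal):

```ts
// Illustrative dispatch over subscribers with optional lifecycle hooks.
interface Subscriber {
  onRunStarted?(event: { run: { signalName: string } }): void;
  onRunCompleted?(event: { run: { signalName: string } }): void;
}

function emit(
  subscribers: Subscriber[],
  hook: keyof Subscriber,
  event: { run: { signalName: string } },
): void {
  for (const sub of subscribers) {
    sub[hook]?.(event); // optional chaining skips subscribers without this hook
  }
}

const log: string[] = [];
emit(
  [{ onRunStarted: (e) => log.push(`started:${e.run.signalName}`) }, {}],
  "onRunStarted",
  { run: { signalName: "sendEmail" } },
);
```

The empty object in the subscriber list shows that omitting a hook is safe: it is simply skipped.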
## Next steps
| Resource | Description |
|---|---|
| Signals API | Full builder reference, runner options, adapter interface. |
| Broadcasts | Chain signals into DAG workflows with fan-out and fan-in. |
| Adapters | SQLite adapter details and custom adapter interface. |
| Station | Real-time monitoring dashboard for signals and broadcasts. |
| Examples | Complete working examples covering common patterns. |