For your AI agents, emails, payments, and more

A simple background
jobs framework.

Background jobs usually mean either a Redis cluster or another cloud bill. Station is an npm package. Install it, define your jobs in TypeScript, run them on your existing infrastructure. Retries, scheduling, and persistence included.

// 00

A love letter to
everything you automate.

Your AI agents that run on a schedule, retry on failure, and report back when they're done.
Your emails that go out in the background without blocking your request handler.
Your payments that process reliably with retries, even when a provider hiccups.
Your reports that generate overnight and land in an inbox by morning.
Your webhooks that fan out to downstream services without you babysitting the queue.
01 Simple code
Define a signal in TypeScript. Schema in, handler out. That's it.

02 Simple deployment
Runs in your process, on your servers. No Redis. No separate service.

03 Simple monitoring
Every run recorded. Station dashboard included. One command to start.
// 01

What you get.

Everything you need for production background jobs. Nothing you don't.

Scheduling & triggers

Interval-based scheduling with human-readable strings — '5m', '1h', '1d'. Trigger jobs on-demand with .trigger() or let them run on a schedule automatically.
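How Station parses these strings is internal to the library, but the mapping is easy to picture. A rough sketch with a hypothetical helper (not Station's API), assuming the usual s/m/h/d units:

```typescript
// Hypothetical sketch: convert a human-readable interval ("5m", "1h", "1d")
// into milliseconds. Station does this internally; exact behavior may differ.
const UNIT_MS: Record<string, number> = {
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

function parseInterval(interval: string): number {
  const match = /^(\d+)([smhd])$/.exec(interval);
  if (!match) throw new Error(`Unrecognized interval: ${interval}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}
```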

Run history

Every execution recorded with input, output, errors, timing, and attempt count. Query through the adapter or browse with Station's monitoring dashboard.

Automatic retries

Per-signal retry count with exponential backoff. .retries(3) gives 4 total attempts. Failed jobs re-enqueue automatically without intervention.
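Station's exact backoff constants aren't specified here, but the arithmetic is simple. Assuming a base delay of one second that doubles after each failed attempt (an illustrative assumption, not the library's documented schedule):

```typescript
// Hypothetical exponential-backoff sketch: the delay doubles after each
// failed attempt. Station's actual base delay and cap may differ.
function backoffDelayMs(attempt: number, baseMs = 1_000): number {
  return baseMs * 2 ** attempt;
}

// .retries(3) means 4 total attempts: the first run plus 3 retries.
function totalAttempts(retries: number): number {
  return retries + 1;
}
```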

Type-safe inputs

Zod schemas validate every trigger payload before it enters the queue. TypeScript infers handler argument types from the schema. Invalid data never reaches your handler.
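The pattern is validate-at-the-boundary: the payload is checked at trigger time, before it enters the queue. A simplified sketch in plain TypeScript, with a hand-rolled validator standing in for the Zod schema Station actually uses:

```typescript
// Simplified stand-in for a Zod schema: validate the payload before it
// is enqueued, so the handler never receives malformed data.
interface LoveLetterInput {
  to: string;
}

function parseLoveLetterInput(raw: unknown): LoveLetterInput {
  if (
    typeof raw !== "object" || raw === null ||
    typeof (raw as { to?: unknown }).to !== "string"
  ) {
    throw new Error("Invalid payload: expected { to: string }");
  }
  return { to: (raw as { to: string }).to };
}

// Hypothetical trigger boundary: validation happens here, before the
// job ever reaches the queue or the handler.
function trigger(raw: unknown): LoveLetterInput {
  return parseLoveLetterInput(raw);
}
```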

Concurrency limits

Global concurrency cap via maxConcurrent. The runner limits how many signals execute in parallel and queues the overflow.
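Such a limiter behaves like a counting semaphore: up to the cap, tasks start immediately; beyond it, they wait in a FIFO queue until a slot frees up. A minimal sketch of that mechanism (illustrative, not Station's internals):

```typescript
// Minimal concurrency-limiter sketch: at most `max` tasks run at once;
// additional tasks wait in a FIFO queue until a running task finishes.
class Limiter {
  active = 0;
  private queue: Array<() => void> = [];

  constructor(private max: number) {}

  get queued(): number {
    return this.queue.length;
  }

  run<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const start = () => {
        this.active++;
        task()
          .then(resolve, reject)
          .finally(() => {
            this.active--;
            this.queue.shift()?.(); // promote the next queued task
          });
      };
      if (this.active < this.max) {
        start();
      } else {
        this.queue.push(start);
      }
    });
  }
}
```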

Workflow DAGs

Chain signals into directed acyclic graphs with broadcasts. Fan-out to parallel nodes, fan-in with data aggregation, conditional execution via guard functions.
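A DAG run resolves nodes in dependency order: fan-out nodes start once their parent finishes, and a fan-in node waits for all of its parents. A minimal topological-order sketch of that idea (illustrative only; Station's broadcast API is not shown here):

```typescript
// Illustrative DAG sketch: compute an execution order in which every
// node runs only after all of its dependencies. Not Station's API.
type Dag = Record<string, string[]>; // node -> list of dependencies

function executionOrder(dag: Dag): string[] {
  const order: string[] = [];
  const visiting = new Set<string>();
  const done = new Set<string>();

  const visit = (node: string) => {
    if (done.has(node)) return;
    if (visiting.has(node)) throw new Error(`Cycle at ${node}`);
    visiting.add(node);
    for (const dep of dag[node] ?? []) visit(dep);
    visiting.delete(node);
    done.add(node);
    order.push(node);
  };

  for (const node of Object.keys(dag)) visit(node);
  return order;
}

// Fan-out from "fetch" to two parallel nodes, fan-in at "aggregate".
const dag: Dag = {
  fetch: [],
  enrichA: ["fetch"],
  enrichB: ["fetch"],
  aggregate: ["enrichA", "enrichB"],
};
```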

// 02

Define it in TypeScript.
Run it anywhere.

A signal is a background job definition — input schema, handler function, execution constraints. Define them in your codebase. The runner auto-discovers signal files, handles scheduling, retries, timeouts, and concurrency. No config files. No separate service.

View documentation →
love-letter.ts
import { signal, z } from "station-signal"

export const loveLetter = signal("loveLetter")
  .input(z.object({ to: z.string() }))
  .every("1d")
  .retries(2)

  // Step 1: compose the letter
  .step("compose", async (input) => {
    const letter = await ai.generate(
      "To all the jobs I love..."
    )
    return { to: input.to, letter }
  })

  // Step 2: send it
  .step("send", async (prev) => {
    await mailer.send({ to: prev.to, body: prev.letter })
    return prev
  })

  // Step 3: tip a dollar
  .step("tip", async (prev) => {
    await wallet.send(prev.to, 1.00)
  })

  .build()
// 03

Why another
background jobs library?

Existing solutions work. They also come with trade-offs Station doesn't.

Self-hosted queues
Bull, BullMQ, Agenda
  • Requires Redis or MongoDB
  • Docker and ops overhead
  • Complex configuration
  • Full control over execution
Managed services
Trigger.dev, Inngest
  • Hosted infrastructure
  • Additional cloud bill
  • Data on third-party servers
  • Vendor-specific APIs
This library
Station
  • npm install, done
  • SQLite persistence (or in-memory)
  • Runs in your process, on your servers
  • Zero external dependencies
  • Full TypeScript with Zod validation
  • Same reliability: retries, timeouts, concurrency
// Get started

Five minutes to your first signal.

Install the package, define a signal, start the runner. That's the entire setup.

pnpm add station-signal
Read the guide