
Quickstart: replay() Review and Workflow Governance

Get replay() running in under 5 minutes. The fastest path starts with zero-config governance review. If you need runtime blocking for a structured workflow today, skip ahead to the manual-contract section below.


Prerequisites

  • Node.js 20+
  • An OpenAI or Anthropic API key
  • npm or yarn

1. Install

npm install @vesanor/replay openai

Anthropic works too. Install @anthropic-ai/sdk instead of openai — the SDK detects the provider automatically.
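One way to picture auto-detection: the two SDKs expose different client surfaces, so the provider can be inferred by duck typing. A conceptual sketch only — detectProvider is made up here and is not the library's actual logic:

```typescript
// Conceptual sketch of provider detection (not @vesanor/replay's real code).
// OpenAI clients expose client.chat.completions; Anthropic clients expose
// client.messages.
function detectProvider(client: any): "openai" | "anthropic" | "unknown" {
  if (client?.chat?.completions) return "openai";
  if (client?.messages?.create) return "anthropic";
  return "unknown";
}
```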


2. Wrap your client (zero-config review)

The fastest path needs no local contracts and no local YAML:

import OpenAI from "openai";
import { replay } from "@vesanor/replay";

const client = new OpenAI();

const session = replay(client, {
  apiKey: process.env.VESANOR_API_KEY,
});

const response = await session.client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: myToolDefinitions,
});

Run your tests. Open the dashboard. Review the draft governance plan. Click Approve to freeze the current server-side snapshot.

Today this zero-config path is pass-through capture and review. It does not yet block tool calls locally in the SDK.

See Zero-Config Governance for the full experience.


What just happened

When you called session.client.chat.completions.create(), the zero-config governance flow ran:

  1. Fetch — replay() looked up governance state for the agent
  2. Pass through — the SDK executed the LLM call without blocking
  3. Capture — the request, response, tool definitions, and usage were buffered for upload
  4. Infer — the server built or updated the typed review plan from schemas, descriptions, and observed sessions
  5. Review — the dashboard exposed tool rules, workflow coverage, limits, gaps, and checkpoint suggestions for explicit approval
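Steps 2 and 3 above can be sketched in a few lines. This is a conceptual illustration, not the SDK's internals — wrapPassThrough and Capture are invented names:

```typescript
// Conceptual sketch of pass-through capture (not the SDK's actual code):
// forward the call unchanged, then buffer a record for later upload.
type Capture = { request: unknown; response: unknown; at: number };

function wrapPassThrough<T extends (req: any) => Promise<any>>(
  call: T,
  buffer: Capture[],
): T {
  return (async (req: any) => {
    const response = await call(req); // pass through — never blocks
    buffer.push({ request: req, response, at: Date.now() }); // capture
    return response;
  }) as T;
}
```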

If you want illegal tool calls to be blocked at runtime, use manual contracts with contractsDir.

Replay is strongest here on structured workflows with clear stages, irreversible actions, and cross-step rules.


Runtime enforcement today (manual contracts)

For runtime blocking today, write YAML contracts and pass contractsDir.

Create a contract

Create a file at contracts/get_weather.yaml:

tool: get_weather
side_effect: read
evidence_class: local_transaction
commit_requirement: acknowledged

timeouts: { total_ms: 30000 }
retries: { max_attempts: 1, retry_on: [] }
rate_limits: { on_429: { respect_retry_after: true, max_sleep_seconds: 60 } }

assertions:
  input_invariants: []
  output_invariants: []

golden_cases: []
allowed_errors: []

# Validate that the location argument is a string
argument_value_invariants:
  - path: "$.location"
    type: string

This contract says: get_weather is a read-only tool. The location argument must be a string.
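What argument_value_invariants expresses can be written as a plain function. A minimal sketch: the path handling below covers only the simple "$.key" form used in this contract, and checkArgInvariants is an illustration, not the SDK's validator:

```typescript
// Mirror of the contract's argument_value_invariants as a plain check.
// Only handles flat "$.key" paths — an illustration, not the real validator.
type ArgInvariant = { path: string; type: "string" | "number" | "boolean" };

function checkArgInvariants(
  args: Record<string, unknown>,
  invariants: ArgInvariant[],
): string[] {
  const errors: string[] = [];
  for (const inv of invariants) {
    const key = inv.path.replace(/^\$\./, ""); // "$.location" -> "location"
    if (typeof args[key] !== inv.type) {
      errors.push(`${inv.path}: expected ${inv.type}, got ${typeof args[key]}`);
    }
  }
  return errors;
}

// checkArgInvariants({ location: "Paris" }, [{ path: "$.location", type: "string" }])
// returns [] — the call would pass the contract.
```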

Wrap with contractsDir

import OpenAI from "openai";
import { replay } from "@vesanor/replay";

const client = new OpenAI();

// Wrap the client with replay() using local contracts
const session = replay(client, {
  contractsDir: "./contracts",
  agent: "weather-bot",
  mode: "enforce",
  gate: "reject_all",
});

// Define your tool
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
        },
        required: ["location"],
      },
    },
  },
];

// Use session.client — NOT the original client
const response = await session.client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools,
});

console.log(response.choices[0].message.tool_calls);
// The tool call passed contract validation

// Check session state
const state = session.getState();
console.log("Steps:", state.totalStepCount); // 1
console.log("Tool calls:", state.totalToolCalls); // 1

// Clean up
session.restore();

Every call through session.client is now validated against your contract.


Try breaking it

Add a second tool with no contract to see what happens:

const tools = [
  // ... get_weather (has a contract)
  {
    type: "function",
    function: {
      name: "delete_everything",
      description: "Delete all data",
      parameters: { type: "object", properties: {} },
    },
  },
];

By default, tools without a matching contract are blocked (unmatchedPolicy: "block"). The model won't even see delete_everything — it's removed during narrowing.
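Narrowing itself is easy to picture: filter the tool definitions down to the names that have contracts before the request is sent. A conceptual sketch — narrowTools is not the SDK's actual implementation:

```typescript
// Conceptual sketch of tool narrowing under unmatchedPolicy: "block" —
// tools without a matching contract never reach the model.
type ToolDef = { type: "function"; function: { name: string } };

function narrowTools(tools: ToolDef[], contractNames: Set<string>): ToolDef[] {
  return tools.filter((t) => contractNames.has(t.function.name));
}

// With contracts for only get_weather, delete_everything is dropped:
// narrowTools(tools, new Set(["get_weather"])) keeps just get_weather.
```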


What to add next

This quickstart uses the simplest possible setup. Here's what you can add:

| Feature | What it does | Guide |
| --- | --- | --- |
| Phases | Restrict which tools are available at each step | First Session Contract |
| Preconditions | Require step A before step B | Preconditions & Ordering |
| Session limits | Cap total steps, cost, or per-tool calls | Session Limits |
| Forbidden tools | Block a tool after it's been called once | Preconditions & Ordering |
| Kill switch | Emergency stop for runaway agents | Kill Switch |
| Server-backed state | Durable governed sessions with stronger evidence on the wrapped path | Govern Mode |

Anthropic example

The same code works with Anthropic — just swap the client:

import Anthropic from "@anthropic-ai/sdk";
import { replay } from "@vesanor/replay";

const client = new Anthropic();

const session = replay(client, {
  contractsDir: "./contracts",
  agent: "weather-bot",
  mode: "enforce",
});

const response = await session.client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      name: "get_weather",
      description: "Get the current weather for a location",
      input_schema: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
        },
        required: ["location"],
      },
    },
  ],
});

Same contracts, same enforcement, different provider.


Next steps