Execute API (/execute)

The primary entry point to the rtrvr.ai planner + tools engine. Send one JSON payload that can browse the web, load tabular data as in-memory sheets, call tools, and return structured results.

Base URL: https://api.rtrvr.ai

All external calls to /execute run in API mode. Auth and plan limits are enforced server-side.

Use your API key in the Authorization header:

Header
Authorization: Bearer rtrvr_your_api_key
Security: Always keep your key on the server side (e.g. backend, serverless). Never embed it in browser code or ship it to clients.
POST https://api.rtrvr.ai/execute

Send a single JSON payload describing what you want. The planner orchestrates browser tabs, tools, and in-memory sheets to get the job done.

cURL
curl -X POST https://api.rtrvr.ai/execute \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Summarize the main points of this page in 5 bullet points.",
    "urls": ["https://example.com/blog/ai-trends-2025"],
    "response": { "verbosity": "final" }
  }'
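The same call from TypeScript might look like the sketch below. The helper names are illustrative, not part of an official SDK; only `fetch` and the documented request shape are used.

```typescript
// Build the fetch options for an /execute call. Splitting this out keeps
// the request shape testable without touching the network.
function buildExecuteInit(apiKey: string, payload: object) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  };
}

// Send the request (server-side only; never expose the key to browsers).
async function execute(apiKey: string, payload: object): Promise<any> {
  const res = await fetch("https://api.rtrvr.ai/execute", buildExecuteInit(apiKey, payload));
  if (!res.ok) throw new Error(`Execute failed: ${res.status}`);
  return res.json();
}
```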

Internally, this maps to an execution trajectory. New requests get a new trajectoryId; continuations reuse it.

Trajectory & Phase

A trajectory is a stable ID for a workflow. Use it to group related phases (e.g. discovery → enrichment → reporting) and continuations.

  • Omit trajectoryId to start fresh.
  • Reuse the same trajectoryId with history.continue = true to continue.
  • phase (default 1) lets you structure long-running projects into multiple stages.
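A two-phase workflow might be structured like this (field names follow ApiExecuteRequest; the task strings are illustrative):

```typescript
// Phase 1: discovery. Omit trajectoryId so the server mints a new one.
const discovery = {
  input: "Find the top 10 AI infrastructure startups.",
  phase: 1,
};

// Phase 2: enrichment. Reuse the trajectoryId returned by phase 1 and
// set history.continue so the planner resumes prior state.
function enrichmentPayload(trajectoryId: string) {
  return {
    trajectoryId,
    phase: 2,
    input: "For each startup found earlier, add founding year and HQ city.",
    history: { continue: true },
  };
}
```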

Planner + Tools

You don't call tools directly. Instead, you describe the task and optionally configure which additional tool families to allow via tools.enableAdditionalTools. Support for tools.enableAdditionalTools in the public API is coming soon.

Under the hood, the planner can call tools like act_on_tab, crawl_and_extract_from_tab, sheets_workflow, create_sheet_from_data, and more. Only a subset (Docs, Slides, PDFs, Sheets, ask_user, etc.) is gated behind enableAdditionalTools to control cost and latency.

Tabular Inputs & In-Memory Sheets

Use dataInputs to attach CSV/TSV/JSON or text. The system:

  • Infers the format from extension or content type.
  • Parses header and row schema.
  • Creates an in-memory sheet (no Google Drive write) exposed to tools like sheets_workflow.
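For example, an inline CSV attachment might be built like this (a sketch; the CSV content and helper name are illustrative):

```typescript
// Wrap raw CSV text as an ApiTabularInput entry. The server infers the
// header/row schema and exposes the data as an in-memory sheet.
function inlineCsvInput(description: string, csv: string) {
  return { description, format: "csv" as const, inline: csv };
}

const payload = {
  input: "Enrich each company with its official website.",
  dataInputs: [inlineCsvInput("Companies to enrich", "company\nOpenAI\nAnthropic\n")],
};
```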

The full request shape is ApiExecuteRequest:

ApiExecuteRequest (conceptual)
type ApiVerbosity = 'final' | 'steps' | 'debug';

interface ApiExecuteRequest {
  // Core
  trajectoryId?: string;
  phase?: number;               // default: 1
  input: string;                // main user instruction
  urls?: string[];              // pages to open in browser
  schema?: Schema;              // optional: expected result shape (OpenAPI-style)

  // Extra data to load as sheets
  dataInputs?: ApiTabularInput[];

  // Per-request user settings override (advanced)
  settings?: Partial<UserSettings>;

  // Planner continuation
  history?: {
    continue?: boolean;
    previousSteps?: PlannerPreviousStep[];
    lastToolPreviousSteps?: ToolPreviousSteps;
  };

  // Tooling configuration
  tools?: ApiToolsConfig;

  // Response configuration
  response?: {
    verbosity?: ApiVerbosity;        // default 'final'
    inlineOutputMaxBytes?: number;   // snapshot beyond this size
  };

  // Internal / advanced
  options?: {
    skipToolsStorageLoad?: boolean;
  };
}

Core fields

input (string, required)

Natural-language task description; what you want the system to do.

urls (string[])

Optional list of URLs to open. The first real URL loads full content; others default to text-only for efficiency.

schema (Schema)

Optional OpenAPI-style JSON Schema describing the desired final JSON shape. Planner and tools will try to honor it.
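A request that pins the final shape might look like this (a sketch; the schema is an ordinary OpenAPI-style object schema, and the property names are illustrative):

```typescript
const request = {
  input: "Extract the product name and price from this page.",
  urls: ["https://example.com/product"],
  schema: {
    type: "object",
    properties: {
      name: { type: "string" },
      price: { type: "number" },
    },
    required: ["name", "price"],
  },
};
```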

trajectoryId (string)

Stable ID for a workflow. Omit to start a new trajectory; reuse to continue or add phases.

phase (number, default: 1)

Phase index within a trajectory. Use ≥2 for multi-stage workflows.

Tabular inputs (dataInputs)

dataInputs (ApiTabularInput[])

Optional list of tabular inputs to materialize as in-memory sheets.

dataInputs[].description (string)

Human-readable description. Used as sheet title in the UI.

dataInputs[].format ("text" | "markdown" | "csv" | "tsv" | "json" | "xlsx" | "parquet")

Optional explicit format. If omitted, inferred from file extension or content type.

dataInputs[].inline (string)

Raw content (CSV/TSV/JSON/text) embedded directly in the request.

dataInputs[].url (string)

HTTP(S) URL to fetch as a tabular source.

dataInputs[].storageRef (StorageReference)

Advanced: backend-managed GCS object reference when clients upload to storage directly.

Tools configuration (tools)

tools.enableAdditionalTools (string[])

Coming soon: optional list of higher-power tool families to enable for this request.

Allowed values: "ask_questions", "generate_docs", "generate_websites", "generate_pdfs", "pdf_filling", "generate_sheets"

Core tools (browser actions, extraction, sheets_workflow on in-memory sheets, etc.) are always enabled. Additional tools control Docs, Slides, PDFs, persistent Sheets, and explicit ask_user behavior. Support for tools.enableAdditionalTools will come soon.

Response configuration (response)

response.verbosity ("final" | "steps" | "debug", default: "final")

Controls how much detail you get back.

response.inlineOutputMaxBytes (number)

Hard cap (in bytes) for inline output blocks. Larger payloads are snapshot to storage and previewed.

History & continuation (history)

history.continue (boolean)

Signal that this call should continue a previous workflow state.

history.previousSteps (PlannerPreviousStep[])

Planner-internal state from previous runs. Returned in response.history for advanced clients.

history.lastToolPreviousSteps (ToolPreviousSteps)

Tool execution state for the last tool. Used for precise continuations.

Advanced options

settings (Partial<UserSettings>)

Per-request overrides for stored user settings (model, proxy, extraction config, etc.). Generally only needed from first-party or advanced SDKs.

options.skipToolsStorageLoad (boolean)

Internal optimization flag when all tools are provided directly. Most clients should omit.

Every call returns an ApiExecuteResponse. This gives you:

  • A high-level success flag and status reason.
  • Rich output blocks (text, JSON, optional per-tool results).
  • A convenience result view with text and json for the final answer.
  • Detailed usage and metadata for billing, observability, and continuation.
ApiExecuteResponse (conceptual)
interface ApiExecuteResponse {
  success: boolean;
  status: 'success' | 'error' | 'cancelled' | 'requires_input' | 'executing';

  trajectoryId: string;
  phase: number;

  // Rich output blocks
  output: ApiOutputBlock[];

  // Convenience view of final output
  result?: {
    text?: string;
    json?: any;
  };

  // Present when verbosity !== 'final'
  steps?: ApiStepSummary[];

  usage: {
    creditsUsed: number;
    creditsLeft?: number;
    currentCreditsUsed?: number;
    expiryReason?: string;
  };

  metadata: {
    taskRef: string;
    inlineOutputMaxBytes: number;
    toolsUsed: string[];
    outputTooLarge?: boolean;
    responseRef?: StorageReference;
  };

  warnings?: string[];
  error?: string;

  // Continuation payload for advanced clients
  history?: {
    previousSteps?: PlannerPreviousStep[];
    lastToolPreviousSteps?: ToolPreviousSteps;
  };
}

Output blocks & result

The low-level output is an array of blocks:

output[].type ("text" | "json" | "tool_result")

Block type: final text, JSON payload, or detailed tool result (debug mode).

output[].text (string)

Present when type = 'text'.

output[].data (any)

Present when type = 'json'.

output[].tool_result

When type = 'tool_result', includes stepId, toolName, args, output preview, thought, etc. Only present when verbosity = 'debug'.

result.text is the concatenation of all text blocks. result.json is either the single JSON block, or an array of JSON blocks if the workflow produced multiple.
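That derivation can be sketched as a small helper over the output blocks (assuming the block shapes described above; the text blocks are joined without a separator here, which is an assumption about the exact concatenation):

```typescript
interface ApiOutputBlock {
  type: "text" | "json" | "tool_result";
  text?: string;
  data?: any;
}

// Rebuild the convenience `result` view: text concatenates all text blocks;
// json is the single JSON block, or an array when there are several.
function deriveResult(output: ApiOutputBlock[]) {
  const texts = output.filter((b) => b.type === "text").map((b) => b.text ?? "");
  const jsons = output.filter((b) => b.type === "json").map((b) => b.data);
  return {
    text: texts.length ? texts.join("") : undefined,
    json: jsons.length === 0 ? undefined : jsons.length === 1 ? jsons[0] : jsons,
  };
}
```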

Steps & usage

When response.verbosity is "steps" or "debug", you also get steps: ApiStepSummary[]:

steps[].toolName (string)

Which tool ran in this step (e.g. 'sheets_workflow', 'act_on_tab').

steps[].status (ExecutionStatus)

success, error, executing, etc. per step.

steps[].duration (number)

Execution time in ms for this step (when available).

steps[].creditsUsed (number)

Credits consumed by this step, useful for analytics.

steps[].hasOutput (boolean)

Whether this step produced output or an outputRef.

steps[].hasSheets (boolean)

Whether this step produced or touched tabular data.

steps[].hasGeneratedContent (boolean)

Whether this step generated external content (docs, slides, etc.).

usage mirrors your credit accumulator and is ideal for per-customer dashboards and server-side cost control.
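For such dashboards, step-level credits can be rolled up from steps[] (a sketch; the field names follow ApiStepSummary above, and the helper name is illustrative):

```typescript
interface ApiStepSummary {
  toolName: string;
  creditsUsed?: number;
}

// Aggregate credits by tool, e.g. to see which tools dominate cost.
function creditsByTool(steps: ApiStepSummary[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const s of steps) {
    totals[s.toolName] = (totals[s.toolName] ?? 0) + (s.creditsUsed ?? 0);
  }
  return totals;
}
```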

Large output handling

When the full response exceeds inlineOutputMaxBytes:

  • The full response is snapshot to storage under metadata.responseRef.
  • The inline response is truncated to a safe preview.
  • metadata.outputTooLarge is set to true.

Client pattern: render the preview for UX, but fetch responseRef.downloadUrl from your backend when you need the full payload.
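That pattern might be sketched like this (assuming responseRef carries a downloadUrl as described above; the helper name is illustrative):

```typescript
interface ExecuteMetadata {
  outputTooLarge?: boolean;
  responseRef?: { downloadUrl?: string };
}

// Decide whether the inline payload is complete or a truncated preview.
// Returns the URL to fetch (via your backend) or null if inline is complete.
function needsFullFetch(metadata: ExecuteMetadata): string | null {
  if (metadata.outputTooLarge && metadata.responseRef?.downloadUrl) {
    return metadata.responseRef.downloadUrl;
  }
  return null;
}
```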

status & success

status"success" | "error" | "cancelled" | "requires_input" | "executing"

Execution-level status. "success" implies success = true; all other statuses imply success = false.

  • "success" – Final result is available in result and output.
  • "error" – Workflow failed. You still get usage, steps (if enabled), and partial output if any.
  • "cancelled" – Client abort or timeout. Credits are still accounted for the partial work completed.
  • "requires_input" – Planner paused because it needs human answers (ASK_USER).
Continuation pattern:
  1. When you see status: "requires_input", surface your own UI to collect missing info.
  2. On the next call, send the same trajectoryId with history.continue = true and the updated history object returned from the previous response.
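The loop can be sketched as follows; callExecute stands in for your HTTP call and collectAnswers for your own UI, so both are assumptions rather than part of the API:

```typescript
type ExecuteFn = (payload: any) => Promise<any>;

// Run until the workflow finishes, pausing for human input whenever the
// planner returns status "requires_input".
async function runToCompletion(
  callExecute: ExecuteFn,
  firstPayload: any,
  collectAnswers: (resp: any) => Promise<string>
): Promise<any> {
  let resp = await callExecute(firstPayload);
  while (resp.status === "requires_input") {
    const answers = await collectAnswers(resp);
    resp = await callExecute({
      trajectoryId: resp.trajectoryId,
      input: answers,
      history: { ...resp.history, continue: true },
    });
  }
  return resp;
}
```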
cURL
# 1. Minimal – summarize a page
curl -X POST https://api.rtrvr.ai/execute \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Summarize the main points of this page in 5 bullet points.",
    "urls": ["https://example.com/blog/ai-trends-2025"],
    "response": { "verbosity": "final" }
  }'

# 2. With dataInputs – enrich CSV row by row
# Note: support for tools.enableAdditionalTools will come soon.
curl -X POST https://api.rtrvr.ai/execute \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "For each company in the uploaded CSV, find the official website and a one-sentence description, then return a JSON array of {company, website, description}.",
    "dataInputs": [
      {
        "description": "Companies to enrich",
        "format": "csv",
        "inline": "company\nOpenAI\nDeepMind\nAnthropic\n"
      }
    ],
    "response": { "verbosity": "steps" },
    "tools": { "enableAdditionalTools": ["generate_sheets"] }
  }'

Ready to automate?

Join teams using rtrvr.ai to build playful, powerful web automation workflows.

API Playground

Test the Execute API interactively
