ProwlQA for Agents

Prowl QA is CLI-first and agent-native by design. One CLI command replaces ten tool calls. Structured JSON output, deterministic exit codes, and a fraction of the tokens — everything an AI agent needs to discover, run, and report on browser tests without wrangling automation APIs directly.

Made for agents, controlled by humans. You write the hunts, review the results, and set the guardrails. Agents handle the execution loop — discovering what to test, running the hunts, and reporting structured results back to your workflow.

Jump to the Library API for programmatic Node.js integration or Using Hub Templates for pre-built community hunt templates.

Why CLI-First, Not MCP

Most agent-tool integrations use MCP or similar RPC protocols that inject tool schemas into every conversation. Prowl QA takes a different approach: a plain CLI that agents call like any other shell command.

Zero Context Tax

  • No tool manifest. MCP injects its full schema into every conversation turn; the Prowl QA CLI is invisible until called.
  • Declarative, not imperative. A 10-step hunt is a single prowlqa run call — not 10+ individual tool invocations the agent has to reason through.
  • Pay-per-use. Zero tokens when idle. MCP manifests cost ~2-3k tokens just to sit in context.

Token Efficiency Comparison

The token values below are approximate estimates and will vary by LLM provider, model, prompt strategy, and conversation context.

|  | Prowl QA CLI | MCP-Based Tool |
| --- | --- | --- |
| Discovery | prowlqa list --json ~150 tokens | Tool manifest + list call ~2,500 tokens |
| 10-step test | prowlqa run <hunt> --json ~800-1,200 tokens | 10+ individual tool calls ~15,000-20,000 tokens |
| Result parsing | Exit code check (0 tokens) | Parse nested responses ~500-1,000 tokens |
| Idle cost | 0 tokens | ~2,000-3,000 tokens (manifest always loaded) |
| Total per cycle | ~1,250 tokens | ~18,000-23,000 tokens |

That's roughly a 15x difference per test cycle, and the gap compounds across every conversation.

Exit Codes Enable Branching

Agents don't need to parse response bodies to decide what to do next. A zero exit code means pass, non-zero means fail — standard shell semantics that every agent framework already understands.

prowlqa ci --json && echo "All hunts passed" || echo "Failures detected"
tip

Prowl QA is agent-ready out of the box. No MCP server, no tool manifest, no per-step reasoning. Install, run, parse.

Agent Workflow: Discover, Run, Report

Every agent integration follows the same three-step pattern — each step is a single CLI call with structured output:

1. Discover

List available hunts and their metadata:

prowlqa list --json
[
  { "name": "smoke-test", "description": "Validates homepage loads", "tags": ["smoke"] },
  { "name": "login-flow", "description": "Email/password login", "tags": ["auth", "critical"] },
  { "name": "checkout-flow", "description": "E-commerce checkout", "tags": ["e2e"] }
]

Agents can filter by tags to select the right hunts for the context (e.g., run only smoke hunts on every PR, run e2e hunts nightly).
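
For example, here is a minimal Node.js sketch of tag-based selection. It assumes the prowlqa binary is on PATH and that list --json returns the array shown above:

import { execFileSync } from "node:child_process";

// Discover hunts, then keep only those tagged "smoke".
const hunts = JSON.parse(
  execFileSync("prowlqa", ["list", "--json"], { encoding: "utf8" })
);
const smokeHunts = hunts.filter((h) => (h.tags ?? []).includes("smoke"));

console.log("Selected hunts:", smokeHunts.map((h) => h.name));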

2. Run

Execute a single hunt or all hunts:

# Single hunt
prowlqa run smoke-test --json

# All hunts (CI mode)
prowlqa ci --json

3. Report

Parse the structured JSON output and check exit codes:

| Exit Code | Meaning |
| --- | --- |
| 0 | All hunts passed |
| 1 | One or more hunts failed |
| 2 | No hunts found or all skipped |
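
A minimal Node.js sketch of exit-code branching, assuming the prowlqa binary is on PATH and the exit codes in the table above:

import { spawnSync } from "node:child_process";

// spawnSync does not throw on a non-zero exit code, so we can branch on it.
const run = spawnSync("prowlqa", ["ci", "--json"], { encoding: "utf8" });

if (run.status === 0) {
  console.log("All hunts passed");
} else if (run.status === 1) {
  console.error("One or more hunts failed");
} else if (run.status === 2) {
  console.warn("No hunts found or all skipped");
}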

CLI Integration (--json flags)

prowlqa run <hunt> --json

Returns a single hunt result:

{
  "status": "pass",
  "exitCode": 0,
  "startedAt": "2026-02-16T17:15:12.481Z",
  "hunt": "smoke-test",
  "targetUrl": "http://localhost:3000",
  "durationMs": 220,
  "steps": [
    { "type": "navigate", "status": "pass", "durationMs": 120 },
    { "type": "wait", "status": "pass", "durationMs": 85, "selector": "text=\"Welcome\"" },
    { "type": "assert", "status": "pass", "durationMs": 15, "value": "visible:Sign In" }
  ],
  "assertions": [
    { "type": "noConsoleErrors", "value": true, "status": "pass" },
    { "type": "noNetworkErrors", "value": true, "status": "pass" }
  ],
  "artifacts": {
    "summary": "summary.md",
    "screenshots": ["screenshots/final.png"],
    "console": "console.log"
  }
}
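
An agent that needs more than the exit code can pull the failing steps and their artifacts out of this result. A minimal sketch using the field names shown above, and assuming the JSON result is still written to stdout on failure:

import { spawnSync } from "node:child_process";

// Run one hunt; spawnSync does not throw on a non-zero exit code.
const run = spawnSync("prowlqa", ["run", "smoke-test", "--json"], { encoding: "utf8" });
const result = JSON.parse(run.stdout);

const failedSteps = result.steps.filter((s) => s.status === "fail");
if (run.status !== 0) {
  console.error(`Hunt "${result.hunt}" failed:`, failedSteps);
  console.error("Screenshots:", result.artifacts.screenshots);
}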

prowlqa ci --json

Returns a combined result for all hunts:

{
  "status": "fail",
  "startedAt": "2026-02-16T17:15:12.481Z",
  "durationMs": 4870,
  "totalHunts": 3,
  "passed": 2,
  "failed": 1,
  "skipped": 0,
  "hunts": [
    { "hunt": "smoke-test", "status": "pass", "durationMs": 220, "runDir": "/path/to/.prowlqa/runs/2026-02-16_17-15-15" },
    { "hunt": "login-flow", "status": "pass", "durationMs": 1450, "runDir": "/path/to/.prowlqa/runs/2026-02-16_17-15-20" },
    { "hunt": "checkout-flow", "status": "fail", "durationMs": 3200, "error": "Assertion failed: visible \"Order Confirmed\"" }
  ]
}
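
To turn the combined result into something an agent can post back to its workflow (for example a PR comment), iterate the hunts array. A minimal sketch using the fields shown above:

import { spawnSync } from "node:child_process";

// Run all hunts and summarize the combined result.
const run = spawnSync("prowlqa", ["ci", "--json"], { encoding: "utf8" });
const ci = JSON.parse(run.stdout);

const lines = ci.hunts.map((h) =>
  h.status === "pass"
    ? `PASS ${h.hunt} (${h.durationMs}ms)`
    : `FAIL ${h.hunt}: ${h.error ?? h.status}`
);
console.log(`${ci.passed}/${ci.totalHunts} hunts passed`);
console.log(lines.join("\n"));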

JUnit XML

For CI systems that consume JUnit XML (GitHub Actions, GitLab CI, Jenkins, CircleCI), pass --junit:

prowlqa ci --junit

Each hunt run directory gets its own junit.xml (for example .prowlqa/runs/<timestamp>/junit.xml).
In CI mode, prowlqa ci also writes a combined summary JSON file at .prowlqa/runs/ci-<timestamp>/ci-result.json.
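
If a CI step needs to collect those files explicitly, a minimal Node.js sketch can locate the newest CI run directory under the default .prowlqa/runs/ layout described above. It assumes ci-result.json mirrors the prowlqa ci --json output shown earlier:

import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Find the newest ci-<timestamp> run directory; timestamped names sort lexically.
const runsDir = ".prowlqa/runs";
const ciDirs = readdirSync(runsDir).filter((d) => d.startsWith("ci-")).sort();
const latest = ciDirs[ciDirs.length - 1];

// Read the combined summary written by `prowlqa ci`.
const summary = JSON.parse(readFileSync(join(runsDir, latest, "ci-result.json"), "utf8"));
console.log(`CI run ${latest}: ${summary.passed}/${summary.totalHunts} hunts passed`);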

Library API (Node.js)

For deeper integration, import Prowl QA as a library:

import { runHunt, loadConfig, listHunts, loadHunt } from "prowlqa";

Core Functions

| Function | Description |
| --- | --- |
| runHunt({ huntName, ...options }) | Run a hunt programmatically and get a typed RunResult |
| loadConfig(configPath?) | Load and validate config, returning { config, configPath, configDir } |
| loadHunt(huntName, configDir) | Load and parse a single hunt file from <configDir>/hunts/ |
| listHunts(configDir) | List available hunt names from <configDir>/hunts/ |
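
A minimal sketch that chains these calls to run every hunt in the project; the result shape matches the prowlqa run --json output shown earlier:

import { loadConfig, listHunts, runHunt } from "prowlqa";

const { configPath, configDir } = loadConfig();

// Run each hunt found in <configDir>/hunts/ and log its status.
for (const huntName of listHunts(configDir)) {
  const { result } = await runHunt({ huntName, configPath });
  console.log(`${huntName}: ${result.status} (${result.durationMs}ms)`);
}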

Schemas

Prowl QA exports Zod schemas for validation:

import { huntSchema, configSchema, stepSchema } from "prowlqa";

// Validate a hunt file an agent generated
const result = huntSchema.safeParse(agentGeneratedHunt);
if (!result.success) {
  console.error(result.error.issues);
}

Variable Interpolation

import { loadConfig, loadHunt, interpolateHunt } from "prowlqa";

const { configDir } = loadConfig();
const hunt = loadHunt("login-flow", configDir);
const { hunt: interpolated } = interpolateHunt(hunt, {
  ...process.env,
  EMAIL: "test@example.com",
  PASSWORD: process.env.TEST_PASSWORD ?? "",
});

Using Hub Templates

The Prowl QA Community Hub is a collection of pre-built hunt templates — common patterns like login flows, CRUD cycles, checkout funnels, and onboarding wizards.

note

The prowlqa hub CLI is coming in a future release. In the meantime, browse and download templates directly from the Prowl QA Community Hub.

Current Workflow (Today)

  1. Browse templates in the Prowl QA Community Hub.
  2. Copy the template hunt file into your local .prowlqa/hunts/ directory.
  3. Customize {{VAR}} placeholders for your environment.
  4. Execute the hunt with prowlqa run <hunt-name> --json.

Example hunt after copying and customizing:

name: login-flow
description: Email/password login with error handling
baseUrl: "{{BASE_URL}}"
steps:
  - navigate: "{{BASE_URL}}/login"
  - fill:
      selector: "[name='email']"
      value: "{{EMAIL}}"
  - fill:
      selector: "[name='password']"
      value: "{{PASSWORD}}"
  - click: "button[type='submit']"
  - wait: "Welcome"
Variables are resolved from your config, environment, or CLI flags. See Variables for interpolation precedence.

Library API for Templates

Use the library API to customize and run downloaded templates programmatically:

import { loadConfig, loadHunt, interpolateHunt, runHunt } from "prowlqa";

const { configPath, configDir } = loadConfig();

// Load the template
const hunt = loadHunt("login-flow", configDir);

// Preview interpolation with custom variables
const { hunt: preview } = interpolateHunt(hunt, {
  ...process.env,
  BASE_URL: "https://staging.example.com",
  EMAIL: "test@example.com",
  PASSWORD: process.env.TEST_PASSWORD ?? "",
});

console.log(preview.steps[0]);

// runHunt reads vars from process.env
process.env.BASE_URL = "https://staging.example.com";
process.env.EMAIL = "test@example.com";
process.env.PASSWORD = process.env.TEST_PASSWORD ?? "";

const { result } = await runHunt({
  huntName: "login-flow",
  configPath,
});

if (result.status === "pass") {
  console.log(`Login flow passed in ${result.durationMs}ms`);
} else {
  console.error("Login flow failed:", result.steps.filter((s) => s.status === "fail"));
}

note

Browse all templates and contribute your own at the Prowl QA Community Hub.