Collapse 6 Fetches Into 1 with Cap’n Web

The Problem: Waterfall Latency

Modern pages often make a bunch of small API calls in sequence: profile → friends → profiles → notifications → greeting → etc. Each call adds a round trip (RTT). On real networks, ~100–200 ms of RTT per call adds up fast: a chain of dependent requests can add hundreds of milliseconds to multiple seconds of latency before any server work. (Our demo uses six calls as an example, but the approach applies to any number of calls.)
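The waterfall shape looks like this in client code. A minimal sketch, with the network replaced by a timer so it runs anywhere; `fakeFetch`, `loadPage`, and the endpoint paths are illustrative stand-ins, not part of the demo:

```javascript
// Each simulated call costs one round trip.
const RTT_MS = 100;

// Stand-in for fetch(): resolves after one simulated RTT.
function fakeFetch(path) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ path, at: Date.now() }), RTT_MS)
  );
}

// Each await depends on the previous result, so the RTTs add up:
// roughly 3 x RTT_MS of latency before anything can render.
async function loadPage() {
  const profile = await fakeFetch("/profile");
  const friends = await fakeFetch("/friends?after=" + profile.at);
  const greeting = await fakeFetch("/greeting?for=" + friends.path);
  return [profile, friends, greeting];
}
```

Swap the timer for a real network and the chain behaves the same way: the total latency grows linearly with the number of dependent calls.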

What Cap’n Web Is (and Why It Helps)

Cap’n Web is a JavaScript‑native RPC library with two key ideas:

  • HTTP batch + pipelining: queue multiple calls and send them in one request. Promises act like stubs, so you can chain calls before the first resolves.
  • Capability‑based RPC: pass references (objects with methods) instead of tokens or IDs. Least‑privilege by default.

It also supports long‑lived WebSocket sessions and bidirectional calls, but this post focuses on the one‑RTT HTTP batch because it solves the waterfall pain directly.

Live Demo: 6 REST Calls vs 1 Batch

Try the side‑by‑side demo:

Open the Cap’n Web batching demo

The left button runs six sequential REST requests. The right button uses a single Cap’n Web HTTP batch that returns six results in one round trip. You can adjust a simulated per‑call server delay to see the effect.

How the Batch Works

Client code starts a batch session, adds calls, and awaits once. Under the hood, Cap’n Web sends one HTTP request carrying all calls, and the server executes them (with pipelining support).

import { newHttpBatchRpcSession } from "capnweb";

const api = newHttpBatchRpcSession("/api");

// Queue calls without awaiting yet
const a = api.a();
const b = api.b();
const c = api.c();
const d = api.d();
const e = api.e();
const f = api.f();

// Send once, await once
const results = await Promise.all([a, b, c, d, e, f]);

In a batch, you can even use RpcPromises as parameters to other calls (promise pipelining). That lets you express dependent operations without additional round trips.
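The idea behind pipelining can be illustrated without the library. This is a toy `pipeline` wrapper in plain promises, not Cap'n Web's actual implementation: it lets you chain a dependent call before the first promise has resolved, which is the same shape an RpcPromise gives you.

```javascript
// Toy illustration of promise pipelining: the wrapper queues a
// dependent method call immediately, without waiting for the
// underlying promise to resolve first.
function pipeline(promise) {
  return {
    call(method, ...args) {
      // Chain the dependent call onto the pending promise.
      return pipeline(promise.then((obj) => obj[method](...args)));
    },
    // Make the wrapper awaitable.
    then: (...handlers) => promise.then(...handlers),
  };
}

// A stand-in for a not-yet-resolved remote object.
const user = Promise.resolve({
  greet: (name) => `hello, ${name}`,
});

// The dependent call is expressed before `user` resolves:
pipeline(user).call("greet", "world").then(console.log); // logs "hello, world"
```

In the real library the chained call rides along in the same batch, so the dependency costs no extra round trip.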

Minimal Worker Behind the Demo

The Worker exposes two endpoint shapes:

  • /rest/1 … /rest/6: one JSON response per request (sequential path in the demo).
  • /api: Cap’n Web endpoint with six methods (a–f) served in one batch.

import { RpcTarget, newWorkersRpcResponse } from "capnweb";

// Delay helper used by every method to simulate server work.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

class DemoApi extends RpcTarget {
  constructor(delayMs) { super(); this.delayMs = delayMs; }
  async a() { await wait(this.delayMs); return { step: "a", at: Date.now() }; }
  async b() { await wait(this.delayMs); return { step: "b", at: Date.now() }; }
  async c() { await wait(this.delayMs); return { step: "c", at: Date.now() }; }
  async d() { await wait(this.delayMs); return { step: "d", at: Date.now() }; }
  async e() { await wait(this.delayMs); return { step: "e", at: Date.now() }; }
  async f() { await wait(this.delayMs); return { step: "f", at: Date.now() }; }
}

Both paths set Access-Control-Allow-Origin: * and Timing-Allow-Origin: * so the demo page can measure times with the Performance API.
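A sketch of that header and routing layer, assuming a Workers-style fetch handler; `handleRest` and `handleFetch` are hypothetical names, and the /api branch (which would hand off to newWorkersRpcResponse) is shown as a comment:

```javascript
// Headers both paths attach so the demo page can read cross-origin
// timing data via the Performance API.
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*",
  "Timing-Allow-Origin": "*",
};

// Sequential-path handler: one JSON body per request.
function handleRest(step) {
  return new Response(JSON.stringify({ step, at: Date.now() }), {
    headers: { ...CORS_HEADERS, "Content-Type": "application/json" },
  });
}

async function handleFetch(request) {
  const url = new URL(request.url);
  const m = url.pathname.match(/^\/rest\/(\d)$/);
  if (m) return handleRest(Number(m[1]));
  // In the real Worker, /api hands the request to Cap'n Web:
  // if (url.pathname === "/api")
  //   return newWorkersRpcResponse(request, new DemoApi(delayMs));
  return new Response("not found", { status: 404, headers: CORS_HEADERS });
}
```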

Expected Results

If RTT is ~120 ms and each call does ~120 ms of work:

  • 6 sequential REST calls: ~6 × (RTT + work) ≈ ~1440 ms
  • 1 batch (6 calls): ~1 × (RTT + work) ≈ ~240 ms
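The estimate reduces to a two-line model: the sequential path pays the per-call cost six times, while the batch pays it once.

```javascript
// Back-of-envelope latency model for the numbers above.
const rttMs = 120;  // one network round trip
const workMs = 120; // simulated server work per call

const sequentialMs = 6 * (rttMs + workMs); // six dependent REST calls
const batchMs = 1 * (rttMs + workMs);      // one batched request

console.log(sequentialMs, batchMs); // 1440 240
```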

Parallel REST improves over sequential, but it still issues multiple requests, contends with browser connection limits, and can suffer head‑of‑line blocking. The batch sends everything in one request.

Trade‑offs and When to Use It

  • Great for: page boot, dashboards, “fan‑out” reads, and chained calls (authenticate → me → greet).
  • Consider WebSocket: for sustained interactions or server‑initiated callbacks.
  • Error handling: await all promises you care about; un‑awaited calls won’t return results in the batch.
  • Security: capability‑based design reduces token sprawl and scopes authority to the object you hold.
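For the error-handling point, plain promises can stand in for RpcPromises: Promise.allSettled waits for every outcome instead of rejecting on the first failure, so one failed call in a batch doesn't mask the others. The call names below are illustrative.

```javascript
// Stand-ins for batched RpcPromises: one succeeds, one fails.
const calls = {
  profile: Promise.resolve({ name: "alice" }),
  friends: Promise.reject(new Error("service unavailable")),
};

// Collect every outcome so each result or error is handled.
Promise.allSettled(Object.values(calls)).then((settled) => {
  for (const outcome of settled) {
    if (outcome.status === "fulfilled") console.log("ok:", outcome.value);
    else console.error("failed:", outcome.reason.message);
  }
});
```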

Try It Yourself

  1. Open the live demo.
  2. Run the left (REST) and right (batch) buttons and compare request counts and total time.
  3. Adjust the delay to simulate heavier endpoints or slower networks.

To build your own, see Cloudflare’s post: Cap’n Web: a new JavaScript RPC library.