School of Quest
Next cohort · by application · A program · online · worldwide
§ 21 / Stack · Replay

Replay. The evidence layer for any session-grade workflow.

Capture user interactions, DOM mutations, and page state. Sanitise at capture time. Replay with scrub, annotation, and citation. Designed for the moments where someone needs to actually watch what happened, not just see a heatmap.

§ 01 / The problem we are solving

Most session recording is built for the wrong reader.

The session-recording market is dominated by analytics-first products. They are built so a growth team can watch ten thousand sessions in aggregate and learn that the sign-up button is small. That is a real problem. It is not the problem we built Replay to solve.

We built Replay because a facilitator needed to review what a student did during a 40-minute work session. Because someone needed to reproduce the exact state when a bug fired, not see a heatmap of where it happened. Because a reviewer needed to scrub through a colleague's submission and leave a note at 14:32. The reader is human, the volume is small, and the requirement is fidelity, not aggregation.

§ 02 / What it captures

Five primitives. One coherent recording.

DOM

Mutation stream

Every DOM mutation, captured incrementally, replayable as a faithful re-render. The text the user saw is the text the reviewer sees.
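A minimal sketch of what incremental replay means, assuming a hypothetical wire format (Replay's actual event schema is not shown here): mutations are applied in capture order until the scrub position, rebuilding the state the user saw at that instant.

```typescript
// Hypothetical mutation ops and node model, for illustration only.
type Mutation =
  | { t: number; op: "setText"; id: string; text: string }
  | { t: number; op: "setAttr"; id: string; name: string; value: string };

type NodeState = { text?: string; attrs: Record<string, string> };

// Rebuild DOM state at scrub position `upTo` by applying the ordered
// mutation stream incrementally — no snapshot of every frame needed.
function replayUntil(mutations: Mutation[], upTo: number): Map<string, NodeState> {
  const dom = new Map<string, NodeState>();
  for (const m of mutations) {
    if (m.t > upTo) break; // stream is time-ordered; stop at the scrub head
    const node = dom.get(m.id) ?? { attrs: {} };
    if (m.op === "setText") node.text = m.text;
    else node.attrs[m.name] = m.value;
    dom.set(m.id, node);
  }
  return dom;
}
```

Because the stream is incremental, scrubbing backwards is just replaying forwards from the start (or from the nearest checkpoint) to an earlier timestamp.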

Input

User actions

Pointer, keyboard, scroll, focus. Reconstructible into a timeline that maps to the DOM stream — not an opaque event log.
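Mapping input to the DOM stream works because both share one capture clock. A sketch, with hypothetical event shapes, of merging the two streams into a single reviewable timeline:

```typescript
// Illustrative event shape; the real capture format may differ.
type TimelineEvent = { t: number; kind: "input" | "dom"; detail: string };

// Each stream arrives in capture order; a merge by shared timestamp
// yields one timeline where a click sits next to the mutation it caused.
function mergeTimelines(inputs: TimelineEvent[], mutations: TimelineEvent[]): TimelineEvent[] {
  return [...inputs, ...mutations].sort((a, b) => a.t - b.t);
}
```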

State

Page lifecycle

Navigation, route transitions, tab visibility, idle gaps. The reviewer sees the shape of the session, not just the active moments.

Hierarchy

Nested sub-sessions

A long session can spawn child sessions for nested flows (e.g. a sub-task, a modal, a subform). The reviewer can collapse and expand without losing context.
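One way to picture the hierarchy, using a hypothetical session-tree shape: collapsing a child hides its events while the parent's timeline stays intact.

```typescript
// Hypothetical nested-session model, for illustration.
interface Session {
  id: string;
  start: number;
  end: number;
  children: Session[];
  collapsed?: boolean;
}

// Walk the tree, keeping the spans of every session whose parent chain
// is expanded. Collapsed subtrees vanish without shifting the timeline.
function visibleSpans(s: Session): Array<{ id: string; start: number; end: number }> {
  const spans = [{ id: s.id, start: s.start, end: s.end }];
  for (const child of s.children) {
    if (!child.collapsed) spans.push(...visibleSpans(child));
  }
  return spans;
}
```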

Sanitiser

Capture-time PII handling

Sensitive fields, credential inputs, and configurable selectors are scrubbed at capture, not at playback. The recording on disk has never seen the secret.
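A sketch of what "scrubbed at capture" means in practice, with illustrative field names and a hypothetical event shape: the value is masked before the event is ever serialised, so no stored artefact contains the secret.

```typescript
// Illustrative sensitive-field list; real deployments would use
// configurable selectors, as the card above describes.
const SENSITIVE = new Set(["password", "card-number", "ssn"]);

type CapturedInput = { field: string; value: string };

// Runs in the capture pipeline, before serialisation. The mask keeps
// the length so replay still shows typing rhythm, not the characters.
function sanitiseAtCapture(e: CapturedInput): CapturedInput {
  if (SENSITIVE.has(e.field)) {
    return { field: e.field, value: "•".repeat(e.value.length) };
  }
  return e;
}
```

The design choice matters: playback-time redaction means the secret exists on disk and a bug can leak it; capture-time redaction means it never did.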

§ 03 / Why it pairs with Observatory

Transcript on one side. Recording on the other.

Observatory indexes conversational sessions — the transcript, the synthesis, the highlights. Replay indexes the same session, but from the user-interface side: what was on the screen, what got clicked, where the cursor stalled. Together they are the two halves of a session-grade artefact.

The reviewer scrubs the recording. The transcript scrolls in lockstep. A highlight surfaces a moment in the conversation; the recording parks on the screen at that moment. The two layers cite each other. The reviewer leaves one note that anchors to both.
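The lockstep behaviour can be sketched simply, assuming both layers share one clock (hypothetical shapes, not the product's API): the transcript entry for any scrub position is a lookup.

```typescript
// Illustrative transcript shape: entries ordered by start time.
type TranscriptEntry = { start: number; text: string };

// The active entry at a scrub position is the last one that began at
// or before it — this is what keeps the transcript in lockstep.
function entryAt(transcript: TranscriptEntry[], scrubTime: number): TranscriptEntry | undefined {
  let active: TranscriptEntry | undefined;
  for (const e of transcript) {
    if (e.start <= scrubTime) active = e;
    else break;
  }
  return active;
}
```

The same lookup runs in reverse when a highlight is clicked: its timestamp parks the recording, and a note anchored to that timestamp lands on both layers.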

§ 04 / Who it is for

Workflows where one human reviews one session, carefully.

  • Coaching and facilitation. A coach reviewing a coachee's work session, with annotations.
  • Customer success and support. Reproducing a bug from a recording rather than a 14-message email thread.
  • Async review. A reviewer reading a colleague's submission alongside the recording of how it was made.
  • Onboarding and training. The recording becomes the asset; new hires watch the way the work was done, not just the output.
  • Regulated workflows. Where the audit question is “what did the operator do during the call?”, not “how many calls per hour.”

§ 05 / What it is not

Three honest non-promises.

Not analytics. No funnels, no heatmaps, no aggregate dashboards. If your reader is a growth team, you want LogRocket or FullStory.

Not ad-tech. We do not pixel-track. The recording belongs to your domain. The runtime is opt-in per session, not silently injected.

Not video. The recording is reconstructed from event streams, not pixel capture. Smaller on disk, easier to redact, faster to scrub. Pixel capture is a different product with different ethics.

Stop reviewing sessions from a transcript alone.

If the question is “what actually happened on screen,” one human watching a faithful recording beats a thousand analytics charts.