SoQ LLM
The cache contract your SDK is breaking. Most teams running multi-pass generation pay for the same context five times. We don’t. Five rules, no half-measures, applied against our own synthesis pipeline.
Read the discipline →
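The five rules live behind the link, but the failure they target is easy to sketch: multi-pass pipelines (draft, critique, revise) that re-serialise their shared context slightly differently on every pass, so provider-side prompt caching never sees a byte-identical prefix it can reuse. A minimal illustration of the prefix habit, with `callModel` and `loadSourceDocuments` as hypothetical stand-ins rather than anything from the SoQ SDK:

```typescript
// Hypothetical sketch, not the SoQ SDK: the shared context is assembled once
// and reused byte-for-byte, so a provider-side prompt cache can treat every
// pass after the first as a cheap prefix hit.

type Message = { role: "system" | "user"; content: string };

declare function callModel(messages: Message[]): Promise<string>; // stand-in client
declare function loadSourceDocuments(): string;                   // the expensive part

const sharedContext: Message[] = [
  { role: "system", content: "You are the synthesis engine." },
  { role: "user", content: loadSourceDocuments() },
];

async function runPasses(instructions: string[]): Promise<string[]> {
  const outputs: string[] = [];
  for (const instruction of instructions) {
    // Identical prefix every time; only the tail changes between passes.
    outputs.push(await callModel([...sharedContext, { role: "user", content: instruction }]));
  }
  return outputs;
}
```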
The SoQ Platform is what we built for ourselves while running the program, then opened up because every other team we know was building the same scaffolding from scratch. Each product is small, well-named, and useful on its own. Together they let an organisation stand up an AI-native operation in a fortnight, not a fiscal year.
A conversation engine humans steer in real time. Live nudges, hard guardrails, mid-stream rewriting. The layer that turns an LLM into a coached, accountable interlocutor — our enterprise-distinctive surface.
How steering works →
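One plausible shape for that steering loop, sketched here rather than taken from Helm's real interface: the coach's controls sit between the model's token stream and the participant, so every chunk is rewritten, let through, or cut.

```typescript
// Illustrative only; Helm's actual steering surface is not published here.
// The human's controls wrap the model stream: nudges rewrite chunks in flight,
// the guardrail can end the stream outright.

type Steering = {
  rewrite: (chunk: string) => string; // live nudge: soften, redact, reframe
  shouldAbort: () => boolean;         // hard guardrail: stop mid-sentence if needed
};

async function* steeredStream(
  modelStream: AsyncIterable<string>,
  steering: Steering
): AsyncGenerator<string> {
  for await (const chunk of modelStream) {
    if (steering.shouldAbort()) return; // guardrails beat fluency
    yield steering.rewrite(chunk);      // only coached text reaches the other side
  }
}
```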
A document-anchored dialogue API that asks the next right question. Long memory, scaffolded questioning, and a structured rationale on every move — usable as a self-paced conversation, or with a host editing each question before it ships. For companies, teams, and the people inside them.
Read the spec →
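As a rough illustration of the shape such an API implies (field names invented here, not quoted from the spec): every move carries the question, the document anchor it hangs off, and the rationale a host can read, edit, or veto before it ships.

```typescript
// Invented field names, for illustration; the spec defines the real contract.
interface NextQuestion {
  question: string;
  anchor: { documentId: string; start: number; end: number }; // where in the document it points
  rationale: {
    whyNow: string;      // why this question, at this point in the conversation
    buildsOn: string[];  // earlier exchanges it scaffolds from (the long memory)
  };
  status: "draft" | "shipped"; // self-paced mode ships directly; hosted mode
                               // waits for a host to edit and approve the draft
}
```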
Video understanding built on the same playback substrate that powers Observatory. Frame-level tagging, ultra-smooth scrubbing, store-and-search, and an LLM you can hold a conversation with — about your video.
See Lens →
Synthesis, clips, reels, and an eval surface for conversational deployments. Trace, score, replay, and ship. Pairs natively with SoQ LLM, Helm, and Lens.
See dashboards →
Six experiments, headed by Helm. Where new primitives get built, sharpened, and graduated into the platform. If you live on the edge of what these models can do, this is your room.
Browse experiments →
Capture-anywhere ingestion for the small daily artefacts that make up a life: notes, calls, photos, walks. Structured, searchable, yours.
See ingest spec →
The context-preparation layer underneath every AI call we make. Pulls from your substrate, validates, hashes, attributes sources, ledgers what the model saw. Single source of truth for “what did the model actually see?”
See the layer →
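A sketch of what “ledgers what the model saw” could mean in practice, with invented field names and Node's crypto module standing in for whatever the layer actually uses: each call keeps the sources it drew on and a hash of the exact bytes the model received.

```typescript
// Invented names, minimal sketch. The point: every AI call leaves behind a
// record of which sources were pulled and a hash of the exact context sent,
// so "what did the model actually see?" has a checkable answer.

import { createHash } from "node:crypto";

interface ContextLedgerEntry {
  callId: string;
  preparedAt: string;                              // ISO timestamp
  sources: { uri: string; retrievedAt: string }[]; // attribution per fragment
  validated: boolean;                              // schema / freshness checks passed
  contextSha256: string;                           // hash of the exact bytes sent
}

function prepareLedgerEntry(
  callId: string,
  fragments: { uri: string; text: string }[]
): ContextLedgerEntry {
  const now = new Date().toISOString();
  const context = fragments.map((f) => f.text).join("\n");
  return {
    callId,
    preparedAt: now,
    sources: fragments.map((f) => ({ uri: f.uri, retrievedAt: now })),
    validated: true, // assume upstream validation ran before this point
    contextSha256: createHash("sha256").update(context).digest("hex"),
  };
}
```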
Ground claims against trusted sources, return citations, surface contested statements, and flag confidence. The grounding layer for any system that has to be right out loud.
See endpoints →
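One way a grounding response could be shaped, with names invented for illustration rather than taken from the published endpoints: one entry per checkable claim, each carrying its verdict, its citations, and anything that contests it.

```typescript
// Illustrative response shape; the real endpoints define their own fields.
interface GroundedClaim {
  claim: string;
  verdict: "supported" | "contested" | "unsupported";
  confidence: number;                                  // 0..1, how firmly sources back the verdict
  citations: { sourceId: string; quote: string }[];    // where the support comes from
  contestedBy?: { sourceId: string; quote: string }[]; // present when trusted sources disagree
}

interface GroundingResponse {
  claims: GroundedClaim[]; // one entry per checkable statement in the input
}
```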
The opinionated workspace we built for our own program, productised for one organisation at a time. An MCP server tuned to your substrate, a curated skill library, and the governance bar a senior operator would expect.
See the install →
The MCP server we built for our own program, plus the patterns we used building it. Ships inside an Avira install today; the CLI and eval framework are honest roadmap.
Read the patterns →
The harness underneath. Multi-stage AI as a first-class primitive — aggregate, orchestrate, cache, ledger. The reason the rest of the Stack runs consistently across calls.
See the substrate →
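A loose reading of “multi-stage AI as a first-class primitive”, with invented names rather than the Stack's real API: stages are declared as data, and the harness, not each caller, owns the cache and writes the ledger.

```typescript
// Invented names, minimal sketch: the harness owns caching and the ledger,
// so every pipeline built on it behaves the same way across calls.

interface Stage {
  name: string;
  cacheKey: (input: unknown) => string;      // stable key for harness-level caching
  run: (input: unknown) => Promise<unknown>; // the aggregate / orchestrate step itself
}

const cache = new Map<string, unknown>();

async function runPipeline(
  input: unknown,
  stages: Stage[],
  ledger: (entry: object) => void
): Promise<unknown> {
  let value: unknown = input;
  for (const stage of stages) {
    const key = `${stage.name}:${stage.cacheKey(value)}`;
    if (!cache.has(key)) cache.set(key, await stage.run(value));     // pay for each stage once
    value = cache.get(key);
    ledger({ stage: stage.name, key, at: new Date().toISOString() }); // every hop recorded
  }
  return value;
}
```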
The recording substrate built for one human reviewing one session, carefully. DOM-faithful, capture-time sanitised, hierarchically nested. Pairs with Observatory.
See Replay →
Structured 360 feedback for one human at a time. Named invitees, moderated responses, a synthesis you can return to a year later. The middle ground between corporate review and the informal ask.
Run a campaign →
Not Pomodoro. A focused-work substrate that captures intention, detects stalls, surfaces a facilitator nudge when needed, and runs structured reflection at the end. Pairs with Helm.
See the session shape →
A beautifully composed, interactive artefact — described into shape from your Avira workspace by an operator, no code required. Published under your name, with an audit trail behind it. The Google Form is the wrong shape; the PDF is too dead.
See the substrate →
One. We believe the difference between an organisation that compounds with these models and one that fails to is almost entirely about taste at the harness layer. The system prompts, the eval rubric, the refusal posture, the memory contract, the place where humans steer the model. We have built those defaults against our own program. They are now the platform.
Two. Most operations teams are paying for a stack of point vendors, badly stitched, with engineering time burned on glue between them. We thought we could do that work once, properly, and let teams pay for one platform instead.
Three. The interesting work is no longer in raw model capability. It is in the human layer that wraps it: how a coach steers a conversation in real time, how a reviewer rewrites mid-stream, how an organisation turns model output into something it is willing to put its name on. That is the layer we have built around.
The platform is offered as enterprise engagements only. Write to us with the actual situation and we will reply within three working days, from a person.