Substrate joins
Connectors to the structured stores that hold your real entities — your DB, your Drive, your Linear, your custom app. Output is a typed packet, not a stringified blob.
Pulls structured data from your substrate. Validates it. Hashes it. Hands a clean, single-source-of-truth payload to whatever AI tool runs next. Cache-aware, audit-ready, and the reason our downstream products can cite their sources.
An LLM is only as good as the evidence packet it sees. Most teams we look at are assembling that packet on the fly inside the request handler — a join here, a stringify there, a quiet truncation when the context blows past the window — and then wondering why the output is inconsistent across calls or impossible to audit after the fact.
Isilon is the layer we built to stop doing that. Every AI call we make in production goes through it. It pulls the structured data the model needs (sessions, journals, psychometrics, external documents, whatever the substrate is), assembles a typed payload, hashes it for cache invalidation, attaches source-attribution markers, and writes a record of exactly what got handed to the model. The downstream tool (Terra, Soqratic, Atelier) consumes it. The audit trail is automatic.
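The shape of that pipeline can be sketched in a few lines. This is an illustrative sketch, not Isilon's actual API: the names (`Fact`, `EvidencePacket`, `assemble`, `LEDGER`) and the in-memory ledger are assumptions standing in for the real implementation.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

LEDGER: list[dict] = []  # stand-in for a durable audit store


@dataclass(frozen=True)
class Fact:
    origin: str  # attribution marker, e.g. "session:42/msg:7"
    text: str


@dataclass(frozen=True)
class EvidencePacket:
    facts: tuple
    content_hash: str
    assembled_at: str  # ISO-8601


def assemble(raw_facts: list[dict]) -> EvidencePacket:
    facts = tuple(Fact(**f) for f in raw_facts)
    if not all(f.origin and f.text for f in facts):  # validate at the boundary
        raise ValueError("fact missing origin or text")
    canonical = json.dumps([asdict(f) for f in facts], sort_keys=True)
    h = hashlib.sha256(canonical.encode()).hexdigest()  # deterministic cache key
    packet = EvidencePacket(facts, h, datetime.now(timezone.utc).isoformat())
    LEDGER.append({"hash": h, "at": packet.assembled_at})  # audit record
    return packet
```

The point is the ordering: validation rejects bad evidence before any model sees it, the hash is computed over the evidence alone (not the timestamp), and the ledger write happens on every call, not as an afterthought.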

Every packet hashed by content. The same evidence twice produces the same hash twice. Downstream cache hits become deterministic; re-runs become free.
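The usual way to get "same evidence, same hash" is canonical serialization before hashing. A minimal sketch, assuming canonical JSON over SHA-256 (the actual scheme Isilon uses is not specified here):

```python
import hashlib
import json


def packet_hash(facts: list[dict]) -> str:
    # Canonical JSON: sorted keys, fixed separators, so serialization
    # quirks cannot change the hash of identical evidence.
    canonical = json.dumps(facts, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


# Same facts, different key order in the source dicts: same hash, so a
# cache keyed on it hits deterministically and re-runs are free.
a = packet_hash([{"origin": "session:42", "text": "improved sleep"}])
b = packet_hash([{"text": "improved sleep", "origin": "session:42"}])
assert a == b
```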
Every fact in the packet labelled with its origin (session id, message id, journal day, document chunk). The downstream model cites against these markers; auditors trace claims back to source.
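One way to make markers citable end to end is to carry them inline in the rendered context, so the model quotes a marker and an auditor resolves it with a lookup. Illustrative only; the marker format and function names are assumptions:

```python
facts = {
    "session:42/msg:7": "Client reported improved sleep.",
    "journal:2024-05-01": "Missed the morning routine twice.",
}


def render_context(facts: dict[str, str]) -> str:
    # Each fact carries its marker inline, so the model cites it verbatim.
    return "\n".join(f"[{origin}] {text}" for origin, text in facts.items())


def trace(marker: str, facts: dict[str, str]) -> str:
    # Resolving a cited marker back to its source is a single lookup.
    if marker not in facts:
        raise KeyError(f"cited unknown source: {marker}")
    return facts[marker]
```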
A packet that fails validation never reaches the model. Missing fields, broken foreign keys, out-of-window timestamps, unset rights-clearance flags: all caught at the boundary.
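A boundary validator of this kind can be sketched as a function that returns every failure reason rather than stopping at the first, so the ledger records why a packet was rejected. The field names, the FK lookup, and the 90-day window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

KNOWN_SESSIONS = {"42"}      # stand-in for a real foreign-key lookup
WINDOW = timedelta(days=90)  # evidence older than this is out of window


def validation_errors(fact: dict) -> list[str]:
    """Every reason this fact must not reach the model; empty means valid."""
    errors = []
    for field in ("origin", "text", "session_id", "at", "rights_cleared"):
        if field not in fact:
            errors.append(f"missing field: {field}")
    if fact.get("session_id") not in KNOWN_SESSIONS:
        errors.append("broken FK: unknown session_id")
    at = fact.get("at")
    if isinstance(at, datetime) and datetime.now(timezone.utc) - at > WINDOW:
        errors.append("timestamp outside evidence window")
    if not fact.get("rights_cleared"):
        errors.append("rights not cleared")
    return errors
```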
The exact packet, with hash and timestamp, written for every AI call. Six months later when someone asks “what did the model see?”, the answer is one query, not a forensics project.
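"One query" looks like this in practice. A sketch using an in-memory SQLite table as a stand-in for the real ledger store; the schema and call id are hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the durable ledger store
db.execute(
    "CREATE TABLE ledger (call_id TEXT PRIMARY KEY, hash TEXT, at TEXT, packet TEXT)"
)
db.execute(
    "INSERT INTO ledger VALUES (?, ?, ?, ?)",
    ("call-123", "9f2c...", "2025-01-07T12:00:00Z", '{"facts": ["..."]}'),
)

# Six months later: "what did the model see on call-123?" is one query.
row = db.execute(
    "SELECT hash, at, packet FROM ledger WHERE call_id = ?", ("call-123",)
).fetchone()
```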
Upload zone for files outside your structured substrate (PDFs, briefs, contracts). Parsed, chunked, attribution-tagged, and slotted into the packet alongside the structured data.
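Attribution-tagged chunking can be sketched as fixed-size splitting where every chunk gets an origin marker in the same format as the structured facts. The `doc:<id>/chunk:<n>` marker scheme is an assumption, not Isilon's documented format:

```python
def chunk_upload(doc_id: str, text: str, size: int = 200) -> list[dict]:
    # Fixed-size chunks, each tagged with an origin marker so uploaded
    # material is citable exactly like structured facts.
    return [
        {"origin": f"doc:{doc_id}/chunk:{i}", "text": text[start:start + size]}
        for i, start in enumerate(range(0, len(text), size))
    ]
```

Real parsers chunk on semantic boundaries rather than fixed offsets, but the marker-per-chunk discipline is the part that matters for the packet.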
Isilon is not a vector store. If you need a production vector database, use one of the three good ones already on the market. Isilon does the layer underneath that — assemble, validate, hash, attribute, ledger — the layer most teams skip and most AI failures trace back to.
We built it because we needed it. We are productising it because every operator team we know is doing this layer badly or not at all.
Isilon ships inside an Avira install today. The standalone SDK extraction is in progress; library users get it next.
If your AI calls cannot answer “what did the model see?” six months later, this is the layer you are missing.