Home
Select an Avatar from the list, or sign in to continue.
Overview
Quick read on this Avatar. Open other tabs for Ingest, Memory, Test, Deploy / Integrate, or Settings.
Ingest
Add files to the workspace corpus (preview, reflection, then commit). The primary action in the rail is Add file; you can also choose a file here.
Memory
Inspect what is stored and how retrieval responds. Document history and diff tools are under Tools in the right rail.
Recall
Run semantic recall against the workspace index — same retrieval path as chat grounding.
Nodes
Browse stored memory nodes (documents, chunks, reflection). Use Inspect on a row for full JSON.
Test
Exercise Guide vs User Avatar against the workspace. After each send, Sources this turn in the right rail lists chunks used for that message.
When a deployment is selected, chat is bound to that deployment handle (integration testing).
Guide is active — add documents via Ingest or Tools, then return here.
Explore in under a minute
- Press Send on the message above — check the reply, Sources this turn, and the proof line under the answer.
- Use User Avatar (tab above) for embed-style answers; with ingested documents you should see grounded turns and non-zero sources.
- Memory — see what is stored and how recall behaves.
- Deploy / Integrate — curl recipes and a live session-token test (same path as a website).
Deploy / Integrate
Create deployment handles for this Avatar so external clients can call chat with a known deployment boundary — separate from evidence export.
Chat endpoint for integrators
—
Include deployment_id in the JSON body when posting to that endpoint, or mint a short-lived session token below for browser or mobile bootstrap.
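A minimal sketch of such a request, assuming the endpoint shown above. The endpoint URL, the deployment handle value, and the `message` field name are placeholders for illustration; only `deployment_id` itself is named by this page, so treat the rest of the body shape as an assumption, not a published schema.

```shell
# Placeholders: substitute the chat endpoint shown above and a real
# deployment handle created on this tab. The "message" field name is
# an assumption for illustration, not a documented schema.
CHAT_ENDPOINT="https://example.invalid/chat"
DEPLOYMENT_ID="dep_123"

# Build the JSON body; deployment_id travels in the body, not the URL.
BODY=$(printf '{"deployment_id": "%s", "message": "%s"}' "$DEPLOYMENT_ID" "Hello")
echo "$BODY"

# Post it once the endpoint and handle are real:
# curl -sS -X POST "$CHAT_ENDPOINT" \
#   -H 'Content-Type: application/json' \
#   -d "$BODY"
```

If you mint a short-lived session token below instead, it would typically ride in an `Authorization` header on the same request rather than in the body.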
Deployments are handles for integration boundaries (session minting, kind, pause). They do not store LLM provider or model settings. Chat for any deployment below uses the same runtime inference as this Avatar (configured under Settings → runtime inference).
Trust: All deployments use this Avatar’s runtime policy at request time (not a separate deployment-level model configuration).
Chat requests will fail until runtime inference is configured for this Avatar; open Settings → runtime inference or tenant Settings → LLM Providers.
Avatar settings
Name and lifecycle for this Avatar. Pausing stops new integration traffic; deleting removes deployments for this Avatar.
Runtime inference (Avatar chat)
Controls which model and relay connection this Avatar uses in Test and other runtime chat paths. Does not change ingest or memory indexing. Phase 1 stores your choice; it does not validate live provider health from here.
Effective runtime (used for chat policy)
Stored Avatar row (raw JSON)
—
Audit / Export
Lightweight operational evidence for trust reviews and sales demos — not a full node export. Includes Avatar metadata, effective runtime snapshot, deployments, memory counts, and a sample of recent chat audit entries.
Recent activity
Last chat turns (model, grounding mode, validation) and runtime configuration changes from audit logs.