Why a panel, not a chatbot

Cyber risk is a cross-functional problem: a single vulnerability can have privacy, vendor, and insurance consequences at once. A generic chatbot either reasons shallowly across all of those domains or hyper-focuses on one. Draxis splits the reasoning across specialists and then synthesises the answer.

Personas

| Persona | Domain | When invoked |
|---|---|---|
| AI vCISO | Prioritisation, narrative, board-level framing. | Moderator; decides which specialists to invoke. |
| Privacy expert | GDPR, CCPA, HIPAA, DORA, NIS2. | When data handling, breach, or consent is at stake. |
| TPRM expert | Vendor assessment, blast radius, continuous monitoring. | When the answer depends on a third party. |
| Cyber insurance | Coverage, exclusions, carrier posture. | When financial risk transfer is on the table. |
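The moderator's dispatch decision can be pictured as a routing step over the question. The sketch below is a deliberately crude keyword heuristic for illustration only; in Draxis the moderator is itself an LLM call, and the trigger lists and function names here are assumptions, not the actual implementation.

```python
# Hypothetical sketch of the moderator's specialist-selection step.
# The keyword triggers are illustrative; the real moderator reasons with an LLM.

PRIVACY_TRIGGERS = {"gdpr", "ccpa", "hipaa", "dora", "nis2", "breach", "consent"}
TPRM_TRIGGERS = {"vendor", "supplier", "third party", "third-party"}
INSURANCE_TRIGGERS = {"coverage", "premium", "exclusion", "carrier", "claim"}

def select_specialists(question: str) -> list[str]:
    """Return the specialist personas the moderator would invoke."""
    q = question.lower()
    specialists = []
    if any(t in q for t in PRIVACY_TRIGGERS):
        specialists.append("privacy")
    if any(t in q for t in TPRM_TRIGGERS):
        specialists.append("tprm")
    if any(t in q for t in INSURANCE_TRIGGERS):
        specialists.append("insurance")
    # If nothing matches, the vCISO answers alone.
    return specialists or ["vciso_only"]
```

A question can trigger several specialists at once, which is the point of the panel: "Does GDPR apply to this vendor breach?" would pull in both the privacy and TPRM experts.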

Session lifecycle

  1. Tenant user opens the panel chat and asks a question.
  2. POST /api/panel-sessions creates a session; each panel session is its own audit-logged conversation.
  3. The moderator reads the question against the tenant’s current KRIs, risks, and institutional memory.
  4. It invokes one or more specialists; each gets a tightly-scoped system prompt.
  5. The synthesis step produces a single, coherent reply with a rationale.
  6. The full transcript and synthesis rationale are persisted. The tenant can end the session with POST /api/panel-sessions/<id>/end, which freezes the conversation and writes it to institutional memory.
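The lifecycle above can be sketched as a small in-memory state machine: messages accumulate in a transcript until the session is ended, after which the conversation is frozen. The class and field names are illustrative assumptions, not the actual server model.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class PanelSession:
    """Hypothetical in-memory model of a panel session's lifecycle."""
    tenant_id: str
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    transcript: list = field(default_factory=list)
    ended: bool = False

    def post_message(self, role: str, text: str) -> None:
        # Mirrors POST /api/panel-sessions/<id>/messages.
        if self.ended:
            raise RuntimeError("session is frozen; no further messages accepted")
        self.transcript.append({"role": role, "text": text})

    def end(self) -> dict:
        """Freeze the conversation; returns the record that would be
        written to institutional memory (shape is an assumption)."""
        self.ended = True
        return {"session_id": self.session_id, "transcript": list(self.transcript)}
```

Freezing on `end()` is what makes the persisted transcript trustworthy as an audit artefact: nothing can be appended after the fact.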

Institutional memory

Institutional memory is the tenant’s long-term knowledge base — things the panel has learned about the org over time. It is distinct from a single session’s context; memory persists across sessions.

  • Facts written to institutional_knowledge (structured key/value with a source citation).
  • Documents uploaded to institutional_documents (PDFs, reports, policies).
  • Ended panel sessions whose synthesis the panel judged worth remembering.

Every LLM call starts by reading the relevant slice of institutional memory into the system prompt. Privacy-sensitive facts can be marked confidentiality=restricted, which excludes them from shared panel contexts.
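The restricted-fact exclusion amounts to a filter applied when the memory slice is assembled for a shared panel context. A minimal sketch, assuming `institutional_knowledge` facts are key/value dicts with an optional `confidentiality` field (the function name and `shared` flag are assumptions):

```python
def memory_slice_for_panel(facts: list[dict], shared: bool = True) -> list[dict]:
    """Select institutional-memory facts for a system prompt.

    Facts marked confidentiality=restricted are excluded from shared
    panel contexts; a non-shared (single-persona) context sees everything.
    """
    if not shared:
        return list(facts)
    return [f for f in facts if f.get("confidentiality") != "restricted"]
```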

API

  • GET /api/personas — list personas.
  • POST /api/panel-sessions — create a session.
  • POST /api/panel-sessions/<id>/messages — post a message.
  • POST /api/panel-sessions/<id>/synthesize — force a synthesis.
  • POST /api/panel-sessions/<id>/end — end the session; freezes the conversation and writes it to institutional memory.
  • GET /api/panel-sessions/<id>/transcript — full transcript.
  • GET /api/panel-sessions/<id>/audit — decisions + rationales.
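A thin client for these endpoints only needs to build the right method/path pairs. The sketch below does exactly that and nothing more; wire it to your HTTP library of choice. The base URL and class name are assumptions.

```python
class PanelClient:
    """Hypothetical thin client for the panel API.

    Returns (method, url) pairs rather than performing requests,
    so the endpoint mapping stays visible.
    """

    def __init__(self, base_url: str = "https://app.example.com"):
        self.base = base_url.rstrip("/")

    def _url(self, path: str) -> str:
        return f"{self.base}/api/{path}"

    def create_session(self) -> tuple[str, str]:
        return ("POST", self._url("panel-sessions"))

    def post_message(self, session_id: str) -> tuple[str, str]:
        return ("POST", self._url(f"panel-sessions/{session_id}/messages"))

    def synthesize(self, session_id: str) -> tuple[str, str]:
        return ("POST", self._url(f"panel-sessions/{session_id}/synthesize"))

    def transcript(self, session_id: str) -> tuple[str, str]:
        return ("GET", self._url(f"panel-sessions/{session_id}/transcript"))
```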

Observability

Every LLM call in the panel is traced through Langfuse with a session-scoped trace ID. The in-app audit view links each synthesis to the underlying specialist calls so you can see exactly what each expert said and how the moderator composed the reply.
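The session-scoping idea is that every specialist call in one session shares one trace ID, so the audit view can group them. The sketch below shows that property with a deterministic derivation; it is an illustration only and does not use the real Langfuse SDK, whose trace IDs are managed by the library itself.

```python
import hashlib

def session_trace_id(session_id: str) -> str:
    """Derive a stable, session-scoped trace ID (illustrative only).

    Every call tagged with the same session_id gets the same trace ID,
    which is what lets the audit view link a synthesis back to the
    specialist calls beneath it.
    """
    return hashlib.sha256(session_id.encode("utf-8")).hexdigest()[:32]
```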