# The expert panel
A moderator and a handful of specialists. Each persona has its own system prompt, its own lane, and access to the same institutional memory. The moderator decides who to call into the conversation.
## Why a panel, not a chatbot
Cyber risk is a cross-functional problem: a single vulnerability can have privacy consequences, vendor consequences, and insurance consequences. A single generic chatbot either reasons shallowly across all of them or hyper-focuses on one. Draxis instead splits the reasoning across specialists and then synthesises their contributions into one answer.
## Personas
| Persona | Domain | When invoked |
|---|---|---|
| AI vCISO | Prioritisation, narrative, board-level framing. | Moderator. Decides which specialists to invoke. |
| Privacy expert | GDPR, CCPA, HIPAA, DORA, NIS2. | When data-handling, breach, or consent is at stake. |
| TPRM expert | Vendor assessment, blast radius, continuous monitoring. | When the answer depends on a third party. |
| Cyber insurance | Coverage, exclusions, carrier posture. | When financial transfer is on the table. |
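The table above implies a routing decision the moderator makes for every question. The sketch below shows that decision as simple keyword matching; this is purely illustrative, since the real moderator is LLM-driven, and the persona identifiers and trigger words are assumptions, not Draxis internals.

```python
# Hypothetical keyword-based routing, standing in for the moderator's
# LLM-driven decision about which specialists to invoke.
ROUTING_TRIGGERS = {
    "privacy_expert": {"gdpr", "ccpa", "hipaa", "breach", "consent"},
    "tprm_expert": {"vendor", "third party", "supplier", "blast radius"},
    "cyber_insurance": {"coverage", "exclusion", "carrier", "claim"},
}

def route(question: str) -> list[str]:
    """Return the personas invoked for a question."""
    q = question.lower()
    invoked = [persona for persona, triggers in ROUTING_TRIGGERS.items()
               if any(t in q for t in triggers)]
    # The AI vCISO moderates every session and synthesises the reply.
    return ["ai_vciso"] + invoked

route("Does our vendor's breach affect GDPR consent records?")
# invokes the moderator plus the privacy and TPRM specialists
```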
## Session lifecycle
- A tenant user opens the panel chat and asks a question.
- `POST /api/panel-sessions` creates a session; each panel session is its own audit-logged conversation.
- The moderator reads the question against the tenant’s current KRIs, risks, and institutional memory.
- It invokes one or more specialists; each gets a tightly scoped system prompt.
- The synthesis step produces a single, coherent reply with a rationale.
- The full transcript and synthesis rationale are persisted. The tenant can end the session with `POST /api/panel-sessions/<id>/end`, which freezes the conversation and writes it to institutional memory.
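The lifecycle above can be sketched as a tiny state machine: messages accumulate while the session is active, and ending it freezes the transcript. Class and field names here are assumptions for illustration, not Draxis's data model.

```python
from enum import Enum

class SessionState(Enum):
    ACTIVE = "active"
    ENDED = "ended"

class PanelSession:
    """Hypothetical model of the lifecycle: active sessions accept
    messages; ending a session freezes it permanently."""

    def __init__(self) -> None:
        self.state = SessionState.ACTIVE
        self.transcript: list[dict] = []

    def post_message(self, role: str, content: str) -> None:
        if self.state is not SessionState.ACTIVE:
            raise RuntimeError("session is frozen")
        self.transcript.append({"role": role, "content": content})

    def end(self) -> list[dict]:
        # Mirrors POST /api/panel-sessions/<id>/end: freeze the
        # conversation and hand the transcript to institutional memory.
        self.state = SessionState.ENDED
        return self.transcript
```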
## Institutional memory
Institutional memory is the tenant’s long-term knowledge base — things the panel has learned about the org over time. It is distinct from a single session’s context; memory persists across sessions.
- Facts written to `institutional_knowledge` (structured key/value pairs with a source citation).
- Documents uploaded to `institutional_documents` (PDFs, reports, policies).
- Ended panel sessions whose synthesis the panel judged worth remembering.
Every LLM call starts by reading the relevant slice of institutional memory into the system prompt. Privacy-sensitive facts can be marked `confidentiality=restricted`, which excludes them from shared panel contexts.
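The confidentiality filter above might look like the following. This is a minimal sketch under the assumption that facts carry a `confidentiality` field; the actual `institutional_knowledge` schema may differ.

```python
def memory_slice(facts: list[dict], shared_context: bool) -> list[dict]:
    """Select facts eligible for a system prompt. Restricted facts are
    excluded whenever the context is shared across the panel."""
    if not shared_context:
        return facts
    return [f for f in facts if f.get("confidentiality") != "restricted"]

facts = [
    {"key": "primary_cloud", "value": "AWS", "confidentiality": "internal"},
    {"key": "breach_2023", "value": "...", "confidentiality": "restricted"},
]
memory_slice(facts, shared_context=True)  # drops the restricted fact
```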
## API
- `GET /api/personas` — list personas.
- `POST /api/panel-sessions` — create a session.
- `POST /api/panel-sessions/<id>/messages` — post a message.
- `POST /api/panel-sessions/<id>/synthesize` — force a synthesis.
- `GET /api/panel-sessions/<id>/transcript` — full transcript.
- `GET /api/panel-sessions/<id>/audit` — decisions and rationales.
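A thin client over these endpoints might look as follows. The transport is injected so the sketch stays self-contained; a real client would plug in an HTTP library, and the request/response shapes here are assumptions.

```python
class PanelClient:
    """Hypothetical client for the panel-session endpoints. `transport`
    is a callable (method, path, body) -> dict, e.g. an HTTP wrapper."""

    def __init__(self, transport):
        self.transport = transport

    def create_session(self) -> str:
        return self.transport("POST", "/api/panel-sessions", {})["id"]

    def post_message(self, sid: str, text: str) -> dict:
        return self.transport(
            "POST", f"/api/panel-sessions/{sid}/messages", {"content": text}
        )

    def end(self, sid: str) -> dict:
        return self.transport("POST", f"/api/panel-sessions/{sid}/end", {})
```

Injecting the transport also makes the client trivial to exercise against a stub in tests.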
## Observability
Every LLM call in the panel is traced through Langfuse with a session-scoped trace ID. The in-app audit view links each synthesis to the underlying specialist calls so you can see exactly what each expert said and how the moderator composed the reply.
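The linkage described above can be sketched as plain records: every specialist call carries the session-scoped trace ID, and the synthesis record cites the specialist call IDs it drew on. Field names are illustrative, not Langfuse's actual API.

```python
import uuid

def new_trace(session_id: str) -> dict:
    # One trace per panel session; all calls attach to it.
    return {"trace_id": f"panel-{session_id}-{uuid.uuid4().hex[:8]}",
            "calls": []}

def record_call(trace: dict, persona: str, prompt: str) -> str:
    """Record one specialist LLM call and return its call ID."""
    call_id = f"call-{len(trace['calls'])}"
    trace["calls"].append({"id": call_id, "persona": persona,
                           "prompt": prompt})
    return call_id

def record_synthesis(trace: dict, sources: list[str], rationale: str) -> dict:
    # The audit view joins this record back to the specialist calls
    # it cites, so each expert's contribution stays traceable.
    return {"trace_id": trace["trace_id"], "sources": sources,
            "rationale": rationale}
```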