AI Drop Zone
The universal KRI extractor, Draxis’s built-in AI connector for any artifact a vendor connector doesn’t cover. Paste a CSV, drag a vendor SOC 2, push a webhook from a scheduled export, and AI extracts typed Key Risk Indicators against the Draxis risk catalog with source-span citations.
At a glance
| Vendor | Draxis (first-party). The Drop Zone is built into the platform; there is no third-party vendor to authorize against. |
|---|---|
| Source type | ai_extractor |
| Vendor ID (slug) | kri-extractor |
| Base URL | n/a (local-only). The Add Integration form hides the API Endpoint field for the Drop Zone vendor and renders an inline helper card instead. |
| Auth method | Paste & upload: none beyond your Draxis session. Webhook: per-integration bearer token (`x-extractor-token`) issued on save. |
| Schedule default | manual; the Drop Zone runs when you submit an artifact (paste / upload / webhook), not on a cron. |
| What it produces | KRI value rows in your Risk Register, scoped to the canonical catalog signals the AI matched. |
| What it costs | One bounded LLM call per artifact (Sonnet, scanning against the catalog signal allow-list, capped at 80 entries). Pennies per artifact at current Anthropic pricing. |
| Availability | New in 2026.05. |
What problem this solves
Most security programs have a long tail of tools that don’t justify a structured connector: one-off scanners, regional SIEMs, vendor-specific reports, the auditor’s PDF, the after-action from last quarter’s tabletop. Without the Drop Zone, the signal in those artifacts dies in someone’s inbox and never reaches your Risk Register.
The Drop Zone is the catch-all: any artifact with risk signal in it, structured or not, can be turned into KRI values that flow through the same risk model the connector data does. AI does the extraction; you keep the audit trail.
Four intake paths
1. Paste
Open Settings → Integrations → AI Drop Zone and paste a CSV / JSON / log / config snapshot directly into the textarea. Best for quick one-shot extractions and ad-hoc reviews.
2. Upload
Drag a file (≤ 5 MB) of plain text, CSV, JSON, YAML, NDJSON, log, or TSV into the upload zone. Multipart upload is handled cleanly so the file’s structure is preserved for the extractor.
3. Webhook
Push from a scheduled export, a CI job, or a third-party automation:
```
POST /api/integrations/extractor/webhook/:integrationId
x-extractor-token: <token issued at save>
Content-Type: application/json | text/plain | text/csv

<artifact body, ≤ 5 MB>
```
The token is issued when you save the integration; rotate it from the integration’s detail page if it leaks. The endpoint is idempotent on the artifact’s SHA-256: pushing the same payload twice is a no-op, not a duplicate extraction.
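Because the dedupe key is just the SHA-256 of the artifact body, a webhook producer can compute it client-side to predict which pushes will be no-ops. A minimal sketch (using the hex digest as the key is an assumption; the docs only state that dedupe is keyed on the artifact’s SHA-256):

```python
import hashlib

def dedupe_key(artifact_body: bytes) -> str:
    """SHA-256 hex digest of the raw artifact bytes.

    Assumption: Draxis collapses two submissions with the same
    digest into a single extraction.
    """
    return hashlib.sha256(artifact_body).hexdigest()

csv_export = b"signal,value\npentest_critical_findings_open,3\n"

# Identical bytes -> identical key -> the second POST is a no-op.
assert dedupe_key(csv_export) == dedupe_key(csv_export)

# Any change to the body, even one byte, yields a new key and a
# fresh extraction.
assert dedupe_key(csv_export) != dedupe_key(csv_export + b"\n")
```

This is also why re-pushing an unchanged scheduled export every night is safe: each night after the first is recognized and skipped.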
4. MCP tool call (Claude Cowork, Claude Code, cron, any MCP client)
Use the `submit_dropzone_artifact` tool on the Draxis MCP server (`/api/mcp`). Best fit: a scheduled job that pulls data from a tool without a Draxis connector (a regional console, an internal API, a CLI export) and pushes it into Draxis for KRI extraction. The MCP path is the only intake that runs entirely outside a browser session; no human needs to be present.
A complete session looks like:
- Authenticate with a read/write-scoped PAT or an OAuth token holding `mcp:write`.
- Call `list_integrations` to find the Drop Zone integration id (rows where `isDropZone` is `true`).
- Call `submit_dropzone_artifact` with the integration id and the pulled text. `filename` and `mimeType` are optional but help the audit trail.
- Optionally call `run_integration` on the same integration id to materialize accepted extractions into KRIs immediately, instead of waiting for the next scheduled run.
The same SHA-256 dedupe applies, so a re-run that pulls the same data twice is a no-op. See the Claude Code setup page for the canonical scheduled-pull pattern, the Claude Desktop page for a full tool reference, or the other LLM client pages for setup specifics on Cursor, VSCode + Copilot, and ChatGPT Connectors.
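The session above can be sketched as one plain function that takes a tool-call callable, so the same logic works from any MCP client or a cron wrapper. This is an illustrative sketch, not the Draxis client: `call_tool(name, **args)` stands in for whatever your MCP client exposes, and the row fields (`id`, `isDropZone`) are assumed response shapes.

```python
from typing import Any, Callable

def push_to_dropzone(
    call_tool: Callable[..., Any],
    text: str,
    filename: str = "export.csv",
    run_now: bool = False,
) -> Any:
    """Scheduled-pull sketch: find the Drop Zone integration,
    submit the pulled text, optionally flush to KRIs right away."""
    rows = call_tool("list_integrations")
    dz = next(r for r in rows if r["isDropZone"])       # find the Drop Zone row

    result = call_tool(
        "submit_dropzone_artifact",
        integrationId=dz["id"],
        text=text,
        filename=filename,          # optional, helps the audit trail
        mimeType="text/csv",
    )
    if run_now:                                          # materialize immediately
        call_tool("run_integration", integrationId=dz["id"])
    return result
```

Because of the SHA-256 dedupe, calling this from cron with unchanged data is harmless; only new content triggers a fresh extraction.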
How extraction works
- Persist + dedupe. The artifact is stored in `extractor_artifact`, keyed by SHA-256. Identical content submitted again is recognized and skipped.
- Scan against the catalog. Sonnet scans the artifact body against the Draxis canonical signal allow-list (P1+P2 catalog signals, bounded at 80 entries) and proposes `{signal_name, value, unit, confidence, reasoning, evidence}` for each match. The `evidence` field is the literal source span (≤ 200 chars) the value was derived from.
- Confidence gate. Proposals with `confidence ≥ 0.85`, non-empty `reasoning`, and non-empty `evidence` auto-accept. Anything below queues in Pending extraction review in the Drop Zone panel for inline accept / reject.
- Materialize as KRIs. Accepted proposals materialize as `kri` rows on the connector’s next `run()`, or hit Run now from the integration card to flush immediately. Numeric values land directly; booleans coerce to `1 / 0`.
- Cite back. Every accepted value carries a pointer to the source artifact and the evidence span, so an auditor or reviewer can trace any KRI back to the line of the document it came from.
Wire it into Draxis
- Open Settings → Integrations in your tenant.
- Click Add integration and pick AI Drop Zone as the source type.
- Pick AI Drop Zone (kri-extractor) from the vendor dropdown. The form hides the API Endpoint field and shows an inline helper card with the three intake paths.
- (Optional) Name the integration something meaningful (e.g. “Pentest reports”, “Vendor SOC 2 intake”, “Auditor uploads”) so the audit trail and run history are scannable.
- Save. The integration is now listed alongside your structured connectors. The webhook URL and bearer token are revealed on the integration’s detail page if you plan to push artifacts in.
- Submit your first artifact via paste, upload, or webhook. Within a few seconds, the extraction lands, high-confidence values auto-accept, the rest queue under Pending extraction review.
KRIs produced
The Drop Zone is not bound to a fixed KRI list. Instead, it can write to any P1 or P2 signal in the Draxis canonical risk catalog (180 KRI signals across 42 risks and 10 domains). What lands in your Risk Register is determined by what the AI finds in the artifact, not by a hardcoded mapping.
For example:
- A pentest report listing “3 critical findings, oldest 47 days” can land values for `pentest_critical_findings_open` and `pentest_critical_age_days_max`.
- A vendor SOC 2 attesting to MFA enforcement on privileged accounts can land a value for `vendor_mfa_privileged_attested`.
- A KnowBe4 PDF export listing phishing-prone percentage can land a value for `kb4_phishing_prone_pct`, the same signal the structured KnowBe4 connector populates.
- A regional SIEM’s CSV export of unacknowledged notable events can land a value for `siem_notable_unack_4h`.
Multiple artifacts attesting to the same canonical signal are first-class: the Drop Zone cites every contributing source. If the AI proposes a signal that isn’t in the catalog, the proposal routes to the platform-scope Catalog Proposals queue instead, as a candidate addition for the next catalog release.
Safety & limits
- 5 MB cap, enforced at three layers: the paste body parser, multer (file upload), and the webhook body parser. Larger artifacts return HTTP 413 immediately, before the LLM is called.
- 200K-character prompt truncation. Artifacts longer than the model’s effective working window are truncated, with an explicit `truncated: true` stamp on the artifact record so reviewers know the extraction is not based on the whole document.
- JSON extraction is brace-matched. The LLM’s reply is parsed by matching the first balanced `{...}`, so prose preamble and Markdown fences around the JSON don’t break extraction.
- Local-only vendor. No outbound calls to a third party; the artifact never leaves Draxis except for the bounded Sonnet inference call. No SSRF guard required (and no opportunity for one).
- Confidence + evidence are both required. A high-confidence proposal with empty evidence is treated as low-confidence and queued for review. Auto-accept demands all three: confidence ≥ 0.85, non-empty reasoning, non-empty evidence span.
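The brace-matching trick above is easy to reproduce: start at the first `{` and walk forward until the braces balance, ignoring brace characters that appear inside JSON strings. A sketch under the assumption that the reply contains a single top-level object:

```python
import json

def extract_json_object(reply: str) -> dict:
    """Parse the first balanced {...} in an LLM reply, ignoring
    any prose preamble or Markdown fences around it."""
    start = reply.index("{")
    depth, in_string, escaped = 0, False, False
    for i, ch in enumerate(reply[start:], start):
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True          # next char is escaped, e.g. \"
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:              # braces balanced: parse the slice
                return json.loads(reply[start : i + 1])
    raise ValueError("no balanced JSON object in reply")

reply = (
    "Here are the extractions:\n"
    "```json\n"
    '{"proposals": [{"signal_name": "siem_notable_unack_4h", "value": 7}]}\n'
    "```\nDone."
)
assert extract_json_object(reply)["proposals"][0]["value"] == 7
```

Tracking the in-string state matters: an `evidence` span quoted from a log file can legitimately contain `{` or `}` characters, and a naive depth counter would misparse it.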
What the Drop Zone is NOT
- Not a substitute for structured connectors. If a vendor has a Draxis connector (Okta, CrowdStrike, Tenable, etc.), use it. Connectors run on schedule, cover more signals, and don’t depend on whether someone remembered to upload a file. The Drop Zone is for the long tail.
- Not a free-text Q&A surface. The Drop Zone extracts typed KRI values against the catalog. If you want to ask the AI vCISO “what does this report mean for our risk?”, use the AI vCISO panel instead; it consumes the extracted KRIs as part of its grounding context.
- Not a document store. Artifacts are persisted with their SHA-256, but the Drop Zone is not a content-management system. Don’t use it as a filing cabinet for your audit evidence; use it as the on-ramp for risk signal that lives in those documents.
- Not authorized to mutate other tenant state. Accepted extractions write KRI values; they do not change risk likelihood/impact, control effectiveness, or any other analyst-owned field. Those still flow through the AI Risk Score Proposals queue and the analyst’s explicit review.
Quirks
- Re-uploading the same file is a no-op. SHA-256 dedupe means a second submission of an identical artifact is recognized and skipped, useful for idempotent webhook pushes, occasionally surprising on manual re-uploads. To force re-extraction, change one byte of the artifact body (e.g. append a timestamp line) or open a support ticket.
- Truncation favors the start of the document. Documents over the 200K-character working window are truncated from the end. If your most important risk signal lives at the bottom of a long report, split the artifact into sections before uploading.
- Webhook tokens don’t auto-rotate. Tokens persist until you rotate them from the integration’s detail page. If a token leaks, rotate immediately; old tokens are revoked at the moment a new one is issued.
- Failures are non-blocking. If the LLM call fails (timeout, rate limit, bad JSON), the artifact is recorded with the failure reason and surfaces in Pending extraction review for re-trigger or manual handling. The Drop Zone never silently drops an artifact.
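The truncation quirk can be checked before you upload: if a document exceeds the window, only its head survives. A sketch mimicking that behavior (the 200K limit and the `truncated` flag come from Safety & limits; the record shape here is illustrative):

```python
WINDOW = 200_000  # characters, per the documented prompt cap

def prepare_artifact(body: str) -> dict:
    """Truncate from the end, as the Drop Zone does, and record
    whether the extraction will see the whole document."""
    return {
        "body": body[:WINDOW],               # head of the document survives
        "truncated": len(body) > WINDOW,     # tail is dropped if over the cap
    }

short = prepare_artifact("a" * 10)
big = prepare_artifact("x" * (WINDOW + 5))
assert short["truncated"] is False
assert big["truncated"] is True and len(big["body"]) == WINDOW
```

Running a long report through a check like this before uploading tells you whether to split it into sections so the signal at the bottom isn’t lost.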
Troubleshooting
- HTTP 401 on webhook: the bearer token is wrong or has been rotated. Get the current token from the integration’s detail page in Settings → Integrations.
- HTTP 413 on upload: the artifact exceeds the 5 MB cap. Split the file (one section per upload) or extract the relevant section before uploading.
- Extraction returned 0 proposals: the artifact didn’t match any of the 80 P1+P2 catalog signals included in the scan. Check the Pending extraction review queue for low-confidence proposals; if you believe the artifact should produce a signal that isn’t in the catalog, that becomes a Catalog Proposal for the next catalog release.
- Proposal stuck in Pending extraction review: either confidence is below 0.85, reasoning is empty, or evidence is empty. Open the proposal to see the AI’s reasoning and accept / reject inline. Accepted proposals materialize as KRIs on the next run.
- KRI didn’t appear in the Risk Register after accept: the connector hasn’t run since you accepted. Hit Run now on the Drop Zone integration card.
- Still stuck? Open a support ticket with the artifact ID (from the Drop Zone panel) and we’ll dig in.