At a glance

VendorProtect AI — Platform suite (Guardian for runtime protection, ModelScan for pre-deployment scanning, Recon for AI red-teaming, NB Defense for notebook security, Layer for MLOps observability).
Source type: ai_security
Vendor ID (slug): protect-ai
Base URL: Per-tenant — https://<your-tenant>.protectai.com. Self-hosted deployments use a customer-specified hostname.
Auth method: api-key — Protect AI API token via Authorization: Bearer <token>. The dispatcher's default for api-key already does this; no override needed.
Schedule default: daily — ModelScan and supply-chain analysis run overnight, and Guardian runtime events are naturally 24h-windowed. Hourly is fine if you want tighter response on the poisoning-indicators KRI.
Licensing: Protect AI's products are separately licensed. The core KRIs need ModelScan; the Guardian-specific KRIs (access violations, exposed APIs) need Guardian; the notebook KRI needs NB Defense; the poisoning KRI needs Layer or Protect AI's training-pipeline integration. Unlicensed modules return 403/404 — the connector tolerates those and records 0 + warn.
Availability: New in 2026.04.

Required scopes & roles

Create a dedicated service-account API token in the Protect AI admin console. Assign read-only scopes across the modules you license:

  • ModelScan: Findings read, Models read (backs KRIs 1, 2, 10).
  • Model Registry: Read (backs KRIs 3, 10).
  • Supply Chain: Read (backs KRI 4).
  • Guardian: Events read, Endpoints read (backs KRIs 5, 9).
  • IAM / Service Accounts: Read (backs KRI 6).
  • Training Pipeline / Layer: Poisoning indicators read (backs KRI 7).
  • NB Defense: Findings read (backs KRI 8).

No write, manage, or remediate scopes. The connector never mutates platform state.
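The scope-to-KRI mapping above can be expressed as a small lookup, useful for predicting which KRIs a partially licensed tenant can actually back. This is a sketch using the KRI numbers from this page; the scope keys are descriptive labels, not Protect AI API identifiers.

```python
# Map each read-only scope to the KRI numbers it backs (per the list above).
# Keys are descriptive labels for this sketch, not Protect AI API identifiers.
SCOPE_KRIS = {
    "modelscan": [1, 2, 10],
    "model_registry": [3, 10],
    "supply_chain": [4],
    "guardian": [5, 9],
    "iam_service_accounts": [6],
    "training_layer": [7],
    "nb_defense": [8],
}

def viable_kris(licensed_scopes):
    """Return the sorted KRI numbers your licensed scopes can back."""
    kris = set()
    for scope in licensed_scopes:
        kris.update(SCOPE_KRIS.get(scope, []))
    return sorted(kris)
```

A tenant with only ModelScan and Model Registry scopes, for example, backs KRIs 1, 2, 3, and 10; everything else would read 0 + warn until the missing modules are licensed.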

Setup steps

  1. Identify your Protect AI Platform URL. SaaS customers get assigned a tenant URL at onboarding — check the admin console URL (it'll be https://<tenant>.protectai.com or similar). Self-hosted customers use the hostname configured during deployment.
  2. Create a service-account user in the Protect AI admin console. Name: draxis-connector. Role: a read-only role with the scopes above (or use the built-in "Read-Only" or "Auditor" role if available).
  3. Generate an API token as the service account. Admin Console → API Keys → Create Key. Protect AI shows the token once, at creation — copy it into your password manager.
  4. Verify connectivity with a quick curl before wiring Draxis:
    curl -sk -H 'Authorization: Bearer <token>' \
      'https://<your-tenant>.protectai.com/api/v1/platform/info' \
      | jq '.tenant, .plan, .modules'
    You should see your tenant slug, plan tier, and the list of licensed modules. 401 = wrong token; 404 = wrong base URL or older Platform version (path may differ).
  5. (Self-hosted only) Verify public-TLS reachability. Draxis runs from public egress IPs — self-hosted Protect AI on a private IP is blocked by the SSRF guard. Front with a public-TLS reverse proxy or use the SaaS platform.
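The curl probe in step 4 can also be scripted. Here's a hedged Python equivalent that maps the status codes described above to a diagnosis; the endpoint path is the one documented here, and stdlib urllib keeps it dependency-free:

```python
import json
import urllib.error
import urllib.request

def diagnose(status: int) -> str:
    """Translate a probe failure per the notes in step 4."""
    if status == 401:
        return "401: wrong or revoked token - regenerate in the admin console"
    if status == 404:
        return "404: wrong base URL, or an older Platform version exposes a different path"
    return f"unexpected HTTP {status}: check the Platform API Guide"

def probe_platform(base_url: str, token: str) -> str:
    """Hit /api/v1/platform/info and report tenant/modules or a diagnosis."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/platform/info",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            info = json.load(resp)
            return f"ok: tenant={info.get('tenant')} modules={info.get('modules')}"
    except urllib.error.HTTPError as e:
        return diagnose(e.code)
```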

Wire it into Draxis

  1. Open Settings → Integrations in your tenant.
  2. Click Add integration and pick AI Security as the source type.
  3. Pick Protect AI Platform (Guardian + ModelScan + Recon) from the vendor dropdown.
  4. In API Base URL, paste your Protect AI tenant URL.
  5. In API Key / Token, paste the API token from step 3. Draxis encrypts it server-side with encryption.key.
  6. Click Test. Green means Draxis hit /api/v1/platform/info successfully — the message includes your tenant slug, plan tier, and licensed modules.
  7. Under KRIs to import, tick the KRIs you want. All ten protectai_* are checked by default; uncheck the KRIs for modules you don't license (Guardian KRIs need Guardian, notebook KRI needs NB Defense, poisoning KRI needs Layer / training-pipeline integration). Selected rows are created on save.
  8. Save. The connector runs daily by default.

KRIs produced

Each KRI below lists slug — meaning — derivation endpoint:

  • protectai_modelscan_critical_high — ModelScan critical/high findings unresolved. GET /api/v1/modelscan/findings/count?severity=critical,high&status=open
  • protectai_unsafe_serialization — Pickle/joblib/dill artifacts without ModelScan validation. GET /api/v1/models/count?format=pickle,joblib,dill&validated=false
  • protectai_shadow_models — Models detected in use outside the governed registry. GET /api/v1/models/shadow/count
  • protectai_ai_supply_chain_vulns — Open high/critical vulnerabilities in OSS model dependencies. GET /api/v1/supply-chain/vulnerabilities/count?severity=critical,high&status=open
  • protectai_access_violations_24h — Unauthorized model-endpoint access events (24h). GET /api/v1/guardian/access-events/count?unauthorized=true&since=<now-24h>. Requires Guardian.
  • protectai_privileged_ai_excessive — Service accounts with out-of-role permissions. GET /api/v1/service-accounts/count?privileged=true&permission_anomaly=true
  • protectai_data_poisoning_indicators_24h — Training-pipeline poisoning indicators (24h). GET /api/v1/training/poisoning-indicators/count?since=<now-24h>. Requires Layer or training-pipeline integration.
  • protectai_notebook_security_findings — NB Defense findings: secrets and unsafe code in Jupyter notebooks. GET /api/v1/nbdefense/findings/count?status=open&severity=critical,high,medium. Requires NB Defense.
  • protectai_exposed_model_apis — Model inference endpoints publicly reachable without auth. GET /api/v1/endpoints/count?publicly_reachable=true&auth_required=false. Requires Guardian.
  • protectai_unscanned_prod_models — Production model artifacts without a ModelScan pass. GET /api/v1/models/count?environment=production&scanned=false

Each entry is a slug the connector writes to. Draxis creates the matching kri rows automatically when you check them in the KRIs to import section of the integration form — no manual API call or seed script needed. Each KRI ships with a seeded default threshold; you can edit thresholds freely in the KRIs tab afterwards.
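The licensing behavior noted in At a glance (unlicensed modules return 403/404, recorded as 0 + warn) looks roughly like this. A sketch with a pluggable fetch callable, since the real connector's HTTP client is an implementation detail not shown in these docs:

```python
def collect_kri(fetch, path: str):
    """Fetch one count endpoint, tolerating unlicensed modules.

    `fetch` is any callable returning (status_code, json_body) — a stand-in
    for the connector's HTTP client. Returns (value, warning_or_None).
    """
    status, body = fetch(path)
    if status in (403, 404):
        # Module not licensed (or the path moved in this Platform version):
        # record 0 and surface a warning so the zero is never silently
        # mistaken for a clean posture.
        return 0, f"{path}: HTTP {status} - module unlicensed or path changed"
    if status != 200:
        raise RuntimeError(f"{path}: unexpected HTTP {status}")
    return body["count"], None
```

The warning half of the tuple is what keeps unlicensed-module zeros distinguishable from genuine zero counts, which is why the quirks section recommends unchecking KRIs for modules you don't license.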

Vendor quirks

  • Protect AI Platform is evolving rapidly. The AI security space is new and API surfaces move between releases. The connector uses conventional paths from the Platform API Guide at authoring time; older or customized deployments may need one-line path edits in the connector file. Per-KRI 404 tolerance keeps unaffected KRIs working.
  • Module licensing is independent. Guardian, ModelScan, Recon, NB Defense, and Layer are separately licensed. A tenant with only ModelScan will see 0 + warn on KRIs 5, 7, 8, 9. Uncheck those KRIs to prevent 0 from reading as a clean posture.
  • "Shadow model" detection depends on MLOps telemetry reach. Shadow-model discovery works when Guardian sees inference traffic or Layer instruments the MLOps pipelines. Tenants without runtime observability (just static registry scanning) will see 0 regardless of whether shadow models exist. Deploy Guardian / Layer if you want this KRI to populate.
  • Pickle ≠ always-unsafe. The unsafe-serialization KRI counts pickle/joblib/dill files that haven't passed ModelScan validation. Pickle files that have been scanned and cleared are fine; files that bypass the scan pipeline are the signal.
  • Poisoning indicators are high-noise on new integrations. Protect AI's poisoning detection learns your training-data baselines over weeks. In the first 30 days of integration, false positives can be high — trend the KRI over time rather than alerting on single-day spikes.
  • Self-hosted on private IPs is blocked. Same pattern as Splunk / GHES / on-prem Palo Alto / Varonis on-prem. Front with public-TLS ingress or use the SaaS platform.
  • Service-account permission-anomaly is baseline-relative. The privileged-excessive KRI relies on Protect AI's automatic role analysis, which needs 30+ days of activity to establish what "normal" looks like. First-month readings are noisy.
  • Notebook scan coverage depends on connector deployment. NB Defense can run in CI, as a pre-commit hook, or as a scheduled scan of notebook storage. Low notebook-findings counts may reflect narrow scan coverage rather than clean notebooks — verify what NB Defense is actually scanning in your tenant.
  • Exposed-APIs KRI is critical if >0. An inference endpoint publicly reachable without auth is free compute for attackers and a prompt-leakage vector. Even if only 1 is flagged, treat it as a page-the-oncall event.
  • Ten KRIs is a broad, user-requested set. AI security is a layered problem, so the full set reflects reality, but it spans multiple product modules — curate to the subset matching the modules you license.
  • Rate limits are comfortable. Protect AI's API typically allows 60 req/min per token; the connector makes ~11 calls per run, well under that limit.
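For the two 24h-windowed KRIs (access violations, poisoning indicators), the since=<now-24h> parameter needs a concrete timestamp. A sketch assuming the API accepts UTC ISO 8601 — verify the exact format against the Platform API Guide for your version:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

def since_param(hours: int = 24) -> str:
    """UTC ISO-8601 timestamp for the trailing window (format assumed)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

def access_events_query(hours: int = 24) -> str:
    # Path and parameters as listed in the KRI table; timestamp format assumed.
    qs = urlencode({"unauthorized": "true", "since": since_param(hours)})
    return f"/api/v1/guardian/access-events/count?{qs}"
```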

Troubleshooting

  • Test returns 401 — token is wrong or the service account was disabled. Regenerate in the admin console.
  • Test returns 404 on /api/v1/platform/info — your Platform version exposes this under a different path. Check the Protect AI Platform API Guide for the canonical probe endpoint and adapt.
  • All module-specific KRIs return 0 with 404 warns — those modules aren't licensed for your tenant. Uncheck the KRIs or add the modules.
  • protectai_modelscan_critical_high = 0 on a tenant with scanned models — possibly all findings are already resolved (good!), or the status enum differs in your tenant. Try the query manually in the admin console to verify the status=open filter matches your data.
  • Save-time error "host resolves to a private address" — self-hosted Protect AI on a private IP. Use public-TLS ingress or switch to SaaS.
  • protectai_shadow_models = 0 but you suspect shadow models exist — Guardian / Layer isn't instrumenting the environment where shadow models run. Extend MLOps observability coverage before interpreting this KRI as clean.
  • rowsSkipped > 0 and rowsWritten = 0 — your tenant hasn't imported any KRIs for this integration yet. Open the integration in Settings → Integrations, tick the KRIs under KRIs to import, and save.
  • Still stuck? Open a support ticket with the run ID (from Run history) and we'll dig in.
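For the status=open mismatch case above, it can help to sweep plausible status values and compare counts in the admin console. A hypothetical helper — the candidate enum values are guesses to try, not documented Protect AI constants:

```python
from urllib.parse import urlencode

# Candidate status values to try manually; guesses, not documented constants.
CANDIDATE_STATUSES = ["open", "unresolved", "active", "new"]

def findings_count_urls(base_url: str):
    """Yield one findings-count URL per candidate status filter.

    Note: urlencode percent-encodes the comma in "critical,high"; servers
    decode it back, so the query is equivalent to the raw form in the docs.
    """
    for status in CANDIDATE_STATUSES:
        qs = urlencode({"severity": "critical,high", "status": status})
        yield f"{base_url.rstrip('/')}/api/v1/modelscan/findings/count?{qs}"
```

Whichever status value returns a non-zero count matching what you see in the console is the enum your tenant uses; adjust the connector's query accordingly.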