# Splunk SIEM / Enterprise Security
Runs SPL queries against Splunk to derive SIEM KRIs for notable-event response dwell, log-source coverage, MTTD trend, correlation-rule suppression, Risk-Based Alerting entity spikes, ingestion-pipeline health, and MITRE ATT&CK coverage of your detection content.
## At a glance
| Vendor | Splunk Cloud and Splunk Enterprise. Most KRIs assume Enterprise Security (ES) is installed; see Licensing below. |
|---|---|
| Source type | siem |
| Vendor ID (slug) | splunk-es |
| Base URL | Per-tenant — Splunk Cloud: https://<your-instance>.splunkcloud.com:8089. On-prem: the management port (:8089, not :8000) of a search head or the search-head cluster load balancer. |
| Auth method | api-key — Splunk bearer token. Dispatcher emits Authorization: Bearer <token> which is exactly what Splunk expects. |
| Schedule default | daily. The MTTD KRI uses a 30-day lookback and the risk-spike KRI uses 24h, both self-contained — no need for hourly scheduling unless you want fresher notable-backlog signals. |
| Reachability | Splunk Cloud: works out of the box (public *.splunkcloud.com hostnames). On-prem Splunk: often on RFC1918 private IPs, which Draxis's SSRF guard blocks. Expose the management port publicly with TLS + ingress allowlist, or front it with a reverse proxy. See Quirks. |
| Licensing | Core KRIs that depend on the notable, risk, and correlation-search metadata require Splunk Enterprise Security (ES). Non-ES Splunk tenants can still use splunk_log_source_coverage_gaps and splunk_indexer_gaps — uncheck the ES-dependent KRIs in Draxis. |
| Availability | New in 2026.04. |
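If you want to sanity-check the Base URL shape before pasting it into Draxis, the rules in the table reduce to a few string checks. A minimal Python sketch, assuming a hypothetical `validate_splunk_base_url` helper (not part of Draxis or Splunk):

```python
from urllib.parse import urlparse

def validate_splunk_base_url(url: str) -> list[str]:
    """Return a list of problems with a candidate Splunk management URL.

    Mirrors the rules in the table above: https scheme, an explicit
    management port (8089 by convention, never the :8000 web UI), and
    no trailing path. Hypothetical helper for illustration only.
    """
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("scheme must be https")
    if parsed.port is None:
        problems.append("include the management port explicitly (usually :8089)")
    elif parsed.port == 8000:
        problems.append(":8000 is the Splunk Web UI, not the REST API; use :8089")
    if parsed.path not in ("", "/"):
        problems.append("omit trailing paths")
    return problems

print(validate_splunk_base_url("https://acme.splunkcloud.com:8089"))  # []
print(validate_splunk_base_url("http://10.0.0.5:8000/en-US"))         # three problems
```

A clean management URL returns an empty list; anything else names the rule it breaks.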
## Required scopes & roles
Create a dedicated Splunk service-account user with a custom role. The role needs exactly these capabilities:
- `search` — run SPL queries.
- `rest_properties_get` — read REST endpoints (`| rest /services/saved/searches`, `| rest /services/data/indexes`).
- `list_settings` — read `/services/server/info` for the Test probe.
The role also needs index access: add read access to `notable`, `risk`, and whatever indexes your business-critical log sources live in. Do not grant `admin_all_objects`, `edit_*`, or any write capability — this user never modifies Splunk state.
Suggested `authorize.conf` snippet (Splunk Enterprise on-prem). Note that capabilities are granted inside the role stanza with `<capability> = enabled`; `[capability::*]` stanzas only declare capabilities, they don't grant them:

```
[role_draxis_read]
importRoles = user
# adjust the index list to where your business-critical sources live
srchIndexesAllowed = notable;risk;main
srchIndexesDefault = notable
# grant exactly these capabilities, nothing else
search = enabled
rest_properties_get = enabled
list_settings = enabled
```
On Splunk Cloud, custom roles are configured via Settings → Access Controls → Roles in the web UI. Cloud doesn't expose authorize.conf directly; use the UI to mirror the capabilities above.
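To confirm the role landed correctly, you can query `/services/authentication/current-context` as the service user and diff its capability list against what the connector needs. A minimal sketch over a trimmed sample response (the exact payload shape may vary by Splunk version; verify against your tenant):

```python
import json

REQUIRED = {"search", "rest_properties_get", "list_settings"}

# Trimmed example of what /services/authentication/current-context?output_mode=json
# might return for the service user (shape is an assumption; check your tenant).
sample_response = json.loads("""
{"entry": [{"content": {
    "username": "draxis-connector",
    "roles": ["role_draxis_read"],
    "capabilities": ["search", "rest_properties_get"]
}}]}
""")

granted = set(sample_response["entry"][0]["content"]["capabilities"])
missing = REQUIRED - granted
print(sorted(missing))  # ['list_settings'] -> this user would 403 on the Test probe
```

An empty `missing` set means the Test probe should pass the capability check.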
## Setup steps

1. Identify your management endpoint. Splunk's REST API runs on port 8089, not the web UI port 8000. Splunk Cloud customers get this at `https://<instance>.splunkcloud.com:8089`; on-prem customers use their search-head (or SHC load-balancer) hostname on port 8089.
2. Create the service-account user and custom role (see Required scopes & roles above) and assign the role to the user. On Splunk Cloud, use Splunk Web → Settings → Access Controls; on-prem, edit `authorize.conf` plus `etc/users/` and restart or reload.
3. Generate a bearer token for the service user. In Splunk Web as the service user (or as an admin, specifying the username), go to Settings → Tokens → New Token. User: `draxis-connector`. Audience: `Draxis`. Expiration: align with your rotation policy (Splunk supports never-expiring tokens, but most orgs set 6-12 months).
4. Copy the token value. Splunk shows it once, at creation. Store it in your password manager.
5. (On-prem only) Expose port 8089 to the Draxis runner. Draxis reaches Splunk from public egress IPs, so RFC1918 / private-IP Splunk instances are blocked by Draxis's SSRF guard. Options:
   - Put a reverse proxy with public TLS in front of Splunk's :8089 and allowlist the Draxis egress range in its ingress rules.
   - Expose Splunk's management port directly via a firewall allowlist (with a hardened TLS config).
6. Verify connectivity from a trusted client before wiring Draxis:

   ```
   curl -sk -H 'Authorization: Bearer <token>' \
     'https://<your-splunk>:8089/services/server/info?output_mode=json' \
     | jq '.entry[0].content.version'
   ```

   You should see your Splunk version. A 401 means the token is wrong; a 403 means the role is missing `list_settings`.
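The reverse-proxy option above can be sketched in nginx. Everything here is a placeholder: the public hostname, certificate paths, upstream IP, and the allowlisted CIDR must come from your own environment and the Draxis egress range published for your tenant.

```nginx
server {
    listen 443 ssl;
    server_name splunk-mgmt.example.com;          # public DNS name (placeholder)

    ssl_certificate     /etc/ssl/splunk-mgmt/fullchain.pem;   # publicly-trusted cert
    ssl_certificate_key /etc/ssl/splunk-mgmt/privkey.pem;

    # ingress allowlist: Draxis egress range only (example CIDR)
    allow 203.0.113.0/24;
    deny  all;

    location / {
        # Splunk management port on the internal search head (placeholder IP)
        proxy_pass https://10.0.0.5:8089;
        proxy_ssl_verify off;   # internal hop; Splunk often has a self-signed cert here
    }
}
```

The proxy terminates public TLS with a trusted certificate, which also sidesteps the self-signed-certificate quirk described below.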
## Wire it into Draxis

1. Open Settings → Integrations in your tenant.
2. Click Add integration and pick SIEM / Log Management as the source type.
3. Pick Splunk SIEM / Enterprise Security from the vendor dropdown. Draxis pre-fills the auth method and daily schedule.
4. In API Base URL, paste your Splunk management URL (e.g. `https://acme.splunkcloud.com:8089`). Include the port; omit trailing paths.
5. In API Key / Token, paste the bearer token from setup step 3. Draxis encrypts it server-side with `encryption.key` before storage.
6. Click Test. Green means Draxis hit `/services/server/info` successfully — the message includes your Splunk version and server name.
7. Under KRIs to import, tick the KRIs you want Draxis to manage. All seven `splunk_*` KRIs are checked by default; uncheck the notable/RBA/MITRE KRIs if you don't have Enterprise Security (otherwise they report 0, which could be misread as clean). Selected rows are created on save; unchecking a previously-imported KRI deletes it on save.
8. Save. The connector runs `daily` by default; use Run now from the run history to trigger the first sync immediately.
## KRIs produced
| Slug | Meaning | Derivation (SPL) |
|---|---|---|
| `splunk_notable_unack_4h` | Open notable events older than 4h | `search index=notable NOT status="5" NOT status_label="Closed" NOT status_label="Resolved" \| where _time < relative_time(now(), "-4h") \| stats count` |
| `splunk_log_source_coverage_gaps` | Hosts silent in the last 24h | `\| metadata type=hosts index=* \| eval age_hours=(now()-lastTime)/3600 \| where age_hours > 24 \| stats count` |
| `splunk_mttd_hours` | Avg detect-lag (hours) across 30d of notables | `search index=notable earliest=-30d \| eval detect_lag_hours=(_time - coalesce(event_time, _time))/3600 \| where detect_lag_hours >= 0 \| stats avg(detect_lag_hours) as mttd \| eval mttd=round(mttd, 1)` |
| `splunk_rules_disabled` | ES correlation searches currently disabled | `\| rest /services/saved/searches \| where disabled=1 AND 'action.correlationsearch.enabled'=1 \| stats count` |
| `splunk_risk_entity_spikes_24h` | Risk objects with cumulative 24h risk score > 100 | `search index=risk earliest=-24h \| stats sum(risk_score) as total_risk by risk_object \| where total_risk > 100 \| stats count` |
| `splunk_indexer_gaps` | Enabled indexes with historical events but no data in 1h | `\| rest /services/data/indexes \| eval lag_hours=(now() - coalesce(maxTime, 0))/3600 \| where disabled=0 AND lag_hours > 1 AND totalEventCount > 0 \| stats count` |
| `splunk_mitre_coverage_pct` | % of active correlation searches with MITRE annotations | `\| rest /services/saved/searches \| where 'action.correlationsearch.enabled'=1 AND disabled=0 \| stats count(eval(like('action.correlationsearch.annotations', "%mitre_attack%"))) as annotated, count as total \| eval coverage_pct=if(total=0, 0, round(annotated*100/total, 1))` |
Each row is a slug the connector writes to. Draxis creates the matching kri rows automatically when you check them in the KRIs to import section of the integration form — no manual API call or seed script needed. Thresholds shown in the table are the seeded defaults; you can edit them freely in the KRIs tab afterwards.
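The derivations above are plain aggregations and easy to reason about offline. As a worked example, here is the MTTD derivation re-implemented in Python over hypothetical notable events, mirroring the `coalesce(event_time, _time)` fallback, the negative-lag filter, and the rounding:

```python
# Each notable: _time = when the notable was created (epoch seconds),
# event_time = the original event's timestamp (may be missing).
notables = [
    {"_time": 1_700_010_000, "event_time": 1_700_002_800},  # 2h detect lag
    {"_time": 1_700_020_000, "event_time": 1_700_016_400},  # 1h detect lag
    {"_time": 1_700_030_000},                               # no event_time -> lag 0
]

lags = []
for n in notables:
    # coalesce(event_time, _time): a missing event_time yields zero lag
    lag_hours = (n["_time"] - n.get("event_time", n["_time"])) / 3600
    if lag_hours >= 0:  # discard clock-skew negatives, as the SPL's where clause does
        lags.append(lag_hours)

mttd = round(sum(lags) / len(lags), 1) if lags else 0
print(mttd)  # 1.0 -> (2h + 1h + 0h) / 3
```

Notables without `event_time` drag the average toward zero, which is exactly the fallback behavior the MTTD quirk below warns about.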
## Vendor quirks
- Port 8089 vs 8000. Port 8000 is the Splunk Web UI; port 8089 is the REST API / management port. They are not interchangeable. Using :8000 returns HTML 404s or raw login pages that Draxis can't parse.
- On-prem Splunk on private IPs is blocked by Draxis's SSRF guard. The Draxis runner resolves URLs to public IPs only; RFC1918, loopback, and link-local addresses are rejected at save-time and fetch-time. Splunk Cloud on public `*.splunkcloud.com` works seamlessly; on-prem needs a public-TLS ingress path.
- Notable event status values vary across ES versions. The connector excludes common "closed" statuses (`status=5`, `status_label="Closed"`, `status_label="Resolved"`). Older ES versions used different status codes; if `splunk_notable_unack_4h` seems too high, check your `notable.conf` for custom status labels and adjust the SPL.
- MTTD uses the notable event's `event_time` field. ES populates `event_time` with the original event's timestamp; `_time` is when the notable was created. The lag between them is the detect latency. If your correlation searches don't preserve `event_time` (some custom searches don't), the KRI falls back to 0 — refine by adjusting the correlation-search SPL to set `event_time` explicitly.
- RBA assumes the stock `risk` index. If your org renamed or split the risk index (some large deployments do), the risk-spike KRI returns 0. Adjust the SPL or open a support ticket for an indirection knob.
- Risk threshold of 100 is a heuristic. 100 is a common "investigate today" threshold in stock ES, but risk scoring is notoriously per-org. Tune the KRI threshold in Draxis once you see your typical values, or adjust the `total_risk > 100` constant in the connector.
- MITRE coverage is "annotated rules / total rules", not "techniques covered / 200 techniques". The former is what Splunk's REST API readily exposes; the latter would need a full MITRE technique catalog cross-reference. The proxy is directionally correct: rising coverage means your detection program is mapping more of its content to ATT&CK.
- Self-signed certificates fail. Node's default TLS validation rejects self-signed or privately-CA certs on the Splunk management endpoint. Either install a publicly-trusted cert on port 8089, or front Splunk with a reverse proxy that does public TLS termination. Draxis deliberately doesn't offer a "skip TLS validation" escape hatch.
- Bearer tokens authenticate as the service user, so their effective capabilities track the user's role at request time. If you later add capabilities to the user, existing tokens gain them; if you remove capabilities, they lose them. Rotate the token whenever the service user's role changes to avoid stale-capability confusion.
- Token expiration is set at creation. Splunk supports never-expiring tokens, but common hygiene is 6-12 months. Schedule a calendar rotation; Splunk doesn't notify before expiry — the connector just starts returning 401s on the expiration day.
- Rate limits are generous. Splunk's search concurrency is the real limit; the connector runs 7 one-shot searches per run, well under any stock quota.
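The distinction drawn in the MITRE coverage quirk can be made concrete. A Python sketch contrasting the two metrics over a hypothetical rule set (rule names, technique IDs, and the 200-technique catalog size are illustrative only):

```python
# Hypothetical active correlation searches and their ATT&CK annotations.
rules = {
    "Brute Force Detected":  ["T1110"],
    "Encoded PowerShell":    ["T1059.001", "T1027"],
    "Suspicious Beaconing":  [],             # not annotated
    "Rare Process Ancestry": ["T1059.001"],  # duplicate technique
}

# The metric the connector reports: annotated rules / total rules.
annotated = sum(1 for techniques in rules.values() if techniques)
rule_coverage_pct = round(annotated * 100 / len(rules), 1)

# The stricter metric it does NOT report: distinct techniques / catalog size.
CATALOG_SIZE = 200  # illustrative; the real ATT&CK matrix changes per release
distinct = {t for techniques in rules.values() for t in techniques}
technique_coverage_pct = round(len(distinct) * 100 / CATALOG_SIZE, 1)

print(rule_coverage_pct)       # 75.0 -> 3 of 4 rules annotated
print(technique_coverage_pct)  # 1.5  -> 3 distinct techniques of 200
```

A program can score high on the first metric while covering very little of the matrix, which is why the doc calls the reported number a directional proxy.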
## Troubleshooting
- HTTP 401 on Test — the token is wrong or expired. Generate a new token as the service user.
- HTTP 403 on `/services/server/info` — the role is missing `list_settings`. Add it and retry.
- Save-time error "host resolves to a private address" — you're pointing at on-prem Splunk on a private IP. Front it with a public-TLS reverse proxy, or use Splunk Cloud.
- Probe succeeds but all ES-dependent KRIs are 0 — ES isn't installed, or the role lacks index access to `notable`/`risk`. Verify with `| rest /services/data/indexes | search title=notable` in the Splunk web UI as the service user.
- `splunk_mitre_coverage_pct` is 0 but you have MITRE-annotated correlations — the SPL matches on `annotations` containing the literal string `mitre_attack`. If your annotations use a different key (custom ES field names), adjust the `like(...)` pattern in the connector.
- `splunk_log_source_coverage_gaps` is huge — the `metadata` command sees every host Splunk has ever observed, including long-decommissioned ones. Tune by restricting the search to known production indexes (`| metadata type=hosts index=prod_*`) or by deleting stale hosts from Splunk's metadata.
- `rowsSkipped > 0` and `rowsWritten = 0` — your tenant hasn't imported any KRIs for this integration yet. Open the integration in Settings → Integrations, tick the KRIs under KRIs to import, and save.
- Still stuck? Open a support ticket with the run ID (from Run history) and we'll dig in.