
Triage Workflows

Effective triage separates signal from noise. This guide covers strategies for building a triage workflow that scales with your organization.


The triage spectrum

Fully manual                                        Fully automated
──────────────────────────────────────────────────────────
  Review every     Write policies     Policy handles
  finding by hand  for common cases   90%+ of findings

Most organizations should aim for the right side. Start manual, observe patterns, then automate.


Phase 1 — Learn the noise

Before writing triage policies, spend 1-2 weeks reviewing findings manually:

  1. Run scans across your key repositories.
  2. Open the Triage Queue daily.
  3. For each finding, ask:
    • Is this a real issue or a false positive?
    • Does it require developer action?
    • Is this in production code or test/vendor code?
  4. Record patterns you see repeatedly.
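Tallying exported findings makes step 4 much faster than eyeballing the queue. A minimal sketch, assuming findings can be exported as a JSON list of objects with rule_id, file_path, and severity fields (the same shape the policy snippets below read from input.finding — the export format itself is an assumption):

```python
import json
from collections import Counter

def noise_report(findings, top=10):
    """Tally findings by rule and by top-level directory to surface noise patterns."""
    by_rule = Counter(f["rule_id"] for f in findings)
    by_dir = Counter(f["file_path"].lstrip("/").split("/")[0] for f in findings)
    return by_rule.most_common(top), by_dir.most_common(top)

findings = json.loads("""[
  {"rule_id": "generic-api-key", "file_path": "/test/fixtures/a.py", "severity": "info"},
  {"rule_id": "generic-api-key", "file_path": "/test/fixtures/b.py", "severity": "info"},
  {"rule_id": "sql-injection",   "file_path": "/src/db.py",          "severity": "high"}
]""")

rules, dirs = noise_report(findings)
print(rules)  # [('generic-api-key', 2), ('sql-injection', 1)]
```

Rules or directories that dominate the tally are your first candidates for suppression policies.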

Common noise patterns

  • Informational findings with no security impact
  • Findings in test files, fixtures, or vendor directories
  • Generic secret detection rules matching non-secret strings
  • Best-practice suggestions flagged as vulnerabilities
  • Low-severity CVEs in dev-only dependencies

Phase 2 — Automate the obvious

Write policies for the patterns you identified:

Suppress by severity

decision := "reject" if {
    input.finding.severity == "info"
}

Suppress by file path

decision := "reject" if {
    some pattern in ["/test/", "/spec/", "/__tests__/", "/vendor/", "/node_modules/", "/fixtures/"]
    contains(input.finding.file_path, pattern)
}

Auto-accept critical CVEs

decision := "accept" if {
    input.finding.severity == "critical"
    input.finding.cve_id != ""
}

Suppress noisy rules

# Track noisy rules and suppress them
noisy_rules := {
    "generic.secrets.gitleaks.generic-api-key",
    "python.lang.best-practice.open-never-closed"
}

decision := "reject" if {
    input.finding.rule_id in noisy_rules
}
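Before enabling these policies, it can help to dry-run the same logic against a batch of exported findings and eyeball the results. A rough Python re-implementation of the four rules above (not the policy engine itself; field names follow the input.finding shape used in the snippets, and the first-match ordering is an assumption — in Rego, two complete decision rules that matched with different values would raise a conflict, so keep rule bodies mutually exclusive):

```python
NOISY_RULES = {
    "generic.secrets.gitleaks.generic-api-key",
    "python.lang.best-practice.open-never-closed",
}
NOISY_PATHS = ["/test/", "/spec/", "/__tests__/", "/vendor/", "/node_modules/", "/fixtures/"]

def triage(finding):
    """Dry-run mirror of the Phase 2 rules; first match wins (an assumption)."""
    if finding["severity"] == "info":
        return "reject"
    if any(p in finding["file_path"] for p in NOISY_PATHS):
        return "reject"
    if finding["rule_id"] in NOISY_RULES:
        return "reject"
    if finding["severity"] == "critical" and finding.get("cve_id"):
        return "accept"
    return "defer"  # unmatched findings fall through to manual review

print(triage({"severity": "critical", "file_path": "/src/app.py",
              "rule_id": "cve-check", "cve_id": "CVE-2024-0001"}))  # accept
```

Running this over last month's findings and diffing against your manual decisions is a cheap sanity check before the policies go live.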

Phase 3 — Refine with data

After 2-4 weeks of policy-driven triage, review effectiveness:

Check automation rate

Navigate to the Triage Funnel Dashboard:

  • Auto-accepted: findings handled by policy (no human needed)
  • Auto-rejected: noise eliminated by policy
  • Deferred / Unmatched: findings still needing manual review

Target: 80%+ automation rate

If your automation rate is below 80%, review the deferred/unmatched queue for new patterns to automate.
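The automation rate is simply the share of findings decided by policy rather than by a human. A back-of-the-envelope check, assuming you can read the three counts off the dashboard (the example numbers are illustrative):

```python
def automation_rate(auto_accepted, auto_rejected, deferred):
    """Fraction of findings decided by policy rather than a human."""
    total = auto_accepted + auto_rejected + deferred
    return (auto_accepted + auto_rejected) / total if total else 0.0

rate = automation_rate(auto_accepted=150, auto_rejected=680, deferred=170)
print(f"{rate:.0%}")  # 83% -- above the 80% target
```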

Check false positive rate

Review a sample of rejected findings monthly:

  1. Filter findings by status: Suppressed.
  2. Sort by date (newest first).
  3. Spot-check 20-30 findings.
  4. If any were incorrectly suppressed, refine the policy.
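Randomizing the monthly spot-check keeps you from always reviewing the same rules. A sketch, assuming suppressed findings can be exported with status and date fields (field names are assumptions):

```python
import random

def spot_check_sample(findings, k=25, seed=None):
    """Pick up to k suppressed findings at random, returned newest first."""
    suppressed = [f for f in findings if f["status"] == "suppressed"]
    rng = random.Random(seed)  # seed only to make a review session reproducible
    sample = rng.sample(suppressed, min(k, len(suppressed)))
    return sorted(sample, key=lambda f: f["date"], reverse=True)

findings = [{"status": "suppressed", "date": f"2024-06-{d:02d}", "rule_id": "r"}
            for d in range(1, 31)]
for f in spot_check_sample(findings, k=5, seed=1):
    print(f["date"], f["rule_id"])
```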

Phase 4 — Advanced triage

EPSS-based triage

Use EPSS (Exploit Prediction Scoring System) scores to prioritize:

# Auto-accept findings with high exploit probability
decision := "accept" if {
    input.finding.epss_score > 0.5
}

# Auto-reject old, low-EPSS findings
decision := "reject" if {
    input.finding.severity == "low"
    input.finding.epss_score < 0.01
    input.finding.age_days > 180
}

KEV-based triage

Prioritize findings in CISA's Known Exploited Vulnerabilities catalog:

decision := "accept" if {
    input.finding.in_kev == true
}

Context-aware triage

Factor in the asset's criticality:

critical_assets := {"api-gateway", "auth-service", "payment-service"}

# Accept medium-and-above findings in critical assets
decision := "accept" if {
    input.finding.asset.name in critical_assets
    input.finding.severity in ["critical", "high", "medium"]
}

Triage queue workflow

For findings that policies can't handle automatically:

Daily triage (15 minutes)

  1. Open the Triage Queue.
  2. Sort by severity (critical first).
  3. For each finding:
    • Accept if actionable → it enters the ticket pipeline
    • Reject if noise → consider writing a policy for this pattern
    • Defer if you need more context → investigate later
  4. After triage, click Create policy on any pattern you saw repeatedly.

Weekly review (30 minutes)

  1. Review the deferred queue — make final decisions.
  2. Check the automation rate trend — is it improving?
  3. Update policies based on new patterns.
  4. Review any false positive reports from developers.

Anti-patterns

  Anti-pattern                           Problem                               Solution
  ─────────────────────────────────────  ────────────────────────────────────  ─────────────────────────────────────────────
  Auto-reject everything below critical  Misses real high-severity issues      Use defer for uncertain findings
  No triage at all                       Alert fatigue, developer burnout      Start with basic severity routing
  Too many policies                      Hard to debug, conflicting decisions  Consolidate related rules into fewer policies
  Ignoring deferred queue                Builds up indefinitely                Schedule weekly review

Next steps