Policy Best Practices

This guide covers patterns, anti-patterns, and strategies for building an effective policy system in Mayo ASPM.


Start simple, iterate

The golden rule

A simple policy that covers 80% of cases today is better than a perfect policy that takes weeks to write.

Begin with broad rules and refine over time:

  1. Week 1 — Auto-accept critical, auto-reject info
  2. Week 2 — Add test-file suppression
  3. Week 3 — Add scanner-specific noise rules based on triage queue patterns
  4. Month 2 — Add EPSS-based prioritization
  5. Month 3 — Add ownership policies for team-based routing
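The Week 1 starting point can be as small as two rules plus a default. A minimal sketch (the field names match the examples used throughout this guide):

```rego
package mayo.triage

import rego.v1

# Anything not matched below lands in the triage queue
default decision := "defer"

# Week 1: auto-accept critical, auto-reject info
decision := "accept" if input.finding.severity == "critical"
decision := "reject" if input.finding.severity == "info"
```

Week 2's test-file suppression is then one more rule on top, rather than a rewrite.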

Policy design patterns

Pattern: Severity ladder

Layer your triage decisions by severity:

package mayo.triage

import rego.v1

default decision := "defer"

decision := "accept" if input.finding.severity == "critical"
decision := "accept" if input.finding.severity == "high"
decision := "defer" if input.finding.severity == "medium"
decision := "reject" if input.finding.severity == "low"
decision := "reject" if input.finding.severity == "info"

Pattern: Allowlist / blocklist

Maintain lists of known-good and known-bad rules:

package mayo.triage

import rego.v1

# Rules that are always noise in our environment
blocklist := {
    "generic.secrets.gitleaks.generic-api-key",
    "python.lang.best-practice.open-never-closed"
}

# Rules that are always actionable
allowlist := {
    "java.lang.security.audit.sqli.tainted-sql-string",
    "javascript.express.security.audit.xss.mustache-escape"
}

# Keep the two sets disjoint: a rule ID present in both
# produces conflicting decisions and an evaluation error.

default decision := "defer"

decision := "reject" if {
    input.finding.rule_id in blocklist
}

decision := "accept" if {
    input.finding.rule_id in allowlist
}

Pattern: Context-aware scoring

Use multiple signals for priority:

package mayo.priority

import rego.v1

base := 80 if input.finding.severity == "critical"
base := 60 if input.finding.severity == "high"
base := 40 if input.finding.severity == "medium"
default base := 20

kev := 15 if input.finding.in_kev
default kev := 0

fix := 5 if input.finding.fixed_version != ""
default fix := 0

priority := min([base + kev + fix, 100])

Pattern: Graduated PR gates

Start permissive, tighten over time:

package mayo.pr_scan

import rego.v1

# Phase 1: Block only critical
result := "fail" if {
    input.scan_results.by_severity.critical > 0
}

default result := "pass"

Then evolve to:

# Phase 2: Block critical + high
result := "fail" if {
    input.scan_results.by_severity.critical + input.scan_results.by_severity.high > 0
}

Anti-patterns

Anti-pattern: Overly broad rejection

# BAD: Rejects everything that isn't critical
decision := "reject" if {
    input.finding.severity != "critical"
}

This hides real high-severity issues. Use defer instead of reject for findings you're unsure about.

Anti-pattern: Hardcoded IDs

# BAD: Hardcoded finding IDs are fragile
decision := "reject" if {
    input.finding.id == "f_abc123"
}

Use patterns (rule IDs, file paths, severities) instead of specific finding IDs.
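For example, the same suppression expressed against stable attributes rather than a finding ID (the rule-ID prefix and path fragment here are illustrative):

```rego
package mayo.triage

import rego.v1

default decision := "defer"

# GOOD: Match a whole family of rules by prefix
decision := "reject" if {
    startswith(input.finding.rule_id, "python.lang.best-practice.")
}

# GOOD: Match by location pattern
decision := "reject" if {
    contains(input.finding.file_path, "/test/")
}
```

These rules keep working as findings are re-detected across scans, whereas a hardcoded finding ID breaks as soon as the ID changes.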

Anti-pattern: No default

# BAD: No decision for findings that don't match any rule
decision := "accept" if {
    input.finding.severity == "critical"
}

Always provide a default so every finding gets a decision:

default decision := "defer"

Anti-pattern: Complex nested logic

# BAD: Hard to read and maintain
decision := "accept" if {
    input.finding.severity == "critical"
    input.finding.scanner == "grype"
    input.finding.cve_id != ""
    input.finding.epss_score > 0.5
    not contains(input.finding.file_path, "/test/")
    input.finding.age_days < 30
}

Break complex rules into smaller, named rules:

# GOOD: Composable, testable rules
is_critical_cve if {
    input.finding.severity == "critical"
    input.finding.cve_id != ""
}

is_actively_exploited if {
    input.finding.epss_score > 0.5
}

is_production_code if {
    not contains(input.finding.file_path, "/test/")
}

decision := "accept" if {
    is_critical_cve
    is_actively_exploited
    is_production_code
}

Maintenance strategy

| Cadence | Action |
|---|---|
| Weekly | Review the triage queue for patterns that could be automated |
| Monthly | Review policy effectiveness metrics (automation rate, false positive rate) |
| Quarterly | Audit all policies for relevance; archive unused policies |
| After incidents | Update policies to catch similar issues automatically |

Testing strategy

  1. Unit test with the playground — test each rule with known inputs.
  2. Batch test with saved cases — maintain a test suite alongside each policy.
  3. Shadow mode — activate a new policy in "log-only" mode to see what it would do before enforcing.
  4. Monitor after activation — watch the triage queue and finding metrics for unexpected changes.
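Outside the playground, Rego policies can also be unit-tested with opa test, which runs any rule whose name starts with test_. A minimal sketch against the severity-ladder policy above (the file layout and package names are illustrative):

```rego
package mayo.triage_test

import rego.v1

import data.mayo.triage

# Run with: opa test . -v

test_critical_is_accepted if {
    triage.decision == "accept" with input as {"finding": {"severity": "critical"}}
}

test_unmatched_severity_defers if {
    triage.decision == "defer" with input as {"finding": {"severity": "unknown"}}
}
```

Keeping a file like this next to each policy gives you a regression suite you can run before every policy change.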

Naming conventions

Use consistent policy names:

| Pattern | Example |
|---|---|
| {kind}-{scope}-{purpose} | triage-org-suppress-info |
| {kind}-{project}-{purpose} | triage-payments-accept-pci |
| {kind}-{purpose} | priority-epss-scoring |

Next steps