Detection engineering · Process

Detection-as-code, without the cult

A measured take on the detection-as-code workflow: where it earns its hype, where it adds friction, and what to do about both.

Detection-as-code has become a topic that attracts strong opinions from people whose actual experience is six months deep. Strip away the conference talks and what you are left with is a Git-tracked workflow for detection content. That is genuinely useful. It is not, however, a religion.

The earned hype: review and rollback are real. When a detection generates a wave of false positives at 04:00, you want to know who changed what and when, and you want to back it out without asking anyone for permission. A Git-tracked rule with a documented review history is the cleanest way to get there. The peer-review loop also raises the floor of what gets shipped. We see fewer "let me just deploy this and see" detections in cohorts that adopt the workflow.
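The rollback story is simple enough to sketch. The names here (`RuleVersion`, `last_good`, the sample authors and commits) are illustrative only — in practice the history comes from `git log` on the rule file, and "noisy" is whatever your false-positive review flags after the fact:

```python
from dataclasses import dataclass

@dataclass
class RuleVersion:
    author: str   # who to ask at 04:00
    commit: str   # what to revert to
    body: str
    noisy: bool   # flagged after deployment, e.g. by a false-positive spike

# A toy history, oldest first — stand-in for `git log` on one rule file.
history = [
    RuleVersion("maria", "a1f9", "failed_logins > 50 in 10m", noisy=False),
    RuleVersion("devon", "b27c", "failed_logins > 5 in 10m", noisy=True),
]

def last_good(history):
    """Walk the history newest-first and return the most recent non-noisy version."""
    for version in reversed(history):
        if not version.noisy:
            return version
    raise LookupError("no known-good version to roll back to")

rollback = last_good(history)
print(rollback.author, rollback.commit)  # prints "maria a1f9"
```

The point is not the code — it is that a Git-tracked rule makes this walk trivial, where a shared console makes it archaeology.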

The friction nobody talks about: small teams pay a real overhead. If your detection content is two analysts and a shared Splunk login, the Git workflow can feel like ceremony. The honest answer is that the workflow scales — it is worth setting up before you grow, not after — but the first three months feel slower. We tell cohort participants this on day one because it is true and pretending otherwise erodes trust.

The discipline that matters more than the tooling: writing tuning notes you would understand a year later. The best detections we see in cohort review have a comment block above them explaining why they exist, what they are deliberately not catching, and what would justify retiring them. That comment block is worth more than any CI pipeline.
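That discipline can even be enforced cheaply. A minimal sketch of a pre-merge check, assuming rules are parsed into dicts before review; the three note fields (`why`, `not_catching`, `retire_when`) are an assumed convention for this illustration, not a standard schema:

```python
# Tuning-note fields every rule must carry before it ships (assumed convention).
REQUIRED_NOTES = ("why", "not_catching", "retire_when")

def missing_notes(rule: dict) -> list:
    """Return the tuning-note fields a rule is missing; an empty list means it passes."""
    notes = rule.get("notes", {})
    return [f for f in REQUIRED_NOTES if not str(notes.get(f, "")).strip()]

rule = {
    "name": "suspicious_service_install",
    "query": "service_installed AND path_in_temp_dir",
    "notes": {
        "why": "Service-based persistence seen in past incidents",
        "not_catching": "Services installed by sanctioned deployment tooling",
        "retire_when": "Endpoint coverage makes this telemetry redundant",
    },
}

print(missing_notes(rule))            # [] — the rule carries its reasoning
print(missing_notes({"name": "x"}))   # all three fields flagged
```

Wire that into the review pipeline and the comment block stops being optional — which is the one place the CI tooling genuinely serves the discipline rather than the other way around.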