Draft

Evidence before elegance

Every section starts as a field inventory: what operators see in incidents, what vendors claim, what attackers can actually reproduce, and what breaks when the signal meets production traffic.

Review

Practitioners argue with the text

Working-group reviewers annotate drafts for missing threat models, untestable claims, weak lab assumptions, and language that drifts into product marketing.

Validate

Labs must fail usefully

A lab is not accepted until a candidate can run it from a clean environment, produce the expected artifact, and understand what a false positive or false negative would cost.
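The three bars above can be expressed as a mechanical gate. The sketch below is illustrative only, with all names hypothetical: it models one candidate run and returns the list of unmet acceptance bars (clean environment, expected artifact, and a stated cost for each error mode).

```python
# Illustrative sketch (all names hypothetical): a minimal acceptance
# gate mirroring the lab bars above.
from dataclasses import dataclass

@dataclass
class LabRun:
    clean_environment: bool   # run started from a fresh, unconfigured host
    artifact_produced: str    # what the candidate actually generated
    false_positive_cost: str  # candidate's answer: what a false positive costs
    false_negative_cost: str  # candidate's answer: what a false negative costs

def accept_lab(run: LabRun, expected_artifact: str) -> list[str]:
    """Return the unmet bars; an empty list means the lab is accepted."""
    failures = []
    if not run.clean_environment:
        failures.append("not reproduced from a clean environment")
    if run.artifact_produced != expected_artifact:
        failures.append(
            f"expected artifact {expected_artifact!r}, "
            f"got {run.artifact_produced!r}"
        )
    if not run.false_positive_cost.strip():
        failures.append("no stated cost for a false positive")
    if not run.false_negative_cost.strip():
        failures.append("no stated cost for a false negative")
    return failures
```

A run that meets every bar yields an empty failure list; anything else returns the specific bars missed, which is more useful to a candidate than a bare pass/fail.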

Govern

Exam changes move slowly

The curriculum can react to the field quickly, but the exam blueprint changes only through recorded review, so candidates are not surprised by unstable assessment targets.

Peer review

What reviewers are asked to prove

Reviewers do not merely approve copy; they try to break the implied operating model. If a section says a signal is useful, reviewers ask where it is noisy. If a lab teaches a mitigation, reviewers ask how it fails under real user pressure. If a term sounds familiar but means different things across vendors, it moves into the glossary.

The result is deliberately plain: fewer slogans, more artifacts, and a record of why the standard teaches one trade-off before another.

Validation checklist

Minimum bars before publication