⚙️ What Engineers Know That GRC Professionals Don't

3 simple systems-thinking heuristics that transform GRC Engineering from compliance theatre to threat reduction. Measure outcomes, eliminate noise, ship fast.

IN PARTNERSHIP WITH

Still looking for the perfect gift for your GRC friends and colleagues?

Look no further.

The Anecdotes GRC Holiday Store has a curated collection of punny and fun holiday items designed specifically for our community.

Speaking at RSAC San Francisco 2026 🎉 

Exciting news: I'm speaking at RSA Conference 2026 in March!

Byron Boots from my team and I will present "Build vs. Buy, We Did Both and This Is What We Learned (GRC edition)" - diving into our journey cycling through four GRC platforms in three years before building our own at GitLab.

We'll cover everything from database architecture to translating business requirements into features engineers can actually ship, plus practical tips for being a better buyer using builder frameworks.

Can't wait to showcase the incredible work Byron, James, and Donovan have built over the past year.

See you in San Francisco!

How to think like an engineer in GRC.

I learn more from software engineers than from compliance frameworks.

Not about writing code. About thinking in systems.

Engineers don't measure lines of code written. They measure uptime. Engineers don't alert on every log entry. They alert on anomalies. Engineers don't wait for perfect architecture. They ship, then iterate.

GRC professionals obsess over control counts, policy completion percentages, and perfect documentation.

Wrong metrics. Wrong focus. Wrong approach.

Here are three engineering heuristics that transformed how I work in GRC. Simple concepts. Systems-thinking based. Immediately applicable.

Why It Matters 🔍

The why: the core problem this solves and why you should care

Traditional GRC measures what's easy to count:

  • Number of controls implemented

  • Percentage of policies approved

  • Evidence collection completion rate

  • Training completion percentages

  • Framework coverage metrics

These are activity metrics. They tell you what happened. They don't tell you if you're safer.

Engineering-driven GRC measures what matters:

  • Threat exposure reduction (quantified)

  • Control effectiveness against active attacks

  • Time to detect and respond to incidents

  • Risk level changes over time

  • Actual security improvements

These are outcome metrics. They tell you if you're actually protected.

Activity metrics create compliance theatre. You optimise for auditor satisfaction, not threat reduction. You're rewarded for documentation, not security.

Outcome metrics drive security improvements. You optimise for attacker friction, not framework completeness.

Same effort. Different focus. Completely different results.

Here's how to make this shift practical.

Strategic Framework 🧩

The what: the conceptual approach broken down into 3 main principles

Three simple concepts borrowed from engineering. Each solves a fundamental problem keeping GRC programmes stuck in compliance theatre. Each backed by how top engineering teams actually work.

Heuristic 1: Measure Outcomes, Not Activity

Traditional GRC: "We have 125 controls, 95% implemented, 89% tested." Engineering GRC: "Credential attacks reduced 73%, detection time from 14 days to 4 hours."

One counts what you did. One measures if it worked.

Activity metrics are lagging indicators of compliance. They tell you what you completed.

Outcome metrics are leading indicators of security. They tell you if you're protected.

Activity asks: "Did we do the thing?"
Outcome asks: "Did the thing reduce our threat exposure?"

Engineering parallel: Site Reliability Engineers don't measure number of deployments. They measure uptime, latency, error rates. SLO-based monitoring, not activity-based.

What changes in GRC:

Stop: "We implemented MFA across the organisation."
Start: "MFA blocked 160 credential attacks this quarter, prevented estimated £2.3M in breach costs, reduced unauthorised access attempts by 89%."

Stop: "Access reviews 100% complete."
Start: "Access reviews detected 34 orphaned accounts, 12 privilege escalations, 5 terminated employees still with active access. All remediated within 48 hours."

This builds on the distinction between signal and noise we covered previously. Activity is noise. Outcomes are signal.

Heuristic 2: Simplicity - Maximise Work NOT Done

Traditional GRC: Every stakeholder in every meeting. Every control is "critical". Every policy covers every edge case.
Engineering GRC: Ruthlessly eliminate what doesn't change decisions. Less noise. More signal.

GRC drowns in information. Policies nobody reads. Metrics nobody uses. Meetings nobody needs.

The Agile Manifesto calls this "the art of maximizing the amount of work not done."

Signal: information that changes decisions. Noise: everything else.

Most of what GRC produces is noise.

Engineering parallel: Engineers alert on anomalies, not every log entry. They focus on what breaks the pattern, what requires action. Filter ruthlessly.
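Here's roughly what "alert on anomalies" looks like as code: a simple z-score filter over daily failed-login counts. A minimal sketch; the counts and the 3-sigma threshold are illustrative assumptions, not a recommended baseline:

```python
import statistics

# Daily failed-login counts over recent weeks (illustrative numbers).
daily_failures = [41, 38, 45, 39, 42, 44, 40, 43, 178]

def breaks_pattern(history, today, sigmas=3.0):
    """Signal: today deviates from the baseline. Noise: everything else."""
    mean, stdev = statistics.mean(history), statistics.stdev(history)
    return stdev > 0 and abs(today - mean) / stdev > sigmas

*history, today = daily_failures
if breaks_pattern(history, today):
    print(f"ALERT: {today} failed logins vs baseline ~{statistics.mean(history):.0f}/day")
```

Eight quiet days produce no alerts at all; only the day that breaks the pattern pages anyone.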

What changes in GRC:

Stop: 50-page access control policy covering 60 edge cases that happen once every 5 years.
Start: 5-page policy covering 95% of scenarios. Document exceptions separately.

Stop: Weekly status meetings with 15 stakeholders.
Start: Async updates. Meet only when decision required.

Stop: Dashboard with 73 metrics across 8 frameworks.
Start: Pick 3 metrics that actually drive action.

This reflects the signal vs noise framework: if it doesn't change decisions, it's noise. Cut it.

Heuristic 3: Small Batches, Frequent Delivery

Traditional GRC: Spend 6 months designing perfect control framework. Launch when "ready."
Engineering GRC: Ship "good enough" in 2 weeks. Measure effectiveness. Improve based on feedback. Repeat.

Perfect solutions take too long. By the time you launch, requirements changed, threats evolved, stakeholders moved on.

Good-enough solutions ship fast. You protect today, not someday.

Every month spent planning is a month you're unprotected.

Engineering parallel: Agile development uses 2-week sprints. The MVP (minimum viable product) approach means shipping a basic version, then iterating. Each improvement cycle takes weeks, not months.

What changes in GRC:

Stop: "We can't implement vulnerability management until we have perfect SLA definitions, comprehensive remediation workflows, and complete stakeholder alignment." Start: "Week 1: Define basic SLAs (Critical: 7 days, High: 30 days). Week 2: Launch automated scanner, Jira tickets, email alerts. Week 3: Measure. Week 4: Adjust based on data."

Stop: "Policy needs legal review, security review, leadership review, and final approval before we publish." Start: "Ship interim policy now. Mark as 'v1.0 - Under Review'. Schedule 90-day update based on actual usage."

This is treating GRC as a product, not a project. Products iterate. Projects aim for perfection before launch.
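To make the Week 1 SLAs concrete, here's a minimal sketch of the Week 2 check. The finding dicts are stand-ins; a real version would pull findings from your scanner's API and open a Jira ticket for each breach:

```python
from datetime import date, timedelta

# Week 1 baseline SLAs from the example above.
SLA_DAYS = {"critical": 7, "high": 30}

def overdue(findings, today):
    """Flag findings that have exceeded their remediation SLA."""
    return [
        f for f in findings
        if f["severity"] in SLA_DAYS
        and today - f["opened"] > timedelta(days=SLA_DAYS[f["severity"]])
    ]

# Hypothetical findings; IDs, severities, and dates are made up.
findings = [
    {"id": "VULN-101", "severity": "critical", "opened": date(2025, 11, 1)},
    {"id": "VULN-102", "severity": "high", "opened": date(2025, 11, 10)},
]
for f in overdue(findings, today=date(2025, 11, 15)):
    print(f"SLA breach: {f['id']} ({f['severity']})")
```

Crude? Yes. But it ships in week 2 and catches real breaches while the "comprehensive remediation workflow" is still a slide deck.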

| Traditional GRC | Engineering GRC | What This Enables |
| --- | --- | --- |
| Activity: "XXX controls implemented" | Outcome: "Credential attacks down 73%" | Board sees security value |
| Noise: 50-page policy, 15-person meetings | Signal: 5-page policy, async updates | Team focuses on what matters |
| Perfection: 6 months to launch | Iteration: 2 weeks to MVP | Protection today, not someday |
| Metric: Completion % | Metric: Risk reduction | Measure actual safety |

Execution Blueprint 🛠️

The how: 3 practical steps to put this strategy into action at your organisation

Three heuristics. Three immediate applications. Pick one. Measure impact.

That's the engineering approach. That's how GRC should work.

Application 1: Reframe Your Next Control Report

Your quarterly board report currently shows:

  • 147 controls total

  • 139 implemented (95% complete)

  • 124 tested (84% coverage)

  • 8 findings identified (3 high, 5 medium)

Engineering reframe for the same data:

  • Credential attacks: 73% reduction year-over-year (MFA implementation impact)

  • Malware incidents: 42% reduction (EDR coverage effectiveness)

  • Data exfiltration attempts: 89% blocked (DLP control performance)

  • Mean time to detection: Improved from 14 days to 4 hours

  • Estimated breach cost prevented: £2.3M

Same controls. Different framing. One shows you completed work. One shows you reduced risk.
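Mean time to detection is the easiest of these to compute yourself. A sketch, assuming incident records with `occurred` and `detected` timestamps pulled from your incident tracker (the field names and values are illustrative):

```python
from datetime import datetime

# Hypothetical incident records; field names are illustrative.
incidents = [
    {"occurred": datetime(2025, 9, 1, 9, 0), "detected": datetime(2025, 9, 1, 13, 0)},
    {"occurred": datetime(2025, 10, 5, 8, 0), "detected": datetime(2025, 10, 5, 11, 0)},
]

def mttd_hours(incidents):
    """Outcome metric: average gap between compromise and detection."""
    gaps = [(i["detected"] - i["occurred"]).total_seconds() for i in incidents]
    return sum(gaps) / len(gaps) / 3600

print(f"MTTD: {mttd_hours(incidents):.1f} hours")  # -> MTTD: 3.5 hours
```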

This week: Rewrite one section of your next board report using outcome metrics instead of activity metrics. If you need inspiration, check out how we framed threat-driven metrics in Are You Building for Auditors or Attackers.

Application 2: Cut Your Overhead in Half

Your GRC programme currently includes:

  • Dozens of different metrics across 8 dashboards

  • Weekly status meetings with 12+ stakeholders

  • 50-page policies for standard processes

  • Monthly reporting on every control

Apply the simplicity filter. For each element, ask:

"Does this change decisions?"

Metrics: Keep only what drives action (usually 3-5 core metrics)

Meetings: "Status update"? Cancel. Send async. "Approve decision"? Keep, time-box to 30 minutes.

Policies: Cover 95% of scenarios in 5 pages. Handle exceptions separately.

This week: Cancel 3 recurring meetings. Replace with async updates. Document the 3 metrics that actually drive your decisions. We covered this signal vs noise framework in depth here.

Application 3: Stop Gold-Plating, Ship Baseline

XYZ framework requires annual access reviews. You've spent 3 months designing quarterly automated reviews with risk scoring and custom dashboards.

Meanwhile: 12 terminated employees still have database access.

You're gold-plating whilst the baseline isn't even shipped.

Baseline (what audit actually requires, what protects you):

  • Manager reviews access quarterly

  • Removes inappropriate access

  • Documents review

  • Ships this week

Excellence (what you're designing):

  • Automated risk scoring

  • Custom dashboards

  • Integration with 5 systems

  • Ships over next 6 months

Baseline prevents breaches today. Excellence optimises processes tomorrow.
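The baseline really is this small. A minimal sketch, assuming you can export active accounts from the database and the current roster from HR (the names and field layout are made up for illustration):

```python
# Hypothetical exports: active accounts from the database, current
# roster from HR. Names and fields are made up for illustration.
active_accounts = [
    {"user": "asmith", "system": "prod-db"},
    {"user": "bjones", "system": "prod-db"},  # terminated last month
]
current_employees = {"asmith"}

def baseline_review(accounts, employees):
    """The whole baseline: list access that should not exist."""
    return [a for a in accounts if a["user"] not in employees]

for finding in baseline_review(active_accounts, current_employees):
    print(f"Revoke: {finding['user']} on {finding['system']}")
```

Fifteen lines, shippable by Friday. The risk scoring and dashboards can iterate on top of it next quarter.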

This week: Identify one control where you're gold-plating. Ship baseline by Friday. Schedule excellence for quarterly iteration. For practical implementation examples, see how we automated quarterly access reviews in practice.

Conclusion

Three engineering heuristics for GRC:

  1. Measure outcomes, not activity (threat reduction vs control counts)

  2. Maximise work NOT done (ruthlessly eliminate noise)

  3. Ship in small batches (baseline this week, excellence over quarters)

These aren't about writing code. They're about thinking in systems.

Traditional GRC optimises for auditor satisfaction. Engineering GRC optimises for threat reduction.

Same effort. Different focus. Better results.

That’s all for this week’s issue, folks!

See you next week!
