⚙️ Your GRC Program Serves the Audit. The Best GRC Engineering Programs Don't.
How the discipline collapsed into evidence collection, what enterprise GRC teams I know actually focus on, and why the audit should be a translation layer, not the foundation your program is built on.
Every single day I get a version of the same question.
"What do you think about [the thing that happened]?" "Is GRC Engineering still the right approach?" "Should we slow down on automation?"
I'm not going to name any company or event. You already know what happened. What I want to talk about is what it revealed.
It revealed that most GRC programs are built around the audit.
Think about how the typical program operates. The audit is in six weeks, so evidence collection starts. The auditor prefers screenshots, so that's the format. The auditor scoped six projects, so those are the ones that get attention. The audit timeline drives the roadmap. The auditor's preferences shape the program.
The program exists to serve the audit. This is backwards.
I wrote about why the audit methodology itself doesn't hold up a few weeks ago. This piece isn't about fixing the audit. It's about building a program that doesn't need the audit to be fixed.


The Nesting Problem
The online GRC Engineering conversation has a zoom problem. Most of the noise focuses on one thing: automated evidence collection.
Here's what that looks like when you zoom out:
| Layer | What it is | % of online noise |
|---|---|---|
| Automated evidence collection scripts | Python + AWS | ~70% |
| Evidence collection (all methods) | Scripts, APIs, manual, platform-native | ↓ |
| Control testing | Testing whether controls actually work | ↓ |
| Compliance | Meeting regulatory and framework requirements | ↓ |
| GRC as a whole | Governance + Risk + Compliance | ↓ |
| Security | The actual discipline GRC serves | ~5% |
The internet is obsessing over the innermost layer and calling it the whole discipline.
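For concreteness, the innermost layer usually looks something like this: a short Python script that snapshots one AWS setting as evidence. This is a generic sketch, not anyone's production collector; the bucket name and control ID are placeholders.

```python
# Innermost layer: a point-in-time evidence pull from AWS.
# Sketch only - bucket name and control ID are hypothetical.
import json
from datetime import datetime, timezone

import boto3  # AWS SDK for Python


def collect_s3_public_access_evidence(bucket: str) -> dict:
    """Snapshot a bucket's public-access-block config as an evidence record."""
    s3 = boto3.client("s3")
    config = s3.get_public_access_block(Bucket=bucket)
    return {
        "control": "s3-block-public-access",  # hypothetical control ID
        "resource": bucket,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "raw": config["PublicAccessBlockConfiguration"],
    }


if __name__ == "__main__":
    evidence = collect_s3_public_access_evidence("example-audit-bucket")
    print(json.dumps(evidence, indent=2))
```

Useful, yes. But it's one script, for one control, on one cloud. Treating this as the whole discipline is the zoom problem.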
When a company cuts corners on automated evidence collection and things break, the conclusion becomes "automation is dangerous" or "we need quality over quantity." When someone vibe-codes a dashboard and calls themselves a GRC Engineer, the conclusion becomes "GRC Engineering is just coding."
Both conclusions come from staring at the innermost layer and mistaking it for the picture. I explored this exact pattern in Compliance-as-Cope, where I argued we followed the path of least resistance and risked making GRC Engineering shelfware.
The risk is real. But the answer isn't to retreat. It's to zoom out.
So what does GRC Engineering actually look like when you move beyond the innermost circle? I asked the people working in those outer layers.


[Image: POV: The GRC Engineering Starter Pack.]

What Enterprise GRC Teams Actually Think About
I talk to heads of GRC at Fortune 500 companies and the fastest-growing startups on the planet every week. I know their programs and challenges (fairly) well. Here's what none of them are worried about:
They are not worried about whether to use Python or Go for evidence collection scripts.
Here's what keeps them up at night:
Exception handling. Every program has hundreds of edge cases that don't fit neatly into any framework. A control that works for 90% of systems but breaks for the 10% running legacy infrastructure. An access review process that makes sense for employees but falls apart for contractors. How do you manage the exceptions without creating a parallel program to manage the exceptions?
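One pattern that avoids the parallel-program trap is treating exceptions as first-class records with owners, compensating controls, and expiry dates, so they live inside the program rather than beside it. A minimal sketch in Python; every field name here is an assumption, not a standard:

```python
# Sketch: exceptions as first-class, expiring records.
# Expired exceptions flow back into the normal review queue
# instead of accumulating in a side spreadsheet.
from dataclasses import dataclass
from datetime import date


@dataclass
class ControlException:
    control_id: str           # e.g. "access-review-quarterly"
    scope: str                # e.g. "contractors", "legacy-infra"
    reason: str
    compensating_control: str
    owner: str
    expires: date


def expired(exceptions: list[ControlException], today: date) -> list[ControlException]:
    """Return exceptions whose expiry has passed and need re-review."""
    return [e for e in exceptions if e.expires < today]
```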
Maintainability. The script that works today needs to be maintained tomorrow by someone who didn't write it. The integration that pulls from your SIEM needs to survive when you switch SIEMs next year. How do you build things that outlast the person who built them?
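One common answer is to make controls depend on a narrow interface rather than a specific vendor, so switching SIEMs means writing one new adapter, not rewriting every control that reads log data. A sketch, with hypothetical vendor classes and query syntax:

```python
# Sketch of the adapter pattern: controls depend on LogSource,
# never on a specific SIEM. Vendor classes are illustrative stubs.
from typing import Protocol


class LogSource(Protocol):
    def query(self, expression: str, days: int) -> list[dict]:
        """Return matching events for the last N days."""
        ...


class SplunkSource:
    def query(self, expression: str, days: int) -> list[dict]:
        ...  # real implementation would call the Splunk API


class ElasticSource:
    def query(self, expression: str, days: int) -> list[dict]:
        ...  # real implementation would call the Elasticsearch API


def failed_login_evidence(source: LogSource) -> list[dict]:
    # Swapping SIEMs next year changes the adapter, not this control.
    return source.query("event=login outcome=failure", days=90)
```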
Reskilling. Most enterprise GRC teams include people from Big Four audit backgrounds. Brilliant at methodology, risk assessment, and stakeholder management. Less experienced with data models, workflows, and technical architecture. How do you bring them along without making them feel obsolete? I covered the skill stack this requires in What GRC Engineering Is and What It Isn't.
Convincing leadership. Not "can I have budget for a tool" but "how do I demonstrate that the program's value extends beyond audit outcomes?" When your program is measured solely by whether the audit was clean, leadership sees GRC as a cost centre that produces a badge. Signal vs. Noise explored exactly this: shifting the conversation from compliance metrics to risk outcomes.
These are engineering problems. They have nothing to do with which scripting language you use.

The Shift: Audit-Driven vs. Program-Driven
The programs that work at scale share one thing: they were not built around the audit. They were built around the program's own data model, its own workflows, its own definitions of what good looks like.
Here's what the difference looks like in practice:
| | Audit-driven program | Program-driven audit |
|---|---|---|
| Roadmap set by | Audit timeline and auditor requests | Program objectives and risk priorities |
| Evidence format | Whatever the auditor prefers | Structured, queryable data serving multiple consumers |
| Data model owned by | Inherited from the framework | The team, based on actual operations |
| Control definitions | Framework language first | Actual risk first, translated to framework language for audit |
| When scope changes | Program reshapes itself | Translation layer updates, program stays stable |
| ROI of automation | Depends on whether auditor can consume output | Realised across risk, reporting, security, and audit |
Most teams sit on the left. Every program I admire sits on the right.
The difference isn't tooling. It's who the program was built for.
If you built it for the auditor, every automation investment is a gamble. You produce structured data, but the auditor can't consume it. So you translate it back into screenshots. Then you explain why the JSON is trustworthy.
That's net negative. More work for the same outcome.
And our State of GRC 2026 survey found that auditors are the least technical GRC persona, so this gap isn't closing on its own.
If you built it for the program, the data serves risk decisions, security partnerships, board reporting, and the audit. The audit is one consumer. The translation is a thin display layer on top of a solid foundation.

How to Make the Shift
→ Own your data model. It exists because your program needs it, not because an auditor asked for it. I walked through this exercise in Build vs. Buy: map every data object, every field, every relationship before touching a tool.
→ Design workflows for your team. Not for audit walkthrough convenience. The Control Orchestration piece laid out the TEVEE framework (Trigger, Execution, Validation, Evidence, Escalation) for exactly this purpose; a minimal sketch follows this list.
→ Make evidence queryable because it's useful, not because it satisfies a control test. If your structured data only gets used during audit season, you haven't built infrastructure. You've built a seasonal decoration.
→ When the audit comes, translate. Take your program's output and present it in the format the auditor can consume. That's work about work, but it's a thin layer on top of a solid foundation. Not the foundation itself.
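To make the workflow point concrete, here's one way the TEVEE stages could look in code. The five stage names come from the Control Orchestration piece; everything else (the record shape, the check callable, the field names) is my assumption, not the canonical implementation:

```python
# Sketch of a TEVEE-shaped control run. Stage names are from the
# Control Orchestration piece; all other names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ControlRun:
    control_id: str
    trigger: str                      # Trigger: schedule, event, or change
    passed: bool = False
    evidence: dict = field(default_factory=dict)
    escalated_to: str | None = None


def run_control(control_id: str, trigger: str, check, escalation_owner: str) -> ControlRun:
    run = ControlRun(control_id=control_id, trigger=trigger)
    result = check()                         # Execution: do the actual work
    run.passed = bool(result.get("ok"))      # Validation: did it meet the bar?
    run.evidence = {                         # Evidence: structured and queryable
        "result": result,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    if not run.passed:                       # Escalation: route the failure
        run.escalated_to = escalation_owner
    return run


if __name__ == "__main__":
    run = run_control(
        control_id="s3-block-public-access",
        trigger="nightly-schedule",
        check=lambda: {"ok": True, "buckets_checked": 42},
        escalation_owner="grc-oncall",
    )
    print(run.passed, run.escalated_to)
```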
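And the translation step itself stays thin, as promised: one rendering function over program-owned records. The record shape below is hypothetical; the point is that the audit view is a consumer of structured evidence, not its source of truth.

```python
# Sketch: the audit translation layer as a thin rendering step.
# Record fields are illustrative.
def to_audit_narrative(record: dict) -> str:
    """Render a structured evidence record in auditor-friendly prose."""
    status = "operating effectively" if record["passed"] else "exception noted"
    return (
        f"Control {record['control_id']} ran on trigger '{record['trigger']}' "
        f"and was {status}; evidence captured at {record['collected_at']}."
    )


print(to_audit_narrative({
    "control_id": "access-review-quarterly",
    "trigger": "schedule",
    "passed": True,
    "collected_at": "2026-01-15T00:00:00Z",
}))
```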
If your program only makes sense through the lens of an auditor's checklist, you don't have a program. You have an audit response plan.

What Hasn't Changed
People keep asking if recent events change my position. They don't.
GRC Engineering is about more speed AND more assurance. Never one at the expense of the other. If your automation reduces quality, you didn't engineer the process. You skipped the engineering and went straight to the automation. That's Mount Stupid, not GRC Engineering.
The programs I work with aren't worried about the current noise. They never built around the audit. They never reduced the discipline to evidence collection scripts. They built programs that serve governance, risk, and compliance outcomes.
We wrote the manifesto to define this discipline. Not as automation. Not as coding. As engineering thinking applied to GRC problems, whether through spreadsheets or code, in cloud or on-premise, at a startup or in the public sector.
The discipline is bigger than the innermost circle. It always was.

Did you enjoy this week's entry?

That’s all for this week’s issue, folks!
If you enjoyed it, you might also enjoy:
My spicier takes on LinkedIn [/in/ayoubfandi]
Listening to the GRC Engineer Podcast
See you next week!