⚙️ Your First GRC Lead Left. Their Instincts Are Still Running Your Program.
Four layers of inheritance are running your GRC programme. Here is an audit framework to find out what was intentionally designed, what is simply left over, and what you now have to engineer around.
"No platform fits our programme. We are too mature for them."
Maybe. Or maybe your programme is not mature. It is specific. And that specificity is legacy you inherited from four different sources, none of which you chose.
Every GRC programme carries inherited primitives. How you collect evidence. How you score risk. How you talk to auditors. How you report to the board. Most of these were not designed. They were deposited by whoever shaped the programme before you arrived.
This newsletter gives you a framework to audit your own inheritance. Four layers, from the most personal to the most structural. By the end, you will know what is intentional and what is just left over.

IN PARTNERSHIP WITH

Five GRC experts in NYC: May 13
If you're in New York, clear your schedule: Jasmine Kaur, Maril Vernon, Ayoub Fandi, Emre Ugurlu, and Jake Bernardes are all in the same room, discussing the future of GRC.
A panel on the current state of GRC, followed by a hands-on workshop on next steps. The workshop format is unlike anything else on the GRC event circuit: interactive, built around real problems, and run by the people actively working on them.

The Inheritance Stack
Your GRC programme is running on four layers of inherited decisions. Each one narrows what you can do, how you think, and what you build next.
┌─────────────────────────────────────────────┐
│ LAYER 4: Industry Legacy │
│ "This is how GRC has always worked" │
├─────────────────────────────────────────────┤
│ LAYER 3: Company Context │
│ "This is how we do things here" │
├─────────────────────────────────────────────┤
│ LAYER 2: CISO Priorities │
│ "This is what leadership cares about" │
├─────────────────────────────────────────────┤
│ LAYER 1: Previous GRC Lead │
│ "This is how the person before you did it" │
└─────────────────────────────────────────────┘
↑ You are building on top of all of this.



Layer 1: The Previous Head of GRC
Your first GRC lead came from somewhere. That somewhere shaped everything.
If they came from third-party risk, your programme speaks questionnaire. Risk assessments are questionnaires. Control attestations are questionnaires. Vendor reviews are questionnaires. The entire operating model is: send a form, receive a form, store the form, repeat.
If they came from a Big 4 audit firm, your control descriptions read like workpapers. Evidence requirements mirror a template from 2019. The programme is optimised for auditor comfort, not business value. You are building for auditors, not attackers.
If they came from consulting, your risk framework is qualitative. Heat maps. Five-by-five matrices. The primitives do not support numbers. They support colourful slide decks. Adding quantification means ripping the entire thing out, which is why cyber risk quantification feels like a revolution instead of an upgrade.
Audit this layer:
Question | If yes, it is inherited |
|---|---|
Can you name the person who built the programme? | Their background shaped your primitives |
Do your risk assessments use the same format as your vendor reviews? | One methodology was copy-pasted everywhere |
Are your control descriptions written in audit language, not business language? | An auditor designed them |
Is your risk scoring qualitative with no path to quantitative? | A consultant chose that, not you |
Would the person who built it recognise the programme today? | It has not evolved since they left |
If your risk register looks like a compliance checkbox, you might want to read how to stop making risk management a compliance control. If it is entirely static, risk registers reimagined covers what action-driven looks like.

Layer 2: The CISO's Priorities
Your CISO did not build the programme. But they shaped what it optimises for.
A CISO who came from incident response wants speed. Your programme prioritises detection and escalation. Risk reporting is framed around breach scenarios. Controls that do not map to incident timelines get deprioritised.
A CISO who came from enterprise sales enablement wants certifications. Your programme prioritises audit readiness. SOC 2 is the programme. Everything else is secondary. The GRC graduation from compliance theatre never happened because the CISO never needed it to.
A CISO who came from engineering wants data. Your programme prioritises technical depth. But it might over-index on tooling and under-index on stakeholder management, which is half the job.
The State of GRC 2026 showed that 73.6% of CISOs use no commercial GRC tool. That is not a technology problem. It is a priority signal. If your CISO does not believe in the category, your programme is shaped by that disbelief whether you see it or not.
Audit this layer:

| Question | If yes, it is inherited |
|---|---|
| Does your programme's top priority match your CISO's background? | Their instincts became your strategy |
| Is your board reporting format something the CISO designed or approved? | The report shape constrains what you can say |
| Do you lack budget for things the CISO does not personally value? | Their blind spots became your gaps |
| Has your CISO ever used a commercial GRC tool themselves? | |

Layer 3: Company Context
Every company has constraints that shape the programme in ways nobody documents.
Deployment model. If your company runs on Kubernetes, your evidence looks different from a company running on bare metal. If your platform team builds opinionated Terraform modules with encryption and logging baked in, you already have compliance by construction. You might not know it.
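If you want to verify that compliance-by-construction claim rather than take it on faith, you can query the plan itself. Here is a minimal sketch in Python that reads the JSON emitted by `terraform show -json` and flags resources planned without encryption. The resource type and attribute name below are illustrative assumptions, not a definitive mapping; a real check would target the schemas your own platform modules expose.

```python
import json


def unencrypted_resources(plan_json: dict) -> list[str]:
    """Return addresses of planned resources with encryption explicitly disabled.

    Assumes the structure emitted by `terraform show -json`: a top-level
    "resource_changes" list whose planned attribute values sit under
    change.after. The "encrypted" attribute checked here is illustrative.
    """
    failing = []
    for rc in plan_json.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        if after.get("encrypted") is False:
            failing.append(rc["address"])
    return failing


# Hypothetical plan fragment standing in for real `terraform show -json` output
plan = {
    "resource_changes": [
        {"address": "aws_ebs_volume.data", "change": {"after": {"encrypted": True}}},
        {"address": "aws_ebs_volume.scratch", "change": {"after": {"encrypted": False}}},
    ]
}
print(unencrypted_resources(plan))  # -> ['aws_ebs_volume.scratch']
```

Run in CI, a check like this turns "our modules bake encryption in" from a belief into evidence you can point at.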
Regulatory pressure. A fintech under SOX and PCI builds a different programme from a SaaS startup doing SOC 2 for the first time. The framework mapping trap hits harder when you have 11 frameworks to satisfy. You stop engineering and start mapping.
Company culture. Engineering-led companies let GRC teams impact code. Sales-led companies treat GRC as a cost centre. The team topology that works depends on where GRC sits in the org chart and who listens.
Growth stage. Docker rebuilt their GRC programme from scratch in 6 months with 5 people. That is possible at a certain stage. At enterprise scale, the implementation challenges are different.
Audit this layer:

| Question | If yes, it is inherited |
|---|---|
| Was your programme designed for a different company stage? | Startups outgrow startup programmes |
| Does your evidence collection ignore data your engineering team already produces? | You built a parallel system. Read why that is backwards |
| Is your team topology based on who was available, not what the programme needs? | Org design by accident |
| Do you have controls that exist only because a specific customer asked for them? | Customer-driven scope creep |

Layer 4: Industry Legacy
This is the deepest layer. It is the one you did not choose, your predecessor did not choose, and your CISO did not choose. The entire industry deposited it.
Annual audit cadence. SOC 2 observation periods, ISO surveillance cycles, annual penetration tests. The rhythm of your programme is set by audit calendars, not by when risk actually changes. Your certification covers 100% but your auditor checked 0.07%.
Evidence as artefact. Screenshots, PDFs, attestation forms. The industry built GRC around static evidence because that is what auditors knew how to evaluate. The alternative, compliance as telemetry, exists but the forcing functions do not push you there. Auditors do not know how to evaluate code-based controls. Customer questionnaires do not ask about your engineering practices. SOC 2 does not reward automation. The incentive structures actively prevent innovation.
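To make "compliance as telemetry" concrete, here is a minimal sketch of a control evaluated as code over telemetry records instead of captured as a screenshot. The record shape and control ID are hypothetical; the point is the output, a structured, timestamped result that can be re-evaluated on every deploy rather than once a year.

```python
from datetime import datetime, timezone


def evaluate_control(control_id: str, records: list[dict]) -> dict:
    """Evaluate an encryption-at-rest control over telemetry records.

    Each record is a hypothetical dict like
    {"resource": "db-prod-1", "encrypted": True}; real input would come
    from your observability or CSPM pipeline.
    """
    failing = [r["resource"] for r in records if not r.get("encrypted")]
    return {
        "control_id": control_id,
        "passed": not failing,
        "failing_resources": failing,
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
    }


telemetry = [
    {"resource": "db-prod-1", "encrypted": True},
    {"resource": "db-prod-2", "encrypted": False},
]
result = evaluate_control("CTRL-ENC-01", telemetry)
print(result["passed"], result["failing_resources"])  # False ['db-prod-2']
```

Nothing here is exotic engineering. The blocker is the incentive structure around it, not the difficulty of the code.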
Compliance as the goal. Compliance as cope documented this pattern. We automated evidence collection instead of improving security outcomes. The industry optimised for the audit, not the programme.
GRC as a function, not a capability. The industry treats GRC as a team that does compliance. GRC Engineering reframes it as a capability that cuts across the organisation. But most programmes are still structured around the old model.
Audit this layer:

| Question | If yes, it is inherited |
|---|---|
| Does your programme activate before audits and go quiet after? | You are running on audit-cycle time |
| Is your primary evidence format screenshots and PDFs? | The industry chose that, not you |
| Would your programme exist if no framework required it? | Compliance is the goal, not the outcome |
| Is "GRC" a team name in your org chart? | The function model, not the capability model |

The Inheritance Audit: What Is Left?
Run all four layers. Mark what is inherited versus what is intentional.
YOUR GRC PROGRAMME
│
├── LAYER 1: Previous GRC Lead
│   ├── [inherited] Questionnaire-based everything
│   ├── [inherited] Qualitative risk scoring
│   ├── [intentional] Control taxonomy structure
│   └── [inherited] Evidence templates from 2019
│
├── LAYER 2: CISO Priorities
│   ├── [inherited] Certification-first strategy
│   ├── [intentional] Board reporting cadence
│   └── [inherited] No commercial tool budget
│
├── LAYER 3: Company Context
│   ├── [contextual] Kubernetes deployment model
│   ├── [contextual] SOC 2 + ISO 27001 scope
│   └── [inherited] Team of 2, designed for team of 2
│
└── LAYER 4: Industry Legacy
    ├── [inherited] Annual audit rhythm
    ├── [inherited] Screenshot evidence
    ├── [inherited] Compliance as the goal
    └── [inherited] GRC as a function
What you will likely find: most of your programme is inherited. The intentional parts are the ones you designed after understanding the constraints, not the ones you accepted by default.
That gap between inherited and intentional is where your programme's potential lives.

Where to Go Next
You cannot fix all four layers at once. But you can start with one primitive in one layer.
1. Name your architects. Write down who built your programme, what your CISO prioritises, what your company constrains, and which industry defaults you accepted. That is your inheritance map.
2. Pick one inherited primitive to test. Run one control as a query instead of a questionnaire. Score one risk quantitatively instead of with a heat map. Collect one piece of evidence from your observability stack instead of a screenshot. See if the output is better.
3. Separate identity from architecture. "We are mature" and "we have always done it this way" are not the same sentence. A mature programme adapts. A frozen one insists it does not need to. The GRC Engineering Maturity Model can help you assess where you actually sit.
4. Engineer your process before you automate it. If the primitives are wrong, automating them makes wrong faster. Map the process first. Draw the data flow. Find the inherited assumptions. Then decide what to build, what to buy, and what to let go of entirely.
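The "score one risk quantitatively" experiment in step 2 can be smaller than it sounds. Here is a minimal Monte Carlo sketch: a yearly Bernoulli chance of the event, a lognormal loss when it happens. The probability and loss parameters below are placeholders, not calibrated estimates; a real exercise would elicit them from your own incident history and expert interviews.

```python
import math
import random


def simulate_annual_loss(p_event: float, loss_median: float, loss_sigma: float,
                         trials: int = 10_000, seed: int = 42) -> tuple[float, float]:
    """Monte Carlo estimate of annualised loss for a single risk.

    Models the event as a yearly Bernoulli trial with probability p_event,
    and the loss given an event as lognormal around loss_median.
    Returns (expected annual loss, 95th-percentile annual loss).
    """
    rng = random.Random(seed)  # fixed seed so the estimate is reproducible
    mu = math.log(loss_median)
    losses = [
        rng.lognormvariate(mu, loss_sigma) if rng.random() < p_event else 0.0
        for _ in range(trials)
    ]
    losses.sort()
    expected = sum(losses) / trials
    p95 = losses[int(0.95 * trials)]
    return expected, p95


# Placeholder parameters: 20% chance per year, median loss of 250k
expected, p95 = simulate_annual_loss(0.20, 250_000, 1.0)
print(f"Expected annual loss: {expected:,.0f}  95th percentile: {p95:,.0f}")
```

Two numbers with stated assumptions already give the board more to react to than an amber cell on a heat map.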
Your GRC programme is not broken.
It is running exactly the way it was designed to run by people who are no longer there, under priorities that may have shifted, inside a company that has changed, on top of industry defaults that were never questioned.
The inheritance is not the problem. Not knowing it is there is the problem.
Audit the stack. Find what is yours. Then build from there.

Did you enjoy this week's entry?

That’s all for this week’s issue, folks!
If you enjoyed it, you might also enjoy:
My spicier takes on LinkedIn [/in/ayoubfandi]
Listening to the GRC Engineer Podcast
See you next week!