GRC Engineer

💬 Build vs. Buy: We Did Both. Here's What We Learned (RSAC 2026 Talk Summary)

We cycled through four GRC tools in four years before we built our own. The exercises that made us better builders are the same ones that make you a better buyer.

As some of you might know, I was in San Francisco last week for RSAC (an insane week!), where Byron and I did a talk on Build vs. Buy for GRC. This is the gist of what we talked about for anyone who could not attend.

We cycled through four GRC tools in four years at GitLab.

Not because we were indecisive. Because none of them fit. Every time we bought a new platform, the same cycle played out: exciting demo, promising sandbox, decent onboarding, then six months in, the workarounds start piling up and the team is back to spreadsheets for anything that matters.

Sound familiar? Based on the audience reaction at RSAC, it does for most of you.

After the fourth tool, we made a different call. We stopped shopping and started building. What came out of that process was GAS (GitLab Assurance Solution), our internal GRC platform. But the real insight from our talk wasn't about building. It was this:

The exercises you go through to build a GRC platform are the exact exercises that make you a better buyer.

You don't need to build. But you need to think like a builder.

PS: If you have the right access, you can actually watch the full talk and download the slides here.

IN PARTNERSHIP WITH

GRC Engineering 101: Program as Code

Whether you're just getting started or looking to level up GRC engineering in your organisation, this guide is for you.

What's inside:

→ Declare controls, requirements, and risks in Terraform
→ Change management executed through PRs & CI/CD pipelines
→ The tech stack GRC engineers should use day to day
→ Real code examples (not marketing fluff)

Start With Business Problems, Not Technical Requirements

The first thing we did was survey every internal customer. Not about features. Not about integrations. About business problems.

What's eating your time? Where do you feel the friction? What would your ideal day look like?

Most GRC tool evaluations skip this entirely or rush through it. Teams jump straight to "we need a controls module that does X" without asking why. At GitLab, previous buying rounds had the same issue: one person would be the vendor contact, gather surface-level requirements, run a demo, and declare a winner. Then three months later, the rest of the team would discover the tool couldn't handle their actual workflows.

When we slowed down and talked to everyone (sync and async, ICs and managers, the person who builds reports and the person who reads them), clear patterns emerged:

- Data was scattered across tools with different definitions

- Manual reconciliation was eating hours every week

- Reports were unreliable because source data lived in multiple places

- Everyone wanted to spend time on work that matters instead of maintaining spreadsheets

Those are business problems. They don't change whether you're building or buying. And critically, they were always there. We just never created the space to surface them properly during previous tool evaluations because the process was compressed, siloed, and driven by whoever happened to be the vendor contact.

Define Your Data Layer Before You Touch a Tool

Here's where most programmes get it wrong: they let the tool define their data model.

We did the opposite. Before writing a single line of code, we mapped every data object, every field, and every relationship between them.
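To make that mapping concrete, here's a hypothetical sketch of what the core objects and their required relationships might look like. The names and fields are illustrative only, not GAS's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical object model -- illustrative names, not GAS's actual schema.

@dataclass
class Requirement:
    id: str            # e.g. "SOC2-CC6.1"
    framework: str     # source framework the requirement belongs to
    description: str

@dataclass
class Risk:
    id: str
    title: str
    owner: str         # an accountable person, not a team alias

@dataclass
class Control:
    id: str                       # e.g. "AC-1"
    title: str                    # plain title; no metadata smuggled into the string
    domain: str                   # one agreed definition, e.g. "Access Control"
    status: str                   # "active" | "retired"
    requirement_ids: list[str] = field(default_factory=list)
    risk_ids: list[str] = field(default_factory=list)

    def is_orphaned(self) -> bool:
        # An active control that links to neither a requirement nor a risk
        # has no reason to be in the system.
        return self.status == "active" and not (self.requirement_ids or self.risk_ids)
```

Even a sketch this small forces the conversation: what exactly is a "domain", and can a control exist without a reason?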

Grey areas are the enemy. If your team can't agree on what the "domain" field in a control means, that's not an edge case. That's a data problem that will poison every report you build on top of it.

We forced hard definitions:

| Rule | Why It Matters |
| --- | --- |
| Active controls must link to a requirement or risk | If it links to neither, why is it in the system? |
| No context hidden in title strings | "AC-1: Access Control" needs "Access Control" as a queryable field |
| Required relationships enforced at the platform level | Prevents bad data from entering the system silently |
| Every field has one definition, agreed upon by the team | Eliminates the "what does this field mean?" conversations |
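Enforcing rules like these at the platform level can be as simple as an ingest-time validator that rejects records before they enter the system. A hedged sketch, with field names and the domain vocabulary invented for illustration:

```python
import re

# Hypothetical ingest-time validator -- a sketch of "required relationships
# enforced at the platform level". Field names and AGREED_DOMAINS are
# illustrative, not an actual GAS vocabulary.

AGREED_DOMAINS = {"Access Control", "Change Management", "Logging"}

def validate_control(record: dict) -> list[str]:
    """Return every rule violation; an empty list means the record may enter."""
    violations = []

    # Rule: active controls must link to a requirement or a risk.
    if record.get("status") == "active" and not (
        record.get("requirement_ids") or record.get("risk_ids")
    ):
        violations.append("active control links to neither a requirement nor a risk")

    # Rule: no context hidden in title strings. "AC-1: Access Control"
    # should carry "Access Control" as a queryable field instead.
    if re.match(r"^[A-Z]+-\d+:\s", record.get("title", "")):
        violations.append("title embeds an ID prefix; move context to structured fields")

    # Rule: every field has one agreed definition -- enforced here as a
    # closed vocabulary for "domain".
    if record.get("domain") not in AGREED_DOMAINS:
        violations.append(f"unknown domain {record.get('domain')!r}")

    return violations
```

The point isn't the specific checks; it's that bad data gets refused at the door instead of silently poisoning reports downstream.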

This exercise is brutal. It forces your programme to confront ambiguity that's been papered over for years. But it's the foundation everything else sits on.

Buyers: you can do this too. Before you open a single vendor demo, map your data. Draw the ERD. Walk through your reports and check whether the data structure supports them. If you can't draw your programme's data flow on a whiteboard, no tool is going to fix that.

Ruthlessly Define Your MVP

Scope creep kills GRC implementations. We've seen it from both sides.

When building, developers want to add features. Customers want their ideal state on day one. When buying, shiny integrations you've never had before get more demo time than validating whether the controls module actually fits your testing process.

We defined MVP as "replace what exists, nothing more." Not "be better than what exists." If we measured success against "better," the target would keep moving. Replace first. Prove the foundation works. Then iterate.

This applies directly to buying. Your non-negotiables should be the foundational workflows your programme can't function without. Not the nice-to-haves. Not the AI features. Not the integration with a tool you might use someday. Can the platform handle your core objects, relationships, and reporting needs without workarounds?

If you need workarounds on day one for foundational workflows, you're already on the path to shopping again in 18 months.

One more thing on this: load real data during your evaluation. Not "Test Control 123" and "Test Assessment 456." Real data. Your actual controls, your actual assessments, your actual edge cases. We learned the hard way that simplified test data hides every gap. The problems only show up when your 500 assessments need to be reported across systems and the "field" you were using turns out to be an unqueryable custom text box.
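A real-data evaluation can be scripted as a smoke test that toy data would pass trivially. A minimal sketch, assuming hypothetical field names and that you can export the candidate tool's list of structured (queryable) fields:

```python
# Hypothetical proof-of-concept smoke test: run it against your REAL export,
# not "Test Control 123". Field names are illustrative assumptions.

def poc_gaps(controls: list[dict], assessments: list[dict],
             queryable_fields: set[str]) -> list[str]:
    """Surface the gaps that simplified test data hides."""
    gaps = []
    control_ids = {c["id"] for c in controls}

    # Broken links only show up at real volume: every assessment must
    # resolve to a control that actually exists.
    for a in assessments:
        if a["control_id"] not in control_ids:
            gaps.append(f"assessment {a['id']} references missing control {a['control_id']}")

    # Fields you intend to report on must be structured in the candidate
    # tool, not free-text custom boxes.
    for needed in ("domain", "status", "owner"):
        if needed not in queryable_fields:
            gaps.append(f"'{needed}' would land in an unqueryable text box")

    return gaps
```

If this kind of check comes back clean on your actual 500 assessments, you've validated something a demo never will.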

Standardise the Foundation, Customise the Presentation

The biggest architectural decision we made was being extremely opinionated at the data layer and flexible at the reporting and UI layer.

| Layer | Our Approach | Why |
| --- | --- | --- |
| Data | Extremely standardised, opinionated | Clean data = any report is possible |
| Relationships | Required links enforced | No orphaned objects, no ambiguity |
| Reporting | Fully customisable | Foundation is solid, so changes aren't breaking |
| UI | Flexible, user-driven | Users see what they need without workarounds |

When your underlying data is clean and consistent, you can build any report on top of it. When it's messy, every report is a custom project that requires someone to manually reconcile before anyone trusts the output.
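To illustrate the payoff: once every record shares the same standardised fields, a new report is a one-liner over the same data rather than a reconciliation project. The records below are invented for the example:

```python
from collections import Counter

# Illustrative records -- the payoff of a standardised data layer is that
# any slice of it is a trivial aggregation, with no manual reconciliation.

controls = [
    {"id": "AC-1", "domain": "Access Control", "status": "active"},
    {"id": "AC-2", "domain": "Access Control", "status": "retired"},
    {"id": "CM-1", "domain": "Change Management", "status": "active"},
]

by_domain = Counter(c["domain"] for c in controls)
active_by_domain = Counter(c["domain"] for c in controls if c["status"] == "active")
```

With messy data, each of those lines becomes a spreadsheet, a cleanup pass, and an argument about definitions.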

This is where buying teams should spend their evaluation time. Two tools that look identical in a demo can be fundamentally different underneath. Which one has a data model that matches how your programme actually works? Which one lets you build on top of it with APIs? Which vendor has a roadmap that aligns with how you need to grow?

The sleek UI is irrelevant if the data model underneath forces you to repurpose modules or create parallel tracking systems.

The Real Takeaway

We built GAS and it's working. Our data is clean (10,000+ fields cleaned up at launch). Reports that used to take a quarter to assemble are available in real time. Integrations match how we actually work, not how a vendor assumed we work.

But the point of our talk wasn't "you should build." Most teams shouldn't.

The point is that the process of building forced us to do the hard work that everyone skips when buying: understanding our customers, defining our data, mapping our workflows, and ruthlessly prioritising what actually matters.

You can do all of that without writing a single line of code.

Your action plan:

- [ ] Next week: Talk to your colleagues like customers. Ask about friction, not features.

- [ ] Month 1-3: Define your data objects. Draw your ERD. Map your workflows tool-agnostically.

- [ ] Month 4-6: Run a proof of concept with real data focused on your foundation, not the feature list.

The goal isn't a perfect tool. It's a programme that knows itself well enough to pick the right one. And if you're already mid-purchase or recently deployed, it's not too late to run these exercises. They'll either validate your choice or give you the clarity to course-correct before the workarounds become permanent.

One More Thing: AI Changes the Build vs. Buy Equation

We built GAS before agentic coding was viable. Every line was written by hand. Today, the cost of execution has collapsed. Code is virtually free. More teams will be able to build than ever before.

But you just can't escape the planning.

In an agentic coding world, intent is everything. The quality of your output is directly tied to the quality of your input. If you can clearly articulate what your data objects are, how they relate, and what your workflows look like, an AI agent can build it. If you can't, the AI will happily build the wrong thing faster than you ever could by hand.

The exercises in this newsletter aren't just prep for buying or building. They're the foundation that makes AI output trustworthy. Strong definitions, clean data layers, well-mapped relationships: that's what separates a programme that can leverage AI from one that generates more mess at machine speed.

The planning was always the hard part. Now it's the only part that matters.

Byron and I had a great time presenting this at RSAC 2026. If you want to dig deeper into any of these topics, connect with either of us on LinkedIn.


That's all for this week's issue, folks!


See you next week!
