🔎 Rebuilding GRC from Scratch at Docker w/ Emre & Chad
How two GRC engineers, in six months, completely rebuilt a GRC programme at a major tech company by leveraging GRC Engineering principles everywhere.
IN PARTNERSHIP WITH

Who doesn’t like free stuff?
It’s Cybersecurity Awareness Month, and the team at Mastermind started it with a bang—dropping its newest Lead Auditor course centered around the buzzy ISO 42001 standard and AI governance.
This is actually the second installment in the Mastermind Lead Auditor courses, and—against all odds—they continue to offer these programs, including the final exam, totally free.
Who is Mastermind? This is the same certification body that issued the world’s first accredited ISO 42001 certificate back in July 2024 and has continued to blitz the market, having followed up with certifying the likes of Google, Microsoft, Grammarly, Dataminr, Thoropass, Anecdotes, Swimlane, and Sierra—to name a few. They certainly know a thing or two about this ISO standard.

Two engineers. Six months. Putting GRC Engineering principles into practice, everywhere.
Earlier this week I dropped my conversation with Emre Ugurlu and Chad Fryer.
75 minutes of pure GRC engineering gold: how they rebuilt Docker's entire programme in 6 months with 5 people and zero traditional GRC management.
Today's newsletter breaks down everything they shared.
Prefer the highlights? Start with the TL;DR (2 minutes).
Want the complete playbook? Read the full operational breakdown (15-20 minutes).
This is reference material. Bookmark it.
In this Docker Deep-Dive:

⚡ TL;DR (2 minutes)
What They Built in 6 Months
| Initiative | Timeline | Impact |
| --- | --- | --- |
| Security Training Platform | A few weeks | 100% completion, zero missed deadlines |
| Continuous Compliance | Ongoing | Moving towards full automation |
| Risk Management Programme | 1.5 weeks | Self-service, minimal GRC touchpoints |
| Customer Trust Automation | Ongoing | AI-powered questionnaire pre-fill |
| Cost Model Transformation | 6 months | GRC becomes revenue generator |
Team Structure:
3 GRC engineers
1 analyst
1 customer trust specialist
"The sixth member" (Claude AI)
Management: Non-GRC technical leader (autonomy-first)
Key Philosophy:
"If we build the most cool thing but nobody uses it, it doesn't matter"
Technical Stack:
Orchestration: JIRA Service Management
Automation: AWS Lambda, EventBridge
Communication: Slack Bot (custom-built)
Development: Claude Code, GitHub
Evidence: Multiple systems integrated for automated collection

⚙️ The Engineering Deep-Dive
(15-20 minutes)

⏰ What They Inherited
When Emre and Chad joined Docker ~6 months ago, they found a familiar pattern: compliance frameworks dictating organisational behaviour rather than serving it. Processes designed for a team of 5-6 people, operated by 2-3. Analyst-built workflows that didn't fit an engineering culture. And the aftermath of a company pivot from enterprise (narrow and deep compliance) to SaaS (broad and shallow) that left requirements misaligned.
"Instead of bending over backwards, we're supposed to make it fit the organisation. Docker is really unique in the way it operates, and we have to adjust compliance accordingly."
The insight that drove everything else: in a developer-first organisation, engineers talking to engineers changes everything. When analyst-designed processes meet engineering teams, friction is inevitable. When engineers design for engineers, adoption follows naturally.
First 90 Days: The Foundation
Week 1-2: Deep Gap Analysis
Emre and Chad spent their first weeks in what they call "dark room" sessions: 4-5 hours daily, stress-testing every control across ISO 27001 and SOC 2. Not just reading requirements, but actively challenging them:
Does this control actually work?
Can we prove it works?
Who owns it?
Is this necessary or theatre?
Can this be automated?
"We put ourselves in a very dark room, hoodie up, just going through every single control"
They weren't just cataloguing gaps. They were building relationships with control owners, understanding the real technical environment, and establishing credibility through technical depth rather than compliance speak.
Week 3-4: Collaborative Stack Ranking
Rather than GRC dictating priorities, they brought Security, IT, HR, and GRC together for joint stack-ranking sessions. Everyone provided input. Everyone had visibility into trade-offs.
The initial plan:
P0: Immediate compliance gaps
P1: Security training rebuild
P2: Risk management programme
P3: TPRM optimisation
P4: Privacy programme
The reality: Privacy jumped from P4 to P1 mid-process when new customer requirements emerged.
Key learning: Your roadmap will be interrupted. Plan for flexibility, not perfection.
Week 5-12: First Builds
Security training platform
Slack automation for compliance tracking
JIRA workflow for risk management
Foundation for continuous compliance

IN PARTNERSHIP WITH

The Compliance OS for Modern GRC Leaders
Audits are no longer one-off; they’re constant, complex, and costly. Legacy tools add chaos, but Sprinto is the Compliance OS built for modern GRC leaders. It automates evidence collection, reuses proof across frameworks, and keeps compliance always-on.
The impact: 60% faster audit readiness, 100% risk oversight, and confidence for boards and regulators, all without scaling headcount. Compliance stops being a firefight and becomes a predictable business function.

🏗️ Build #1: Security Training Platform
The Challenge
The legacy LMS had no notifications. Training compliance was tracked manually. People missed deadlines. Management changes meant things fell through cracks. The process was broken, and it was creating audit risk.
The Solution: Rapid Internal Build
Rather than evaluating vendors (18-34 week timeline), they got permission to try building it themselves. Leadership's mandate: "If you can do it better in a few weeks, build it."
They built it using Claude Code for development acceleration, created a gamified user experience, and integrated a Slack bot for automated compliance tracking.
Technical Architecture
The platform itself handles user progress tracking, content management, and completion analytics. But the real innovation was the automation layer:
Slack Bot with Escalation Patterns:
10 days before due date: User reminder
5 days before: CC the manager
Overdue: Direct message to manager
The bot queries the training platform's API, checks completion status, and triggers notifications automatically. No manual tracking. No human intervention unless someone actually needs help.
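To make the escalation pattern concrete, here is a minimal sketch of that notification loop. The training-platform endpoint, the response fields, and the environment variable names are assumptions for illustration, not Docker's actual build; the Slack calls use the official slack_sdk client.

```python
# Minimal sketch of the escalation pattern above (not Docker's actual bot).
# Assumptions: a hypothetical training-platform endpoint returning assignment
# records, plus env vars TRAINING_API_URL and SLACK_BOT_TOKEN.
import os
from datetime import date

import requests
from slack_sdk import WebClient

TRAINING_API = os.environ["TRAINING_API_URL"]   # hypothetical endpoint
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])


def check_training_compliance() -> None:
    """Check completion status and escalate based on days until the deadline."""
    records = requests.get(f"{TRAINING_API}/assignments", timeout=30).json()
    for rec in records:
        if rec["completed"]:
            continue
        days_left = (date.fromisoformat(rec["due_date"]) - date.today()).days
        if days_left <= 0:
            # Overdue: direct message to the manager
            slack.chat_postMessage(
                channel=rec["manager_slack_id"],
                text=f"{rec['user_name']}'s security training is overdue.",
            )
        elif days_left <= 5:
            # 5 days out: remind the user and CC the manager
            for channel in (rec["user_slack_id"], rec["manager_slack_id"]):
                slack.chat_postMessage(
                    channel=channel,
                    text=f"Security training for {rec['user_name']} is due in {days_left} days.",
                )
        elif days_left <= 10:
            # 10 days out: gentle reminder to the user only
            slack.chat_postMessage(
                channel=rec["user_slack_id"],
                text=f"Reminder: your security training is due in {days_left} days.",
            )
```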
Results
| Metric | Before | After |
| --- | --- | --- |
| Completion Rate | ~85% | 100% |
| Missed Deadlines | 12-15/quarter | 0 |
| Manual Tracking | 8 hrs/month | 0 hrs |
| User Engagement | Low | Highest ever |
But the metrics don't capture the full story. They eliminated vendor costs, proved the build-first model to leadership, and created something other internal teams now want to use.
"I've never seen this much engagement for training before"
Why gamification mattered: Training transformed from compliance chore to engaging experience. Leaderboards, progress bars, achievement badges. It sounds trivial, but user experience drives adoption more than technical perfection.
Why rapid development was possible: Claude Code accelerated development, but the real enabler was having engineers who could review and correct AI outputs, validate API integrations, and make architectural decisions quickly.

🏗️ Build #2: Risk Management in JIRA
The JIRA Decision
They chose JIRA Service Management despite significant pain points. A week and a half of learning "nuances." A UX that requires "a bit of teaching." Backend configuration complexity.
Why JIRA anyway?
| Factor | Rationale |
| --- | --- |
| Users already there | No context switching = higher adoption |
| Audit trail built-in | Compliance requirement solved |
| Workflow automation | Possible, just requires work |
| Integration | Rest of company uses it |
"If you're going to be a company where we use JIRA as a project management tool, that's where risks should exist"
The Risk Management Workflow
Three-Tier Approach:
Tier 1 (Low Risk):
Automated classification
Self-service assessment (<2hrs target)
Minimal GRC involvement
User makes treatment decision
Tier 2 (Medium Risk):
Collaborative review
GRC provides guidance
Joint decision-making
Tier 3 (High Risk):
Full GRC engagement
Detailed analysis
GRC recommends, leadership decides
The workflow routes automatically based on initial intake. Treatment options (accept, mitigate, transfer, avoid) flow into monitoring and quarterly leadership reviews.
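As a rough illustration of how intake answers can drive that automatic routing, here is a small classification sketch. The scoring fields, thresholds, and tier labels are assumptions, not Docker's actual JIRA intake form.

```python
# Illustrative routing sketch only: field names, thresholds, and tier labels
# are assumptions, not Docker's actual configuration.
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    TIER_1 = "tier-1-self-service"   # low risk: user decides treatment
    TIER_2 = "tier-2-collaborative"  # medium risk: GRC provides guidance
    TIER_3 = "tier-3-full-review"    # high risk: GRC recommends, leadership decides


@dataclass
class RiskIntake:
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    impact: int                   # 1 (negligible) .. 5 (severe)
    touches_customer_data: bool   # example of an auto-escalation flag


def route(intake: RiskIntake) -> Tier:
    """Classify an intake so the workflow can route it without a GRC touchpoint."""
    score = intake.likelihood * intake.impact
    if intake.touches_customer_data or score >= 15:
        return Tier.TIER_3
    if score >= 8:
        return Tier.TIER_2
    return Tier.TIER_1


print(route(RiskIntake(likelihood=2, impact=3, touches_customer_data=False)).value)
# -> tier-1-self-service
```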
The JIRA Struggle (Honest Version)
Emre spent a week and a half just understanding JIRA before making actual progress. At one point, they seriously considered building a custom solution. The sunk cost fallacy kicked in. But ultimately, constraints bred innovation.
What they call "hackey ways" emerged: creative workarounds to JIRA limitations, solutions that wouldn't exist without the constraints.
Design Principles in Action:
Self-service: Automated routing, minimal GRC touchpoints
Time-boxed: <2hr assessment target drives process design
Context-aware: Different paths for different risk levels
Audit-ready: Workflow history provides compliance trail

🏗️ Build #3: Continuous Compliance Infrastructure
The Vision: High-Level Automation
Docker is working towards significant automation of their compliance evidence collection across SOC 2 and ISO 27001. The approach combines native tool plugins, documented policies, and custom API development.
Current State Architecture
Multiple systems connected:
GitHub, AWS, Google Workspace, Okta, Slack, JIRA
Additional systems across engineering, IT, security
Automated evidence generation:
Real-time collection vs point-in-time
Version controlled
Context enriched
Auditor accessible
The technical implementation uses AWS Lambda for custom evidence collection and EventBridge for orchestration. When native plugins don't exist, they build custom extractors that transform system data into compliance evidence.
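Here is a minimal sketch of what one such custom extractor could look like, assuming an EventBridge schedule invokes the Lambda, evidence lands in an S3 bucket, and GitHub branch-protection settings are the artefact being collected. The bucket name, repo list, and control folder are illustrative placeholders, not Docker's real setup.

```python
# Sketch of a custom evidence extractor: an EventBridge schedule invokes this
# Lambda, which pulls GitHub branch-protection settings and writes timestamped
# evidence to S3. Bucket name, repo list, and control folder are placeholders.
import datetime
import json
import os

import boto3
import requests  # bundled dependency or Lambda layer

s3 = boto3.client("s3")
EVIDENCE_BUCKET = os.environ["EVIDENCE_BUCKET"]   # assumed env var
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
REPOS = ["docker/example-repo"]                   # illustrative only


def handler(event, context):
    """Collect branch-protection evidence for each repo and store it in S3."""
    collected = []
    for repo in REPOS:
        resp = requests.get(
            f"https://api.github.com/repos/{repo}/branches/main/protection",
            headers={
                "Authorization": f"Bearer {GITHUB_TOKEN}",
                "Accept": "application/vnd.github+json",
            },
            timeout=30,
        )
        collected.append({
            "repo": repo,
            "status": resp.status_code,
            "protection": resp.json() if resp.ok else None,
        })
    key = (
        "evidence/change-management/"             # illustrative control folder
        f"{datetime.datetime.utcnow():%Y-%m-%dT%H%M%S}.json"
    )
    s3.put_object(Bucket=EVIDENCE_BUCKET, Key=key, Body=json.dumps(collected))
    return {"stored": key, "repos": len(collected)}
```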
Evidence Collection: Before vs After
Traditional audit cycle:
Auditor requests evidence
Email 25+ people
Wait for responses
Chase non-responders
Consolidate evidence
Format for auditor
Upload to portal
Answer follow-ups
Result: 80-100 hours of audit prep, last-minute scrambling, stressed teams
Automated approach:
Auditor logs into portal
Views real-time evidence
Filters by control
Downloads if needed
Result: 10-15 hours of audit prep, no scrambling, happy teams
"We want auditors pulling from our tool, not scheduling calls with 50 people"
Implementation Phases
Phase 1: Plugin configuration and tuning
Phase 2: Custom API development
Phase 3: Refinement and edge cases
Phase 4: Ongoing maintenance and expansion
The goal is to reach a point where audit prep becomes truly minimal, where compliance becomes genuinely continuous rather than episodic.

🏗️ Build #4: AI-Powered Questionnaire Tool
The Customer Trust Challenge
Beth, their customer trust specialist, was drowning in RFP questionnaires. Multiple products. Growing customer base. Security assessments. Custom due diligence. All flowing through one person.
The traditional process:
Receive questionnaire
Read every question
Identify subject matter experts
Chase down responses
Consolidate answers
Format and send
Bottleneck: One person, multiple products, scale problems
The AI Solution
Architecture:
Knowledge base: Internal docs, past RFPs, security policies, compliance certs, trust centre content
AI pipeline: Question ingestion → knowledge query → answer generation → confidence scoring
Human review: High confidence (>80%) auto-approved, lower confidence flagged for Beth
Self-service: Sales engineers can query directly for real-time answers
Confidence Scoring (sketched in code below):
90-100%: Auto-approve (standard questions, clear docs)
70-89%: Quick review (common but needs verification)
50-69%: Detailed review (uncommon or nuanced)
<50%: Flag for SME (novel questions or gaps)
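The banding above maps naturally to a small routing function. Here is a minimal sketch: the thresholds mirror the list, while the DraftAnswer shape and the routing labels are assumptions about how such a pipeline might hand drafted answers to a human reviewer.

```python
# Sketch of the confidence-band routing. Thresholds mirror the list above;
# the DraftAnswer shape and routing labels are illustrative assumptions.
from typing import NamedTuple


class DraftAnswer(NamedTuple):
    question: str
    answer: str
    confidence: float  # 0.0 .. 1.0, produced by the AI pipeline


def route_answer(draft: DraftAnswer) -> str:
    """Decide what happens to a drafted questionnaire answer."""
    pct = draft.confidence * 100
    if pct >= 90:
        return "auto-approve"      # standard question, clear documentation
    if pct >= 70:
        return "quick-review"      # common, but verified before sending
    if pct >= 50:
        return "detailed-review"   # uncommon or nuanced, needs careful reading
    return "flag-for-sme"          # novel question or a knowledge-base gap


print(route_answer(DraftAnswer("Do you encrypt data at rest?", "Yes, AES-256.", 0.93)))
# -> auto-approve
```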
The User Experience Layer
Chad's design philosophy in action:
8-bit Zelda aesthetic (makes it fun)
Built for Beth (the actual user), not for GRC team
Feedback loops integrated
Iterative improvement based on actual usage
"She's skeptical, which I love. She's the customer, not me"
Current State vs Target
| Aspect | Current | Target (6 months) |
| --- | --- | --- |
| Automation | Pilot phase | 80% of questionnaires |
| Self-service | Limited | Full AE/SE access |
| Response time | Days | Hours/same-day |
| Beth's workload | 100% | 20% (edge cases) |
The goal is to free her for strategic work: edge cases, complex customer relationships, process improvement.

🤖 The AI Integration Reality
Claude Code as the 6th Team Member
They use Claude Code extensively in their development work. Not as a replacement for engineering skill, but as a tool that accelerates certain tasks.
But with strict discipline:
"Six times out of ten, I have to go correct Claude. The ability to read code and spot flawed logic never disappears"
What AI Does Well:
✅ Rapid prototyping (built in weeks)
✅ Boilerplate generation
✅ API integration assistance
✅ Initial content creation
What AI Doesn't Do:
❌ Replace technical judgement
❌ Understand your specific APIs without docs
❌ Eliminate code review
❌ Work without human oversight
Common AI Errors They Correct:
Outdated API patterns (trained on old documentation)
Redundant methods (doesn't see existing code)
Logic errors (misunderstands requirements)
Version conflicts (suggests incompatible libraries)
Security oversights (hardcoded credentials, exposed secrets)
The Pre-AI Experience Factor
The breakthrough insight: pre-AI coding experience makes you 10x better at using AI.
Without coding background:
Can't spot errors
Get stuck in infinite debug loops
Accept bad outputs
Don't understand when to override
With coding background:
Immediately recognise flawed logic
Correct efficiently
Validate against official docs
Know when to restart vs refine
"If you asked me to build a function from scratch, I'd be like 'what the heck is a function?' But the ability to read code and spot flawed logic never disappears"
AI Best Practices
Do:
Test API calls in analysis tool first
Review every line of output
Understand the logic, don't just copy
Validate against official documentation
Keep custom prompts that state your proficiency level
Don't:
Blindly accept outputs
Skip code review
Treat it as an oracle
Rely on it without understanding

👥 The Self-Managing Team Model
Why No GRC Manager Works
Traditional GRC organisations have clear hierarchy: Head of GRC → Programme Manager → Senior Analyst → Analysts → Contracted Engineers.
Docker has: Non-GRC technical manager providing organisational support, with three engineers and one analyst self-directing all work.
What makes this work:
1. No Ego About Tasks
Everyone does analyst work when needed. Everyone does programme management when needed. Everyone does engineering when needed.
"I've done programme management, I've been an analyst. I don't feel anything's beneath us"
2. Shared "Grunt Work" Background
Emre: Year at Broadridge doing RFIs ("super sad, super boring")
Chad: Three years help desk before security
That shared experience of tedious manual work creates:
Empathy for what sucks
Clarity on what to automate first
Character and work ethic
Understanding of user pain points
"We all came from a very similar background where you do a lot of the grunt work. And I think that really builds character"
3. Collaborative Decision-Making
No manager assigning tasks. The team collectively:
Stack-ranks priorities
Divides work based on capacity/interest
Reviews each other's work
Makes build vs buy decisions
4. Leadership Trust
Their manager seriously debated whether they even needed a manager. Conclusion: "You guys are doing great without one, keep going."
What leadership provides:
Ruthless prioritisation guidance
Organisational navigation
Strategic air cover
Resource unblocking
What leadership doesn't do:
Tactical GRC decisions
Daily task management
Technical reviews
Build vs buy calls
The Speed Advantage
| Decision Type | Traditional | Docker | Speedup |
| --- | --- | --- | --- |
| Build new tool | 4-8 weeks | Same day | 20-40x |
| Change priority | 1-2 weeks | Hours | ~50x |
| Technical approach | 2-4 weeks | Same day | 10-20x |
This speed compounds. Every quick decision enables the next. Every successful build increases trust and autonomy.
When Self-Management Breaks
This model requires:
✅ Engineering-first culture
✅ Self-directed individuals
✅ Broad skill sets per person
✅ Aligned incentives
✅ Patient stakeholders
✅ Leadership trust
Without these conditions:
❌ Chaos and confusion
❌ Unclear accountability
❌ Conflicting priorities
❌ Political infighting

💰 The Cost Model Innovation
From Cost Centre to Revenue Generator
Traditional GRC teams request budget, justify spend, operate under constraints. They're cost centres.
Docker's GRC team is building a different model:
The Cycle:
Build solutions internally (eliminate vendor costs)
Quantify time/cost savings
Other teams notice and want the solution
"Sell" internally through budget allocation
Savings fund additional headcount
Expanded capacity enables more builds
Repeat
Example: Training Platform
Eliminated commercial LMS cost
HR team wants it for onboarding
IT team interested for security awareness
Legal wants it for policy acknowledgement
Each team "pays" through budget allocation
GRC can justify hiring another engineer
"We're actually going to be a revenue generating team"
Value Creation Breakdown
| Initiative | Direct Savings | Indirect Value | Strategic Impact |
| --- | --- | --- | --- |
| Training Platform | Vendor elimination | 8 hrs/month saved | Other team adoption |
| Continuous Compliance | Audit prep -85% | Real-time evidence | Faster certifications |
| Risk Programme | No consulting | Self-service | Scale enablement |
| Questionnaire Tool | Beth's time 60%+ | Faster sales | Revenue acceleration |
Budget Conversation Shift
Before:
"We need $XXX,XXX for GRC tools"
"Can we hire another analyst?"
"This is necessary for compliance"
Defensive posture
After:
"We saved $XXX,XXX this year"
"HR wants to use our platform"
"We're enabling faster sales cycles"
Value-creation posture
This fundamentally changes the team's positioning from cost centre to profit contributor, expanding budget, headcount, and strategic influence.

🎨 The User Experience Philosophy
The Core Principle
"If we build the most cool thing on the planet, but nobody uses it, it doesn't matter"
Chad's spouse works in UX, and it's "either for better or worse rubbed off" on him. The result: every solution designed with end-user adoption as the primary success metric.
The Trade-off:
Moderately automated + high adoption > Perfectly automated + low adoption
Simple and used > Complex and ignored
Fast enough > Theoretically optimal
Real Examples
Training Platform:
Problem: Low engagement
UX Decision: Gamification (leaderboards, progress bars, badges)
UX Decision: Slack notifications (no context switching)
UX Decision: Simple interface, clear next steps
Result: 100% completion rate
Risk Management:
Problem: Complex assessments intimidate users
UX Decision: <2hr time target drives process design
UX Decision: Built in JIRA (familiar tools)
UX Decision: Auto-classification reduces cognitive load
Result: High adoption
Questionnaire Tool:
Problem: Beth overwhelmed
UX Decision: 8-bit Zelda theme (makes work fun)
UX Decision: Confidence scoring (clear priorities)
UX Decision: Self-service for sales (removes bottleneck)
Result: Beth stays engaged, sales moves faster
Feedback Mechanisms
Built-in:
CSAT surveys in all tools
Anonymous comment channels
Usage analytics
Completion/abandonment rates
Organisational:
Security show-and-tell every Friday
GRC demos ("swinging heavier than security")
Direct user interviews
Slack engagement monitoring
The Feedback Paradox
Current challenge: 95% positive feedback, 5% critical.
Why this happens:
Bar was very low previously (anything better seems amazing)
Honeymoon period (too early for real criticism)
People don't want to hurt engineers' feelings
Lack of power users pushing boundaries
What they actually want:
Constructive criticism
Edge case scenarios
Feature requests that challenge assumptions
Pain points they haven't seen

🎯 Skills That Actually Matter
Emre's Core Skills
1. API Documentation Reading
"You're going to come across an API. You need to make a call, pull data, manipulate it"
Universal skill across all GRC engineering work. Whether it's GitHub, AWS, Okta, or custom internal APIs, you need to read docs and implement correctly.
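As a tiny example of that call-pull-manipulate loop, here is a sketch that lists a repository's collaborators via the GitHub REST API and flags admin access, the kind of pull you might feed into an access review. The repo name is a placeholder.

```python
# Tiny example of the call-pull-manipulate loop: list a repo's collaborators
# via the GitHub REST API and flag admin access for an access-review check.
# The repo name is a placeholder.
import os

import requests

resp = requests.get(
    "https://api.github.com/repos/docker/example-repo/collaborators",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()

admins = [u["login"] for u in resp.json() if u["permissions"].get("admin")]
print(f"{len(admins)} collaborators with admin access: {admins}")
```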
2. Authentication Patterns
Creating API keys, handling tokens, managing secrets. Boring but essential. Every integration requires it.
3. Code Review Discipline
The ability to read through code and identify flawed logic. What makes AI useful without being dangerous.
Chad's Core Skills
1. Documentation
"One of us were to leave, there goes everything without proper documentation"
The "lottery winner" standard: if you won the lottery tomorrow, could someone pick up your work?
2. Curiosity
Actually wanting to dive in and fix things. Not accepting "that's wasted effort." The drive to improve processes.
3. Embracing Failure
"If Emre quit the first time he failed on risk, that wouldn't have gone anywhere"
Debugging is the real education. Every failure teaches something.
4. Multi-Language Understanding
Not writing in ten languages, but understanding code when you see it. Reading > writing.
5. Technical Translation
Bridging GRC and engineering. Explaining technical concepts to non-technical stakeholders.
For Aspiring GRC Engineers
The Playbook:
Find a broken process (something that blocks you or your team)
Build a solution (even if it's rough)
Document the journey (what you learned, what broke, how you fixed it)
Share on GitHub (public portfolio)
Write about it (blog post, LinkedIn)
Show in interviews (working demo beats keywords)
"Find a process that's blocking you, unblock yourself, show value, show impact. That's the quickest pathway"
Specific actions:
AWS free tier account (learning environment)
Build Lambda function (first automation)
Read API docs (GitHub, AWS)
Create evidence collector (portfolio project)
Document on GitHub (public portfolio)
Write blog post (demonstrate communication)
Share on LinkedIn (network building)
Repeat with new project
Portfolio Over Resume
What hiring managers want to see:
❌ "Proficient in Python"
✅ Link to GitHub repo with working code
❌ "Experience with APIs"
✅ Demo of API integration you built
❌ "Strong communication skills"
✅ Technical blog explaining complex topics
❌ "Compliance knowledge"
✅ Automated evidence collection project
"Everybody's resume at this point almost looks the same. Create projects, show that you actually know how to do the things you're talking about"
For that, feel free to use the new GRC Engineering lab generator!


🚀 The 12-Month Vision
Next 3 Months (Q1 2026)
Primary goals:
SOC 2 + ISO 27001 audits with minimal auditor interaction
Continue expanding continuous compliance coverage
TPRM process optimisation (tier-based automation)
Success metrics:
Clean audit reports
Reduced audit prep time
Self-service for tier 1 vendor assessments
6-Month Goals (Mid 2026)
Customer trust evolution:
Knowledge base system live
SSDLC gates for security documentation
Increased questionnaire automation (Beth focuses on edge cases)
Self-service for sales team
Risk programme maturity:
Quarterly leadership review cadence
Integrated with product development
Real-time dashboard
Exploring predictive analytics
12-Month Aspiration (End 2026)
The Internal GRC Portal:
Compliance dashboard
Risk management
TPRM interface
Customer trust centre
Training platform
Policy library
All powered by multiple system integrations, real-time evidence, and automated testing.
Open source strategy:
Training platform (containerised release)
Slack bot framework (generic template)
Evidence collectors (modular plugins)
Risk workflow (JIRA templates)
Team evolution:
Current: 5 people, 100% development
6 months: 5-6 people, 60% development / 40% maintenance
12 months: 6-7 people, 40% development / 40% maintenance / 20% innovation
The Bold Statement
"We're going to transform GRC into something no one's ever seen"
What this means:
Not just automation, but user experience
Not just compliance, but revenue generation
Not just tools, but open source community
Not just efficiency, but strategic influence

📌 Timestamps
(00:00) Introduction and guest backgrounds
(02:39) What they inherited: Processes owning the organisation
(07:50) First weeks: Deep gap analysis and stress-testing controls
(11:08) Self-managing team structure (no GRC manager)
(16:07) Build vs buy: Docker's developer-first philosophy
(21:22) Security training platform: Rapid rebuild to 100% completion
(28:20) The JIRA struggle: Building risk management workflows
(35:07) Continuous compliance: Moving towards full automation
(40:15) Risk management programme: Self-service approach
(45:30) AI integration: Claude as development accelerator
(50:45) User experience philosophy: Adoption over perfection
(55:12) TPRM automation: "Automate the shit out of it"
(60:30) Cost model innovation: Revenue-generating GRC
(65:20) Essential skills for aspiring GRC engineers
(70:15) 12-month vision: Open source and transformation

🌶️ Hot Take
The traditional GRC career path is dying, and that's a good thing.
When Emre can rebuild training platforms in weeks, Chad can automate questionnaire responses, and both can move towards full continuous compliance automation, what's left for traditional GRC analysts to do?
The answer: strategic work that actually matters.
The grunt work that built character for previous generations (manual evidence collection, spreadsheet updates, vendor questionnaire completion) is being automated away. Good. That work was necessary but soul-crushing.
As Emre puts it:
"With the tools we have today, there's no excuse why anybody can't build things themselves."
If you're willing to learn APIs, embrace AI as a pair programmer, and think like a product manager, you're positioning yourself for the most interesting phase of GRC's evolution.
With all the caveats they discussed in the previous sections!

📚 References
To learn more about Chad and Emre
LinkedIn Emre: https://www.linkedin.com/in/emre-ugurlu-48596b116/
LinkedIn Chad: https://www.linkedin.com/in/c-fryer/
Resources mentioned
grc.engineering manifesto projects: https://grc.engineering/projects/
Emre’s open source cybersecurity training: https://emreugurlu.github.io/open-security-training/
Ayoub’s open source grc engineering lab builder: https://grc.engineering/grc_engineering_lab_builder/
Video of the head of Design at Claude Code explaining her approach: https://www.youtube.com/watch?v=1-x-QzNjFHQ

That’s all for this podcast issue, folks!
If you enjoyed it, you might also enjoy:
My spicier takes on LinkedIn [/in/ayoubfandi]
Listening to the GRC Engineer Podcast
See you next week!