
πŸ΄β€β˜ οΈ Building the GRC Engineering Trust Infrastructure: Introducing Corsair

A GRC Engineering-native answer to the trust and compliance exchange challenges. Open-source and free to sign. Assurance through cryptography instead of PDFs.

Today I'm releasing Corsair.

Before I get into what it is, I want to walk through the problem first. Properly.

Because it's easy to see "cryptographic compliance proof" and think it's a technical solution to a technical problem.

It isn't.

It's a structural solution to a structural problem. And the structure has been broken for a long time.


You've done a vendor security review.

You sent the questionnaire. 300 questions. Two weeks later, a spreadsheet with 287 "Yes" answers came back. Filled out, typed in, a screenshot or two attached.

You reviewed it. Flagged a few items. Got written responses. Filed it. Set a calendar reminder for next year.

At no point did you verify whether any control actually works.

Not because you were negligent. Because there was no mechanism to verify.

Here's what's actually happening at the vendor while you wait for your spreadsheet.

Their scanner is running against their AWS environment right now. Their container scanner found three vulnerabilities this morning. Their config profiler checked hundreds of controls overnight.

Every one of those tools produces structured, machine-readable output that answers your questionnaire questions. With real data, not typed responses.

That evidence exists. You never see it.

Instead, a compliance person at the vendor reads the tool output, translates it into a "Yes" in a spreadsheet, attaches a screenshot, and sends it back.

You get the translation. You don't get the proof.

This is not a people problem.

The compliance teams doing this work aren't cutting corners. The vendors filling out questionnaires aren't being deceptive. Everyone is following the only process that exists.

The format is missing.

There's no standard way to take a security tool's output, sign it cryptographically, and hand it to a relying party in a form they can verify without trusting the sender.

So instead, we copy and paste. We screenshot. We attach PDFs. We send 300-question questionnaires to vendors who answer them manually every year.

The $8.57 billion GRC market runs on promises, not verification.

You might already have tools for this. They don't solve it.

GRC platforms: excellent at managing compliance within your organization. They don't produce proof that travels to a relying party.

Trust Centers: show high-level signals: "SOC 2 Type II: ✓". They stop there because publishing detailed control data publicly creates legal and competitive exposure. So you see the certificate, not the evidence behind it.

SOC 2 reports: point-in-time. Last year's audit doesn't tell you about last week's change.

Every one of these tools is excellent at what it does. None of them solve the exchange layer.

The exchange layer is the handoff between vendor and relying party, the moment one party needs to prove something to another in a form they can independently verify, without trusting the sender.

The evidence lives in tools. The proof format doesn't exist yet.

Think about how software teams had this exact problem and fixed it.

If you've spent time in GRC and also spent time in engineering, this is the exact pattern you've seen before.

Before version control, teams emailed code to each other.

Here's the latest version. Hope it's current. Trust me.

Tarballs. FTP uploads. Shared network drives. No standard format for a "change." No diff. No commit history. No way to verify what the other team actually had.

Git didn't fix bad developers. It fixed the format.

One standard for tracking what changed, when, who changed it, and whether the history is intact. The entire software industry now runs on it.

Compliance is still emailing tarballs.

No standard format for a proof. No way to diff two assessments. No verifiable history. Every exchange is manual, bespoke, and trust-based.

That's the gap Corsair closes.

Corsair is an open-source compliance infrastructure protocol.

It gives security tools a standard way to sign their output into a cryptographically verifiable proof, called a CPOE (Certificate of Proof of Operational Effectiveness). And it gives relying parties a standard way to verify that proof.

One format. Any tool can produce it. Anyone can verify it. No vendor account required.

There is no other open protocol that does this. Corsair is the first.

Six primitives. Each does one thing.

The git model applied to compliance: composable commands, each with a single responsibility, that together form a verifiable chain.

corsair sign (https://grcorsair.com/sign): Takes the output of your security tool and signs it into a CPOE. Like git commit for compliance evidence.

corsair log (https://grcorsair.com/log): Registers the proof in an append-only transparency log (SCITT). The timestamp is anchored and verifiable. Like a commit that can never be force-pushed. (SCITT registration is opt-in: include corsair log when you want a proof in the public transparency log, skip it for proofs that shouldn't be publicly registered.)

corsair trust-txt generate (https://grcorsair.com/publish): Publishes a trust.txt discovery file so any agent or system can find and verify your CPOEs automatically. Like git push: it makes your proof discoverable.

corsair verify (https://grcorsair.com/verify): Checks the signature on any CPOE. Anyone can run this: no Corsair account, no API key, no trust required in the issuer. Like checking a commit hash.

corsair diff (https://grcorsair.com/diff): Compares two CPOEs and shows what changed: what controls improved, what regressed, what's new. Like git diff for your security posture.

corsair signal (https://grcorsair.com/signal): Notifies relying parties when posture changes, in real time. Instead of finding out about a regression at the next annual review.

What this looks like in practice.

Two ways to handle a vendor security review.

The old way:

→ Send a 300-question questionnaire

→ Vendor's compliance team answers from memory and policy docs

→ Screenshot of MFA screen attached as "proof"

→ PDF returned, reviewed, filed

→ Annual repeat. Nobody verifies whether a control actually works.

The Corsair way:

→ Vendor runs their security tools

→ corsair sign --file scan-results.json (output is an Ed25519-signed CPOE)

→ Relying party runs corsair verify (takes seconds)

→ Result: cryptographically confirmed, not trusted

Same controls. Completely different answer to the question: "can you prove it?"

The crypto stack mapped to the problems each technology solves.

Here's where it gets technical. I'll map every piece to the specific problem it exists to solve, because the jargon only earns its place if you understand why it's there.

Ed25519 signatures → Solves: "How do I know the evidence wasn't modified after it left the tool?"

When a CPOE is signed with Ed25519, any modification to the data after signing breaks the signature. The verifier doesn't need to trust the sender: the math makes tampering visible immediately.
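You can feel this property with plain openssl (1.1.1 or newer) before ever touching Corsair. A standalone sketch, not Corsair's actual signing path; the file names and the evidence string are invented:

```shell
# Generate an Ed25519 keypair and sign a scrap of "evidence"
openssl genpkey -algorithm ed25519 -out key.pem
openssl pkey -in key.pem -pubout -out pub.pem

printf 'mfa_enforced=true' > evidence.txt
openssl pkeyutl -sign -rawin -inkey key.pem -in evidence.txt -out evidence.sig

# Untouched evidence: verification succeeds (exit code 0)
openssl pkeyutl -verify -rawin -pubin -inkey pub.pem \
  -in evidence.txt -sigfile evidence.sig

# Change one character and the same signature fails to verify
printf 'mfa_enforced=false' > evidence.txt
openssl pkeyutl -verify -rawin -pubin -inkey pub.pem \
  -in evidence.txt -sigfile evidence.sig || echo "tampering detected"
```

The failure is mechanical, not judgmental: nobody has to argue about whether the evidence changed.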

W3C Verifiable Credentials → Solves: "How do I share this proof in a format any system can read, without proprietary tooling?"

The CPOE is a standard JWT. Any JWT library on Earth can decode and verify it. No Corsair SDK required. No account. If Corsair ceased to exist tomorrow, every proof already issued would still be verifiable.
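To make that concrete, here is a CPOE-shaped JWT pulled apart with nothing but coreutils. The token is a hand-built dummy (a header, an issuer claim, and a fake signature), not a real CPOE:

```shell
# A JWT is three base64url segments joined by dots: header.payload.signature
CPOE='eyJhbGciOiJFZERTQSJ9.eyJpc3MiOiJkaWQ6d2ViOmFjbWUuY29tIn0.c2ln'

b64url_decode() {
  # base64url -> base64: swap the URL-safe alphabet back, then re-pad
  local s="${1//-/+}"
  s="${s//_//}"
  case $(( ${#s} % 4 )) in 2) s="$s==";; 3) s="$s=";; esac
  printf '%s' "$s" | base64 -d
}

b64url_decode "$(printf '%s' "$CPOE" | cut -d. -f1)"; echo   # {"alg":"EdDSA"}
b64url_decode "$(printf '%s' "$CPOE" | cut -d. -f2)"; echo   # {"iss":"did:web:acme.com"}
```

No SDK, no account: just the standard JWT envelope.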

DID:web → Solves: "How do I know who actually issued this proof?"

The issuer's public key lives at a URL they control, e.g. https://acme.com/.well-known/did.json. Anyone can fetch and inspect it. No central certificate authority. No registry to trust. You verify the issuer by resolving a URL.
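For the simple bare-domain case, that resolution is a string rewrite. The `verificationMethod`/`publicKeyJwk` path in the comment comes from the W3C DID data model, not from anything Corsair-specific:

```shell
# did:web resolution, bare-domain case: the DID string itself
# tells you where the issuer's DID document lives.
did="did:web:acme.com"
domain="${did#did:web:}"                      # -> acme.com
url="https://${domain}/.well-known/did.json"
echo "$url"                                   # https://acme.com/.well-known/did.json

# A verifier would then fetch the document and extract the key, e.g.
#   curl -s "$url" | jq '.verificationMethod[0].publicKeyJwk'
```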

SCITT transparency log → Solves: "How do I know this proof wasn't backdated?"

SCITT is an IETF-standard append-only log. Each entry gets a Merkle inclusion proof. You can verify both the timestamp and the integrity of every entry, without trusting the log operator.
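The inclusion-proof idea fits in a few lines. This is a toy two-leaf tree built with sha256sum, nothing like SCITT's actual COSE-encoded receipts, but the verifier's move has the same shape: recompute the root from the leaf plus the proof, then compare:

```shell
h() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

# Log operator publishes the root of a two-leaf tree
root=$(h "$(h leaf-a)$(h leaf-b)")

# Inclusion proof for leaf-a is just its sibling's hash
sibling=$(h leaf-b)

# Verifier recomputes the root from the leaf and the proof;
# a match means leaf-a is really in the log
recomputed=$(h "$(h leaf-a)${sibling}")
[ "$recomputed" = "$root" ] && echo "leaf-a is in the log"
```

If the operator tampered with any entry, every root downstream of it would stop matching.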

SSF/CAEP signals → Solves: "How do I find out when a vendor's posture changes, not at the next annual review, but when it happens?"

When a signed control state changes, Corsair emits a cryptographic notification to relying parties in real time. Compliance becomes continuous instead of point-in-time.

None of these are Corsair-proprietary. All five are open standards. That's the point.

How does it work for me?

For GRC leaders, a new way of evaluating vendors

Corsair changes what vendors send you.

Instead of a questionnaire answer, they send a signed proof. Your team or your tools can verify it in seconds. The signature either checks out or it doesn't.

Evidence stays current. Proofs are refreshable. When a vendor's posture changes, you find out instead of discovering it at the next renewal cycle.

The audit trail is verifiable. Every proof records where the evidence came from: self-assessed, tool-generated, or auditor-verified. Your auditors see provenance, not just assertion.

You set the thresholds. Corsair doesn't decide what's sufficient. Your program does. The protocol carries the proof. Your policy artifacts decide what to accept.

For engineers: get started in five minutes.

# Install

brew install grcorsair/corsair/corsair

# Initialize: generates your Ed25519 signing keys

corsair init

# Sign your first proof

corsair sign --file scan-output.json --output cpoe.jwt

# Verify it, no account needed

corsair verify --file cpoe.jwt

# After every scan run in a CI pipeline

corsair sign --file trivy-output.json --mapping ./mappings/trivy.json --output cpoe.jwt

corsair log --file cpoe.jwt

corsair diff --current cpoe.jwt --previous last-run.jwt

Corsair is agent-native.

Every primitive is a CLI command. Agents are already fluent in Unix and Bash: there's no GUI to navigate, no portal to log into, no human interface in the way. An agent can compose sign, log, verify, and diff the same way it would any shell pipeline.

trust.txt replaces the trust center model entirely. Instead of a compliance team maintaining a Trust Center page and a human on the other side reviewing it, an agent fetches trust.txt, discovers the vendor's CPOEs, and runs corsair verify. The entire vendor review is automatable, with no human step required on either side.
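A minimal sketch of that agent flow, run against a saved file. The trust.txt field names below are pure assumption on my part, chosen for illustration; check the Corsair spec for the real schema:

```shell
# Hypothetical trust.txt a vendor might publish at
# https://acme.com/.well-known/trust.txt (format is illustrative)
cat > trust.txt <<'EOF'
contact: security@acme.com
cpoe: https://acme.com/proofs/cis-aws.jwt
cpoe: https://acme.com/proofs/container-scan.jwt
EOF

# An agent lists the advertised proofs; for each, it would fetch
# the file and run: corsair verify --file <downloaded.jwt>
grep '^cpoe:' trust.txt | awk '{print $2}'
```

Discovery, download, verify: every step is scriptable, so the review needs no human on either side.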

Corsair is LLM-free as an application: it's a signing and verification protocol. An agent doesn't ask a language model to check a signature; it runs the command and gets a binary result: valid or not. That makes the protocol fully leverageable by any automated system, with or without an LLM in the loop.

Agents can sign with OIDC tokens. No key management, no portal setup. For machine onboarding, POST /onboard returns did.json, jwks.json, and trust.txt in a single call. The entire setup is automatable from first call.

But the deepest implication is longer-term. As AI agents operate autonomously in business contexts, they need to establish trust with each other. An AI procurement agent needs to verify that a vendor AI agent's infrastructure meets certain security thresholds. A pipeline agent needs to confirm that the code it's deploying was tested against controls that are currently verified. CPOEs give agents a cryptographic basis for that exchange (provable, portable, verifiable) without a human attestation in the middle.

Corsair is how agents prove things to other agents.

Corsair is also published on skills.sh as an AI agent skill (Claude Code, Cursor, Copilot, and 25+ others) and installs in a couple of minutes.

You can then use the skill directly in your existing AI workflows.

npx skills add grcorsair/corsair

Why open source?

Corsair is Apache 2.0. The CPOE specification is CC BY 4.0.

Verification is free: no pricing tier that gates checking a signature, no enterprise plan required to consume a proof. A proof format only works if everyone can verify it, including the vendors you send proofs to, the tools built on top of them, and the agents running automated TPRM.

Pull requests are welcome: the whole source is available for you to review, fork, and build on.

Why the pirate name?

The branding isn't aesthetic, it's structural.

In the 16th century, Pieces of Eight became the world's first global currency. Not because Spain mandated them. Because anyone could verify them, cut them, weigh them, bite them. The verification was built into the artifact itself. Not dependent on trusting the issuer.

Modern compliance runs the opposite model. We pass each other certificates of authority: "Trust the auditor. Trust the vendor. Trust the report." Nobody verifies the artifact, we verify the reputation of whoever sent it.

A CPOE works like a Piece of Eight.

The acronym is intentional twice over: Certificate of Proof of Operational Effectiveness and, not coincidentally, Corsair Pieces of Eight.

Decode it. Resolve the issuer. Extract the public key. Check the signature. Four steps. Any JWT library. No authority required.

Trust through verification, not through promises.


That's all for this week's issue, folks!


See you next week!
