The GRC SaaS Killer
GRC platforms have been selling the same dream for twenty years. One place for your risks, your controls, your evidence, your reports. A single pane of glass for your entire security program.
The dream is compelling. The reality is a data entry nightmare wrapped in a six-figure renewal.
I’ve been thinking about this a lot lately, and about why AI models just made the alternative the obvious choice for any team willing to build it.
What GRC Platforms Promise
To be fair, the feature list is real. Every major GRC platform ships with:
- A risk register for tracking and scoring risks
- A control library mapped to frameworks like SOC 2, ISO 27001, and NIST
- Evidence collection workflows tied to audit cycles
- Ticketing and workflow automation to assign and track remediation
- Reporting and dashboards for leadership and auditors
On paper, that covers the full GRC lifecycle. In a demo, it looks seamless. In practice, each of these features has a catch.
Risk Registers and Control Libraries
The promise: a living inventory of your risks and controls, always current, always mapped.
The reality: someone has to maintain it. Every risk needs a description, an owner, a score, a status. Every control needs to be linked to the right frameworks, kept current as your environment changes, and reviewed on a cadence. The platform doesn’t do that work. It gives you fields to fill in.
Control libraries are worse. Vendors ship pre-built libraries mapped to common frameworks. They look comprehensive until your environment has nuances the library doesn’t accommodate — and it always does. Customizing a control library in most GRC platforms is a project, not a task.
With a repo-based approach, your risk register is a structured data file. Your control library is code. Both are version-controlled. Changes are committed with context. The history is yours. When your environment evolves, you update the source — not a vendor’s data model.
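As a minimal sketch of what "risk register as structured data" can mean: the register is a data file in the repo, and a few lines of code enforce its schema. The field names below are illustrative, not a standard.

```python
# Hypothetical schema for one risk entry; adapt the fields to your program.
REQUIRED_FIELDS = {"id", "description", "owner", "score", "status"}

# In a repo this would live in a data file (e.g. risks.json) under version
# control; it's inlined here so the sketch is self-contained.
risk_register = [
    {"id": "R-001", "description": "Unencrypted backups", "owner": "infra",
     "score": 12, "status": "open"},
    {"id": "R-002", "description": "Stale IAM credentials", "owner": "security",
     "score": 8, "status": "mitigating"},
]

def validate(register):
    """Return a list of (risk_id, missing_fields) problems."""
    problems = []
    for entry in register:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append((entry.get("id", "?"), sorted(missing)))
    return problems

print(validate(risk_register))  # → [] when every entry is complete
```

A check like this can run in CI, so an incomplete risk entry fails the build instead of lingering unnoticed in a vendor database.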
Framework Mapping
The promise: map your controls once, get instant coverage views across SOC 2, ISO, NIST, and whatever framework comes next.
The reality: the mappings are approximations. Every vendor’s framework library reflects their interpretation of the standards, not yours. When an auditor pushes back on a control mapping, you’re arguing against a black box.
New frameworks and updates lag. When NIST drops a revision or a new regulation lands, you wait for the vendor’s roadmap.
With a repo, your mappings are explicit and editable. You can see exactly why a control maps to a requirement. When something changes, you change it — in a pull request, with a comment, reviewed by the team. Auditors can see the logic. So can you.
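One possible shape for explicit mappings: a plain dictionary from control IDs to framework requirement IDs, with coverage computed directly from it. The control and requirement IDs here are hypothetical examples.

```python
# Illustrative mapping from control IDs to framework requirement IDs.
# Every mapping is visible, editable, and reviewable in a pull request.
mappings = {
    "AC-01": ["SOC2:CC6.1", "ISO27001:A.5.15"],
    "AC-02": ["SOC2:CC6.2"],
    "LOG-01": ["SOC2:CC7.2", "ISO27001:A.8.15"],
}

def coverage_by_framework(mappings):
    """Count mapped requirements per framework prefix."""
    counts = {}
    for reqs in mappings.values():
        for req in reqs:
            framework = req.split(":", 1)[0]
            counts[framework] = counts.get(framework, 0) + 1
    return counts

print(coverage_by_framework(mappings))  # → {'SOC2': 3, 'ISO27001': 2}
```

When a framework revision lands, the diff to this file is the change log an auditor can actually read.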
Evidence Collection and Audit Readiness
The promise: automated evidence collection tied to your controls, always audit-ready.
The reality: automation covers a narrow slice. Screenshots, exports, and manual uploads cover the rest. Someone is still spending weeks before every audit hunting down evidence, reformatting it, and uploading it to the right place in the platform.
Evidence also lives inside the platform. If you switch vendors or lose access, so does your audit history.
With a repo, evidence collection is a script. Pull the data you need from your actual systems, store it in your own infrastructure, and version it alongside the controls it supports. MCP servers — implementations of the Model Context Protocol — let AI models connect directly to your data sources, ticketing systems, and documentation tools to pull and organize evidence on demand. Your audit package is generated, not assembled by hand.
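A sketch of the "evidence as a script" idea: record a content hash and collection timestamp for each evidence file, keyed to the control it supports. The demo uses a throwaway temp file standing in for a real export from a source system; the metadata fields are illustrative.

```python
import datetime
import hashlib
import json
import pathlib
import tempfile

def record_evidence(path, control_id, out_dir):
    """Store a content hash and metadata for an evidence file, keyed to a control."""
    data = pathlib.Path(path).read_bytes()
    record = {
        "control": control_id,
        "file": pathlib.Path(path).name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    out = pathlib.Path(out_dir) / f"{control_id}.json"
    out.write_text(json.dumps(record, indent=2))
    return record

# Demo: a temp file stands in for an export pulled from a real system.
with tempfile.TemporaryDirectory() as d:
    export = pathlib.Path(d) / "mfa_report.csv"
    export.write_text("user,mfa_enabled\nalice,true\n")
    rec = record_evidence(export, "AC-01", d)
    print(rec["control"], rec["sha256"][:12])
```

Commit the JSON record alongside the control it supports and the hash proves the evidence hasn't changed since collection, with no vendor in the custody chain.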
Workflow Automation and Ticketing
The promise: assign remediation tasks, track status, and close the loop — all inside the platform.
The reality: your engineers don’t live in the GRC platform. They live in Jira, Linear, GitHub, or whatever your engineering org uses. Every finding that needs remediation has to be manually translated into a ticket in the system your engineers actually check. Status updates flow in one direction — someone has to keep the GRC platform current by hand.
With a repo-based approach and the right MCP connections, that translation is automatic. A finding in your risk data triggers a ticket in your engineering system, pre-populated with the right context, labels, and assignee. Status syncs back. Nothing lives only in the GRC platform because there is no GRC platform to be siloed in.
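The translation step can be sketched as a pure function from a risk finding to a ticket payload. The payload fields mirror common issue-tracker APIs but are illustrative, not a real API call; the actual create would go through your tracker's MCP connection or REST client.

```python
def finding_to_ticket(finding, default_assignees):
    """Translate a risk-register finding into an engineering-tracker payload.

    Field names are illustrative; map them to your tracker's actual schema.
    """
    return {
        "title": f"[{finding['id']}] {finding['description']}",
        "body": (
            f"Risk score: {finding['score']}\n"
            f"Status: {finding['status']}\n"
            "Source: risk register (see repo history for context)"
        ),
        "labels": ["grc", "remediation"],
        "assignee": default_assignees.get(finding["owner"], "triage"),
    }

finding = {"id": "R-001", "description": "Unencrypted backups",
           "score": 12, "status": "open", "owner": "infra"}
ticket = finding_to_ticket(finding, {"infra": "alice"})
print(ticket["title"])  # → [R-001] Unencrypted backups
```

Because the function is deterministic, the same finding always produces the same ticket, which makes the sync in both directions testable.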
Reporting and Dashboards
The promise: executive dashboards, audit reports, and board-ready summaries generated automatically.
The reality: the outputs look like enterprise software from 2015. Rigid templates. Vendor branding. Limited customization. The reports that come out of GRC platforms rarely look as good as what the vendor showed you in the demo.
Custom reports are a configuration project. Anything outside the vendor’s templates requires professional services or a workaround.
With a repo, your reports are generated from your data. The format is yours. AI models can take raw metrics and draft the narrative — identifying the trend that matters, framing it for the right audience, producing something you’d actually want to send to your board. You edit. You don’t fight a template.
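A minimal sketch of report generation from your own data: raw metrics in, Markdown out. The metric names are hypothetical; an AI model would take output like this and draft the narrative around it.

```python
def monthly_summary(metrics):
    """Render a plain-Markdown summary from raw program metrics.

    Metric keys are illustrative; use whatever your program tracks.
    """
    lines = [
        "## Security Program Summary",
        "",
        f"- Open risks: {metrics['risks_open']}",
        f"- Risks closed this month: {metrics['risks_closed_this_month']}",
        f"- Controls passing: {metrics['controls_passing']}/{metrics['controls_total']}",
    ]
    return "\n".join(lines)

report = monthly_summary({"risks_open": 14, "risks_closed_this_month": 5,
                          "controls_passing": 92, "controls_total": 100})
print(report)
```

The template is a function in your repo, so changing the format for a board deck is an edit, not a professional-services engagement.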
The Repo-Based Model
Here’s what this looks like in practice.
Your GRC program lives in a shared code repository. Risks, controls, framework mappings, and assessment logic are structured data and scripts — readable, editable, and version-controlled by the whole team.
Skills are reusable task definitions — think of them as documented workflows your AI model knows how to execute. Run a vendor risk assessment. Generate the monthly metrics package. Produce an audit evidence summary. Each skill encodes the steps, the data sources, and the expected output.
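One way to encode a skill is as a declarative task definition that either an AI model or a human can follow. The structure and field names below are an assumption about how you might organize it, shown as a sketch.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A reusable task definition: steps, data sources, expected output."""
    name: str
    steps: list
    data_sources: list
    expected_output: str

# Hypothetical skill for the vendor risk assessment mentioned above.
vendor_risk_assessment = Skill(
    name="vendor-risk-assessment",
    steps=[
        "Pull the vendor's security questionnaire responses",
        "Score each answer against the rubric in rubric.json",
        "Draft a summary with the top findings and a recommendation",
    ],
    data_sources=["vendor portal export", "rubric.json"],
    expected_output="Markdown assessment with a risk tier (low/medium/high)",
)
print(vendor_risk_assessment.name)  # → vendor-risk-assessment
```

The definition lives in the repo next to the data it operates on, so improving a skill is a reviewed commit like everything else.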
MCP servers are the connections. They let your AI model reach into your actual systems — your ticketing tool, your documentation platform, your cloud environment — to pull data, create records, and update status without manual translation.
Put them together and you have a GRC program that executes on demand, reasons about what it finds, and produces outputs your team can actually use.
Why This Wins
GRC SaaS platforms create a ceiling. The vendor’s data model, roadmap, and pricing define what’s possible. When your needs outgrow their assumptions, you’re filing a support ticket or writing a check for professional services.
A repo has no ceiling. When your needs change, you change the code. When AI models get more capable — and they will — your program gets more capable with them.
Every change is a commit. Every output is reproducible. When an auditor asks how you calculated a risk score a year ago, you check out the tag and run it.
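The reproducibility claim rests on the scoring logic being deterministic code. A common convention is a likelihood-times-impact model, sketched here as an assumption; checking out the tag an audit was run against and rerunning this function yields the same score every time.

```python
def risk_score(likelihood, impact):
    """Score on a 1-25 scale from 1-5 likelihood and impact ratings.

    Deterministic by design: same inputs, same code version, same score.
    """
    assert 1 <= likelihood <= 5, "likelihood must be rated 1-5"
    assert 1 <= impact <= 5, "impact must be rated 1-5"
    return likelihood * impact

print(risk_score(4, 3))  # → 12
```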
Try asking your GRC SaaS vendor for that.
The platform was never the point. The program is.