Dario Amodei just published an essay called “The Adolescence of Technology,” and it’s worth reading in full. His core argument is that AI is entering an accelerating feedback loop: AI systems helping to build better AI systems, compressing the timeline between capability leaps.

He’s right. But the practical version of this problem for people who build and ship software isn’t existential risk. It’s organizational.

Your AI tools are shipping new capabilities weekly. Your security review process takes six weeks. Your compliance team is still writing policies for last quarter’s model.

That gap between AI velocity and governance velocity is the defining challenge for every CTO right now. I don’t care what industry you’re in.

The Velocity Mismatch

Let me make this concrete. In the past three months:

Anthropic launched MCP Apps, bringing interactive tools directly into Claude conversations. Agent Skills became an open standard. Claude Code got concurrent agent support, background task execution, and worktree isolation. OpenAI announced plans to sunset their Assistants API in favor of the Responses API. Google shipped Gemini 3 and Gemini 3 Flash. New agentic frameworks appeared seemingly every week.

Each change introduces new capabilities and new risks. Each one potentially changes what your AI systems can access, how they behave, and what attack surface they expose.

Now here’s the governance side. Updating a security policy takes weeks of review. Compliance approvals require legal sign-off. Vendor risk assessments follow a quarterly cadence. Change management processes were designed for software that ships monthly, not AI capabilities that evolve weekly.

The math doesn’t work. And it’s getting worse.

Three Ways This Shows Up

Shadow AI. When governance moves too slowly, people route around it. Engineers start using new AI tools without formal approval because the formal process would take longer than the project itself. This isn’t malice. It’s pragmatism. But it creates unmonitored risk.

I’ve seen this in every organization I’ve led. At Zipcar, shadow IT was a constant challenge when the engineering team needed tools faster than procurement could evaluate them. The difference now is that AI tools are far more powerful (and therefore far more risky) than a rogue SaaS subscription. An engineer using an unapproved AI agent with access to production data is a fundamentally different risk profile than someone signing up for a project management tool.

Stale controls. Your security team built controls for last month’s AI capabilities. This month, the model can browse the web, execute code, and interact with external services through MCP servers. The controls that were appropriate for a text-generation tool are insufficient for an agentic system with tool access. The gap between the controls you have and the controls you need grows with every capability update.

Analysis paralysis. Some organizations respond by slowing everything down. They require exhaustive review for every AI feature, every model update, every new tool. This feels prudent, but it introduces a different kind of risk: the risk of falling so far behind that you can never catch up.

How I’m Trying to Close the Gap

I don’t pretend to have this solved. Nobody does. But here’s what we’re experimenting with:

Tiered governance. Not every AI change requires the same level of review. Low-risk changes (prompt updates, UI modifications) go through a fast path. High-risk changes (new tool access, new data sources, new action capabilities) go through more rigorous review. The key is having a clear framework for classification, not treating everything as high-risk by default.
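A minimal sketch of what that classification framework might look like in practice. The change categories and tier names here are illustrative assumptions, not a standard taxonomy; the point is that the routing logic is explicit and defaults unknown changes to the rigorous path.

```python
# Hypothetical tiered-review router. Category names and tiers are
# illustrative assumptions for this sketch, not an established standard.

HIGH_RISK = {"new_tool_access", "new_data_source", "new_action_capability"}
LOW_RISK = {"prompt_update", "ui_modification"}

def review_tier(change_type: str) -> str:
    """Route an AI change to a review path based on its risk class."""
    if change_type in HIGH_RISK:
        return "full_review"   # security review plus legal sign-off
    if change_type in LOW_RISK:
        return "fast_path"     # automated checks plus peer approval
    return "full_review"       # unclassified changes get the rigorous path
```

Defaulting unknowns to full review is the design choice that keeps the fast path from becoming a loophole.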

Automated compliance checks. Some governance requirements can be checked programmatically. Does the AI have access to PII? Is the output logged for audit? Are the tool permissions scoped correctly? Build these into the CI/CD pipeline so they run automatically with every deployment. This doesn’t eliminate human review. It catches the obvious issues before they reach the review stage.
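Those three questions can be sketched as a pre-deploy check. The manifest schema here (`pii_access`, `audit_logging`, `tool_scopes`) is an assumption invented for illustration; in a real pipeline these fields would come from your deployment configuration.

```python
# Illustrative pre-deploy compliance gate. The manifest field names
# are assumptions for this sketch, not a real schema.

def run_compliance_checks(manifest: dict) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    if manifest.get("pii_access") and not manifest.get("pii_approved"):
        violations.append("PII access without approval")
    if not manifest.get("audit_logging", False):
        violations.append("output is not logged for audit")
    for tool, scope in manifest.get("tool_scopes", {}).items():
        if scope == "*":
            violations.append(f"tool '{tool}' has an unscoped permission")
    return violations
```

Wired into CI, a non-empty violations list fails the build, so the obvious issues never reach a human reviewer.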

Governance as code. This is still early. The idea is to express governance policies in a machine-readable format that the deployment system can enforce. Instead of a PDF that says “AI systems must not access sensitive data without authorization,” you have a policy file that the system evaluates at deploy time. This borrows from infrastructure-as-code and applies it to AI governance.
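As a rough sketch of the idea, assuming a made-up policy schema: the "no sensitive data without authorization" sentence becomes a data structure, and the deploy system evaluates it against each deployment's declared resources and grants.

```python
# Governance-as-code sketch. The policy schema and field names
# (resource_tags, requires, grants) are assumptions for illustration.

POLICY = {
    "deny_sensitive_data_without_authorization": {
        "resource_tags": ["sensitive"],   # tags this rule applies to
        "requires": "authorization",      # grant that must be present
    }
}

def evaluate(policy: dict, deployment: dict) -> bool:
    """Return True only if the deployment satisfies every policy rule."""
    for rule in policy.values():
        touches_sensitive = any(
            tag in deployment.get("resource_tags", [])
            for tag in rule["resource_tags"]
        )
        if touches_sensitive and rule["requires"] not in deployment.get("grants", []):
            return False
    return True
```

In production you would more likely reach for an existing policy engine than roll your own evaluator, but the shape is the same: policy as data, enforced at deploy time.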

Continuous monitoring. Instead of quarterly security assessments, monitor AI system behavior continuously. What tools are agents calling? What data are they accessing? How often are they being overridden by human reviewers? These signals provide early warning when a system is drifting outside its intended parameters.
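Two of those signals, unexpected tool calls and a rising human-override rate, can be sketched as a simple check over an event stream. The event fields and the threshold are illustrative assumptions; a real system would feed these alerts into your monitoring stack.

```python
# Sketch of behavioral drift signals over agent events. Event fields
# and the override threshold are illustrative assumptions.

from collections import Counter

def drift_alerts(events: list[dict], allowed_tools: set[str],
                 override_threshold: float = 0.2) -> list[str]:
    """Flag tool calls outside the allowlist and high override rates."""
    alerts = []
    tool_calls = Counter(e["tool"] for e in events if "tool" in e)
    for tool in tool_calls:
        if tool not in allowed_tools:
            alerts.append(f"unexpected tool call: {tool}")
    overrides = sum(1 for e in events if e.get("overridden"))
    if events and overrides / len(events) > override_threshold:
        alerts.append("human override rate above threshold")
    return alerts
```

Run continuously, a check like this turns the quarterly assessment question ("is the system doing what we approved?") into a standing alert.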

The Uncomfortable Truth

Governance will never fully keep pace with AI velocity. The technology moves too fast, the capability surface is too large, and the threat landscape evolves too quickly for any review process to be comprehensive.

This means CTOs need to get comfortable with managed uncertainty. You can’t guarantee that every AI interaction will be perfect. What you can do is build systems that detect and respond to failures quickly, maintain audit trails for accountability, and create feedback loops that continuously improve your controls.

Dario Amodei is right that AI is entering a feedback loop. The question for practitioners is whether our governance systems can enter a feedback loop of their own: detecting new risks, adapting controls, and evolving policies at something closer to the speed of AI itself.

We’re not there yet. But it’s the most important engineering problem I’m working on.