Run Your AI Bets Like a VC Portfolio

Sam Gaddis

Picture a boardroom. Someone's asking for $2 million for "an AI initiative." The room goes quiet. Everyone's thinking the same thing: what exactly are we buying? A chatbot? A recommendation engine? A vague promise that we'll be "more efficient"? The CFO wants a business case. The CIO wants a technical architecture. The CEO wants to know why their competitor just announced something that sounds similar but shinier.

This is the wrong conversation. Not because AI isn't worth $2 million — it might be worth far more — but because the framing assumes you know which bet will pay off before you place it. You don't. Nobody does. And the organizations that pretend otherwise are the ones that end up eighteen months into a project that never ships.


Here's the mental model that actually works: treat your AI investments like a venture capital portfolio.

VCs don't make money by hitting singles. They take a lot of at-bats, watch most of them fail, and make their returns on the small number of bets that break out. They expect failure. They plan for it. A 10% hit rate on transformative outcomes is a great year.

Most organizations approach AI the opposite way. They pick one big initiative, staff it like a traditional software project, and treat failure as an indictment of the technology. Then they cite the (outdated) MIT study claiming 95% of AI projects fail — which, by the way, used data from an era when the models genuinely weren't good enough. A more recent Wharton study puts the success rate closer to 50%, and that's before you optimize for fast iteration.

The math changes completely when you run many small bets instead of a few big ones.
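To see why, here is a back-of-the-envelope sketch in Python. The 10% hit rate is the illustrative figure from above; treating the bets as independent, and the particular bet counts, are assumptions made purely for the sake of the example.

```python
# Back-of-the-envelope portfolio math: the chance that at least one bet
# breaks out, given an assumed 10% hit rate per bet and independence
# between bets (both simplifying assumptions).

HIT_RATE = 0.10  # assumed probability that any single bet breaks out

def chance_of_at_least_one_win(num_bets: int, hit_rate: float = HIT_RATE) -> float:
    """Probability that at least one of num_bets independent bets pays off."""
    return 1 - (1 - hit_rate) ** num_bets

for n in (1, 3, 10, 20):
    print(f"{n:>2} bets: {chance_of_at_least_one_win(n):.0%} chance of at least one winner")
```

With those assumptions, a single bet leaves you with a 90% chance of nothing to show for it. Twenty bets push the odds of at least one winner to roughly 88%, even though each individual bet is just as likely to fail.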


This approach wasn't possible two years ago. It's possible now because of something I've started calling the speed collapse.

What used to take a team of ten people six months can now be done by one person in a month. Sometimes faster. The economics of software prototyping have fundamentally shifted, and most companies haven't internalized this yet. They're still budgeting like it's 2019, allocating large sums to large teams over long timelines.

Here's what that means in practice: you can now allocate a quarter million dollars to an AI initiative and get ten to twenty prototypes out of it. Not one or two large pieces of software that you don't know will work — ten to twenty small experiments that you can validate with real users in weeks.

Most of them won't work. That's fine. That's the point. You're not trying to be right on the first try. You're trying to find the two or three ideas that actually solve problems your organization has, then double down on those.
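For concreteness, here is the rough arithmetic, again in Python. The $250,000 allocation and the ten-to-twenty range come from the paragraph above; the per-prototype costs and the 15% hit rate are assumptions chosen only to illustrate the sizing.

```python
# Rough portfolio sizing: how many prototypes a fixed allocation buys, and how
# many winners to expect. The allocation comes from the text; the per-prototype
# costs and the hit rate are illustrative assumptions.

ALLOCATION = 250_000  # experimental budget from the text, in dollars
HIT_RATE = 0.15       # assumed fraction of prototypes worth doubling down on

for cost_per_prototype in (25_000, 12_500):  # assumed cost of a two-week prototype
    prototypes = ALLOCATION // cost_per_prototype
    expected_winners = prototypes * HIT_RATE
    print(f"${cost_per_prototype:,} per prototype: {prototypes} experiments, "
          f"about {expected_winners:.1f} expected winners")
```

Even at the high end of that cost range, the allocation buys ten real experiments, which is enough for the hit-rate math above to start working in your favor.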


So how do you actually execute this? You find your operator-engineer.

An operator-engineer is someone who combines business sense, technical ability, and fluency with AI tools. They're not a committee. They're not a center of excellence. They're one person — maybe two — who can take a vague business problem and turn it into a working prototype in a week or two. Think of them as the Bob Ross of vibe coding: calm, competent, surprisingly prolific.

You give them access to data (within 48 hours, ideally — see our IT Objection Handler). You give them a business problem. You let them build. Two weeks later, you have something users can actually touch. If it doesn't work, you kill it and move on. If it does, you invest more.

This requires organizational tolerance for experiments that go nowhere. It requires IT departments that enable rather than obstruct. It requires executives who understand that most bets are supposed to fail — that failure is the mechanism by which you find the wins, not evidence that the strategy is broken.

Here's what would need to be true for this to work at your organization:

You'd need at least one person who can operate at the intersection of business and technology — someone who's already playing with these tools on their own time, probably. You'd need IT to make data accessible quickly, even if that means working with anonymized samples rather than live systems. You'd need leadership willing to fund experiments without demanding certainty about outcomes. And you'd need a culture that treats failed prototypes as learning rather than waste.

That's a lot to ask. But the alternative — betting everything on one big initiative and hoping it works — has a worse track record. The companies that figure out how to run lots of small experiments, fast, are going to find things that work. The ones that don't will still be in that boardroom, arguing about architecture diagrams for a project that never ships.

Your competitors are already placing bets. The question is whether you're placing enough of them.
