Coordination Technologies
In search of the right funding structures for the AI era.
In the traditional SaaS model, you pay high upfront development costs and then amortize them over as many users as possible at relatively low marginal cost. AI changes this relationship because while it’s easier to go to market, the economics get harder as you scale, since your most engaged users are consuming the most inference and driving the highest variable costs. These “tokenomics” run in the opposite direction of SaaS economics, and it’s not yet clear you can capitalize tokens the way you could capitalize software development.
The obvious counterargument is that inference costs are falling fast, roughly 10x per year, and that this problem resolves itself on a timeline well within a normal venture fund’s horizon. But cheaper inference doesn’t actually reduce your total spend per user because as compute gets cheaper, people use dramatically more of it. Lower costs unlock new use cases, heavier usage patterns, and broader deployment across an organization, so the per-token cost falls but total inference spend per user rises. Your user cohorts remain expensive even as the unit economics of any individual call improve.
In effect, you’re running down an escalator.
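To make the escalator concrete, here's a toy sketch. The ~10x/year per-token cost decline comes from the paragraph above; the usage growth rate is a purely hypothetical assumption chosen for illustration. If usage grows faster than costs fall, per-user spend rises even as per-token cost collapses.

```python
# Toy model of the "running down an escalator" dynamic.
# The ~10x/year cost decline is from the text above; the usage
# growth rate is a hypothetical assumption, for illustration only.

COST_DECLINE_PER_YEAR = 10   # per-token cost falls ~10x/year (from text)
USAGE_GROWTH_PER_YEAR = 15   # tokens consumed per user grow 15x/year (assumed)

cost_per_token = 1.0    # normalized starting cost
tokens_per_user = 1.0   # normalized starting usage

for year in range(4):
    spend_per_user = cost_per_token * tokens_per_user
    print(f"year {year}: cost/token={cost_per_token:.4f}x, "
          f"tokens/user={tokens_per_user:,.0f}x, spend/user={spend_per_user:.2f}x")
    cost_per_token /= COST_DECLINE_PER_YEAR
    tokens_per_user *= USAGE_GROWTH_PER_YEAR
```

Under these (invented) rates, per-token cost falls 1,000x over three years while per-user spend still grows roughly 1.5x per year: the unit improves, the cohort gets more expensive.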
And this compounds with a simultaneous competition problem that SaaS never faced at the same intensity: when the cost of building software approaches zero for everyone, feature differentiation compresses and you have to defend lifetime value through workflow integration and data advantages rather than product surface area. The companies that build those moats can eventually price on value delivered rather than tokens consumed, but getting to that position requires sustained investment through a period where your unit economics are negative and unpredictable, which is a fundamentally different problem from the one SaaS companies faced, where unit economics were negative but followed a well-understood curve.
A fair objection here is that this dynamic doesn’t describe all AI companies equally. Many AI businesses are building on inference as a commodity input and competing primarily on data, distribution, and workflow integration, where the marginal cost dynamics look much more like traditional SaaS. The financing problem I’m describing hits hardest at companies where user engagement directly scales inference consumption, not companies where AI is an embedded feature within a product that charges on seats or outcomes. But the companies where this dynamic is most acute are also the companies building the most transformative products, the ones where the AI is the product rather than a feature within it, which means the financing gap sits precisely where the most consequential value creation is happening.
This creates a financing question that the traditional venture and SaaS playbooks aren’t well-suited for: how do you fund successive tranches of user cohorts over a long enough horizon to realize returns from productivity gains while your cost structure is still stabilizing and your users are consuming more, not less, as the technology improves?
In SaaS, you could afford to be unit-economics-negative because your SG&A stayed flat or predictable and you were amortizing across users with a repeatable GTM motion. The mature version of that playbook looks like General Catalyst’s Customer Value Fund: non-dilutive capital is deployed into a predictable go-to-market motion, the company acquires the users, and pays the capital back as profits materialize. That model depends on the payback curve being modelable, which in AI it often isn’t: not because the returns won’t come, but because your cost base is a moving target that shifts with every improvement in the underlying models, as usage expands to fill whatever capacity the efficiency gains create.
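A toy cohort model illustrates why the payback curve stops being modelable; every number here (CAC, revenue, serving cost, growth rate) is a hypothetical assumption, not data. With a flat serving cost the cohort pays back on a predictable schedule; let per-user serving cost compound with usage and the payback month drifts out, or never arrives.

```python
# Illustrative cohort payback comparison (all numbers hypothetical).
# SaaS-like cohort: flat serving cost per user.
# AI cohort: serving cost compounds as users consume more inference.

CAC = 120.0       # customer acquisition cost per user (assumed)
REVENUE = 20.0    # monthly revenue per user (assumed)

def payback_month(monthly_cost, growth=1.0, horizon=36):
    """Return the month the cohort recovers CAC, or None if it never does."""
    cumulative, cost = -CAC, monthly_cost
    for month in range(1, horizon + 1):
        cumulative += REVENUE - cost
        if cumulative >= 0:
            return month
        cost *= growth  # AI case: per-user serving cost compounds with usage
    return None

print("SaaS-like cohort pays back in month:", payback_month(5.0))
print("AI cohort (15%/mo usage-driven cost growth):", payback_month(5.0, growth=1.15))
```

With flat costs the cohort pays back in month 8; with compounding serving costs the same cohort's contribution margin erodes before CAC is recovered and it never pays back at all, which is exactly the sense in which the curve stops being underwritable.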
So the question now becomes: how do you produce a return on deployment when the path to stable unit economics runs through a longer and less certain period than venture has historically been willing to fund, and where the cost structure keeps evolving as usage patterns expand in response to every efficiency gain?
I think the winners in this space will be growth equity and slightly-above-growth-equity investors alongside long-duration seed investors, the capital that’s structured to underwrite a 7-10 year path to margin stability rather than a 3-5 year path to an exit, and that can stay patient through the period where falling per-unit costs and rising aggregate consumption are working against each other before workflow moats and value-based pricing eventually stabilize the equation.
As a result, I think the capital stack for AI companies needs to be rebuilt from scratch rather than adapted from SaaS playbooks.
Venture has always adapted its instruments to new dynamics: participating preferred for capital-intensive hardware, YC’s innovative SAFE for speed at seed, Pipe’s (RIP) revenue-based financing for SaaS. Why can’t it adapt again? I think the answer is that those adaptations were modulations on a shared underlying assumption, that unit economics would become predictable within a fund’s deployment horizon, and the AI margin convergence problem violates that assumption at a level where incremental term sheet evolution isn’t sufficient.
You’re not adjusting the terms of a bet whose structure you understand; you’re trying to underwrite a fundamentally different payoff curve.
The winners on the investor side will probably be the ones who design new instruments rather than the ones who stretch existing ones. The venture model doesn’t just need longer timelines, it needs different return expectations, different signaling frameworks for follow-on decisions, and different LP relationships. And the debt question that may be looming in the back of your mind isn’t really “should debt come in earlier” but rather “what new instrument has the patience of debt, the risk tolerance of equity, and the cost-structure awareness of an infrastructure provider,” because that’s what the underlying dynamics actually demand.
You need an instrument that absorbs equity-like risk during the period when unit economics are unstable, transitions toward debt-like predictability as the company matures, and gives the investor a payoff profile that’s calibrated to margin stabilization rather than revenue growth, because revenue growth in AI can be a misleading signal if it’s being driven by usage patterns that are compounding costs faster than pricing power develops.
The instrument essentially needs to be a bet on the spread between cost deflation and usage expansion converging into stable margins, not a bet on top-line scale.
The base layer would be preferred equity with no fixed coupon or repayment obligation, but with terms that shift dynamically based on observable margin and cost-structure triggers. Think of it as a convertible preferred where the conversion terms aren’t fixed at issuance but are functions of how the company’s unit economics evolve over time. We’re actively working on structuring something in this direction as I write this.
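As a rough sketch of what “conversion terms as functions of unit economics” might mean in practice, here is a hypothetical margin-to-discount schedule. Every threshold and discount below is an invented illustration, not a proposed term sheet; the point is only that the payoff profile can be made a function of observable margin triggers rather than fixed at issuance.

```python
# Hypothetical sketch of margin-contingent conversion terms.
# All thresholds and discounts are illustrative assumptions.

def conversion_discount(trailing_gross_margin: float) -> float:
    """Map observed unit economics to a conversion discount.

    While margins are unstable the investor bears equity-like risk
    and earns a deeper discount; as margins stabilize, the instrument
    behaves more like conventional preferred.
    """
    if trailing_gross_margin < 0.0:      # still inference-cost-negative
        return 0.40                      # deep, equity-like discount
    elif trailing_gross_margin < 0.30:   # converging but not stable
        return 0.25
    elif trailing_gross_margin < 0.60:   # approaching SaaS-like margins
        return 0.15
    else:                                # margin stabilization achieved
        return 0.05                      # near-debt-like terms

for m in (-0.10, 0.20, 0.45, 0.70):
    print(f"trailing margin {m:+.0%} -> conversion discount {conversion_discount(m):.0%}")
```

A real version would key off audited trailing metrics over a defined window, precisely because of the gaming problem discussed below.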
There are real implementation questions here that I haven’t fully resolved, though: how do you get inference providers to write compute collars at meaningful scale when they’re also navigating their own cost uncertainty? How do you make the margin-based triggers robust enough that neither party can game them through accounting choices? And the legal complexity of an instrument with this many embedded options and contingent conversion paths is nontrivial, which means transaction costs could be high enough to make it impractical for smaller investments. The structure probably only works above a certain check size, maybe a $10-15M minimum, where the complexity is justified by the capital at risk.
Not great.
The other open question is whether LPs would allocate to a fund structured to deploy this kind of instrument, because it doesn’t fit neatly into any existing asset class bucket. It’s not quite venture, it’s not credit, and it’s not quite structured products in the traditional sense. You’d probably need to raise a dedicated vehicle with a thesis-specific LP base, which is itself a fundraising challenge.
Which brings me to my final musing:
There’s a version of this argument that says the constraint is actually productive, that capital structures which force companies toward predictable unit economics and defensible moats produce better companies, and that the range of companies that can’t be built or don’t get started under current structures includes a lot of companies that would destroy capital without creating durable value. I take that seriously, and I think it’s partially right for any given cycle, but I think it mistakes the static efficiency of capital allocation for the dynamic process through which economic growth actually happens.
Everyone knows that venture has layers of power laws: in founder talent, in geography, in which companies win, and now in which allocators actually get the dollars. Andreessen raising the equivalent of 18% of all venture dollars in 2025 is an example of this.
Venture term sheets are referred to as “risk capital,” but I think that phrase has been abused over the past couple of years. After a certain point, equity stakes that come with additional terms, options, and warrants, like liquidation preferences and participation rights, start to look less like equity and more like debt with coupons and principal protection on the downside. If a company wants to take out a loan, does it have to clear that with its equity investors first? Who gets paid first, and what’s the implied seniority of the voting rights? At a certain point VC looks like complex mezz debt, and I’m not sure founders on the other side are aware of this.
And that matters for the AI financing problem because “minding the capital structure” means that as a founder you need to realize that your scope of managerial control is increasingly defined and dictated by the rights held by the VCs you raise from. Your cap table actually prescribes what company you’re able to build, not the other way around. When the instruments available to you are all calibrated to the SaaS payback curve, you can only build companies whose economics converge on that curve, which is exactly the wrong constraint for a technology whose cost dynamics I’ve just spent 1,000+ words arguing are fundamentally different.
Accounting 101 tells us that liabilities should be matched to the assets they fund, but today assets are all trying to fit into an increasingly narrow box of liabilities. Will Manidis calls this “legibility to capital,” but I think the consequences are more severe than that framing suggests, because what we’re actually observing is the liability side of the balance sheet dictating which operating assets are allowed to exist at all.
The instrument I’ve described above is one attempt at liability-side innovation, and I’m certain the final form will look different (and MUCH simpler) from what I’ve sketched here. But the direction of travel has to be toward instruments that are native to AI’s actual cost dynamics rather than adaptations of instruments designed for a world where marginal costs were asymptotic to zero.
Every major technological era produced its own financing innovation, and the pattern was always the same: the technology arrives first, then a long awkward period where it’s being funded by instruments designed for the previous era, then someone figures out the right liability structure, and then the real expansion begins.
Railroads needed bond markets and antitrust to allocate capital beyond Jay Gould’s antics. Bank credit couldn’t “see” early-stage uncertainty, so Arthur Rock had to invent a fundamentally different way to finance Fairchild Semiconductor, and the result was the entire asset class of venture capital and a new growth cone for the whole economy. The internet era needed the SAFE and growth equity.
The pattern isn’t that existing capital structures stretch to accommodate new assets; it’s that new assets force the creation of entirely new capital structures, and the economic expansion happens after that structural innovation, not before.
AI is in that awkward middle period right now. The technology is here. The companies are being built. And they’re being financed by instruments that can’t “see” them clearly, which means we’re building at a fraction of the scale the technology already supports. The size of that gap, between what AI can do and what AI companies can sustainably be funded to do, is the actual measure of how much growth we’re leaving on the table.
And it will stay on the table until someone builds the financial architecture - or coordination technology - to go get it.
