Insights
06 Apr 2026

The era of subsidised AI is ending

OpenAI and Anthropic say they are improving flexibility and transparency. The deeper story is that flat-fee AI is colliding with compute costs, margin pressure, and investor expectations.

Written by
The gecco team

For months, AI vendors have sold the idea that more capability could sit inside simple monthly plans. That promise is starting to crack. The latest pricing changes from OpenAI and Anthropic suggest the industry is moving toward tighter limits, usage-based charging, and more explicit cost control.

Their official language focuses on flexibility and better support for growing demand. That is true. It is also only part of the picture. Flat-fee AI gets harder to sustain when usage grows, agent workflows become heavier, and compute remains scarce.

OpenAI's change is the cleaner one on paper. It has introduced Codex-only seats for ChatGPT Business and Enterprise. These seats carry no fixed fee; usage is billed on token consumption. OpenAI has also cut the price of standard ChatGPT Business seats on annual billing from $25 to $20 per user per month, while moving heavier Codex usage toward credits and auto top-ups.

Anthropic's move is sharper. Starting 4 April 2026, Claude subscription limits no longer cover third-party tools such as OpenClaw. Users who want to keep using those tools must now purchase extra usage bundles or authenticate through a separate API key. Anthropic says this is about managing capacity. The structure says more than that.

These are different product decisions. They point in the same direction.

This is about subsidy unwind

For the past two years, AI labs have priced many products as if usage would stay close to normal chat behaviour. That is no longer true. Coding agents, external harnesses, and longer-running workflows create much heavier demand. When those workloads sit inside flat subscriptions, the economics start to break.

Axios made the point clearly in March. Current pricing is unlikely to hold as margin pressure builds ahead of eventual public offerings. OpenAI is projected to burn $14 billion in 2026, up from $8 to $9 billion in 2025. Anthropic's margins have improved but remain under pressure from rising inference costs.

That logic matches what both companies are doing now. OpenAI is separating broad workplace access from heavier technical usage. Anthropic is separating direct Claude use from external tools that create very different consumption patterns. Neither company is saying "we underpriced this." Their actions say it for them.

This is also about compute

The margin story matters. Compute scarcity may matter more in the short term.

Anthropic has been unusually direct. Boris Cherny, Head of Claude Code, stated publicly that third-party tools placed an outsized strain on Anthropic's systems. He said subscription products were not built for those usage patterns. That is not the language of a packaging refresh. It is the language of a company protecting scarce infrastructure.

One growth marketer estimated that a single OpenClaw agent running for a day could burn $1,000 to $5,000 in API costs. Anthropic was absorbing that gap for every user who routed through a third-party harness at a flat monthly rate. At scale, that arithmetic is not survivable.
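The arithmetic behind that estimate is easy to sketch. The figures below are illustrative assumptions for a heavy, always-on coding agent, not published Anthropic prices; the point is how quickly large contexts and frequent steps compound:

```python
# Back-of-envelope cost of an agent running all day.
# Every number here is an illustrative assumption, not a published price.

def daily_agent_cost(
    requests_per_hour: float,
    input_tokens_per_request: float,
    output_tokens_per_request: float,
    usd_per_million_input: float,
    usd_per_million_output: float,
    hours: float = 24.0,
) -> float:
    """Estimate the API cost of an always-on agent over one day."""
    requests = requests_per_hour * hours
    input_cost = requests * input_tokens_per_request * usd_per_million_input / 1e6
    output_cost = requests * output_tokens_per_request * usd_per_million_output / 1e6
    return input_cost + output_cost

# A heavy coding agent re-sending a large repo context on every step:
cost = daily_agent_cost(
    requests_per_hour=120,              # one agent step every 30 seconds
    input_tokens_per_request=150_000,   # large context resent each step
    output_tokens_per_request=2_000,
    usd_per_million_input=3.0,          # assumed input rate
    usd_per_million_output=15.0,        # assumed output rate
)
print(f"${cost:,.0f} per day")  # → $1,382 per day
```

Even with modest assumed rates, one agent lands inside the quoted $1,000 to $5,000 range, and a busier agent or a larger context clears it easily.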

OpenAI has been more measured, but the same constraint shows through. Codex usage within ChatGPT Business and Enterprise has grown sixfold since January 2026. More than two million developers now use Codex weekly. In that context, a shift from bundled access to token-based billing looks less like a feature launch and more like operational necessity.

Investor optics are part of the story

This is where the IPO question becomes relevant.

There is reporting that Anthropic has prepared for an IPO that could come as early as 2026. Anthropic has said no decision has been made on timing. OpenAI is also laying groundwork for a public listing as it raises capital at ever larger valuations. In late March, OpenAI announced that its shares would be included in ARK Invest ETFs.

Whether those timelines move or not, both companies already operate under the scrutiny that late-stage private markets apply to future public companies.

At that stage, growth alone is not enough. Investors want to know whether demand converts into durable revenue. They want to know whether high-usage customers are profitable. They want to know whether the company can ration scarce resources without damaging the product. Clean pricing architecture helps answer all three questions.

Usage-based seats, shared credits, bundle pre-purchases, and tighter rules around third-party tools all make revenue quality easier to explain. So yes, better margins before an IPO is a credible reading. It is not the whole explanation. A more precise framing: both firms are tightening commercial models because compute is expensive, heavy users are harder to subsidise, and investor expectations are moving closer to public-market standards.

The difference between OpenAI and Anthropic

The similarity matters. So does the difference.

OpenAI's move looks like segmentation. It is lowering the base seat price. It is introducing Codex-only seats. It is using credits to extend heavier usage. That is a structured commercial model. It lowers the entry price while making expensive activity more measurable. OpenAI says the goal is to make adoption easier, citing more than nine million paying business users and more than two million weekly Codex users.

Anthropic's move looks more defensive. Reporting on the OpenClaw decision shows Anthropic drawing a firm line around what subscription limits are meant to cover. Users received less than 24 hours' notice before the change took effect. OpenClaw's creator said he and a board member tried to negotiate with Anthropic. The best they managed was a one-week delay.

In simple terms, OpenAI is designing a more investable pricing system. Anthropic is protecting product boundaries under compute pressure. Both moves support stronger unit economics.

What this means for AI buyers

For buyers, the message is clear. The era of broad, all-you-can-eat AI access is starting to narrow.

You should expect lower headline entry prices alongside tighter included limits for advanced workloads. Shared credit pools will become more common. Usage-based charging for coding, agents, and long-running tasks will grow. Vendors will exercise closer control over third-party tools that sit on top of subscription products.

That does not mean AI is becoming bad value. It means the market is maturing. The more advanced the workload, the less likely it is to remain hidden inside a flat monthly fee.

For SMEs, this is a planning question as much as a budget question. It becomes more important to decide which users need broad access, which teams need deeper technical capacity, and which workflows should sit on metered tools. Businesses that treat all AI access as interchangeable will find it harder to control spend. Businesses that segment usage by role and outcome will be in a stronger position.

If you have not already done so, now is a good time to audit how your organisation uses AI. Map which teams rely on which tools. Understand which workflows generate heavy token usage. Build that picture before the next round of pricing changes arrives.
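That audit can start as a very small script. The record format below is a hypothetical export (team, tool, tokens); real vendor usage exports differ, but the aggregation is the same:

```python
# Aggregate token usage by team and tool from a hypothetical usage export.
# The record format is an assumption; real vendor exports will differ.
from collections import defaultdict

usage_records = [
    {"team": "engineering", "tool": "coding-agent", "tokens": 4_200_000},
    {"team": "engineering", "tool": "chat", "tokens": 300_000},
    {"team": "marketing", "tool": "chat", "tokens": 450_000},
    {"team": "support", "tool": "chat", "tokens": 900_000},
]

totals: dict[tuple[str, str], int] = defaultdict(int)
for record in usage_records:
    totals[(record["team"], record["tool"])] += record["tokens"]

# Rank workflows by consumption so metered billing brings no surprises.
for (team, tool), tokens in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{team:12s} {tool:14s} {tokens:>12,} tokens")
```

Ranking usage this way makes the segmentation decision concrete: the workflows at the top of the list are the ones that belong on metered plans, and the long tail can stay on flat seats.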

The bigger shift

The most important point is not that two companies changed a pricing page. It is that both are signalling the same underlying reality.

AI is moving out of its land-grab phase. The focus is shifting from user growth at almost any cost to revenue quality, infrastructure discipline, and commercial models that can survive scrutiny. As that happens, subsidised usage gets squeezed first.

The current changes look like early signs of a broader reset. AI access is becoming less about generous bundles and more about disciplined economics. For vendors, that is a margin story. For buyers, it is a governance story. For the market, it is a sign that AI is growing up.
