anderegg.ca

LLM pricing has never made sense

April 22, 2026

Yesterday, Claude Code disappeared from the $20/month subscription tier on Anthropic’s website. Well, for some people. Then it came back. As Simon Willison put it, it’s all very confusing.

Anthropic’s Head of Growth, Amol Avasare, said this was caused by a “test” gone slightly wrong. Apparently only 2% of users were supposed to see the new pricing page. But this suggests Anthropic is considering raising rates substantially for its code generation product.

A day earlier, GitHub announced that its Copilot code generation product would be “pausing new sign-ups, tightening usage limits, and adjusting model availability”. Effectively, the product is going to get substantially worse for current users, and even worse for new users when they can sign up again.

Almost a year ago, Sam Altman said there was an AI bubble. He also said that OpenAI is losing money on its $200/month Pro subscriptions.

None of this should be surprising. OpenAI in particular has raised over $290 billion in investment, and has not yet turned a profit. It hopes to become profitable by 2030, but it’s not clear how that will happen. Maybe ads? Anthropic made fun of this idea during the last Super Bowl.

All of these companies have been raising vast sums from venture capitalists for years. Now it’s starting to become clear that people would like to see some returns soon, please. Maybe it’s just me, but I’m not sure how that’ll happen. Perhaps they’ll raise rates to sustainable levels, but I think that would price most users out of the market.

But there’s another challenge: local LLMs. It’s already possible to run LLMs on local hardware, and that’s only going to get easier in the future. Apple’s M-series chips are extremely good at this today. Open weight (read: free) models are widely available and good enough that most people probably couldn’t tell the difference. They also have the benefit of running on hardware that sips power most of the time, rather than slurping it down in massive data centres.

As I’ve written before, I think LLMs can be very useful tools. I honestly think most “AI haters” would agree with me on that front. The issue for many people isn’t the technology itself (though there are many ethical issues in how it was trained). The issue is the stupid state of our capitalist system, and the weird way companies are trying to force it down everyone’s throats.

I don’t know what the future holds for the big AI companies, but I think there will be a profitability reckoning soon. The products will need to get worse, more expensive, or both if VCs are to get their money back. But even then, I’m not sure the math adds up. Will everyone keep paying more? Will people unsubscribe if chat sessions start including crappy ads? Will more people start running LLMs on commodity hardware? Whatever happens next, it doesn’t seem ideal that so much investment money is tied to an underpants gnome scheme.