You’re running Claude through your API integration for the hundredth time today, watching tokens fly by at a pace that would’ve seemed absurd six months ago. Your bill is climbing, but so is your product’s capability. You’re not alone—and Anthropic’s latest numbers prove it.
The company just announced its revenue run rate hit $30 billion, up from $9 billion at the end of 2025. That’s not a typo. We’re talking about a more than 3x jump in a matter of months. To support this explosion, Anthropic expanded its compute deal with Google and Broadcom, securing TPU-powered capacity that won’t even come online until 2027.
What This Means for Toolkit Builders
I test AI tools for a living, and this deal tells me more about the state of the market than any feature announcement could. When a company locks in compute capacity years in advance, they’re not guessing—they’re responding to demand they can already see and measure.
For those of us building on top of Claude, this is both reassuring and concerning. Reassuring because Anthropic clearly has the resources and partnerships to scale. Concerning because that $30 billion run rate suggests pricing pressure isn’t going away anytime soon. The compute costs that make your SaaS margins tight? They’re baked into the economics of this entire space.
The Broadcom Angle Nobody’s Talking About
Here’s what caught my attention: Broadcom isn’t just supplying existing chips. They’ve agreed to produce future versions of Google’s AI chips. This is vertical integration at scale, and it matters for toolkit reliability.
When you’re evaluating whether to build on Claude versus other models, supply chain stability should be part of your calculus. A three-way deal between Anthropic, Google, and Broadcom creates redundancy and commitment that solo ventures can’t match. Your production app running on Claude in 2027 will have access to compute that’s being planned and fabricated right now.
Reading the Tea Leaves on Pricing
Let’s be direct: a $30 billion run rate doesn’t happen because Anthropic dropped prices to zero. It happens because enterprises are paying premium rates for capability they can’t get elsewhere. I’ve tested enough tools to know that Claude’s context window, instruction following, and output quality justify higher costs for specific use cases.
But this also means the race to the bottom on pricing—the one everyone predicted would happen as models commoditized—isn’t materializing the way we thought. Quality still commands a premium, and compute scarcity keeps a floor under prices.
For toolkit builders, this creates a planning problem. You can’t assume costs will drop 50% next year. You need to build business models that work at current price points, with incremental improvements as your upside, not your baseline assumption.
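To make that planning problem concrete, here’s a minimal back-of-the-envelope sketch of toolkit unit economics. The per-million-token prices, volumes, and revenue figures are illustrative assumptions for the sake of the exercise, not actual Anthropic rates—swap in your real numbers before drawing conclusions.

```python
# Back-of-the-envelope margin check for an AI toolkit.
# NOTE: all prices and volumes below are illustrative
# assumptions, not actual API pricing.

INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens

def monthly_api_cost(input_mtok: float, output_mtok: float) -> float:
    """Estimated monthly API spend (USD) for a given volume of
    input and output tokens, expressed in millions."""
    return (input_mtok * INPUT_PRICE_PER_MTOK
            + output_mtok * OUTPUT_PRICE_PER_MTOK)

def gross_margin(monthly_revenue: float, api_cost: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (monthly_revenue - api_cost) / monthly_revenue

# Hypothetical product: 500M input + 100M output tokens/month,
# $5,000 in monthly recurring revenue.
cost = monthly_api_cost(500, 100)        # 500*3 + 100*15 = 3000
margin = gross_margin(5_000, cost)       # (5000-3000)/5000 = 0.40
print(f"API cost: ${cost:,.0f}/mo, gross margin: {margin:.0%}")
```

The point of the exercise: at these assumed rates the margin is 40% today, so the business has to work at 40%—a future price cut widens the margin, but treating that cut as a baseline assumption is how toolkits end up underwater.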
What to Watch Next
The 2027 timeline is key. Anthropic isn’t scrambling for compute next quarter—they’re securing it years in advance. This suggests they expect sustained growth, not a bubble about to pop.
For anyone building AI tools, this deal is a signal about where to place your bets. The companies investing billions in compute infrastructure believe this market has legs. The question is whether your toolkit can capture enough value to justify building on top of these increasingly expensive foundations.
I’ll keep testing tools and tracking which ones deliver ROI at current API prices. Because that’s the real test: not whether Claude is impressive in a demo, but whether it pencils out when you’re processing millions of tokens a month.
The compute deals being signed today will determine which tools are still standing in 2027. Choose your dependencies accordingly.