What if the future of every AI tool you test isn’t being built in some scrappy startup’s garage, but in the quarterly earnings reports of two massive public companies?
Doug Clinton, CEO of Intelligent Alpha, recently made waves by calling Nvidia (NVDA) and Google (GOOGL) the “safest AI bets” in public markets right now. As someone who spends my days testing AI toolkits and watching which ones actually deliver versus which ones are vaporware wrapped in hype, this assessment hits different from your typical Wall Street cheerleading.
The Toolkit Tester’s Perspective
Here’s what Clinton’s analysis means for those of us in the trenches: every AI toolkit I review—whether it’s a code assistant, a content generator, or an automation platform—runs on infrastructure that traces back to these two companies. Nvidia provides the GPUs that train the models. Google provides both the cloud infrastructure and, increasingly, the foundational models themselves.
Clinton points to strong revenue growth and rising AI demand as the core reasons these stocks represent safe bets. From where I sit, that “rising demand” isn’t abstract market speculation. It’s real. Every month, I test tools that require more compute, more sophisticated models, and more reliable infrastructure. The barrier to entry for AI toolkits keeps rising, and that barrier is built on Nvidia chips and Google Cloud credits.
What This Means for Toolkit Economics
The concentration of AI infrastructure in these two companies creates an interesting dynamic. When I evaluate a new AI toolkit, one of my first questions is: what’s the actual cost structure here? Most founders I talk to are essentially reselling access to compute and models, adding a UI layer and some prompt engineering on top.
If Nvidia and Google are the safest bets, it’s partly because they’ve positioned themselves as unavoidable toll collectors. Every AI toolkit that scales has to pay the toll. That’s not necessarily bad—it means these companies have pricing power and predictable revenue streams. But it also means toolkit makers are squeezed on margins.
I’ve watched promising tools fold because their unit economics didn’t work once they scaled beyond early adopters. The cost of inference, the price of GPU time, the fees for API calls—these aren’t going down fast enough to save tools with weak business models.
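That squeeze can be sketched with back-of-the-envelope arithmetic. The numbers below are entirely hypothetical, chosen only to illustrate the pattern: a flat subscription price against per-request inference costs means margin per user shrinks, then flips negative, as usage grows.

```python
# Hypothetical unit-economics sketch for an AI toolkit.
# All figures are illustrative assumptions, not real vendor pricing.

def monthly_margin(price: float, requests: int, cost_per_request: float) -> float:
    """Gross margin per user per month: subscription revenue minus inference spend."""
    return price - requests * cost_per_request

# A $20/month subscriber making 500 requests at $0.01 of inference each
# still clears a positive margin...
light_user = monthly_margin(price=20.0, requests=500, cost_per_request=0.01)

# ...but a power user making 5,000 requests at the same rate costs more
# to serve than they pay.
power_user = monthly_margin(price=20.0, requests=5000, cost_per_request=0.01)

print(f"light user margin: ${light_user:+.2f}")  # +15.00
print(f"power user margin: ${power_user:+.2f}")  # -30.00
```

This is exactly the failure mode of tools that looked healthy with early adopters: the average user was a light user, until scale changed the mix.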
The Safety Question
Clinton’s use of the word “safest” is telling. Not “best returns” or “highest growth potential”—safest. In the AI toolkit space, I see why that framing resonates. We’re in a period where most AI companies are burning cash to acquire users, hoping to figure out monetization later. Nvidia and Google are already profitable, already essential, already embedded in every layer of the stack.
For investors, that’s safety. For toolkit builders, it’s a warning: you’re building on someone else’s foundation, and they control the economics.
What I’m Watching
As someone who reviews AI toolkits professionally, Clinton’s assessment makes me think about which tools are most vulnerable to infrastructure cost increases. The ones with thin margins, the ones that haven’t figured out how to optimize inference costs, the ones that are just wrappers around someone else’s API—those are the ones that won’t survive if Nvidia or Google decide to adjust pricing.
The tools that will thrive are the ones that either have genuine technical moats (rare) or have figured out business models that aren’t purely dependent on arbitraging access to compute (slightly less rare).
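One way to make that vulnerability concrete is to ask how much upstream price headroom a toolkit actually has. Again, the figures below are hypothetical assumptions for illustration: given a subscription price and average usage, there is a break-even inference cost, and the gap between current cost and break-even is the price increase the toolkit can absorb.

```python
# Hypothetical sensitivity sketch: how much of an upstream price increase
# can a toolkit absorb before gross margin goes negative?
# All figures are illustrative assumptions.

def breakeven_cost(price: float, requests: int) -> float:
    """Inference cost per request at which gross margin hits zero."""
    return price / requests

def headroom(current_cost: float, price: float, requests: int) -> float:
    """Fractional upstream price increase the toolkit can absorb."""
    return breakeven_cost(price, requests) / current_cost - 1

# A $20/month plan averaging 1,000 requests per user breaks even at
# $0.02 per request; at a current cost of $0.015, that is ~33% headroom.
print(f"breakeven: ${breakeven_cost(20.0, 1000):.3f}/request")
print(f"headroom:  {headroom(0.015, 20.0, 1000):.0%}")
```

A thin-wrapper tool with single-digit headroom is one pricing adjustment away from insolvency; a tool with optimized inference or a non-usage-based revenue stream has room to ride it out.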
Multiple 2026 reports echo Clinton’s view, which suggests this isn’t just one CEO’s hot take. The market is consolidating around the idea that in AI, infrastructure is king. For those of us testing and reviewing AI toolkits, that means paying closer attention to the economics underneath the demos. The flashy features matter less if the cost structure doesn’t work.
The safest bets in AI stocks might also reveal which AI toolkits are skating on thin ice.