
Google’s Codebase Is Now Mostly Written by the Thing Google Built

📖 4 min read • 753 words • Updated Apr 23, 2026

“Today, about 75% of all new code at Google is now AI-generated and approved by engineers, up from 50% last fall.” That’s Sundar Pichai, speaking at Google Cloud Next. I read that quote twice. Then I closed my laptop and made a coffee.

When the CEO of one of the most influential software companies on the planet tells you that three-quarters of his engineers’ output is now being drafted by machines, that’s not a footnote. That’s a signal worth paying attention to — especially if you’re someone who spends their days testing AI coding tools and asking whether any of this stuff actually works in the real world.

Spoiler: apparently it does. At least at Google scale.

What Google Actually Said

To be precise about it: Alphabet confirmed that AI now generates 75% of Google’s new code, with human engineers reviewing and approving the output. This is up from 50% just last fall, which means the adoption curve inside Google is steep and accelerating. The company also announced capital expenditure plans of $175 billion to $185 billion for 2026, a number that tells you everything about how seriously they’re betting on this direction.

The human review piece matters. Google isn’t shipping raw AI output. Engineers are still in the loop — they’re just spending more of their time reading, evaluating, and approving rather than writing from scratch. That’s a meaningful shift in what a software engineer’s day actually looks like.

What This Means for the AI Toolkit Space

Here at agntbox, we review AI tools for a living. We poke at them, stress-test them, and tell you honestly when something is overhyped. So when a number like 75% surfaces from a company with Google’s engineering depth, it reframes a lot of the conversations we have about these tools.

For months, the skeptic’s position has been: “Sure, AI can write boilerplate, but real production code? Come on.” Google’s internal numbers are a direct challenge to that position. This isn’t a startup running a demo. This is one of the largest engineering organizations on earth saying that AI-generated code now makes up the majority of what they ship.

That changes how I think about the tools we review. A lot of the AI coding assistants we’ve tested — the ones that felt promising but not quite there — may be closer to production-ready than their rough edges suggested. The gap between “impressive demo” and “trusted daily driver” might be narrowing faster than most people expected.

The Part That Doesn’t Get Talked About Enough

There’s a quieter story inside this headline. If 75% of new code is AI-generated, what are engineers actually doing with their time? The answer, based on how Google describes it, is review and approval. That’s a fundamentally different job than writing code.

This has real implications for how teams should be evaluating AI coding tools. The question isn’t just “can this tool write good code?” anymore. The better question is “does this tool produce output that’s fast and easy for a human to review and trust?” Those are different design goals, and not every tool on the market is optimized for the second one.

When we test tools at agntbox, we’re going to start weighting that more heavily. How readable is the output? How well does it explain its own decisions? How easy is it to catch a mistake before it ships? These things matter more now.

An Honest Take From Someone Who Reviews This Stuff

I’ll be straight with you. A year ago, if you’d told me that a company like Google would be comfortable with AI writing three-quarters of its new code, I’d have been skeptical. Not because I doubted the technology’s potential, but because I’d seen enough AI-generated code go sideways in my own testing to know it isn’t magic.

But Google’s number isn’t a marketing claim. It came from the CEO, at a major conference, tied to a capital expenditure announcement. That’s the kind of statement that gets scrutinized. And the fact that it’s grown from 50% to 75% in less than a year suggests this isn’t a plateau — it’s a trend with momentum.

For anyone building with AI coding tools, or evaluating them for their team, this is useful context. The question of whether AI-generated code is “ready” is starting to look less like a debate and more like a settled matter — at least for the companies with the resources to build the review processes around it.

The tools are getting there. The workflows around them are what most teams still need to figure out. That’s where we’ll keep focusing our reviews.

Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.
