
Chrome Deleted Its Own Privacy Promise and Hoped You Wouldn’t Notice

📖 4 min read · 723 words · Updated May 8, 2026

You Didn’t Install It. Chrome Did.

Picture this: you’re sitting at your desk, Chrome open, doing nothing more suspicious than checking your email. Somewhere in the background, without a prompt, without a dialog box, without so much as a polite heads-up, your browser quietly pulls down a 4GB AI model onto your device. You didn’t ask for it. You didn’t agree to it. You probably don’t even know it’s there.

That’s not a hypothetical. That’s what’s reportedly happening right now on machines running Google Chrome.

The Line That Disappeared

Here’s what makes this story particularly sharp: Google didn’t just start doing something new. They also stopped saying something old. Chrome’s documentation previously included a clear, reassuring statement — that Chrome can use AI models running directly on your device without sending your data to Google servers. That line is gone as of Chrome 148.0.

No announcement. No changelog callout. No blog post explaining the shift. The privacy promise was simply deleted, and the browser kept moving.

As a toolkit reviewer, I spend a lot of time evaluating what AI tools actually do versus what they claim to do. That gap is usually where the real story lives. In this case, Google didn’t just widen the gap — they quietly filled it in with concrete and hoped nobody would look down.

Silent Downloads and Absent Consent

The 4GB model being installed without user permission was flagged by Alexander Hanff, a prominent privacy researcher. His concern goes beyond the philosophical. He’s suggested the practice may violate EU law — specifically around consent requirements for software installation on user devices.

That’s a serious allegation, and it deserves serious attention. A 4GB download is not a minor background process. On a metered connection, that’s real data. On a lower-spec machine, that’s real storage. And on any machine, that’s a real decision that should belong to the user, not the browser vendor.

What makes it worse is the default behavior. After Chrome auto-updates, new features — including these AI additions — are switched on by default. You’re not opting in. You’re opting out of something you didn’t know existed, assuming you even find the setting.

Why This Matters for AI Toolkit Users

At agntbox.com, we review AI tools with one core question in mind: does this thing actually do what it says? We’re not reflexively anti-AI. On-device AI, done right, is genuinely useful. Processing data locally, without round-tripping to a server, is a solid privacy model in theory. It’s one of the reasons tools like local LLM runners have built real trust with privacy-conscious users.

But that trust is built on transparency. You know what’s running. You chose to install it. You understand the tradeoff.

What Chrome has done is take the aesthetic of on-device AI — the “your data stays local” framing — and strip out the part where you get to decide. The model is on your device, sure. But the promise that it stays private? That’s been quietly retired.

What You Can Actually Do

  • Check your Chrome settings. Look under chrome://settings/ai to see what AI features are active. Disable anything you didn’t consciously turn on.
  • Check your storage. A 4GB model sitting on your drive is findable. Look in your Chrome profile directory if you want to confirm whether it’s there.
  • Consider your browser choice. Firefox, Brave, and others have not pulled this move. If on-device AI with genuine consent controls matters to you, that’s worth factoring into your setup.
  • Stay skeptical of “on-device” claims. This episode is a good reminder that “on-device” describes where computation happens — not necessarily what data leaves, when, or under what future policy.

The Trust Problem Is the Real Problem

Google removing that privacy statement without explanation is the kind of move that erodes trust in ways that are hard to rebuild. It’s not just about this one feature. It’s about what it signals: that privacy commitments in product documentation are provisional, subject to change, and apparently not worth a public conversation when they shift.

For anyone building a personal AI toolkit, browser choice is infrastructure. And infrastructure you can’t trust is a liability, not an asset.

Chrome is still the dominant browser. It has genuinely useful features. But this episode is a clear signal to pay attention to what your tools are doing in the background — because they’re not always going to tell you.


🧰
Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.
