\n\n\n\n Notion Shared Your Email Address and Didn't Ask Permission - AgntBox Notion Shared Your Email Address and Didn't Ask Permission - AgntBox \n

Notion Shared Your Email Address and Didn’t Ask Permission

📖 4 min read•756 words•Updated Apr 20, 2026

You paste a research doc into Notion, make it public so your team can review it, and move on with your day. You’re not thinking about security. You’re thinking about the deadline. That’s exactly the kind of moment this vulnerability was built to exploit — not by some shadowy external actor, but by the tool sitting open in your browser tab right now.

This is the Notion AI prompt injection story, and if you use Notion for anything collaborative, it affects you directly.

What Actually Happened

Notion AI was found to be susceptible to data exfiltration through indirect prompt injection. The specific mechanism is unsettling: AI document edits were being saved before a user clicked OK. That means the window you thought protected you — the confirmation step, the moment of consent — wasn’t actually doing what you assumed.

Hidden instructions embedded in a document could direct Notion’s AI to collect and transmit data without any visible signal to the person reading or editing the page. Names, email addresses, and contact details belonging to editors of public pages were exposed. Not hypothetically. Actually exposed.

Notion has roughly 100 million users and 4 million paying customers. The companies using it include Amazon, Nike, Uber, and Pixar. When a vulnerability hits a platform at that scale, the blast radius is not small.

Why This Hits Different for Toolkit Users

At AgntBox, I spend most of my time testing AI tools and telling you honestly whether they’re worth your time and money. I’ve recommended Notion integrations in past roundups. I’ve pointed teams toward Notion AI as a solid option for async documentation. So I want to be straight with you about what this changes.

The prompt injection angle here is what separates this from a standard data breach. This isn’t a case where a database was left exposed or a password was reused. This is a case where the AI layer itself became the attack surface. Someone crafted a document — a public page — with instructions baked into the content, and Notion’s AI followed those instructions before the user had any chance to intervene.

That’s a fundamentally different threat model than most people are prepared for. You can train yourself to spot phishing emails. You can use a password manager. But how do you defend against a document that looks normal and behaves maliciously through an AI you trusted to help you?

The Data That Got Out

According to verified reporting, the exposed details included names and email addresses of page editors. Security researchers have flagged that this specific combination — names paired with contact details — is particularly useful for targeted attacks. It’s not just spam fodder. It’s the kind of data that enables convincing, personalized social engineering.

If you’ve ever edited a public Notion page, your information may have been in scope. That includes contractors, freelancers, clients, and collaborators who were added to a workspace doc and never thought twice about it.

What I Think You Should Do Right Now

  • Audit which Notion pages in your workspace are set to public. If a page doesn’t need to be public, make it private or restrict access to specific people.
  • Review who has editor access to your shared docs. Remove anyone who no longer needs it.
  • Be skeptical of any Notion page you didn’t create yourself, especially if it prompts you to use Notion AI on its content.
  • Watch for phishing attempts that reference your name and role accurately — that level of personalization is a signal that your data may have been part of this exposure.

The Bigger Problem With AI-Assisted Tools

This incident is a preview of a category of problems we’re going to keep seeing. As AI gets embedded deeper into productivity tools — not as a separate app but as a layer woven into documents, emails, and workflows — the attack surface grows in ways that traditional security thinking doesn’t fully cover.

Prompt injection is not new as a concept. Researchers have been warning about it for years. What’s new is the scale at which it can now operate, inside tools that millions of people use daily without thinking of them as AI systems at all. Notion isn’t a chatbot to most of its users. It’s a doc editor. That mental model is exactly what makes this kind of attack effective.

I’m not writing Notion off. But I am adjusting how I recommend it, and I think you should adjust how you use it. Trust the tool for what it does well. Just stop assuming the AI layer is a passive observer. In 2026, it clearly isn’t.

đź•’ Published:

đź§°
Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.

Learn more →
Browse Topics: AI & Automation | Comparisons | Dev Tools | Infrastructure | Security & Monitoring
Scroll to Top