Imagine you signed up for a gym membership, and one day you get a letter saying your workout footage will now be used to train personal trainers — unless you call a specific number before a specific date. You didn’t ask for this. You didn’t agree to it upfront. But the gym decided your data was too useful to leave sitting on the table. That’s roughly what Atlassian is doing with your Jira tickets and Confluence pages, and as someone who reviews AI toolkits for a living, I think teams need to pay close attention before August 17, 2026 arrives.
What’s Actually Happening
Starting August 17, 2026, Atlassian will automatically collect customer metadata and in-app content from Jira, Confluence, and other cloud products to train its AI models — specifically its Rovo AI features. The settings to control this are being rolled out gradually in Atlassian Administration between now and May 19, 2026, which gives admins a window to act. If you don’t act, you’re in by default.
That last part is the piece that matters most. Default opt-in is a deliberate design choice. It’s not neutral. It’s a bet that most users won’t notice, won’t read the changelog, or won’t bother logging into admin settings to flip a switch. For a company managing tools used by engineering, product, and legal teams across thousands of organizations, that’s a significant amount of potentially sensitive data flowing into a training pipeline.
The Plan Tier Problem
Here’s where it gets more complicated. On Free and Standard plans, you’re opted in by default and you can opt out, but only if you do it before the deadline. On higher tiers, the situation is reportedly different: some categories of data collection may not be opt-out-able at all, depending on your plan. The exact boundaries of what can and can’t be collected vary by subscription level, which means admins need to read the fine print for their specific plan rather than assuming a single toggle covers everything.
For teams at smaller companies running Free or Standard plans — often the ones with the least dedicated IT oversight — this is a real exposure risk. They’re the most likely to miss the window and the least likely to have someone whose job it is to catch these policy changes.
What This Means for AI Toolkit Users
At agntbox.com, we spend a lot of time evaluating what AI tools actually do with your data versus what they say they do. Atlassian’s move here is honest in one sense — they’re telling you upfront, and they’re giving you a mechanism to opt out. That’s more than some vendors do. But the default-on framing is a pattern worth calling out, because it’s becoming standard practice across the AI space and it deserves more scrutiny than it gets.
When a company trains its AI on your project data, your bug reports, your internal documentation, your sprint notes — that data shapes the model’s behavior. Your team’s workflows, terminology, and priorities become part of a system that will eventually serve other customers too. That’s not inherently sinister, but it’s a trade-off that should be your choice to make, not something you have to actively undo.
What You Should Do Right Now
- Log into Atlassian Administration and check whether the data contribution settings are visible for your organization yet. The rollout runs through May 19, 2026.
- Identify your plan tier and confirm which opt-out options are actually available to you. Don’t assume a single toggle covers all data types.
- If you manage Jira or Confluence for a company that handles sensitive client data, legal documents, or proprietary product specs, escalate this to your legal or compliance team before the August 17 deadline.
- Document whatever settings you configure. If questions come up later about what data was shared, you’ll want a record.
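For that last point, even a lightweight local audit log beats nothing. The sketch below is not an Atlassian API integration of any kind; it's a minimal, generic Python helper that appends a timestamped record of whatever settings you configured by hand in Atlassian Administration, so you have a dated paper trail later. All field names and values here are illustrative assumptions.

```python
import json
from datetime import datetime, timezone


def record_settings_snapshot(settings: dict,
                             path: str = "atlassian_ai_settings_audit.json") -> dict:
    """Append a timestamped snapshot of admin settings to a local JSON audit log."""
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "settings": settings,
    }
    try:
        with open(path) as f:
            log = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        log = []  # first run, or unreadable file: start a fresh log
    log.append(entry)
    with open(path, "w") as f:
        json.dump(log, f, indent=2)
    return entry


# Example: values you transcribe by hand after checking the admin console.
# Every key below is a placeholder, not an official setting name.
snapshot = record_settings_snapshot({
    "product": "Jira + Confluence Cloud",
    "plan_tier": "Standard",          # fill in your actual tier
    "ai_training_opt_out": True,      # what you actually set
    "set_by": "admin@example.com",
})
```

Each run appends rather than overwrites, so if the settings change again before the deadline, the log preserves the history of who set what and when.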
The Bigger Picture
Atlassian isn’t alone in doing this. Plenty of SaaS companies are quietly updating their data policies to feed AI training pipelines, and the opt-out-by-default model is becoming a familiar move. What makes this one worth flagging specifically is the scale — Jira and Confluence sit at the center of how a huge number of engineering and product teams operate. The data in those tools is often detailed, sensitive, and deeply tied to how a business actually works.
Rovo may turn out to be a genuinely useful AI layer on top of Atlassian’s products. But usefulness doesn’t automatically justify the data collection method used to build it. Teams deserve to make that call themselves, with enough time and information to do it properly. The clock is running.