Policy | 8/29/2025
Anthropic shifts Claude training to opt-out by default
Anthropic updated Claude's terms to use user conversations for model training by default, requiring users to opt out if they want privacy. The change applies to Free, Pro, and Max plans and raises broader questions about consent and data ethics in AI. The policy excludes enterprise services and promises safeguards to protect sensitive data, but critics argue the opt-out default falls short of a genuine consent-first approach.
Anthropic’s policy shift: what’s changing
Anthropic has updated the Consumer Terms and Privacy Policies for Claude, the company’s flagship chatbot. The core change is simple in wording but big in consequence: by default, conversations users have with Claude can be used to train future AI models. If you’d like to keep your chats private and out of training data, you’ll need to opt out. This affects users on the Free, Pro, and Max tiers. New users will see the choice during signup, while existing users will get an in-app notice that asks them to decide by a deadline.
The opt-out deadline and how it works
- Deadline: September 28, 2025. To keep using Claude after that date, you'll need to make a choice about data collection.
- Opt-out mechanism: In Claude’s interface, you’ll find a pop-up labeled “Updates to Consumer Terms and Policies.” You can also disable the “Help improve Claude” toggle in the Privacy section of Claude’s settings.
- What happens if you opt out: Your conversations won’t be used to train future models. If you leave the default in place, whether by explicit consent or by simply accepting the pre-set toggle, your data may be included in training.
Anthropic also extended its data retention period for consenting users, from 30 days to five years. The company says conversations you delete won’t be used for training in the future. In other words, deleting sensitive chats is one safeguard you control, but if you’re concerned about future use, it’s worth being mindful of what you discuss in Claude in the first place.
What’s unchanged: enterprise and API terms
The updated policy explicitly notes that enterprise services (Claude for Work, Claude for Education) and API usage through commercial platforms operate under different terms. Those products aren’t affected by the default data-collection and opt-out changes that apply to consumer-facing Claude users.
Why Anthropic says this is necessary
Anthropic frames the shift as a move to “enhance the capabilities and safety” of its AI systems. The company argues that access to real-world conversations helps improve skills like coding and reasoning, which in turn leads to better models for everyone. It also says the data will strengthen safeguards against harmful activity, enabling more accurate content moderation and detection of scams.
To address privacy concerns, Anthropic says it will use automated filtering and redaction techniques to obscure sensitive information. The company also promises it will not sell user data to third parties. Still, the optics of an opt-out default in a field where data is a core asset have drawn scrutiny from privacy advocates and some users who fear consent isn’t truly voluntary.
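Anthropic hasn’t published the details of that filtering, but as a rough illustration of what automated redaction can involve, the hypothetical Python sketch below pattern-matches two obvious kinds of sensitive data (email addresses and US-style phone numbers) and swaps them for placeholder tokens before text would enter a training corpus. Everything here, from the regexes to the placeholder labels, is an assumption for illustration only; production systems typically combine rules like these with much broader, often ML-based, entity detection.

```python
import re

# Illustrative only: a toy redaction pass over user text.
# This is NOT Anthropic's pipeline; the patterns and placeholders are assumed.

# Hypothetical patterns for two common kinds of sensitive data.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace matched sensitive spans with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or 555-867-5309."
    print(redact(sample))  # -> "Reach me at [EMAIL] or [PHONE]."
```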
A broader trend toward opt-out data collection
This policy shift puts Anthropic in step with several tech giants that already use opt-out data collection for training. Google’s Gemini and Meta’s AI initiatives, among others, have moved toward default data collection with opt-out options. The industry-wide convergence reflects a belief that large-scale training data accelerates model performance, even as it raises questions about user control and transparency. Critics argue that opt-out models risk reducing consent to lip service, relying on user inaction rather than a clear affirmative choice.
The ethics and practical implications
- The value of real conversations for refining language models is undeniable. Real users pose questions and provide feedback in contexts that synthetic data or purely public data can’t replicate. That richness helps models handle edge cases, sarcasm, multilingual nuances, and domain-specific jargon.
- At the same time, there’s a risk that people share sensitive or personal details, potentially exposing themselves to future data uses they didn’t fully anticipate. Even with automated filtering, there’s no guarantee that something considered private today won’t become a training signal tomorrow.
- The optics of an opt-out default can erode trust if users feel they’ve had to “opt out of privacy” to begin with. Critics argue that a consent-first approach, in which training use is opt-in rather than the default, better respects autonomy and aligns with emerging expectations for data ethics.
- For businesses and developers, the shift means more data to train models, but also more responsibility for handling and protecting that data. It highlights the ongoing tension between building powerful AI and safeguarding user trust.
What users can do now
If you’re a Claude user and you want to protect your conversations, here are practical steps:
- Watch for the in-app notification about the updated terms.
- Open Claude’s Settings → Privacy and toggle off “Help improve Claude.”
- If you’re new to Claude, pay attention to the signup flow where you’ll see the data collection choice.
- Periodically review your Claude history and delete conversations you don’t want used for training.
Anthropic emphasizes that the changes to its consumer terms do not apply to enterprise or API usage, which means organizations and developers using Claude in business contexts remain governed by separate agreements.
What’s next for AI data practices
The Claude policy update underscores how quickly data handling standards can shift in AI product ecosystems. As more major players adopt opt-out frameworks, the question becomes less about whether data should be used for training and more about how to ensure that users understand the trade-offs and retain meaningful control over their information. For now, those who care about privacy will need to stay vigilant with settings, read policy notices, and advocate for clearer consent models in the products they rely on.
Bottom line
Anthropic is nudging privacy management onto users while promising safeguards and no third-party data selling. Whether this represents a pragmatic move to improve models or a concession to the economics of AI data remains to be seen. What’s clear is that the industry’s default is shifting—and users will have to decide, actively, what they’re comfortable sharing.