👾 Protecting Your Privacy: How to Prevent LLMs from Training on Your Data
Take control of your data with this guide to disabling training settings on ChatGPT, Grok, Gemini, and Claude.
In today's AI-driven world, large language models (LLMs) like ChatGPT, Grok, Gemini, and Claude have become invaluable tools for productivity, creativity, and problem-solving. However, as we eagerly adopt these technologies, we must also consider an important question: What happens to our data after we share it with these systems?
Many people don't realize that their conversations with AI assistants might be used to train future versions of these models. That innocent question about a medical condition, your business strategy brainstorming, or that creative writing sample you asked for feedback on - all of this could become part of the data that trains tomorrow's AI.
I've been thinking about this a lot lately. While I appreciate that AI needs data to improve, we should have control over our own information. Privacy isn't just a preference; it's a fundamental right that we shouldn't have to sacrifice to benefit from AI advancements.
Why Privacy Matters with LLMs
When we type sensitive information into an AI chatbot, we're often in the mindset of having a private conversation. We may share personal details, proprietary business information, or creative work that we don't want incorporated into a public-facing AI model.
The stakes are higher than many realize. Your conversations might contain:
Personal identifying information
Professional secrets or intellectual property
Creative content you'd prefer to keep original
Sensitive questions you'd rather keep private
Fortunately, most major AI providers now offer ways to opt out of having your data used for training. When I run my AI training workshops with businesses, we focus on this topic because I don't want people entering information that could later cause harm to them or their organization.
If you did enter sensitive information before turning on the right privacy settings, the AI companies do work to anonymize submitted data (e.g., social security numbers, phone numbers, and other personal identifiers), but it's much better to turn these settings on yourself.
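For an extra layer of protection, you can also strip obvious identifiers out of text before pasting it into any chatbot. Here's a minimal sketch in Python; the patterns and the scrub helper are purely my own illustration (a real redaction tool catches far more than this):

```python
import re

# Illustrative patterns only; real PII redaction needs a dedicated tool.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("My SSN is 123-45-6789 and my cell is (555) 123-4567."))
# -> My SSN is [SSN REDACTED] and my cell is [PHONE REDACTED].
```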
Here's how to take control on each of the major platforms (note that these instructions are for the web interfaces as of March 31, 2025).
ChatGPT (OpenAI)
OpenAI's approach to privacy has evolved significantly over time. To prevent your ChatGPT conversations from being used for training:
Click on your profile icon (typically at the upper right corner)
Navigate to "Settings"
Select "Data Controls"
Toggle the "Improve the model for everyone" selector to "Off"
As an additional privacy measure, you can also go to this page (https://privacy.openai.com/policies) and click 'Make a Privacy Request' to ask OpenAI not to train on your content.
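One related note: the opt-out above applies to the consumer ChatGPT interface. Data sent through OpenAI's developer API is covered by separate terms and, as of this writing, OpenAI states it is not used for model training by default. As a rough illustration (assuming the official openai Python package and an API key set in your environment):

```python
from openai import OpenAI  # pip install openai

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# As of this writing, OpenAI states that API traffic is not used to train
# its models by default, unlike conversations in the consumer ChatGPT app.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you have access to
    messages=[{"role": "user", "content": "Summarize these meeting notes for me."}],
)
print(response.choices[0].message.content)
```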
Grok (X/Twitter)
X's Grok AI has a slightly different approach to privacy controls. To stop Grok from training on your data:
Go to your X home page
Toward the bottom left-hand side of the page, click on the icon of a circle with three dots
Select "Settings and Privacy"
Select "Privacy and Safety"
Select "Grok & Third-Party Collaborators"
Uncheck the box labeled "Allow your posts as well as your interactions, inputs and results with Grok to be used for training and fine-tuning"
Uncheck the box labeled "Allow X to personalize your experience with Grok" if you also want to stop Grok from keeping additional information on you and personalizing the experience
Gemini (Google)
Google's Gemini can be prevented from using your data for training with these steps:
Navigate to gemini.google.com
Go to "Activity"
Select "Gemini Apps Activity"
Click the option to "Turn Off" Gemini Apps Activity
Claude (Anthropic)
Anthropic has taken a privacy-forward approach with Claude. By default, Claude does not use your conversations for model training; you would have to reach out to Anthropic directly to opt in (and I am not sure why you would do that!).
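The same privacy-forward stance applies if you use Claude through Anthropic's API: as of this writing, Anthropic's commercial terms state that API inputs and outputs are not used to train its models by default. A minimal sketch, assuming the official anthropic Python package and an API key set in your environment:

```python
from anthropic import Anthropic  # pip install anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = Anthropic()

# As of this writing, Anthropic states that API inputs and outputs are not
# used for model training by default.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model alias; use whichever you prefer
    max_tokens=300,
    messages=[{"role": "user", "content": "Draft a short privacy reminder for my team."}],
)
print(message.content[0].text)
```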
Clicking the "thumbs up" or "thumbs down" buttons
Most chat responses have small thumbs-up and thumbs-down icons at the bottom. These send feedback to the company about that chat (whether you were happy or unhappy with the response). If you click one, the data in that chat is no longer private: it becomes available for human reviewers at the company to look at, so please do not give a thumbs up or thumbs down on any chat that contains sensitive data. The data is unlikely to be used for training, but select people within those companies will have access to it for a set period of time.
Final Thoughts
Taking these simple steps can significantly enhance your privacy when using LLMs. I've implemented these changes across all my AI interactions, and it gives me peace of mind knowing my conversations remain private while I still benefit from these powerful tools.
Remember that privacy settings can change over time, so it's worth revisiting these options periodically. The small effort required to adjust these settings is well worth the privacy protection they provide.
About the author
Steve Smith is a Senior Partner at NextAccess and has worked with hundreds of companies to understand and adopt AI in their organizations. He has worked extensively with services firms (law firms, PE firms, consulting firms). Feel free to reach out via email: [email protected]. Want to talk about an AI workshop or personal training? Grab a 15-minute slot on my calendar.