One Tech Tip: Don't want chatbots using your conversations for AI training? Some let you opt out
Aug. 15, 2024, 2:25 p.m.
LONDON -- Be careful what you tell a chatbot. Your conversation might be used to improve the artificial intelligence system that it's built on.
If you seek advice from ChatGPT about a sensitive medical issue, remember that anything you disclose could be used to refine the algorithms powering OpenAI's AI models. The same goes if you, say, upload a confidential company report to Google's Gemini to summarize for a meeting.
It's no secret that the AI models underpinning popular chatbots have been trained on vast amounts of information gathered from the internet, like blog posts, news articles and social media comments. This allows them to predict the next word when generating a response to your questions.
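For readers curious what "predicting the next word" looks like in practice, here is a minimal sketch using the small, open-source GPT-2 model through the Hugging Face transformers library. Both are my choices for illustration; they are not the systems behind any commercial chatbot, which are far larger and proprietary, but the principle is the same.

```python
# Illustrative sketch of next-word prediction with the open-source
# GPT-2 model via Hugging Face's transformers library. The model
# assigns a score to every token in its vocabulary; the chatbot-style
# move is to pick the likeliest continuation of the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Be careful what you tell a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score per vocabulary token, per position

next_token_id = int(logits[0, -1].argmax())  # likeliest next token
print(tokenizer.decode(next_token_id))
```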
This training often occurred without consent, raising copyright concerns. Experts say that because AI models are so opaque, it might be too late to remove any data that has already been used.
Going forward, though, you may be able to prevent your chatbot conversations from being used for AI training. It's not always possible, but some companies give users the choice:
Google stores your conversations with its Gemini chatbot to improve its machine learning systems. For users 18 or older, chats are kept by default for 18 months, though this can be adjusted in settings. Human reviewers may access the conversations to enhance the quality of the generative AI models powering Gemini. Google advises users to refrain from sharing confidential information or data they wouldn't want a human reviewer to see with Gemini.
To disable this, visit the Gemini website and click the Activity tab. Click the Turn Off button and choose from the dropdown menu to stop recording future chats or delete all previous conversations. The company warns that conversations selected for human review won't be deleted and are stored separately. Whether you choose to disable activity recording or leave it on, Google states that all Gemini chats are kept for 72 hours to “provide the service and process any feedback.”
Gemini's help page also outlines the process for iPhone and Android app users.
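Note that these settings govern the consumer Gemini chatbot; developers who reach Gemini programmatically through Google's API are covered by separate data terms, which are worth reviewing on their own. Purely as an illustrative sketch, assuming Google's google-generativeai Python SDK, a placeholder API key and an example model name:

```python
# Sketch of calling Gemini through Google's developer API rather than
# the consumer chatbot. API usage falls under Google's API data terms,
# not the Gemini Apps Activity setting described above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; real keys come from Google AI Studio

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is illustrative
response = model.generate_content("Summarize these meeting notes in three sentences.")
print(response.text)
```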
Meta's AI chatbot can join conversations on Facebook, WhatsApp and Instagram, and it's powered by the company's open-source AI language models. Meta says the models are trained on information shared on its platforms, including social media posts, photos and their captions, but not on private messages between friends and family. They're also trained on publicly available data collected from other parts of the web by "third parties."
Not all users can opt out. Residents of the 27-nation European Union and the United Kingdom, which have strict privacy laws, have the right to object to their information being used to train Meta's AI systems. On the Facebook privacy page, select Other Policies and Articles from the left-hand menu, then find the section on generative AI and scroll down to a link to a form where you can register your objection.
The form includes a box asking for additional details to support your request, but gives no specific instructions. I wrote that, as a U.K. resident, I was exercising my right to withdraw consent for my personal data to be used in AI training. Almost instantly, I received an email confirming that Meta had reviewed and accepted my objection. "This means your request will be applied going forward," the email said.
People in the United States and other countries without national data privacy laws don't have this option.
Meta's privacy hub does link to a form for requesting that data collected by third parties not be used for "developing and enhancing AI at Meta." But the company cautions that requests aren't automatically fulfilled and will be reviewed according to local laws. The process is also convoluted: users must submit the prompt that produced a response containing their personal information, along with a screenshot of that response.
Microsoft's Copilot gives individual users no explicit way to opt out of having their data used for training. The best you can do is delete your interactions with the Copilot chatbot, via your Microsoft account's settings and privacy page. Look for a dropdown menu labeled 'Copilot interaction history' or 'Copilot activity history' to find the delete button.
If you have an OpenAI account, navigate to the settings menu in your web browser and then the 'Data controls' section. Here, you can disable the setting 'Improve the model for everyone.' If you don't have an account, click on the small question mark in the bottom right corner of the webpage and then 'settings' to access the same option. Mobile users can make this choice within the ChatGPT Android and iOS apps.
OpenAI says on its data controls help page that when users opt out, their conversations will still appear in the chat history but won't be used for training. Such chats are kept for 30 days and reviewed only if needed to monitor for abuse.
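That opt-out covers the ChatGPT apps. Developers who call OpenAI's models through its API are under separate terms, and OpenAI has said API traffic is not used to train its models by default. A minimal sketch, assuming the official openai Python SDK, an API key in the OPENAI_API_KEY environment variable and an example model name:

```python
# Sketch of reaching OpenAI's models through the developer API rather
# than the ChatGPT interface. Per OpenAI's stated policy, API data is
# not used for model training by default, unlike consumer chats.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is illustrative
    messages=[{"role": "user", "content": "Summarize this report in three bullets."}],
)
print(response.choices[0].message.content)
```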
Elon Musk's social media platform X quietly rolled out a setting that lets his AI chatbot Grok learn from user data. The setting is on by default, allowing Grok to use information such as your posts and your "interactions, inputs, and results" for training and "fine-tuning."
The change was never publicly announced and only came to light after X users noticed it in July. To opt out, open settings in X's desktop browser version, go to “Privacy and safety,” find “Grok,” and uncheck the box. You can also delete your conversation history with Grok, if you have any. This option isn't available in the X mobile app.
Anthropic AI says its chatbot Claude doesn't train on personal data. Nor does it automatically use questions or requests to train its AI models. However, users can grant "explicit permission" for a specific response to be used in training by giving it a thumbs up or thumbs down, or by emailing the company. Conversations flagged for safety review may also be used to improve the company's systems for enforcing its rules.