
Privacy Policy for an AI Chatbot

AI chatbots present a privacy challenge that traditional websites do not. Users type sensitive content into a prompt, that content is processed by a language model, and it may be stored or used to improve the system. Your privacy policy must cover all of this clearly.


Written by Anupam Kumar


Yes, every AI chatbot that collects user prompts, account data, or conversation history needs a privacy policy that discloses how that data is processed, stored, and used. Under the GDPR, the CCPA, and emerging AI-specific regulation, chatbot operators must explain what is collected from prompts, who processes the data (including any underlying model providers), how long it is retained, whether it is used for training, and how users can opt out or request deletion.

What an AI Chatbot Actually Collects

Prompts and responses are the most sensitive data an AI chatbot processes. Users frequently paste personal information into prompts: names, emails, employment details, health concerns, financial questions, and proprietary work content. Even if your chatbot is a thin wrapper around a third-party model, the prompt text becomes personal data the moment a user includes anything that identifies them.

Account data is the next layer: email address, login credentials, billing information for paid plans, IP address, and any profile fields you collect. This data is governed by the same privacy laws as any other web service.

Conversation history is the third layer. Many chatbots persist past conversations so users can return to them. This creates a long-term store of personal data that must be covered in the privacy policy and protected with appropriate security.
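A privacy policy is easier to keep accurate when it is backed by an explicit data inventory in the codebase. A minimal Python sketch of the three layers described above; the category names, examples, and retention figures are illustrative assumptions, not legal advice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataCategory:
    """One row of the data inventory that the privacy policy describes."""
    name: str            # what is collected
    examples: str        # concrete examples users will recognize
    retention_days: int  # 0 means "retained for the life of the account"
    used_for_training: bool

# Illustrative inventory covering prompts, account data, and history.
INVENTORY = [
    DataCategory("prompts_and_responses", "prompt text, model replies", 30, False),
    DataCategory("account_data", "email, billing info, IP address", 0, False),
    DataCategory("conversation_history", "saved past conversations", 0, False),
]

def policy_rows(inventory):
    """Render the inventory as plain-text rows for the policy's data table."""
    rows = []
    for c in inventory:
        retention = f"{c.retention_days} days" if c.retention_days else "life of the account"
        training = "yes" if c.used_for_training else "no"
        rows.append(f"{c.name}: {c.examples}; retained {retention}; used for training: {training}")
    return rows
```

Regenerating the policy's data table from this structure keeps the disclosure and the code from drifting apart.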

Disclosing the Underlying Model Provider

If your chatbot is built on top of OpenAI, Anthropic, Google, or any other language model API, you must disclose this in your privacy policy. Users have a right to know that their prompts are being sent to a third-party processor.

Name the provider explicitly. State what data is sent (typically the prompt and any system context), what is returned (the model response), and what the provider says about retention. Most major providers offer enterprise terms that exclude prompt data from training; if you use those terms, say so in your policy.
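One way to keep the "what is sent" disclosure honest is to assemble the provider payload in a single, documented place. A minimal sketch; the field names mirror common chat-completion APIs but are illustrative assumptions, not any specific provider's schema:

```python
def build_provider_payload(system_context: str, user_prompt: str, model: str = "example-model"):
    """Assemble exactly what leaves our servers for the third-party model API.

    Centralizing this makes the privacy policy's "what we send" disclosure
    verifiable against the code. Field names are illustrative; check your
    provider's API reference for the real schema.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_context},  # our instructions
            {"role": "user", "content": user_prompt},       # the user's personal data
        ],
        # Deliberately excluded: account identifiers, emails, billing data.
    }
```

Auditing this one function answers the policy question "what is sent to the provider" with certainty.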

If you fine-tune a model on user data, this demands a much more prominent disclosure. Tell users what data is used for fine-tuning, how they can opt out, and what happens if they later request deletion. Trained model weights cannot be selectively deleted, which is a known limitation under the GDPR's right to erasure that you should explain clearly.

Training, Retention, and Opt Out

Using prompts to improve your chatbot is among the most sensitive disclosures you will make. State explicitly whether prompt data is used for training, what kind of training is performed, who has access to the raw data, and how a user can opt out.

Retention periods must be specific. Vague language like "we keep your data for as long as necessary" fails GDPR's storage-limitation principle. Use real numbers: prompts deleted after 30 days, account data retained for the life of the account, billing records kept for seven years for tax purposes.
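Concrete retention numbers are easiest to defend when they are enforced in code rather than just promised in the policy. A minimal sketch; the record types and periods are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule matching the concrete numbers a policy states.
RETENTION = {
    "prompt": timedelta(days=30),
    "billing_record": timedelta(days=7 * 365),  # ~seven years for tax purposes
}

def is_due_for_deletion(record_type: str, created_at: datetime, now: datetime) -> bool:
    """True when a stored record has outlived its stated retention period."""
    return now - created_at >= RETENTION[record_type]
```

A scheduled purge job can call this check against every stored record so the deployed behavior matches the published numbers.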

Provide a clear path for users to delete their conversations and account. The path should be self-serve wherever possible; manual email requests are acceptable for edge cases but should not be the only option.
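A self-serve deletion flow can be sketched as follows. The in-memory dictionaries stand in for your real database, and all names are illustrative; a production system would also purge backups and notify any processors:

```python
# Stand-ins for real data stores (user_id -> records).
conversations = {}
accounts = {}

def delete_user_data(user_id: str) -> dict:
    """Delete a user's conversations and account; return a receipt for the user."""
    removed_conversations = len(conversations.pop(user_id, []))
    removed_account = accounts.pop(user_id, None) is not None
    return {
        "user_id": user_id,
        "conversations_deleted": removed_conversations,
        "account_deleted": removed_account,
    }
```

Returning a receipt gives the user (and any auditor) evidence that the deletion request was acted on.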

GDPR and CCPA Specifics for Chatbots

Under GDPR, you must identify a lawful basis for processing prompt data. For paid users, performance of a contract is usually the correct basis. For free users, consent or legitimate interests can apply, but legitimate interests requires a documented balancing test that you should be ready to produce.

GDPR also requires you to identify your role as data controller, list any joint controllers, and name any data processors (including the model provider). Provide a contact for data protection requests.

Under CCPA, California residents have the right to know what is collected, the right to delete it, and the right to opt out of the sale or sharing of personal information for advertising. Most chatbots do not sell data, but if you share with any analytics or ad partner, you must offer a "Do Not Sell or Share My Personal Information" link on your homepage.

Common Privacy Policy Mistakes Chatbots Make

Hiding the model provider. Reviewers and regulators have started flagging chatbots that fail to mention they are routing prompts to a third party API. Always name the provider.

Claiming end-to-end encryption when prompts are processed by a third party. Encryption in transit is fine to claim if true, but the prompt is decrypted at the model provider, and that is the disclosure that matters.

Promising deletion without describing the model-training caveat. If you fine-tune on user data, deleting the user's account does not remove that data's influence from the trained model. This must be explained in plain language.

Frequently Asked Questions

Do I need a privacy policy if my AI chatbot is free and has no accounts?

Yes. Even an anonymous chatbot collects data: IP address, browser metadata, the prompt text itself, and any cookies or analytics on the page. Privacy laws like GDPR and CCPA apply to that data regardless of whether the user has an account.

Can I just point users to OpenAI's privacy policy if my chatbot uses the OpenAI API?

No. You are the data controller for your service. You must publish your own privacy policy that names OpenAI as a processor and links to the OpenAI policy as a supplement. Pointing users at OpenAI's policy alone fails your controller obligations under GDPR and confuses users.

What if my chatbot uses an open source model running on my own server?

Then there is no third party processor for the model itself, which simplifies the disclosure. You still need to describe what is collected, how it is stored, and how users can request deletion. If you use any cloud hosting, GPU vendor, or inference platform, name them as processors.

Are there special rules for AI chatbots aimed at children?

Yes. COPPA in the US applies to children under 13, GDPR Article 8 sets the age of digital consent at 16 by default (member states may lower it to 13), and the UK Age Appropriate Design Code covers all users under 18. AI chatbots aimed at children require verifiable parental consent for data collection and additional protections in the privacy policy.
