Privacy advocates see risk in new Meta policy that uses AI chats to serve targeted ads
Summary
Meta has started personalising content and ad recommendations based on users' interactions with its generative AI across Facebook, Instagram, WhatsApp and Messenger. The change, announced in October and rolled out in December, applies automatically to anyone who uses Meta AI; the only way to avoid the data sharing is to stop using the feature.
Privacy groups say Meta’s assurances are vague: the company claims it will not use conversations about sensitive topics for ads, but critics worry about indirect signals, model training, creative optimisation and so-called “proxy audiences” that could reveal sensitive traits without explicit labels. Advocates also warn the move creates a direct financial incentive for Meta to drive more and deeper chatbot engagement, raising concerns about addiction, child safety and exploitation by bad actors.
Key Points
- Meta will personalise ads and recommendations based on interactions with Meta AI across its major apps (Facebook, Instagram, WhatsApp, Messenger).
- Users of Meta AI are automatically included; there is no opt-out for those who use the AI feature.
- Meta says it will not use explicitly sensitive topics (religion, health, political views, sexual orientation, etc.) to target ads, but critics find the carve-out's language broad and vague.
- Privacy experts fear sensitive information could still influence targeting indirectly — via proxy signals, model training or optimisation of creative assets.
- Automatic inclusion creates a direct financial incentive for Meta to increase engagement with its chatbots, which could push design choices that encourage more disclosure and longer interactions.
- Child safety and mental-health risks are highlighted, given heavy teen use of AI companions and recent cases linking intense chatbot interactions to harm.
- Meta’s history of privacy and advertising controversies (including an FTC penalty and alleged failures around scam ads) has advocates wary that this change could worsen harms.
Why should I read this?
Because if you use Meta's AI or care about how platforms collect data, this affects you now. It isn't a distant policy tweak: it's a change that quietly funnels what people tell chatbots into ad systems, with no opt-out for those who use the feature. Read on to spot the practical risks and decide whether you want to keep using it.
Author’s take
This is big. Meta is folding conversational data into ad targeting with minimal transparency and no opt-out for users of the AI, a recipe for unintended privacy leakage and perverse product incentives. If you're interested in platform governance, ad tech or AI safety, the details matter.
Context and relevance
The rollout comes as chatbot use soars and regulators and researchers scrutinise harms from generative AI — from biased outputs and privacy exposure to mental-health risks. The policy sits at the intersection of ad tech and AI governance: if conversational signals become standard inputs for targeting, advertisers gain more precise behavioural data while users lose control over highly sensitive disclosures. This makes the item especially relevant to privacy professionals, policymakers, parents and anyone tracking how AI monetisation shapes product design.
Source
Source: https://therecord.media/privacy-advocates-see-risks-meta-ai-ad-targeting
