AI Chatbots: The Digital Confidants Turning Snitch – Privacy Warnings Escalate Amid Legal Scrutiny
By Tech Correspondent | Published May 14, 2026
In an era where artificial intelligence chatbots have become go-to companions for everything from casual chit-chat to deep personal confessions, a stark warning has emerged: these digital “therapists” might be more snitch than shrink. Columnist Arwa Mahdawi’s recent Guardian piece, “Beware what you tell your AI chatbot. It’s not a shrink – it’s a snitch,” has ignited a firestorm of debate over user privacy, corporate data practices, and the unintended consequences of sharing intimate thoughts with algorithms.
The Illusion of Confidentiality
Mahdawi’s article draws a direct line from the comfort of AI interactions to the cold reality of data exploitation. Users often treat chatbots like ChatGPT, Grok, or Claude as non-judgmental confidants, pouring out anxieties, relationship woes, and even suicidal ideation. But unlike a licensed therapist bound by strict confidentiality rules and health-privacy laws such as HIPAA in the US, AI companies operate in a regulatory Wild West, only loosely constrained by general data-protection regimes such as the GDPR in Europe.
“Your AI chatbot is not your friend, your therapist, or your priest,” Mahdawi writes. “It’s a corporate product designed to hoover up as much data as possible.” This data fuels model training, targeted advertising, and worse – potential handover to law enforcement. Courtroom revelations underscore the peril: filings in Elon Musk’s ongoing lawsuit against OpenAI co-founder Greg Brockman and CEO Sam Altman show that sensitive user conversations have been scrutinized in discovery and even read aloud in legal proceedings.
From Lawsuit Drama to Privacy Nightmare
The backstory is as juicy as a tech thriller. Musk, who co-founded OpenAI in 2015 as a nonprofit counterweight to profit-driven AI, sued Brockman and Altman in early 2026, alleging breach of the organization’s founding agreement. OpenAI’s pivot to a for-profit model, capped by a staggering $157 billion valuation, forms the crux of the dispute. But the real bombshell came during depositions: Brockman was compelled to read explicit user prompts aloud in court – including queries about illicit activities and personal fantasies.
“You won’t find it in the library, but you can watch Brockman… being forced to read the juiciest bits out loud in court,” Mahdawi quips, highlighting the absurdity and exposure.
These incidents aren’t isolated. In 2025, Google disclosed sharing Bard user data with authorities in child safety investigations. Meta’s Llama-based assistants have faced scrutiny for logging mental health disclosures. Experts like Dr. Emily Chen, a privacy researcher at Stanford, warn that AI firms’ terms of service often include clauses allowing data retention for “safety” or legal compliance, without granular user consent.
Real-World Ramifications
The risks extend beyond embarrassment. Mental health advocates report cases where users seeking crisis support via AI were flagged and reported to authorities, sometimes leading to unwanted interventions. A 2026 study by the Electronic Frontier Foundation (EFF) found that 68% of popular AI chatbots retain conversation logs indefinitely unless users opt out – an option buried in fine print.
Regulatory responses are gaining traction. The EU’s AI Act, whose main obligations take effect in August 2026, mandates transparency in data handling for high-risk systems, including chatbots. In the US, bipartisan bills propose extending therapist-style confidentiality protections to AI mental health tools. Yet enforcement lags behind innovation. OpenAI points to safeguards like data deletion options and human review filters, but critics argue these are performative.
Expert Voices and User Reactions
“People anthropomorphize AI too easily,” says AI ethicist Timnit Gebru. “It’s not listening empathetically; it’s logging for profit.” Social media has erupted with anecdotes: one X user described how a confession to Claude about job stress later surfaced in targeted ads for therapy apps. Reddit threads on r/ChatGPT overflow with regretful posts like, “I told it everything – now what?”
Mahdawi advises practical steps: use incognito or temporary-chat modes, avoid real names and identifying details, and treat the AI like a public diary. Emerging alternatives promise stronger privacy – startups are building bots with minimal logging, and open-weight models from platforms such as Hugging Face can run entirely on local hardware, so prompts never leave the device – but adoption remains niche.
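For readers who want to put the “treat it like a public diary” advice into practice, here is a minimal sketch of one approach: a local pre-filter that scrubs obvious identifiers from a prompt before it is sent to any chatbot. This is an illustration, not anything from Mahdawi’s column; the regex patterns and the redact helper are hypothetical and deliberately crude (real PII detection needs dedicated tooling), but the idea – redact locally, send only the sanitized text – is the point.

```python
import re

# Crude, illustrative patterns – real PII detection needs far more than regex.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str, names: list[str] | None = None) -> str:
    """Replace obvious identifiers with placeholders before the prompt
    leaves the machine. Everything here runs locally; nothing is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    # Strip any real names the user lists explicitly.
    for name in names or []:
        prompt = re.sub(re.escape(name), "[NAME]", prompt, flags=re.IGNORECASE)
    return prompt

if __name__ == "__main__":
    raw = "I'm Jane Doe (jane.doe@example.com, 555-867-5309) and I can't sleep."
    print(redact(raw, names=["Jane Doe"]))
    # -> I'm [NAME] ([EMAIL], [PHONE]) and I can't sleep.
```

Only the sanitized string should ever reach a hosted model; anything the filter misses is, per Mahdawi’s warning, best assumed to be retained.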
The Broader AI Privacy Reckoning
This scandal amplifies wider concerns. As AI integrates into daily life – from therapy apps like Woebot to workplace coaches – the snitch factor could erode trust. Musk, leveraging the lawsuit for PR, posted on X: “OpenAI’s betrayal of users is as bad as its betrayal of its mission.” OpenAI counters that its data protections exceed industry norms and that the suit is sour grapes.
With the AI therapy market projected to reach $5.5 billion by 2028 (Statista), the stakes are high. Will users heed the warning, or continue whispering secrets to silicon ears? As Mahdawi concludes, “The chatbot may soothe your soul today, but tomorrow it could be evidence in court.”