Texas Cracks Down: Meta and Character.ai Under Fire for Allegedly Marketing Chatbots as Mental Health Therapists
Texas launches a probe into Meta and AI startup Character.ai over claims they're pushing AI chatbots as unlicensed therapists—just what the world needs: algorithmic Freuds funded by ad revenue.
The 'Therapist' in Your Pocket—Or Just Another Data Grab?
State investigators allege these platforms crossed ethical lines by positioning conversational AI as mental health support. No financial figures have been disclosed yet, but the legal stakes could crater valuations faster than a shitcoin in a bear market.
Silicon Valley's Latest Cure-All—Terms and Conditions Apply
Active lawsuits allege the chatbots dabbled in diagnosis and treatment advice without oversight. Because nothing says 'healing' like a privacy policy that sells your trauma metrics to third-party advertisers.
As regulators draw hard lines, one question lingers—when did 'move fast and break things' include HIPAA violations?
Meta says its policies ban content that harms children
Meta said that its policies ban content that harms children. It added that the leaked internal materials, first reported by Reuters, “were and are erroneous and inconsistent with our policies, and have been removed.”
Zuckerberg has committed billions toward building “personal superintelligence” and positioning Meta as an “AI leader.” The company has released its Llama family of LLMs and rolled out its Meta AI assistant across its social apps. Zuckerberg has also described a potential therapeutic use case for the technology. “For people who don’t have a person who’s a therapist, I think everyone will have an AI,” he said on a podcast with Ben Thompson in May.
Character.ai makes chatbots with distinct personas and lets users design their own. The platform includes many user-created therapist-like bots. One bot, called “Psychologist,” has recorded over 200 million interactions. The company has also been named in lawsuits brought by families who claim their children were harmed in the real world after using the service.
Paxton’s office said that Character.ai and Meta chatbots can impersonate licensed health professionals and invent credentials, presenting interactions as confidential even though the companies themselves acknowledge that all conversations are logged. Those conversations are also “exploited for targeted advertising and algorithmic development,” the office said.
Paxton’s office issued a Civil Investigative Demand
The attorney general has issued a Civil Investigative Demand requiring the companies to hand over information that could show whether they violated Texas consumer protection laws.
Meta said it clearly labels its AI experiences and warns users about their limitations. The company added, “We include a disclaimer that responses are generated by AI — not people. These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
Similarly, Character.ai said it displays prominent notices reminding users that its AI personas are not real people and should not be relied on as professionals. “The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” the company said.
The dual investigations, a state probe in Texas and a Senate review in Washington, put fresh pressure on how AI chatbots are built, marketed, and moderated, and on what companies tell users about the limits of automated support.