FTC Launches Probe Into AI Giants Over Chatbot Safety and Child Protection Gaps
Federal regulators have put Big Tech's AI ambitions under the microscope, and the timing couldn't be more brutal.
Safety First—Or Profit First?
The FTC's sweeping inquiry targets how leading AI firms handle chatbot safety protocols and child protection measures. The order names seven companies, reaching deep into Silicon Valley's biggest players.
Regulatory Storm Brewing
This isn't some routine check-in. The probe demands internal documents, safety testing data, and compliance records: the kind of scrutiny that makes C-suite executives sweat through their tailored suits. It comes just as these companies race to monetize generative AI while keeping costs contained.
Child Safety Takes Center Stage
With kids increasingly interacting with AI assistants, regulators want answers on content filtering, data collection, and psychological safeguards. One anonymous source called it “a long-overdue reckoning” for an industry moving fast and breaking things, including, potentially, consumer trust.
Market Impact? Minimal—For Now
Tech stocks barely flinched on the news. When was the last time a regulatory inquiry actually changed how Silicon Valley operates? Expect the usual playbook: lawyer up, lobby hard, keep shipping product. Business as usual, just with more compliance paperwork.
Building AI Guardrails
“It’s a positive step, but the problem is bigger than just putting some guardrails,” Taranjeet Singh, Head of AI at SearchUnify, told Decrypt.
The first approach, he said, is to build guardrails at the prompt or post-generation stage “to make sure nothing inappropriate is being served to children,” though “as the context grows, the AI becomes prone to not following instructions and slipping into grey areas where they otherwise shouldn't.”
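As a rough illustration of what that post-generation stage can look like, the sketch below screens a finished model reply before it is served. The `is_unsafe_for_minors` check and the blocked-topic list are hypothetical stand-ins for a production moderation classifier, not any named company's system.

```python
# Minimal sketch of a post-generation guardrail: inspect the model's
# finished reply before it reaches the user. The blocklist and age flag
# are hypothetical placeholders for a trained safety classifier.

BLOCKED_TOPICS = {"self-harm", "sexual content", "violence"}  # illustrative only

def is_unsafe_for_minors(reply: str) -> bool:
    """Toy stand-in for a real moderation model."""
    lowered = reply.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def serve_reply(model_reply: str, user_is_minor: bool) -> str:
    # Post-generation stage: the check runs on the output, not the prompt,
    # so it catches unsafe content regardless of how it was elicited.
    if user_is_minor and is_unsafe_for_minors(model_reply):
        return "I can't help with that. Here are some resources instead."
    return model_reply
```

The weakness Singh describes shows up exactly here: as conversations grow longer and more oblique, a simple output check like this is easier for the model, or the user, to slip past.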
“The second way is to address it in LLM training; if models are aligned with values during data curation, they’re more likely to avoid harmful conversations,” Singh added.
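Singh's second approach happens before the model ever answers a user: filtering the training data itself. A toy sketch of that curation step might look like the following, where the `safety_label` field is an assumed annotation on each example, not a standard schema.

```python
# Toy sketch of training-data curation: drop examples flagged as unsafe
# before fine-tuning, so alignment is baked in rather than bolted on.
# The "safety_label" annotation is an assumption for illustration.

def curate(dataset: list[dict]) -> list[dict]:
    return [ex for ex in dataset if ex.get("safety_label") == "safe"]

raw = [
    {"prompt": "Explain photosynthesis", "safety_label": "safe"},
    {"prompt": "Describe graphic violence", "safety_label": "unsafe"},
]
print(len(curate(raw)))  # -> 1: only the safe example survives curation
```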
Even moderated systems, he noted, can “play a bigger role in society,” with education as a prime case where AI could “improve learning and cut costs.”
Safety concerns around AI interactions with users have been highlighted by several cases, including a wrongful death lawsuit brought against Character.AI after 14-year-old Sewell Setzer III died by suicide in February 2024 following an obsessive relationship with an AI bot.
Following the lawsuit, Character.AI “improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines” and added a time-spent notification, a company spokesperson told Decrypt at the time.
Last month, the National Association of Attorneys General sent letters to 13 AI companies demanding stronger child protections.
The group warned that “exposing children to sexualized content is indefensible” and that “conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”
Decrypt has contacted all seven companies named in the FTC order for additional comment and will update this story if they respond.