Meta Faces Backlash Over Unauthorized Flirty AI Chatbots Impersonating Female Celebrities
Meta's AI playground turns into ethical minefield as unauthorized celebrity chatbots spark outrage.
Digital Doppelgängers Run Amok
The company's AI systems reportedly generated flirtatious chatbots mimicking high-profile female celebrities without consent—raising immediate red flags about digital identity protection. Users discovered these chatbots responding to prompts with uncomfortably personal engagement styles.
Regulatory Reckoning Looms
Legal teams for affected celebrities are already drafting cease-and-desist letters. This isn't just about unauthorized likeness use; it's about deploying AI that crosses professional boundaries with potentially damaging conversational patterns.
Meanwhile, investors keep pouring money into AI projects as if it were going out of style; with ethics this loose, it just might be.
Reports implicate Meta in flirty avatars scandal
Reuters reported that several weeks of testing found that the celebrity chatbots, available across Meta’s Facebook, Instagram, and WhatsApp platforms, sometimes went far beyond playful conversation. When prompted, the bots produced photorealistic images of stars in lingerie or posing in bathtubs, and even suggested intimate encounters.
One troubling discovery was the creation of a chatbot of Walker Scobell, a 16-year-old actor. When asked for a beach photo, the bot generated a lifelike image of the teenager shirtless with the caption, “Pretty cute, huh?”
Meta is not alone in facing scrutiny. Elon Musk’s xAI has also come under criticism for enabling users to generate deepfake images of celebrities in underwear.
Meta spokesman Andy Stone acknowledged the failures, saying the company’s tools should not have generated either intimate depictions of adult celebrities or any sexualized material involving minors.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he told Reuters. He added that the lingerie depictions reflected failures in enforcing company policy.
Meta’s rules prohibit “direct impersonation,” but the company argued that parody bots were permissible if labeled clearly. However, Reuters found that some avatars carried no disclaimer. Meta deleted about a dozen of the bots, both parody and unlabeled, shortly before the Reuters report was published. The company declined to comment on the removals.
AI safety concerns could lead to regulatory pressure
Following the report, Meta said it would roll out new safeguards aimed at protecting teenagers, including restricting youth access to certain AI characters and retraining its models to reduce inappropriate themes.
California Attorney General Rob Bonta issued a warning to the sector, saying, “Exposing children to sexualized content is indefensible.”
In one tragic case earlier this month, a cognitively impaired 76-year-old man in New Jersey died after attempting to meet a Meta chatbot he believed to be a real woman. Critics say such cases highlight the dangers of deploying large-scale AI tools without adequate guardrails.
Legal experts warn that Meta could face significant challenges under existing intellectual property and publicity laws. Mark Lemley, a Stanford University law professor, said that California’s “right of publicity” statute prohibits the use of an individual’s name or likeness for commercial purposes without consent, with an exception for entirely new, transformative works.
“That doesn’t seem to be true here,” he said, noting that the bots simply replicated celebrities’ images rather than creating transformative works.