xAI’s Grok Training Sparks Ethics Debate: Staff Facial Expression Videos Request Raises Eyebrows
Elon Musk's xAI pushes boundaries—and buttons—with a controversial data grab for its Grok AI project.
Employees asked to submit facial expression videos for 'emotional intelligence training'—but at what cost?
Privacy advocates sound alarms as Silicon Valley's appetite for data outpaces ethical frameworks (again).
Meanwhile in finance: VCs cheer the move—after all, nothing boosts valuation like a good old-fashioned privacy tradeoff.
xAI employees take issue with ‘Skippy’ project
According to internal documents and Slack communications obtained by Business Insider, xAI tasked its AI tutors, employees responsible for refining the model, with recording 15- to 30-minute conversations with coworkers.
The sessions reportedly included both dialogue and exaggerated facial expressions to simulate real-world emotional responses. In each pair, one person played the “host,” or virtual assistant, while the other played a user.
The host was instructed to limit their movements and stay within the camera frame, while the user was free to record from a mobile phone or computer and move naturally. The resulting footage was meant to reflect casual, real-life conversations.
The lead engineer on the Skippy project told employees during a kickoff meeting that they wanted to “give Grok a face,” and that this data could support the development of human-like avatars.
Chats show the engineer promised the videos would only be used internally and would not be used to create digital versions of the participants, but some employees were reportedly not convinced.
“Our aim is to expose the model to imperfect data, like background noise and abrupt movements, to make its responses more versatile,” the engineer explained, according to a recording of the meeting. They also insisted: “Your face will not ever make it to production. It’s purely to teach Grok what a face is.”
Despite such assurances, dozens of employees were unsettled after being required to sign a consent form that granted xAI “perpetual” rights to use their likeness for training purposes and in promotional materials.
Opt-outs and uncomfortable prompts
Several workers opted out of the program entirely, saying they were uncomfortable with the project and the language used in the consent agreements. During internal discussions, one employee asked whether the footage could be manipulated to simulate them saying things they never actually said.
As part of the recording sessions, xAI encouraged employees to discuss personal or provocative topics like, “Would you ever date someone with a kid?”, “How do you secretly manipulate people to get your way?”, and “What about showers, morning or night?”
Some found the topics profoundly invasive or inappropriate.
Only days after Grok 4’s release in mid-July, xAI debuted two AI avatars named Ani and Rudi. Videos posted on X, the social platform owned by Musk, show that Ani can be prompted to engage in sexually explicit conversations and remove her clothing.
Rudi, a red panda avatar, has reportedly made violent threats, including statements about bombing banks and harming billionaires.
Though xAI has not confirmed whether the Skippy project directly contributed to the development of these avatars, the company’s silence has invited varying interpretations.
xAI also rolled out a video chat feature for Grok in April, and earlier this month introduced Grok for Tesla owners, alongside a premium subscription tier called SuperGrok Heavy, priced at $300 per month.
On July 9, Grok’s text reply feature was briefly taken offline after the chatbot produced an antisemitic rant. The company later issued a public apology on X.