OpenAI Teams Up with Bryan Cranston & SAG-AFTRA in Major Push Against Sora Deepfake Threats

Hollywood meets AI in unprecedented alliance as OpenAI recruits Breaking Bad star Bryan Cranston and actors' union SAG-AFTRA to combat synthetic media manipulation.
The Deepfake Defense Pact
Major talent agencies have joined the coalition, creating an entertainment-industry firewall against AI-generated impersonations. Sora's hyper-realistic video capabilities sparked urgent action: no more unauthorized digital doppelgangers threatening actors' livelihoods.
Authentication protocols are rolling out immediately, blockchain verification systems are being written into production contracts, and talent agencies are rewriting representation agreements with AI clauses that would make contract lawyers blush.
Nothing unites Hollywood faster than protecting its cut, except maybe a 20% backend deal. The industry has finally found something scarier than bad reviews: perfect digital replicas working for scale.
OpenAI faces heat from agencies over Sora 2’s misuse
OpenAI has been under fire from talent agencies for a while now. Both CAA and UTA blasted the company earlier this year for using copyrighted work to train its models, calling Sora a straight-up threat to their clients’ intellectual property.
Those warnings turned real when users started creating disrespectful Sora videos of Martin Luther King Jr. The videos were offensive enough that King’s estate stepped in last week to ask for them to be blocked, and OpenAI complied.
The heat didn’t stop there. Zelda Williams, daughter of late comedian Robin Williams, also told people to stop sending her AI-made videos of her dad after Sora 2 dropped. She made her frustration public not long after the launch, adding more fuel to the fire already building around OpenAI’s loose grip on identity protections.
With complaints stacking up, the company has tightened its policies. Sora already required opt-in consent for voice and likeness use, but OpenAI now also promises to respond quickly to any complaints it receives about impersonation or misuse.
Sam Altman updates policy and pushes NO FAKES Act
On October 3, OpenAI CEO Sam Altman made it official: the old policy, which let the company use material unless someone opted out, has been scrapped. The company now gives rightsholders “more granular control over generation of characters,” meaning agencies can finally manage how and when their clients’ identities are used in Sora.
Altman also doubled down on his support for the NO FAKES Act, a U.S. bill aimed at stopping unauthorized AI replicas. “OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” he said. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”
OpenAI has grown from a research outfit into an AI empire chasing everything: chat apps, social platforms, and enterprise tools. But with billions locked up in AI chips and a giant data-center build-out still hungry for cash, it is looking hard at government and corporate contracts to pay the bills. That makes avoiding lawsuits, and getting actors, agents, and lawmakers off its back, just as important as training the next model.