OpenAI Fights Court Order to Keep ChatGPT Histories in NYT Copyright Battle: Privacy or Paper Trail?
A federal court order now forces OpenAI to retain ChatGPT conversations indefinitely, including chats users thought they had deleted. The mandate lands in the middle of The New York Times's copyright lawsuit, and OpenAI is fighting it, arguing the demand trades user privacy for a legal paper trail.
No more vanishing acts: OpenAI's standing policy of permanently deleting conversations within 30 days of a user removing them is now on hold. Under the order, output logs that would otherwise be deleted must be preserved and segregated until the court says otherwise.
Legal storm brewing: The Times's lawsuit, filed in December 2023, accuses OpenAI and Microsoft of training their AI models on Times content without permission or compensation. The newspaper pushed for preservation so potential evidence of infringement isn't lost as users routinely clear their chat histories.
Privacy or trapdoor? OpenAI calls the retention demand "sweeping and unnecessary" and says it abandons long-standing privacy norms. The order sweeps in conversations from millions of users who have nothing to do with the dispute, and critics on both sides see high stakes: evidence for the Times, exposure for everyone else.
TL;DR
- OpenAI is challenging a federal court order requiring it to preserve all user data, including deleted ChatGPT chats, in the New York Times copyright lawsuit
- The New York Times sued OpenAI and Microsoft in December 2023, claiming they illegally used Times content to train AI models like ChatGPT
- OpenAI argues the data preservation order undermines user privacy and conflicts with its policy of permanently deleting chats within 30 days of user deletion
- The lawsuit centers on whether using copyrighted material to train AI models constitutes “fair use” under copyright law
- The Times claims OpenAI’s tools can generate near-verbatim outputs from its articles and bypass its paywall through AI summaries
OpenAI is pushing back against a federal court order that requires the company to preserve all user conversations, including deleted chats, as part of an ongoing copyright lawsuit filed by The New York Times. The AI company says the order violates user privacy rights and goes against its established data deletion policies.
The controversy stems from a May 13 court order directing OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court.” OpenAI COO Brad Lightcap called the move “an overreach by The New York Times” and said the company would continue appealing the decision.
The New York Times filed its lawsuit against OpenAI and Microsoft in December 2023. The newspaper alleges both companies illegally used Times content to train their large language models, including ChatGPT and Bing Chat, without permission or compensation.
The Times argues this practice infringes on its copyrights and threatens the business model that supports original journalism. The newspaper expressed concern that potential evidence of copyright infringement might be lost as users regularly clear their chat histories with AI tools.
OpenAI maintains a policy of permanently deleting user conversations within 30 days after users remove them from their accounts. The company says this practice protects user privacy and follows established data protection norms.
Data Privacy Concerns
The court order directly conflicts with OpenAI’s current privacy commitments to users. The company offers tools that allow users to control their data, including easy opt-outs and permanent removal of deleted ChatGPT conversations and API content.
OpenAI describes the Times’ demand to retain consumer data indefinitely as “sweeping and unnecessary.” The company argues this approach abandons long-standing privacy norms and weakens overall privacy protections for users.
The preservation order applies to all ChatGPT output data that would normally be deleted. This includes conversations from millions of users who may have no connection to the copyright dispute between OpenAI and the Times.
Legal Battle Over Fair Use
The lawsuit centers on a fundamental question about whether using copyrighted material to train generative AI models constitutes “fair use” under copyright law. This legal concept allows limited use of copyrighted works for purposes like criticism, comment, or education.
The Times claims OpenAI’s AI tools sometimes produce near-verbatim outputs from its articles. The newspaper also alleges that AI-generated summaries can help users bypass its paywall system, potentially reducing subscription revenue.
Both parties have positioned themselves as defending important principles. The Times says it is protecting journalism and ensuring media organizations can be compensated for their work.
OpenAI CEO Sam Altman has accused the newspaper of being “on the wrong side of history.” The company maintains that the Times cherry-picked data to support its lawsuit claims.
The case continues as OpenAI appeals the data preservation order while fighting the underlying copyright claims.