OpenAI’s Robotics Chief Quits Over Surveillance Fears in Explosive Resignation Letter

Published: 2026-03-08 10:41:02

OpenAI's robotics chief raises surveillance concerns in resignation letter

Another high-profile exit rocks the AI giant—this time over the tech's creepiest application.

The Surveillance Elephant in the Room

OpenAI's push into physical robotics just hit a major snag. The division's top executive walked out, penning a blistering resignation that lays bare internal fears about weaponized surveillance. The letter doesn't mince words: advanced robotics, paired with existing AI models, creates unprecedented tracking capabilities. Think cameras that don't just see, but understand and predict—deployed at scale.

Why This Stings for OpenAI

This isn't just about ethics—it's about product roadmaps. The resignation exposes a brutal fault line within the company. One faction charges ahead with commercialization; another warns they're building the ultimate panopticon. It cuts to the core of their 'beneficial AI' branding. When your own robotics chief bails over conscience, your PR narrative starts to crack.

The Inevitable Finance Angle

Meanwhile, venture capitalists keep writing checks—proving once again that in tech, moral quandaries are just a 'future regulatory consideration' to be priced in later. Nothing boosts a valuation like a technology that's both revolutionary and terrifying.

The takeaway? The race for embodied AI just got messy. And the people building it are starting to get scared of what they've created.

U.S. military to use AI for domestic surveillance, Kalinowski claims

I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are…

— Caitlin Kalinowski (@kalinowski007) March 7, 2026

According to Kalinowski, her resignation was prompted by the U.S. Department of Defense’s intention to use AI tools and capabilities to conduct surveillance of U.S. citizens without judicial oversight. The former OpenAI executive wrote on X that AI has a vital role to play in national security.

She explained that the U.S. Department of Defense intends to use AI for surveillance and autonomous weapons, a plan she disagrees with. She said her decision “was about principle, not people,” and that she was proud of what the team at OpenAI had built during her time with the company.

In February, the Pentagon intensified talks with top AI companies about deploying automated models on classified systems. Cryptopolitan reported that the Pentagon was pressing Anthropic and OpenAI to incorporate AI tools into classified military networks.

Emil Michael, the Pentagon’s Chief Technology Officer, said in a White House meeting with tech leaders that the military wants AI models to operate on both classified and unclassified networks without limitations or restrictions.

Negotiations between the U.S. government and Anthropic hit a brick wall after the company’s leaders drew firm lines: their technology would not be used for domestic surveillance operations or autonomous weapon targeting systems. The company defied the Pentagon’s ultimatum to strip AI safeguards in late February.

Anthropic CEO Dario Amodei held his ground, refusing to allow the company’s technology to be used in such military operations. In response, Trump instructed all federal agencies to stop using Anthropic technology in late February.

OpenAI imposed restrictions on military deployment of AI

The Defense Department reached a deal with OpenAI that has since drawn criticism. Sam Altman acknowledged that the deal could look opportunistic, but clarified that the company has imposed restrictions on how its AI tools may be used in military operations.

Kalinowski, however, countered that the announcement was rushed, without the necessary guardrails in place. She added that her exit was driven by governance concerns that are too important to rush.

OpenAI confirmed Kalinowski’s exit in a statement, but maintained that its ties with defense agencies pave the way for the responsible use of AI tools in national security.

In February, OpenAI announced it would deploy a custom version of ChatGPT on the Department of War’s secure enterprise AI platform called GenAI.mil. The company noted that its collaborations with military and defense departments stem from AI’s critical role in protecting people and averting conflict.

The friction between the U.S. government and AI companies over military AI development has also driven more researchers out of those companies. One of Anthropic’s top safeguards researchers quit with the statement, “The world is in peril.”

Another OpenAI researcher also quit their role, saying AI technology can influence human beings in ways developers cannot understand or prevent.

Zoë Hitzig, a former researcher at OpenAI, also left the company on February 11. She resigned on the same day OpenAI announced it had begun testing ads in ChatGPT. She claimed that the AI company was making the same mistake Facebook had.

Hitzig expressed her concerns that ChatGPT’s unique role as a confidant for deeply personal disclosures (medical fears, relationship issues, religious beliefs) makes ad targeting especially risky.
