OpenAI Hardware Chief Resigns Over Pentagon Deal, Citing Surveillance Concerns
Caitlin Kalinowski's departure after raising alarm over 'lethal autonomy' and domestic surveillance marks a new flashpoint in the debate over AI's role in national security.

OpenAI's hardware leader has resigned following the company's agreement with the Pentagon, publicly warning that artificial intelligence deployed for national security lacks adequate safeguards against surveillance of Americans and autonomous weapons systems.
Caitlin Kalinowski, who led hardware development at OpenAI, announced her departure in a March 7 social media post that acknowledged AI's legitimate defense applications while drawing red lines around specific uses. "AI has an important role in national security," Kalinowski wrote, "but surveillance of Americans without judicial oversight and lethal autonomy without human authorization are…" She characterized both as concerns requiring clearer boundaries.
The resignation transforms what had been a policy debate into a personnel crisis for OpenAI, which now faces intensified scrutiny over the terms and scope of its Pentagon collaboration. Kalinowski's background leading augmented reality hardware at Meta lends symbolic weight to her exit, signaling potential turbulence in OpenAI's ability to retain top engineering talent as it deepens its defense contracting.
Reuters reported the departure alongside a separate investigation at X, the social media platform, which is probing racist and offensive posts generated by xAI's Grok chatbot, according to Sky News. Reuters could not immediately verify that report. The twin controversies underscore mounting pressure on AI companies to demonstrate governance mechanisms that match the scale of their commercial and government ambitions.
(OpenAI has not publicly disclosed the financial terms or technical scope of its Pentagon agreement. The company declined to comment on Kalinowski's departure or the specific contractual guardrails governing military applications of its technology.)
The friction reflects a broader reckoning across the AI industry as companies navigate lucrative defense contracts while managing employee concerns and public accountability. Anthropic recently sued to block Pentagon blacklisting over AI rules, while other firms face internal dissent over the boundaries of acceptable government work. The question of who sets those boundaries—corporate boards, Congress, or courts—remains unresolved as both development timelines and geopolitical pressures accelerate.
Sources
https://www.reuters.com/technology/x-probes-offensive-posts-by-xais-grok-chatbot-sky-news-reports-2026-03-08/
Reported parallel investigation at X into offensive Grok chatbot posts, contextualizing broader AI accountability pressures.
https://glassalmanac.com/ai-has-an-important-role-in-national-security-sparks-employee-backlash-in-2026-heres-why/
Emphasized Kalinowski's Meta background and predicted hiring slowdowns, policy hearings, and intensified product scrutiny.
https://www.reuters.com/video/watch/idRW289111032026RP1/?chan=technology
Covered Anthropic lawsuit against Pentagon blacklisting, illustrating wider industry friction over defense AI rules.
