Cryptopolitan 2026-03-08 10:41:02

OpenAI's robotics chief raises surveillance concerns in resignation letter

Caitlin Kalinowski, OpenAI’s now-former robotics chief, has resigned after a little over a year with the company. Kalinowski, who had led hardware and robotics engineering since November 2024, announced her resignation on March 7, citing concerns that the U.S. military could use the company’s AI tools for domestic surveillance and for automated targeting in U.S. weapons systems. Her concerns center on a deal reached between OpenAI and the U.S. Department of Defense in February.

U.S. military to use AI for domestic surveillance, Kalinowski claims

“I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are…” — Caitlin Kalinowski (@kalinowski007), March 7, 2026

According to Kalinowski, her resignation was prompted by the U.S. Department of Defense’s intention to use AI tools to conduct surveillance of U.S. citizens without judicial oversight. Writing on X, she acknowledged that AI has a vital role to play in national security, but said she disagreed with the Department of Defense’s plans to use AI for surveillance and autonomous weapons. She said her decision “was about principle, not people” and that she was proud of what the team at OpenAI had built during her time with the company.

In February, the Pentagon intensified talks with top AI companies on deploying automated models on classified systems. Cryptopolitan reported that the Pentagon was pushing talks with Anthropic and OpenAI to incorporate AI tools into classified military networks.
Emil Michael, the Pentagon’s Chief Technology Officer, said in a White House meeting with tech leaders that the military wants AI models to operate on both classified and unclassified networks without restrictions.

Negotiations between the U.S. government and Anthropic hit a brick wall after the company’s leaders drew firm lines: their technology would not be used for domestic surveillance or autonomous weapon targeting. Anthropic defied the Pentagon’s ultimatum to strip AI safeguards in late February, with CEO Dario Amodei refusing to allow the company’s technology to be used in military operations. In response, Trump instructed all federal agencies to stop using Anthropic technology in late February.

OpenAI imposed restrictions on military deployment of AI

The Defense Department’s deal with OpenAI has since drawn criticism. Sam Altman acknowledged that the deal looked opportunistic and clarified that the company has imposed restrictions on how its AI tools will be used in military operations. Kalinowski, however, claims the announcement was rushed, without the necessary guardrails in place. She added that her exit was based on governance concerns, which she said are too important to rush.

OpenAI confirmed Kalinowski’s exit in a statement, but maintained that its ties with defense departments pave the way for the responsible use of AI in national security. In February, OpenAI announced it would deploy a custom version of ChatGPT on the Department of War’s secure enterprise AI platform, GenAI.mil. The company said its collaborations with military and defense departments stem from AI’s critical role in protecting people and averting conflict.

The friction between the U.S. government and AI companies over military AI has also led more researchers to exit AI companies.
One of Anthropic’s top safeguards researchers quit with the statement, “The world is in peril.” Another OpenAI researcher also quit, saying AI technology has a way of controlling human beings that developers cannot understand or prevent.

Zoë Hitzig, a former OpenAI researcher, left the company on February 11, the same day OpenAI announced it had begun testing ads in ChatGPT. She claimed the company was repeating Facebook’s mistake, arguing that ChatGPT’s unique role as a confidant for deeply personal disclosures (medical fears, relationship issues, religious beliefs) makes ad targeting especially risky.
