AI Firms Race to Automate Research as Insiders Flag Accelerating Control Risks
Major labs now deploy AI to write code and conduct experiments, with Anthropic reporting 90% automation. Former leaders warn governance lags behind capability gains.

The world's leading artificial intelligence companies are rapidly automating their own research operations, deploying AI systems to write code, design experiments, and accelerate development cycles—even as former insiders from those same organizations warn that existing safety frameworks cannot keep pace with the technology's growing autonomy.
Anthropic has disclosed that its Claude system now authors up to 90 percent of the company's code, while OpenAI has signaled plans to deploy an AI "intern" capable of conducting research tasks within six months. DeepMind has published research detailing the automation of core workflows, marking an industry-wide shift toward self-improving systems that could compress development timelines and widen the gap between capability and oversight.
Former leaders from Microsoft, Google, OpenAI, DeepMind, and the White House have separately described a future in which AI systems become harder to control, with risks spanning deepened inequality, cybercrime, mass job displacement, and dangerous concentrations of corporate power. Their warnings, delivered in interviews published in early April, emphasize the urgency of establishing governance structures and advancing safety research before autonomous systems outstrip human oversight.
The acceleration has drawn public protest, with demonstrators confronting firms over the pace of development and the adequacy of regulatory safeguards. Companies defend the push as necessary to maintain competitive position and unlock economic benefits, but critics argue that automating AI research creates a feedback loop that could destabilize labor markets and concentrate technological power in a handful of corporations.
(The Atlantic and Business Insider published separate reporting on April 2 and 3, 2026, based on company statements, published research, and interviews with former officials. Neither outlet disclosed financial relationships with the firms covered.)
The race to automate research reflects intensifying rivalry among OpenAI, Anthropic, Google DeepMind, and Microsoft-backed labs, each seeking to claim leadership in so-called artificial general intelligence. OpenAI's plan to deploy an AI research intern follows Anthropic's disclosure of near-total code automation, while DeepMind's publication of workflow research signals confidence in its technical lead. The competition has historically driven rapid capability gains but has also raised concerns about corner-cutting on safety testing and transparency, tensions that former insiders now warn have reached a critical threshold.
Sources
https://letsdatascience.com/news/ai-industry-pursues-self-improving-research-systems-32187f56
Focuses on industry-wide automation push, company claims of 90% code generation, and six-month intern timeline from OpenAI
https://letsdatascience.com/news/former-ai-leaders-warn-about-systemic-risks-4cb83dfd
Emphasizes warnings from former Microsoft, Google, OpenAI, DeepMind, and White House leaders on autonomy and control risks
https://letsdatascience.com/news/former-founder-brings-ai-to-homestead-5d088649
Profiles individual adoption of AI tools for home automation, illustrating broader accessibility and edge deployment trends
