Enterprises Embrace Open AI Models Despite Security Backlash From Maintainers
Companies tout the cost and customization benefits of open models at Nvidia's GTC, even as AI companies pledge $12.5M to help overwhelmed open-source maintainers filter AI-generated bug spam.

A widening rift has emerged between enterprises adopting open artificial intelligence models and the open-source community struggling to sustain them, as companies prioritize flexibility and cost savings while maintainers face an avalanche of low-quality, AI-generated security reports.
At Nvidia's annual GTC event this week, executives from Capital One, ServiceNow and CrowdStrike emphasized their reliance on open models for customizability and lower costs, even while acknowledging security challenges. Nvidia CEO Jensen Huang announced the Nemotron Coalition, a collaborative effort with Mistral AI, Perplexity, and other developers to advance open foundation models through shared compute and expertise. Huang framed the initiative as ensuring "the future of AI is shaped with the world and built for the world."
Yet the same week, Google's Open Source Software Vulnerability Reward Program team warned it would require higher-quality proof for certain bug submissions to "filter out low-quality reports" containing hallucinations about vulnerabilities. The Linux Foundation separately disclosed it had secured $12.5 million from Google, Anthropic, AWS, Microsoft, and OpenAI to help maintainers cope with the surge in AI-generated security submissions.
"Grant funding alone is not going to help solve the problem that AI tools are causing today on open-source security teams," Linux kernel developer Greg Kroah-Hartman said. The funding will be managed by Alpha-Omega and the Open Source Security Foundation to provide AI tools that help maintainers triage the volume of automated reports.
Cybersecurity researchers have identified AI infrastructure itself as a vulnerability layer. In one documented case, malicious AI models uploaded to the Hugging Face repository executed hidden code when companies loaded them into their environments, demonstrating how compromised models can infiltrate multiple organizations before detection. Open models from Chinese developers including DeepSeek and Alibaba Cloud's Qwen family have impressed engineers but also raised security and governance concerns.
Mistral announced Tuesday it was introducing Forge, a system enabling enterprises to build frontier-grade AI models grounded in proprietary knowledge. Proprietary developers have historically led in creating frontier models, though open model developers have been closing the gap. True open-source models provide full access to training data and code, while open-weight models share only the numerical parameters that define them.
The tension reflects a broader power dynamic in AI development. Enterprises gain sovereignty and cost advantages from open models they can customize and run internally, reducing their dependence on proprietary vendors. Meanwhile, the volunteer and nonprofit maintainers who sustain critical open-source infrastructure face mounting operational burdens without commensurate resources, a sustainability crisis that AI companies are now paying to address after their own tools created the problem.
Sources
https://www.wsj.com/cio-journal/companies-say-the-risks-of-open-artificial-intelligence-models-are-worth-it-0d3ee664
Focuses on enterprise adoption at Nvidia's GTC and the Nemotron Coalition launch, highlighting customizability benefits despite security challenges
https://www.csoonline.com/article/4148203/stop-using-ai-to-submit-bug-reports-says-google-2.html
Details Google's crackdown on low-quality AI bug reports and the $12.5M in funding to help overwhelmed open-source maintainers
https://www.ien.com/artificial-intelligence/blog/22962990/ai-is-transforming-industry-supply-chains-while-also-creating-major-risks
Emphasizes AI infrastructure vulnerabilities and documented Hugging Face malicious model case demonstrating supply chain risks
