Meta Prepares Hybrid AI Strategy Under Wang as Open-Source Commitment Narrows
Meta will selectively open-source new models developed under Alexandr Wang, marking a shift from its Llama playbook as rivals coordinate against Chinese distillation threats.

Meta is preparing to release its first AI models developed under Alexandr Wang with plans to eventually offer versions via open-source licenses, but the company will keep its most powerful systems proprietary, according to sources familiar with the strategy. The approach represents a departure from Meta's earlier blanket commitment to open-source frontier models and signals a more cautious posture as the industry grapples with model extraction threats from China.
Before openly releasing versions of the new models, Meta wants to hold back certain components as proprietary and confirm that the released versions don't introduce new safety risks, sources told Axios. Wang has indicated that some of the largest new models will remain closed entirely, a shift toward what insiders describe as a hybrid strategy that mirrors a broader industry pullback on openness.
The move comes as U.S. AI leaders OpenAI, Anthropic, and Google have begun coordinating through the Frontier Model Forum to detect adversarial distillation attempts that violate their terms of service, sharing information to counter Chinese competitors extracting results from cutting-edge models. The firms remain constrained by uncertainty about what can be shared under existing antitrust guidance, according to people familiar with the matter.
Meta's new models are not expected to be competitive across the board with coming releases from OpenAI and Anthropic, but the company believes it will have areas of strength that appeal to consumers, sources said. Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook, and Instagram—free services with global scale that competitors can't easily match.
Wang joined Meta last year as part of a $15 billion deal with Scale AI, where he was CEO. The New York Times reported in March that Meta had delayed releasing its foundational AI model, code-named "Avocado," after it did not perform as well as offerings from Google and others on coding, reasoning, and writing.
Meta's earlier Llama series established the company as the largest U.S. player willing to let others modify its frontier models, a strategy that won developer mindshare but drew growing speculation the company might retreat altogether. The shift under Wang appears designed to balance ecosystem influence with competitive protection, even as smaller players like Arcee continue to champion fully open approaches. Arcee, a 26-person startup, released its Trinity Large Thinking model under an Apache 2.0 license on April 7, offering on-premise sovereignty options that appeal to enterprises wary of dependence on closed-source giants.
The coordination among U.S. labs reflects heightened concern over distillation following DeepSeek's January 2025 release of its R1 reasoning model, which rattled global markets. OpenAI warned U.S. lawmakers in February that DeepSeek had continued to use increasingly sophisticated tactics to extract results from U.S. models despite efforts to prevent misuse, claiming the Chinese startup was relying on distillation to develop a new version of its chatbot. Google published a blog post identifying an increase in model extraction attempts, though the three U.S. labs have not yet provided evidence quantifying how much of China's model innovation relies on distillation.
Meanwhile, Anthropic launched Project Glasswing, committing up to $100 million in usage credits and $4 million in direct donations to help open-source maintainers use AI to identify and fix vulnerabilities at scale. The company stated it does not plan to make its Mythos Preview model available to the general public, citing concerns about potential misuse. "Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software," said Jim Zemlin, CEO of the Linux Foundation.
Sources
https://www.axios.com/2026/04/06/meta-open-source-ai-models
Exclusive scoop on Meta's hybrid strategy under Wang, with proprietary models alongside selective open releases
https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china
Reveals coordination among U.S. AI labs through Frontier Model Forum to counter Chinese distillation threats
https://www.latimes.com/business/story/2026-04-07/china-is-copying-u-s-ai-models-american-companies-say-it-is-costing-them-billions-of-dollars
Details OpenAI warnings to Congress about DeepSeek's sophisticated extraction tactics and ongoing distillation concerns
https://www.mediapost.com/publications/article/414114/meta-hybrid-superintelligence-could-shift-to-mostl.html
Contextualizes Wang's background and Meta's delayed Avocado model performance issues versus Google competitors
