U.S. Department of War Signs Seven Giants Including SpaceX, OpenAI, and Google: AI Enters Classified Networks, Weaponization Concerns Reignite

According to a May 1 post from the official account of the U.S. Department of War Chief Technology Officer (@DoWCTO), the department has signed agreements with seven leading AI model and infrastructure companies, including SpaceX, OpenAI, and Google, to deploy frontier AI capabilities into its classified networks, the latest step in its "AI-first" strategy. The announcement drew high-engagement discussion within 24 hours, with controversy over the weaponization of AI particularly prominent.

Facts: What Exactly Was Signed in This Agreement

  • Fact (Source: @DoWCTO official X post): The agreement involves seven AI model and infrastructure suppliers, explicitly naming SpaceX, OpenAI, and Google.
  • Fact (Source: Same as above): The deployment target is the department's classified networks, positioned as a continuation of the "AI-first" strategy.
  • Fact (Verification status): Google's verification marks this topic as confirmed, with 1 original source URL and 13 API citations.

It should be noted that publicly available information currently does not disclose specific contract amounts, AI model lists, or whether weapon system integration is included. The following analysis is based on disclosed facts, with viewpoints explicitly marked.

Opinion: Innovations and Gaps from a Product Perspective

Innovations:

  • "Model + infrastructure" bundled procurement. This agreement is not a simple API purchase; it combines the model layer (OpenAI and others) with the infrastructure layer (SpaceX's Starlink, Google Cloud, and others). The Department of War evidently intends to run an end-to-end AI loop inside its classified networks, keeping compute, networking, and model inference within its own controllable boundary.
  • Multi-vendor parallel approach. Signing seven vendors at once reflects a deliberate procurement strategy to avoid single points of dependency. It contrasts with the enterprise-market habit of going "all in" on OpenAI or Anthropic, and highlights government customers' acute sensitivity to vendor risk.

Gaps and Risks:

  • Blurry compliance red lines. OpenAI previously removed the "military and warfare" prohibition from its usage policy, and entry into classified networks inevitably raises questions about where the boundaries for civilian models now lie. Winzheng believes this is a public test of trust for all AI vendors: a pass/warn/fail trust rating is an entry threshold, not a bonus point. A single misuse incident would damage the entire industry's social trust, not just one vendor's.
  • Explainability gap. The grounding ability of frontier models in classified decision chains is critical: whether a model can answer strictly from authorized intelligence, without fabrication or unauthorized retrieval, is a question for which no public evaluation data yet exists.
  • Stability as an operational signal. Answer consistency for the same question across sessions is close to a hard requirement in combat-staff scenarios, yet commercial large models still fluctuate under high load.
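The cross-session consistency requirement above can be scored quite simply. The sketch below is illustrative only, not any vendor's actual evaluation: it assumes you have collected repeated answers to the same prompt (here represented by a hard-coded stub list) and measures what fraction agree with the most common normalized answer.

```python
from collections import Counter

def normalize(answer: str) -> str:
    """Collapse whitespace and case so trivial formatting differences don't count as drift."""
    return " ".join(answer.lower().split())

def consistency_rate(answers: list[str]) -> float:
    """Fraction of answers matching the most common normalized answer (1.0 = fully stable)."""
    counts = Counter(normalize(a) for a in answers)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(answers)

# Stub standing in for repeated calls to the same model across sessions.
sampled = [
    "Route A is cleared.",
    "route a is cleared.",   # formatting difference only, still counts as agreement
    "Route B is cleared.",   # genuine drift: a different answer
    "Route A is cleared.",
]
rate = consistency_rate(sampled)  # 3 of 4 agree -> 0.75
```

A production harness would add semantic matching rather than exact string comparison, but even this crude ratio makes the "same question, same answer" property auditable over time.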

Comparison: How Is This Different from Previous Defense AI Projects

Compared to Project Maven (2017, where Google withdrew due to employee protests) and the JEDI/JWCC cloud contracts, the key difference in this agreement is that model vendors enter the contract as the main players rather than supporting roles, and the "seven-vendor parallel" structure is rare.

In the Project Maven era, cloud vendors provided compute power while the government developed algorithms in-house; JWCC was multi-cloud parallel but still infrastructure-focused. This time, large model vendors are collectively entering classified networks as frontier capability providers for the first time. This means AI models have evolved from "tools" to "strategic assets."

Practical Advice for Developers and Enterprises

  • For model-layer developers: Watch for evaluation standards that may spill over from this contract. Government procurement typically pushes the industry toward stricter red-teaming and grounding benchmarks, and investing early in auditable grounding capabilities will be a moat over the next one to two years.
  • For enterprise CIOs: Borrow the "seven-vendor parallel" multi-supplier strategy. In production, availability should not be tied to a single model; a routing layer with multi-model fallback is the more robust architecture.
  • For compliance and legal teams: Re-examine the usage policy terms of the AI services you procure, especially the model vendor's definition of "high-risk scenarios"—this definition is being rapidly rewritten this year.
  • For observers in the Chinese market: This agreement will further increase the revenue share of U.S. AI vendors from government contracts. The training objectives and alignment direction of commercial models may shift, and the strategic value of open-source ecosystems and local alternatives will be repriced.
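The multi-model fallback routing layer recommended above can be sketched in a few lines. Everything here is hypothetical: `route_with_fallback`, `ProviderError`, and the stub backends stand in for whatever client wrappers a real deployment would use.

```python
class ProviderError(Exception):
    """Raised when a single model backend fails, times out, or is rate limited."""

def route_with_fallback(prompt, providers):
    """Try each (name, call) pair in priority order; return (name, answer) on first success."""
    failures = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            failures.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(failures))

# Stub backends standing in for real model APIs.
def primary(prompt):
    raise ProviderError("rate limited")

def secondary(prompt):
    return f"answer to: {prompt}"

name, answer = route_with_fallback(
    "status?", [("primary", primary), ("secondary", secondary)]
)
# name == "secondary": the router silently fell back past the failing primary
```

A real router would also track per-provider latency and error rates to reorder the priority list dynamically, but the core design choice is the same: no single model sits on the critical path.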

Winzheng Comment

From a product evaluation perspective, what truly matters in this agreement is not the number "seven," but that AI models are, for the first time, being procured by a nation's defense system as a strategic-level supply chain. This will create spillover effects in three areas: evaluation standards will become stricter, compliance boundaries will become clearer, and vendor diversification will become the default enterprise paradigm. As for concerns about AI weaponization, Winzheng's position has always been clear: technological neutrality is not a disclaimer, and the stronger the capability, the more constraints must be set in advance. We will continue to track these seven vendors' performance in public evaluations, especially auditable data on grounding and consistency.

The factual content of this article is derived from the official @DoWCTO X post (published on May 1) and Google verification results (confirmed, 1 source URL + 13 API citations); analysis and recommendations are the editorial viewpoint of winzheng.com.