On a certain day in 2024 (Beijing time), xAI founder Elon Musk posted a series of tweets on X (formerly Twitter) attacking OpenAI. The posts quickly became a trending topic with over 800,000 reposts, sparking heated discussion across the global AI community. Musk charged that OpenAI had transformed from its original non-profit organization into a "greedy commercial entity," citing court documents that allege it violated its founding promises. More strikingly, he predicted that OpenAI's upcoming GPT-5 model would lag behind his own Grok model. The statement not only reignited the personal feud between Musk and OpenAI CEO Sam Altman but also touched on core controversies in AI development: the open-source versus closed-source debate, and the boundaries of AI safety and ethics.
Background: The Feud Between Musk and OpenAI
The story begins in 2015, when Musk co-founded OpenAI with Sam Altman and other tech leaders, aiming to develop safe artificial general intelligence (AGI) as a non-profit that would benefit humanity rather than pursue commercial gain. In 2018, however, Musk left the board, dissatisfied with OpenAI's direction. OpenAI then established a capped-profit subsidiary in 2019 and entered a deep partnership with Microsoft that brought billions of dollars in investment. This series of transformations prompted Musk to voice his dissatisfaction publicly.
In early 2024, Musk formally sued OpenAI, accusing it of abandoning its founding agreement by pursuing maximum profit rather than the public interest; the case is still before the courts. His founding of xAI and launch of the Grok model continue his AI ambitions: Grok emphasizes open-source transparency, a stark contrast to OpenAI's closed-source strategy. The latest posts mark the newest flashpoint in this feud.
Musk's Core Accusations and Court Documents
In his latest posts, Musk wrote: "OpenAI has gone from non-profit to maximum profit for Microsoft shareholders, completely violating its founding charter." He cited court documents, emphasizing that OpenAI initially promised all of its technology would be open-sourced and shared, whereas the GPT-series models are now closed-source and accessible only through APIs.
"OpenAI's greed has made them fall behind. GPT-5 will be behind Grok." — Elon Musk, X post
Musk further argued that OpenAI's commercialization has come at the expense of AI safety: closed-source models are difficult for the public to scrutinize and may amplify biases and risks. He contrasted this with xAI's open-source release of Grok-1, claiming the latter better serves the AGI vision of "maximizing truth-seeking." The accusation quickly triggered a chain reaction, turning the comment section into a battlefield: supporters praised Musk for "defending AI's original mission," while opponents dismissed the attack as sour grapes.
Various Perspectives: X Users Split into Two Camps, Industry Insiders Speak Out
On X, user opinion was polarized. One camp supported Musk, arguing that OpenAI's closed-source model threatens AI democratization; the other countered that massive funding is necessary to advance frontier AI, and that open-sourcing could create proliferation risks.
OpenAI has not responded officially, but Sam Altman previously stated on X: "We are committed to safe AGI, not pursuing short-term profits." Industry figures also weighed in. Meta's Chief AI Scientist Yann LeCun posted in support of open source: "Closed-source AI is like a black box, difficult to verify for safety. Musk's concerns are valid." In contrast, Anthropic co-founder and CEO Dario Amodei emphasized: "Balancing innovation and safety requires massive investment; pure open source is unrealistic."
Additionally, Google DeepMind CEO Demis Hassabis said in an interview: "The core of the AI ethics debate is governance frameworks, not a binary opposition between open and closed source. Musk's lawsuit may prompt industry reflection." These viewpoints highlight divisions within the AI community.
Debate Focus: Open Source vs. Closed Source and AI Safety Controversy
The core of this incident is the open-source versus closed-source debate. Open-source advocates argue that publicly releasing model weights, as xAI did with Grok-1, lets developers worldwide audit the models, reducing "black box" risks and democratizing innovation. Musk has repeatedly emphasized that open source is key to keeping AI from going out of control.
The closed-source camp worries that open models could be maliciously exploited, for example by bad actors fine-tuning them for weapons development. OpenAI's "progressive disclosure" strategy of gradually releasing weaker models while protecting the strongest is seen as a pragmatic compromise, but critics call it "pseudo-open-source" and a betrayal of the original mission.
AI safety is another flashpoint. Musk warned that OpenAI's rapid iteration neglects alignment, potentially creating "existential risks." At the 2023 AI Safety Summit, experts from many countries called for international regulation; this latest debate amplifies those calls.
Potential Impact: Reshaping the AI Industry Landscape
In the short term, the incident will intensify the legal battle between Musk and OpenAI and could affect the court's ruling. If Musk prevails, OpenAI may be forced to open-source some of its technology, reshaping the competitive landscape, while xAI's Grok leverages the momentum to attract talent and investment.
Over the long term, it advances the AI-governance discussion. The EU's AI Act already treats high-risk AI as a regulatory priority, and the U.S. is weighing similar legislation. The open-versus-closed debate may catalyze industry standards, nudging the field from an "arms race" toward "collaborative safety." Meanwhile, surging public attention to AI ethics will influence investor confidence and policy direction.
Reported data suggests that after the incident, xAI's website traffic rose about 30%, while Microsoft's stock (investors' indirect exposure to OpenAI, which is not publicly traded) fluctuated slightly, underscoring the pull of Musk's personal brand.
Conclusion: AI's Crossroads
Musk's attack is not merely a personal vendetta; it is a mirror held up to AI's development paths. Open-source transparency or closed-source efficiency? Safety first or innovation first? These questions have no easy answers, but the debate itself is progress. As the performance showdown between GPT-5 and Grok approaches, the market and users will deliver the verdict. The AI industry urgently needs consensus to ensure the technology benefits rather than harms humanity.
© 2026 Winzheng.com 赢政天下 | Reproduction must credit the source and include a link to the original article