06/17 2025
Source: SourceByte
On June 13, the global AI development community was captivated by an unassuming yet groundbreaking collaboration. At the prestigious Advancing AI 2025 conference, AMD and OpenAI, as newly minted allies, unveiled a glimpse of their strategic partnership.
AMD introduced its latest Instinct MI400 and MI350 series AI chips, with CEO Lisa Su directly challenging NVIDIA and claiming that AMD's new offerings surpass NVIDIA's flagship B200 in performance. She highlighted that in large language model tests conducted by firms like Meta, AMD chips outperformed NVIDIA products on multiple metrics.
More significantly, OpenAI CEO Sam Altman personally endorsed AMD's new chips, stating that OpenAI will integrate AMD's AI chips in the future. This collaboration signals more than a business deal; it marks a profound shift towards a "chip-model-ecosystem" trinity, altering the competitive landscape of AI infrastructure.
However, it would be premature to assume AMD can disrupt NVIDIA's dominance in AI chips on the strength of one technological leap and one strategic endorsement. While the OpenAI-AMD partnership may dent NVIDIA in certain market segments in the short term, dismantling NVIDIA's established commercial barriers and industrial foundations will take time.
01
A Decisive Play: The Closed-Loop Ecosystem
The ambition of AMD and OpenAI lies in creating a distinct "chip-model-ecosystem" closed loop. OpenAI, as a leading AI model R&D institution, demands immense computing power, and its iterative feedback on chip performance can directly inform AMD's chip design and optimization. In theory, this tight collaboration can form a spiraling, co-evolving ecosystem.
Illustration of chip manufacturing | Made by SourceByte
The competition among large models has intensified, involving not just established rivals like Gemini and Claude but also new entrants like DeepSeek. By bypassing NVIDIA and partnering deeply with AMD, OpenAI gains chip supply on more negotiable terms, reducing its computing costs and obtaining optimized chips faster, thereby improving its models' training efficiency and inference performance.
For AMD, significant orders from heavyweights like OpenAI not only boost its performance but also serve as a strong endorsement for its Instinct series chips, enhancing AMD's brand influence and market confidence in high-performance computing and AI.
An alliance is forming to challenge NVIDIA's CUDA hegemony, comprising tech giants reliant on NVIDIA chips (OpenAI, Meta, Microsoft, Google, Amazon) and direct competitors (Intel, AMD, Qualcomm).
Citi analysts predict that by 2030, NVIDIA's share in the generative AI chip market will remain high at around 63%. This underscores NVIDIA's continued leadership in the AI chip market, with competition and advancements around the CUDA ecosystem remaining industry focal points.
The launch of the Instinct MI400 and MI350 series chips marks not just a technological milestone but also a significant move by AMD in the AI era's business model. This model of deep collaboration between top AI model vendors and chip vendors may inspire others to follow suit, accelerating their own moves into the chip space.
In the long run, this closed-loop ecosystem built around specific needs will drive AI chips toward greater customization and verticalization. Future AI chips may no longer be general-purpose but instead deeply optimized for specific algorithms, models, and even application scenarios, striking an optimal balance among power consumption, performance, and cost.
However, a robust ecosystem is not built overnight. Whether the OpenAI-AMD collaboration can form a sufficiently sticky and defensible closed loop will take time to prove.
02
Whose Moat is Deeper?
To assess if the AMD-OpenAI collaboration can truly threaten NVIDIA, one must compare their different business models and core competitiveness in the AI chip field.
In its rivalry with NVIDIA, AMD often plays the role of a cost-effective competitor. Leveraging its CPU and GPU achievements, AMD provides products that compete with market leaders in performance, sometimes surpassing them, while offering cost advantages to attract price-sensitive customers or those seeking alternatives.
While NVIDIA still dominates, AMD is catching up fast. In Q1 2025, NVIDIA reported revenue of $44 billion, with a 90%+ market share in the independent graphics card sector. AMD's revenue stood at $7.44 billion, a 36% year-on-year increase, exceeding market expectations ($7.1 billion), indicating significant momentum.
The deep collaboration with OpenAI is crucial for AMD to break into and expand its market share in high-performance computing and AI by leveraging external forces. The Instinct MI400/MI350 series, showcased at the conference, demonstrate the potential to compete with NVIDIA's B200 in hardware specifications, thanks to TSMC's advanced process and AMD's design optimizations.
In other words, AMD aims to penetrate the high-end market and create a demonstration effect by partnering with top customers like OpenAI.
Currently, multiple companies such as Oracle, Microsoft, Meta, and xAI are collaborating with AMD to use its AI chips. Oracle will be the first to adopt solutions powered by Instinct MI355X in its cloud infrastructure. Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure, stated that the collaboration significantly enhances their services' scalability and reliability, with plans to deepen cooperation in the future.
However, NVIDIA's moat extends beyond its GPUs' hardware performance. NVIDIA's true strength lies in its CUDA unified computing architecture, built and continuously strengthened over years, and the robust software ecosystem surrounding it.
CUDA offers AI developers rich library functions, toolsets, and development platforms, simplifying parallel computing and AI model development, training, and deployment. Millions of developers worldwide innovate using CUDA, and mainstream deep learning frameworks (like PyTorch and TensorFlow) provide native support.
Vamsi Boppana, Senior Vice President of AMD's AI division, once admitted, "For developers writing kernels, they are accustomed to using CUDA, having accumulated years of CUDA code. We are the only alternative that can help them migrate smoothly, as we provide HIPIFY tools to translate their code to HIP C++."
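Much of what the HIPIFY tools do is mechanical: most CUDA runtime calls have near one-to-one HIP equivalents, so a large part of a port is renaming. The real tools (hipify-perl, hipify-clang) handle far more, but a minimal sketch of the idea, with a deliberately tiny, illustrative mapping table, might look like this:

```python
# Minimal sketch of HIPIFY-style source translation: map CUDA runtime
# identifiers to their HIP equivalents by name. The table below covers
# only a handful of common calls, purely for illustration.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(cuda_source: str) -> str:
    """Replace CUDA identifiers with HIP ones, longest names first so
    that cudaMemcpyHostToDevice is not clobbered by cudaMemcpy."""
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        cuda_source = cuda_source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return cuda_source

print(hipify("cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);"))
# hipMemcpy(d_buf, h_buf, n, hipMemcpyHostToDevice);
```

Kernel launch syntax and most device code carry over largely unchanged, which is why AMD pitches the migration as smooth; the harder cases involve code tied to NVIDIA-specific libraries or to NVIDIA's 32-thread warp size versus AMD's wider wavefronts.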
This strong software compatibility and developer loyalty form an unshakable ecological barrier for NVIDIA. Once customers choose NVIDIA's hardware and invest in CUDA-based development, the cost and complexity of migrating to other platforms soar.
While AMD is narrowing the hardware gap with NVIDIA, and even surpassing it on certain individual benchmarks, it still lags in the software ecosystem, particularly in its counterpart to the underlying parallel-computing platform CUDA and in support from upper-level application frameworks.
AMD's ROCm, its counterpart to CUDA, has made progress but still lags significantly behind in maturity, ease of use, compatibility, and developer-community activity.
In essence, NVIDIA provides not just chips but an end-to-end solution integrating hardware and software, coupled with a vast user base and first-mover advantage. This ecological advantage cannot be easily dismantled through a single collaboration or a few high-performance chips.
03
A New Chess Game
With OpenAI as a heavyweight ally, AMD's strategy will undoubtedly become more aggressive and focused.
Screenshot from OpenAI's official website
On one hand, AMD will collaborate closely with top AI model vendors like OpenAI, using their feedback to drive Instinct chips' customization and optimization, ensuring they meet the most advanced AI training and inference needs.
On the other hand, AMD must significantly strengthen its software ecosystem. Powerful hardware alone is insufficient to challenge NVIDIA. AMD needs to invest more in enhancing the ROCm platform, improving its performance and ease of use, attracting more developers, and providing superior technical support and services.
The collaboration with OpenAI provides AMD with an excellent platform to refine its software stack and attract more potential users and developers to ROCm through OpenAI's influence.
AMD's rise and alliance with OpenAI bring immense pressure to NVIDIA. NVIDIA cannot remain passive; it must accelerate technological innovation, especially in next-generation chips' performance and energy efficiency.
To counter AMD's potential pricing strategies, NVIDIA may focus more on cost control or adopt flexible sales strategies to stay competitive. More importantly, NVIDIA will emphasize deeper collaboration with downstream customers, offering customized solutions to strengthen customer loyalty. By partnering with large cloud computing platforms, it can further consolidate its position among cloud service providers and expand the CUDA ecosystem's influence.
NVIDIA may quickly compensate for shortcomings in specific technologies or markets through investments or acquisitions, such as in specialized accelerator technologies or emerging AI fields like edge computing. In 2024, NVIDIA invested $1 billion in 50 rounds of startup financing and several corporate transactions. In 2023, the company participated in 39 rounds of corporate financing, spending $872 million.
Even so, facing a relentless competitor like AMD, endorsed by top customers, NVIDIA cannot afford complacency. In this new chess game of AI computing power, the leader must run faster to maintain its advantage. Any hesitation or conservatism may leave an opening for the pursuer.
Some images are sourced from the internet. Please inform us if there is any infringement for removal.