OpenAI and AI Chip Plans

Hey Everyone,

I'll try and be brief and summarize a few important things here:

If Nvidia ends up facing antitrust scrutiny from the DOJ, the EU, and France, there might be wiggle room for another contender to appear. I'm not exactly feeling the vibes of Intel or AMD these days. My best guess is SoftBank-backed Arm, but what about OpenAI itself?

As OpenAI works to raise more funding, one angle it is pushing is its AI chip project. We know that earlier in the year Abu Dhabi's state-backed investment firm, MGX, was in early-stage discussions to invest in OpenAI's chip venture. The UAE and even Saudi Arabia are likely to be major investors in this project.

Microsoft's capital expenditures grew 75% year-over-year (to $19 billion) in anticipation of AI returns that have yet to materialize, and Microsoft, along with Apple and Nvidia, will participate in a huge stop-gap new round for OpenAI, likely in the $5 to $10 billion range. While this will dilute employee equity considerably, OpenAI has 200 million weekly active users (WAUs) on ChatGPT, so it will have more time to build out products.

Making its own AI chips will be a major part of all of that and of its path to profitability. OpenAI's first custom-designed silicon allegedly will be manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), the same outfit churning out processors for Nvidia, Apple, AMD, Intel, and others.

The process in question is still in development and is expected to be a significant technological advance, featuring gate-all-around (GAAFET) nanosheet transistors and backside power delivery. TSMC has yet to commence mass production of its A16 Angstrom-era process, but whispers about its superiority are already doing the rounds, which is no surprise given that Apple has reportedly placed orders with its foundry partner in advance.

TSMC and Taiwan still at the center of it all.

What's Special about A16?

A16 is a 16 Angstrom, or 1.6-nanometer, manufacturing process. "Compared with N2P," TSMC's 2nm-class process node, "A16 offers 8-10 percent speed improvement at the same Vdd [working voltage], 15-20 percent power reduction at the same speed, and 1.07-1.10X chip density," according to the foundry giant.
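To make those percentages concrete, here is a minimal back-of-the-envelope sketch in Python. The baseline clock, power, and density figures below are invented purely for illustration; only the percentage ranges come from TSMC's quoted comparison with N2P.

```python
# Hypothetical illustration of TSMC's stated A16-vs-N2P gains.
# Baseline numbers are assumptions for illustration only; the
# percentage ranges are from TSMC's published comparison quoted above.

baseline_clock_ghz = 3.0   # assumed N2P clock at a given Vdd (hypothetical)
baseline_power_w = 10.0    # assumed N2P power at a given speed (hypothetical)

# A16 claims, per TSMC: 8-10% faster at the same Vdd,
# 15-20% lower power at the same speed, 1.07-1.10x chip density.
speed_gain = (1.08, 1.10)
power_remaining = (0.80, 0.85)
density_gain = (1.07, 1.10)

print(f"A16 clock at same Vdd:   {baseline_clock_ghz*speed_gain[0]:.2f}-"
      f"{baseline_clock_ghz*speed_gain[1]:.2f} GHz")
print(f"A16 power at same speed: {baseline_power_w*power_remaining[0]:.1f}-"
      f"{baseline_power_w*power_remaining[1]:.1f} W")
print(f"A16 relative density:    {density_gain[0]:.2f}-{density_gain[1]:.2f}x")
```

On an assumed 3.0 GHz, 10 W baseline, that works out to roughly 3.24 to 3.30 GHz at the same voltage, or 8.0 to 8.5 W at the same speed, which is a meaningful but incremental node-to-node step.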

The very high cost of Nvidia's AI servers means OpenAI needs to find an alternative. 75% of OpenAI's profits (if and when it becomes profitable) will go to Microsoft until its $13 billion investment is recouped. So it's imperative that OpenAI gets better at cost management after burning through so much cash in 2024.
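As a rough illustration of why that arrangement makes cost management so pressing, here is a hedged back-of-the-envelope sketch: the $13 billion figure and the 75% share are the reported terms, while the annual-profit values are entirely hypothetical, since OpenAI is not yet profitable.

```python
# Back-of-the-envelope sketch of the reported profit-share arrangement.
# The $13B and 75% figures are from the reporting above; the annual
# profit values are hypothetical assumptions for illustration only.

microsoft_stake = 13_000_000_000   # reported Microsoft investment, USD
profit_share = 0.75                # reported share of profits going to Microsoft

for annual_profit in (1e9, 3e9, 5e9):  # hypothetical future annual profits
    years_to_recoup = microsoft_stake / (annual_profit * profit_share)
    print(f"At ${annual_profit/1e9:.0f}B/yr profit: ~{years_to_recoup:.1f} years to recoup")
```

Even at a hypothetical $5 billion in annual profit, recouping $13 billion at a 75% share takes several years, which is exactly why cheaper custom silicon matters to the bottom line.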

UDN suggests that OpenAI had originally planned to use TSMC's relatively low-cost N5 process node to manufacture its AI chip, but that's apparently been dropped in favour of a node that's still in development: A16 will be the successor to N2, which itself isn't being used to mass-produce chips yet.

  • The company is reportedly collaborating with Broadcom and Marvell to develop these chips, potentially using TSMC's advanced process nodes.
  • Arm, in which SoftBank has a 90% stake, will set up an AI chip unit to build a prototype by spring 2025, according to Nikkei Asia.

Arm and OpenAI are therefore the two new AI chip players that make the most sense to me in 2025, or more likely 2026, to challenge Nvidia. Intel is on the decline, and AMD is half-decent but not world-class or on Nvidia's level. All of this depends on Nvidia's lingering antitrust issues with the DOJ, France, and the EU, and what those might mean for the AI chip market, which Nvidia dominates and is likely to dominate for some time.

This is a bit on the semiconductor side of things, but A16, a 1.6nm node, will eventually be the successor to the chip manufacturer's N2 node. Both are still in development: N2 is expected in 2025, while A16 is not slated to be available until the second half of 2026.

TSMC, and not anything OpenAI itself does, is the central player in all of this, which in turn depends on whether and when China invades Taiwan. So there are many factors at play, including geopolitics and the antitrust probes and cases.

It was first reported in July 2024 that OpenAI had been in talks with chip designers to discuss the possibility of developing a new AI server chip. However, OpenAI's ambitions to take on the AI chip industry go back further and blew up publicly in February 2024. Some of the rumors are also a bit wacky:

Industry insiders report that OpenAI was in active talks with TSMC about developing a dedicated foundry for its custom chip, but those plans were canceled.

Even with the UAE and Saudi Arabia, OpenAI cannot fund trillions to take this on, as its CEO publicly stated; he meant over the lifetime of the endeavor. Altman had said AI chip limitations hinder OpenAI's growth, and since this project would increase chip-building capacity globally, he has been in talks with investors, including the United Arab Emirates government, per the WSJ. Fast forward a few months, and OpenAI's PR and comms team finally has him a bit muzzled, thank goodness. Altman had said he could need to raise between $5 trillion and $7 trillion for the endeavor, the WSJ reported, citing one source.

But it sounds like this will really happen, maybe not on the scale of Arm's push into the space. Big Tech making its own chips is nothing new, and you could argue that OpenAI is certainly an AI hyperscaler now. We don't know yet whether it's the WeWork or the Uber of AI.

Love him or hate him, the company's CEO Sam Altman has long been pushing for OpenAI to develop its own AI chips. OpenAI could leverage Broadcom's expertise in semiconductor technology to support its ambitions in custom AI hardware. Obviously, it would still be mostly dependent on TSMC for manufacturing.

The report indicates that for the A16, both Apple and OpenAI are among the first customers to reserve capacity.