I decided to share some thoughts about the current state of the market regarding AI. I have become very cautious due to recent financing developments, the projected amount of capital to be raised, and the general valuation levels of many of these companies.
“OpenAI, with its deal with Nvidia and AMD on top of their Stargate datacenter, plans to build a total of 26GW of data centers in the next few years.”
26 GW is roughly the installed capacity of Switzerland, one of the most electrified countries in the world, if not the most. It took us 100 years to get there.
Hard to guess the replacement cost in today’s CHF. An easier data point: the UK’s Hinkley Point C will likely end up at £50bn for 3.2 GW, with a construction time of 13-15 years by EDF, excluding permits and the rest.
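A rough back-of-envelope on that data point, assuming (heroically) that cost per GW scales linearly from Hinkley to 26 GW of nuclear-like build, ignoring technology mix and siting:

```python
# Back-of-envelope: what would 26 GW cost at Hinkley Point C's implied price per GW?
# Assumptions (not from the comment): linear scaling, nuclear-style capital cost.
hinkley_cost_gbp_bn = 50        # ~£50bn estimated total cost
hinkley_capacity_gw = 3.2       # 3.2 GW of capacity

cost_per_gw = hinkley_cost_gbp_bn / hinkley_capacity_gw     # ~£15.6bn per GW
planned_capacity_gw = 26
implied_cost_gbp_bn = cost_per_gw * planned_capacity_gw     # ~£406bn

print(f"~£{cost_per_gw:.1f}bn per GW -> ~£{implied_cost_gbp_bn:.0f}bn for 26 GW")
```

Gas or renewables would come in far cheaper per GW, but even an order-of-magnitude view shows why the generation side is not a rounding error in these plans.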
If they go for combined cycle gas turbines, they must come from Siemens Energy, Mitsubishi Heavy or GE Vernova. All are basically booked out and will pass on huge cost inflation plus a natural-monopoly premium - such turbines are perhaps the most difficult piece of kit humans have ever created.
One hope: I read that GE wants to double its capacity. We will see.
Coal plants would likely be the lowest-cost path of least resistance, except that there is nobody left to build them apart from the Chinese or perhaps a Russian firm. No plant has been built in the US since 2012, and it has been even longer in Europe, which is phasing coal out by law.
They are all dreaming with their timetables, and they drive cost inflation in the process. Their own vanity is their biggest enemy.
PS: ordinary people will resist data centers sooner rather than later if their electricity bills go up.
US power generation is expanding by 17GW/yr and accelerating every month.
https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=epmt_1_01
Adding 100 GW over the next 5 years is quite feasible.
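For scale, a quick sketch of how that build rate compares with the 26 GW plan, assuming the ~17 GW/yr figure holds and setting aside the fact that data centers compete with every other source of new demand:

```python
# How does ~17 GW/yr of new US capacity compare with a 26 GW data-center build-out?
# Assumption: the 17 GW/yr rate simply holds steady over five years.
annual_additions_gw = 17
years = 5
total_new_capacity_gw = annual_additions_gw * years    # 85 GW over five years

planned_dc_gw = 26
share_of_additions = planned_dc_gw / total_new_capacity_gw
print(f"{total_new_capacity_gw} GW added; 26 GW would absorb ~{share_of_additions:.0%} of it")
```

So the build-out is not physically impossible, but it would claim a sizeable slice of all new capacity, which is where the electricity-bill politics mentioned above comes in.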
Great analysis Tx!
thank you!
The GPU collateral financing structure you outlined is fascinating and really highlights the systemic risks building up. If these chips are depreciating on a 1-2 year cycle rather than the 6-year accounting fiction, the whole house of cards could unravel quickly when demand softens. The concentration risk, with just OpenAI and Anthropic representing a third of shipments, is particularly concerning for anyone holding this debt.
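A minimal sketch of why the useful-life assumption matters so much for GPU-backed lending; the figures below are illustrative only and not taken from any actual deal:

```python
# Illustrative only: book value of GPU collateral under different straight-line lives,
# versus a loan amortizing over 5 years. All numbers are hypothetical.
purchase_price = 1_000      # $ per GPU, hypothetical
loan_principal = 700        # hypothetical 70% advance rate
loan_term_years = 5

def book_value(price: float, useful_life_years: int, age_years: int) -> float:
    """Straight-line depreciation, floored at zero."""
    return max(price * (1 - age_years / useful_life_years), 0.0)

for age in range(1, 6):
    loan_balance = loan_principal * max(1 - age / loan_term_years, 0)
    bv_6yr = book_value(purchase_price, 6, age)   # the accounting schedule
    bv_2yr = book_value(purchase_price, 2, age)   # closer to the product cycle
    print(f"year {age}: loan {loan_balance:6.0f} | 6-yr book {bv_6yr:6.0f} | 2-yr book {bv_2yr:6.0f}")
```

On the 6-year schedule the collateral's book value stays above the loan balance throughout; on a 2-year economic life it falls below the balance within the first year and hits zero long before the debt does. Same chips, same loan, different assumption.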
Thanks for sharing your analysis. I think the key issue here is whether those privately held AI labs can continue to get enough funding from the private markets, either through equity or debt. Both markets are said to still be very hot, and with the Fed’s rate-cut cycle restarting, maybe more funds will flow into the primary market? But since the market is private, it is hard to identify or gauge any bubbles there, or even to find leading indicators. Meanwhile, if the AI labs can speed up their monetization, they can generate more cash organically. I heard Sam say on a few recent podcasts that they’ll continue to announce new initiatives in the coming weeks and months.
Could the end result be similar to the tech boom of the 2010s? I read somewhere that because companies overbuilt networking capacity before the dot-com bust, internet bandwidth was cheap afterwards, which helped all the internet companies that grew up during the 2010s.
Yes
Some great points here. I do feel that you might be underestimating OpenAI's/Anthropic's ability to continue pulling in capital at high valuations to support these endeavors. OpenAI is the most in-demand equity asset that I can ever recall from my almost 15 years in the venture and tech space. Even after the $100B commitment from NVDA, I suspect that Sam Altman could pull down capital in $5B or $10B chunks from various entities (e.g. Saudi government) pretty much whenever he feels like it. It's hard for me to imagine scenarios that will interrupt OpenAI's revenue ramp unless models commodify in a profound enough way that the end-user price of them falls to zero.
Nvidia GPUs are like light bulbs that keep burning out and need constant replacement. I'm still surprised they are considered a capex item when they ought to be treated like opex. (Groq's 1-year amortization is essentially that.)
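A toy illustration of what that accounting choice does to the income statement in the purchase year, using hypothetical numbers, straight-line depreciation and no salvage value:

```python
# Hypothetical: $10bn of GPUs bought in year 1, expensed over different useful lives.
# Shows how much of the spend hits reported costs in year 1 under each assumption.
gpu_spend_bn = 10.0

for useful_life_years in (1, 2, 6):
    annual_expense_bn = gpu_spend_bn / useful_life_years
    remaining_asset_bn = gpu_spend_bn - annual_expense_bn
    print(f"{useful_life_years}-year life: ${annual_expense_bn:.1f}bn expensed in year 1, "
          f"${remaining_asset_bn:.1f}bn still carried as an asset")
```

The cash out the door is identical in every case; the accounting life only decides how much of it shows up as a cost this year versus sitting on the balance sheet.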
You nailed it. It's an accounting and debt bubble waiting to pop.
The whole grift is the amortization. Calling a GPU that's obsolete in two years a six-year asset is how they hide the fact that they're burning cash, not building value. It's a lie, plain and simple.
And when the dealer - Nvidia - starts lending money to its own customers so they can keep buying chips, the game is rigged. It's not a real market, it's a company town. This whole thing is a house of cards built on bad math, and it's getting fragile.
I also wonder if you're being a little harsh on the amortization timelines. Amazon, for example, has plenty of useful places to put its Blackwell GPUs after they're no longer state of the art for frontier AI use cases (such as workloads for Prime Video or Twitch).
Does the GPU really burn out at the end of the product cycle (1 yr)? I highly doubt it. It may cost more to generate tokens, due to the variable cost (electricity consumption), but it is still very much a viable asset, financially.
I do think your point about the attraction of debt financing to fund a DC, whereby the chips themselves are the collateral for an SPV lending the money, is very much indicative of a bubble.
But I do think you’ve got the accounting implications very wrong.
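To make the 'still a viable asset' point concrete, here is a toy cost-per-token comparison between an older and a newer GPU; every number below is hypothetical and only the shape of the argument matters:

```python
# Hypothetical: an older GPU is less energy-efficient per token, so its variable
# (electricity) cost is higher, but it can still earn above that marginal cost.
electricity_price_per_kwh = 0.08      # $/kWh, hypothetical industrial rate

gpus = {
    # name: (power draw in kW, tokens per second) -- both made up for illustration
    "older_gen": (0.7, 1_500),
    "newer_gen": (1.0, 6_000),
}

for name, (power_kw, tokens_per_s) in gpus.items():
    tokens_per_kwh = tokens_per_s * 3600 / power_kw
    electricity_cost_per_m_tokens = 1e6 / tokens_per_kwh * electricity_price_per_kwh
    print(f"{name}: ~${electricity_cost_per_m_tokens:.4f} of electricity per 1M tokens")
```

Even with made-up numbers the structure holds: the older chip's variable cost per token is higher but still small, so it keeps earning as long as someone pays above marginal cost; the open question is whether it earns enough to service the debt written against its original price.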
This is an exceptional deep dive into the structural problems building in the AI infrastructure stack. Your point about GPUs being a faster-depreciating asset than the amortization schedules suggest is particularly concerning - the disconnect between CoreWeave's 6-year cycle and the reality of Nvidia's 1-year product cadence is a ticking time bomb for anyone holding this debt. The collateral issue you highlighted is going to be brutal when it unwinds. What struck me most is the circular financing structure - NVDA essentially acting as lender of last resort while also being the primary beneficiary of the spending. When you mentioned that 1/3 of all GPU shipments go to just OpenAI and Anthropic, it really crystallizes the customer concentration risk across the entire ecosystem. Appreciate the thoughtful analysis on why Microsoft is offloading risk to neoclouds rather than expanding their own CapEx further - that's a strong signal about their confidence in demand durability. Thanks for laying this out so clearly.
Nvidia GPU prices should come down in the next few years, with AMD, Broadcom and the hyperscalers producing their own chips. Economies of scale will also help bring down data center costs.
Thanks Rihard, love your content.
I largely agree, but have the following pushbacks:
1. I wouldn’t compare the DC investments OpenAI requires to the FCF of the hyperscalers, but rather to OCF, as that is the pre-capex number (rough illustration in the sketch below)
2. All the hyperscalers can sustain this funding, as a) capex hasn’t even consumed all of their OCF yet, and b) they haven’t even entertained raising material debt or equity to fund it (Oracle will probably be the first to do the latter)
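A minimal sketch of the distinction being drawn in point 1, with hypothetical figures rather than any company's actual numbers:

```python
# Hypothetical hyperscaler: why OCF, not FCF, is the better yardstick for new DC capex.
# FCF is what is left *after* existing capex, so it understates the cash available
# when the question is how much total capex the business could fund.
ocf_bn = 100.0            # operating cash flow, hypothetical
existing_capex_bn = 60.0  # hypothetical
fcf_bn = ocf_bn - existing_capex_bn        # 40.0

proposed_dc_capex_bn = 35.0                # hypothetical incremental DC spend
print(f"vs FCF: {proposed_dc_capex_bn / fcf_bn:.0%} of free cash flow")        # looks alarming
print(f"vs OCF: {proposed_dc_capex_bn / ocf_bn:.0%} of operating cash flow")   # real headroom
```

Measured against OCF the headroom is visible before any new debt or equity even enters the picture, which is the commenter's point b).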
Thanks
++ Good Post. Also, start here : 500+ LLM, AI Agents, RAG, ML System Design Case Studies, 300+ Implemented Projects, Research papers in detail
https://open.substack.com/pub/naina0405/p/most-important-llm-system-design-77e?r=14q3sp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
I see an $18bn debt raise from ORCL based on their filing. Where did you get the $38bn figure from?