Every Memory Cycle Ends the Same. Until It Doesn't.
Hey everyone,
For three decades, the memory semiconductor industry has followed a brutal and predictable pattern: prices boom, manufacturers over-invest, supply floods in, prices crash, everyone bleeds red ink, and then the whole thing starts over. It’s been one of the most reliably cyclical businesses in all of technology. The cycle has destroyed shareholder value, bankrupted companies, and taught every investor the same lesson: never trust the words “this time is different” when it comes to DRAM.
And yet, here I am, writing an article arguing exactly that.
Let me be clear: I know the history. I’ve studied every major memory cycle of the last 30 years, and in this article we’ll walk through them and the numbers. Then I am going to make the case for why the AI era may fundamentally break that pattern, not because demand will be infinite (it won’t), but because the nature of what memory serves has changed in a way that most investors haven’t fully internalized.
Memory is no longer just a component inside your gadget. Memory is becoming a raw input for intelligence. And the demand curve for intelligence looks a lot more like the demand curve for energy (think electricity) than the demand curve for smartphones.
Let’s start.
The history of memory economics
For those less familiar with the space, the memory semiconductor market is dominated by three players: Samsung Electronics (South Korea), SK Hynix (South Korea), and Micron Technology (United States). Together, these three companies control approximately 95% of global DRAM production. This is an oligopoly, but not one that has historically behaved like one. Unlike OPEC, these companies can’t (legally) coordinate output. And unlike logic chips, memory is essentially a commodity—a bit is a bit. The differentiation comes from process technology, cost structure, and increasingly, product mix (more on HBM later).
The fundamental problem with memory economics is the mismatch between demand elasticity and supply inelasticity. Building a new DRAM fab costs $15-20 billion and takes 2-3 years. Once built, the economics favor running it at maximum utilization because fixed costs are enormous. So when demand rises, prices spike because supply can’t respond quickly. When manufacturers finally bring new capacity online, they tend to overshoot, because everyone is building at the same time based on the same rosy demand signals. Prices crash, margins collapse. Some companies go bankrupt or get acquired. The survivors cut capex, and the cycle begins anew.
This is the pattern. And it has repeated with remarkable consistency.
Cycle 1: The Windows PC supercycle (1993-1996)
The first modern memory supercycle was driven by the explosion of Windows PCs and graphical operating systems. Average DRAM content per PC jumped from roughly 1-2MB to 4-8MB—a 4x increase per device—while PC unit shipments were growing at double-digit rates.
During 1993 and 1994, DRAM demand outpaced supply despite most fabs running at full utilization. Spot and contract prices for 4Mb and 16Mb DRAM rose sharply, and gross margins for leading suppliers surged well above 50%. Korean memory makers like Samsung and Hyundai (now SK Hynix) posted record profits. Semiconductors accounted for 13.4% of Korea’s total exports. It was hailed as the greatest boom in Korean industrial history.
Then reality hit. Roughly 50 fab construction plans were announced during 1995-1996 alone. Capex as a percentage of semiconductor production exceeded 30%. The inevitable happened: DRAM prices peaked in late 1995 and then collapsed—falling 51% in 1996 and another 65% in 1997. Korea’s Big Three chipmakers suffered from overexpansion, and the resulting shock contributed to the Asian Financial Crisis that pushed Korea into a deep recession. Stock prices of memory companies fell 60-80% from peak to trough.
Cycle duration (peak to trough): ~2 years. Price decline: 51% in year one, another 65% in year two. Stock decline: ~60-80%.
Cycle 2: The cloud and smartphone era (2016-2019)
Fast forward two decades, and the cast of characters had changed, but the script was the same. By 2016, the DRAM market had consolidated from roughly 20 players to just three. This was supposed to introduce discipline. And for a while, it seemed like it did.
The 2016-2018 “supercycle” was driven by a convergence of factors: smartphone storage capacity upgrades, the early cloud buildout, and a supply-side twist where manufacturers were shifting capacity to 3D NAND production, which temporarily constrained conventional DRAM output.
The numbers were spectacular, especially for Micron, the only publicly traded pure-play memory company in the U.S.:
Micron 2016: Revenue of $12.4 billion, gross margin of 20.2%, operating income of just $168 million (1.4% operating margin). The company was barely above breakeven.
Micron 2017: Revenue surged 64% to $20.3 billion. Gross margin expanded to 41.5%. Operating income hit $5.87 billion (28.9% margin).
Micron 2018: Revenue jumped another 50% to $30.4 billion. Gross margin peaked at 58.9%. Operating income reached an astonishing $15.0 billion—a 49.3% operating margin. From barely profitable to printing nearly 50 cents of operating profit on every dollar of revenue in two years.
SK Hynix followed a similar trajectory. At its Q3 2018 peak, SK Hynix posted an operating profit of 6.47 trillion Korean won, which at the time was a record.
DDR4 retail RAM prices doubled over the course of 2017 into early 2018. Industry inventories fell to 3-4 weeks, well below the normal 8-week average.
Micron’s stock peaked at roughly $64 in May 2018. But notice: revenue and margins didn’t peak until Q4 of calendar 2018. The stock topped out approximately two quarters before the fundamental peak. This is a classic pattern in cyclical stocks: the market discounts the turn before it shows up in the numbers.
Then came the crash:
Micron 2019: Revenue fell to $23.4 billion (-23%). Gross margin compressed to 45.7%.
Micron 2020: Revenue dropped further to $21.4 billion. Gross margin fell to 30.6%. Operating income was $3.0 billion, down 80% from the 2018 peak.
By December 2018, Micron’s stock had fallen to approximately $28—a 56% decline from the May high. The stock was pricing in the downturn even as the company was still reporting near-peak earnings.
Cycle duration (peak to trough in fundamentals): ~6-7 quarters. Revenue decline (peak to trough): ~30%. Gross margin decline: from 59% to 27% (at the Q1 FY2020 low). Stock decline (peak to trough): ~56%.
Cycle 3: The COVID cycle (2020-2023)
The pandemic created an unexpected demand surge. PC shipments exploded as the world went remote. Server demand spiked as cloud usage accelerated. 5G phones launched with higher per-device memory content. The upcycle lasted approximately 14 months before the familiar reversal kicked in.
By 2022-2023, the downturn was severe. Bloated inventories from pandemic over-ordering met weakening consumer demand. SK Hynix posted a full-year 2023 net margin of approximately negative 28%. Micron’s revenue bottomed at roughly $15.5 billion in fiscal 2023, with gross margins turning negative, before recovering to around $25 billion in fiscal 2024.
Memory stocks cratered. Micron fell from around $98 in early 2022 to roughly $49 by late 2022—a 50% haircut. SK Hynix fell similarly.
Cycle duration (peak to trough): 6-8 quarters of margin compression. Operating margins: from 30%+ to deeply negative for SK Hynix. Stock decline: ~50%.
The pattern across all three cycles is strikingly consistent: a demand-driven boom lasting 4-7 quarters, followed by an oversupply-driven bust lasting 4-8 quarters, with revenue declines of 25-40%, margin compression from peak levels above 50% to the low 20s or even negative, and stock price declines of 50-60% that lead the fundamental downturn by 1-2 quarters.
The history is clear, but now let me tell you why I think this cycle might be structurally different.
From gadget component to intelligence input
In every previous memory cycle, the demand driver was the same: humans buying devices. PCs in the 1990s. Smartphones in the 2010s. Laptops during COVID. The demand function was ultimately capped by the number of humans and the number of devices each human needs. One person buys one phone. Maybe one laptop. Perhaps a tablet. The DRAM content per device grows, but the number of endpoints is bounded.
This meant that once the initial adoption or upgrade wave passed—once everyone who needed a new PC had bought one, or every smartphone had been upgraded to the latest generation—demand would flatten. Supply, which was ramped during the boom, would overshoot. Prices would crash.
In the AI era, the demand function for memory has fundamentally changed. Memory is no longer predominantly serving a fixed number of “human endpoints.” Memory, especially HBM, is now a critical input for generating intelligence.
Think about what HBM (High Bandwidth Memory) actually does inside an AI accelerator. When you ask ChatGPT a question or run an inference on a large language model, the model’s parameters—billions or trillions of numerical weights—need to be loaded from memory into the GPU’s compute cores. The KV cache, which stores the context of your conversation, grows linearly with context length, with Grouped Query Attention (GQA) consuming roughly 0.06-0.12 MB per token in a 7B parameter model. A model with 70 billion parameters requires roughly 140GB at 16-bit precision, more than a single 80GB GPU’s worth of HBM, just to hold the weights.
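The arithmetic here can be sketched in a few lines of Python. This is a back-of-the-envelope illustration only: the 2-byte weight width (FP16/BF16) and the ~0.1 MB-per-token KV-cache figure are assumptions taken from the numbers above, not vendor specifications.

```python
# Back-of-the-envelope HBM sizing for LLM inference (illustrative only).

def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold model weights (FP16/BF16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(context_tokens: int, mb_per_token: float = 0.1) -> float:
    """KV cache grows linearly with context length (~0.06-0.12 MB/token
    for a 7B model with GQA, per the figures cited above)."""
    return context_tokens * mb_per_token / 1000

weights = weight_memory_gb(70)            # 70B parameters at 16-bit precision
print(f"70B weights: {weights:.0f} GB")   # exceeds a single 80GB GPU

cache = kv_cache_gb(128_000)              # a 128k-token context window
print(f"128k-token KV cache (7B, GQA): {cache:.1f} GB")
```

The point of the sketch is simply that weights alone already overflow one accelerator, and context length adds a second, linearly growing claim on the same HBM pool.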
Here’s the simplified version: More memory = the ability to run larger models, with longer context, serving more users simultaneously. Memory is not a peripheral component in AI—it is the binding constraint. The so-called “memory wall” is the single biggest bottleneck limiting AI inference performance today. GPUs often sit idle, waiting for data to be fetched from memory. More bandwidth, more capacity means more intelligence output per second.
This is where the analogy to energy becomes powerful. Think about oil. When oil prices drop, what happens? Demand for oil increases, because cheaper energy enables more economic activity. The demand for energy is highly price-elastic: lower prices stimulate consumption. There’s always more work that could be done, more goods that could be transported, more heat that could be generated, if only energy were cheaper.
I believe AI inference demand behaves similarly. If memory costs drop and inference becomes cheaper, that doesn’t mean demand for inference drops. It means more applications become economically viable. More AI agents get deployed. More models get served. More context windows get extended. The demand for intelligence, like the demand for energy, is essentially elastic in response to price declines. Cheaper intelligence leads to more consumption of intelligence, not less.
This is the polar opposite of the gadget cycle. When DRAM prices dropped after the 2018 boom, it didn’t cause people to go buy a second smartphone. The number of endpoints was fixed. But when the cost of running an AI inference call drops by 50%, you can bet that the number of inference calls per day will more than compensate. Every enterprise that was waiting on the sidelines because of cost will deploy its AI project. Every startup that couldn’t afford the compute will spin up its service.
Here’s a human analogy I think captures this well. Imagine two people: one is a genius with poor memory, and the other is of average intelligence but has extraordinary memory and recall. In many real-world tasks—medicine, law, engineering, customer service—the person with superior memory will outperform the genius. Why? Because most practical work isn’t about raw reasoning power. It’s about retrieving the right piece of information at the right time. An AI model with more memory (longer context, more parameters accessible, faster retrieval) will outperform a theoretically smarter model that is memory-constrained. Memory is intelligence in many practical applications.
This is not a theoretical argument. The industry data supports it. HBM capacity per GPU has been scaling aggressively: NVIDIA’s A100 had 80GB of HBM2e. The H200 moved to 141GB of HBM3e. The upcoming Blackwell Ultra configurations push toward 288GB. And the Rubin Ultra platform is targeting 288GB-576GB of HBM4E per GPU. The trajectory is exponential, and every generation of GPU is constrained by memory, not compute.
Where we are today
The current memory cycle is already historic in scale.
DRAM prices have surged dramatically. By Q4 2025, DRAM spot prices were nearly triple their level from a year earlier. DDR5 prices jumped 30-50% per quarter through H2 2025. Samsung has raised memory prices by up to 60% since September 2025. DRAM inventories at major suppliers fell to just 3.3 weeks by the end of Q3 2025—matching the 2018 supercycle lows. SK Hynix and Micron had roughly 2 weeks of inventory each.
AI is expected to consume nearly 20% of global DRAM wafer capacity in 2026 when adjusted for HBM’s 4x wafer intensity.
The valuation: The market doesn’t believe in the durability of this cycle
Here’s where it gets really interesting from an investment perspective.
Despite the strongest fundamental setup the memory industry has ever seen—sold-out HBM capacity through 2026, record margins, structural demand from AI, and a three-player oligopoly with pricing discipline—the market is still pricing these stocks as if a classic downturn is imminent.
Micron trades at a forward P/E of about 10x, SK Hynix trades at approximately 5.2x forward P/E, and Samsung trades at a forward P/E of roughly 5x-7x—although this includes the total company, which includes much more than just memory.
The PEG ratio makes the mismatch even clearer. Micron’s PEG is approximately 0.16x, Samsung is at 0.17, and SK Hynix is at 0.10—meaning the market is pricing almost zero growth premium into the stocks.
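Since PEG is just forward P/E divided by the expected EPS growth rate (in percent), we can back out the growth the market implicitly expects from the multiples above. A minimal sketch, using the approximate figures quoted in this article (Samsung’s 6x P/E is my assumed midpoint of the 5x-7x range):

```python
# PEG = forward P/E / expected EPS growth rate (%).
# Rearranged: implied growth = forward P/E / PEG.
# Inputs are the approximate multiples quoted above, not precise data.

def implied_growth_pct(forward_pe: float, peg: float) -> float:
    """EPS growth rate (%) implied by a forward P/E and PEG pair."""
    return forward_pe / peg

for name, pe, peg in [("Micron",   10.0, 0.16),
                      ("Samsung",   6.0, 0.17),   # assumed midpoint of 5x-7x
                      ("SK Hynix",  5.2, 0.10)]:
    print(f"{name}: ~{implied_growth_pct(pe, peg):.0f}% expected EPS growth")
```

In other words, analysts’ growth expectations embedded in these PEGs are several times larger than the earnings multiples themselves: a textbook sign that the market expects the growth to be fleeting.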
But at these valuation levels, the question is not whether these companies will continue to grow; it’s more about how long the current demand signals will last. If these memory demand levels and margins stay here for a few more years, that would be a scenario that markets are not pricing in.
Why? Because the market has been burned by memory cyclicality before. Investors remember that in the 2017-2018 supercycle, Micron stock peaked at ~$64 with a forward P/E of about 4-5x at the top, and then the stock fell 56% even though earnings were still rising. The conditioned response is “memory is peaking, get out before the crash.”
But this framing assumes the old cycle repeats. It assumes that the demand driver (AI infrastructure buildout and inference scaling) behaves like the demand driver in previous cycles (consumer device upgrades). And I believe that assumption could be wrong.
Why the downturn when it comes might be shallower
I’m not arguing that memory prices will never decline. They will. At some point, new fab capacity from current investment plans will come online. At some point, HBM4 yields will improve, and supply will catch up. The 2017-2018 cycle teaches us that supply response is inevitable.
But I believe the depth and duration of the downturn will be structurally different this time (dangerous words, I know):
1. The end market is not bounded by human endpoints. In the PC cycle, once every household had a PC, demand plateaued. In the smartphone cycle, once penetration hit saturation, annual unit growth went to zero. But the number of AI inference calls per day is growing exponentially and is nowhere near saturation. Every enterprise, every consumer app, every autonomous vehicle, every AI agent is an incremental consumer of memory bandwidth.
This view is also shared by many industry experts. Here is a former high-ranking employee from ASML on this topic:
»The current conditions actually have made us move away from cyclicality simply because the ratio of the chips that go into laptops and cell phones and other personal-use devices is getting lower each day as the capacity gets transferred to AI-related infrastructure. We may not be able to predict the condition or state of these memory manufacturers based on cyclicality anymore.«
Source: AlphaSense
2. Memory content per AI unit is growing exponentially, not linearly. DRAM content per PC grew from maybe 4GB to 16GB over a decade—a 4x increase. HBM content per GPU is going from 80GB (A100) to 288GB-576GB (Rubin Ultra) in just a few years—a 4-7x increase. And the number of GPUs being deployed is also growing at 30-40% annually. The compounding effect of more units × more memory per unit is producing demand growth rates the industry has never seen.
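To see how that compounding works, here is a rough sketch. The 35% unit growth and the 80GB-to-288GB-in-five-years trajectory are illustrative assumptions drawn from the figures above, not forecasts:

```python
# Compounding of unit growth x per-GPU memory growth (illustrative).

unit_growth = 0.35                     # assumed midpoint of 30-40% unit growth
per_gpu_start, per_gpu_end = 80, 288   # GB of HBM: A100 vs. Blackwell Ultra
years = 5                              # assumed span of that transition

# Per-GPU memory CAGR implied by 80GB -> 288GB over five years
mem_cagr = (per_gpu_end / per_gpu_start) ** (1 / years) - 1

# Bit demand compounds: more GPUs, each carrying more memory
combined = (1 + unit_growth) * (1 + mem_cagr) - 1

print(f"per-GPU memory CAGR:    {mem_cagr:.0%}")
print(f"combined demand growth: {combined:.0%}/yr")
```

Under these assumptions, HBM bit demand compounds at well over 70% per year, a rate no previous device cycle ever approached.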
3. HBM is structurally supply-constrained. One gigabyte of HBM consumes approximately 4x the wafer capacity of standard DRAM. HBM also requires advanced packaging (CoWoS or its equivalents), which has its own supply bottleneck. You can’t just flip a switch and convert commodity DRAM lines to HBM production. The manufacturing complexity acts as a natural supply governor that didn’t exist in previous cycles.
4. Long-term contracts are dampening volatility. In a major shift from past cycles, memory companies are increasingly locking in multi-year supply agreements with hyperscalers. SK Hynix has finalized its 2026 HBM supply plan with major clients and expects supply to remain tight through 2027. Micron has sold out its 2026 HBM capacity and has pricing agreements already in place. These contracts reduce the spot market’s influence and provide revenue visibility that the memory industry has never had before.
On top of the long-term contracts, the memory providers are much more careful with investing in new capacity this time, as the past cycle scars are a strong reminder. Here is a comment from a current Microsoft employee on what they expect in terms of memory supply coming online:
» I don’t think anyone on the buying side assumes memory suppliers will automatically rush to add unlimited supply just because demand is strong. The history of boom-bust cycles is very real, and suppliers remember that just as well as buyers do.
From my perspective, the expectation isn’t that all suppliers aggressively overbuild, but that they add capacity in much more controlled stages than in past cycles. What is different this time is the nature of demand. A lot of AI-driven demand is tied to long-lived infrastructure programs rather than short consumer cycles, which gives the suppliers more confidence but not enough to blindly overspend.«
Source: AlphaSense
Perhaps the even more telling comment is this one, made by a former high-ranking Micron employee, on the internal cultural scars that the memory cycles have left:
»Micron has always positioned themselves as not the cheapest. Like I said, in the past, yes, when it was under Steve Appleton, Mark Durcan, Mark Adams, they’ve been trying to gain market share by reducing prices, but with the new CEO Sanjay, he is more focused on profitability rather than market share. Market share also is important, but if you were to choose between market share and profitability, he chooses profitability.«
Source: AlphaSense
5. The price elasticity of AI demand works in memory’s favor. If DRAM prices decline 20-30% (as they inevitably will at some point), the cost of running AI inference drops proportionally. This makes AI deployment cheaper, expanding the addressable market, which in turn supports memory demand. The demand floor is higher than in past cycles because cheaper memory creates new demand, rather than simply being absorbed by a fixed number of devices.
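A simple constant-elasticity sketch makes the point concrete. The elasticity value of 1.5 here is a purely illustrative assumption about AI inference demand, not a measured figure:

```python
# Constant-elasticity demand sketch: volume Q scales as price P^(-eps).
# eps > 1 means a price decline actually grows total spend.
# eps = 1.5 is an assumed, illustrative elasticity, not measured data.

def demand_multiplier(price_change: float, eps: float) -> float:
    """Volume multiplier after a relative price change (-0.30 = 30% cut)."""
    return (1 + price_change) ** (-eps)

price_change = -0.30                        # memory/inference price falls 30%
volume = demand_multiplier(price_change, eps=1.5)
spend = (1 + price_change) * volume         # revenue = price x volume

print(f"inference volume: x{volume:.2f}")   # ~1.7x more inference calls
print(f"total spend:      x{spend:.2f}")    # revenue still grows
```

If demand really behaves this way, a 20-30% price decline raises unit volume by more than it cuts price, which is exactly the demand floor the old device-bound cycles lacked.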
At some point, we will see a correction, but one that looks more like a 15-25% revenue decline and margins compressing to the 35-40% range, rather than the historic 30-40% revenue declines and sub-25% margins of previous busts. And crucially, I think the trough will be shorter, because AI inference demand will continue growing even during the cyclical correction, providing a demand floor that didn’t exist in the consumer device era.
The bottom line
The memory industry has spent 30 years teaching investors the same lesson: the cycle always turns, the crash always comes, and “this time is different” are the four most expensive words in investing. I respect that history deeply, and I’ve laid out the data to show you exactly how brutal those turns have been.
But I’m willing to bet against that lesson—partially—because the underlying demand driver has genuinely changed. That is why I also own stakes in SK Hynix and Samsung. Memory was a component in your gadget. Now it’s a substrate for intelligence. And the demand for intelligence—like the demand for energy, for computing, for connectivity—doesn’t follow the same saturation dynamics as consumer electronics.
The real risk for the memory cycle at the current stage is a technical breakthrough that would require orders-of-magnitude less memory and HBM, or a change that would bypass memory altogether. The chances of that happening today are low, but it is something to keep a close eye on all the time.
In the next section of this article, for paid subscribers, I analyze in detail how long I think this memory shortage and cycle will last, the timing of new memory supply coming online, including from Chinese memory providers, and its possible effect on the market. Here is my take:


