The Market Is Pricing Meta Like It's the AI Loser. Big Mistake.
Hi everyone,
In this article, I want to walk through why I think the market is making one of its bigger mistakes of this cycle with Meta. The stock is down roughly 24% from its high of $796.25, trading around $602, with a forward P/E of 19x — well below its 10-year average of around 26-27x and below the S&P 500 multiple. The dominant narrative is that Meta is “spending too much” on AI data centers without a clear path to monetization. Every quarter, the market punishes the stock harder on CapEx prints — the post-Q1 2026 reaction was a 10% drawdown on a 57% EPS beat, the worst price reaction Meta has had in its last six earnings reports, despite delivering the largest earnings surprise in that window.
The thesis I lay out has four parts:
- Meta’s AI CapEx is already showing up in revenue and engagement in a significant way and, more importantly, will continue to do so (backed by an expert interview with a former Meta employee in this field);
- The data centers are not “moonshots” (and far from the metaverse spend analogy) — they are Meta’s new workforce, and the more compute Meta has, the more it can improve products and ship new ones;
- Meta has the single most underappreciated asset in tech right now, which is distribution, and the market is assigning zero value to it;
- The valuation has compressed to a point where the bar for an aggressive re-rate is much lower than people think.
Let’s dive in.
The CapEx that the market hates is already showing up in the P&L
Meta did $200.97 billion in revenue in 2025, up 22% YoY, with operating income of $83.28 billion, up 20%. In Q1 2026, they did $56.31 billion of revenue, up 33% YoY — an acceleration from a $200 billion base. Ad impressions grew 19% YoY and average price per ad grew 12% YoY in Q1 2026, with the Q2 guide of $58-61 billion implying continued acceleration.
Now think about what that means in context. The bear narrative is that Meta is spending $125-$145 billion in 2026 CapEx with nothing to show for it. But the company is putting up 33% growth on a $200 billion base while also compounding the underlying business with double-digit price-per-ad gains. When advertisers pay more per impression and impression volume still grows 19%, the system is clearly getting better at allocating attention to higher-value placements.
The AI in the P&L is most visible in three specific areas that I want to walk through, because management actually quantified them on the calls.
First, the ad-ranking models. Meta has been rolling out a model called GEM (Generative Ads Recommendation Model), which is essentially their LLM-style foundation model for ads. On the Q4 2025 call, management said this directly:
In Q4, we doubled the number of GPUs we used to train our GEM model for ads ranking. We also adopted a new sequence learning model architecture, which is capable of using longer sequences of user behavior and processing much richer information about each piece of content. The GEM and sequence learning improvements together drove a 3.5% lift in ad clicks on Facebook and a more than 1% gain in conversions on Instagram in Q4.
A 3.5% lift in ad clicks on a base of roughly $200 billion is multiple billions of incremental dollars, and that’s from one model improvement in one quarter.
Meta also said on the Q3 2025 call that GEM is now “4x more efficient at driving ad performance gains” compared to the original ranking models. This is exactly the scaling law dynamic you want to see — more compute thrown at the model translates to more revenue, and the elasticity of that conversion is improving, not deteriorating.
A recent interview with a high-ranking former Meta employee who worked in this field was very useful. He explained some details about Meta’s internal metric, Internal Revenue per Engagement (iREV). According to him, Meta’s internal goal is a minimum 1.5-2% improvement in that metric every six months. So far, they have been delivering on it, and he is very confident that Meta will continue to deliver going forward, because there is a lot of room for improvement in the three areas that drive iREV: model architecture, more data, and transfer learning. He even quantified the split: 30-40% from model architecture changes, 20-30% from more data, and 30-40% from transfer learning.

While model architecture and more data are quite straightforward, transfer learning is something many of you might not be familiar with. To explain the context as simply as possible: serving a large LLM across Meta’s scale is too expensive, so they had to build a structure with a teacher LLM (the big one) and a student LLM that distills knowledge from the teacher into a smaller model that is cheap enough to run inference on at the scale Meta needs. Improving the knowledge transfer between the teacher and the student is therefore key for Meta, as it will be for any other company running large-scale production workloads (the phase of AI adoption we are now in). Every time Meta makes improvements in this realm, it translates into better ad performance and rankings, because serving performance is essentially determined by the quality of the student model.
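The teacher-student setup the former employee describes can be sketched in a few lines. This is the standard soft-label knowledge-distillation recipe (training the small model to match the big model's softened predictions), not Meta's actual training code; all logits and the temperature are illustrative.

```python
# Minimal sketch of teacher->student knowledge distillation: a large
# "teacher" model supervises a small "student" cheap enough to serve at
# production scale. Illustrative only, not Meta's implementation.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature > 1 softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    The student is trained to match the teacher's *soft* predictions,
    which carry more ranking signal than hard labels alone."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy 3-way ranking decision (e.g. which ad to show).
teacher = [2.0, 1.0, 0.1]       # big model's logits
student_bad = [0.1, 2.0, 1.0]   # student that disagrees with the teacher
student_good = [1.9, 1.1, 0.2]  # student that closely tracks the teacher

assert distillation_loss(teacher, student_good) < distillation_loss(teacher, student_bad)
```

The quality of the deployed system is the quality of the student, which is why improvements in this transfer step show up directly in ad ranking performance.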
What I found really insightful was his answer when asked what the biggest barrier was to improving ad ranking and performance even faster than the ~2% per six months:
»The biggest barrier, I would say it’s just the modeling evolution is so quick. We want to push the model architecture faster but we don’t really have the time to really do parameter tuning to find fast architecture for our data or for different use cases. Right now, it’s more like we have one safe home. That’s our goal at that time. We have the foundation model. The foundation model powers roughly four or five different orgs’ ranking models.
Although it’s a foundation model, it’s a Wikipedia, knows everything, but still, how to find an optimal maybe adapter or optimization, tuning for each of the five use cases that are under its cloud. I think that’s one of the biggest challenges. We have to chase our targets and there are some ways to hit a target a little bit easier than really going deep into understanding this model, what the model does and what’s the best parameter.
Even what I said, what’s the teacher model capacity and the student model capacity ratio? What’s the optimal ratio between the two models? That even what was studied during my time there. I feel that at that time, the big challenge, we are just chasing the whole statistically or too aggressively and ignore those foundational things, all those long-term things a little bit less.«
source: AlphaSense
What this means is that model performance is moving so fast that, internally, Meta hasn’t even had the time or resources to optimize the other levers. That shows just how early we still are in these improvements, and how much more growth this ad-AI tailwind holds for Meta’s core business, not just from scaling laws and model improvements but also from more data and the teacher-student distillation mechanism.
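It is worth compounding the iREV target the former employee cites, because 1.5-2% per half-year sounds small until you stack the periods. The figures below are illustrative scenarios built from his stated target, not Meta guidance.

```python
# Compounding a 1.5-2% iREV improvement every six months.
# Targets are from the interview; the multi-year extrapolation is mine.
for semiannual_gain in (0.015, 0.02):
    five_year = (1 + semiannual_gain) ** 10   # 10 half-year periods
    print(f"{semiannual_gain:.1%}/6mo -> {five_year - 1:.1%} cumulative over 5 years")
```

At the top of the range that is a ~22% lift in revenue per unit of engagement over five years, before any engagement growth at all.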
Second, engagement. On the Q3 2025 call, Zuckerberg said:
Across Facebook, Instagram, and Threads, our AI recommendation systems are delivering higher quality and more relevant content, which led to 5% more time spent on Facebook in Q3 and 10% on Threads. Video is a particular bright spot, with video time spent on Instagram up more than 30% since last year. As video continues to grow across our apps, Reels now has an annual run rate of over $50 billion.
A 5% lift in time spent on Facebook — a 21-year-old app that everyone wrote off as dead — is huge. The platform was supposed to be in decline. AI ranking systems brought it back. In Q4 2025, the optimizations drove a 7% lift in views of organic feed and video posts on Facebook, which Susan Li called “the largest quarterly revenue impact from Facebook product launches in the past two years”.
Third, the end-to-end AI ad tools. The annual run-rate of revenue going through Meta’s fully AI-powered ad tools (Advantage+) passed $60 billion on Q3 2025. The video generation tools alone hit a $10 billion run rate by Q4 2025, with quarter-over-quarter growth outpacing the broader ad revenue increase by nearly 3x. Click-to-WhatsApp ads grew revenue 60% YoY in Q3. None of this exists without the AI infrastructure that the market is currently punishing the company for building.
The way to think about this is the same logic I laid out in my February article about the hyperscalers: the CapEx Meta is spending this year doesn’t show up in this year’s revenue. A data center takes around 2 years to build and operationalize, so the revenue acceleration Meta is showing today is the return on 2023 CapEx (~$28 billion), not 2025 CapEx ($72.2 billion). When 2025’s CapEx starts showing up in 2027 revenue, the operating leverage will be far more aggressive than what we’re seeing now.
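The lag logic above can be made concrete. CapEx figures are from the article; the flat two-year lag and the proportional mapping from spend to revenue impact are my simplifying assumptions.

```python
# Sketch of the ~2-year CapEx-to-revenue lag: today's growth reflects
# 2023 spend, so 2027 should reflect 2025 spend. The uniform lag and
# proportional mapping are simplifying assumptions, not a model of Meta.
capex = {2023: 28e9, 2024: 39e9, 2025: 72.2e9}  # USD, from the article
lag_years = 2

impact_year = {year + lag_years: spend for year, spend in capex.items()}
ratio = impact_year[2027] / impact_year[2025]
print(f"Capital base behind 2027 revenue vs 2025: {ratio:.1f}x")
```

Under these assumptions, the capital base feeding 2027's results is roughly 2.6x the one behind the acceleration we are seeing today.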
Data centers are Meta’s new workforce
Here is the part I think most investors are missing, and it’s where the analogy needs to shift.
For the last 15 years, Meta’s growth engine was: hire engineers, ship product, get more users, monetize via ads. Headcount was the input that scaled output. That framework is now different, because the marginal unit of “intelligence” inside Meta is no longer an engineer. It’s a GPU.
Zuckerberg essentially said this on the Q3 2025 call when he framed Meta’s strategy around three “giant transformers” running Facebook, Instagram, and ads, with the goal of merging them into one unified system:
At the same time, we’re also working on combining these three major AI systems into a single unified AI system that will effectively run our family of apps and business — using increasing intelligence to improve the trillions of recommendations that it will make for people every day.
Meta is openly saying that the entire company — the feeds, the ads, the recommendations across 3.56 billion daily users — is going to be run by AI systems whose performance scales with compute. The CapEx number is the headcount number for the AI era.
And here is the kicker — Meta is compute-starved on the current business. Zuckerberg said it directly on Q3 2025:
We are sort of perennially operating the Family of Apps and ads business in a compute-starved state at this point, which is on the one hand sort of an odd thing to say, given the compute that we built up. But we really are taking a lot of the resources and using them to advance future things that we’re doing. And we think that there’s a lot more compute that we could put towards these that would just unlock a huge amount of opportunity in the core business as well.
Meta CFO Susan Li doubled down on this on the same call:
We’re certainly seeing that we wish we had more capacity today than we do. We would be able to put it towards good use, certain not only would the MSL team appreciate having more capacity, but we’d be able to put it towards good and ROI positive use in the core business as well.
This is not a company building speculative infrastructure for products that might monetize in 2030. This is a company that has more profitable use cases for compute than it has compute, and is rationing GPU hours between training the next frontier model and improving the ranking systems that drive a quantifiable lift in conversions every quarter. Investors who treat this CapEx like a moonshot are misreading the situation.
Meta’s distribution is slept on
There is a more general thesis floating around in the market right now that says: “If AI commoditizes, then everyone with compute can build products, from software to something like a Meta platform.” I think this gets the second-order logic backward. If models commoditize and anyone with enough GPUs can ship a product, then the question becomes: who can get that product in front of users? Distribution becomes the bottleneck, not the model. And Meta’s distribution machine is arguably the single biggest in the world.
Meta’s Family of Apps had 3.56 billion daily active people in March 2026. Instagram crossed 3 billion monthly active users in September 2025. WhatsApp also has over 3 billion users across 180+ countries. Facebook still serves billions of people daily. That is unmatched at this scale anywhere in tech — Google has Search, but the engagement-per-user profile is fundamentally different (people come to Search, do a query, leave; people stay on Meta apps for 30+ minutes a day).
Here’s how that distribution muscle has shown up historically. Meta bought Instagram for $1 billion in 2012. It is now a 3-billion-MAU asset that is the cultural center of gravity for an entire generation. They bought WhatsApp for $19 billion in 2014; it has since grown more than 6x, to over 3 billion users. They built Threads from scratch in mid-2023 — a product that, frankly, was not particularly differentiated from X — and rode Instagram’s social graph to 400 million MAUs and 150 million daily actives in roughly 2.5 years. Similarweb data shows Threads passed X in daily mobile active users in January 2026.
Threads was a clone. The product was almost identical to X. There was nothing technically novel about it. And in 2.5 years, by being plugged into Instagram’s distribution graph, it overtook a 19-year-old product with deep cultural roots. The question every investor should ask themselves: what other company on earth could have done that?
Now apply this to AI products. Meta AI hit 1 billion monthly active users by May 2025, doubling from 500 million in roughly 8 months, and let’s be honest, the product wasn’t even good. ChatGPT took roughly 2 years to reach similar scale. Meta did it by embedding the assistant into search bars and chat interfaces inside WhatsApp, Instagram, Facebook, and Messenger. Roughly 63% of Meta AI’s usage comes from WhatsApp alone. Meta did not need to convince anyone to download an app, learn a new interface, or change a habit. The distribution infrastructure was already there.
If you believe — and I do — that the next phase of AI is going to produce a wave of consumer products (AI-generated content, personalized AI assistants, business AI, voice agents, creator tools, AI shopping experiences), then the company that can ship each of those products to 3.56 billion people on day one has a structural advantage that the market is not pricing in. Zuckerberg said it himself on the Q3 2025 call:
I would guess that Meta has the best track record of any company out there of taking a new product that people love and getting it to billions of people in terms of usage. So I think that the ability to plug in leading models is going to, I would predict, lead to a very large amount of use of these things over the coming years.
The market is essentially treating Meta as if distribution is free. It’s not free. It is the single hardest moat to build in consumer technology, and Meta is the only company that has built three of them in parallel (Facebook, Instagram, WhatsApp), then bolted on a fourth (Threads) using the first three as the launchpad.
The AI sentiment changes on a dime
The market right now is running on AI sentiment more than fundamentals. Companies are being bucketed as “AI winners” or “AI losers”, and the valuation gaps between those buckets are enormous. The reason is that the marginal flows of capital in public markets are still controlled by investors with financial-domain expertise but a relatively shallow understanding of how AI actually works at the technical level. So the signal that gets weighted most heavily is: did this company ship a frontier model? Did they show up on the benchmark leaderboards?
This is exactly the gap that creates opportunity. Meta released Muse Spark on April 8 — the first model from Meta Superintelligence Labs (MSL). Muse Spark scored 52 on the Artificial Analysis Intelligence Index, behind Gemini 3.1 Pro and GPT-5.4 (both at 57) and Claude Opus 4.6. On absolute benchmark terms, it’s not SOTA.
But look at what it is good at. Muse Spark used only 58 million output tokens on the Intelligence Index evaluation, versus 157 million for Claude Opus 4.6 and 120 million for GPT-5.4 — meaning Meta is delivering near-frontier intelligence with less than half the inference compute of competitors. Meta also said the model achieves the same capability level as the older mid-size Llama 4 Maverick using an order of magnitude less compute. For a company that’s about to deploy this model to 3 billion daily users, inference efficiency at this scale is a multi-billion-dollar economic advantage. And the model is particularly strong in vision, health, and what Meta is calling “personal intelligence” use cases — exactly the domains that map onto consumer apps.
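The token counts cited above can be turned into a rough relative-cost comparison. The assumption here is mine: that output tokens on the same evaluation are a usable proxy for inference compute, which is a simplification since per-token cost differs across models.

```python
# Relative inference footprint implied by the Intelligence Index token
# counts cited in the article. Output tokens are used as a rough proxy
# for inference compute, which is a simplification.
tokens = {
    "Muse Spark": 58e6,
    "GPT-5.4": 120e6,
    "Claude Opus 4.6": 157e6,
}
for model, count in tokens.items():
    print(f"{model}: {count / tokens['Muse Spark']:.2f}x Muse Spark's tokens")
```

By this proxy Muse Spark runs at well under half the inference footprint of either competitor, which is exactly the claim management is making, and the margin compounds across billions of daily users.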
Now think about what happens when the larger frontier model lands. Meta has been working on a next-generation flagship, a bigger model internally named the “Watermelon” model. If Meta lands a model that is genuinely competitive on benchmarks with frontier models from Anthropic, OpenAI, and Google, the market will re-rate the company aggressively. The current setup is that Meta is being priced as if it can’t compete at the frontier; the forward P/E of 19x reflects that. Compare that to Google, which trades at a meaningfully higher multiple after the TPU/Gemini repricing in late 2025. The asymmetry is real. If Meta merely catches up to where the market is already pricing Google and labs like Anthropic and OpenAI, the implied upside is substantial — and that’s without assigning incremental value to the distribution moat or to any of the new AI products.
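The re-rate math is simple: price is earnings times multiple, so with earnings held flat, upside from multiple expansion alone is the ratio of the target multiple to the current one. The current 19x is from the article; the target multiples are illustrative scenarios (a partial re-rate and the 10-year average), not forecasts.

```python
# Upside from multiple expansion alone, holding EPS flat.
# 19x forward P/E is from the article; targets are illustrative scenarios.
current_pe = 19.0
for target_pe in (23.0, 26.0):   # partial re-rate vs. the ~10-yr average
    upside = target_pe / current_pe - 1
    print(f"Re-rate to {target_pe:.0f}x -> {upside:.0%} upside with EPS flat")
```

A simple reversion to the historical multiple is worth roughly a third of the market cap before any earnings growth, which is what makes the bar for the re-rate so low.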
The market hates the data center spend. Meanwhile, everyone else is desperate for compute.
The market is currently assigning negative value to Meta’s data center buildout. Every time CapEx goes up, the stock goes down. Meta’s CapEx jumped from $39 billion in 2024 to $72 billion in 2025 to a guided $125-145 billion in 2026.
At the same time, the rest of the AI ecosystem is screaming that we don’t have enough compute. Anthropic just signed a deal to take all 300+ MW of compute capacity at xAI’s Colossus 1 data center in Memphis — roughly 222,000 Nvidia GPUs including H100, H200, and GB200 systems. That deal is worth billions. xAI has effectively pivoted to a neocloud model, renting GPUs to Anthropic. The CoreWeave and Nebius backlogs continue to grow. Oracle’s cloud business is being capacity-constrained. AWS, Google Cloud, and Azure are all selling everything they have available and have multi-year backlogs.
So the market simultaneously believes that (a) there is a multi-year compute shortage that will continue at least through 2027, and (b) Meta is wrong to build data centers, so its buildout deserves negative value. These two beliefs cannot both be true. If there is a compute shortage, then in the worst case the terminal value of Meta’s buildout should be floored at what those data centers would fetch if Meta sold that compute on the open market. Zuckerberg made this point explicitly on the Q3 2025 call:
To date, we keep on seeing this pattern where we build some amount of infrastructure to what we think is an aggressive assumption and then we keep on having more demand to be able to use more compute… any compute that we don’t need for that, we feel pretty good that we’re going to be able to absorb a very large amount of that to just convert into more intelligence and better recommendations in our Family of Apps and ads in a profitable way. Now, I mean, it’s of course possible to overshoot that, right… If we do, this is what I mentioned in my comments then we see that there’s just a lot of demand for other new things that we build internally, externally. Like almost every week, people come to us from outside the company asking us to stand up an API service or asking if we have different compute that they could get from us. And we haven’t done that yet, but obviously if you got to a point where you overbuilt, you could have that as an option.
The fact that the market is ignoring this and assigning a negative value to the CapEx is, in my view, a significant mistake.
Here’s why this matters even more: Meta is arguably the only company outside the three hyperscalers (AWS, Microsoft, Google) that has the operational capability to run hyperscale data centers for both training and inference at the level required. They’ve been operating planetary-scale infrastructure for over a decade. They know how to manage multiple AI accelerators: Nvidia GPUs, AMD GPUs, and custom ASIC (their MTIA). If Meta wanted to offer compute externally tomorrow, the renters lining up would include some of the largest AI labs and enterprises in the world, and the unit economics would look more like AWS than like a cap-on-cost neocloud.
Valuation: What should the real value be?
The company is


