The Great SaaS Unbundling: Why AI Will Destroy Half the Industry and Supercharge the Other Half
Hey everyone,
I’ve been thinking a lot about the AI disruption narrative in SaaS. Everyone’s talking about how AI will “transform” software, but I think most people are getting it wrong. The real story isn’t transformation; it’s bifurcation. Some SaaS companies are about to get absolutely demolished, while others will emerge stronger than ever. This is not a game of screening for low multiples and buying whatever looks cheap on a valuation metric.
The determining factor for survival isn’t the brand, or even the data these companies hold: it’s whether their core system is deterministic or probabilistic.
Let me explain what I mean and why this matters for investors.
The Core Thesis: Deterministic vs. Probabilistic Systems
Deterministic systems are those where precision is critical, state management is complex, and errors cascade into serious consequences. Think accounting software, ERP systems, compliance platforms, healthcare systems, payment processors, and sophisticated workflow engines. These systems need to be right 100% of the time: not 95%, not 99%, but 100%. When you’re reconciling a billion-dollar balance sheet or processing payroll for 50,000 employees, “close enough” isn’t acceptable.
“Traditional enterprise functions, such as HR, are inherently deterministic; decisions like employee termination are binary and require rigid logic where specific inputs trigger precise, unvarying sequences. In contrast, LLMs are inherently probabilistic, determining the confidence level of the next token rather than following a hard-coded decision tree.”
Source: Employee at Rippling (AlphaSense)
Probabilistic systems are those where the core value proposition is pattern recognition, content generation, basic automation, or simple decision-making. Think chatbots, content recommendation engines, basic customer support automation, simple workflow tools, and generic productivity software. These systems can tolerate errors and are often based on “good enough” outputs.
More likely, AI is going to eat the probabilistic category, while some deterministic systems will become more valuable by integrating AI as a complementary layer and then expanding into adjacent layers.
Why Deterministic Systems Are Actually Strengthened by AI
This might seem counterintuitive. If AI is so powerful, why wouldn’t it disrupt the complex systems?
“AI succeeds when autonomy is constrained, execution is owned, and determinism is treated as an asset rather than a limitation.”
Jens Eriksvik
When you look at how enterprises are actually deploying AI agents in 2025/2026, they’re not replacing their systems of record; they’re building orchestration layers on top of them. As a former Microsoft manager put it:
“A ‘reality check’ is occurring among CIOs as they realize LLMs lack the deterministic consistency required for critical industries like financial services. For use cases such as underwriting, a system that provides a correct answer ‘six out of ten times’ is insufficient; these processes demand 100% consistency, which current probabilistic models struggle to guarantee without extensive re-engineering.”
LLMs interpret human intent; deterministic systems execute the actual work. The deterministic systems are not being disrupted; the operator (you) is. This is the architecture that’s winning in production environments.
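To make the architecture concrete, here is a minimal sketch of the pattern: a probabilistic layer interprets free-text intent, while a deterministic layer validates and executes against the system of record with rigid rules, so a wrong interpretation can be rejected but never corrupt state. All names, orders, and amounts are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative system of record: one order in a toy ledger.
LEDGER = {"A-1001": {"paid_cents": 4999, "refunded": False}}

def interpret(user_request: str) -> dict:
    """Probabilistic layer: stand-in for an LLM mapping free text to a
    structured action. It only *interprets*; it never mutates state."""
    if "refund" in user_request.lower():
        return {"action": "issue_refund", "order_id": "A-1001", "amount_cents": 4999}
    return {"action": "unknown"}

def execute(cmd: dict) -> str:
    """Deterministic layer: rigid validation, same input -> same output.
    Every guard must pass before any state changes."""
    if cmd["action"] != "issue_refund":
        return "rejected: unsupported action"
    order = LEDGER.get(cmd["order_id"])
    if order is None or order["refunded"]:
        return "rejected: unknown order or already refunded"
    if cmd["amount_cents"] > order["paid_cents"]:
        return "rejected: refund exceeds amount paid"
    order["refunded"] = True
    return "refunded " + cmd["order_id"]

result = execute(interpret("Please refund my last order"))
```

The design point is the division of labour: the LLM’s output is just a proposal, and only the deterministic `execute` step, with its hard-coded guards, is allowed to touch the ledger.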
Why does this matter? Because the companies that own these deterministic platforms become more valuable in an AI world, not less. They become the essential execution layer that AI needs to actually accomplish tasks.
Usage of these deterministic platforms should rise substantially as AI helps more people extract valuable information from them. But that uplift only accrues to platforms that integrate AI tools well into their deterministic cores.
But even deterministic systems face challenges. Seat-based pricing must be converted to usage-based pricing. SaaS companies have to cut costs aggressively right now, labour costs and SBC in particular, to get in front of the curve. As a deterministic platform, you can charge a premium for your deterministic core offering. On top of that, you can offer probabilistic tools that complement the core. Here the pricing logic is simple: price at inference cost plus a 30% margin. Over time, as you build out a sticky platform offering, you can gradually try to expand that margin, but that time is not here yet.
As a deterministic platform provider, the goals are clear: provide a clear deterministic core, execute great probabilistic offerings that enhance the core, and cut cost AGGRESSIVELY in terms of labour as you increase OpEx spend on cloud infrastructure to reach mass scale and then negotiate better inference costs because of that scale.
The companies that do this will come out as big winners, as they will be able to consolidate and offer probabilistic features on top of their offering, at inference +30% margins, and, with it, expand their TAM.
The Probabilistic SaaS Bloodbath
Now let’s talk about the other side of this equation—the SaaS companies that are in trouble.
If your core value proposition can be replicated by an LLM at 90% of the quality and 1% of the cost, and your product is probabilistic, you no longer have a sound business model. The problem arises when your core value proposition is pattern matching, content generation, recommendations, or simple automation. Foundation models have gotten so good at exactly these tasks that they can replicate your entire product in a few lines of code. And the problem is not only cost (which is a big one); it extends to the user interface, data, integration, and brand moats.
Having a “great UX” as a SaaS provider is irrelevant when natural language becomes the interface. Users would rather type “generate 10 marketing emails for our Q1 launch” into ChatGPT than navigate through HubSpot’s 47-screen workflow builder. While some call out proprietary data as the strong moat for these kinds of businesses, I would argue otherwise. Modern LLMs can learn from a small example set and perform as well as a model trained on thousands of examples. The accelerating pace of LLM progress and the rise of synthetic data also hurt incumbent data holders. Research from Meta in 2024 showed that models trained on synthetic data generated by GPT-4 perform within 2% of models trained on real data for most classification tasks. And that was in 2024; since then, this has only gotten better. Even if proprietary data gives you a model with 2-3% better accuracy, the probabilistic nature of your business means customers are not willing to pay 100x premiums for 2-3% better outcomes. They might, however, if you had a deterministic system.
Moving to the “integration moat.” The argument here is that these SaaS solutions are integrated with thousands of other apps and that this ecosystem is hard to replicate. But most SaaS products have well-documented APIs, and AI excels as an integration layer without the need for pre-built connectors. With AI agents, these connections will become even more seamless as adoption accelerates: SaaS companies that want to stay “useful” in the age of agentic AI will make their APIs even more open and clear.
Now to the brand moat. There is some merit to the idea that enterprises are loyal to brands they trust. But with probabilistic systems, that trust is weaker than with deterministic systems, where you know the results are 100% accurate. Enterprises are loyal only up to the point where the cost gap becomes too big. If the discount is 20-30%, most won’t switch; if it grows to 50-70%, switching starts. Trust is also fluid: AI startups with low-cost probabilistic solutions build it through media coverage, billions in fresh VC funding, and high-profile hires from the incumbents.
The culling of probabilistic SaaS is already underway, and it involves more than just cutting seats.
Publicis Sapient reports actively reducing traditional SaaS licenses by approximately 50%—including major platforms like Adobe—by substituting them with generative AI tools and chatbots. An executive at the firm explained in an expert interview that AI agents are “10x faster, 100x smarter” than junior staff, creating a redundancy that directly cannibalizes the seat-based revenue underpinning commercial SaaS models.
For probabilistic SaaS, the only viable model is to cut costs to a minimum and price your product at a 30%+ margin on inference, but even that might not be sticky enough, especially if you have no deterministic offering and your clients are primarily SMBs. These companies will not be disrupted directly by AI, but by deterministic platforms competing with them: bundling AI-generated probabilistic offerings into a single package held together by a deterministic core. If your ERP provider starts offering a customer service system that works flawlessly with your ERP and draws on the same inference credits for both use cases, you will likely switch rather than keep a separate customer service product, even at the same cost.
Valuation Compression Is Already Here, but It’s Across the Board
Right now, the market is hitting SaaS across the board as it sees the risk of AI disruption. As of December 2025, the median EV/Revenue multiple for public SaaS companies stands at 5.1x, down from the pandemic peak of 18-19x and much lower than the historic average.
What the market hasn’t fully priced in yet is the deterministic/probabilistic distinction I laid out here, so the opportunity to own a deterministic SaaS platform at a reasonable price is definitely here.
Based on the criteria laid out in this article, I made a list of public SaaS companies, ranked them along the deterministic/probabilistic spectrum, and highlighted the ones I see as least at risk of AI disruption:


