
The AI Layoff Trap: Why Companies Can’t Stop Automating

By Ethan Brooks

USA Tech Journalist | April 12, 2026

I’ve covered Silicon Valley for over a decade — watching hype cycles around automation, the gig economy boom, and now the explosive rise of generative AI. What’s unfolding in 2026 feels different. Companies are openly replacing workers with AI tools at unprecedented speed, often citing productivity gains and cost savings. Stock prices frequently jump on the news.

But a new academic paper published in March 2026 forces us to confront a darker possibility: this automation arms race may be collectively suicidal. Two economists — Brett Hemenway Falk from the University of Pennsylvania and Gerry Tsoukalas from Boston University — have built a rigorous mathematical model showing how rational, competitive firms can get trapped in excessive automation that erodes the very consumer demand their businesses depend on.

Their paper, titled “The AI Layoff Trap,” doesn’t rely on speculation or dystopian forecasts. It uses a task-based economic model to demonstrate a clear externality: when one firm lays off workers to cut costs, it saves money for itself but reduces overall purchasing power across the economy. Those laid-off workers (or those with depressed wages) buy fewer products and services — hurting every company, including the one that automated in the first place.

The result? A classic prisoner’s dilemma on a macroeconomic scale. Every CEO can see the trap coming. Yet no single company can afford to stop automating — because its competitors won’t.

The Core Insight: Firing Workers Means Firing Customers

In the researchers’ model, firms compete in a market where tasks can be performed by either humans or AI. Automating a task lowers costs for the individual firm, giving it a competitive edge in pricing or profits. However, displaced workers lose income, which reduces aggregate demand. Since demand is what allows all firms to sell their goods and services, the savings from automation are partly offset by lower revenue across the board.

Crucially, each firm internalizes only a small fraction of the demand destruction it causes (the part affecting its own sales), while the rest spills over to rivals. This creates a strong incentive to automate anyway — you capture the full cost savings but bear only a sliver of the demand hit.

At the extreme, the model shows firms can displace far more workers than is collectively optimal, leading to a “deadweight loss” that harms both labor and capital owners. It’s not simply a transfer of wealth from workers to shareholders; it’s a net loss for the economy.
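The externality logic above can be made concrete with a toy two-firm payoff sketch. All numbers here are invented for illustration — they do not come from Falk and Tsoukalas's model — but they capture the structure: automation savings are captured in full, while the demand destruction is split with the rival.

```python
# Toy illustration of the demand externality (all numbers invented).
# Two symmetric firms each choose to Keep workers or Automate. Automating
# saves labor cost S for that firm alone, but destroys D of aggregate
# demand, of which each firm bears only its market share.

S = 10.0     # cost savings captured entirely by the automating firm
D = 14.0     # total demand destroyed per automating firm
share = 0.5  # fraction of any demand hit that each firm bears

def profit(my_choice, rival_choice):
    """Profit change for 'me' given both firms' choices ('keep' or 'auto')."""
    gain = S if my_choice == "auto" else 0.0
    # I bear my share of the demand destroyed by *every* automating firm.
    destroyed = D * ((my_choice == "auto") + (rival_choice == "auto"))
    return gain - share * destroyed

for me in ("keep", "auto"):
    for rival in ("keep", "auto"):
        print(f"me={me:4s} rival={rival:4s} -> my payoff {profit(me, rival):+.1f}")
```

With these particular numbers, automating is each firm's dominant strategy (it pays regardless of what the rival does), yet mutual automation leaves both firms worse off than mutual restraint — the prisoner's dilemma in miniature.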

The paper explicitly calls this the “Red Queen effect” — as AI improves, the pressure to automate faster than competitors intensifies, but when everyone races ahead at the same pace, the relative gains cancel out, leaving only destroyed demand behind.

This isn’t fringe theorizing. The abstract states plainly: “Even as AI-driven layoffs sweep across industries, and even as every firm recognizes that vanishing paychecks mean vanishing customers, not one of them will stop.”

Real-World Examples: The Numbers Are Already Stacking Up

The timing of the paper couldn’t be more relevant. Tech layoffs have accelerated, with AI frequently cited as a driver.

  • Block (formerly Square): In February 2026, the company cut more than 4,000 employees — roughly 40% of its workforce of over 10,000 — reducing headcount to under 6,000. CEO Jack Dorsey directly linked the move to AI, stating that “intelligence tools have changed what it means to build and run a company” and that “a significantly smaller team, using the tools we’re building, can do more and do it better.” He predicted that the majority of companies would reach the same conclusion within the next year. Block’s stock surged on the announcement.
  • Salesforce: CEO Marc Benioff has openly discussed reducing customer support headcount from about 9,000 to 5,000 using AI agents (Agentforce), noting he “needs less heads” because AI now handles a significant portion of interactions. The company reported cost reductions of around 17% in support. While some roles were redeployed rather than eliminated outright, the net effect is clear: fewer humans needed for routine tasks.

Goldman Sachs and other firms have highlighted how AI coding tools allow one senior engineer to handle work previously done by entire small teams. Broader data shows over 100,000 tech layoffs in 2025, with AI as a primary factor in a large share. Older but still relevant studies (including from OpenAI researchers) have estimated that up to 80% of U.S. workers have jobs where at least some tasks could be impacted by generative AI.

These moves make perfect sense at the individual company level in a competitive market. But scaled across the economy, the paper warns they risk a self-reinforcing spiral: fewer employed consumers → softer demand → pressure for even more cost-cutting via automation.
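The spiral described above can be sketched as a simple feedback loop. This is a back-of-the-envelope toy, not the paper's model, and every parameter is invented: the point is only to show how job cuts and demand softness can compound each other round after round.

```python
# Toy feedback loop (invented parameters, not calibrated to any economy):
# each round, firms automate a fraction of remaining jobs; lost wages
# soften demand, and weaker demand raises the pressure to cut further.

employment = 1.0           # share of workers still employed
demand = 1.0               # aggregate demand index (1.0 = baseline)
automation_rate = 0.10     # base fraction of jobs automated per round
demand_sensitivity = 0.8   # how strongly lost wages translate into lost demand

for round_ in range(1, 6):
    cut = automation_rate * (2.0 - demand)   # weaker demand -> deeper cuts
    employment *= (1 - cut)
    demand = 1 - demand_sensitivity * (1 - employment)
    print(f"round {round_}: employment {employment:.2f}, demand {demand:.2f}")
```

Both series fall monotonically, and each round's cut is larger than the last — the self-reinforcing character the paper warns about, stripped to arithmetic.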

Why Common Solutions Don’t Fix the Trap

The researchers rigorously test several popular policy responses and find most fall short because they fail to change the underlying incentive structure:

  • Universal Basic Income (UBI): Improves living standards for displaced workers but doesn’t alter any firm’s decision to automate a specific task.
  • Capital income taxes or higher corporate taxes: Affect overall profits but not the per-task calculus of replacing a human with cheaper AI.
  • Worker equity or profit-sharing: Narrows inequality somewhat but can’t eliminate the competitive pressure to cut labor costs.
  • Collective bargaining or voluntary agreements: Automation remains a dominant strategy; no self-enforcing pact holds when defection (automating while others don’t) is so tempting.
  • Upskilling or retraining: Valuable for individuals, but if reabsorption lags behind displacement speed, the demand externality persists.

The only mechanism the model identifies as capable of implementing the socially optimal level of automation is a Pigouvian automation tax — essentially a per-task charge that forces each firm to internalize the demand destruction it causes when it displaces workers. Revenue from the tax could fund retraining or income support, potentially making the intervention self-limiting over time as the economy adjusts.
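The mechanics of the Pigouvian fix are easy to show with the same style of invented numbers as above (again, not figures from the paper): set the tax equal to the portion of demand destruction the firm does not already bear, and the private automation decision lines up with the social one.

```python
# Sketch of a per-task Pigouvian automation tax (all numbers invented).
S = 10.0     # private savings from automating one task
D = 14.0     # total demand destroyed by automating that task
share = 0.5  # fraction of the hit the automating firm already bears

external_harm = (1 - share) * D   # spillover onto everyone else: 7.0
tax = external_harm               # Pigouvian rate internalizes the spillover

def automates(tax_rate):
    """A firm automates iff savings exceed its internalized cost plus tax."""
    return S - share * D - tax_rate > 0

print("No tax:   firm automates?", automates(0.0))  # True:  10 - 7 > 0
print("With tax: firm automates?", automates(tax))  # False: 10 - 7 - 7 < 0
```

Note what the tax does: the firm's after-tax calculus becomes S − D, exactly the social comparison of savings against total demand destroyed. Here automation destroys more demand (14) than it saves (10), so the taxed firm correctly declines to automate.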

OpenAI itself has recently floated ideas around taxes on automated labor and shifting the tax base away from labor income toward capital, reflecting growing awareness of these dynamics among even leading AI labs.

A Balanced View: Productivity Gains vs. Demand Risks

As a journalist who has reported on both the transformative potential of AI (from better EV software to medical diagnostics) and its workforce impacts, I see two sides.

On one hand, AI-driven productivity improvements are real and could eventually lead to new tasks, new industries, and higher overall living standards — the classic “reinstatement effect” discussed by economists like Daron Acemoglu and Pascual Restrepo. History shows technology has often created more jobs than it destroyed over the long run.

On the other hand, the transition period matters enormously. If displacement outpaces reabsorption and new job creation — especially for mid-skill or routine cognitive work — the demand shortfall could create a painful drag. The paper’s contribution is showing that even perfectly rational, forward-looking CEOs can’t unilaterally escape this dynamic in a competitive market.

It’s worth noting that the model is theoretical and simplified (as all economic models are). Real economies have frictions, heterogeneous firms and workers, borrowing constraints, policy responses, and the potential for rapid new task creation that could mitigate the trap. The authors themselves call for further empirical work.

Yet the core insight resonates with what we’re seeing in boardrooms: short-term shareholder pressure rewards automation announcements, while the longer-term demand risks are diffuse and harder to quantify on quarterly earnings calls.

What This Means for Workers, Companies, and Policymakers

For tech professionals and workers: The message isn’t to fear AI but to treat it as a tool to augment skills. Focus on areas where human judgment, creativity, empathy, or complex integration still dominate. Build portfolios showing how you leverage AI rather than compete against it. Diversify skills toward emerging needs in AI oversight, ethics, domain-specific applications, and new task creation.

For CEOs and executives: The paper doesn’t suggest halting innovation. It highlights the limits of pure market-driven automation. Forward-thinking leaders might explore hybrid approaches — using AI to augment rather than fully replace where possible — or advocate for coordinated policy that levels the playing field.

For policymakers: The discussion needs to move beyond just “picking up the pieces” after displacement (UBI, retraining) to addressing the incentives driving excessive automation. A well-designed Pigouvian tax on automation tasks could internalize externalities without stifling progress. Shifting more of the tax burden toward capital or AI-driven returns (as OpenAI has suggested) could help sustain public services as labor income erodes.

In the EV and mobility space I also cover, this tension is already visible: AI optimizes battery management and autonomous features, boosting efficiency, but widespread job shifts in manufacturing, driving, and support roles could affect consumer spending on vehicles and services.

The Road Ahead: Not Inevitable Doom, But a Call for Smarter Adaptation

The “AI Layoff Trap” isn’t a prediction of certain economic collapse. It’s a warning that unfettered competitive forces, combined with powerful general-purpose technology, can lead to suboptimal outcomes that no single actor can prevent alone.

History is full of technological shifts that ultimately raised living standards, but also periods of painful adjustment with policy interventions (think labor laws, social safety nets, or infrastructure investments during past industrial revolutions).

The math in Falk and Tsoukalas's paper is clear: knowing the trap exists isn’t enough. Breaking it requires changing incentives at the systemic level.

As someone who has interviewed engineers facing automation anxiety and executives betting billions on AI infrastructure, my take is pragmatic optimism tempered by realism. AI can be an extraordinary force for good — solving complex problems in climate, healthcare, and mobility. But getting the transition right demands clear-eyed analysis of both its upsides and its externalities.

The AI Layoff Trap isn’t inevitable. With thoughtful policy, responsible corporate strategy, and proactive upskilling, we can harness the technology without triggering the demand death spiral the model warns against.

What do you think? Does the prisoner’s dilemma framing ring true, or will new job creation outpace displacement faster than the paper assumes? Have you seen AI augmentation help rather than replace roles in your industry? Share your experiences in the comments.

Subscribe to VFuture Media for more in-depth analysis at the intersection of AI, tech workforce trends, and electric mobility.

Ethan Brooks is a veteran USA tech and auto journalist with over 12 years covering Silicon Valley innovation, workforce disruption, and emerging technologies. He brings balanced, data-driven perspectives from reporting at major conferences and direct conversations with engineers, executives, and policymakers.
