In a move that underscores the intensifying battle for control over the AI supply chain, Anthropic — the creator of the Claude family of frontier models — is in early-stage talks to purchase AI inference chips from British startup Fractile, according to multiple reports. The potential deal, which first surfaced on May 2, could see Anthropic secure a new source of high-performance, low-cost silicon optimized specifically for running massive AI models at scale, rather than training them.
While the discussions are preliminary and no binding agreement has been reached, the news highlights Fractile’s rapid rise as one of Europe’s most promising challengers to Nvidia’s near-monopoly in AI hardware — and signals how inference (the “serving” phase of AI) is quickly becoming the industry’s next trillion-dollar bottleneck.
The Startup: From Oxford Robotics Lab to Bristol Chip Foundry
Fractile was founded in 2022 by Dr. Walter Goodwin, then a PhD student at the University of Oxford’s Robotics Institute. Now in his early 30s, Goodwin launched the company to solve what he saw as the fundamental hardware limitation holding back the next wave of AI: the crippling inefficiency of shuttling data between memory and compute in traditional GPU architectures.
The company operated in stealth for nearly two years before emerging publicly in July 2024 with a $15 million seed round co-led by Oxford Science Enterprises, Kindred Capital, and the NATO Innovation Fund. Additional early support came from a £5 million+ grant from the UK government’s Advanced Research and Invention Agency (ARIA).
Headquartered in London with a growing hardware engineering hub in Bristol, Fractile now employs dozens of specialists across transistor-level design, systems architecture, and cloud-scale inference software. In February 2026, the UK government announced Fractile would invest £100 million (~$136 million) over three years to expand its British operations, including a new dedicated facility. By March 2026, the startup was reportedly in talks to raise more than $200 million at a $1 billion valuation, with interest from Accel and returning backers.
Even former Intel CEO Pat Gelsinger personally invested, calling Fractile’s approach “radical enough to offer such a leap.”
The Technology: In-Memory Compute That Rewrites the Inference Rulebook
Fractile’s secret sauce is in-memory compute — physically interleaving memory and processing elements on the same chip using static random-access memory (SRAM) rather than the off-chip DRAM-heavy designs that dominate today’s GPUs.
In conventional systems, massive AI models spend most of their time and energy simply moving data back and forth between memory and compute units — the classic “memory wall.” Fractile’s architecture eliminates much of that movement by performing calculations directly where the data lives.
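The memory-wall arithmetic can be sketched with a rough bound: during token-by-token decoding, every model weight must stream from memory to the compute units for each token, so memory bandwidth — not raw FLOPS — caps single-stream throughput. The figures below are illustrative assumptions (a hypothetical 70B-parameter model in 16-bit precision and an H100-class ~3.35 TB/s HBM bandwidth), not Fractile or Nvidia specifications.

```python
# Back-of-envelope bound on single-user decode throughput for a
# memory-bandwidth-limited GPU. All numbers are illustrative assumptions.

model_params = 70e9          # hypothetical 70B-parameter model
bytes_per_param = 2          # FP16/BF16 weights
dram_bandwidth = 3.35e12     # ~3.35 TB/s, roughly H100-class HBM

# Decoding one token streams every weight through compute once,
# so bytes moved per token ≈ total weight bytes.
bytes_per_token = model_params * bytes_per_param

# Bandwidth divided by bytes-per-token gives the throughput ceiling.
max_tokens_per_sec = dram_bandwidth / bytes_per_token

print(f"Upper bound: ~{max_tokens_per_sec:.0f} tokens/s per user")  # ~24 tokens/s
```

This is why keeping weights in on-chip SRAM next to the compute elements matters: it removes the off-chip bandwidth term from the bound entirely, which is the lever Fractile's claimed speedups rest on.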
The result, according to the company: frontier models running up to 25× faster at 1/10th the system cost of current GPU-based setups. The chips are designed to serve thousands of tokens per second to thousands of concurrent users, at dramatically lower power budgets — exactly what’s needed for real-time, high-volume deployment of models like Claude.
Unlike training-focused chips, Fractile is laser-focused on inference — the phase where AI actually interacts with users. As models grow larger and context windows explode, inference is becoming the dominant share of AI compute spend. Fractile claims its first chips will be ready for customer sampling and early deployment in 2027.
Why Anthropic? Diversification, Cost, and Scale
Anthropic, valued at tens of billions and backed by Amazon and Google, is one of the world’s largest consumers of AI compute. The company is already exploring ways to design its own chips and has struck large deals with Nvidia and others. But with inference costs projected to balloon into the tens of billions annually industry-wide, securing alternative suppliers is strategic.
Fractile’s chips promise exactly what hyperscalers and AI labs crave for inference: lower latency, higher throughput, and far better economics. Sources familiar with the talks say Anthropic sees the potential partnership as a way to gain leverage with existing suppliers while preparing for the explosive growth of Claude-powered applications.
The talks remain early, with no financial terms disclosed, and any purchase would likely begin once Fractile’s chips reach production readiness next year.
A UK Success Story in the Global AI Hardware Race
Fractile joins a growing cohort of European (and especially UK) AI hardware players betting that specialized silicon can carve out space against Nvidia’s CUDA-dominated ecosystem. The UK government has thrown its weight behind the sector, viewing advanced chips as critical for both economic competitiveness and national security.
For Fractile, validation from a top-tier AI lab like Anthropic would be transformative — not just financially, but as proof that its radical SRAM-based, in-memory design can deliver in the real world. For Anthropic, it’s another step toward reducing single-vendor risk and driving down the eye-watering costs of running the world’s most capable AI models.
As Walter Goodwin and his team push toward 2027 tape-out and first silicon, the broader industry is watching closely. Inference isn’t just the “deployment” phase anymore — it’s where the real AI race will be won or lost.
If the Anthropic-Fractile partnership materializes, it could mark the beginning of a new chapter in which “boring” backend hardware innovation quietly reshapes who controls the future of AI.
VFuture Media will track developments in this potential deal and Fractile’s progress. Could SRAM-based in-memory chips be the dark horse of the inference era? Let us know your take in the comments.
By VFuture Media Tech Desk | May 3, 2026
