Broadcom, Google, and Anthropic are not just expanding a partnership. They are showing that the next phase of AI will be won by the companies that control compute, networking, and long term infrastructure at industrial scale.
The biggest AI stories often get framed around chatbots, model names, or billion dollar valuations. But the more important story is usually buried underneath all that noise. It sits in the compute layer, where chips, racks, networking, data centres, and power determine who can actually keep building at the frontier. That is why the April 6, 2026 announcements involving Broadcom, Google, and Anthropic matter so much. Broadcom said it signed a long term agreement with Google to develop and supply future generations of custom Tensor Processing Units and related components for Google’s next generation AI racks through 2031. In a separate but connected expansion, Anthropic said it had signed a new agreement with Google and Broadcom for multiple gigawatts of next generation TPU capacity expected to begin coming online in 2027.
That may sound like a routine partnership announcement, but it is anything but routine. Broadcom’s filing said Anthropic will access approximately 3.5 gigawatts of capacity through Broadcom as part of its broader multi gigawatt commitment to next generation TPU compute, and it also made clear that Anthropic’s use of that expanded capacity depends on its continued commercial success. The filing added that the parties are in discussions with operational and financial partners. In other words, this is not just a simple product sale with a clean beginning and end. It is a multi party infrastructure buildout, tied to real demand, real money, and real execution risk.
That detail matters because AI has moved beyond the phase where a company can mostly rent whatever hardware it needs and treat infrastructure as somebody else’s problem. Frontier model companies now need long horizon access to compute, not just short term cloud capacity. Google’s latest publicly documented TPU, TPU7x, is described by Google as the first release in its seventh generation Ironwood family and a system designed for large scale AI training and inference. When the language in these deals shifts to multi gigawatt commitments, next generation racks, and supply assurance through 2031, you are no longer talking about an app story. You are talking about industrial scale AI.
A lot of people will read this story and come away thinking Broadcom is simply selling chips to Anthropic. That is too shallow. The structure is more interesting than that. Broadcom is deepening its role with Google by developing and supplying future TPU generations and networking parts for Google’s AI racks, while Anthropic is securing access to compute built on Google’s TPU ecosystem through that broader relationship. That means the value is being created across three layers at once: chip design and supply, cloud and platform control, and model demand. Each company is solving a different problem, but all three are being pulled into the same compute engine.
Google’s position in this arrangement is especially important. Reuters reported that demand for custom chips such as Google’s TPUs has surged as companies look for alternatives to NVIDIA’s expensive GPUs, and that TPU sales have become a crucial growth engine for Google’s cloud revenue. That makes this deal about more than internal infrastructure. It is also about proving that Google’s custom silicon strategy can win serious external workloads, not just power Google’s own systems. If Anthropic can scale major Claude workloads on TPUs while still keeping a multi platform hardware strategy, that gives Google a stronger argument that its TPU stack is not a niche tool. It is a real contender.
Broadcom’s role is just as important because AI scale is not only about the accelerator itself. The company’s filing does not stop at TPUs. It explicitly includes supply assurance for networking and other components in Google’s next generation AI racks through 2031. That sounds dry, but this is where some of the real bottlenecks live. AI systems at scale depend on how well thousands of chips talk to each other, how efficiently data moves across the rack, and how reliably the entire system stays fed. In practical terms, Broadcom is not merely sitting in the chip lane. It is embedding itself in the physical plumbing of the AI stack.
The year 2031 also tells you something important. This is not a short term hedge or a tactical pilot. It is a multiyear commitment that gives Google continuity in TPU development, gives Broadcom visibility into a long runway of AI demand, and gives Anthropic confidence that future capacity is being planned now rather than negotiated in panic later. In a market where investors and customers worry constantly about compute shortages, that kind of horizon is strategic in itself. It says the AI leaders are no longer just competing on models released this quarter. They are competing on infrastructure booked years in advance.
Anthropic’s side of the story explains why these numbers have become so large so quickly. In its April 6 announcement, the company said its annual revenue run rate had surpassed $30 billion, up from about $9 billion at the end of 2025. It also said that when it announced its Series G fundraising in February, more than 500 business customers were spending over $1 million on an annualised basis, and that the number now exceeds 1,000. That is an extraordinary jump in a very short period of time. Whatever one thinks about run rate metrics versus fully realised revenue, the signal is still obvious. Customer demand for Claude has accelerated fast enough that Anthropic no longer has the luxury of thinking small about infrastructure.
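For readers less familiar with the metric, a run rate typically annualises recent revenue rather than measuring a full trailing year. Here is a minimal sketch of the arithmetic, assuming the figure annualises the latest month of revenue, which Anthropic has not confirmed is how it calculates it:

```python
# Illustrative run rate arithmetic. Only the $30B and $9B figures come from
# Anthropic's announcement; the monthly interpretation is an assumption.
run_rate = 30e9                    # reported annualised revenue run rate, USD
implied_monthly = run_rate / 12    # roughly $2.5B of revenue in the latest month
prior_run_rate = 9e9               # the end of 2025 figure from the same announcement
growth = run_rate / prior_run_rate - 1
print(f"${implied_monthly / 1e9:.1f}B per month, {growth:.0%} growth")
# -> $2.5B per month, 233% growth
```

However the metric is constructed, a roughly 233 percent jump in a matter of months is the kind of curve that pushes a company from renting capacity to booking gigawatts.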
The company has also been unusually clear that it does not want to depend on one hardware path. Anthropic said it trains and runs Claude on AWS Trainium, Google TPUs, and NVIDIA GPUs. It also said Amazon remains its primary cloud provider and training partner, while Claude is available across Amazon Web Services, Google Cloud, and Microsoft Azure. That matters because it shows Anthropic is trying to build resilience into the business as well as performance. Different workloads suit different chips, and different commercial relationships reduce single point dependency. This Google and Broadcom deal is not a sign that Anthropic is abandoning other platforms. It is a sign that frontier AI labs now need all of them.
There is also a geographical and political dimension here. Anthropic said the vast majority of the new compute capacity from this partnership will be sited in the United States, calling it a major expansion of its November 2025 commitment to invest $50 billion in strengthening American computing infrastructure. In that earlier announcement, Anthropic said it would build data centres with Fluidstack in Texas and New York, with more sites to come, and that the facilities were custom built for Anthropic with a focus on maximising efficiency for its workloads. This partnership therefore sits inside a much broader attempt to tie frontier AI growth to domestic infrastructure buildout.
The gigawatt language is worth pausing on because it changes how the whole story should be read. A startup talking about a few thousand GPUs is one thing. A frontier AI company talking about multiple gigawatts of next generation TPU based capacity is something else entirely. That is the language of utilities, power planning, campus scale data centres, financing partners, and long term industrial commitments. It tells us that frontier AI is not evolving into a normal software category. It is turning into a sector where the underlying physical infrastructure can shape who survives, who scales, and who falls behind.
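To make the gigawatt framing concrete, here is a rough back-of-envelope in Python. The per accelerator power draw and overhead factor below are illustrative assumptions, not disclosed specifications for Ironwood class TPUs or for this deal; only the 3.5 gigawatt figure comes from Broadcom’s filing.

```python
# Back-of-envelope scale estimate. Per-chip power and overhead are assumed
# values for illustration, not published specs.
FACILITY_POWER_GW = 3.5         # capacity Broadcom's filing attributes to Anthropic
WATTS_PER_ACCELERATOR = 1_000   # assumed roughly 1 kW per accelerator, all in
OVERHEAD_PUE = 1.3              # assumed data centre overhead for cooling and networking

usable_watts = FACILITY_POWER_GW * 1e9 / OVERHEAD_PUE
accelerators = usable_watts / WATTS_PER_ACCELERATOR
print(f"~{accelerators:,.0f} accelerators")  # ~2,692,308 under these assumptions
```

Even if the real per chip numbers differ by a factor of two or three, the conclusion holds: this is planning measured in millions of accelerators and utility scale power contracts, not rack at a time procurement.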
For Broadcom, the timing could hardly be better. The company’s official results show AI is already a major growth engine. In its fourth quarter fiscal 2025 release, Broadcom said AI semiconductor revenue increased 74 percent year over year and that it expected AI semiconductor revenue to double year over year to $8.2 billion in the first quarter of fiscal 2026. Then, in its first quarter fiscal 2026 results, Broadcom reported that Q1 AI revenue actually reached $8.4 billion, up 106 percent year over year, and guided for AI semiconductor revenue of $10.7 billion in Q2. Those are not speculative future hopes. That is current momentum. The Google and Anthropic agreements land on top of a business already being pulled upward by custom AI accelerators and AI networking.
This is what makes Broadcom different from the simpler “AI winner” narrative that often dominates the market. Broadcom is benefiting not only from demand for accelerators, but from demand for the surrounding network fabric that makes AI clusters work. Its CEO has repeatedly tied AI growth to custom AI accelerators and Ethernet AI switches, and the new Google agreement specifically covers networking and other rack components alongside TPU development. That means Broadcom has exposure to multiple value layers in the AI buildout. If AI clusters get denser, faster, and more distributed, Broadcom does not need to win only on the chip inside the box. It can also win on how the boxes get connected.
Google also gets more than just a customer out of this. It gets validation. For years, Google’s TPUs have powered its own internal AI systems, and Google has steadily tried to turn that internal advantage into a broader cloud and platform business. The TPU7x Ironwood documentation describes Ironwood as the latest TPU on Google Cloud, built for large scale training and inference, and capable of high performance across dense models, mixture of experts models, pre training, and decode heavy inference. If a company like Anthropic is willing to lock in major future growth on that stack, it sends a signal to the rest of the market that TPUs are not just for Google. They are becoming serious external infrastructure.
There is another upside for Google that is easy to miss. The more outside demand it can attract to TPUs, the more leverage it gets from the years it has spent building custom silicon and the software ecosystem around it. Winning external frontier AI workloads helps justify the capital spending, improves platform credibility, and potentially boosts utilisation across the broader Google Cloud AI stack. In a market where every major player is trying to prove that its infrastructure investments can translate into durable revenue, this matters a lot. It turns internal engineering excellence into commercial proof.
The broader lesson is that the AI race is becoming less about who can launch the loudest demo and more about who can secure the deepest compute moat. Model quality still matters, of course. Product experience still matters. Distribution still matters. But none of that scales very far if a company cannot guarantee the compute, networking, and power needed to keep training and serving its systems. The Broadcom, Google, and Anthropic partnership is a clear example of that shift. It is a model company, a cloud and silicon platform company, and a systems supplier moving closer together because the next stage of AI is too large for any one layer to solve alone.
It also shows that the future will probably not belong to single vendor purity. Anthropic’s own description of its hardware mix makes that plain. The company is using AWS Trainium, Google TPUs, and NVIDIA GPUs, while still emphasising that Amazon remains its primary cloud and training partner. That is not indecision. It is strategy. Future AI leaders will likely run mixed stacks, place long dated bets across several suppliers, and optimise workloads based on performance, cost, availability, and resilience. In that world, the real competitive edge is not choosing one chip forever. It is building the organisational discipline to use several compute ecosystems without losing speed.
There are still plenty of risks. Financial terms were not disclosed. Broadcom’s filing says Anthropic’s expanded consumption depends on continued commercial success, which means some of the upside remains conditional. The filing also notes that the parties are still in discussions with operational and financial partners, a reminder that building AI infrastructure at this scale can require complex financing and execution structures. Beyond that, the wider semiconductor and data centre buildout faces supply chain pressure. McKinsey has noted more than $450 billion in announced US semiconductor manufacturing investment as of 2024, while also warning of possible supply gaps in chemicals and materials by 2030 if domestic supply does not keep pace. So even the biggest AI players are not operating in a frictionless world.
Still, the direction is unmistakable. The AI market is growing up into something heavier, slower to build, and harder to fake. The winners will not just be the companies with the smartest models or the flashiest product launches. They will be the companies that can lock in compute years ahead, line up financial and operational partners, match workloads across multiple chip platforms, and turn that infrastructure into dependable customer service. Inside this Broadcom, Anthropic, and Google partnership is a very simple truth: the next phase of AI will be decided as much by concrete, cables, racks, switches, and power contracts as by model weights and prompt quality. That is the real story, and it is a much bigger one than a headline about a chip deal.