Cadence is turning simulation into the new workbench | FOMO Daily
Cadence’s expanded work with Nvidia and Google Cloud shows how engineering is shifting toward digital twins, agentic AI, and physics based simulation. The real story is not one new product. It is the growing idea that chips, robots, and AI infrastructure should be designed and tested in software first, then deployed into the real world with fewer surprises.
Something bigger than a product launch is happening
At first glance, the Cadence news looks like another enterprise AI partnership announcement. One company expands its relationship with Nvidia, adds Google Cloud, talks about automation, and wraps it all in the usual language about productivity and scale. But that is not really what is going on here. Cadence used its CadenceLIVE Silicon Valley 2026 event to tie together three important threads at once. It expanded its Nvidia partnership around agentic AI, physics based simulation, digital twins, robotics, and AI factories. It also deepened its work with Google Cloud by bringing Gemini into Cadence’s ChipStack AI Super Agent and making that system available on Google Cloud Marketplace. Put simply, Cadence is trying to move engineering toward a world where chips, robots, and large AI systems are designed, tested, and refined in software before they are trusted in the real world.
The old engineering model is under pressure
For a long time, engineering progress followed a familiar pattern. Build the design, run a limited set of simulations, test physical prototypes, find the problems, and then go back around again. That still works, but it is becoming too slow and too expensive for the systems companies now want to build. Modern chips are more complex. Data centres are more power hungry. Robots need better training data. AI factories involve compute, networking, cooling, and power systems that all affect one another. The problem is that trial and error in the physical world gets painful fast when every iteration costs time, money, and risk. Cadence’s message is that the answer is not just more automation in the old process. The answer is shifting more of the real work into simulation, where design choices can be pushed harder before anything physical is deployed.
Nvidia gives Cadence the horsepower and the vision
The Nvidia side of this story matters because it shows how serious the ambition is. Cadence said the expanded collaboration combines its design software, system analysis tools, and agentic AI with Nvidia CUDA-X, AI physics tools, Omniverse libraries, and Nvidia powered infrastructure. That is not a narrow integration. It is a broad attempt to connect design software with the accelerated computing stack needed to simulate far more of the real world than older engineering flows could handle. Cadence said this can accelerate a wide range of engineering workflows by as much as 100 times, and it framed the collaboration around three big domains: semiconductors, physical AI systems, and hyperscale AI factories. What this really means is that Cadence and Nvidia are not just selling tools. They are trying to define the workflow of the next engineering era.
This is where robotics enters the picture
One of the strongest signals in the announcement is that the partnership is not stopping at semiconductors. Reuters reported that Cadence and Nvidia are specifically working together on AI for robotics by integrating Cadence physics engines with Nvidia AI models used to train robots in simulation. That matters because robot training is not just a model problem. It is a data problem and a realism problem. A robot can learn quickly in a virtual environment, but only if the environment behaves closely enough to the real world. Cadence’s role here is to provide the physics engines that model how materials and systems behave. Nvidia’s role is to provide the simulation frameworks and AI stack that can turn those environments into training grounds. The better the simulated physics, the better the training data. The better the training data, the better the chance of shrinking the gap between a robot that works in a demo and a robot that works on a factory floor or in a live environment.
Physical AI is really about trust in the real world
There has been a lot of talk over the last year about physical AI, but the phrase can sound vague until you pin it down. In this case it means AI systems that act in the physical world and therefore have to deal with heat, friction, timing, force, material behaviour, safety constraints, and unpredictable conditions. Cadence and Nvidia describe their joint physical AI workflow as a full lifecycle system that spans training orchestration, policy optimisation, validation, deployment, and ongoing refinement. The workflow includes Nvidia Isaac Sim and Isaac Lab, Cadence physics models, high fidelity scenario simulation through VTD and VTDx, and deployment on Nvidia Jetson robotics and edge AI systems. This is where things change. The goal is no longer just training a model. The goal is building a loop where a virtual twin and a physical system keep informing one another. That is a much more serious vision than a robotics demo video.
Digital twins are becoming the main event
The phrase digital twin used to sound like a specialist term from industrial engineering. Now it is becoming one of the most important ideas in AI infrastructure. Jensen Huang said at CadenceLIVE that we are reaching a point where engineering can happen in the digital world first, with full fidelity digital twins used to explore, test, and optimise ideas before they are built. Cadence is leaning hard into that worldview. The company’s official release says the partnership with Nvidia is built around digital twins not only for robotics and physical AI, but also for AI factories. This matters because digital twins change the economics of experimentation. If you can test more scenarios safely in software, you can find better designs faster, reduce wasted capital, and lower the risk of deploying systems that fail under real loads. The deeper point is that digital twins are no longer a support tool. They are starting to look like the central workbench for modern engineering.
The chip design story matters just as much
The Google Cloud side of the announcement makes it clear that Cadence is not only chasing robotics headlines. It is also pushing hard on the core business of chip design automation. Cadence said its ChipStack AI Super Agent now integrates Google’s Gemini models on Google Cloud to create an agent driven, cloud native platform for chip design and verification. The company says the platform can deliver up to 10 times productivity improvements across digital design, testbench development, verification planning, regression management, and automated debug. It also says the system is available now on Google Cloud Marketplace. What this really means is that Cadence wants agentic AI to move from a promising lab feature into a deployed product that teams can actually use in serious semiconductor workflows. That is a meaningful step, because it turns the conversation from hype about AI copilots into questions about whether end to end design automation can compress the time to tapeout.
Agentic AI is being pushed beyond one narrow task
A lot of AI tooling still works like a clever assistant that helps with one slice of a job. Cadence is aiming for something broader. Its own materials describe AgentStack as a head agent that orchestrates multiple super agents and extends beyond RTL design and verification into physical design, custom and analog design, migration, and system level workflows. The language here matters because it shows where the company thinks engineering AI is heading. Not toward one model doing everything by magic, but toward a coordinated system of specialised agents connected to real engineering tools. Cadence’s own blog frames this as a move toward autonomous coverage across the full chip design spectrum, with a common interface and shared skills across agents. In plain English, the company is trying to make AI less like autocomplete and more like an operating layer for engineering work.
The AI factory angle may be the most revealing part
One of the most interesting parts of the Nvidia announcement is not robotics or chip design. It is AI factories. Cadence says the collaboration now extends to digital twins for large scale AI factories built around Nvidia Omniverse DSX blueprints. The goal is to let customers simulate and optimise training and inference infrastructure before deployment, with a focus on a key metric called tokens per watt. That sounds technical, but the idea is simple enough. The real cost of AI is no longer just buying compute. It is getting the most useful output from power, cooling, system configuration, and infrastructure design. Cadence says a joint 10 megawatt AI factory use case showed up to 17 percent more tokens per watt by modelling reduced power operation, and roughly 32 percent more tokens per watt when that was combined with warmer coolant. Whether those exact gains hold broadly is a separate question, but the direction is clear. AI infrastructure is now being treated as something that should be simulated and optimised like any other engineered system.
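To make the tokens per watt idea concrete, here is a minimal sketch of the arithmetic. The 17 percent and 32 percent uplifts are the figures Cadence reported for its 10 megawatt use case; the baseline throughput number is a hypothetical placeholder, since the announcement does not give one.

```python
# Illustrative arithmetic only. The 17% and 32% uplifts come from the
# Cadence/Nvidia 10 MW AI factory use case described above; the baseline
# throughput is a made-up placeholder to show how the metric works.
FACILITY_POWER_WATTS = 10e6          # 10 megawatt AI factory
baseline_tokens_per_sec = 2.0e9      # hypothetical baseline throughput


def tokens_per_watt(tokens_per_sec: float, watts: float) -> float:
    """Throughput normalised by facility power draw."""
    return tokens_per_sec / watts


baseline = tokens_per_watt(baseline_tokens_per_sec, FACILITY_POWER_WATTS)

# Reported gains from the simulated design changes
reduced_power = baseline * 1.17       # ~17% from reduced power operation
with_warm_coolant = baseline * 1.32   # ~32% with warmer coolant added

print(f"baseline:          {baseline:.0f} tokens/s per watt")
print(f"reduced power:     {reduced_power:.0f} tokens/s per watt")
print(f"+ warmer coolant:  {with_warm_coolant:.0f} tokens/s per watt")
```

The point of the metric is that it folds power, cooling, and configuration choices into a single efficiency number, which is exactly the kind of quantity a digital twin can optimise before any hardware is deployed.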
This is why the Cadence story matters beyond Cadence
It would be easy to read this as a narrow story about one engineering software company making smart alliances. But that would miss the bigger pattern. Cadence is sitting at a useful intersection. It touches semiconductors, system design, simulation, cloud deployment, and now more visible parts of robotics and AI infrastructure. When a company in that position starts tying together agentic AI, physics models, digital twins, and cloud deployment, it gives a good read on where the market is heading. The problem is that many people still think of AI in separate buckets. There is chatbot AI, robotics AI, chip AI, cloud AI, and industrial AI. This announcement suggests those buckets are beginning to merge. The same simulation heavy, agent driven workflow can now touch chips, robots, edge systems, and data centre scale infrastructure. That is why this feels important. It is showing how the stack is starting to connect.
The promise is speed but the real value is fewer bad decisions
Faster design cycles make a good headline, but speed is not the only thing on offer here. In some ways, it may not even be the most important thing. Better simulation and digital twins matter because they reduce the number of bad decisions made too late in the process. A chip design flaw caught early is cheaper than one caught after physical implementation. A cooling or power problem found in a digital twin is cheaper than one found after infrastructure rollout. A robot trained and stress tested across more realistic scenarios is less likely to behave unpredictably in the real world. What this really means is that Cadence is selling confidence as much as productivity. The entire pitch depends on the idea that higher fidelity software models can reduce expensive surprises later. That is the real commercial logic under all the AI language.
There is still a gap between announcements and outcomes
None of this means the future has already arrived. Big partnerships often sound cleaner in press releases than they do in practice. Claimed speedups and productivity gains may vary widely depending on the customer, the workflow, the integration effort, and the quality of the underlying data or engineering process. Cadence says early deployments of ChipStack have shown up to 10 times productivity gains, and its Nvidia release talks about up to 100 times acceleration in engineering workflows. Those are strong signals, but they are not the same as a universal result across the whole industry. The same goes for robotics. Better simulation does not automatically solve every sim to real problem. It only improves the odds if the models, workflows, validation loops, and deployment discipline are good enough. So the sensible reading is not blind hype. It is that the direction of travel is becoming easier to see.
What changes next
The next phase of AI engineering is likely to be shaped less by isolated models and more by connected systems. Chips will be designed with more agentic help. Robots will be trained against richer simulated physics. AI factories will be planned with digital twins before billions are committed to hardware and power infrastructure. Cloud platforms will become more important because they can turn heavy AI workflows into deployable services rather than local experiments. Cadence is not doing all of this alone, but its announcements with Nvidia and Google Cloud make the direction hard to ignore. The old model was build, test, and hope. The new model is simulate, orchestrate, validate, and only then deploy. If that model holds, simulation will stop being a backstage tool and become the place where most of the real engineering work happens first.