Prakash Sangam:
Hello everyone, welcome back to another episode of Tantra’s Mantra where we go behind and beyond the tech news headlines. I am your host Prakash Sangam, founder and principal at Tantra Analyst.
In today’s episode, we will talk about an interesting company that is crucial to the semiconductor industry, but not many people know it or fully grasp its importance unless they are really involved in the chip design ecosystem. I’m talking about Cadence here, the leader in EDA, that is, Electronic Design Automation tools.
We all know about chip designers such as NVIDIA, Qualcomm, Intel, and others, and we also know about foundries, or fabs as they’re called: TSMC, Samsung, Intel Foundry, and so on. Now, these guys manufacture the chips that are designed by the chip designers, right?
So Cadence sits in between them, providing software and hardware tools, as well as IP, for chip designers to design extremely complex chips and, most importantly, to verify those designs through simulation and emulation to identify problems and anomalies well before the expensive fabbing process. If you’re a chip designer, more likely than not you are using Cadence EDA tools.
The use of these tools goes beyond just chip design into 3D simulation and others. Why are we talking about Cadence today? Well, in today’s AI frenzy world, chips are the lifeline and demand for faster, more complex chips is growing exponentially. That means design cycles are much shorter now, and designs have to be robust, and failure, trial and error is not an option.
EDA tools like the ones Cadence provides can make or break your chip design. Cadence has been working with many chip designers, but most importantly with the AI powerhouse NVIDIA. They also had a few announcements with NVIDIA during the recently concluded Supercomputing 2024 event.
So to talk about these announcements, Cadence’s strategy, and the larger EDA ecosystem, I have with me Rob Knoth. Rob is the Group Director of Strategy and New Ventures at Cadence.
Rob, welcome to the show.
Rob Knoth:
Thank you very much, Prakash. It’s an honor to be here.
Prakash Sangam:
Very well. So let’s get started. So could you give us a quick background of your career and your roles and responsibilities at Cadence?
Rob Knoth:
Yeah, definitely. So I’ve been in the semiconductor business now for close to 30 years. You know, started as a college intern many years ago now. You know, and throughout my career, I’ve gone between both the design of semiconductors, as well as the EDA industry, like you mentioned, electronic design automation.
I’ve been very fortunate to kind of live in both of those worlds and sort of understand the perspective from one side of the fence to the other. Here at Cadence right now, like you mentioned, I sit in the strategy and new ventures group. And so here we’re really looking at, you know, Cadence produces many different solutions that help engineering design and scientific discovery.
Where our group plays a key role is understanding how each of those individual solutions work together and how they address more pressing needs and concerns in the industries. Things like automotive design, data center design and operation, semiconductor design, even drug discovery and molecular simulation.
Prakash Sangam:
We know a lot about Cadence from the semiconductor industry, but you talked about others as well.
What are the specific industries you are focusing on with your 3D modeling and other tools?
Rob Knoth:
Yeah, semiconductors are still the lifeblood of what we do at Cadence. That’s been true since the company was founded. What is really interesting, and really fun, actually, is that if you look at where the industry has gone, especially with AI, semiconductors are crucial.
They are the heart of these systems, but they are not alone in the system. And throughout the history of semiconductor design, we’ve benefited from certain technologies, certain habits, certain principles of exhaustive simulation prior to manufacturing. So that way things do work the first time, exactly like you mentioned in the opening.
We now see that that same methodology, that use of high-accuracy, physics-based simulation and optimization, can extend into other industries and produce some tremendous results. Automotive and aerospace are examples, along with data center design and operation, and now also molecular simulation and some of the science fields. The same basic principles that have made semiconductors successful are scaling into and transforming these other industries.
Prakash Sangam:
Yeah, that’s interesting. I mean, we will try to focus this more on the semiconductor for this discussion, but I think those are the areas I think we should definitely explore in the next one, maybe.
Rob Knoth:
Yeah, sure.
Prakash Sangam:
So, we talked about AI, right? I mean, the tech industry itself is extremely fast-paced, but the introduction of AI has accelerated it even further, right? It’s been a breakneck speed, actually. So, from an EDA tools perspective, how does that obviously create opportunities for you, and also bring challenges? Any view on that?
Rob Knoth:
Definitely. You know, at GTC back in March, when Jensen gave his keynote, he illustrated this perfectly: the way he views it, the basic unit of compute these days is the data center, right? And when you scale the basic unit of compute from, I’m just going to look at this one piece of silicon, to an advanced package, to a PCB, to a rack, et cetera, all the way up to the data center itself.
A whole host of new challenges are presented. You have to suddenly deal with new mechanisms, new phenomena, new disciplines, new engineers driving the tools themselves. And that presents a massive amount of challenges and, like you said, opportunities. You know, the way that we really see this, AI is unrolling in basically three phases.
And this isn’t unique to AI. We’ve seen this with the Internet, right? We’ve seen this with all the major technologies that have come through. First, you have to build the infrastructure, right?
And so, you know, right now, this is really focused on, you know, the core work that we’ve been doing, you know, in EDA and simulation and analysis for many years now, using software, IP, hardware solutions to build the chips, packages, data centers, etc. that are, you know, producing the AI models.
Now, the second phase that happens is that we’re using those AI models to transform our existing solutions, to transform the things that we’ve been doing for many years and do them better: deliver a higher-quality end product, be more productive, easier to use. An example there is, you know, we’re using our own AI tools to produce lower-power silicon that we put into our own hardware verification solutions.
Using our Cerebrus design tool, we lowered the power consumption of the core SoC inside our hardware solution by 15 percent. We lowered a core multiply-accumulate IP block that we sell to customers by 25 percent. So we’re using AI to make our own tools better.
Now, those products, right? They go to make the next generation of AI factories. And so the cycle just keeps repeating. But then the third phase and where the big opportunities are out there, are really these new things that were never possible before, the new markets that are out there. And this is really when we move in to more of a physics and science based approach. Data centers, autonomy, physical AI like cars, drones, robots, etc. And sciences themselves.
And really, when we get into that realm of science, this is where it gets extra fascinating for me. Because if you read a lot of publications, you’ll see things where there’s concern over running out of data for AI. Well, in the world that we’ve lived in and that we focus on this world of physics and sciences, we have an inexhaustible supply of high quality data that’s out there.
And so we’re not just scraping the Internet to write funny limericks. We’re generating massive amounts of high-quality thermal data, high-quality electronics data, etc., to train algorithms to discover new and novel solutions.
Prakash Sangam:
Basically, tools to collect that data and make sense out of it and then use it in simulation and emulation, right?
Rob Knoth:
Correct.
Prakash Sangam:
Yeah, it’s an eating-your-own-dog-food kind of thing across the industry. There’s so much of a symbiotic relationship, right? You help other companies or other industries, and at the same time you take the output from them and use it in your own processes to improve them, right?
Rob Knoth:
Exactly.
Prakash Sangam:
That’s interesting. Then that extends to your relationship with Nvidia too, right? I mean, a lot of your tools run on Nvidia’s GPUs, but at the same time, probably they’re using your tools to make their own chip designs, right?
Rob Knoth:
Correct. And this sort of a partnership, this sort of symbiotic relationship is critical throughout the history of semiconductor design and operation. And the partnership we have with Nvidia is a really unique and special and truthfully inspirational one. I get a lot of excitement working with the team at Nvidia and vice versa.
And so the collaboration we have with them is really full-stack. On the AI side, we’re using our agentic AI solutions like Cerebrus to help with their GPU design, lowering power on the next generations of GPUs, etc. We’re using NVIDIA NeMo Retriever for RAG with our solutions, ChipNeMo, etc., the NVIDIA Modulus product for handling physics, and NVIDIA BioNeMo for drug discovery and molecular simulation. The second big layer of the stack is physics-based simulation and optimization.
And this is really where we deploy tools across the spectrum, from semiconductor design to PCB design to full multi-physics.
An example there is one we talked about in the announcement with them, where we have our Cadence Reality digital twin platform that has a really tight integration with Omniverse for this real-time digital twinning. And then lastly, the foundation of that whole stack is accelerated compute.
And you said it well: we’re using NVIDIA hardware to power our R&D, and NVIDIA is using our hardware, like Palladium, to design and verify their next generation of products.
Prakash Sangam:
And it’s fascinating that AI is not just one technology, right? It’s going across the board, and it will probably transform almost every industry. So wherever AI goes, the simulation and the technology foundation of that AI go as well, right?
So that basically expands the realm of possibility for this relationship, and also for your business and your tools and so on, right? I think that’s really fascinating. Again, we don’t have time to go into that in detail today.
So, talking about the announcement you mentioned with NVIDIA at the Supercomputing show: your tools running on NVIDIA Omniverse Blueprint, as they call it.
What are some of the industries and specific use cases that you’re targeting and enabling through this collaboration, that were basically made possible through the announcement?
Rob Knoth:
Yeah, the Omniverse Blueprint for real-time CAE digital twins is a fantastic plan, and I really love how adaptable it is. Because if you look at the reality of EDA and simulation analysis software, one size does not fit all. It is really important to have something that’s flexible and adaptable, because each solution is going to require different pieces and different modularity.
And that’s where this blueprint really excels and sings. The first product that we’re going to be adopting this into is our Cadence Reality Digital Twin Platform. And so people are using that to design and operate data centers more efficiently. And that’s, you know, one of the most fundamental building blocks of, you know, what’s going on to enable AI in the world around us.
You can’t open a newspaper without reading another article about, you know, a new data center being built or concerns over power consumption, etc. This ability to operate a real-time digital twin of the most fundamental component of AI is critical for us to scale out and deliver AI in an environmentally sustainable way.
And what we see here is that we can use this technology to let people get quicker, better insights, so that not only can they design a more efficient data center, but, in many cases, there’s a tremendous number of data centers out there that need to be upgraded, right?
And so rather than having to scrap the old one and build a new one, being able to understand how you can utilize an existing data center more efficiently as you swap out general-purpose CPUs for high-performance GPUs, that’s exactly what this kind of software is useful for and what it’s being delivered with.
Now, the announcement we made was about adopting that blueprint, but really it goes even beyond that. We showed a lot of results at Supercomputing on how we’re leveraging NVIDIA hardware, with Grace Hopper delivering amazing results, and some new work showing where Grace Blackwell goes with that across our stack.
You know, everything from digital design to custom and analog simulation, debug and verification, 3DIC and system design, multi-physics, everything. You know, like the partnership is very rich and we’re exploring Nvidia hardware across the stack. Now, the other part of the announcement that we made, which was a little smaller on this, in order to power these kind of real-time digital twins, you need high-quality models of all the parts.
And so Cadence joined the Alliance for OpenUSD. This is an organization across many different industries. You know, Nvidia is part of it. Pixar, Disney, Trimble, you name it. There’s many companies across the industries here. Cadence joined this to help proliferate the use of these digital twins in our industries, in the electronic design area.
Prakash Sangam:
Yeah, very well. So, double-clicking on the specific data center use cases that you mentioned: basically it will be able to simulate and emulate the cooling system needed, which is one of the key issues nowadays, right? And then how power needs can be optimized, because that’s the biggest cost for data center vendors.
And in terms of placement and all, can you give us a little bit more detail on if I’m a data center owner, how can I use the latest tools with Blueprint to increase efficiency in my existing data centers?
Rob Knoth:
Yeah, happy to. It really breaks down into two main phases: one is the design of a data center, and the other is the operation of the data center. The design space is very easy to conceptualize. Everyone’s used to using things like AutoCAD, etc., if you’re going to plan out and build something new that’s physical.
Well, the key here is being able to look at it more as a living thing rather than just as a static drawing. And so in that design phase, being able to actually visualize a worker walking through the plant, being able to use a camera, essentially a first-person view, to check ergonomics, etc.
But most importantly, and you touched on this, is that cooling and heat distribution to make sure that you can operate the design, the entire data center as high of capacity and utilization with as low risk as possible. And that’s where the real-time digital twins have the biggest benefit.
Because the faster and more accurate the simulations are, the more a designer can play with different options, sliding different racks over into different places, different configurations, different cooling and airflow simulations, liquid cooling, etc.
And so in that design phase, being able to more robustly and more confidently ensure that what is going to be built is going to be low-risk and high-efficiency. Now, the second part of it comes in the operation, and this is maybe a little less sexy, but in many ways even more important.
Data centers are not static. Data centers are constantly being upgraded, changed, things swapped out, etc. And so having real-time metrics of what is going on in the data center, being able to get ahead of equipment failures, being able to more nimbly and effectively react to changes in requirements, changes in equipment, new products coming out that you want to retrofit in.
You know, if new computing horsepower is showing up every year, you’re constantly in that state of design and operation. And that’s where the digital twin makes the biggest benefit.
And what we announced with Nvidia helps turbocharge that and improve the accuracy.
Prakash Sangam:
Very well. So that’s another level of simulation, simulation at the data center level. Now let’s go to the other side of the ecosystem, or equation if you will, to the lowest level: the chips themselves that run all of these things. As we all know, being in the semiconductor industry, the performance of a chip is limited by its thermals, right?
If you can manage the thermal better, then you can amp the performance up by that much, right?
So thermal management is one of the key things in chip design right now, especially with billions and billions of transistors in two- and three-nanometer designs. So how are Cadence tools helping chip designers there?
Rob Knoth:
If you look at what’s happening with AI, there’s a massive explosion in the memory requirements for these chips, for both training as well as inference. And so then the big question is, well, what does that do to chip design? And what we’re seeing is there is a massive scale out and proliferation of 3DIC technology to allow these more advanced high-bandwidth memories to live on and close to the same package as the semiconductor processing unit.
And so with 3DIC, one of the most challenging things is exactly what you pointed out, which is thermal. And we are tremendously fortunate to have access to the entire stack of tools that you need to do that kind of analysis and optimization. So we’ve got the tools that you’re going to be using to design those high-performance digital semiconductor chips.
We’ve got the tools needed to design the package, where you’re going to have the memory and the processor located in the same package. We have the tools that are needed to route in between them. And we have the tools that are needed to simulate and optimize and design the PCBs that they go on.
And so that way you’re able to really, full-flow, have visibility into the thermal situation of the entire system, including things like thermally induced stress and warpage. You’re able to see the impacts of that all the way down to the transistor level and all the way back up to the package. And so this way you’re much more effective at co-optimizing and getting ahead of those performance-limiting features. You hit the nail on the head. Having a good grasp of thermals is absolutely critical to getting a highly effective, high-performance semiconductor system to market and working the first time.
Prakash Sangam:
Yeah, and being in the industry long enough: when you’re designing, you always try to keep margin to make sure that when your designs are converted to products, there are no anomalies.
The more accurate simulation and emulation you have done on all aspects, from physics to electronics and everything, the lower that margin can be, which means you are hitting the highest performance points of these chips. So I cannot stress enough the importance of accurate emulation and simulation for a chip designer.
Rob Knoth:
That’s very true. The cost of a respin on a chip is tremendous, not just in terms of dollars, but more in terms of time and resources. You will miss the market window, and the engineers who were supposed to be working on the next-generation product are busy working on fixes for the one that should already have been out there. So high accuracy, that is the lifeblood of everything that we do.
Prakash Sangam:
Going a little bit beyond the semis: earlier this year you announced a CFD, that is, Computational Fluid Dynamics, supercomputer, I think you call it Millennium M1, for the auto industry. Tell me about that.
It kind of extends your knowledge base in simulation and emulation beyond semiconductors, right?
Rob Knoth:
Yeah. Millennium is a really awesome product, and it’s really beautiful to watch how the same strategies that we’ve been using to tackle semiconductors with the hardware verification solutions, extending them into new domains. GPUs have been used for CFD, Computational Fluid Dynamics for a long time.
It scales incredibly well to many, many, many cores. And so the two are really made for each other.
Now, what makes Millennium unique is the fact that it really is this hardware software co-optimization.
And so, Nvidia GPUs are fundamental to what we’re doing with Millennium, and being able to massively accelerate the speed of the simulation also allows you to massively accelerate the capacity, the size of the system you’re simulating without losing accuracy.
And so, you’re right that automotive is a great place for this. Honda is one of the companies deploying Millennium to do aeroacoustic simulations, meshing, etc., to work in automotive, but also in eVTOL, so looking at aircraft design.
And this is really where Millennium is fascinating: it’s useful for automotive, it’s useful for aerospace, it’s useful for many industries, right? Because this sort of science isn’t relegated to just one lane. This sort of extension into the real world goes across the spectrum.
Prakash Sangam:
Anything that moves can benefit from it, right? So basically, you’re saying, before you take anything to a wind tunnel to understand its fluid dynamics characteristics, you can basically simulate it much cheaper on your computers and then do the iterations much faster than otherwise, right?
Rob Knoth:
Yep, exactly, exactly. And there was a great Forbes article put out at the time of launch that really covers how Millennium fits into the ecosystem and really how it plays into some new markets and disrupts some existing.
Prakash Sangam:
Cool. And you mentioned Honda is using it. Anybody else that you can talk about that are using Millennium M1?
Rob Knoth:
So there are several customers using it; unfortunately, I’m not at liberty to disclose them. We do have one really exciting deployment that’s actually in an ITAR cloud-based environment. So it’s really cool to see how Millennium moves everywhere from consumer automotive areas to some other high-security, high-accuracy areas. There are a lot of opportunities in flight here, and we’re really excited as this rolls out.
Prakash Sangam:
And obviously this was in collaboration with NVIDIA. I’m assuming the collaboration on M1 goes beyond just using the GPUs. Anything specific that you’d like to share on that?
Rob Knoth:
So the GPUs are definitely the heart and soul of the platform. But beyond just Millennium, we make hardware verification, or Palladium, boxes for emulation, and our Protium boxes that extend that to help bring up software.
Those platforms extensively use NVIDIA networking solutions as well: the NVIDIA BlueField DPU and NVIDIA Quantum InfiniBand. The importance of networking is so easy to overlook, because we’re so trained to focus on the engines of compute.
But networking is a non-trivial problem, and doing networking right allows those engines to be so much more effective and actually do the work they’re designed for.
Prakash Sangam:
You can talk about this for hours, right?
Rob Knoth:
Yeah, it’s a very rich topic.
Prakash Sangam:
So let me ask you the last question. Sitting here today, what’s your view of the AI and chip industry, specifically from the perspective of EDA tools’ role, for, say, the next five years? How will it evolve and change?
Rob Knoth:
Yeah, to be pretty succinct on this, I think we’ll see AI play a really key role here, in terms of the same progression that EDA has been going through for many years. EDA is constantly dealing with how we abstract away details that aren’t important, so that we can constantly allow designers to operate at a larger and larger scale, more transistors per person, effectively.
I think we will see AI allow that to make the next leap up in terms of productivity but also quality of results. It’s not just about doing things easier, it’s about if you can get to a higher quality result, you’re going to work less or you’re going to design something better. And so I think that that will be a real key role.
It will also play a key role in terms of breaking down barriers. It will allow more people with creative, beautiful ideas to do semiconductor design, because it will lower the barrier of entry to getting your hands on this. And I think that kind of democratization will be very important.
What we really see is that it will play out over a sort of three-horizon model, where horizon one is the stuff that’s going to happen in the next one to three years. I think we’ll see a tremendous amount of growth in infrastructure AI, all the things to build the data centers that are out there.
The next big horizon, the horizon-two sort of stuff, the two-to-seven-year phase, is really where we’ll start to see a lot of injection in terms of physical AI, or moving into the real world. It’s not just operating in data centers or on your phone, but in the things that are moving in our world.
I’m very excited to see this happen with things like robotics. I think that’s a tremendous growth area for us in the EDA space, to capture the physics involved in robotics, as well as just what that will do for our lives. But then really in horizon three, getting out to that five-to-10-year horizon: the sciences. There is a wealth of opportunity out there, everything from astrophysics to quantum dynamics to life sciences and climate research.
There is a tremendous amount of data out there and opportunity when we get to the sciences. And EDA will continue to grow and expand as we move into each of those three areas.
Prakash Sangam:
Very well. That was a fascinating discussion, Rob. Thank you very much for all of your insights.
Rob Knoth:
Thanks, Prakash. It was fantastic to be part of the show.
Prakash Sangam:
Well, thank you again for coming on to Tantra’s Mantra.
So, folks, that’s all we have for now. I hope you found this discussion informative and useful. If so, please hit like and subscribe to the podcast on whatever platform you are listening to this on.
I’ll be back soon with another episode putting light on another interesting tech subject.
Bye bye for now.