Prakash Sangam:
Hello everyone, welcome back to another episode of Tantra’s Mantra where we go behind and beyond the tech news headlines. I’m your host Prakash Sangam, founder and principal at Tantra Analyst.
Today, for the first time, we are talking about AMD on this show. No, we are not talking about Copilot Plus PCs; we have done enough of that in the past. We're talking about an even more interesting and more technical subject, but a little bit less hyped, I would say.
And it's about FPGAs. The FPGA market has gone through a lot of changes in the last few years, primarily because of acquisitions. FPGAs are widely used in many use cases across industries, including IoT, specifically industrial IoT, cloud, data center, automotive, telecom, you name it.
As many of you might know or remember, only a few years ago the market was dominated by two players, Altera and Xilinx. Then in 2015, Intel acquired Altera, and following that, in 2022, AMD acquired Xilinx. So the Altera-versus-Xilinx rivalry became part of the legendary Intel-versus-AMD rivalry.
This also gave an opportunity for smaller players like Lattice Semiconductor to thrive in specific niche market opportunities and so on.
Anyway, today we'll focus more on AMD's acquisition of Xilinx: how that has fared, how the collaboration is coming along, and of course the cross-leverage there as well, plus the challenges and opportunities and so on. To discuss all that, we are privileged to have Dr. Salil Raje with us today. He is the SVP and GM of the Adaptive and Embedded Computing Group at AMD and a Xilinx veteran. Salil, welcome to the show.
Salil Raje:
Thank you, Prakash. Thank you for the introduction.
Prakash Sangam:
Very well. As I mentioned, you are a Xilinx veteran and spent quite a few years there, right? Could you tell us about your journey at Xilinx and now at AMD?
Salil Raje:
Yeah. So I have been at Xilinx, and now AMD, for about 20 years, and I joined Xilinx through an acquisition of my own company. I was running an FPGA EDA company before I joined Xilinx, and Xilinx acquired us. At Xilinx, for many years I was focused on software engineering, in the software and IP development group.

And then I started to focus on the data center market, trying to get Xilinx FPGAs into data center applications and acceleration. More recently, I started to manage all of the Xilinx business and engineering, and also the x86 embedded business. So the Adaptive and Embedded Computing Group is basically the combination of Xilinx as well as the x86 embedded CPUs.
Prakash Sangam:
Okay. So as you mentioned, AECG, the Adaptive and Embedded Computing Group, is fairly new to AMD, right? Was it formed after the Xilinx acquisition, or was it there before and Xilinx was folded into it?
Salil Raje:
It is a fairly new group. It was formed at the close of the acquisition in 2022. It's a container for Xilinx and embedded x86.
Prakash Sangam:
Okay. And what is the charter of the group? What markets does it serve?
Salil Raje:
Yeah. So the main mission of our group is to be the leader in adaptive and embedded computing, empowering our customers and making sure that they can deliver differentiated solutions themselves. We are in about 10 different markets: automotive, aerospace and defense, industrial vision, healthcare, all the way to wired and wireless communications.
We handle about 30 different applications. Within these markets, we have a pretty sizable reach with customers. I would say between direct and indirect customers that we work with, we have about 7,000 customers. And many of them we have very deep relationships with. You will find our products everywhere.
You will find our products in the cloud, you'll find them at the edge, you'll find them at endpoints and everything in between. You'll find FPGAs and Adaptive SoCs, the latest generation of our products. You'll find them in space, you'll find them in robotic arms, you'll find them doing ADAS applications in automotive.
So it’s a pretty broad range of applications that we work with.
Prakash Sangam:
Indeed. And talking about this change in market dynamics, with Intel taking Altera and you going into AMD, how has the competitive situation changed? Is it the same rivalry as before, just now playing out as Intel versus AMD, or has it been different in the last few years since the acquisition?
Salil Raje:
Yeah. So after Intel acquired Altera in 2015, like you said, we have been expanding our market share quite a bit. Back then we were roughly neck and neck with Altera in terms of market share, and over these last few years we have gained significant market share against Altera and other FPGA vendors.
So we are by far the leader now in the FPGA business. And we have introduced products that are not mere FPGAs. We used to mostly do programmable logic, and now our products are complex SoCs. They have ARM-based processing subsystems, we have hardened video cores, and we have RF integrated within our SoCs. So we now call them Adaptive SoCs. With that transformation, our market has expanded; the SAM has expanded to go beyond just the pure PLD market that we used to reside in.
And in those new markets, we really compete with other semiconductor companies, be it, let's say, NXP, TI, Marvell, and other standard semiconductor companies, right? And we don't really see competition from Altera in those markets with Adaptive SoCs. So we feel pretty good about the transformation that we have made over these last few years.
Prakash Sangam:
Cool. So as you mentioned, I think one of the key objectives of the acquisition itself was to cross-leverage the portfolios, right? Getting more compute into Xilinx products and more FPGAs into AMD SoCs, basically making more SoCs rather than just discrete chips. So it looks like that's been going pretty well.
Salil Raje:
Yes, absolutely. We have a tremendous portfolio now at AMD, right? I would say we are one of the only companies to have this broad a portfolio of products and IP. We have CPUs, we have GPUs, we have network processing units, we have the AI Engines and the neural processing units, and we have programmable logic, of course. We have ARM processors as well as x86 processors.
And with that broad portfolio, we can address a lot of our customers' challenges. We are the only ones who can bring all of these IPs together, either into a single device or as multiple products sitting on the same board, solving our customers' challenges. So yeah, it's extremely exciting for us, and our customers are drawing us in. There have been conversations where customers have said they want somebody of our scale with that level of portfolio. The problems are getting harder, and we are one of the few who can solve these problems for our customers.
Prakash Sangam:
Cool. So one of the statements AMD made during the acquisition was that it would keep the Xilinx portfolio intact and grow it. How has that been? Was there any culling of product lines, or is it mostly continuing, complemented with the processors and made into SoCs?
Salil Raje:
If anything, the product portfolio is stronger after the acquisition. Lisa Su, our CEO, has done a phenomenal job in integrating Xilinx into AMD. Unlike many other acquisitions, where you embrace the acquired company to the point where you crush it, what Lisa has done is keep us fairly autonomous, while at the same time leveraging the product portfolio and the development synergies wherever we can.
If I look at the product roadmap now versus what we had just before the acquisition, it's much stronger. We have a broader range of products on the roadmap. We have low-end products; we just introduced Spartan UltraScale+ FPGAs to go after the very low end of our portfolio. We have introduced some mid-range products. We have also launched AI-based products, and Versal Gen 2 products have been launched as well in the last couple of years.

We have a very broad spectrum of products now, going from the very low end to the very high end, and we have included AI in our portfolio as well. If anything, the portfolio is very well positioned now across all the different markets that we sell into.
Prakash Sangam:
Cool. So, about integrating and cross-leveraging: Xilinx has been an ARM shop, using ARM CPU cores, whereas AMD is obviously x86. How challenging was integrating these on an SoC? What are some of the challenges you faced, and how did you solve them?
Salil Raje:
So, Prakash, we don't consider ourselves either an ARM or an x86 house, even though AMD has made its revenue for many years with x86 processors. We think of ourselves as a high-performance computing company. We use the right instruction set, the right ISA, for the right markets and applications. Yes, Xilinx has typically used ARM processors within our SoCs.
That is because the applications and markets we serve demand ARM processors, and the software application stacks that people need are mostly based on ARM. So we continue to serve the ARM market. But at the same time, in the embedded business itself, we have applications where we need x86.
So if you think about infotainment, in-car entertainment, people want x86 because it gives them performance and headroom for doing, say, AAA gaming. And we do enable that. We have products that combine x86 chips with our Adaptive SoC chips, and we can do end-to-end applications in these markets.
In autonomous vehicles, we use both x86 and ARM-based SoCs, and for security appliances and wired communications, a lot of times we see x86 and our Adaptive SoCs working together. So there are many applications where we can combine CPUs of different types. We also focus on RISC-V, by the way, so it's not just x86 and ARM.

We just introduced MicroBlaze V, which is a soft-core RISC-V-based processor that sits in our programmable logic. For customers and applications that want a really low-end RISC-V processor, we service those as well. So we are pretty agnostic in terms of ISA. Our goal is to serve the market and applications with the right processor that the market needs.
Prakash Sangam:
Okay. So, interesting comment on RISC-V. How is that coming along? I mean, it was kind of experimental a few years ago, but this feud between Qualcomm and ARM seems to have given it a lot of energy, almost a shot in the arm, right?
Salil Raje:
Yeah. RISC-V is definitely progressing well. It is still in the lower end of the market; RISC-V doesn't come with as much of a software stack as ARM does, and ARM definitely has a long history. But we are starting to see traction for RISC-V. We see traction in the China market, and for some applications where you don't need as much horsepower, open-source RISC-V can work.
Prakash Sangam:
Also for geopolitical reasons, China has been the main area with a lot of traction; China is definitely a big target for RISC-V. In terms of integration, how effective have chiplets been?
Do you have chiplet-based versions of your products that work with AMD x86 or other processors?
Salil Raje:
Yeah, chiplets are fundamental to how we design our products at AMD. We introduced our CPUs with chiplets long before anybody else jumped on the bandwagon. There is a lot of customer interest in chiplets; our MI300A and MI300X products are fundamentally based on chiplets, and chiplet technology enables them to deliver the kind of performance those applications demand.
In addition, the synergy between Xilinx and AMD has been phenomenal on this front as well. Xilinx also has a long history with a chiplet-based approach. You may have heard of SSIT, Stacked Silicon Interconnect Technology, which allowed Xilinx to create the largest FPGAs in the world; we have held that title for many generations.

That is thanks to SSIT: we have slices of FPGAs all stitched together. We use it in many applications; wired applications, emulation and prototyping applications, and some test and measurement applications use this SSIT technology. We also carry that forward with Versal Premium, which uses HBM. So we connect HBM to our Versal products, and that allows us to have very large memory bandwidth. We can serve compute acceleration, for example, through that technology. Chiplets have been very good to AMD. Through chiplets we can also drive a lot of IP leverage across the company; you can imagine we can connect up multiple IPs that the company has.
Through chiplets, we increase our compute capability beyond what is possible with monolithic devices. And on top of that, we are now also focused on taking our customers' IP and connecting it into our devices. That allows us to partner with our customers and bring their own IP into our portfolio.
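To make the memory-bandwidth point concrete, here is a rough back-of-the-envelope calculation of peak HBM bandwidth. The stack count, bus width, and transfer rate below are illustrative assumptions chosen only for the arithmetic; they are not specifications of any Versal Premium part.

```python
# Rough peak-bandwidth arithmetic for an HBM configuration.
# All figures below are illustrative assumptions, not AMD specifications.

hbm_stacks = 2            # assumed number of HBM stacks on the package
bus_width_bits = 1024     # bits per stack (typical of HBM-class interfaces)
transfer_rate_gtps = 3.2  # giga-transfers per second per pin (assumed)

bytes_per_transfer = bus_width_bits / 8
peak_gbps_per_stack = bytes_per_transfer * transfer_rate_gtps  # GB/s per stack
peak_gbps_total = peak_gbps_per_stack * hbm_stacks

print(f"Per-stack peak bandwidth: {peak_gbps_per_stack:.1f} GB/s")
print(f"Total peak bandwidth:     {peak_gbps_total:.1f} GB/s")
# With these assumed numbers: 409.6 GB/s per stack, 819.2 GB/s total.
```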
Prakash Sangam:
Yeah, true. So for integrating multiple technologies and multiple partners, chiplets have been a real key development.
Salil Raje:
I think that’s right.
Prakash Sangam:
So, talking about edge compute, which you mentioned a little bit in terms of Versal and so on, that's where your latest Versal Gen 2 announcements were as well. What are some of the challenges you're seeing for edge compute platforms, especially in embedded systems?
Salil Raje:
Yeah. So I guess before I talk about the challenges, let me talk about some of the positives.
Prakash Sangam:
Yeah, sure. Edge as such is the next frontier, right? There is a realization that a lot of things will happen at the edge. It's a large opportunity, but glad to hear your perspective on it for sure.
Salil Raje:
Yeah. So absolutely, edge is the next frontier. Cloud has grown significantly, but a lot of applications will move towards the edge. It has many positives, right? It brings compute power closer to the source of data. Some applications you can't really do in the cloud; you need low latency and real-time responsiveness.
For that reason, you need to keep the compute closer to where the data resides. Also, when you do the compute at the edge, you lower the bandwidth you need to move data between cloud and edge. And the most important thing these days has to do with data security: people do not want to move that data from the edge and endpoint applications to the cloud.
So you need to keep the compute closer to the data to make sure that the data is secure at the edge. For a lot of these reasons, edge compute is becoming very popular. AI at the edge is also the next frontier. So you can imagine a lot of these applications are starting to move into AI-based applications.
So that's another shift that is happening in the market. But as you said, there are many challenges as well. You have to be very savvy about how you do some of these edge applications. Some of the challenges have to do with power consumption: at the edge, power is limited.

You have to worry about the thermal envelope and make sure that you do the compute within the right power and thermal envelope.

So you have to be very efficient; compute efficiency is paramount for our products there. Then there is real-time processing. You have to have low-latency, real-time capabilities. If you can solve that, it's a big positive, but it's also a big challenge to do it under the power constraints. Scalability is another one. You can't just have a single product in a single market.
You need to be able to produce products that cut across different ranges of markets. You may need a low-end product for the cost-sensitive part of the market, or you may want a larger-compute product for the higher end of the market. The reason scalability is important is that you want to preserve the software stack.

You don't want to have different software stacks and redo all of those applications as you move from the lower end of the market to the higher end. Security and privacy are a positive, but also a challenge: how do you keep the data local, how do you keep it secure, and how do you make sure there are enough mechanisms in there so that people cannot tamper with the data or steal it? That's another challenge.
There are cost constraints; some of these edge applications are very cost sensitive. And a lot of these edge applications are embedded applications, so they have longevity requirements. If you're sending something into space, it needs to be able to live for 20 to 30 years sometimes, and it needs to live in environments that are pretty harsh. You may have a cell phone tower close to Antarctica.
Prakash Sangam:
Or the desert in Sahara.
Salil Raje:
Or the desert in the Sahara, exactly. So you need temperature ranges such that your products can survive all that. Those are some of the big constraints and challenges at the edge.

Prakash Sangam:
So how is your latest Versal Gen 2 solving them?

Salil Raje:
Versal Gen 2 products are very complex SoCs, and they pack a lot of processing power into a very small area.
AMD's foundation is in heterogeneous computing, and we have many different heterogeneous computing elements within the Versal Gen 2 products. They are geared towards AI workloads as well. These AI workloads have different stages of processing. It's not just about AI inference; there is also pre-processing that handles data coming from cameras, radars, and lidars.
And there is AI inference, of course, which is where we deploy pre-trained AI models. And then you have the post-processing stage where you are making decisions. You’re either moving a robotic arm or applying brakes. So Versal Gen 2 products handle all of these stages, the end-to-end application, pre-processing, inference, post-processing, all on a single device.
So we have a whole host of processing capabilities within this Gen 2 product. We have CPUs, ARM-based CPUs; we have graphics processors, GPUs; we have programmable logic; and we have AI Engines, the neural processing units that do AI inference. We have many video processing subsystems that are hardened.
We have ISPs that are hardened. So what we can do is apply the right processing subsystem to the right part of the application, and the entire end-to-end application can be accelerated with the Gen 2 products. We are also scalable. We have the smallest Versal Gen 2 products that can go after robotics, industrial robots or even consumer robots, all the way to very large ADAS applications. So it spans the spectrum of different markets, and you can scale our products up and down with the same software stack. So we are very scalable as well. I would say the way we handle these applications rests on really three pillars: we have heterogeneous computing within our Versal Gen 2 products, we are very scalable, and we are adaptable.
A lot of these engines that I talked about are very adaptable. So as newer generations of compute applications and newer AI models get created, the same device can adapt to those newer innovations. So there are a lot of interesting things packed into Versal Gen 2.
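As a rough illustration of the pre-processing, inference, and post-processing split described above, here is a minimal Python sketch of an edge pipeline in which each stage is notionally assigned to a different compute engine. The engine labels and functions are hypothetical placeholders for the sake of the example, not AMD's actual SDK or runtime API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical engine labels standing in for the heterogeneous blocks
# discussed above (hardened ISP/video, AI Engines/NPU, CPU cores).
PREPROCESS_ENGINE = "isp"   # e.g. a hardened image signal processor
INFERENCE_ENGINE = "npu"    # e.g. the AI Engine array / NPU
POSTPROCESS_ENGINE = "cpu"  # e.g. ARM application cores

@dataclass
class Frame:
    pixels: List[float]

def preprocess(frame: Frame) -> List[float]:
    # Normalize raw sensor data before inference (placeholder logic).
    peak = max(frame.pixels) or 1.0
    return [p / peak for p in frame.pixels]

def infer(features: List[float]) -> float:
    # Stand-in for a pre-trained model producing a single score.
    return sum(features) / len(features)

def postprocess(score: float) -> str:
    # Turn the model output into an action, e.g. brake or continue.
    return "apply_brakes" if score > 0.8 else "continue"

def run_pipeline(frame: Frame) -> str:
    # In a real deployment each call would be dispatched to the engine
    # named above; here the mapping is only documented in comments.
    features = preprocess(frame)  # would run on PREPROCESS_ENGINE
    score = infer(features)       # would run on INFERENCE_ENGINE
    return postprocess(score)     # would run on POSTPROCESS_ENGINE

if __name__ == "__main__":
    print(run_pipeline(Frame(pixels=[0.9, 0.95, 1.0, 0.85])))
```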
Prakash Sangam:
Okay, cool. So when you announced it, you talked about Subaru adopting it. It's been a couple of months now since the announcement. How is the traction? Anything you can share publicly in terms of customers? What kinds of use cases and applications are you seeing?
Salil Raje:
Yes, absolutely. I mean, it's still some time before we actually take the product to production, but we opened up the product for early access, and the traction has been phenomenal since the Subaru announcement. That was the more public announcement, but we have many other engagements with a whole host of customers right now.

The pipeline is very strong, and we are working with these customers to deploy their applications and their models with our software. We are working with automotive OEMs and Tier 1s already. We're also working on real-time video applications and smart camera applications. The number of customers engaged with us is growing every day.
Prakash Sangam:
Cool. Looking forward to seeing more of that become public. So, as I said, it's sampling in the first half of next year and GA towards the end of next year, right?
Salil Raje:
Yeah. I think GA in late 2025, that’s exactly right.
Prakash Sangam:
Okay. Isn't it very early? I mean, maybe it makes sense for automotive, but usually you don't announce chips this early, right?
Salil Raje:
As you move more and more into adaptive SoCs and create more of these complex SoCs with so many different processing subsystems, and the application complexity grows, we really need to engage with customers earlier and make sure that we can move their application, move their models into our product through software.
Even before the silicon is available, you can truly simulate the entire application with our software, and people do need that amount of time to be ready for when the silicon comes back. That way, as our silicon goes GA into production, the customer's application can go to production at around the same time. So we do need that time to work with our customers. And it's not atypical, actually, for these kinds of complex SoCs.
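One way to read the point about simulating the whole application before silicon arrives is as a backend-abstraction pattern: the application targets a common interface, with a software simulator standing in until hardware is available. The sketch below is a generic illustration of that idea; the class and method names are hypothetical and are not part of any AMD tool.

```python
from typing import List, Protocol

class InferenceBackend(Protocol):
    def run(self, inputs: List[float]) -> List[float]: ...

class SimulatorBackend:
    """Pure-software stand-in used before silicon is available."""
    def run(self, inputs: List[float]) -> List[float]:
        # A real flow would do a bit-accurate simulation here;
        # this placeholder just scales the inputs.
        return [x * 0.5 for x in inputs]

class SiliconBackend:
    """Placeholder for the real device runtime once hardware arrives."""
    def __init__(self, device_id: int = 0):
        self.device_id = device_id
    def run(self, inputs: List[float]) -> List[float]:
        raise NotImplementedError("swap in the real device driver here")

def application(backend: InferenceBackend) -> List[float]:
    # The application code is identical whichever backend it runs on,
    # so it can reach production at roughly the same time as the silicon.
    return backend.run([1.0, 2.0, 3.0])

if __name__ == "__main__":
    print(application(SimulatorBackend()))  # works today, pre-silicon
```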
Prakash Sangam:
Okay. And you did mention the chiplet architecture, using HBM and other things. Do you have a chiplet version of Versal Gen 2?
Salil Raje:
So the Versal Gen 2 products that we have launched and deployed with our customers are monolithic right now, but there are plans in place to use chiplet technology with some of our partners. We are a member of UCIe, and we will create products with UCIe that connect up to customers' IP. Those plans are definitely in place.
Prakash Sangam:
Especially the Prime Series version of Versal Gen 2, which doesn't have its own AI processor. I think that's ripe for adding a high-power CPU as an AI co-processor using a chiplet architecture, right?
Salil Raje:
That's right. I mean, you can imagine how this area can open up, right, with chiplet technology as well as standardized interfaces. A lot of customers want to bring their own IP to us. Typically, these are not AI accelerator IPs; they are some hardened IP of their own.
Prakash Sangam:
I see.
Salil Raje:
We can connect to them through things like UCIe. When it comes to AI accelerators, we have a very strong portfolio: we have GPUs that can do that, and we have the NPUs that are already deployed in our Ryzen processors that people can use. So people look to us to provide that AI accelerator.
Prakash Sangam:
And especially when we look at the industrial sector, the use cases and the needs vary so much, right? So I think that's where adaptability, and the ability to design an SoC with chiplets based on exactly what is needed for that market, makes things very easy and flexible.
Salil Raje:
Absolutely. I think chiplets open up many possibilities. But you mentioned adaptability, right? I think that's our core and foundation. There are a lot of applications where innovation is going on at a rapid pace, and what you think you need today may not be what you need tomorrow.
And adaptability plays a huge role there.
You can deploy your application with our current devices and continue to upgrade them through the life cycle of the product. You can upgrade AI models. You can upgrade some of the sensors that you may need in the future. So that gives you longevity. It gives you confidence that what you're deploying today won't become irrelevant tomorrow.
Prakash Sangam:
Correct. Yeah. Of course, Versal Gen 2 has some time before it comes to market, but in terms of evolution, what are some of the trajectories you're looking at to take this even further?
Salil Raje:
So we are more and more focused on adaptable SoCs, which means that we will continue to focus on each of these applications and look at what types of processing subsystems we may need to harden within our SoC. And these markets are starting to diverge, right? So aerospace and defense is very different from industrial and it’s very different from automotive.
In the past, our history has been that we used to create a broad range of pure programmable logic devices and they used to be used across different applications. What we’re noticing now is that because there’s a divergence in markets and applications, we need to create products that are more focused on certain applications.
So, they will become more and more application-specific, but still adaptive SoCs. You get flexibility within that application, you can upgrade within that application, but the products will be more application-specific as we move forward.
Prakash Sangam:
So, very apt for the business unit name that you have, right?
Salil Raje:
Yeah, that’s right.
Prakash Sangam:
All right. This was a great discussion, Salil. Thank you very much for all the insights. I look forward to more updates on Versal Gen 2, customer traction, commercial use cases and so on. Maybe we can have a follow-up discussion in 2025.
Salil Raje:
Thank you, Prakash. It was great talking to you and thank you for inviting us. It’s been an exciting journey for us at AMD. We look forward to more products coming out and we look forward to talking to your audience about them.
Prakash Sangam:
Sure. Best of luck with all of that and thank you again for coming over to Tantra’s Mantra.
Salil Raje:
Thank you.
Prakash Sangam:
So, folks, that's all we have for now. I hope you found this discussion informative and useful. If so, please hit like and subscribe to the podcast on whatever platform you are listening on. I'll be back soon with another episode shedding light on another interesting tech subject.
Bye-bye for now.