Tantra’s Mantra Podcast – Episode 41


Transcript

Prakash Sangam:
Hello everyone, welcome back to another episode of Tantra’s Mantra where we go behind and beyond the tech news headlines. I am your host Prakash Sangam, a founder and principal at Tantra Analyst.
It seems nowadays that we are in constant news cycles, jumping from one to another. The last few weeks have been really crazy with so many events, announcements, and all sorts of news. And I'm doing what I usually do when there is a lot of news to talk about and analyze: dial my good friend and fellow analyst at Next Curve, Mr. Leonard Lee. I did exactly that, and he is with us today to talk about it. We'll focus mostly on Computex and Apple's WWDC.
Leonard, welcome back, man.
Leonard Lee:
Hey, I’m really glad to be here. Thanks for inviting me on again, Prakash. And yes, it’s a lot of stuff that’s getting thrown our way, right? And as analysts, it’s really challenging to absorb it all and then synthesize and then be able to feed it back. So let’s give this a go.
Prakash Sangam:
Yeah, absolutely. Leonard doesn't need an introduction to this audience, as he's been on the podcast so many times. But it doesn't hurt to remind folks where they can get all of your wonderful content, Leonard.
Leonard Lee:
Yeah. So, Leonard Lee, executive analyst at Next Curve. For those of you who don't know me, check out my company website at www.nextcurve.com. I also have a YouTube channel; just go to the media section. There's tons of content there, and you can discover all of that at Next Curve. I look forward to having you visit and check out my research.
Prakash Sangam:
I highly recommend you visit his website. A lot of interesting content there.
All right. Thank you for that, Leonard. So let's jump in, starting with Computex, the world's biggest computing show. Lots of interesting announcements; many were expected, some were not. So what was your overall take? CoPilot Plus PCs were everywhere, right? You were there in person.
Leonard Lee:
Yeah. I think there was a strong residual from Microsoft Build 2024. And of course, you and I talked about the pre-brief ahead of that event, which was the unveiling of the CoPilot Plus PC and Qualcomm's now very unique role as sort of the spearhead into this new era of AI, or let's call it the CoPilot Plus era, with Microsoft.
And that residual really bled over into Computex. If I were to name the main, overarching theme, I would say it was the AI PC and CoPilot Plus, and it kind of overshadowed Jensen Huang's pre-event keynote. He held it at a local university; I forgot the name of the university, you'll have to forgive me.
But he held basically a Taylor Swift-like concert there, and we had the opportunity to attend, which was really nice. So thank you to the folks at NVIDIA for inviting me, as well as a number of other analysts, to partake and witness Jensen on stage. But yeah, a lot of interesting stuff, a ton of things that I'm still trying to synthesize and put together in a report.
And it just seems like a lot of things happening at the same time, right?
Prakash Sangam:
Yeah, talk about AI: Jensen being overshadowed, how often does that happen, right? So it looks like edge AI, AI moving to the edge, is happening, right?
Leonard Lee:
I think so. I mean, you know, that’s an interesting comment and suggestion because in all reality, you can train these models, right? But you need to be able to apply the models. And that happens with inference. And when you look at how inference, how these AI applications are, you know, designed and deployed in these chatbot type frameworks, a lot of this is going to have to happen on devices.
It's probably not going to happen in the cloud. And so I think that's where there's this growing interest in on-device AI. And everyone's getting into the game in terms of the silicon players, Qualcomm most prominently, especially with the Snapdragon X series, which comprises Elite as well as Plus.
And you and I were there in San Francisco when they unveiled Plus; they gave us a preview of it, right? And no doubt, they kind of set the benchmark, Qualcomm. That's quite an accomplishment.
Prakash Sangam:
And they have been talking about it for a long time, right? Initially nobody believed it; then people realized, okay, this might happen, but not right now, maybe later, and so on. But now it's here, right? That's quite remarkable.
And we'll talk about WWDC a little bit later, but from what was shown there, Apple is following the same model, which means many of these use cases will spill over to smartphones as well. Again, that's the epitome of edge AI, right?
Leonard Lee:
I mean, I think the hint that we already have of what can happen on the AI PC is it starts with the smartphone, right? And I think that, I mean, you and I have been privy to a lot of the innovations that, you know, Qualcomm in particular has been pushing out there.
And, you know, I bring up Qualcomm simply because they’ve been kind of pioneering the AI PC stuff from the bottom up, right, meaning from the smartphone, from a smartphone legacy on up. And, you know, with Apple and what they’ve been doing, they’ve been bringing AI computing to the smartphone for, you know, years, right? Ever since like 2016.
And Qualcomm the same. So, these guys have been competing and trying to, you know, Qualcomm on the smartphone side trying to enable the, you know, Android ecosystem with AI leadership. You know, they’ve been going head to head and, you know, a lot of these AI applications that we’re seeing, a large portion of them are actually, believe it or not, ML based, you know, they’re not generative.
They're coming to the PC. And I think they're going to have an effect there, simply because a lot of the processors coming to market, whether it's Intel's Lunar Lake, AMD's Ryzen AI 300, or of course Qualcomm's X series, are really geared toward the thin-and-light, power-efficient category of laptops.
Prakash Sangam:
Yeah, premium laptops to begin with and then they’ll scale up.
Leonard Lee:
I mean, I think there's no doubt. I don't know what you think, but I think they're going to scale up.
Prakash Sangam:
Oh, yeah. No reason why they shouldn't, right? It'll be more like how it happened with 5G. People were talking about use cases and such, but when you went into the market, it was very hard not to buy a 5G phone, right? Similarly, when you go into the market now, be it enterprise or consumer, very soon, probably around this time next year, it'll be hard to buy a non-AI PC, if you ask me.
Leonard Lee:
Yeah, I agree with you. And I think what Microsoft is really pushing here is to get a lot of these AI workloads offloaded onto the device, because cloud is very expensive, right? And there's that whole aspect of privacy that we'll probably talk about later.
I think those are the things that are really going to start to shape the conversation about generative AI applications, right? We’re probably going to be talking a lot less about cloud bound or cloud based. Those tend to be a little bit more niche in my observation.
I don’t know what you’ve observed, but there’s a lot of things that you can do on device with generative AI that can do some helpful things. Do you know what I’m saying? Practical, helpful things. And I think that’s eventually where generative AI is going to express its value to users.
Prakash Sangam:
Yeah, there are emerging use cases, like Microsoft showed and like Apple showed. As you mentioned, a lot of this AI was already happening on phones, in cameras and other things, but not so much on laptops. Now you have much more, with CoPilot-type applications.
No matter what you are doing on your laptop, as on the phone, there is always something helping you, nudging you in the right direction, and helping you do things in a much better way, knowing the context fully well.
Leonard Lee:
And so it's very personalized as well. So going back to the SoCs that are part of these CoPilot Plus PCs: as we talked about in our previous episode, Qualcomm came out with a big bang, more than 20 models. All seven of the PC OEMs were committed, and many of these with multiple SKUs, different tiers and so on.
Prakash Sangam:
So AMD and Intel had to come back with their stuff, because they don't want to leave anything to Qualcomm. I mean, they have already ceded the lead, because the first CoPilot Plus PCs will be purely and only on Qualcomm's solution. So they are coming in behind; how far behind is the question that we talk about.
Leonard Lee:
Yeah, right.
Prakash Sangam:
So they came up with their responses: AMD announced 100-plus designs, and Intel said everybody is working on them, and so on. Intel also said they see a huge uptick in AI PCs even without the 40-plus-TOPS NPU in them, with their existing Meteor Lake, which they also call AI PCs, not CoPilot Plus, but AI PCs. They said they are seeing a lot of traction there.
Leonard Lee:
So I think the whole PC is becoming really interesting again after a very long time, right? Yeah, and also very confusing because what is an AI PC versus CoPilot Plus? I think that was like one of the things that folks continue to struggle with a little bit, right?
And then it all boils down to, and it was the big topic of Computex, the NPU TOPS, right? And that becomes really squishy. AMD is coming at it with their FPGA-based solution, and then you have Intel and Qualcomm with their DSP-based approach.
And it was interesting: AMD came in with the Ryzen AI 300 at 50 NPU TOPS, then you have the guys to beat, Qualcomm's Snapdragon X Elite at 45 TOPS, and then you see Lunar Lake coming in with 48, right? But at the end of the day, they're kind of at parity now.
I don't think that's where the differentiation is really going to come from as we see these devices hit the market. And this is a discussion I had with other analysts I was hanging out with. At the end of the day, it's going to boil down to battery life, right?
Prakash Sangam:
Exactly, exactly. And it’s quality of the NPU, how efficient is the NPU, and then the CPU, right? And that’s where I think this is going to be interesting to see what Intel comes to market with and how it tests out in devices with their new CPU.
Leonard Lee:
So they have a 4-plus-4 configuration, 4 E-cores and 4 P-cores. This is a very, very different architecture, but I think it might surprise people.
Prakash Sangam:
Correct. So, I mean, a few NPU TOPS here and there, I don't think makes a huge difference in terms of user experience.
Leonard Lee:
I completely agree with you. So performance per watt is going to be a key criterion.
Prakash Sangam:
So I was talking to Samsung about CoPilot Plus PCs, and I asked them: when you talk to your customers while designing these PCs, with all this AI and other things, what is the top requirement and need in your customers' minds?
Guess what they said? It’s not AI, of course AI is given, it’s battery life!
Some might think the battery-life battle on laptops is over, but it's not, especially with the NPU coming in. If AI workloads become big and you really use CoPilot Plus, then no matter what you do on the laptop, CoPilot might always be running in the background, which means NPU power consumption is going to be critically important.
Leonard Lee:
Right, exactly. So it's like TOPS per watt. They always have to factor in the power aspect. Like we said earlier, NPUs are not architected the same, and they don't perform the same, even though they might promote a certain TOPS figure. But it's going to be interesting.
In terms of some insights around the NPU, I think a lot, you know, so CoPilot Plus aside, a lot of the demos that we saw, they were more related to your classical ML, right? So things that we’ve seen before for years, right?
And the classical use cases in terms of battery, you know, the power efficiency and running that stuff on an NPU. So that was demonstrated. Still, you know, a lot of the generative stuff, again, CoPilot Plus aside, still emerging, you know? But what’s kind of apparent is, you know, things like ML jobs related to security or monitoring and what have you, things that are background tasks that you can offload off of a CPU.
Those are probably going to be the practical and valuable applications. If you think in terms of security, you know, you want people to be able to run these agents in the background without just sucking up your battery, right, by running on CPU.
And that type of thing is going to be the near-term valuable applications. So it’s like these simple things, right? But they’re going to be valuable.
Prakash Sangam:
Yeah, interesting. And also, the expectation was that the x86 CoPilot Plus PCs, from both Intel and AMD, would be coming toward the end of the year. There were some rumors it might be earlier for AMD, and AMD did announce 100-plus models or something.
And some of them might be shipping in July. So that was interesting. Any idea on what those will contain? Will they support full CoPilot Plus stack that Microsoft has? How many of them are launching? Any idea there?
Leonard Lee:
No, we didn't really get a lot of color on that. But yeah, they are supposed to be CoPilot Plus PCs. I attended the AMD session, and that was pretty clear. For the Lunar Lake machines, the suggestion is that you look at sometime in Q3, a September-ish timeframe, which is something Intel has mentioned before.
So that’s probably when we’ll see the first machine ship. And that’ll be critical because that’s back to school. That’s a holiday season. And that’s going to be important for Intel to get into the CoPilot plus PC flow.
Prakash Sangam:
There are two critical periods for the PC market, right? Back to school, which is the August-September timeframe, and then the holidays toward the end of the year. So I think Qualcomm will squarely own the back-to-school period this year. With AMD, we'll see how many designs they get out and how much traction they get.
Leonard Lee:
Yeah. And then towards the end for holiday season, I think all three will be competing, right? So yeah, it should be pretty brutal.
Prakash Sangam:
Yeah. When you look at the consumer space, none of these TOPS in the technical specifications matter, right? It's the brand consumers believe in and how they feel about the look and feel of the device. Maybe battery life is the one critical specification they look at.
Other than that, I think it would be too difficult for them to parse 45 TOPS versus 48 versus 50 TOPS and so on, right?
Leonard Lee:
And so, yeah, oddly, Apple is sort of the elephant in the room. Even though Qualcomm really took the stage and dominated in terms of presence and position, there was that thing lurking in the background.
Prakash Sangam:
At the show, you know, CoPilot plus PC was the hero, you said. Anything from the data center, GPU, I mean, NVIDIA, Intel, and AMD all had announcements to make on the GPU and data center side. Any quick views there before we move to WWDC?
Leonard Lee:
Yeah, you know, one of the… So, we'll start off with Intel. One of the things that really popped: I've been tracking Intel's data center processor line for quite some time, so I have a whole earful of that, but I'll just talk about the highlights.
One of the things that impressed me, as well as fellow analysts, was Sierra Forest. So, Xeon 6 with up to 288 cores. That's kind of a big deal. And my buddy Jim McGregor of Tirias Research was suggesting that Intel could actually start to gain leadership in the next generation, especially with the core efficiencies you're going to be looking at with the next generation of Xeon processors.
So that is something that really stood out amongst the analysts and in my own observation. If we go to AMD, yes, they introduced the MI325X. And I guess the big headline coming out of that keynote was that they're going to go to a one-year cadence to mirror what NVIDIA is doing, right?
And then of course, NVIDIA. The thing that really stuck out in the keynote, and we've seen a lot of that material in what Jensen has presented several times already through the course of the year, was Rubin, right? So that's going to be the generation of GPU systems after Blackwell.
Yeah, but all that stuff really sounds great. It’s already part of a continuum of expectations that I think we are all tuned into. Again, AI PC.
Prakash Sangam:
Yeah, true. That sums it up. Okay, so let’s quickly move to WWDC.
I mean, it was highly anticipated. A lot of questions on what Apple is doing on AI, a lot of detractors saying Apple is behind, and a lot of rumors and news on how it is working with and relying on OpenAI and so on.
So the WWDC keynote has come and gone, with a lot of news. The main news, of course, was Apple Intelligence. Other things announced were more incremental; the big news was Apple Intelligence. And I think they played it well.
They showed some use cases. They actually showed what you could use AI for in a phone or in a laptop in a user-friendly way, in my view. None of this is new. You could do the same thing on any other smartphone or laptop, but I think they showed, because of their ecosystem, how they can make it much more user-friendly and much more easy to do.
And they pushed really hard on privacy and security: how all the data lives on the device and nobody has access to it. Even if data had to move from the device to the cloud, users would be involved, and the cloud the data runs on will mostly be Apple silicon.
And the source code would be available for third parties to review in terms of how they're using the data and so on. So there was a lot of focus there. And then, of course, they basically highlighted all the things they can do with their vertical integration that others cannot. So that was pretty clear.
But I think what threw me, and of course many others, off was the support of ChatGPT. And they said they're looking to support others as well in the future, like Google and so on. That kind of threw everybody off.
So you were talking about doing everything within Apple, but you also work with OpenAI and send stuff there and other things. So, you know, it’ll be interesting to see how that will work and how that affects the confidence and the trust that consumers have on Apple, right?
Leonard Lee:
Yeah, yeah. I mean, I think one of the biggest takeaways from WWDC is that they actually trained their own models. They mentioned foundation models, but only briefly in the keynote; you really had to get into the developer sessions to understand what they were doing.
And there is a report out, I think it was a Reuters report, that mentioned Apple had trained their own models. These are their own models: an LLM as well as a diffusion model. Not in partnership with anyone; basically using Google's TPU farms to train these models, right?
So they're basically just a hyperscaler customer, using those resources to train these models, which I think is smart. I mean, people talk about, hey, should you buy equipment and invest in your own data center, or should you do cloud and just do the whole OPEX thing?
Well, Apple's doing the OPEX thing, and they're not buying GPUs from NVIDIA or anyone else. They're going to be using their own Apple silicon, probably some form of serverized version of the M4. Who knows what that really is, right?
But all we know is Apple silicon and the Apple software stack plus the security and privacy framework. And I think that is actually probably the most important takeaway. And I think the folks who have concerns, like Elon Musk, probably have it wrong.
The underlying foundation model is not OpenAI, ChatGPT, GPT-4, anything like that. It’s apparently Apple’s. So this is a really interesting strategy, right? And it kind of mirrors what Microsoft does, although Microsoft seems to be a little bit more tightly coupled with OpenAI.
But I think what they're doing with OpenAI is really kind of a loose coupling. They're providing sort of a plug-in, right? And some of these other functions, like quote-unquote chatbot functions, within iOS 18, iPadOS 18, et cetera. These are not tightly integrated into the software; it's almost like a plug-in. And there are going to be options.
So the way I look at it is kind of like what Apple has done with browsers, right? Search engines. You get an option. And eventually as Apple starts to work with other companies with these world models, you’re probably going to see something very similar to what Apple has done with Google and Search and having a default search engine.
They’ll probably have a default world model or at least the user can designate one, right? And so I think it’s pretty smart.
Prakash Sangam:
I agree. Even if you look at their presentation, they spent like a couple of minutes talking about OpenAI and most of their time on how it's their own models, LLMs and diffusion, as you mentioned, trained by them.
Leonard Lee:
Yeah, but I don’t think people really got that.
Prakash Sangam:
One thing that was confusing to me was, why mix that with OpenAI and create confusion? Even if it is a plug-in, you can use OpenAI on the device right now. And also, Sam Altman was in the audience in a prime location. So they highlighted it a little bit, but not too much.
So I’m still not very clear on how much they will use. But based on what they said, it is mostly their own stuff, right?
Leonard Lee:
Yeah, and I think it’s going to be interesting because, if you take sort of an 80-20 rule, 80% of the interaction that you have, and it will probably be more, it will be with Apple Intelligence. And then, like they kind of suggested, if you have these occasions, and they’re typically going to be like niche occasions to tap into a world model, then yeah, they have that convenient little plug-in, and then it provides a little warning or a consent prompt, and then it sends the information to ChatGPT.
And I think it’s the free version, right? And then they also mentioned that paid ChatGPT users will be able to get the paid benefits of that subscription and be able to use it. But that’s not Apple.
Prakash Sangam:
That's what I'm saying. Even if they're working with somebody, they never put them on a pedestal.
Leonard Lee:
Yeah, yeah, yeah. But then people are already using OpenAI, ChatGPT, GPT-4, APIs. They’re already using it on Apple devices. And so this is just, again, it’s more of a UI level integration. It’s not an OS deep integration.
And it certainly isn't germane and core to Apple Intelligence. If you read a lot of what's been written out there, that's where the confusion is. People didn't pick up on that. But then I put that on due diligence. If you're going to write about stuff, know what you're talking about before you write it. You know what I'm saying?
Prakash Sangam:
Yeah, I understand that. But my point is from an average consumer perspective, right? And that's another thing I want to point out: almost everything Apple talked about with Apple Intelligence was for the consumer. It's a 100% consumer play for now.
Maybe there will be more application use case on the enterprise side. But right now, when they talked about it, it was 100% consumer.
Leonard Lee:
We'll see how it goes, but I think it's putting too much on the consumer: if you do this, it's on us, and we provide all this privacy and everything that we talked about and stand for.
Prakash Sangam:
However, when you go outside of it, it feels to me like the small-print footnote: if you do this, it's all on us and we guarantee it; but if you go outside of this, then we are not responsible, kind of a thing. In my view, it's a little bit too much to ask of average consumers.
Leonard Lee:
But wait, wait. That’s why they need to call Tantra Analyst. These media folks should call you. They shouldn’t just go off and try to interpret this stuff. You know, that’s what analysts are for, right?
Prakash Sangam:
True. But I’m saying from an average consumer perspective, it might be, you know, we will see how it goes.
Leonard Lee:
I know, I understand that people know about OpenAI and ChatGPT and so on, but still. You know, better journalism is going to help. Let's put it that way.
And, you know, I think we do our jobs by clarifying things, and hopefully, you know, anyone who’s listening to the podcast is going to pick up the clarity and write about that instead of confusing consumers and confusing audiences. And that’s exactly why we’re analysts, right? And that’s a value we should bring to the table.
So, yeah, hopefully, a lot of people call both of us up and try to get clarity, you know, because, yeah, there’s no reason for the confusion, right?
Prakash Sangam:
So, okay, one thing the whole Apple Intelligence announcement made very clear is that it's basically a huge endorsement of what Qualcomm has been talking about with on-device AI, right? It's not just vaporware; a lot of things are happening. That's one thing.
And the second interesting thing I observed, which I haven't seen many people talk or write about yet, is this: everybody knows that in a hybrid AI scenario, you have to decide where to run a workload, on the device or in the cloud.
Maybe some part runs on the device, and when you need to do more, you push that to the cloud and do it there. But who decides how it is distributed? That's still an open question. And I've said this a few times, probably on one of my previous podcasts as well: to do that, having a single vertically integrated entity like Apple, which controls the underlying SoC, the software, and the framework for deciding where and when to run which workloads, is hugely, hugely beneficial.
Now think about that being done on, say, a CoPilot Plus PC, where the SoC is coming from somebody else, the operating system from somebody else, and the AI applications from somebody else; the same is true for Android.
Then you need some sort of framework that all these parties agree to on how to divide the workload, right? With Apple's vertical integration, they control everything, and they can implement it much sooner and in a much better way than anybody else.
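To make the workload-placement question above concrete: the kind of decision a single vertically integrated stack can make in one place might look, very roughly, like the sketch below. This is purely illustrative; the names, fields, and thresholds are hypothetical and do not come from Apple, Qualcomm, or Microsoft.

```python
# Hypothetical sketch of a hybrid-AI placement policy: for each inference
# request, decide whether it runs on the device NPU or in the cloud.
from dataclasses import dataclass

@dataclass
class Workload:
    model_size_mb: int       # footprint of the model the task needs
    privacy_sensitive: bool  # does the request carry personal data?
    latency_critical: bool   # e.g. live transcription vs. batch summary

@dataclass
class Device:
    has_npu: bool
    free_memory_mb: int

def place_workload(w: Workload, d: Device) -> str:
    """Return 'device' or 'cloud' for a single inference request."""
    fits_on_device = d.has_npu and w.model_size_mb <= d.free_memory_mb
    # Privacy-sensitive or latency-critical work stays local when it fits.
    if (w.privacy_sensitive or w.latency_critical) and fits_on_device:
        return "device"
    # Anything too large for the device hardware goes to the cloud.
    return "device" if fits_on_device else "cloud"

# A small private request stays on device; a huge model goes to the cloud.
print(place_workload(Workload(500, True, False), Device(True, 4096)))    # device
print(place_workload(Workload(20000, False, False), Device(True, 4096))) # cloud
```

The point of the sketch is that this policy lives in one place; in a multi-vendor stack, the SoC vendor, OS vendor, and application vendor would all need to agree on a framework like this before such routing could happen at all.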
Leonard Lee:
Oh, definitely, and so their whole private cloud compute, I think that’s going to be a very interesting thing to examine. I mean, it’s a little bit murky. I mean, for audiences like us, right, analysts, it’s just like a bunch of high-level mumbo jumbo, but very interesting mumbo jumbo.
And I think we need time to be able to take a look at how they’re implementing this. But I think you’re absolutely right. They do have the advantage of being able to actually architect a hybrid AI infrastructure. And that is really, really, number one, elusive and hard.
And we see that with Open RAN, right? All this open stuff and cloud and so on; it's a lot more difficult than people initially made it out to be. But to have this control, and to be able to design the kind of intelligence and orchestration you need to distribute these workloads and data access securely and in a privacy-protected way, that is going to be challenging for anyone outside of the Apple ecosystem.
And of course, you know, if anyone claims that they can do this, you know, we’re more than happy to hear from them. But, you know, the thing is that people oftentimes associate Apple with just, you know, end-point devices, right?
They're actually very, very savvy in CDNs, ADNs, you know, application distribution networks and all kinds of stuff. All these services that they charge App Store developers for, do you think all that stuff happens magically?
There’s actually, you know, they have some serious chops when it comes to distributed computing and data center. And so, you know, the fact that they’re also going to be using Apple Silicon in their data centers just to support and host, you know, Apple Intelligence is a huge statement as well, right? Because everyone assumes that NVIDIA has the greatest, you know, AI data center stuff.
Prakash Sangam:
That kind of went largely unnoticed, right? They kind of made an announcement, if you will, that they'll have data-center-class M series chips.
Leonard Lee:
Yeah. And we'll build data centers with it. Which could be interesting, because if you think about NuVIA and Qualcomm, they were originally defining core IP for servers, right?
Prakash Sangam:
And so I think there's an opportunity for Qualcomm there to also get into servers.
Leonard Lee:
But we already see NVIDIA doing that with Grace, right?
Prakash Sangam:
Correct. So, I mean, the opportunity is basically there when you have full control of this framework. I think that's interesting stuff. And it will show up in the user experience and the battery life of the devices.
For example, take the same application running on Apple's ecosystem, be it iPhone or Mac, versus Android or Microsoft. The hybrid AI moving between different parts of the AI network, if you will, will be smooth on Apple's side, but it will take time for all these other companies to come to an agreement, build a framework, and then decide how to split workloads, and I don't think any of that work has begun.
Every AI company I talk to, when I ask this question, they say, "What?" Which means they haven't even thought about it, and the groundwork for building that framework has not even started. So it will be interesting to see how that goes.
Leonard Lee:
And just that orchestration piece alone is tough. So yeah, it's going to be…
Prakash Sangam:
Anything else that you observed from WWDC other than the things you talked about?
Leonard Lee:
I just wanted to go back to the beginning. I think you’re right. A lot of this stuff is not new. And that’s actually what I was saying for a whole year when Microsoft was introducing all their CoPilot stuff. I was like, this stuff is not new. This is just chatbot framework.
This is like chatbots resurrected, and now you’re just going through your portfolio of tooling and applications and then creating plugins for the chatbot. So that’s why we see this velocity. And then I think this notion that Apple is behind is like, no, they just showed that they may be ahead.
And they did it like that. And it’s because of all the things that they had already built into their OS. And there’s a lot of discounting of their OS, especially in the Android and Windows world where they say, oh, it’s like a toy.
Not really. The proof is in the pudding. And so this may be a wake-up call for everyone else. But it's also a statement about generative AI. It's like, well, what is it really going to look like in the end? And how are the economics, the revenue share, and the business models all going to play out going forward?
But I think people should really study what’s happening with what Apple announced. Yeah, and they said it is free of cost, at least in the beginning. Ultimately, I think they have to charge and they will charge, but I think that’s a swipe at Microsoft and OpenAI and others, basically charging now for premium service. I think they’re going to try to push everything on device.
And what they may do is they may charge folks who have older devices to have server-side AI intelligence. Basically, augmenting devices that don’t have a qualifying NPU. That’s a revenue opportunity for Apple. For somebody who has an iPhone X, let’s say, that wants to be able to use some Apple Intelligence features.
Prakash Sangam:
They’ll make all these features only run on the latest phones or the laptops with the NPU in them. That’s how you basically push users to upgrade quickly.
Leonard Lee:
Yeah, but if it’s just part of the OS, most of it’s running on device, I don’t think they’re going to charge for it.
Prakash Sangam:
But if you don't have an NPU on the device, it would be hard to run this, right?
Leonard Lee:
Yeah, well, that’s why I’m saying is that for those devices that don’t, you might be able to subscribe to Apple Intelligence as a service, right? Kind of like iCloud.
Prakash Sangam:
Yeah, we’ll see. I mean, their ideas always have a frictionless user experience, so maybe they throw in as part of the iCloud subscription or something like that. I doubt, at least in the beginning, they’ll have a separate line item for running AI on cloud.
Leonard Lee:
Yeah, I think either way, they have options.
Prakash Sangam:
All right, man. As usual, that was a great discussion. Thank you very much for your insights.
Leonard Lee:
Oh, absolutely. And thanks for having me on. And thanks for sharing yours. It’s always good to kind of debate and riff off of your thinking. So always good stuff that comes out of that.
Prakash Sangam:
And we don't have to agree on each and every topic, right? It's always good to have diversity.
Leonard Lee:
It would be boring otherwise. So yeah, no, it’s good stuff.
Prakash Sangam:
All right, thanks again.
So folks, that's all we have for now. Hope you found this discussion informative and useful. If so, please hit like and subscribe to the podcast on whatever platform you are listening to this on. I'll be back soon with another episode shedding light on another interesting tech subject.
Bye bye for now…
If you want to read more articles like this and get up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or read our Tantra Analyst Insights articles.