Prakash Sangam:
Hello everyone, welcome back to another episode of Tantra's Mantra, where we go behind and beyond the tech news headlines. I'm your host Prakash Sangam, founder and principal at Tantra Analyst. We all know AI is the buzz everywhere nowadays, but today most of that AI runs in the cloud. As we move forward, it is becoming abundantly clear that cloud-only AI is not sustainable, because of energy needs, latency, privacy, confidentiality, and accuracy requirements.
So we have to spread that intelligence around. That means pushing it toward the edge, bringing in hybrid AI, and so on. But that's easier said than done. First of all, the definition of the edge itself is fluid and varies depending on who you ask. And however you define it, it is a complex, heterogeneous mix of different kinds of software, hardware, applications, services, and ecosystems.
Obviously, running AI on the edge is equally complex and requires a systems approach. Today, we are going to discuss how to achieve that manageability at the edge with an expert. And that expert is Muneyb Minhazuddin of Intel Corporation. His title at the company illustrates just how complex edge AI is: he is the VP and GM of Network and Edge, covering infra, apps, and AI software.
Muneyb, welcome to the show.
Muneyb Minhazuddin:
Thank you, Prakash, for having me. Pleasure being on the show.
Prakash Sangam:
Well, so Muneyb, as I mentioned, you’ve been championing edge AI at Intel. Could you tell us a little bit about your roles and responsibilities at Intel and how long you’ve been working on the edge?
Muneyb Minhazuddin:
Sure, my pleasure. In my role, I own all the network and edge software assets at Intel. Intel's Network and Edge business came together by combining our business units across network, telco, and Ethernet, as well as IoTG.
So there's a lot of software, and as my title says, it covers infrastructure, application, and AI software. I've been doing this at Intel for about 18 months, but I was looking at the edge even before that, before I joined Intel, at my previous company. So I've spent five, six years looking at the edge.
Prakash Sangam:
So you’ve been taming Edge for some time now.
Muneyb Minhazuddin:
Oh, yes. I came over from VMware, where I was the GM for Edge Compute as well as Private 5G and MEC.
Prakash Sangam:
Perfect. So let’s start with some basics. So how do you define Edge? Where does it start and end?
Muneyb Minhazuddin:
It's pretty fluid. Like you said in the intro, one way to simplify it is: anything outside your data center and cloud is the edge. The vocabulary varies; people have terms like near edge, far edge, device edge, what have you. But the moment you leave your data center and cloud, it could be a hosted colo, which people call the near edge.
The far edge is something that sits inside a store or a factory, at the extreme end, and then there are device edges, which become IoT, and so on. So the range varies, but the simple version is: anything outside of a cloud and data center is the edge.
Prakash Sangam:
That's good enough, and it includes a lot of stuff, as you mentioned. So Intel has been talking about an edge AI platform for some time, but you formally announced the Tiber brand and the Tiber Edge Platform recently, during the Intel Vision event, right? So what is the Tiber Edge Platform, and why should the industry care about it?
Muneyb Minhazuddin:
Oh yeah. First, let me say that Intel launched Tiber as our software brand for the company, right? Just like we have the Xeon brand for servers and the Core brand for our client silicon, we stepped back and said: we aspire to have a strong software presence in the market, so we need a brand for that.
So the company-wide software brand is Tiber; Intel Tiber is the new brand we announced at Vision. Now, the Tiber Edge Platform is part of that family of software, alongside other software from the data center and other parts of the business.
At the edge, the Tiber Edge Platform we launched is basically a platform that makes sure that customers, and when I say customers I mean enterprises at the edge, can do day zero, day one, and day two for infrastructure, apps, and AI.
Prakash Sangam:
Mm-hmm. Okay. And, you know, we talked about how managing the edge is challenging, and bringing AI to the edge is even more challenging. So how does Tiber solve it?
Muneyb Minhazuddin:
Yeah, so let me unpack day zero, day one, day two for those three parts, right: infrastructure, apps, and AI. For infrastructure, day zero means you have to do secure device onboarding and zero-touch provisioning. At the edge, you don't have the IT skill sets you have in the data center or cloud.
So the box needs to arrive and configure itself, like the Wi-Fi router in your house, right?
So infrastructure has a real day zero, day one, day two problem: device onboarding, zero-touch provisioning, zero trust, manageability, and security. The second part is applications: apps that get built in the cloud and data center then need to be deployed, orchestrated, and observed at the edge.
Again, with no IT resources. And the third part is AI: the applications generate data, and that data needs to be analyzed and have AI models built and inferenced around it. So how do you build a model, train it or run inference with it, then deploy it, orchestrate it, and manage its life cycle? Day 0, day 1, day 2 of infra, apps, and AI is very hard at the edge, because there are no IT resources, there isn't the same level of security, and the elements involved are, like you called out, very heterogeneous.
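The day-0/1/2 split described here can be sketched as a toy state machine. Everything below is illustrative: the stage names, the `EdgeNode` class, and the serial-number check are assumptions for the sketch, not the Tiber platform's actual API.

```python
from enum import Enum, auto

class Stage(Enum):
    DAY0_ONBOARD = auto()   # secure device onboarding, zero-touch provisioning
    DAY1_DEPLOY = auto()    # deploy and orchestrate apps / AI models
    DAY2_OPERATE = auto()   # observe, secure, and life-cycle the node

class EdgeNode:
    """Toy model of an edge node walking through day 0 / day 1 / day 2."""
    def __init__(self, serial: str):
        self.serial = serial
        self.stage = Stage.DAY0_ONBOARD
        self.apps: list[str] = []

    def onboard(self, trusted_serials: set[str]) -> bool:
        # Day 0: the box authenticates itself with no on-site IT help.
        if self.serial in trusted_serials:
            self.stage = Stage.DAY1_DEPLOY
            return True
        return False

    def deploy(self, app: str):
        # Day 1: applications built elsewhere are pushed to the node.
        assert self.stage is Stage.DAY1_DEPLOY
        self.apps.append(app)

    def operate(self):
        # Day 2: the node moves into observe/manage mode.
        self.stage = Stage.DAY2_OPERATE

node = EdgeNode("SN-1234")
assert node.onboard({"SN-1234", "SN-9999"})
node.deploy("vision-inference")
node.operate()
print(node.stage.name)  # DAY2_OPERATE
```

The point of the sketch is only the ordering: a node cannot take workloads (day 1) until onboarding (day 0) has succeeded without a human at the site.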
So that is a massive challenge for the industry.

Prakash Sangam:
Yeah, so it's basically life-cycle management, helping the IoT folks.

Muneyb Minhazuddin:
It's a scaled operation, right? It's easy to do a proof of concept with one node. But when you come to the telco world, it's hundreds of thousands of cell sites; or small coffee shops, tens of thousands of them; or gas stations, or remote oil and gas rigs.
And the cost there is more about getting somebody out there to install and troubleshoot than anything else; you can't have that many resources everywhere. So how do you provide that across-the-board manageability while keeping those sites very self-sufficient?

Prakash Sangam:
And in terms of deploying, does this go on all the devices, or does the software platform itself sit on an edge server somewhere?

Muneyb Minhazuddin:
Yeah, it's a combination, right? So in typical deployments, I'll give you three situations, and the solution supports all three. One, which we see 50-60% of the time, is people are happy for this to be connected to the cloud, always; there are no concerns with cloud connectivity. A good example:
In a retail store, generally, yeah, I can get connectivity; it's not an issue. The second situation is: there is connectivity to the cloud, I'm not averse to a cloud connection, but there are unreliable pockets where connectivity is lost. It could be manufacturing, oil and gas rigs, remote locations. So there are periods of discontinuity, two days, three days, a week, whatever, that the edge still needs to operate through.
And third is where people don't want the cloud at all, like federal government, with all the regulatory requirements they have; it's a complete aversion to the cloud, which means it's completely air-gapped. The solution can run completely in the cloud, in a private instance on-prem, or completely air-gapped. We see the majority of customer requirements falling into these three pockets, and the platform supports all three configurations.
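The middle case, intermittent connectivity, usually comes down to store-and-forward: buffer telemetry locally while the link is down, drain the backlog when it returns. A minimal sketch of that pattern, assuming a toy in-memory "cloud" endpoint (this is the general technique, not Tiber code):

```python
from collections import deque

class StoreAndForward:
    """Buffer records locally while the cloud link is down; flush in order
    when it comes back. Toy sketch of the intermittent-connectivity case."""
    def __init__(self):
        self.buffer = deque()
        self.cloud: list[dict] = []   # stand-in for the real cloud endpoint
        self.online = True

    def send(self, record: dict):
        if self.online:
            self.cloud.append(record)
        else:
            self.buffer.append(record)  # keep operating while disconnected

    def reconnect(self):
        self.online = True
        while self.buffer:              # drain the backlog in arrival order
            self.cloud.append(self.buffer.popleft())

link = StoreAndForward()
link.send({"t": 0, "temp": 21.5})
link.online = False                     # connectivity drops for days
link.send({"t": 1, "temp": 22.0})
link.send({"t": 2, "temp": 22.4})
link.reconnect()
print(len(link.cloud))  # 3
```

The fully air-gapped case is the degenerate version of this: the "cloud" endpoint is simply an on-prem instance and nothing ever leaves the site.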
Prakash Sangam:
Very well. So it’s very flexible that way, depending on what kind of deployment you need. So does it only support Intel hardware or does it support others? As I mentioned, it’s very heterogeneous with different hardware and software components.
Muneyb Minhazuddin:
Oh, yeah. It's an interesting thing. As a precursor: I'm sure you know that at Intel we like our project names before we come out with a product name, and the project names stick for a long time. We used to call the Tiber Edge Platform "Project Strata," because there are layers: there's the silicon layer, on top of it an infrastructure software layer, then an application software layer, and an AI layer. That's why it was called Project Strata. I'm trying to make that association.
The reason I'm giving you those layers: when it comes to the hardware layer, of course, yes, it's Intel. But as you move up into the infra, app, and AI layers, we're able to support more than Intel silicon. For instance, for OpenVINO, our AI inferencing engine, we already announced support for Arm and RISC-V last year at Innovation.
That’s part of the platform. So as you go higher up in the software abstraction layer, we actually do support heterogeneity of hardware beyond Intel silicon, because at the software layer, it shouldn’t matter. But as you get closer to drivers, firmware, accelerators, yes, it is tied to Intel silicon, where we would do additional runtime and acceleration on the Intel silicon.
But then providing orchestration, manageability, observability, AI model distribution, AI orchestration, we would support CPU, GPU, NPU, DPU from anybody.
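The idea that "at the software layer, it shouldn't matter" is the classic hardware-abstraction pattern: one inference entry point, pluggable per-device backends, with a fallback order. A toy sketch of that idea (the names and registry here are hypothetical, not the OpenVINO API):

```python
# Toy hardware-abstraction layer: one inference entry point, pluggable
# per-device backends. Illustrative sketch only.
from typing import Callable

BACKENDS: dict[str, Callable[[list[float]], list[float]]] = {}

def register(device: str):
    """Decorator that registers a backend under a device name."""
    def wrap(fn):
        BACKENDS[device] = fn
        return fn
    return wrap

@register("cpu")
def cpu_infer(x):
    return [v * 2 for v in x]          # stand-in for a real CPU kernel

@register("npu")
def npu_infer(x):
    return [v * 2 for v in x]          # same math, different silicon

def infer(x, preferred=("npu", "gpu", "cpu")):
    # Fall through the preference list to whatever silicon is present.
    for dev in preferred:
        if dev in BACKENDS:
            return BACKENDS[dev](x)
    raise RuntimeError("no backend available")

print(infer([1.0, 2.0]))  # [2.0, 4.0]
```

The caller never names the silicon; only the lowest layer (drivers, firmware, accelerated runtimes) is vendor-specific, which mirrors the split described above.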
Prakash Sangam:
That’s good. But somebody has to write the drivers and other things? You’re expecting the other guys to do that?
Muneyb Minhazuddin:
Not necessarily, no. Again, as I said, OpenVINO, that inferencing engine, already supports those platforms today.

Prakash Sangam:
I love that. Specifically in terms of AI, do you provide your own models? Obviously, different segments require different models, right? More optimized models for them.
Muneyb Minhazuddin:
Actually, we take pre-trained models from, you know, Anthropic, Hugging Face, all of the above, right? So it's pre-trained models. A lot of the time, models are not being trained at the edge; training happens largely in the cloud. What's happening at the edge, which is interesting as it evolves, is inferencing, but also some amount of retraining, some retrieval-augmented generation (RAG), and some data annotation.
So the foundational models are still built in the cloud, but in reality, what's happening at the edge is agility: dynamically adjusting and retraining, some expert annotation of the data, and some injection of RAG, all of which adapt those models at the edge. So we're not supplying our own models per se; we're taking pre-trained models from the cloud.
Then we have the tool sets in the Tiber Edge Platform to do retraining and data annotation and adapt the models to suit the use case. I can give actual customer examples where that retraining makes a huge difference.
Prakash Sangam:
I saw a couple of them during Vision, but go ahead, it would be great to hear them.
Muneyb Minhazuddin:
Windmills were one of the first. Apparently, the productivity of windmills is not great, so this company, a renewables player, wanted 30% higher productivity. They had been training models for windmill productivity in the cloud for about 12 months, observing the patterns and trying to model and apply them, and not getting the productivity they wanted.
So we were able to take the Tiber Edge Platform and ingest data directly from the inverters and generators. There's no camera here, no vision; we're just taking time-series data from the inverters and generators, as well as from wind trackers and moisture sensors. So we can do multimodal, time-series data ingestion, and then we're able to retrain the model on the fly and apply it dynamically.
The reason I'm giving the windmill example is that you can't predict the wind or weather conditions. So you can't have a fixed model that just applies all the time; you need a dynamic model that adapts.
So now watching the wind directions, the moisture conditions, etc., this model is dynamically moving the windmill back and forth, keeping up in order to maintain a 30% productivity KPI.
We’re able to achieve that in three weeks compared to them trying to have trained models from the cloud for 12 months and not achieving it.
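The "dynamic model that adapts" idea can be sketched with a sliding window over recent sensor readings: old weather ages out, and the controller re-decides from what it saw most recently. This is a deliberately tiny caricature of the retraining loop described above; the real pipeline ingests multimodal time series and retrains actual models.

```python
from collections import deque

class SlidingWindowModel:
    """Toy adaptive controller: remember only the last N (angle, output)
    samples and steer toward the angle that recently produced the most power."""
    def __init__(self, window: int = 10):
        self.samples = deque(maxlen=window)   # old conditions age out

    def update(self, wind_angle: float, output_kw: float):
        self.samples.append((wind_angle, output_kw))

    def best_angle(self) -> float:
        # "Retrain": re-decide from the recent window, not from a fixed model.
        return max(self.samples, key=lambda s: s[1])[0]

model = SlidingWindowModel(window=3)
for angle, kw in [(10, 80.0), (20, 95.0), (30, 90.0), (25, 99.0)]:
    model.update(angle, kw)
# (10, 80.0) has aged out of the window; the controller now steers to 25.
print(model.best_angle())  # 25
```

The contrast with the 12-month cloud effort is the point: a fixed model trained on old data cannot track conditions that keep shifting, but a loop that keeps refitting on fresh local data can.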
Prakash Sangam:
Because you're applying it to the specific use case, in that location, in those conditions, rather than a generic model. And it's dynamically adapting itself, retraining itself against the changing weather conditions.
Muneyb Minhazuddin:
Yeah.
Prakash Sangam:
So that’s a good segue to the next question. So what are the sectors and segments that you’re seeing a lot of traction for Tiber?
Muneyb Minhazuddin:
Oh, yeah, that's a good question. Across the board, by the way. I gave you the windmill example; there's renewables, solar panels, oil and gas, warehousing, manufacturing, health care, across the board. And they're all solving very interesting problems. Pallet detection in the warehouse is one, for one of our large customers: a truck arrives with damaged pallets, and once they receive them, they have to underwrite the insurance cost. As a pallet rolls into the bay, we can automatically flag the damage, so they don't have to accept it into receivables.
That's saving that customer a couple hundred million dollars in insurance underwriting. And health care, a fabulous, fabulous example in health care. Life-saving, right? A lot of electronic medical records, like cancer imaging, sit locked inside each institution and can't be pushed to the cloud because of HIPAA, data privacy, and patient requirements; they can't be centralized.
So, like the windmill example I just explained, we're able to do inferencing and model retraining on pockets of data across 71 institutions, leave the data where it is, and federate the learnings across those 71 institutions. Now you have a model that's been built, federated across 71 institutions, on 50 million records, which is now supplied to pathologists and oncologists, where before, each institution had fewer than a million records on its own, which is not good enough.
We were able to crack that because we created the model by federating 71 institutions' small pockets of data, which could never be centralized.
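The pattern described here is federated learning: each site trains on its own records, only model parameters leave the building, and a coordinator combines them weighted by how much data each site saw. A minimal sketch, with a toy "model" (the local mean) standing in for real training:

```python
# Toy federated averaging. The "model" each hospital produces is just the
# mean of its local records, so the federated result can be checked against
# the pooled mean -- without the raw data ever being pooled.

def local_train(records: list[float]) -> float:
    # Stand-in for on-site training: data never leaves the institution.
    return sum(records) / len(records)

def federate(weights: list[float], sizes: list[int]) -> float:
    # Combine the site models, weighting each by its amount of data.
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

sites = {"hospital_a": [1.0, 2.0, 3.0],
         "hospital_b": [10.0, 20.0],
         "hospital_c": [5.0]}

weights = [local_train(r) for r in sites.values()]
sizes = [len(r) for r in sites.values()]
global_model = federate(weights, sizes)
# Equals the mean over all 6 records, computed without centralizing them.
print(global_model)
```

Real systems federate gradient updates or model weights rather than means, and add privacy machinery on top, but the HIPAA-friendly property is the same: the 50 million records stay where they are; only the learnings travel.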
Prakash Sangam:
Correct. I can see edge processing having a huge impact on health care specifically, because traditionally it is highly regulated in terms of who has access and so on.
Muneyb Minhazuddin:
So, yeah. And interesting to see how that develops. So across the spectrum, amazing set of new use cases are starting to emerge because of this.
Prakash Sangam:
And any of these are commercial now?
Muneyb Minhazuddin:
Oh, yeah. Oh, yeah. The health care example that I just explained was, you know, we talked about this at SAP’s conference recently on stage. So along with SAP, because SAP has HANA instances of all these medical records. So we did that with them. And yeah, we had that publicly kind of talked about. We have deployments now happening on the retail side.
One of our early customers has deployed about 200 edge sites. The windmills are scaling beyond, you know, into pilot phases. So we have about 40-odd POCs and pilots running right now.
Prakash Sangam:
Okay, perfect. And then typically how big are these deployment instances and points and so on?
Muneyb Minhazuddin:
So it’s about three months since we launched it. So we’re still kind of going into the pilot phases. So 100, 200 are good. Now the opportunity sizes of each one look like, you know, hundreds of thousands of edges eventually. But we’re just getting into the phase where people come, evaluate, and then, you know, go, oh, this is solving my problem.
The big attraction, how should I say it, is that people are able to start their edge AI journey on their existing infrastructure, like that windmill example. There was no greenfield, brand-new hardware; we were able to go into the brownfield environment, capture existing data, model it, and give them an outcome. So edge AI in a brownfield is what's hot in the market.
Prakash Sangam:
Oh, I see. That’s interesting. And what is the go-to market strategy here? It’s basically through SIs primarily or?
Muneyb Minhazuddin:
We have two types of go-to-market, right? One is, like you called out, system integrator-led; a lot of these enterprise solutions are SI-led. So at the launch of the platform earlier this year, we announced partnerships with system integrators like Accenture, Wipro, Capgemini, and LTTS. They're one route to market, because they bring all these use cases to us and solve them for the enterprise.
The second is that we're enabling a lot of our existing ecosystem. At launch we announced that Tiber Edge works with both AWS and Microsoft on the CSP side. On the OEM side, we demonstrated with Lenovo at Mobile World Congress and with Dell at Hannover Messe. And at Red Hat Summit, our CEO Pat Gelsinger was on stage announcing the partnership between the Tiber Edge Platform and Red Hat.
So I would say there’s an SI route to market, and then there is our ecosystem route to market, which is OEMs, ODMs, OSVs like SAP, Red Hats of the world, and as well as CSPs like Microsoft and AWS and the rest of the world.
Prakash Sangam:
Interesting. You mentioned the cloud providers, the hyperscalers, and some of the IoT, I mean industrial, folks as well. Many of these have their own platforms for manageability, right? Provisioning and so on: AWS has Greengrass, Microsoft has Azure IoT Edge, and then you look at industrial folks like Siemens or GE, and they have their own platforms. How is all of this coming together?
Muneyb Minhazuddin:
Yeah, who binds them all together, right? And they're not competing with each other. It's an interesting thing. You listed a good set of folks, so let me generalize: the hyperscalers, the OEMs, the IT world, and the OT world, because you mixed them all in.
The IT world is very comfortable going, hey, I've done the data center and cloud, pretty easy, I just need to bring the same template. Because the data center and cloud got cleaned up; 20, 25 years ago, that was a mess too, right? I remember, as a young engineer, trying to connect my serial port, my parallel port, my infrared, all kinds of interfaces.
Then USB came along and, bang, done, right? Everything simplified and streamlined. The edge still has that mess, because every industry has its own set of protocols and converters, I/Os and interfaces and all of that. So the big gap for the IT providers, and I mean all the CSPs you listed, Microsoft, Amazon, the Red Hats and VMwares of the world, is that, like you said, the edge is super heterogeneous, and they don't have the heterogeneity support that's needed at the edge.
They're looking for a partner, and where we come in and help is this: today, when you go to Amazon and order an Outpost, that's a fixed 1U or 2U box that Amazon ships, right? You can do homogeneity in a data center or cloud: the same thing, repeated a million times. At the edge, every retail store is different.
Even within retail, a Starbucks is different from a Walmart, which is different from a Home Depot, right? And a hospital is different, manufacturing is completely different, oil and gas, education; everything requires a variety of hardware flavors. The hyperscalers can't maintain hundreds or thousands of ODM hardware variants; they can't maintain that kind of hardware compatibility list for their software.
So the partnership with a lot of the IT players is basically to help them expand. You can go to Azure Arc today, and all the Intel-based ODM hardware now shows up at the edge for them. Suddenly, we've expanded the catalog beyond just the Azure Stack HCI platform, which is a Dell server, to all kinds of ODMs they can open their edge services to.
In return, we're bootstrapping: we bring up our silicon, bring up the OS, and call into Azure cloud services, pulling services from Azure to close that loop for them. So each of these technology partnerships is a different mixture of where the Tiber Edge Platform helps them close the gaps they have.
Prakash Sangam:
It's kind of a glue that connects the standard offerings the cloud providers have with the heterogeneity they face.
Muneyb Minhazuddin:
Exactly. Think of it as a middle layer: southbound, it has interfaces to all kinds of ODMs and OEMs, and northbound, it has interfaces to all the clouds and the ISVs and OSVs of the world. Yeah, I think "middle layer" is a good definition. It basically takes the complexity out of both sides.
Prakash Sangam:
Okay, that's good to know. Another question along the same lines: if you look at Intel's history, you've basically been working with a few large-scale players, right? But when you come to edge and IoT, it's a long tail of thousands and thousands of small players. So are you looking to the SIs to do that outreach and work with them?
Muneyb Minhazuddin:
Our intent is to scale through both of these routes, right? Internally, we call it the 2P-3P model. The two-party (2P) model is us and the SIs, with the SIs leading the enterprise conversation. The three-party (3P) model is where we provide elements of the platform to the hyperscalers, the OEMs, the OSVs, and the ISVs of the world, and they take it to market through their own channels; it's a more embedded model. So we're doing both.
And the big difference is there's a value play and a volume play. The 3P model gives us a massive volume play through the whole ecosystem, and the 2P model gives us a huge value play through the SIs, who go solve the problem for the enterprise. So we're intentionally putting that two-speed model in place, because historically Intel hasn't done this; that's why we created the Tiber brand and so on.
Intel has usually had a lot of software that drives our silicon, and we just landed it in the market without commercializing it. As we set out to commercialize it, we have to get more thoughtful about these multiple routes to market. Our traditional route to market is that our silicon gets embedded into the ecosystem, the 3P model in general, and then gets sold inside appliances as part of a bigger solution.
That's the 3P model, where we supply unique software value propositions that pass through our ecosystem. The 2P model, through the SIs, is a value-selling motion where the software value also gets recognized through the SI. So we're intentionally creating this two-speed model for the first time, as we think about commercializing software. Otherwise, people generally take system-level software from us for free.
We’ll continue to give that system-level software for free, but now we’re coming up with the ease of operationalization, ease of scaling, ease of manageability, security, which we are able to kind of go, oh, this is value-added software, so we can monetize that.
Prakash Sangam:
So I mean, as you mentioned, a lot of the software that Intel does is kind of given free, right? Either it’s reference software, or most of the time it’s free. But that is changing with Tiber Edge, right? You’re going to monetize it, and coming up with a separate brand makes me think you want to do it even more across the board, right?
Would that confuse some of your customers, even your internal customers? That kind of needs a different mindset and so on. So how are you managing that?
Muneyb Minhazuddin:
Yeah, as I said, our traditional route to market has been through the ODMs, OEMs, and CSPs; we supply embedded silicon to them. We haven't traditionally pursued system integrators or software sales, so I had to create that two-speed route to market. And, by the way, for the first time we created software SKUs and established a software supply chain, software operations, a contracting house, all of that built out.
We had to create software distribution, SI and reseller contracts, all of that put in place, right? Because the back end on the software side is as heavy, or actually heavier, than just saying "I want to sell software." We had to build out all the rev ops, the operations, the finance, the billing, all of that. We've done that, which shows how intent we are.
As I said, we'll continue to support free software, reference software for our silicon; that has to happen. But at the edge specifically, we see a gap. If you go back two or three decades, there was the data center, and virtualization created the normalization layer there, on top of client-server architecture and what have you. Then the cloud happened about a decade and a half ago, and that created the same kind of streamlining in the cloud.
That level of streamlining is not there at the edge today, because, like you said, it's super heterogeneous and there's no standardization. So there is a gap in the market in the software space. And we've already spent about 10 years at the edge, in IoT, supplying silicon to the Siemens, Rockwells, and Honeywells of the world, et cetera.
We have tons of experience managing that environment. So we're saying: that standardization layer, that operationalization layer in software, doesn't exist, and there's an opportunity for us to step in and close that gap, like my past employer VMware did for the data center about two decades ago. They created that layer and monetized it, and the cloud providers came and monetized the IaaS, PaaS, and SaaS layers for the cloud.
That kind of streamlined platform is missing at the edge. Everybody's trying to move in, but it's pretty complicated, as our discussion shows, varying by industry and segment, with a lot of variation in regulations and requirements. And we find ourselves in a unique position to create that normalized platform.
Prakash Sangam:
Yeah, that's good. As I mentioned, AI itself is becoming hybrid, right? There's a cloud component, there's the edge, and the device as well. So specifically on AI, how is this going to work? Everybody brings their own piece, right? Devices might have their own AI components and models, a lot of AI runs in the cloud, and the industrial IoT players have their own models specific to their markets. So how does this all gel together?
Muneyb Minhazuddin:
Yeah, it's a complex situation. And wherever there's complexity, Prakash, is where you can actually monetize and make money, because people want simplification. So we're doing two things, right?
You've probably already heard us talk about OPEA, which is an open standard. We're driving standardization across different things, because you don't want to do this in a closed system; you want to do it in an open system. And the only way you drive standardization, like you called out, across devices, key players, et cetera, is to drive toward a standard set of interfaces.
So we've announced standardization efforts across multiple fronts. Think about OPEA, which we announced: the Open Platform for Enterprise AI, essentially composable building blocks for state-of-the-art generative AI systems, including LLMs, data stores, et cetera. There has to be some sort of standardization, a common set of interfaces, because enterprise data is pretty unstructured: documents and so on.
So you have to have standard interfaces for ingestion, for retrieval, for query, et cetera. That's one aspect: driving standardization, composable building blocks and blueprints for RAG and generative AI, and how you do data ingestion and data streamlining. Our platform already does that at commercial grade for data ingestion.
We also announced a partnership, when I was at Hannover Messe, with Project Margo, a Linux Foundation edge project that defines standardized interfaces for Siemens, Rockwell, Honeywell, Schneider, ABB, all of those folks. As OT vendors, they have their own OT devices and systems, but they're looking for standard northbound API interfaces, something to write to, that will let them receive models, orchestration, control, and logic.
So across the different layers, infra, OT, data, we're driving standardization through open standards, while the platform supports all of this from day one.
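The "standard interfaces for ingestion, retrieval, and query" idea can be made concrete with a toy RAG pipeline. The interfaces and the word-overlap retrieval below are illustrative assumptions, not the OPEA specification; in a real system the retrieval would use embeddings and the final step would call an LLM.

```python
# Toy composable RAG pipeline with ingest / retrieve / query interfaces.
class DataStore:
    def __init__(self):
        self.docs: list[str] = []

    def ingest(self, doc: str):
        """Standard ingestion interface: accept unstructured documents."""
        self.docs.append(doc)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        """Naive retrieval: rank documents by words shared with the query."""
        qwords = set(query.lower().split())
        scored = sorted(self.docs,
                        key=lambda d: len(qwords & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

def answer(query: str, store: DataStore) -> str:
    """Standard query interface: retrieve context, then 'generate'."""
    context = store.retrieve(query, k=1)
    return f"Based on: {context[0]}"      # an LLM call would go here

store = DataStore()
store.ingest("Windmill output improved 30 percent after retraining")
store.ingest("Pallet damage detection cut insurance costs")
print(answer("how much did windmill output improve", store))
```

The value of pinning down these three interfaces is exactly what the standardization push is after: any compliant ingestion block, vector store, or generator can be swapped in without rewriting the pipeline around it.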
Prakash Sangam:
But it will still take a lot of effort to make sure this all works well, right? You define the standards and the interfaces, but making sure everything actually works together is still a lot of work, I guess.
Muneyb Minhazuddin:
Oh yeah, and this, by the way, is where having the SIs in the mix is super important, because this is what they do day in and day out: sit with an enterprise, look at all the complex problems, create the blueprints, deploy them, and maintain them.
Prakash Sangam:
Exactly. Okay. And in terms of monetizing, what model are you looking at? Is it licenses? What kind of monetization model do you have in mind?
Muneyb Minhazuddin:
We have subscription licenses available today. We’re already shipping since we launched it in Q1. We’re already recognizing software revenues on the licenses. You can get them. The system integrators are getting them from distributors. We have enabled distributors for this today. So, there’s subscription-based licenses for Tiber Edge platform already available.
Prakash Sangam:
And the subscription is on per device or per instance? Or how is that?
Muneyb Minhazuddin:
Yeah, per Edge node, as we call it.
Prakash Sangam:
Okay. Perfect. And you said you already recognize revenue on this.
Muneyb Minhazuddin:
Yeah. We launched it in Q1. We had a healthy Q1 run more than I anticipated. I can’t talk about Q2. It’s not done yet. But we’re ahead of where we planned to be.
Prakash Sangam:
Okay, perfect. I mean, seeing software revenue is, I think, really significant.

Muneyb Minhazuddin:
Yeah. And this is where, you know, it's been three months since launch, right? In three months you'd expect six to eight POCs and trials, and those would be good signs; if I had six to eight, I'd still be flying high. I have 40-plus. And some of those POCs are converting into paying customers, which is good. When customers actually pay, then you know you're making a difference.
Prakash Sangam:
Correct. And especially in IoT, I've been around long enough; everybody does POCs, right? Tons of POCs, and they don't really mean anything. Once the revenues start flowing in, that's when I say, okay, this is real.
Muneyb Minhazuddin:
Yeah, yeah. Absolutely. Absolutely.
Prakash Sangam:
Okay. So and what’s next for Tiber Edge?
Muneyb Minhazuddin:
We have quarterly releases. We launched in Q1, we had a Q2 release ship out, and we have Q3 and Q4 releases coming. So we're continuing to build out the roadmap. As we get to our early customers, we're getting tons of new requirements and evolving the platform. The AI space is not dull, as you know; there are new things coming up every day, and customers are like, hey, can you interface with this, can you interface with that? So the roadmap is quite healthy.
We have quarterly launches planned, expanding into more segments, models, and use cases. So yeah, it's a pretty vibrant launch and roadmap.
Prakash Sangam:
So in terms of cadence, you said you will have a release every quarter?
Muneyb Minhazuddin:
We have a quarterly release. I know components like OpenVINO have faster, roughly six-week cadences, but the platform end-to-end has a quarterly cadence.
Prakash Sangam:
This was a great discussion, Muneyb. Thank you very much for all the insights. As we discussed, the edge is a complex market, and I'll be closely watching it unfold over the next few years. I would love to have you back on the show in the near future to discuss how the Tiber journey has gone. Best of luck with it, and thank you again for coming on Tantra's Mantra.
Muneyb Minhazuddin:
Thank you for having me on the show.
Prakash Sangam:
So folks, that's all we have for now. Hope you found this discussion informative and useful. If so, please hit like and subscribe to the podcast on whatever platform you are listening to this on. I will be back soon with another episode, shedding light on another interesting subject.
Bye Bye for now…