Generative AI
It would be an understatement to say that Generative AI (GenAI) is having its day in the sun. Most of today's GenAI, powered by Large Language Models (LLMs), runs in the centralized cloud on power-hungry processors. However, it will soon have to be distributed across different parts of the network and value chain, including devices such as smartphones, laptops, and the edge cloud. The main drivers of this shift will be privacy, security, hyper-personalization, accuracy, and better power and cost efficiency.
AI model "training," which occurs less often and requires extreme processing, will remain in the cloud. However, the other part, "inference," where the trained model makes predictions based on live data, will be distributed. Some model "fine-tuning" will also happen at the edge.
Challenges of today’s cloud-based GenAI
There is no question that AI will touch every part of human and even machine life. GenAI, one subset of AI applications, will be similarly pervasive. That means the privacy and security of the data GenAI processes will be critically important, and unfortunately, there is no easy or guaranteed way to ensure either in the cloud.
Equally important is GenAI's accuracy. For example, ChatGPT's answers are often riddled with factual, demonstrable errors (Google "ChatGPT hallucinations" for details). There are many reasons for this behavior. One is that GenAI is derived intelligence: it knows 2+2=4 because more people than not have said so. GenAI models are trained on enormous generic datasets, so when that training is applied to specific use cases, there is a high chance that some results will be wrong.
Why GenAI needs to be distributed
There are many reasons for distributing GenAI, including privacy, security, personalization, accuracy, power efficiency, cost, etc. Let’s look at each of them from both consumer and enterprise perspectives.
Privacy: As GenAI plays a more meaningful role in our lives, we will share even more confidential information with it. That might include personal, financial, and health data, emotions, and details that even your family and closest friends may not know. You do not want all that information sent to, and stored perpetually on, a server you have no control over. But that's precisely what happens when GenAI runs entirely in the cloud.
One might ask: we already store so much personal data in the cloud, so why is GenAI any different? That's true, but most of that data is segregated, and in many cases access to it is regulated by law; for example, health records are protected by HIPAA regulations. Handing all that data to GenAI running in the cloud and letting it aggregate everything is a disaster waiting to happen. So, it is apparent that the most privacy-sensitive GenAI use cases should run on devices.
Security: GenAI will have an even more meaningful impact on the enterprise market. Data security is a critical consideration when utilizing GenAI for enterprises. Even today, the concern for data security is making many companies opt for on-prem processing and storage. In such cases, GenAI has to run on the edge, specifically on devices and the enterprise edge cloud, so that data and intelligence stay within the secure walls of the enterprise.
Again, one might ask: since enterprises already use the cloud for their IT needs, why would GenAI be any different? As in the consumer case, GenAI's understanding of the business will be so deep that even a small leak anywhere could be detrimental to a company's existence. In times when industrial espionage and ransomware attacks are prevalent, sending all that data and intelligence to a remote server for GenAI is extremely risky. An eye-opening early example was the recent case of Samsung engineers leaking trade secrets by using ChatGPT to process company-confidential data.
Personalization: GenAI has the potential to automate and simplify many things in your life. To achieve that, it has to learn your preferences and apply the appropriate context to personalize the whole experience. Instead of hauling, processing, and storing all that data and optimizing a large, power-hungry generic model in the cloud, a local model running on the device would be far more efficient. That would also keep all those preferences private and secure. Additionally, the local model can utilize the sensors and other information on the device to better understand the context and hyper-personalize the experience.
Accuracy and domain specificity: As mentioned, using generic models trained with generic data for specific tasks will result in errors. For example, a model trained on financial industry data can hardly be effective for medical or healthcare use cases. GenAI models must be trained for specific domains and further fine-tuned locally for enterprise applications to achieve the highest accuracy and effectiveness. These domain-specific models can also be much smaller with fewer parameters, making them ideal for running at the edge. So, it is evident that running models on devices or edge cloud is a basic need.
Since GenAI is derived intelligence, the models are vulnerable to hackers and adversaries trying to derail or bias their behavior. A model within the protected environment of an enterprise is less susceptible to such acts. Although hacking large models with billions of parameters is extremely hard, the stakes are high, so the chances are non-zero.
Cost and power efficiency: It is estimated that a simple GenAI exchange costs 10x more than a keyword search. With the enormous interest in GenAI and the forecasted exponential growth, running all of that workload in the cloud looks expensive and inefficient, even more so when we know that many use cases will need local processing for the reasons discussed earlier. Additionally, AI processing on devices is much more power efficient.
Then the question becomes, “Is it possible to run these large GenAI models on edge devices like smartphones, laptops, and desktops?” The short answer is YES. There are already examples like Google Gecko and Stable Diffusion optimized for smartphones.
In the fast-moving generative AI (Gen AI) market, two sets of recent announcements, although unrelated, portend why and how this nascent technology could evolve. The first set was Microsoft’s Office 365 Copilot and Adobe’s Firefly announcements, and the second was from Qualcomm, Intel, Google and Meta regarding running Gen AI models on edge devices. This evolution from cloud-based to Edge Gen AI is not only desirable but also needed, for many reasons, including privacy, security, hyper-personalization, accuracy, cost, energy efficiency, and more, as outlined in my earlier article.
While the commercialization of today's cloud-based Gen AI is in full swing, efforts are underway to optimize models built for power-guzzling GPU server farms to run on power-sipping edge devices with efficient mobile GPUs and neural and tensor processors (NPUs and TPUs). The early results are very encouraging and exciting.
Gen AI Extending to the Edge
Office 365 Copilot is an excellent application of Gen AI for productivity use cases. It will make creating attractive PowerPoint presentations, analyzing and understanding massive Excel spreadsheets, and writing compelling Word documents a breeze, even for novices. Similarly, Adobe's Firefly creates eye-catching images from a simple typed description. Evidently, both will run on Microsoft's and Adobe's clouds, respectively.
These tools are part of incredibly popular suites with hundreds of millions of users. That means when they are commercially launched and customer adoption scales up, both companies will have to ramp up their cloud infrastructure significantly. Running Gen AI workloads is extremely processor-, memory-, and power-intensive, almost 10x more than regular cloud workloads. This will not only increase capex and opex for these companies but also significantly expand their carbon footprint.
One potent option to mitigate the challenge is to offload some of that compute to edge devices such as PCs, laptops, tablets, and smartphones. For example, run the compute-intensive "learning" algorithms in the cloud and offload "inference" to edge devices when feasible. The other major benefit of running inference on the edge is that it addresses privacy, security, and specificity concerns and can offer hyper-personalization, as explained in my previous article.
This offloading or distribution could take many forms, ranging from sharing the inference workload between the cloud and the edge to fully running it on the device. Sharing the workload could be complex, as no standardized architecture exists today.
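To make the idea concrete, below is a minimal sketch of how an application might route a request between a local model and the cloud. Everything here is illustrative: the function names, the 10-billion-parameter threshold, and the privacy flag are my assumptions, not any vendor's actual API.

```python
# Illustrative sketch only: routing a Gen AI request between an on-device
# model and a cloud endpoint. Names and thresholds are hypothetical.

ON_DEVICE_PARAM_LIMIT = 10_000_000_000  # ~10B params, per current laptop-class claims

def run_on_device(prompt: str) -> str:
    # Placeholder for a quantized local model running on an NPU/GPU.
    return f"[edge] {prompt}"

def run_in_cloud(prompt: str) -> str:
    # Placeholder for a remote inference API call.
    return f"[cloud] {prompt}"

def route_inference(prompt: str, model_params: int, privacy_sensitive: bool) -> str:
    """Keep sensitive or small-model requests local; send the rest to the cloud."""
    if privacy_sensitive or model_params <= ON_DEVICE_PARAM_LIMIT:
        return run_on_device(prompt)  # data never leaves the device
    return run_in_cloud(prompt)

print(route_inference("Summarize my health log", 7_000_000_000, privacy_sensitive=True))
```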
What is needed to run Gen AI on the edge?
Running inference on the edge is easier said than done. One thing going for this approach is that today's edge devices, be they smartphones or laptops, are powerful and highly power efficient, offering far better performance per watt. They also have strong AI capabilities with integrated GPUs, NPUs, or TPUs, and there is a strong roadmap for these processor blocks.
Gen AI models come in different types with varying capabilities, including what kind of input they utilize and what they generate. One key factor that decides the complexity of a model, and the processing power needed to run it, is the number of parameters it uses. As shown in the figure below, model sizes range from a few million to more than a trillion parameters.
Gen AI models have to be optimized to run on edge devices. Early demos and claims suggest that devices such as smartphones can today run models with roughly one to several billion parameters. Laptops, which can utilize discrete GPUs, can go even further and run models with more than 10 billion parameters now. These capabilities will continue to expand as devices become more powerful. The challenge, however, is to optimize these models without sacrificing accuracy, or at least with minimal, acceptable error rates.
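A quick back-of-the-envelope calculation shows why parameter count dominates what fits on a device: weight memory is roughly parameters times bytes per weight. The sketch below is my own illustration and ignores runtime overhead such as activations and the KV cache.

```python
# Back-of-the-envelope weight memory: parameters x bytes per weight.
# Real runtimes add overhead (activations, KV cache), so these are lower bounds.

def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"7B model @ {bits}-bit: {weight_memory_gb(7e9, bits):5.1f} GB")

# Output: 28.0 GB at 32-bit (server territory) down to 3.5 GB at 4-bit,
# which starts to fit within a flagship smartphone's memory.
```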
Optimizing Gen AI models for the edge
A few things help in optimizing Gen AI models for the edge. First, in many use cases, inference is run for specific applications and domains. For example, inference models specific to the medical domain need far fewer parameters than generic models. That makes running these domain-specific models on edge devices much more manageable.
Several techniques are used to optimize trained cloud-based AI models for edge devices; the top ones are quantization and compression. Quantization reduces standard 32-bit floating-point models to 16-bit, 8-bit, or 4-bit integer models. This substantially reduces the processing and memory needed, with minimal loss in accuracy. For example, a Qualcomm study has shown that these steps can improve the performance-per-watt metric by 4x, 16x, and 64x, respectively, often with less than 1% degradation in accuracy, depending on the model type.
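To illustrate the idea (this is a generic sketch, not Qualcomm's actual toolchain), the snippet below symmetrically quantizes a float32 weight tensor to int8 and measures the round-trip error:

```python
import numpy as np

# Minimal sketch of symmetric post-training quantization: float32 -> int8.
# Production toolchains add per-channel scales, calibration data, and more.

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0  # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)  # stand-in weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"memory: {w.nbytes} B -> {q.nbytes} B (4x smaller)")
print(f"mean abs round-trip error: {np.abs(w - w_hat).mean():.5f}")
```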
Compression is especially useful in video, image, and graphics AI workloads, where significant redundancies exist between successive frames. Those redundant frames can be detected and skipped rather than processed, which substantially reduces computing needs and improves efficiency. Many such techniques could be utilized to optimize Gen AI inference models for the edge.
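A toy version of that idea is sketched below: compare each incoming frame to the last processed one and rerun the expensive model only when the change exceeds a threshold. The threshold, frame sizes, and stand-in "model" are arbitrary illustrations.

```python
import numpy as np

# Toy sketch of redundancy-based frame skipping for a video AI workload:
# if a frame barely differs from the last processed one, reuse the old result.

CHANGE_THRESHOLD = 0.02  # arbitrary; tune per use case

def process_stream(frames, model):
    last_frame, last_result = None, None
    for frame in frames:
        if last_frame is not None and np.abs(frame - last_frame).mean() < CHANGE_THRESHOLD:
            yield last_result        # near-duplicate frame: skip inference
            continue
        last_result = model(frame)   # run the expensive model
        last_frame = frame
        yield last_result

# Ten nearly identical frames; the stand-in "model" just averages pixels.
frames = [np.full((8, 8), 0.5) + np.random.randn(8, 8) * 0.001 for _ in range(10)]
results = list(process_stream(frames, model=lambda f: f.mean()))
print(f"{len(results)} results, but the model ran only once")
```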
There has already been considerable work and some early success with this approach. Meta's latest Llama 2 (Large Language Model Meta AI) Gen AI model, announced on July 18th, 2023, will be available for edge devices; it comes in versions with 7 billion to 70 billion parameters. Qualcomm announced that it will make Llama 2-based AI implementations available on flagship smartphones and PCs starting in 2024. The company has demonstrated ControlNet, a 1.5-billion-parameter image-to-image model currently run in the cloud, on the Samsung Galaxy S23 Ultra. In February 2023, it also demonstrated Stable Diffusion, a popular text-to-image model with 1 billion parameters, running on a smartphone. Intel showed Stable Diffusion running on a laptop powered by its Meteor Lake platform at Computex 2023. And Google, when announcing its next-generation PaLM 2 models at Google I/O 2023, talked about a version called Gecko designed primarily for edge devices. Suffice it to say that much research and development is happening in this space.
In closing
Although most of the Gen AI today is being run on the cloud, it will evolve into a distributed architecture, with some workloads moving to edge devices. Edge devices are ideal for running inference, and models must be optimized to suit the devices’ power and performance envelope. Currently, models with several billion parameters can be run on smartphones and more than 10 billion on laptops. Even higher capabilities are expected in the near future. There is already a significant amount of research on this front by companies such as Qualcomm, Intel, Google, and Meta, and there is more to come. It will be interesting to see how that progresses and when commercial applications running Gen AI on edge devices become mainstream.
Qualcomm vs. Arm
When the legal struggle between long-term allies Qualcomm and Arm became public, everybody thought it was an innocuous case that would quickly settle. Although I believe that is still the case, the recent uptick in hostilities points to a more convoluted battle.
It all started when Qualcomm announced and then completed its acquisition of processor design startup Nuvia in 2021. Nuvia was developing a new CPU architecture that it claimed was superior to anything on the market. Qualcomm has publicly stated that it will use Nuvia's designs and team across its entire portfolio, including smartphones, tablets, PCs, automotive, IoT, and others.
Nuvia's designs run Arm's instruction set. Nuvia had an Instruction Set Architecture (ISA) license from Arm, with certain licensing fees; this license is also known as an Architecture License Agreement (ALA) in the legal documents. Since Qualcomm also has an ALA with Arm, with a different licensing fee structure, there is a difference of opinion between Qualcomm and Arm on which contract should apply to Nuvia's current designs and their evolutions.
If you want to know more about the types of licensing Arm offers and other details, check out this article.
According to the court documents, the discussions between Qualcomm and Arm broke down, and, unexpectedly, Arm unilaterally canceled Nuvia's ALA and asked it to destroy all its designs. It even demanded that Qualcomm not use Nuvia engineers for any CPU designs for three years. Arm officially filed the case against Qualcomm on August 31, 2022.
Qualcomm filed its reply on September 30, 2022, summarily rejecting Arm’s claims. Following that, on October 26, 2022, Qualcomm filed an amendment alleging that Arm misrepresented Qualcomm’s license agreement in front of Qualcomm’s customers. Further, it asked the court to enjoin Arm from such actions.
Why is Arm really suing Qualcomm? Is it about the PC market?
Easy question first. No, it’s not just about the PC market. Qualcomm’s intention to use Nuvia designs across its portfolio is an issue for Arm.
Qualcomm has both an ALA and a Technology License Agreement (TLA) with Arm. The former is required if you use only Arm's instruction set, the latter if you use cores designed by Arm. TLA fees are orders of magnitude higher than ALA fees. Qualcomm currently uses Arm cores under TLA licensing and, according to Strategy Analytics analyst Sravan Kundojjala, pays Arm an estimated 20 to 30 cents per chip.
Since Qualcomm negotiated the contract years ago, its ALA rate is probably very low. So, if Qualcomm adopts Nuvia designs for its entire portfolio, it will only pay this lower ALA fee to Arm. For Arm, that puts all the revenue coming from Qualcomm at risk. That is problematic for Arm, especially when it is getting ready for its IPO.
With the Nuvia acquisition, Arm saw an opportunity to renegotiate Qualcomm's licensing contract. Moreover, Nuvia's ALA rate must be much higher than Qualcomm's, for two reasons: first, Nuvia was a startup with little negotiating leverage; second, it was designing higher-priced, low-volume chips, whereas Qualcomm primarily sells lower-priced, high-volume chips. So, it is in Arm's interest to insist that Qualcomm pay Nuvia's rate. But Qualcomm disagrees, as it believes its own ALA covers Nuvia designs.
Core questions of the dispute
Notwithstanding the many claims and counterclaims, this is purely a contract dispute, and it boils down to two core questions:
- Does Nuvia's ALA require mandatory consent from Arm to transfer its designs to a third party, in this case Qualcomm?
- Does Qualcomm's ALA cover designs it acquires from a third party, in this case Nuvia?
Clearly, there is a disagreement between the parties regarding these questions. Since the contracts are confidential, we can only guess and analyze them based on the court filings. I am sure many things are happening behind the scenes as well.
Let's start with the first one. Since Nuvia was a startup, its acquisition by a third party was a given, and I assume there is some language about this in the ALA. Interestingly, though, Arm's complaint doesn't cite any specific clause of the contract supporting this; it only claims that Nuvia requires consent. The argument that Arm didn't want to disclose the clause in a public document doesn't hold either: it could have cited the clause with the details redacted, just like the other clauses mentioned there.
In the amended Qualcomm filing, there is some language about needing consent to “assign” the license to the new owner. But according to Qualcomm, it doesn’t need this license “assignment” as it has its own ALA.
If there is no specific clause in the Nuvia ALA covering acquisition by an existing Arm licensee, then shame on the Arm contract team.
There is not much clarity on the second question. Most of Arm's claims in the case relate to the Nuvia ALA. Qualcomm claims it has wide-ranging licensing contracts with Arm that cover the use of Nuvia's designs. But I am sure this question about Qualcomm's ALA will come up as the case progresses.
Closing thoughts
Qualcomm and Arm have been great partners for a long time and together have created a vast global ecosystem. However, recent developments point to a significant rift between the two. In particular, Qualcomm's allegation that Arm threatened to cancel its license in front of Qualcomm's customers is troubling. There was also alleged talk of Arm changing its business model and other extreme measures, which might unnerve other licensees as well. We are yet to hear Arm's reply to these allegations.
I think the widely anticipated out-of-court settlement is still the logical solution. When the case enters the discovery phase, both parties will have access to each other’s evidence and understand the relative strengths. Usually, that triggers a settlement.
In my opinion, neither party is interested in a lengthy court battle. Arm is heading for an IPO and doesn't need this threat hanging over its head, spooking its investors. Qualcomm is planning a major push into the PC market in collaboration with Microsoft and is supposedly basing its future SoC roadmap on Nuvia designs, so it would also like to end the uncertainty as early as possible. Notwithstanding this case, Qualcomm seems to be going ahead with its plans.
We are still at a very early stage of this case. Arm's lawyers have extensive licensing experience, and Qualcomm's lawyers are battle-hardened from their recent legal fights with Apple and the FTC. I will be closely watching the developments and writing about them, so be on the lookout.
Prakash Sangam is the founder and principal at Tantra Analyst, a leading boutique research and advisory firm. He is a recognized expert in 5G, Wi-Fi, AI, Cloud, and IoT.
In the ongoing Qualcomm vs. Arm saga, the US District Court of Delaware recently released the case schedule, setting the stage for the battle. It reveals two critical dates. The first is for the discovery process, during which early settlements often occur. The second is the trial date, which lays out how long the litigants have to wait for a legal resolution. Usually, the litigant that can wait longer has the upper hand, especially if the opposing party can't handle the uncertainty and is pressed for time.
Unless either party realizes during discovery that its case is weak, we are in for a long battle. In that scenario, Qualcomm has the luxury of time, which might push Arm toward a quicker settlement on less favorable terms.
Note: If you would like to know the chronology of this battle, what the issues are, and what is at stake, check out my earlier article, "Qualcomm, Arm legal spat regarding Nuvia becomes more bitter."
It’s a game of chicken
Typically, litigation between large companies, such as this one, is a "game of chicken": it's a matter of who gives up first. There are a couple of stages where this giving up can happen.
The first is during the discovery phase, when both parties closely examine the other party’s evidence and other details. A good legal team can assess their case’s merits and, if weak, settle quickly.
If the discovery is inconclusive, the next thing both companies try to avoid is a long-drawn jury trial, which brings all the dirty laundry into the open. So, most such cases get settled before the jury trial begins. Even if the case goes to trial and is decided in the lower courts, litigants with the luxury of time can keep appealing to higher courts and delay the final verdict. So, it all boils down to who has the time advantage and can stay longer without succumbing to market, business, and other pressures.
Time is on Qualcomm's side
In the Qualcomm vs. Arm case, discovery is set to start on January 13th, 2023, and the trial on September 23rd, 2024. The actual trial is thus almost two years away. In such a situation, I think Qualcomm has the time advantage, for a few reasons:
- Qualcomm can keep using Nuvia IP without any issues till the matter is resolved.
  - Based on precedent, it is highly unlikely that Arm will get an injunction against Qualcomm. Probably realizing this, Arm has not yet even asked for a preliminary one. This means Qualcomm can keep making and selling chips based on the disputed IP while the case drags on.
  - No matter who wins, the other party will most likely appeal, which might extend the case to 2025 or even 2026.
- Qualcomm can indemnify and mitigate the risks of OEMs using the disputed IP.
  - Qualcomm is initially targeting the laptop/compute market with Nuvia IP through its newly announced Oryon CPU core. Arm might be expecting the litigation to discourage OEMs from developing products based on Oryon. However, Qualcomm can easily address that by indemnifying any investment risks OEMs face.
  - Since OEMs will initially utilize Oryon in fewer models, the overall shipments will be relatively small. Hence, indemnification is quite feasible for Qualcomm.
  - This makes it an easy decision for OEMs: their limited initial investments in experimenting with the new platform are protected, while the possible future upside is enormous.
  - Qualcomm's litigation prowess and recent successes against giants like Apple and the FTC will give a lot of confidence to OEMs.
- SoftBank would like to take Arm public as soon as it can, and the uncertainty of this case will significantly depress Arm's valuation.
  - This case puts Arm's future revenue from Qualcomm at risk. If Arm loses, its revenue from Qualcomm will be reduced to a paltry architecture license fee of an estimated 2 to 3 cents per device, orders of magnitude lower than the current rate.
  - If Arm wins, there is a considerable upside. That might attract some risk-tolerant investors, but they will demand a discount from SoftBank/Arm for that risk.
  - Qualcomm's strong track record in litigation will also affect investor sentiment.
- SoftBank may not want to wait until the case is over for the IPO.
  - If the case drags on till 2026, that is a long time in the tech industry, and a lot can change. For example, competing architectures like RISC-V might become more prominent; Qualcomm recently said it has already shipped 650 million RISC-V-based microcontrollers, and many major companies, including Google, Intel, and AMD, are members of the RISC-V group. This might reduce Arm's valuation if SoftBank waits longer for the IPO.
  - SoftBank's other bets are also not doing so great. The coming slowdown in the tech industry might push it to dispose of Arm sooner rather than later.
Arguably, prolonged litigation will also have some adverse impact on Qualcomm. As indicated many times by its CEO Cristiano Amon and other executives, Qualcomm plans to build on Nuvia IP and use it across its portfolio, from smartphones to automotive and IoT. The uncertainty might affect its long-term strategy, roadmap planning, and R&D investments. This might incentivize an early settlement, but it is not significant enough to compel one.
Closing thoughts
The Qualcomm vs. Arm case has become a game of chicken. There might be a settlement based on discovery. If not, SoftBank/Arm can’t afford prolonged litigation, but Qualcomm can. This might push Arm to settle sooner and at terms more favorable to Qualcomm.
There are some other considerations as well, such as the worst- and best-case scenarios and how the recent appointment of Qualcomm veteran and former CEO Paul Jacobs to the Arm board affects the dynamics of the case. Those will be the subject of a future article, so be on the lookout.
SoftBank is reportedly planning an Arm IPO while locked in a high-stakes legal battle with Qualcomm. The case is becoming a game of chicken, but given its enormous impact on many stakeholders, including the litigants, the huge Arm ecosystem, and especially the two major markets of personal and cloud computing, it is worthwhile to understand the best- and worst-case scenarios.
Before analyzing those, it is also important to address lingering questions and confusion about the case.
Clearing the confusion and misconceptions
Generally, anything related to licensing is shrouded in secrecy, creating confusion. Fortunately, the court filings have clarified some questions and misconceptions people have.
Is the case only about Nuvia Intellectual Property (IP), or does it affect the other licenses Qualcomm has with Arm?
It’s only about Nuvia’s Architecture License Agreement (ALA) with Arm and products based on Nuvia IP. None of Qualcomm’s other designs and products, which are covered by its existing license with Arm, are affected.
The case will be in court for more than 2-3 years. Can Qualcomm develop and commercialize products based on Nuvia IP during that time?
Yes. So far, Arm has not asked for injunctive relief against Qualcomm, and even if it did, such relief is highly unlikely to be granted: injunctions are hard to get in the US, and Arm would have to explain why it waited so long. So Qualcomm can keep using Nuvia's IP while the case drags on in the courts. It has already announced the Oryon CPU based on Nuvia IP and is working with several PC OEMs.
Can Arm unilaterally cancel its licenses with Qualcomm?
No. In its court filings, Qualcomm claims to have a legally binding licensing contract that runs for several more years. So, unless Qualcomm violates the conditions of that agreement, Arm can't unilaterally cancel the licenses.
As alleged in one of the court filings, can Arm change its practice of licensing to chipset vendors and instead license to device OEMs?
Arm can’t change licenses of existing licensees such as Qualcomm, Apple and many others, as they have legally binding agreements. But for any new licenses, Arm is free to engage with anybody it wishes, including device OEMs.
With these questions clarified, let’s look at the various scenarios.
Best and worst cases for Qualcomm
The best case for Qualcomm would be winning the case outright. A win would let it disrupt the personal and cloud computing markets and revolutionize the smartphone, automotive, and other markets, all while continuing to pay Arm its current ALA rate for the products that incorporate Nuvia IP.
The absolute worst case would be losing the case and all the subsequent appeals (more on this later). A more realistic worst-case scenario, though, is settling on terms favorable to Arm, meaning its ALA rate would go up. It is hard to predict by how much; the upper bound will probably be the rate in Nuvia's ALA.
Best and worst cases for Arm
Surprisingly, the best case for Arm is not winning the case and having the court order Qualcomm to destroy its designs and products (more on that later). Instead, it is the case tilting heavily in Arm's favor, pushing Qualcomm to settle on terms favorable to Arm, even before its IPO. Those terms will depend on Qualcomm's success with Nuvia IP, Arm's IPO valuation, and SoftBank's patience in extending the IPO timeline. It is reasonable to assume the upper bound would be the Nuvia ALA rate.
A settlement would also help calm the nerves of its other licensees.
The worst-case scenario for Arm is the status quo, where Qualcomm pays its lower ALA rate, instead of the estimated 20-30 cents per device under Technology License Agreement (TLA) rates, for all devices based on Nuvia IP. In my view, this is one of the reasons Arm went the litigation route: there is a significant potential upside if it can force Qualcomm to pay more, while the downside is mostly litigation costs, although a long, protracted legal fight can cost hundreds of millions.
There would also be a substantial downside in the medium and long term, especially considering Arm's IPO plans. A significant part of its revenue from Qualcomm is at risk if the latter moves all its designs to Nuvia IP and starts paying the meager ALA rates. Additionally, this fight could unsettle the Arm ecosystem, pushing many licensees, including Qualcomm, to move aggressively to the competing RISC-V architecture. All of this would reduce Arm's IPO valuation.
Absolute worst-case for everybody
The absolute worst case for the litigants and the industry would be Arm winning the case and the court agreeing to its request that Qualcomm destroy all of its designs and products based on Nuvia IP. I think that is highly unlikely to happen. If anything, the court might order Qualcomm to pay punitive damages, but ordering the destruction of years of work worth billions of dollars, some of it in products consumers would already be using, seems excessive.
For the sake of argument, if all the designs were destroyed, Arm would lose its biggest opportunity to expand its influence in the personal and cloud computing markets. Among all its licensees, Qualcomm is the strongest and has the best chance to succeed in those markets. For Arm, this would be akin to winning the battle but losing the war. I think Arm is smart enough not to let that happen.
Similarly, such destruction would be bad for Qualcomm as well. It would lose all the time and money invested in buying Nuvia and developing products based on its IP, sinking its chances of disrupting the personal and possibly cloud computing markets. Again, just like Arm, Qualcomm is smart enough not to let that happen.
Losing such an opportunity to disrupt large markets such as personal and cloud computing would also be a major loss for the tech industry. It would make any of Arm's licensees almost impossible to acquire and leave the whole Arm ecosystem jittery, a significant hurdle to the ecosystem's otherwise smooth and expansive proliferation.
Hence, my money is on settlement. The only questions are when and on what terms.
The Chronicles of 3GPP Rel. 17
Have you ever felt the joy and elation of becoming part of something you have long observed, read and written about, and admired? Well, I experienced that when I became a member of 3GPP (3rd Generation Partnership Project) and attended RAN (Radio Access Network) plenary meeting #84 last week in the beautiful city of Newport Beach, California. The RAN group is primarily responsible for the wireless, or radio interface, specifications.
The timing couldn't have been more perfect: this meeting was, in fact, the kick-off of 3GPP Rel. 17 discussions. I have written extensively about 3GPP and its processes on RCR Wireless News; you can read all of those articles here. Attending the first-ever meeting on a new release was indeed very exciting. I will chronicle the journey of Rel. 17 through a series of articles here on RCR Wireless News, and this is the first one. I will report on developments and discuss what they mean for wireless as well as the many other industries 5G is set to touch and transform. If you are a standards and wireless junkie, get on board and enjoy the ride.
3GPP Rel. 17 is coming at an interesting time: after the much-publicized and accelerated Rel. 15 that introduced 5G, and Rel. 16 that laid a solid foundation for taking 5G beyond mobile broadband. Naturally, the interest now is in what more 5G can do. The Rel. 17 kick-off meeting, as expected, was a symposium of great ideas and a long wish list from prominent 3GPP members. Although many members submitted proposals, only a few, selected through a lottery system, got the opportunity to present at the meeting; Nokia, KPN, Qualcomm, the Indian SSO (Standard Setting Organization), and a few others were among those who presented. I saw two clear themes in most of the proposals: first, keeping enough of 3GPP's time and resources free to address urgent needs stemming from the nascent 5G deployments; second, addressing the needs of the new verticals and industries that 5G enables.
Rel. 17 work areas
There was a lot of overlap among the proposals, and during the meeting they were consolidated into four main work areas:
- Topics for which discussion can start in June 2019
  - The main topics in this group include mid-tier devices such as wearables, which need neither extreme speeds nor extreme latency; small data exchange during the inactive state; D2D enhancements going beyond V2X for relay-type deployments; support for mmWave above 52.6 GHz; Multi-SIM; multicast/broadcast enhancements; and coverage improvements.
- Topics for which discussion can start in September 2019
  - These include Integrated Access Backhaul (IAB), unlicensed spectrum support and power-saving enhancements, eMTC/NB-IoT in NR improvements, data collection for SON and AI considerations, high-accuracy and 3D positioning, and more.
- Topics with broad enough agreement to be directly proposed as Work Items or Study Items in future meetings
  - 1024 QAM and others
- Topics that don't have wider interest, or were proposed by only one or a few members
As the chair emphasized many times, the objective of forming these work areas was only to facilitate discussions among members toward a common understanding of what is needed. The division into June and September timeframes was purely logistical and doesn't imply any priority between the two groups: many of the September work areas would be enhancements to items still being worked on in Rel. 16, and spacing them out spreads the workload better. Based on how the discussions pan out, the work areas could become candidates for Work Items or Study Items at the December 2019 plenary meeting.
Two specific topics caught my attention: first, making 5G even more suitable for XR (AR, VR, etc.), and second, AI. The first makes perfect sense, as XR evolution will bring even more stringent latency requirements and will need distributed processing between the device and the edge cloud. However, I am not so sure about AI. I don't know how much scope there is to standardize AI, as it doesn't necessarily require interoperability between devices of different vendors. I also doubt companies would be interested in standardizing AI algorithms, which would minimize their competitive edge.
Apart from the technical discussions, there were questions and concerns regarding the US government order banning Huawei. This was the first major RAN plenary after the executive order imposing the ban was issued. From the discussions, it seemed like business as usual; we will know the real effects when the detailed discussions start in the coming weeks.
On a closing note, many compare the standardization process to watching a glacier move. On the contrary, I found it very interesting and amusing, especially how the consensus process among competitors and collaborators works. The meeting was always lively, with plenty of arguments and counter-arguments. We will see whether my view changes in the future! So, tune in to updates from future Rel. 17 meetings to hear about the progress.
I just returned from a whirlwind session of 3GPP RAN Plenary #86, held in the beautiful beach town of Sitges, Spain. The meeting finalized a comprehensive package of more than 30 Study and Work Items (SIs and WIs) for Rel. 17. With a mix of new capabilities and significant improvements to existing features, Rel. 17 is set to define the future of 5G. It is expected to be completed by mid- or late 2021.
<<Side note, if you would like to understand more about how 3GPP works, read my series “Demystifying Cellular Standards and Licensing” >>
Although the package looks like a laundry list of features, it offers a window into the strategies and capabilities of the member companies. Some are keen on investing in new, path-breaking technologies, while others are looking to optimize existing features or are working in fringe or very specific areas.
The Rel. 17 SI and WIs can be divided into three main categories.
Blazing new trail
These are the most important new concepts being introduced in Rel. 17 that promise to expand 5G’s horizon.
XR (SI) – The objective of this is to evaluate and adopt improvements that make 5G even better suited for AR, VR, and MR. It includes evaluating distributed architecture harnessing the power of edge-cloud and device capabilities to optimize latency, processing, and power. Lead (aka Rapporteur) – Qualcomm
NR up to 71 GHz (SI and WI) – This is in the new section because of a twist. The WI is to extend the current NR waveform up to 71 GHz, and SI is to explore new and more efficient waveforms for the 52.6 – 71 GHz band. Lead – Qualcomm and Intel
NR-Light (SI) – The objective is to enable cost-effective devices with capabilities that lie between full-featured NR and Low Power Wireless Access (e.g., NB-IoT/eMTC); for example, devices that support tens or hundreds of Mbps rather than multi-gigabit speeds. Typical use cases are wearables, Industrial IoT (IIoT), and others. Lead – Ericsson
Non-Terrestrial Network (NTN) support for NR & NB-IoT/eMTC (WI) – A typical NTN is a satellite network. The objective is to address verticals such as mining and agriculture, which mostly operate in remote areas, and to enable global asset management transcending continents and oceans. Lead – MediaTek and Eutelsat
Perfecting the concepts introduced in Rel. 16
Rel. 16 was a short release with an aggressive schedule. It improved upon Rel. 15 and brought in some new concepts. Rel 17 is aiming to make those new concepts well rounded.
Integrated Access & Backhaul – IAB (WI) – Enable cost-effective and efficient deployment of 5G by using wireless for both access and backhaul, for example, using relatively low-cost and readily available millimeter wave (mmWave) spectrum in IAB mode for rapid 5G deployment. Such an approach is especially useful in regions where fiber is not feasible (hilly areas, emerging markets). Lead – Qualcomm
Positioning (SI) – Achieve centimeter-level accuracy based only on cellular connectivity, especially indoors. This is a key feature for wearables, IIoT, and Industry 4.0 applications. Lead – CATT
Sidelink (WI) – Expand use cases from V2X-only to public safety, emergency services, and other handset-based applications by reducing power consumption and latency and improving reliability. Lead – LG
Small data transmission in "Inactive" mode (WI) – Enable such transmissions without going through the full connection setup, to minimize power consumption. This is extremely important for IIoT use cases such as sensor updates, and also for smartphone chat apps such as WhatsApp, QQ, and others. Lead – ZTE
IIoT and URLLC (WI) – Evaluate and adopt any changes that might be needed to use the unlicensed spectrum for these applications and use cases. Lead – Nokia
Fine-tuning the performance of basic features introduced in Rel. 15
Rel. 15 introduced 5G, with a primary focus on enabling enhanced Mobile Broadband (eMBB). Rel. 16 enhanced many eMBB features, and Rel. 17 is now trying to optimize them even further, especially based on learnings from the early 5G deployments.
Further enhanced MIMO – FeMIMO (WI) – This improves the management of beamforming and beamsteering and reduces associated overheads. Lead – Samsung
Multi-Radio Dual Connectivity – MRDC (WI) – Mechanism to quickly deactivate unneeded radio when user traffic goes down, to save power. Lead – Huawei
Dynamic Spectrum Sharing – DSS (WI) – DSS had a major upgrade in Rel 16. Rel 17 is looking to facilitate better cross-carrier scheduling of 5G devices to provide enough capacity when their penetration increases. Lead – LG
Coverage Extension (SI) – Since many of the spectrum bands used for 5G will be higher than 4G (even in Sub 6 GHz), this will look into the possibility of extending the coverage of 5G to balance the difference between the two. Lead – China Telecom and Samsung
Along with these, many other SIs and WIs, including Multi-SIM, RAN Slicing, Self-Organizing Networks, QoE Enhancements, NR-Multicast/Broadcast, UE power saving, and others, were adopted into Rel. 17.
Other highlights of the plenary
Unlike previous meetings, there were more delegates from non-cellular companies this time, and they participated very actively in the discussions. For example, a representative from Bosch was a passionate proponent of automotive needs in the Sidelink enhancements. I also discussed this with people who facilitate the dialogue between 3GPP and the industry body 5G Automotive Association (5GAA). This is an extremely welcome development, considering that 5G will transform these industries: incorporating their needs at the grassroots level, during the standards definition phase, allows the ecosystem to build solutions that are market-ready for rapid deployment.
There was a rare, very contentious debate in a joint session between the RAN and SA groups: whether to keep the RAN SI and WI completion timeline at 15 months, as currently planned, or extend it to 18 months. The argument for the latter is that TSG-SA was late completing Rel. 16 and is consequently lagging on Rel. 17; an 18-month target for RAN would let SA catch up and align both groups to finish Rel. 17 simultaneously. However, RAN, which runs a tight ship, is not happy with the delay. Even after a lengthy discussion, the issue remained unresolved.
<<Side Note: If you would like to know the organization of different 3GPP groups, including TSGs, check out my previous article “Who are the unsung heroes that create the standards?” >>
It would be remiss of me not to mention the excellent project management skills exhibited by the RAN chair, Mr. Balazs Bertenyi of Nokia Bell Labs. Without his firm yet logical and unbiased decision-making, it would have been impossible to finalize all of this in the short span of four days.
In closing
Rel. 17 is a major release in the evolution of 5G that will expand its reach and scope. It will 1) enable new capabilities for applications such as XR; 2) create new categories of devices with NR-Light; 3) bring 5G to new realms such as satellites; and 4) make possible the Massive IoT and Mission-Critical Services vision set out at the beginning of 5G, all while improving on the excellent start 5G got with Rel. 15 and eMBB. I, for one, feel fortunate to witness its transformation from concept to completion.
With the COVID-19 novel coronavirus creating havoc and upsetting everybody's plans, the question on the minds of many who follow standards development is, "How will it affect the 5G evolution timeline?" The question is even more relevant for Rel. 16, which is expected to be finalized by June 2020. I talked at length about this with two key leaders of the industry body 3GPP: Mr. Balazs Bertenyi, Chair of the RAN TSG, and Mr. Wanshi Chen, Chair of the RAN1 Working Group (WG). The message from both was that Rel. 16 will be delivered on time. The Rel. 17 timeline, though, is a different story.
<<Side note: If you would like to know more about 3GPP TSGs and WGs, refer to my article series “Demystifying Cellular Patents and Licensing.” >>
3GPP meetings are spread throughout the year. Many are large, conference-style gatherings involving hundreds of delegates from across the world. WG meetings happen almost monthly, whereas TSG meetings are held quarterly, and they are usually distributed among major member regions, including the US, Europe, Japan, and China. In the first half of this year, WG meetings were scheduled in Greece in February and in Korea, Japan, and Canada in April, along with TSG meetings in Jeju, South Korea in March. Because of the virus outbreak, all of those face-to-face meetings were canceled and replaced with online meetings and conference calls. As it stands now, the next face-to-face meetings will take place in May, subject to how the virus situation develops.
Since 3GPP runs on consensus, the lack of face-to-face meetings certainly raises concerns about the progress that can be made, as well as the possible effect on timelines. However, the duo of Mr. Bertenyi and Mr. Wanshi are working diligently to keep the well-oiled standardization machine going. Mr. Bertenyi told me that although face-to-face meetings are the best and most efficient option, 3GPP is making elaborate arrangements to replace them with virtual means. They have adopted a two-step approach: 1) further expand the ongoing email-based discussions; 2) run multiple simultaneous conference calls mimicking the actual meetings. "We have worked with the delegates from all participant countries to come up with a few convenient four-hour time slots, and will run simultaneous online meetings/conference calls and collaborative sessions to facilitate meaningful interaction," said Bertenyi. "We have stress-tested our systems to ensure their robustness to support a large number of participants."
Mr. Wanshi, who leads the largest working group, RAN1, says they have already completed a substantial part of the Rel. 16 work and achieved functional freeze. The focus is now on the RAN2 and RAN3 groups, where work is in full swing. The current schedule is to achieve what is called the ASN.1 freeze in June 2020; this milestone establishes a stable specification baseline from which vendors can start building commercial products.
It's reasonable to say that, notwithstanding any further disturbances, Rel. 16 will be finalized on time. Things are less certain for Rel. 17, however. Mr. Bertenyi stated that, given the meeting cancellations, it seems inevitable that the Rel. 17 completion timeline will slip by three months, to September 2021.
It goes without saying that these plans are based on the current state of the outbreak; if the situation changes substantially, they will all be up in the air. I will keep monitoring the developments and report back.
It is election time at 3GPP, and last week saw the ballot for the chairmanship of the prestigious RAN Technical Specification Group (TSG). Dr. Wanshi Chen of Qualcomm emerged as the winner after a hard-fought race. I caught up with Wanshi right after the win to congratulate him and discuss his vision for the group as well as the challenges and opportunities ahead. Here is a quick primer on the 3GPP ballot process and highlights from my discussion with Wanshi.
Side note: If you would like to know more about 3GPP Rel. 17, please check out the earlier articles in the series.
3GPP TSGs and elections
As I explained in my article series "Demystifying cellular licensing and patents," 3GPP has three TSGs, responsible for the radio access network, the core network, and services and system aspects; they are aptly named TSG-RAN, TSG-CT, and TSG-SA. Among these, TSG-RAN is probably the biggest in terms of size, scope, and number of activities. It is managed by one chair and three vice-chairs. The chair ballot was held last week (starting March 16th, 2021), and the vice-chair ballot is happening as this article is being published.
The primary objective of the RAN chair is to ensure all members work collaboratively to develop next-generation standards through 3GPP's marquee consensus-based, impartial approach. The position carries a lot of clout and prestige: the chairmanship represents the collective confidence of the entire 3GPP community in its holder, who provides vision and leadership to the entire industry. RAN TSG chair leadership is especially crucial now, when the industry is at the critical juncture of taking 5G beyond conventional cellular broadband to many new industries and markets.
For the candidates, the 3GPP election is a long-drawn process, starting more than a year before the actual ballot. The credibility and competence of the individual candidates, as well as of the companies they represent, are put to the test. Although delegates vote as individuals in a secret ballot, competitive positioning between member companies, and sometimes regional dynamics, can play an important role.
In the election itself, a candidate wins by securing more than 71% of the votes in either the first or the second round. If no one does, a third, run-off round ensues, and whoever gets a simple majority wins the race. This time, there were four candidates in the fray: Wanshi Chen of Qualcomm, Mathew Baker of Nokia, Richard Burbidge of Intel, and Xu Xiaodong of China Mobile. The election did go to the third, run-off round, where Wanshi Chen won against Mathew Baker by a comfortable margin.
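For clarity, here is a toy sketch of that ballot logic. The real 3GPP procedure has more nuance (candidate elimination between rounds, abstentions, and so on), so treat this purely as an illustration of the thresholds described above.

```python
# Toy model of the ballot rule described above: rounds 1 and 2 require more
# than 71% of the votes cast; the third, run-off round is a simple majority.

def round_winner(votes, round_no):
    """Return the winner's name, or None if another round is needed."""
    total = sum(votes.values())
    leader, leader_votes = max(votes.items(), key=lambda kv: kv[1])
    if round_no <= 2:
        return leader if leader_votes / total > 0.71 else None
    return leader  # run-off: simple majority decides

# Hypothetical tallies (the actual ballot counts are secret).
print(round_winner({"A": 60, "B": 40}, round_no=1))  # None: 60% < 71%
print(round_winner({"A": 60, "B": 40}, round_no=3))  # A wins the run-off
```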
New chair’s vision for the next phase of 5G
Dr. Wanshi Chen is a prolific inventor, a researcher, and a seasoned standards leader. He has been part of 3GPP for the last 13 years and is currently the Chair of the RAN1 Working Group, having served as a vice-chair of the same group before that. RAN1 is one of the largest working groups within 3GPP, with up to 600 delegates. Wanshi has successfully presided over the group during critical times: he took over the RAN1 chairmanship right after the 5G standardization schedule was accelerated and was instrumental in finalizing 3GPP Rel. 15 in record time. Following that, he also played a key role in finishing Rel. 16 on time as planned, despite the enormous workload and the unprecedented disruptions caused by the onset of the Covid-19 pandemic.
The change of guard at the RAN TSG is happening at a crucial time for 5G, when it is set to transform many verticals and industries beyond smartphones. 3GPP has already laid a solid foundation with Rel. 16, Rel. 17 development is in full swing, and Rel. 18 is being conceptualized. The next chair will have a unique opportunity to shape the next phase of 5G. Wanshi said, "Industry always looks to 3GPP for leadership in exploring the new frontiers, providing the vision, and developing technologies and specifications to pave the way for the future. It is critical for 3GPP to maintain a fine balance between the traditional and newer vertical domains and evolve as a unified global standard by considering inputs from all regions of the world."
Entering new markets and domains is always fraught with challenges and uncertainties. However, "Such transitions are not new to 3GPP," says Wanshi. "We worked across the aisle and revolutionized mobile broadband with 4G, and standardized 5G in record time. I am excited to be leading the charge and extremely confident of our ability to band together as an industry and proliferate 5G everywhere."
It is indeed interesting to note that Qualcomm was also at the helm of the RAN TSG when 5G was accelerated. Lorenzo Casaccia, Qualcomm's VP of Technical Standards and another 3GPP veteran, said, "The primary task of the chair is to foster consensus among all member companies, facilitating the continued expansion of 5G, and potentially formulating initial plans toward the industry's 6G vision." He added, "Having known Wanshi for years, I am extremely confident of his abilities to lead 3GPP toward that vision."
The tenure of the chair is two years, but people usually serve two consecutive terms, totaling four years. That means Wanshi will have a minimum of two and a maximum of four years to work his magic, starting in June 2021. I wish him all the best in his new position and will be closely watching him, and 3GPP, as 5G moves into its next phase.
The twin events of 3GPP RAN Plenary #92e and the Rel. 18 workshop are starting to shape the future of 5G. The plenary substantially advanced Rel. 17 development, and the workshop kick-started the Rel. 18 work. Amid these two, 3GPP also approved "5G Advanced" as the marketing name for Release 18 and beyond. Being a 3GPP member, I had a front-row seat for all the interesting discussions and decisions.
With close to 200 global operators already live with the first phase of 5G, and almost every cellular operator either planning, trialing, or deploying their first 5G networks, the stage is set for the industry to focus on the next phase of 5G.
Solid progress on Rel. 17, projects mostly on track
The RAN Plenary #92e was yet another virtual meeting, with discussions conducted through a mix of emails and WebEx conference sessions. It was also the first official meeting for the newly elected TSG RAN chair, Dr. Wanshi Chen of Qualcomm, and the three vice-chairs: Hu Nan of China Mobile, Ronald Borsato of AT&T, and Axel Klatt of Deutsche Telekom.
Most of the plenary time was spent discussing various aspects of Rel. 17, which has a long list of features and enhancements. For easy reference and better understanding, I divide them into three major categories (the grouping is mine, not 3GPP's):
New concepts:
Enhancements for better eXtended Reality (XR) support, mmWave support up to 71 GHz, new connection types such as NR Reduced Capability (RedCap, aka NR-Light), and Non-Terrestrial Network (NTN) support for NR and NB-IoT/eMTC.
Improving Rel.16 features
Enhanced Integrated Access & Backhaul (IAB), improved precise positioning and Sidelink support, enhanced IIoT and URLLC functionality including unlicensed spectrum support, and others.
Fine-tuning Rel. 15 features
Further enhanced MIMO (FeMIMO), Multi-Radio Dual Connectivity (MRDC), Dynamic Spectrum Sharing (DSS) enhancements, Coverage Extension, Multi-SIM, RAN Slicing, Self-Organizing Networks (SON), QoE Enhancements, NR-Multicast/Broadcast, UE power saving, and others.
For details on these features, please refer to my article series “The Chronicles of 3GPP Rel. 17.”
Good progress was made on many of these features at the plenary. All the feature leads reaffirmed the timelines agreed upon in the previous plenary. It was also decided that all the meetings in 2021 would be virtual; face-to-face meetings will hopefully resume in 2022.
3GPP RAN TSG meeting schedule (Source: 3gpp.org)
Owing to the workload and the difficulties of virtual meetings, the possibility of down-scoping some features was also discussed. These include some aspects of FeMIMO and IIoT/URLLC. Many delegates agreed that it is better to focus on robustly defining certain parts of these features rather than diluting the full specifications. The impact of this down-scoping on performance is not fully known at this point. The discussion is ongoing, and a final decision will be taken during the next RAN plenary, #93e, in September 2021.
The dawn of 5G Advanced
Releases 18 and beyond were officially christened 5G Advanced in May 2021 by 3GPP’s governing body, the Project Coordination Group (PCG). This is in line with the tradition set by HSPA and LTE, where the evolutionary steps were given “Advanced” suffixes. The 5G Advanced name was an important and necessary decision to demarcate the steps in the evolution and to rein in over-enthusiastic marketing folks from jumping early to 6G.
The 5G Advanced standardization process was kickstarted at the 3GPP virtual workshop held between June 28th and July 2nd, 2021. The workshop attracted a lot of attention, with more than 500 submissions from more than 80 companies and more than 1,200 delegates attending the event.
The submissions were initially divided into three groups. According to the TSG RAN chair, Dr. Wanshi Chen, the submissions were distributed almost equally among them:
- eMBB (enhanced Mobile Broadband) driven evolution
- Non-eMBB driven evolution
- Cross-functionalities for both eMBB and non-eMBB driven evolution
After the weeklong discussions (over emails and conference calls), the plenary converged on 17 topics of interest: 13 general topics, three sets of topics specific to RAN Working Groups (WG) 1-3, and one set for RAN WG4. Most of the topics are substantial enhancements to features introduced in Rel. 16 and 17, such as MIMO, uplink, mobility, precise positioning, etc. They also include the evolution of network topology, eXtended Reality (XR), Non-Terrestrial Networks, broadcast/multicast services, Sidelink, RedCap, and others.
The relatively new concepts that caught my attention are Artificial Intelligence (AI)/Machine Learning (ML), Full and Half Duplex operations, and network energy savings. These have the potential to set the stage for entirely new evolution possibilities, and even 6G.
Wireless networks are extremely complex, highly dynamic, and vastly heterogeneous. There can hardly be a better approach than AI/ML for solving such hard wireless challenges. For example, cognitive RAN could herald a new era in networking.
Full-duplex IABs with interference cancellation broke the decades-old practice of separating uplink and downlink in either the frequency or time domain. Applying similar techniques to the entire system has the potential to bring the next level of performance to wireless networks.
Reducing energy consumption has emerged as one of the existential challenges of our times because of its impact on climate change. With 5G transforming almost every industry, reducing energy use is indeed a worthy effort. The mobile industry, with a “power-efficient” approach embedded in its DNA, has a lot to teach the larger tech industry in that regard.
Regarding these topics, Dr. Wanshi said that he cannot emphasize enough that they are not yet “Work Items” or “Study Items.” He further added that the list is a great starting point, but much discussion is needed to rationalize and prioritize it, starting from the next plenary, scheduled for Sep 13th, 2021.
For the full list of Rel. 18/5G Advanced topics, please check this 3GPP post.
In closing
The events in the last few weeks have surely started to define and shape the future evolution of 5G. With Rel. 16 commercialization starting soon, Rel. 17 standardization nearing completion, and Rel. 18 activities getting off the ground, there will be a lot of exciting developments to look forward to in the near future. So, stay tuned.
Demystifying Cellular Patents and Licensing
Editor’s note: This is the first in a series of articles that explores the sometimes obtuse process of standardizing, patenting and licensing cellular technologies.
Answers to the questions you always wondered about but were afraid to ask
Patents spark joy in the eyes of innovators! Patents not only recognize innovators’ hard work but also provide financial incentives to keep inventing and continue making the world a better place. Unfortunately, patent licensing, often referred to as Intellectual Property Rights (IPR) licensing, has recently gotten a bad rap. The whole IPR regime seems mystical, veiled under a shroud of confusion, misinformation, and, of course, controversy. But tearing away that shroud reveals the fascinating metamorphosis of abstract concepts into technologies that transform people’s lives. This process, in turn, creates significant value for the inventors.
I have been exposed to cellular IPR throughout my career, and I thought I understood it well. But my research into the various aspects of the IPR journey, including creation, evaluation, and licensing, was a real eye-opener, even for me. In this series of articles, I will take you through that same amazing journey and clear up the myths, the misunderstandings, and the misinterpretations. I will use the standardization of 4G, which has run its full course, and that of 5G, which is ongoing, as the vehicles for our journey. So, get on board, buckle up, and enjoy the ride!
Organizations that build cellular standards
It all starts at the International Telecommunication Union (ITU), a specialized agency of the United Nations. For any new generation of standard (aka “G”), the ITU comes up with a set of minimum performance requirements. Any technology that meets those requirements can be given that specific “G” moniker. For 4G, these requirements were called IMT-Advanced, and for 5G, they are called IMT-2020. In the early days of 4G, two technologies earned the moniker. One was developed by the IEEE, called WiMAX, which no longer exists. The other was developed by the 3rd Generation Partnership Project (3GPP), the most important and visible global cellular specifications organization.
3GPP, as the name suggests, was formed in the 3G days and has been carrying the mantle ever since. 3GPP is a combination of seven telecommunications Standard Development Organizations (SDOs), each representing the telecom ecosystem of a different geographical region. For example, the Alliance for Telecommunications Industry Solutions (ATIS) represents the USA, the European Telecommunications Standards Institute (ETSI) represents Europe, and so on. In essence, 3GPP is a true representation of the entire global cellular ecosystem.
3GPP develops specifications that are then affirmed as standards by the SDOs in their respective regions. 3GPP’s specifications are published as a series of Releases. For example, Release 10 (Rel. 10) had the specifications that met the ITU requirements for 4G (IMT-Advanced). 3GPP sometimes also gives marketing names to sets of these releases. For example, Rel. 8-9 were named Long Term Evolution (LTE), Rel. 10-12 were named LTE Advanced, and so on. Rel. 15 includes the specifications needed to meet the 5G requirements.
To summarize, ITU stipulates the requirements for any “G,” 3GPP develops the specifications that meet those requirements, and the SDOs affirm those specifications as standards in their respective regions.
How the standards building process works
With so many organizations and their representatives involved, standards development is a long, arduous, and systematic process. 3GPP has several specification working groups focused on different parts of the cellular system and its interworking, including the radio network, core network, devices, and others. The members of these groups are representatives of the different SDOs.
Now, coming to the actual process: the ITU requirements act as goals for 3GPP. The effort starts off with members bringing their proposals, i.e., their innovations, to achieve the set goals. For example, for 4G, one of the proposals covered techniques to use OFDMA for high-performance mobile broadband. These proposals are presented in each of the relevant groups, and there are usually several of them for any given problem. All these proposals are discussed, closely scrutinized, and hotly debated. Ultimately, winning ideas emerge through a consensus process. One of the members of the group is then nominated as the editor, who distills the winning ideas into a working document. That document is continuously edited and refined in a series of meetings and, when stable, is published as the first draft of the specification. Publishing the first draft is a major milestone for any release. Companies usually start designing their commercial products based on the first draft.
The refinement process continues for a long time even after the first draft, much like how software “bug fixing” and updates work. Members continuously submit contributions, aka bug fixes, to refine the draft. Typically, these contributions are substantially higher in volume than the initial proposals. This is because the initial proposals are radically new concepts or innovations, whereas the later contributions can be trivial, such as editorial corrections. Once all the bug fixing is done, the final specification is released.
As is evident, for any new innovation to be accepted and included in the standard, it has to go through rigorous vetting and withstand intense scrutiny by peers and competitors. This means inclusion is an explicit recognition by the industry that the said technology is a superior solution to the given problem.
3GPP contributions and record-keeping
3GPP is a highly bureaucratic organization, with a robust and well-established administrative and record-keeping system. But for historical reasons, the system is not equally rigorous throughout the process. For example, record keeping is nominal until the creation of the first draft. The proposals, ideas, and contributions presented during that time are just tagged as “considered” or “treated,” without any specific recognition. However, record keeping becomes very structured and rigorous after the first draft. The bug-fixing contributions that are adopted into the specification are tagged with more official-sounding names such as “approved,” no matter whether they are trivial or significant. These uneven record-keeping and naming practices have given rise to some very simplistic, amateurish, and deeply flawed IPR evaluation methods. More on this in later articles.
Nonetheless, 3GPP specification development is a consensus-based, democratic process by design. This necessitates collaboration among members who often have opposing interests. This approach has indeed made 3GPP a great success, enabling the cellular industry to excel and thrive.
With the basic understanding of the organizations and processes in place, we are now well equipped for the next part of our IPR journey—understanding how developing standards is a system design endeavor solving end-to-end problems, not just a collection of disparate technologies, as we are given to believe. And that’s exactly what my next blog in the series will explore. Be on the lookout!
In my previous article in the series, I described the organizations and the process of creating cellular standards. I explained how it is an almost magical process, where scores of industry players, many of whom are staunch competitors, come together in a consensus-based approach to approve new standards. In this article, I will delve into the specifics of how patents, often referred to as Intellectual Property Rights (IPR), are created, valued, licensed, and administered.
Cellular patents are created during the standardization process
The cellular standardization process is primarily a quest to find the best solutions to a system-level problem. The winning innovations borne out of that process create valuable patents. You can be sure that almost all the ideas presented as candidates for standardization hit the patent offices in various countries before coming to 3GPP. The value of those innovations, and thereby the patents, dramatically increases when they are accepted and incorporated into standards. Inclusion in the standard is also a stamp of approval that the innovation is the best of the crop, as it has won over other competing ideas, as I explained in my previous article.
Another important aspect, especially relevant to cellular patents, is that the innovations presented to standards bodies are solutions to end-to-end system problems. This means those ideas are not specific to just the device or the network, but are comprehensive solutions that touch many parts of the system. So, it is often very hard to delineate the applicability of those ideas to only one part or section of the system. For example, the MIMO (Multiple Input Multiple Output) technique needs a complete handshake between the device and the network to work. Additionally, many patents might touch multiple subsystems within the device or the network, which further complicates the effort to isolate their relevance to specific parts. For example, consider how power management and optimization in a smartphone works, making the AP, modem, and other subsystems wake up or go to sleep in sync. That innovation might touch all those subsystems in the phone.
All patents are not created equal
Thousands of patents go into building cellular wireless systems, be it devices, radio infrastructure, or core networks. At a very basic level, these patents can be divided into two categories: Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs, or NEPs). SEPs are those that are absolutely necessary to build a standard-compliant product and cannot be circumvented. Hence, they are highly valued. Non-SEPs, on the other hand, are relevant to standards but may not be necessary for the basic functioning of standard-compliant products and can be designed around. For example, for 4G LTE devices, patents that define using OFDMA for cellular connectivity are SEPs, whereas patents that improve the battery life of the devices could be considered non-SEPs.
3GPP and the Standard Development Organizations (SDOs) strongly encourage early disclosure of IPR that members consider essential, or potentially essential, for standards. Further, they mandate licensing of SEPs on fair, reasonable, and non-discriminatory (FRAND) terms. There are no such licensing requirements for non-SEPs.
While 3GPP or SDOs make FRAND compliance for SEPs mandatory, they don’t enforce or regulate any specific monetary value for them. They consider the licensing to be a commercial transaction outside their purview, and hence let the market forces decide their worth.
How to value patents?
According to some estimates, there were 250,000 active patents covering smartphones in 2012. As I write this article in 2019, I am sure that number has grown even larger. The issue then becomes how to determine the value of these patents, and how best to license and administer them.
With the sheer number of patents involved, it is impossible to manage licensing on an individual-patent basis. It is even more impractical to license them at the subsystem or component level because, as mentioned before, it is hard to delineate their applicability to a specific part. So, it indeed is a hard problem to solve. Since cellular standards have been around for a few decades now, it is worthwhile to examine how licensing has been dealt with historically.
In the 2G days, when the cellular markets started expanding, there were a handful of well-established large players such as Ericsson, Nokia, Motorola, Nortel, Alcatel, Siemens, and others. These players not only developed the technologies but also had their own devices and network infrastructure offerings. Since it was a small group of players, and all of them needed each other’s technology to make their products, they resorted to a simple method of bartering, also known as cross-licensing. Some industry observers and participants accused them of artificially inflating the value of their patents to make it very hard for any new players to enter the market.
With the advent of 3G, Qualcomm appeared on the scene with a unique horizontal business model. Qualcomm’s core business was to invent advanced mobile technology, make it accessible to the ecosystem through licensing, and enable everyone to build compelling products based on its technology (Qualcomm initially invested in infrastructure, mobile device, and service provider businesses, which it eventually divested). Qualcomm’s licensing made the initial investment more reasonable and the technologies accessible for OEMs, which significantly reduced the entry barrier. The rise of Apple, Samsung, and LG, as well as scores of Chinese OEMs, can be attributed to it.
Taking the market-forces approach, Qualcomm decided to license its full portfolio, spanning tens of thousands of patents, for a percentage of the wholesale selling price of the phone. It put a cap on the price base when phone prices started climbing. Qualcomm decided to license the IPR to phone OEMs because that is where the full value of its innovations is realized. Apparently, this was also the approach all the patent holders of that era practiced, including Ericsson, Nokia, and others, as attested by some of these companies during the FTC vs. Qualcomm trial. This practice has continued until now and has withstood challenges all over the world. Of course, there have been disputes over, and changes to, the actual fees charged, but the approach has largely remained intact.
Usually, the actual licensing rates are confidential between licensors and licensees, but we got some details during Qualcomm’s court cases around the world. What we know as of now is that, for example, Qualcomm charges 3.25% of the device wholesale price for its SEPs, and 5% for the full portfolio, including both SEPs and non-SEPs. The device price base is capped at a maximum of $400.
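To make the arithmetic concrete, here is a minimal sketch of how such a capped, price-based royalty works, using the publicly reported figures above (the function and its example prices are my own illustration):

    def royalty(wholesale_price, rate=0.0325, cap=400.0):
        # Royalty owed per device: the rate applies to the wholesale price,
        # with the price base capped at $400.
        return rate * min(wholesale_price, cap)

    print(royalty(250))               # 8.125 -> ~$8.13 at the 3.25% SEP-only rate
    print(royalty(1000))              # 13.0  -> $13.00; every phone above $400 pays the same
    print(royalty(1000, rate=0.05))   # 20.0  -> $20.00 at the 5% full-portfolio rate

In other words, under these reported terms, a $1,000 flagship and a $400 mid-ranger owe the same per-device royalty because of the cap.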
There are others in the industry, such as Apple, who are attempting to change this decades-old approach and are proposing a new one, sometimes referred to as Smallest Saleable Patent Practicing Unit (SSPPU) pricing. Their argument is that most of the value of Qualcomm’s SEPs is in the modem, and hence the licensing fee should be based on the price of the modem, not the phone. Obviously, Qualcomm disagrees, and the two are fighting it out in courtrooms around the world.
Being an engineer myself, I know that when designing a solution, engineers do not constrain it to a specific unit, subsystem, or part. Instead, they come up with the best solution that effectively solves the problem. Often, by virtue of such an approach, the solution involves the full system, as I explained with two examples earlier. So, in my view, limiting the value to a specific unit is a very simplistic, impractical approach that grossly undervalues the innovations and their earning potential. Hence, I believe the current approach should continue, and the market forces should decide the actual price.
The court battles between Apple and Qualcomm over licensing are raging now, and we will see what the courts decide. In the next article, I will look at some of these recent battles between the two behemoths, their basis, how they affected the IPR landscape, and more. Please be on the lookout.
The statement “All patents are not created equal” seems like a cliché but is absolutely true! The differences between patents are multi-dimensional and much more nuanced than meets the eye. I touched upon this briefly in my previous article. There is no denying that, going forward, patents will play an increasingly bigger role in cellular, pitting not only companies against each other but also countries against one another for superiority and leadership in technology. Hence, it is imperative that we understand how patents are differentiated, and how their value changes based on their importance.
Let me start with a simple illustration. Consider today’s cars, which embody lots of different technologies and hence patents. When you compare the patents for the car engine to, say, the patents for the doors, the difference in relative importance is pretty clear. If you look at the standards for building a car, the patents for both the engine and the door are probably listed as essential, i.e., SEPs (Standard Essential Patents). However, the patent related to the engine is at the core of the vehicle’s basic functionality. The patent for the door, although essential, is clearly less significant. Another way to look at this is that without the idea of building the engine, there is not even a need for the idea of doors; the presence of one is the reason for the other’s existence. The same concepts also apply to cellular technology and devices. Some patents are invariably more important than others. For example, if you consider the 5G standard, the patents that cover Scalable OFDMA are fundamental to 5G. They are the core of 5G’s famed flexibility to support multi-gigabit speeds, very low latency, and extremely high reliability. You cannot compare the value of such a patent to another one that might increase the speed by a few kilobits in a rare use case. Both patents, despite being SEPs, are far apart in terms of value and importance.
On a side note, if you would like to know more about SEPs, check out my earlier article here.
That brings us to another classic challenge of patent evaluation—patent counting. Counting is the most simplistic and easiest-to-understand measure—whoever has the most patents is the leader! Well, like most simple approaches, counting has a big flaw—it is highly unreliable. Let me again explain with an illustration. Consider one person holding 52 pennies and a second person holding eight quarters. If we apply simple counting as the metric, the first person seems to be the winner, which couldn’t be farther from the truth. Applying the same logic to cellular patents, it would be foolish to call somebody a technology leader purely based on the number of patents they own, unless you know what those patents are.
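The coin analogy is trivial to verify with a few lines of arithmetic (a minimal sketch; the numbers are just the two coin denominations):

    # Count vs. value: the person with more coins holds far less money.
    holdings = {"pennies": (52, 0.01), "quarters": (8, 0.25)}
    for name, (count, unit_value) in holdings.items():
        print(f"{count} {name} -> ${count * unit_value:.2f}")
    # 52 pennies -> $0.52
    # 8 quarters -> $2.00

Swap coin counts for patent counts, and denominations for the patents’ actual importance, and the flaw in naive counting is exactly the same.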
When you look at the 5G standard, it has thousands of SEPs. If you count patents for Scalable OFDMA and similarly fundamental, core SEPs with the same weight as minor SEPs that define peripheral, insignificant protocols, you would be highly undervaluing the building blocks of the technology. So, simply counting patents without understanding their importance is a very flawed way to assess technology leadership. Also, the process of designating a certain patent as an SEP is nuanced as well, which makes the system vulnerable to rigging and manipulation, resulting in artificially inflated SEP counts. I will cover this in later articles. This potential for inflating the numbers further exacerbates the problem of patent counting.
In conclusion, it is amply clear that all patents are not created equal, and simplistic patent counting is not the best measure of somebody’s technology prowess. One has to go deeper and understand the patents’ importance to realize their value. In my next articles, I will discuss the key patents that define 5G and explore alternative methods of patent evaluation that are possibly more robust and logical. In the meantime, beware, and don’t be fooled by entities claiming leadership based on the sheer volume of their patent portfolios.
Demystifying cellular patents and licensing – Part 4
3GPP is a mystic organization that many seem to know but few truly understand. The key players of this efficient and well-regarded organization often work without fanfare or public recognition. But no more! As part of this article series, I go behind the doors, explore the organization, meet the hard-working people, and lay bare the details of its inner workings.
Side note: If you would like to understand the cellular standardization process, please read my previous articles in the series here, here, and here.
“3GPP is a membership-driven organization. Any company interested in telecommunications can join, through one of its SDOs (Standard Development Organizations)” said Mr. Balazs Bertenyi of Nokia Corporation, the current chair of TSG-RAN and a 3GPP veteran. “One of the important aspects of 3GPP is that a large portion of its working-level office bearers are members themselves and are elected by the other fellow members.”
I became a proud member of 3GPP through the American SDO, ATIS, earlier this year.
3GPP organization structure
3GPP consists of three layers, as shown in the schematic: the Project Coordination Group (PCG) at the top, which is more ceremonial; three Technical Specification Groups (TSG) in the middle, each responsible for a specific part of the network; and multiple Working Groups (WG) at the bottom, where the actual standards development occurs. There are also many ad-hoc groups formed within each of these. All these groups meet regularly, as shown in the example meeting cycle.
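For readers who think in outlines, the layering can be sketched as simple nested data (an illustrative summary of the structure described above, not an official 3GPP artifact; the six RAN WGs are listed later in this article):

    # 3GPP's three layers: PCG -> TSGs -> WGs (illustrative)
    three_gpp = {
        "PCG": {                                          # top layer, mostly ceremonial
            "TSG RAN": ["RAN WG1", "RAN WG2", "RAN WG3",  # radio access network
                        "RAN WG4", "RAN WG5", "RAN WG6"],
            "TSG CT": ["CT WGs"],                         # core network and terminals
            "TSG SA": ["SA WGs"],                         # services and systems aspects
        }
    }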
Inner workings of WGs and the unsung heroes
Let’s start with the WGs, specifically the ones that are part of TSG RAN. Being an RF engineer, I hold these closest to my heart. However, this discussion applies equally to the other TSGs and WGs. There are six WGs within TSG RAN, each with one chair and two vice-chairs.
The best way to understand a group’s workings is to analyze how a fundamental 5G feature such as Scalable OFDMA would be standardized. There might be a few proposals from different member companies. The WGs have to evaluate these proposals in detail, run simulations for various scenarios to understand the performance, the pros and cons, the competitive benefits, and so on. They have to decide on the best solution and develop standards to implement it across the system. As is evident, the WG chair must facilitate the discussion in an orderly, fair, and impartial way, and let the group reach a consensus decision. As you can imagine, this task is a combination of science and art—bringing people together through collaboration and personal relationships, and making sure they arrive at meaningful conclusions—all while under tremendous time pressure.
In such a situation, WG members expect the chair to be fair, balanced, and trustworthy. Many times, the companies the members represent are bitter competitors with diametrically opposed interests, each trying to push their own views and assertions for adoption. “It is quite a task bringing these parties together for a consensus-based agreement, in the true spirit of 3GPP,” says Mr. Bertenyi. “It requires deep technical knowledge, a lot of patience, empathy, leadership, and the ability to find common ground to be a successful WG chair.” That is why 3GPP’s process of electing chairpersons through a ballot, instead of by nomination, makes perfect sense.
The members of a WG vote to elect somebody they trust and respect to lead the group. Before the newly elected officer takes over, their employer has to formally sign a support letter declaring that the officer will get all the support needed from the company to successfully undertake the duties of a neutral chair. “From then on, the elected officer stops being a delegate for his company and becomes a neutral facilitator working in the interest of 3GPP and the industry,” added Mr. Bertenyi. “Being a chair, I have presided over many decisions that were not supported by my company but were the best way forward in a given dispute. I have often seen it happen in WGs as well. For example, I saw Wanshi Chen, chair of RAN-1, do the same many times.”
The WG members are primarily inventors trying to develop solutions to difficult technological challenges. The WG chairs are at the forefront of this effort, and by virtue of that, it is not uncommon for them to be prolific inventors themselves and to be party to a large number of patents. This, in fact, proves they are worthy of the leadership role they are given.
“It wouldn’t be untrue to say that the hard-working WG chairs are truly unsung heroes of 3GPP, and they deserve much respect and accolades,” says Mr. Bertenyi. “I am extremely proud to be working with all the chairs of our RAN WGs—Wanshi Chen of Qualcomm heading RAN-1, Richard Burbidge of Intel heading RAN-2, Gino Masini of Ericsson heading RAN-3, Xutao Zhou of Samsung heading RAN-4, Jacob John of Motorola heading RAN-5, Jurgen Hoffman of Nokia heading RAN-6.”
Responsibilities of TSG and PCG
While the WGs are the workhorses, the TSGs set the direction and manage resource allocation and the on-time delivery of specifications.
There are three TSGs: one for the radio network, one for the core network, and a third for systems work. Each TSG has a chair and three vice-chairs, all elected by the members. They provide direction based on market conditions and needs. For example, the decision to accelerate the 5G timeline in 2016 was taken by TSG RAN. The chairs are usually accomplished experts and excellent managers. I witnessed how effectively Mr. Bertenyi conducted the recent RAN#84 plenary while being fair, cheerful, and decisive at the same time.
The PCG is, on record, the highest decision-making body, dealing mostly with non-technical project-management issues. It is chaired by the partner SDOs on a rotational basis. It provides oversight, formally adopts the TSG work items, and ratifies election results and resource commitments.
Elections and leadership tenure
As mentioned, all the working-level 3GPP office bearers are duly elected by fellow 3GPP members in a completely transparent ballot process. The standard tenure of each office bearer is two years, but they are often reelected for a second term in recognition of their effective leadership. Many members start in a vice-chair position and move up to the chair level, again based on their performance.
In closing
3GPP is a truly democratic, consensus-based organization. Its structure and culture, which encourage collaboration even among bitter business rivals, have made it a premier standards development organization. The well-managed cellular technology roadmap and the success of the mobile industry at large are a testament to 3GPP’s systematic and broad-based approach.
Quick note: I will be attending the next RAN-1 WG meeting, scheduled for Aug 26-30th, 2019, in Prague, Czech Republic. So, stay tuned for the 3GPP Rel. 16 and Rel. 17 progress report.
While the 5G race rages on, so does the race to be perceived as the technology leader in 5G. This race transcends companies, industries, regions, and even countries. No major country, be it a new power such as China or existing leaders such as the US and Europe, wants to be seen as a laggard. In this global contest, 5G patents and IPR (Intellectual Property Rights) are the most visible battleground. With so many competing entities and interests, it indeed is hard to separate substance from noise. Yet one profound truth prevails amid all the chaos: quality of inventions always beats quantity.
The fierce competition to lead has driven companies to invest substantially in innovating new technology as well as in playing a key role in standards development. Since the leadership battles are also fought in the public domain, claims of leadership have been reduced to simplistic number counting: how many patents one has or, much worse, how many contributions one has submitted to the standards body. In the past, many reports have dissected these numbers in various ways and claimed one company or another to be the leader.
The awakening – Quality matters
Fortunately, there now seems to be some realization of the perils of this simplistic approach to a complex issue. There have been recent reports on why quality, not quantity, matters. For example, last month, the well-known Japanese media house Nikkei published this story based on the analysis of Patent Result, a Tokyo-based research company. Even the chair of the 3GPP RAN group, Mr. Balazs Bertenyi, published a blog highlighting how technology leadership goes much beyond simple numbers.
Ills of contributions counting
One might ask, what’s wrong with number counting; after all, isn’t it simple and easy to understand? Well, simple is not always the best choice for complex issues. Let me illustrate with a realistic example. One can easily create the illusion of technology leadership by generating a large number of standards contributions. The standards body 3GPP, being a member-run organization, has an open policy for contributions. As I explained in the first article of this “Demystifying cellular patents” series, there is ample opportunity to goose up the number of contributions during the “bug-fix” stage, when the standard is being finalized. Theoretically, any 3GPP member can make an unlimited number of contributions, as long as nobody opposes them. Since 3GPP is also a consensus-driven organization, its members are hesitant to oppose fellow members’ contributions unless they are harmful. It is an open question whether anybody has exploited this vulnerability; if one looks closely, they might find instances of it. Nonetheless, the possibility exists, and hence the sheer number of contributions can’t be an indicator of anything important, let alone technology leadership.
<<Side note: You can read all the articles in the series to understand the 3GPP standardization process here.>>
In his blog, Mr. Bertenyi says, “…In reality, flooding 3GPP standards meetings with contributions is extremely counterproductive...” It unnecessarily increases the workload on the standards working groups and extends the timelines, while reducing the focus on the contributions that really matter.
So what matters? Again, Mr. Bertenyi explains, “…The efficiency and success of the standards process are measured in output, not input. It is much more valuable to provide focused and well-scrutinized quality input, as this maximizes the chances of coming to high-quality technical agreements and results.”
Contrasting quantity with quality
Another flawed approach is measuring technology prowess by counting the number of patents a company holds. Unlike mere contributions, the number of patents does have some value. However, this number can’t be the only, or even a meaningful, measure of leadership. What matters is the specific technology those patents bring to the table, that is, how important they are to the core functioning of the system. The Nikkei article, which is based on Patent Result’s analysis, sheds light on this subject.
Patent Result did a detailed analysis of the patents filed in the U.S. by major technology companies, including Huawei, Intel, Nokia, Qualcomm, and many others. It assessed the quality of the patents according to a set of criteria, including originality, actual technological applications, and versatility. The resulting quality-based ranking was far different from the ranking by number of patents.
Some might ask, isn’t the SEP (Standard Essential Patent) designation supposed to separate the essential, i.e., important, patents from the unimportant ones? Well, in 3GPP, SEP designation is a self-declaration, and because of that, there is ample scope for manipulation. This process is a major issue in itself, and a story for another day! So, if something is an SEP, it doesn’t necessarily mean it is valuable. In my previous article, “All patents are not created equal,” I compared and contrasted two SEPs in a car: one for the engine and another for its fancy doors. While both are “essential” to making a car, the importance of the first is magnitudes higher than the second. In the same vein, you couldn’t call a company with a large number of “car-door” kinds of patents a leader over somebody who has fewer but more important “car-engine” level patents.
So, the bottom line is, when it comes to patents, quality beats quantity any day of the week, every time!
As I discussed in my previous articles, the industry is finally waking up to the fact that, when it comes to patents, quality matters much more than quantity, and to the realization that simplistic approaches such as counting standards contributions or the mere number of patents don’t give an accurate picture of technology leadership. At the same time, assessing the quality of patents has been a challenge. While the gold standard, in my view, is market-based valuation, new quality-assessment metrics and methods are emerging. These are designed to consider many aspects, such as how fundamental and market-impacting the inventions are, how wide the reach of the patents is, how many other patents are derived from them, and so on, and to come up with a quality score. I will explore many of them as part of this article series; here is the discussion of the first one on the list.
<<Side note: You can read the previous articles in the series here. >>
Patent Asset Index™ by LexisNexis® Patent Sight®
Patent Sight is a leading patent analytics and valuation firm based in Germany. Its services are utilized by many leading institutions around the world, including the European Commission. Patent Sight has developed a unique methodology that considers the importance of a patent in the hierarchy of technologies, its geographical coverage, and other parameters to produce a score called the Patent Asset Index. This index allows industry as well as general audiences not only to understand the comparative value of the patents that various companies hold but also to rank them in terms of technology leadership.
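To make the idea of quality-weighted scoring concrete, here is a minimal, hypothetical sketch in that spirit. The field names, scales, and weights are my own illustration, not Patent Sight’s proprietary methodology:

    from dataclasses import dataclass

    @dataclass
    class Patent:
        tech_relevance: float   # how fundamental the invention is (0-10, hypothetical)
        market_coverage: float  # breadth of markets where it is in force (0-10)

    def portfolio_score(patents):
        # Each patent contributes relevance x coverage; the portfolio score is the sum.
        return sum(p.tech_relevance * p.market_coverage for p in patents)

    # A small portfolio of fundamental patents can outscore a much larger one of minor patents.
    core = [Patent(9.0, 8.0)] * 10      # ten "car-engine" patents     -> 720.0
    minor = [Patent(1.0, 2.0)] * 100    # a hundred "car-door" patents -> 200.0
    print(portfolio_score(core), portfolio_score(minor))

The point of any such index is exactly this: a hundred minor patents can score far below ten fundamental ones.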
Here are some of the Patent Sight charts on 4G and 5G patents, presented at a recent webinar hosted by Gene Quinn of IPWatchDog, where William Mansfield of Patent Sight shared them. The first chart shows the number of patents filed by some of the top cellular companies between 2000 and 2018. As is evident, if quantity were the only metric, one could say that companies such as Qualcomm, Huawei, Nokia, LG, and Samsung are far ahead of the others.
Now let’s look at the Patent Asset index chart of the same companies:
Under this assessment, the scene is vastly different. Qualcomm is still in the lead, but there is a drastic change in the ranking and relative standing of the others. Qualcomm is far ahead of its peers, with Samsung a distant second, followed by LG, Nokia, and InterDigital. Surprisingly, Huawei, which was neck and neck with Qualcomm in terms of the sheer number of patents, is much farther behind.
Why do quality vs. quantity comparisons matter?
Unquestionably, patents are borne out of important innovations. However, as I have explained in this article, all patents are not created equal. Also, when it comes to cellular patents, there is a widely believed myth that Standard Essential Patents (SEPs), as the name suggests, are extremely important and core to the technology. However, because of 3GPP’s self-declaration policy, this designation is not as reliable as it seems and is highly susceptible to abuse. For example, companies with deep pockets that are interested in boosting their patent profiles might invest large sums in developing non-core patents and declaring them as SEPs. That’s why quality indicators such as the Patent Asset Index and similar approaches are important tools for assessing the relative value of patent portfolios. In the next articles, I will discuss other indicators and the specific parameters and methodologies involved in quality determination. So be on the lookout!
As a keen industry observer, I have watched with awe the attention patents (aka IPR, Intellectual Property Rights) have recently gotten. And that has everything to do with the importance 5G has gained. Most stakeholders now realize that IPR leadership indeed means technology leadership. But the issue many do not understand is how to determine IPR leadership. A lot of them, especially the gullible media, falsely believe that owning a large number of patents represents leadership, no matter how insignificant those patents are. I have been on a crusade to squash that myth and have written many articles and published a few podcasts to that effect. Gladly, though, many are realizing this now and speaking out. I came across one such report, titled “5G Technological Leadership,” published by the well-known US think tank, the Hudson Institute.
Infrastructure is only one of the many 5G challenges
The report recognizes the confusion the 5G policy discussion in the US is mired in, and how misdirected the strategy discussions have been. It rightly points out that the well-publicized issues of the lack of 5G infrastructure vendor diversity, as well as the size and speed of 5G deployments, are only small and easy-to-understand parts of the multifaceted 5G ecosystem. The authors of the report, Adam Mossoff and Urška Petrovčič, strongly suggest that it would be wrong for policymakers to focus only on these aspects. I could not agree more.
How to determine technology leadership?
A much more important aspect of 5G is the ownership of the foundational and core technologies that underpin its transformative ability. With 5G being a key element of the future of almost every industry on the planet, whoever owns those core technologies will not only win the 5G race but also wield unassailable influence on the global industry and the larger economy.
As mentioned earlier, technology leadership stems from IPR ownership. This is not lost on companies and countries that aspire to be technology leaders. This is clearly visible in the number of 5G patents filed by various entities. And that brings us to the critical question “Does having a large number of patents bring technology leadership?”
Patent counting is an unreliable method
It is heartening to hear that the report decisively says that patent counting is an unreliable method of determining 5G leadership, and that it would be a mistake to use it as such. Further, the report asserts that the decision boils down to the quality of those patents, not their quantity. The quality of patents here means how fundamental and important they are to the functioning of 5G systems.
Side note: Please check out these two articles (Article 1, Article 2) to understand how to determine the quality of patents.
The misguided focus on patent quantity has made many companies, and even countries, pursue options that are on the fringes of what is considered ethical. For example, the report attributes the recent rise in 5G patents filed by Chinese individuals and companies to the government’s direct subsidies for filing patents, not necessarily to an increase in innovation. There might be other unscrupulous motives too, such as companies over-declaring Standard Essential Patents to achieve broad coverage or to avoid unknowingly violating the disclosure requirements, among others.
As I have discussed in my previous articles and podcasts, the standards-making body 3GPP’s honor-based system has enough loopholes for bad actors to goose up their patent count without adding much value or benefit.
The Hudson Institute report quotes an important point raised by the UK Supreme Court: reliance on patent counting also risks creating “perverse incentives,” wherein companies are incentivized to merely increase the number of patents instead of focusing on innovation.
All this boils down to one single fact: when it comes to patents, quality is much more important than quantity.
In closing
After the initial misguided focus on the quantity of patents as a measure of technology leadership, the realization of the importance of patent quality is slowly sinking in. As awareness of the transformational impact of 5G spreads, awareness of the importance of the quality of 5G patents is growing as well. The Hudson Institute, being a think tank and an influential public policy organization, is rightly pointing out the key issues that are either missing from or misdirected in the national technology policy debate. This is especially true for the 5G patent quality discussion. I hope the policymakers and the industry take notice and reward companies with high-quality patents while penalizing the manipulators.
Samsung Networks
The virtualization of cellular networks has been ongoing for some time. But virtualizing the Radio Access Network (RAN) has always been an enigma and the final frontier of the trend. The rising star of the 5G infrastructure business—Samsung—jumped onto the virtualized RAN (vRAN) bandwagon with its announcement yesterday. I think this will prove to be another turning point in moving the industry from the decades-old “custom hardware + integrated software” approach toward the modern, efficient, and flexible vRAN architecture.
What is vRAN and why does it matter?
Ever since the dawn of the cellular industry, radio networks have been considered the most complex part of the equation, mainly because of the dynamic nature of wireless links, compounded by the challenges of mobility. The “custom hardware + integrated software” approach proved to be the winning combination for taming that complexity. The resulting operator lock-in, and the huge entry barrier it created for new entrants, made the established infrastructure players wholeheartedly embrace that approach. As cellular technology moved from 2G to 3G, 4G, and now 5G, the complexity of radio networks grew exponentially, keeping the approach intact.
But things are rapidly changing. Thanks to the accelerated growth of computing, it is now indeed possible to break this combination and use commercial off-the-shelf (COTS) hardware with disaggregated software. This new approach is called vRAN.
The advantages of vRAN are obvious. It brings flexibility and drastically reduces entry barriers for new players, which expands the ecosystem. Operators will be able to choose the best hardware and software from different players and deploy the best-performing systems. All this choice increases competition and substantially reduces costs, while increasing the pace of innovation.
Samsung’s 5G vRAN offerings
Samsung has announced full, end-to-end vRAN offerings for 5G (and 4G). These include the virtual Central Unit (vCU), the virtual Digital Unit (vDU), and the existing Radio Units (RU). According to the press release, the vCU was already commercialized in April 2019, and the full system was demonstrated to customers in April 2020. Samsung’s vCUs and vDUs run on Intel x86-based COTS servers.
Let me explain the roles of these units without going into too much detail. vCUs are responsible for non-real-time functions, such as radio resource management, ciphering, retransmission, etc. vDUs, on the other hand, contain the real-time functions related to the actual delivery of data to the device through the RUs. RUs convert digital signals into wireless waves. A single vCU can typically manage multiple vDUs, and a single vDU can connect to multiple RUs.
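As a rough mental model, the functional split can be pictured as a simple one-to-many hierarchy (my own illustrative sketch of the relationships described above, not Samsung’s software):

    from dataclasses import dataclass, field

    @dataclass
    class RU:          # Radio Unit: converts digital signals into wireless waves
        site: str

    @dataclass
    class VDU:         # virtual Digital Unit: real-time data-delivery functions
        rus: list = field(default_factory=list)

    @dataclass
    class VCU:         # virtual Central Unit: non-real-time functions (RRM, ciphering, ...)
        vdus: list = field(default_factory=list)

    # One vCU manages multiple vDUs; each vDU connects to multiple RUs.
    vcu = VCU(vdus=[
        VDU(rus=[RU("cell-site-1"), RU("cell-site-2")]),
        VDU(rus=[RU("cell-site-3"), RU("cell-site-4"), RU("cell-site-5")]),
    ])
    print(sum(len(vdu.rus) for vdu in vcu.vdus))   # 5 RUs under a single vCU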
“Our vRAN solutions can deliver the same reliability and performance as that of today’s legacy systems,” said Alok Shah, Vice President, Networks Strategy, BD, & Marketing at Samsung Electronics, “while bringing flexibility and cost benefits of virtualization to our customers.”
Another important aspect of the announcement is the support for Dynamic Spectrum Sharing (DSS), which allows 5G to utilize the 4G spectrum. This is extremely crucial, especially for operators with limited low- or mid-band 5G spectrum. Shah mentioned that they have put a lot of emphasis on ensuring smooth DSS interworking between the new 5G vRAN and legacy 4G systems.
A significant step for the industry
Samsung made everybody’s head turn when it won a significant share of the 5G market in the USA, beating long-term favorites such as Ericsson and Nokia. This came on the heels of its 5G wins in South Korea and its strong 4G performance in a hyper-competitive and large market like India. Additionally, Samsung’s strong financial position gives it a distinct advantage over its traditional rivals.
So, when such a strong player adopts a new trend, the industry takes notice. Until now, the vRAN vendor ecosystem consisted primarily of smaller disruptive players such as Mavenir, Altiostar, and Parallel Wireless. Major tech players such as Facebook, Intel, Google, Qualcomm, and others have largely been observing the developments from the outside. Nokia, another major legacy vendor, recently announced its 5G vRAN offerings as well, with general availability slated for 2021. Samsung’s announcement makes vRAN much more real, and its future that much brighter. Also, Samsung, being a challenger, has much more to gain from vRAN than its legacy competitors such as Ericsson, Nokia, and Huawei.
vRAN also opens the possibility of Open RAN, in which vCUs, vDUs, and RUs from different vendors can work with each other, providing even more flexibility for operators. Although Samsung didn’t specifically mention this in the PR, Shah confirmed that the use of standardized open interfaces makes their vRAN system inherently open. He also pointed to their growing portfolio of Open RAN-compliant solutions, developed through multiple collaborations with US operators. Open RAN and vRAN have gained even more attention and importance because of the geopolitical issues surrounding the US ban on Huawei and the related national security concerns.
Side note: If you would like to learn more about Open RAN architecture and its relevance to addressing the U.S. government’s concerns with Huawei, listen to this Tantra’s Mantra podcast episode.
The generational shift, which requires a major overhaul of network infrastructure, is a perfect opportunity for operators to pursue new technologies and a new approach. However, the move to vRAN will be gradual. Greenfield 5G operators such as Dish Network in the USA might start off with vRAN; some US operators looking to build out 5G on the new mid-band spectrum might use vRAN for that as well, as might enterprises building private networks. The migration of larger legacy networks will happen gradually, over a period of time.
In closing
After a long period of skepticism, it seems the market forces are aligning for vRAN. Because of its enormous benefits in flexibility and cost-efficiency, there is a lot of interest in it, along with strong support from large industry players. In such a situation, Samsung’s announcement has the potential to be a turning point in moving the industry toward vRAN. In my view, Samsung, with its end-to-end virtualized portfolio and solid financial position, is strongly positioned to exploit that move. For a keen industry observer like me, it will be fascinating to watch how the vRAN saga unfolds.
While the media is abuzz with the news of Samsung’s foldable smartphones, being a network engineer at heart, I am more excited about Verizon and Samsung’s recent announcement of the successful completion of 5G virtual RAN (vRAN) trials using the C Band spectrum. Verizon’s adoption of vRAN for its network build, and Samsung’s support for advanced features such as Massive MIMO (mMIMO) in its vRAN portfolio, bode very well for rapid 5G expansion in the USA. I recently spoke to Bill Stone, VP of technology development and planning at Verizon, and Magnus Ojert, VP and GM at Samsung’s Networks Business, regarding the announcement as well as the progress of C Band 5G deployments.
The joint trial
The trials were conducted over Verizon’s live networks in Texas, Connecticut, and Massachusetts. Since the spectrum is still being cleared for use, Verizon had to get special clearance from the FCC. The trials used Samsung’s containerized, cloud-native, fully virtualized RAN software and hardware solutions supporting the 64T64R mMIMO configuration. This configuration is extremely important to Verizon for reasons I will explain later in the article. The trial is yet another critical milestone in Verizon’s race to build its C Band 5G network.
Verizon’s race to deploy C Band 5G network
After spending $53B on the C Band auction, Verizon is in a race against itself and its competition to put the new spectrum to use. It needs a robust network in place before strong 5G demand outpaces the capacity of its current network. As many of you might know, Verizon currently uses the Dynamic Spectrum Sharing (DSS) technique to opportunistically use its 4G spectrum for 5G, along with focused mmWave deployments. Verizon also needs an expansive coverage footprint to effectively compete against T-Mobile, which is capitalizing on the spectrum trove it got through the Sprint acquisition.
Verizon is as busy as a beehive—signing deals with tower companies, prepping sites for deployment, working closely with its vendors, running many trials, and so on. Owning a significant portion of the fiber backhaul to sites is helping Verizon expedite the buildout. Stone confirmed that vRAN will be the mainstay of Verizon’s C Band deployments and that the company is firmly on the path to transitioning to virtual and Open RAN across the entire network. This will give Verizon more flexibility, agility, and cost-efficiency in enabling new services in the future, especially during the later phases of 5G, when the service expands beyond the smartphone and mobile broadband market. He added that trials like this one are a great step in that direction. Although Verizon’s vRAN equipment supports open interfaces, the initial deployments will be single-vendor only. I think this approach—single-vendor vRAN followed by multi-vendor Open RAN—is a smart strategy that many operators will adopt.
The most interesting C Band development the whole industry is watching is how Verizon’s plan to use its AWS band (1.7 GHz) site grid for C Band (3.5 GHz) will pan out. According to Stone, one way Verizon is looking to compensate for C Band’s smaller coverage footprint is to use the 64T64R antenna configuration. He expects this to improve the uplink coverage, which is the limiting factor. He added that the initial results from the trial are very encouraging.
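Some back-of-the-envelope math shows why the 64T64R bet matters (my own idealized free-space illustration; real networks face additional clutter, building-penetration, and device-side losses):

    import math

    def fspl_db(freq_hz, dist_m):
        # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
        return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)

    # Extra loss at C Band vs. the AWS band at the same distance (distance cancels out):
    print(f"{fspl_db(3.5e9, 1000) - fspl_db(1.7e9, 1000):.1f} dB")   # ~6.3 dB

    # Nominal beamforming gain of a 64-element array over a single element:
    print(f"{10 * math.log10(64):.1f} dB")                           # ~18.1 dB

Moving from 1.7 GHz to 3.5 GHz costs roughly 6 dB of free-space path loss at the same distance. The array gain of a large 64T64R panel can, in principle, more than make up that gap on the downlink, which is why the uplink, with far fewer antennas on the device side, remains the limiting factor.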
This coverage benefit, however, will necessitate a rather expensive 64T64R configuration across most of Verizon’s outdoor macro sites. Verizon is also looking at small cells, indoor solutions, and other options to provide comprehensive coverage. Stone aptly said “All the above” is his mantra when it comes to using these options to expand coverage. Considering that a robust network and coverage are Verizon’s key differentiators, there is not much margin for error in its C Band deployments.
Samsung leading with its mMIMO and vRAN portfolio
After scoring a surprise win by getting a substantial share of Verizon’s 5G contract, Samsung has been consolidating its position by continuously expanding its RAN portfolio. Ojert emphasized that Samsung is working very closely with Verizon for a speedy and successful C Band rollout.
Side note: To know more about Samsung’s network business, please listen to this Tantra’s Mantra podcast interview of Alok Shah, VP Samsung Networks.
Being a disruptor, Samsung has been an early adopter of vRAN and Open RAN architectures. It understands that the key success factor for these new architectures is delivering performance that meets or exceeds that of legacy networks. 64T64R support has almost become a litmus test of whether the new approaches can evolve to support complex features such as mMIMO.
There have already been commercial deployments of legacy networks supporting 64T64R. Hence, it has become the de facto bar for any new large-scale vRAN deployment. The telecom industry is hard at work to make it a reality, and Verizon’s plan to use 64T64R to close the C Band coverage gap makes it almost mandatory for all its vendors.
Running these trials on live networks, and at multiple locations at that, makes a great proof point for the readiness of Samsung’s gear for large-scale deployments. Ojert emphasized that, as a major supplier for the cutting-edge 5G networks in Korea that use similar spectrum, Samsung understands the characteristics of the band well. He added that Samsung will utilize its entire portfolio of solutions, including small cells, indoor solutions, and others, in helping Verizon build a robust network.
C Band commercial deployments and service
The FCC is expected to clear up to 60 MHz of the total of up to 200 MHz of C Band spectrum later this year. Verizon is projecting to have C Band 5G service in the initial 46 markets in the first quarter of 2022, covering up to 100 million people. It will expand that as additional spectrum is cleared, to reach an estimated 175 million people by 2024.
The initial deployments will be based on the Rel. 15 version of 5G, with the ability to upgrade via firmware to Rel. 16 and beyond for services such as URLLC, as well as the Standalone (SA) configuration.
C Band (along with its mmWave) spectrum indeed is a potent option for Verizon to substantially expand 5G services, effectively compete, and prepare for the strong evolution of 5G. It will be interesting to watch how the rollout will change the market landscape.
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
As an eventful 2021, which witnessed 5G becoming mainstream despite all the challenges, comes to a close, the analyst part of my mind has been reviewing the major disruptions 5G brought to the cellular market. The rise of Samsung, mostly known for its flagship Galaxy phones and shiny consumer electronics, as a global 5G infrastructure leader dawned on me as a key one.
As a keen industry observer, I have been tracking Samsung Networks for a long time. A little more digging and research revealed how systematically it charted a path from its solid home base in Korea to its disruptive debut in the USA, followed by expanding its influence in Europe and other advanced markets, all the while building a comprehensive 5G technology and product portfolio.
In this article, I will try to follow its growth steps in the last two years and explore how it is well-positioned to lead in the upcoming 5G expansion.
Strong presence at home and early success in India built the Samsung foundation
Korean operators such as Korea Telecom and SK Telecom have always been at the bleeding edge of cellular technology, since the 3G days. As their key supplier, Samsung has been a significant enabler for these operators’ leadership, especially in 4G and 5G. That has also helped Samsung stay ahead of the curve.
Samsung’s first major international debut was in India in 2013, supporting Reliance Jio, a new cellular player that turned the Indian cellular and broadband market upside down. Samsung learned valuable lessons there about deploying very large-scale, expansive cellular networks.
The leadership at home combined with the experience in India provided Samsung a solid foundation for the next phase of its global expansion.
Disruptive debut in the USA that changed the infra landscape
U.S. cellular industry observers grumbling about the lack of 5G infra vendor diversity were pleasantly surprised when Samsung won a large share of Verizon’s contract to build the world’s first 5G network. That was a major disruption for two reasons. First, Samsung virtually replaced a well-established player, Nokia. Second, it was Verizon, for whom the network is not just a differentiation tool but the company’s pride. Verizon entrusting Samsung with the deployment of its high-profile, business-critical, first 5G network speaks volumes about Samsung’s technical expertise and product superiority.
Over the years, Samsung has scored many key 5G wins in the U.S., including early 5G-ready Massive MIMO deployments for Sprint (now T-Mobile), supplying CBRS-compliant solutions to AT&T, and 4G and 5G network solutions for UScellular.
These U.S. wins were the result of a well-planned strategy, executed with surgical precision. Samsung started 5G work in the U.S. as early as 2017 with testing and trials. In fact, Samsung was the first to receive FCC approval for its 5G infra solution, in 2018, quickly followed by outdoor and indoor 5G home routers.
It’s not just the initial contract wins and delivering on the promise. Samsung has been consistently collaborating with operators in demonstrating, trialing and deploying new and advanced 5G features such as 64T64R Massive MIMO and virtual RAN, c-band support, indoor solutions, small cells and more.
In other words, Samsung has fully established itself as a major infra player in the lucrative and critical U.S. market. The rapid deployment of 5G, even in rural areas, and the impending rip and replace of Chinese infrastructure for national security reasons bode well for Samsung’s growth prospects in the country.
Samsung methodically expands into Europe, Japan and elsewhere
After minting success in the high-stakes U.S. market, Samsung signed a contract with Telus of Canada in 2020. Canada was a straightforward expansion, and going after other advanced markets, such as Europe and Japan, was the natural next step.
Europe is one of the most competitive and challenging markets to win. Not only is it home to two well-established infra players, Ericsson and Nokia, but it is also the biggest market outside China for Huawei and ZTE. Samsung has seen early success with some of the key players in Europe. For example, it successfully completed a trial with Deutsche Telekom in the Czech Republic, potentially giving Samsung access to DT’s extensive footprint in the region. Recently, Vodafone UK selected Samsung as the vRAN and Open RAN partner for its sizable commercial deployment, and Samsung is collaborating with Orange on Open RAN in France. Getting into these leading operators in the region is a significant accomplishment. In my view, with other players such as Telefonica being very keen on vRAN and Open RAN, entry there is only a matter of time.
Even with these wins, it is still early, and the company’s strategy in Europe is still unfolding. A significant tailwind for Samsung is the heightened national security concern, which has significantly slowed the traction of the Chinese players. Additionally, onerous U.S. restrictions have seriously crippled Huawei.
Japan has always been among the most advanced markets. So far, it has been dominated by local players such as NEC and Fujitsu. Spreading its wings there, Samsung has been collaborating with KDDI on 5G since 2019. It also got into the other major operator, NTT DoCoMo, earlier this year with a contract to supply O-RAN-compliant solutions.
Comprehensive technology and product portfolio that fueled all this growth
5G has always been characterized as a race. That means the first to market and the leaders will emerge as winners taking a large share of the value created by 5G. Interestingly, it has played out as such so far. The investments in 5G are so large that once companies establish leadership and ecosystem relationships, it is extremely hard to change or displace them.
Realizing that, Samsung invested big and early in 5G technology development. Being both a network and device supplier, it can spread that investment over a much broader portfolio. Samsung conducted pioneering 5G testing and field trials as early as 2017 and 2018 in Japan with KDDI. When many in the industry were still debating mmWave’s ability to support mobility, Samsung, collaborating with SK Telecom, demonstrated successful 5G video streaming in a race car moving at 130 mph. Samsung was also the industry’s first to introduce mmWave base stations with integrated antennas, significantly simplifying deployment.
In emerging areas such as edge cloud, Samsung is already working with major cloud providers such as Microsoft and IBM and chipset players such as Marvell.
Currently, Samsung has one of the most comprehensive portfolios of network solutions, software stacks, and tools: support for all commercial 5G bands, including both Sub-6 GHz and mmWave; advanced features such as Massive MIMO; indoor and outdoor deployments; new architectures such as vRAN and Open RAN; and public as well as private networks.
One of Samsung’s major advantages over its infra competitors is the financial strength that comes from being part of a huge industrial conglomerate. In businesses like 5G, where investments are large, risks are high, and payback times are long, such financial strength can make the difference between winning and going out of business.
In closing
Samsung Networks’ journey from its humble beginnings in Korea to a global 5G infrastructure leader is fascinating. It has invested heavily to become a technology leader, and has successfully used that leadership along with meticulous planning and execution to be a global leader in the 5G infra business.
It is still early days for both 5G and Samsung. It will be interesting to watch how Samsung utilizes this early lead to capture even bigger opportunities created by 5G’s expanding reach and new sectors such as Industrial IoT.
In the meantime, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
Samsung Networks held its mid-year analyst day last week, giving an update on their progress on the vRAN/Open RAN front, Dish deployment, and the opportunities they see in the Private Networks space. I was among a few key analysts they invited to their offices in Dallas for the meeting. I came out of the meeting well informed about their strategy and future path, which is following the trajectory I discussed in my earlier articles here.
Strong vRAN/Open RAN progress
Since launching its vRAN portfolio, Samsung has steadily expanded its sphere of influence in North America, Europe, and Asia. Although its surprising debut at Verizon was with legacy products, Samsung Networks has used its market-leading vRAN/Open RAN portfolio as leverage to expand its reach, including in Verizon’s c-band deployments and with newer customers, regions, and markets. Having both legacy and vRAN support makes it an ideal partner for any operator, be it one continuing to use the legacy approach for faster deployment and expansion of 5G, one looking to utilize newer architectures to build future-proof networks, or even one looking to bridge between the two.
The chart below captures the continuing successes Samsung Networks has witnessed in the last couple of years.
As Verizon’s VP Bill Stone explained to me during a recent interview, a significant portion of their c-band deployment is vRAN. An operator like Verizon, which considers its network a differentiator, putting full faith in Samsung’s vRAN portfolio speaks to the latter’s product quality and maturity.
Vodafone UK partnered with Samsung Networks to commercialize its first Open RAN site and has plans to expand it to more than 2,500 sites. You can read more about this in my earlier article here.
To clarify, people often confuse vRAN and Open RAN. vRAN is the virtualization of RAN functions so that they can run on commercial off-the-shelf (COTS) hardware. In contrast, Open RAN is building a system from hardware and software components with open interfaces, sourced from different vendors. vRAN is firmly on its way to becoming mainstream. However, there are still challenges and lingering questions about Open RAN. That’s why the progress of early Open RAN adopters such as Dish interests everybody in the industry.
Samsung’s recent announcement regarding 2G support for vRAN was interesting. I knew that there are some 2G markets out there, but I was surprised to see the size of this market, as illustrated in the chart below:
This option of supporting 2G on the same Open RAN platform will help operators efficiently serve their remaining 2G customers and eventually transition them to 4G/5G while using the same underlying hardware. On the business side, this option will help Samsung Networks break into new accounts, especially in Europe.
Powering America’s first-ever Open RAN network with Dish
Nothing illustrates your Open RAN credentials better than one of the world’s most watched new 5G operators, fully committed to Open RAN, launching its network with you as its primary infra vendor. Dish has a long list of firsts: the first fully cloud-native vRAN and Open RAN network in the US, the first multi-vendor Open RAN network in the US, the first to use the public cloud for its deployment, and more.
As evident from many auctions, public disclosures, and this study by Allnet Insights & Analytics, Dish has a mix of many different spectrum bands with highly variable characteristics. They include bands from 600 MHz to 28 GHz; bandwidths ranging from 5 MHz to 20 MHz; paired (FDD) and unpaired (TDD and supplemental downlink) licenses; licenses in the crosshairs of satellite broadband operators; and so on. Dish has embarked on the unique journey of being the first major greenfield, countrywide cellular provider in the US in decades while adopting a brand-new architecture such as Open RAN. Additionally, it has tight regulatory timelines to meet. In such a scenario, it needs a reliable, versatile, financially strong infra partner with a solid product portfolio. Above all, it needs a vendor fully committed to Open RAN. Dish seems to have found such a partner in Samsung Networks.
To be clear, it is still very early days for Dish and Open RAN. The whole industry is watching their progress with keen, watchful eyes.
Finding a foothold in the private networks market
Private Networks is one of the most hyped concepts in the cellular industry today. Indeed, 5G Private Networks have great prospects with Industry 4.0 and other futuristic trends. But based on my interactions with many players in the space, customers’ real needs seem to be plain-vanilla mobile broadband connectivity. In many cases, be it large warehouses, educational institutions, or enterprises with sprawling campuses, cellular Private Networks will be needed for use cases requiring seamless mobility, expanded coverage (indoor and outdoor), increased capacity, and, in some cases, higher security. And these will complement Wi-Fi networks.
During the event, Samsung Networks explained how they are addressing these immediate and prospective long-term needs of the market, with examples of early successes. These include deployments at Howard University in the USA, a relationship with NTT East in Japan, and the latest collaboration with Naver Cloud in South Korea.
Naver has also deployed an indoor commercial 5G Private Network in its office. The network, covering a sizeable multi-story building, serves a fleet of autonomous robots. These robots work as office assistants, providing convenience services such as delivering packages, coffee, and lunch boxes to Naver employees throughout the building. All the robots are controlled by Naver’s cloud-based AI. The need for 5G instead of Wi-Fi stems from mobility, low-latency, coverage, and capacity requirements.
Mobility is needed for reliable connectivity, with hand-offs, as the robots move around. Low latency is required to connect the robots to the cloud AI for seamless operations. Extended coverage and capacity are needed to ensure the robots’ connectivity is not degraded by traffic from all the other office machines, including computers, printers, network drives, and others.
Naver and Samsung plan to market this concept and related services to other customers.
In closing
The analyst meeting provided other analysts and me with a good understanding of Samsung Networks’ current traction in vRAN/Open RAN and an overview of their strategy for the future.
It seems Samsung Networks is well poised to expand its market with its vRAN/Open RAN portfolio, along with support for legacy architectures. With Dish being a bellwether for Open RAN, the industry is very closely watching its success and its collaboration with Samsung Networks.
Private Networks is an emerging concept for 5G with great potential. Samsung Networks seems to have scored some early partners and deployment wins.
The 5G infrastructure market expansion is exciting, and Samsung seems to have gotten a good head start. It will be interesting to see how it evolves, especially with the fears of global recession looming.
Meanwhile, to read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Samsung Networks’ news cycle started weeks before the much-awaited Mobile World Congress 2022, as the company made its mark in Europe. The cycle continued, with many more announcements coming right before, during, and after the event. The notable ones were building a solid coalition to streamline virtualized RAN (vRAN) and open RAN, expansion into the red-hot private networks domain, and traction in public safety deployments.
All this points to Samsung Networks evolving from its initial disruptor role to a market and thought leadership role, tracking the trajectory I had detailed last year in this article.
Building comprehensive, interoperable vRAN/open RAN ecosystem
As I explained in my recent Forbes article, the biggest challenge of new architectures like vRAN and open RAN is stitching together a system from disparate pieces supplied by many different companies. Most of these pieces, by definition, are generic and commercial off-the-shelf (COTS). In such cases, it is an arduous task for operators and system integrators to ensure these pieces interwork seamlessly and operate as a single system. Moreover, this system has to meet or exceed the performance of legacy architectures. Understanding this challenge, Samsung Networks is taking charge to innovate and build a comprehensive ecosystem of vRAN/open RAN players with fully interoperable solutions.
The announced coalition consists of well-known brands with proven track records. It has cloud infra players such as Dell and HPE, chipset giants such as Intel, and cloud software platform players such as Red Hat and Wind River. I wouldn’t be surprised if the roster grows with additional partners such as Qualcomm, Marvell, and hyperscalers in the near future.
The primary objective of the coalition is to develop fully interoperable, deployment-ready, pre-tested, and pre-integrated vRAN and Open RAN solutions. Anybody who has done system integration knows that even though, in theory, standards-compliant products should interwork, during actual deployments, nasty surprises always spring up. This collaboration is designed to remove that exact element of surprise and make deployments seamless, predictable, and cost-effective.
By joining hands with Samsung Networks, all these players who are leaders in their respective domains have recognized the leadership and growing influence of the company.
CBRS and Private Networks deployments
Private Networks have attracted a lot of attention lately. There has been much news regarding deployment plans, commitments, and trials. Samsung Networks was among the first to deploy an actual commercial Private Network on the campus of Howard University.
On the second day of MWC, Samsung Networks announced that NTT East had selected it as the partner for Private Network deployments in the eastern region of Japan. This followed the successful completion of 5G Standalone (SA) network testing by the two companies. 5G SA is a crucial feature for Private Networks, especially for delivering massive IoT and mission-critical services to enterprises, large industries, and others.
In the USA, the CBRS shared spectrum is touted as the ticket to Private Networks. After a somewhat slow start, CBRS deployments have been picking up pace in the last couple of years. During MWC, Samsung announced a collaboration with Avista Edge Inc. for an interesting use case of the CBRS spectrum. Avista Edge is a last-mile, fixed wireless access (FWA) technology provider with an innovative approach to delivering broadband. As part of the deal, Avista Edge will offer broadband services to rural communities through electric utilities and Internet Service Providers. Samsung will provide its OnGo Alliance-certified Massive MIMO radios and compact core network to Avista Edge.
Right after MWC, Samsung also announced another CBRS deal, this time with Mercury Broadband in collaboration with t3 Broadband. Mercury Broadband is a rural broadband provider, and t3 Broadband is an engineering services company. Samsung will provide its 64T64R Massive MIMO radios and baseband units for more than 500 FWA sites across Kansas, Missouri, and Indiana. The network is expected to expand to additional states through 2025.
Public safety partnership and new mmWave use case
Samsung Networks and the Canadian operator TELUS announced the country’s first Mission Critical Push-to-X (MCPTX) deployment, serving first responders, public safety workers, and others. It will be deployed over TELUS’s 4G and 5G networks and has already been trialed with select customers. Broader commercial availability is expected in the later part of 2022.
Samsung Networks’ MCPTX solution packs a comprehensive suite of tools, offering real-time audio and video communication between first responders, priority access in congested networks during natural disasters, connected ambulances, and vehicular traffic control.
In an interesting use case of mmWave, Samsung Networks signed a deal with all three Korean operators to provide high-capacity mmWave backhaul to the subway Wi-Fi system in Seoul. Seoul is one of the most connected cities in the world, and data consumption there continues to grow. The system will provide high-capacity backhaul to Wi-Fi access points in subway stations and trains, allowing users to enjoy extreme speeds, more capacity, and a better broadband experience while in transit. This set-up was successfully trialed in September 2021.
In closing
After impressive 5G rollouts in the USA over the years, including its most recent Verizon C-band deployment, Samsung Networks is set to establish a solid foothold in Europe. Further, it is becoming a recognized leader in vRAN/Open RAN, and is widening its appeal to rural players and private network providers around the globe.
Its announcements at MWC 2022 provided solid proof of its expansion strategy and early success. I’ll be interested to see how Samsung Networks grows and tracks the trajectory outlined in my 2021 article.
Prakash Sangam is the founder and principal at Tantra Analyst, a leading boutique research and advisory firm. He is a recognized expert in 5G, Wi-Fi, AI, Cloud and IoT. To read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
MWC 2023 turned out to be a graduation party for Samsung Networks, from a market disruptor to a mature, reliable and confident 5G infrastructure leader. This was evident from the flurry of announcements made around the event, including its own, as well as from operators and other ecosystem partners.
The announcement season actually started late last year, when the Dell’Oro Group crowned Samsung Networks the vRAN/open RAN market leader. To top that, during MWC, Samsung Networks announced its next-gen vRAN 3.0, as well as many collaborations and partnerships.
To my credit, Samsung Networks followed the trajectory I outlined in this article in 2021. It has meticulously built and expanded its global footprint and created a sizeable ecosystem of partners that are technology and market leaders in their respective domains.
Next-gen infrastructure solutions
Unlike other large infrastructure vendors such as Ericsson, Huawei and Nokia, Samsung was an early and enthusiastic adopter of the vRAN/open RAN architecture. Being a challenger and a disrupter made that decision easy: it didn’t have any sacred cows to protect, i.e., legacy contracts and relationships. That gave it a considerable head start that it continues to maintain.
The vRAN/open RAN transition is shaping up to be a two-step process: First, a disaggregated, cloud-native, single vendor, fully virtualized RAN (vRAN), with open interfaces, followed by a multi-vendor truly open RAN. Many of Samsung Network’s competitors are still on the first step, deploying their first commercial base stations. In contrast, Samsung Networks has already moved on to the second step (more on this later).
Samsung Networks announced its next-gen solution, dubbed vRAN 3.0, which brings many performance optimizations and significant power savings. The former includes a key feature that supports up to 200 MHz of bandwidth with the 64T64R massive MIMO configuration, covering almost the entire mid-band spectrum of U.S. operators. The latter involves optimizing the usage and sleep cycles of CPU cores to match user traffic, thereby minimizing power consumption. These software-only features (with the proper hardware provisioning) exemplify the benefits of a disaggregated vRAN approach, where new capabilities can be rapidly developed and deployed.
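To make the core-sleep idea concrete, here is a minimal Python sketch of how a scheduler might park and wake CPU cores to track traffic load. This is purely illustrative; the core count, per-core capacity, and function names are my own assumptions, not Samsung’s product logic.

# Illustrative sketch of traffic-aware CPU core scaling, the kind of
# power optimization described above. All numbers are assumptions.
import math

TOTAL_CORES = 32
CAPACITY_PER_CORE_MBPS = 500.0  # assumed baseband processing per core

def cores_needed(traffic_mbps, headroom=1.2):
    """Number of cores to keep awake for this traffic level, with headroom."""
    needed = math.ceil(traffic_mbps * headroom / CAPACITY_PER_CORE_MBPS)
    return max(1, min(TOTAL_CORES, needed))

def apply_sleep_cycles(traffic_samples_mbps):
    """Report active vs. parked cores as the traffic level changes."""
    for t in traffic_samples_mbps:
        active = cores_needed(t)
        print(f"traffic={t:7.1f} Mbps -> active={active:2d}, parked={TOTAL_CORES - active:2d}")

# Quiet overnight hours vs. a busy evening peak
apply_sleep_cycles([200, 1500, 6000, 14000, 3000, 400])

In the quiet hours, most cores stay parked; at the peak, nearly all wake up. The power saving comes from the parked fraction, which is exactly what matching compute to traffic buys you.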
Also part of vRAN 3.0 is the Samsung Cloud Orchestrator. It streamlines the onboarding, deployment, and operation processes, making it easier for operators to manage thousands of cell sites from a unified platform.
Although large parts of vRAN/open RAN are software-defined, the key radio technologies still reside in hardware. That is where Samsung Networks has strong differentiation: it is the only major network vendor that can design, develop, and manufacture 4G and 5G network chipsets in-house.
Strong operator traction and contract wins
Samsung Networks’ collaboration with Dish Wireless is notable at many levels. Dish Wireless is one of the biggest open RAN greenfield deployments. Its trust in keeping Samsung Networks as a primary vendor says a lot. It is also a multi-vendor deployment, wherein Samsung Networks is integrating its own as well as Fujitsu’s radio units (RU) into the network. Interestingly, Marc Rouanne, EVP and chief network officer of Dish Wireless, joined Samsung Networks’ analyst briefing at MWC and showered lavish praise on their work together, especially on system integration, the Achilles heel of open RAN.
Vodafone has been a great success story for Samsung Networks. After successfully launching the U.K. network with the famous Golden Cluster and integrating NEC radios, both companies are now extending their collaboration to Germany and Spain.
In Japan, Samsung Networks’ relationship with KDDI has grown tremendously. Leading up to MWC, they announced the completion of network slicing trials, followed by a commercial 5G open RAN deployment along with Fujitsu (for the RU) and a contract for a 5G standalone core network, a first for Samsung outside Korea.
A recent Dell’Oro report identified North America and Asia-Pacific as the growth drivers for vRAN/open RAN. Although Europe is a laggard, even that region’s revenue is expected to top $1 billion by 2027. Apart from the above announcements, Samsung Networks has announced many operator engagements and contract wins across these three regions over the years. So, geographically, Samsung Networks is placing its bets in the right places.
Expanding the partner ecosystem
Success in the infrastructure business is decided by the company you keep and the partnerships you nourish. That is even more true with vRAN/open RAN, where networks are cloud-native, software-defined, and multi-vendor, with open interfaces.
There was a long list of partner announcements around MWC 2023. The cloud platform provider VMware is working with Samsung Networks on the Dish deployment. Another provider, Red Hat, announced a study showing that operators can save significant power when its platform and Samsung Networks’ RAN work together.
Cloud computing provider Dell Technologies announced, through its 5G Head of Marketing Scott Heinlein‘s blog, a collaboration to integrate Samsung Networks’ vCU and vDU with its PowerEdge servers.
Finally, Intel, in its announcement, confirmed that Samsung had validated its 4th Gen Intel Xeon Scalable processors for the core network.
Again, these are just the MWC 2023 announcements. There were many more in the last few years.
In summary, through its differentiated solutions, strong operator traction, and robust partnerships, Samsung Networks has graduated from a credible disrupter to a reliable, mature infrastructure player, especially for vRAN/open RAN. That was on full display at MWC 2023 through its proven track record and its product, operator, and partner announcements. I can’t wait to see how its next chapter unfolds as global networks transition to new architectures.
Meanwhile, if you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast. If you want to know more about the vRAN/open RAN market, check out these articles.
Samsung recently opened the doors to its North American Samsung Networks Innovation Center in Plano, TX, further boosting its presence in the region. This state-of-the-art facility, supported by development centers and well-equipped labs, not only helps Samsung Networks and its partners to optimize, test and showcase their 5G products and services, but it also signifies the company’s strong commitment to support the needs of customers and build new partnerships in the region.
I got to tour the Innovation Center and the labs firsthand a couple of weeks ago and was impressed by the facilities. The opening of the Innovation Center is even more opportune considering that we are at the cusp of the second phase of 5G, driven primarily by architectures like vRAN/open RAN, new business propositions like private networks, and new and exciting use cases such as Industrial IoT, URLLC, and XR. This center will be a valuable asset for Samsung Networks and its customers and partners in experiencing new technologies in real life, ultimately helping make those technologies mainstream.
This is yet another step in the remarkable global growth of Samsung Networks in the 5G era, which I have documented in the article series here.
Showcase of the best of Samsung Networks’ technology
The front end of the expansive Samsung facilities is the Innovation Center, which houses many live demonstration areas highlighting various technologies and use cases. The current set-up includes demos of vRAN/open RAN with network orchestration, fixed wireless access (FWA) with both FR1 (Sub-6 GHz) and live FR2 (mmWave) systems, a private network with low-latency IIoT use cases, XR, and others.
The most impressive exhibit for me was the Radio Wall of Fame, a vast display of Samsung Networks’ radios deployed (and ready to be deployed) in the Americas, supporting a wide range of spectrum, output power levels, form factors, bandwidths, bands and band combinations, MIMO configurations, and more. It is awe-inspiring that in such a short span, Samsung Networks has developed almost all the configurations desired by customers in the Americas.
Optimizing and perfecting technologies for the Americas
The hallmark of any successful infrastructure player is to “think global and act local,” as markets are won by best addressing the specific needs of local and regional customers, which might often be disparate. Like other major cellular infra players, most of Samsung Networks’ core development happens offshore. But most, if not all, the customization and optimization happens in the country, including the crucial lab and field testing.
The best example of this localization is the fact that Samsung supports the spectrum bands and band combinations needed by U.S. operators, including the unique shared CBRS band. There are an estimated 10,000-plus possible band combinations defined by 3GPP, many of which are necessary in the USA. “Supporting and testing all the band combinations operators require is an arduous task, and that’s precisely where our well-equipped labs come into play,” says Vinay Mahendra, director of engineering, Networks Business, Samsung Electronics America. “The combinations are tested for compliance, optimized for performance, and can be demonstrated to operators at this facility before deploying them in the field.” This applies to many other local needs, such as configurations, deployment scenarios, and use cases. The new Plano Innovation Center is the showcase, while the existing labs there and elsewhere in the country serve as the brains and plumbing.
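To give a feel for why that test matrix gets so big, here is a small Python sketch that enumerates carrier-aggregation combinations from a short, hypothetical band list. The band names are examples only, and the pairing rules are greatly simplified compared to 3GPP’s.

# Illustrative only: even a short band list explodes combinatorially.
# The band list is hypothetical; real 3GPP rules add duplexing, bandwidth
# class, and contiguity constraints on top of this.
from itertools import combinations

bands = ["n2", "n5", "n48 (CBRS)", "n66", "n77 (C Band)", "n260", "n261"]

combos = [c for r in (2, 3) for c in combinations(bands, r)]
print(f"{len(combos)} two- and three-band combinations from just {len(bands)} bands")
for c in combos[:5]:  # show a small sample
    print("  " + " + ".join(c))

Just seven bands already yield 56 two- and three-band combinations; scale that to the full 3GPP band list and the 10,000-plus figure, and the need for well-equipped labs, becomes obvious.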
Testing ground for partners
A 5G network is an amalgamation of different vendors, and seamless interoperability between them is a basic need. vRAN/open RAN elevates this complexity to a new level, as software and hardware are disaggregated and might come from different vendors. A typical multi-vendor open RAN network could have different RU, DU, and CU vendors, cloud orchestration and solution providers, chip and cloud providers, and so on. Integrating all those hardware and software pieces and making the system work as one is no small task. It requires close collaboration among vendors, ensuring the system is thoroughly tested and pre-certified so that disruptions and issues in the field, and hence time and costs, can be minimized. That’s exactly the role of the Innovation Center and the labs.
The next phase of 5G will be driven by non-traditional applications, services, and use cases, such as IIoT, mission-critical services, XR, private networks, and many others that we haven’t even imagined yet. Those must be developed, tested, perfected, and showcased before being offered on commercial networks. Being a market leader, Samsung, with its partners, is in the driver’s seat to enable these from the network side. Again, a task cut out for its Innovation Center.
In closing
Samsung Networks’ Innovation Center in the U.S. is opening at a critical juncture, when 5G is ready for its next phase in the country, exploring new deployment models, architectures, and use cases. The center and the adjoining labs will serve as a centerpiece for the company and its partners to develop and commercialize that next phase. It will help Samsung Networks showcase its innovations and partner technologies and demonstrate the company’s commitment to its customers in the region.
I am looking forward to seeing new technologies and concepts being demonstrated there.
If you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Mobile World Congress was back to its past glory this year with more than 100,000 visitors, as reported by the organizer GSMA. As expected, there was lots of news, and open RAN was one of the major themes. Among many things I cover, I found it interesting how Samsung Networks expanded its partnerships, extended its reach and solidified its position as one of the top 5G global infrastructure players.
The open RAN action started well before MWC this year. In February 2024, Verizon touted that it had deployed more than 130,000 open RAN-compatible radios. Before that, AT&T, which hadn’t been outspoken about its open RAN strategy, surprised the industry in December 2023 with a large single-vendor contract awarded to Ericsson. This contract upset the infrastructure market’s structure. Suddenly, Ericsson, a laggard so far, became an overnight open RAN champion. Meanwhile, the deal reduced Nokia’s relationships among the top three U.S. mobile network operators (MNOs) to T-Mobile alone. These announcements indicated that open RAN is slowly but surely becoming mainstream.
Samsung’s continued operator traction
I have closely followed Samsung Networks’ journey from its disruptive international debut to carefully charted global expansion to its current leadership position, as documented in my ongoing article series.
In North America, Samsung currently has major 4G LTE, legacy 5G, and open RAN deployments at Verizon, a legacy deployment at Telus, and an exclusive multi-vendor 5G open RAN deployment at Dish Network. Samsung supplied a majority of the aforementioned 130,000 open RAN-compatible radios at Verizon. It further expanded its reach in the region by signing up to deploy Canada’s first open RAN network with Telus. This is noteworthy also because Samsung will provide comprehensive solutions, including the latest vRAN 3.0 for 4G/5G, open RAN-compliant Massive MIMO radios (up to 64T64R), support for third-party radio integration, and an AI-based Service Management and Orchestration (SMO) platform, a first for Samsung.
In Europe, Samsung has established a strong relationship with Vodafone since 2021. Last year, Samsung and Vodafone began a large-scale open RAN rollout across 2,500 sites in the U.K. At MWC, both companies announced that they are extending that further to deploy open RAN in 20 major cities in Romania. Samsung is rapidly expanding its footprint and becoming a critical player in the region.
Telecom: a game of partnerships
Telecom is a game of partnerships. It’s even more critical in open RAN, where the whole premise is to utilize various vendors’ different software and hardware components. During MWC, Samsung announced new partnerships and further strengthened existing ones.
The first was with AMD, where Samsung and Vodafone made the first end-to-end open RAN data call on AMD processors. The call was made with AMD EPYC 8004 series processors on Supermicro’s Telco/Edge servers, supported by Wind River Studio Container-as-a-Service (CaaS) platform.
Following that, Samsung announced the industry’s first end-to-end call in a lab environment with Intel’s next-gen Granite Rapids processors. Earlier this year, Samsung and AWS announced a data call using Samsung’s versatile vRAN software, Samsung’s vRAN accelerator card and Amazon Elastic Compute Cloud (Amazon EC2) instances powered by AWS Graviton processors.
As the debate rages on over whether x86 or Arm processors are better, or whether in-line or look-aside accelerators are more suitable for vRAN/open RAN, Samsung is not taking sides; it is offering all the options, giving operators a choice. One might think offering all options is more expensive and resource-intensive. This is precisely where the financial strength of the larger Samsung conglomerate comes into the picture and helps Samsung Networks differentiate itself.
Let me explain. Network infrastructure is a relatively new, strategic growth business for the mothership. In today’s dynamic infrastructure market, marred by financial challenges and geopolitics, Samsung has the opportunity to prove itself as a significant, reliable global infrastructure player by leveraging its economic strength, technological prowess, and international presence.
The additional cost of supporting all options is fully justified if it means more contract wins, market leadership, and more significant influence in the market. That’s why I think, instead of being an arbiter, offering choice is a brilliant move by the company.
Looking ahead – 5G SA Core network, RIC, 5G Advanced, 6G
Samsung Networks is known for its RAN and is a market leader in vRAN/open RAN. However, it also has a Core Network business that has been steadily growing and is now ready to change gears. The timing is impeccable, as the industry is slowly transitioning to the new cloud-native architecture and, more importantly, to standalone (SA) mode, creating opportunities for new vendor introductions.
Samsung already supplies its vCore to SK Telecom, KT, and LGU+ in South Korea. It recently went live with the nationwide commercial 5G SA Core Network for KDDI and deployed the virtual roaming gateway for TELUS.
With open RAN slowly becoming mainstream for many operators, the focus is now moving toward automation and other advanced capabilities this architecture offers. Many of those are realized through RAN Intelligent Controller (RIC) and rApps. At MWC, Samsung highlighted its Non-Realtime RIC platform, its own rApps, as well as those of Viavi, Capgemini, ZinkWorks and others. RIC and rApps will soon be the new battleground for infrastructure players.
5G Advanced is the next phase of 5G, with many exciting features, including AI. At MWC, Samsung discussed its chipsets for AI-based basebands and radios, as well as using AI for enhanced beamforming and uplink performance. It also showed mock-ups of advanced radios with next-gen Massive MIMO supporting up to 256 TRX and 3,072 antenna elements. The company indicated that these radios would support the 6 GHz and 13 GHz bands, gearing up for possible 5G Advanced and 6G deployments. Although technology discussions are starting now, there is still quite a bit of time before 6G arrives.
In summary, MWC proved to be an excellent time for Samsung to showcase its progress and solidify its position as a top 5G infrastructure player.
Prakash Sangam is the founder and principal at Tantra Analyst, a leading boutique research and advisory firm. He is a recognized expert in 5G, Wi-Fi, AI, Cloud and IoT. To read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
5G Integrated Access Backhaul (IAB)
5G is the hottest trend now, so much so that even the Covid-19 pandemic, which has badly ravaged the global economy, could not stop its meteoric rise. Apple’s announcement of 5G support across its portfolio cemented 5G’s market success. With 5G device shipments expected to grow substantially in 2021, the industry’s focus is naturally on expanding coverage and delivering on the promise of gigabit speeds and extreme capacity.
However, that is easier said than done, especially for the new mmWave bands, which have a smaller coverage footprint. Leading 5G operators such as Verizon and AT&T have gotten a bad rap because of their limited 5G coverage. One technology option is integrated access backhaul (IAB) with self-interference cancellation (SLIC), which enables operators to deploy hyper-dense networks and quickly expand coverage.
mmWave Bands And Network Densification
Undeniably, making mmWave bands viable for mobile communication is one of the biggest innovations of 5G. It has opened a wide swath of spectrum, almost a tenfold increase, for 5G. However, because of their RF characteristics, mmWave bands have a much smaller coverage footprint. According to some studies, mmWave might need seven times the sites or more to provide the same coverage as traditional Sub-6 GHz bands. So, to make the best use of mmWave bands, hyper-dense deployments are needed. Operators are trying to use lampposts and utility poles for deployment to achieve such density.
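A back-of-the-envelope calculation shows where numbers like that come from. The Python sketch below compares free-space path loss at 28 GHz and 3.5 GHz; it deliberately ignores the beamforming and antenna gains that recover much of the deficit in real networks, so the idealized site ratio it prints is higher than the roughly sevenfold figure the studies cite.

# Free-space path loss (FSPL) comparison, ignoring the antenna and
# beamforming gains that narrow the gap in practice.
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
from math import log10

def fspl_db(distance_km, freq_mhz):
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

delta_db = fspl_db(1, 28000) - fspl_db(1, 3500)  # extra loss at 28 GHz
range_ratio = 10 ** (delta_db / 20)              # same loss -> shorter range

print(f"extra path loss at 28 GHz: {delta_db:.1f} dB")        # ~18 dB
print(f"cell range shrinks by:     {range_ratio:.1f}x")       # ~8x
print(f"sites per unit area:       {range_ratio ** 2:.0f}x")  # ~64x in free space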
The biggest challenge for hyper-dense deployment is providing rapid and cost-effective backhaul. Backhaul is a significant portion of the CAPEX and OPEX of any site. With the large number of sites needed for mmWave, bringing fiber to each of them is an even harder, more time-consuming, and overly expensive process. A good solution is to incorporate IABs, which use wireless links for backhaul instead of fiber runs. IABs, an advanced version of the relays used in 4G, are being introduced in 3GPP Rel. 16 of 5G.
In typical deployments, there would be one fiber-backhauled site, called a donor, say at an intersection, and a series of IABs installed on lampposts along the connecting roads in a cascade configuration. IABs can act as donors to other IABs as well, providing redundancy. They can also connect directly to devices, which is beneficial both now and in the future.
Drawbacks Of Traditional Relays And IABs
While IABs seem like an ideal solution, they do have challenges. The biggest one is their lower efficiency. I’ve observed that it can be as low as 60% during high-traffic load scenarios. This means you will need almost double the IABs to provide the same capacity as regular mmWave sites.
IABs can be deployed in two configurations based on how the spectrum is used for both of its sides (access and backhaul): using the same spectrum on both sides, or using a different spectrum for each side.
Using the same spectrum on both sides creates significant interference between the two sides (known as self-interference) and reduces efficiency. Using different spectrum for each side requires double the amount of spectrum, which also drastically reduces efficiency. Operators are always spectrum-constrained; hence, in most cases, they cannot afford this configuration. Moreover, it creates mobility issues and leads to other complexities, such as frequency planning that needs to be maintained and managed on an ongoing basis.
So, in my opinion, the best approach is to use the same spectrum for both sides and try to eliminate or minimize the self-interference.
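Some simple, first-order arithmetic (my own simplification, ignoring overheads and residual interference) shows why: splitting the spectrum and time-sharing it both land at roughly half the efficiency, while same-spectrum operation with ideal self-interference cancellation keeps it whole.

# First-order efficiency arithmetic for the IAB configurations above.
# B is the spectrum one link needs at full capacity; overheads ignored.
B = 100  # MHz, assumed

configs = {
    "Split spectrum (access != backhaul)": {"spectrum_mhz": 2 * B, "duty": 1.0},
    "Same spectrum, time-shared":          {"spectrum_mhz": B,     "duty": 0.5},
    "Same spectrum, ideal cancellation":   {"spectrum_mhz": B,     "duty": 1.0},
}

for name, c in configs.items():
    capacity = c["duty"] * B                   # capacity delivered per side
    efficiency = capacity / c["spectrum_mhz"]  # per MHz of spectrum consumed
    print(f"{name:37s} spectrum={c['spectrum_mhz']:3d} MHz, efficiency={efficiency:.2f}")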
SLIC Maximizes IAB Efficiency
SLIC is a technique to cancel the interference caused when both links use the same spectrum. It involves generating a signal that is the direct opposite of the undesired signal, i.e., the interference, and using it to cancel that interference out. For example, for the access link, the signal from the backhaul link is the undesired signal, and vice versa. This technique has been known in theory for a long time, but thanks to recent technological advances, it is now possible to implement it in actual products. In fact, there are already products in the market that implement SLIC for 4G networks.
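Here is a toy NumPy sketch of the digital part of such cancellation, under big simplifying assumptions (a single-tap leakage channel, perfect synchronization, no analog stage): the node knows its own transmit signal, estimates how strongly it leaks into its receiver, and subtracts a replica.

# Toy self-interference cancellation demo. Assumptions: single-tap
# leakage channel, perfect timing, digital-only cancellation.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

self_tx = rng.standard_normal(n)         # node's own transmit signal (known)
desired = 0.01 * rng.standard_normal(n)  # weak signal from the far end
h_leak = 3.0                             # unknown leakage gain into own receiver

rx = h_leak * self_tx + desired          # receiver is swamped by self-interference

# Estimate the leakage channel from the known transmit signal, then
# subtract the reconstructed interference replica.
h_est = np.dot(rx, self_tx) / np.dot(self_tx, self_tx)
clean = rx - h_est * self_tx

def power_db(x):
    return 10 * np.log10(np.mean(x ** 2))

print(f"signal-to-interference before: {power_db(desired) - power_db(rx - desired):.1f} dB")
print(f"signal-to-interference after:  {power_db(desired) - power_db(clean - desired):.1f} dB")

Running this, the desired signal starts tens of dB below the self-interference and ends up well above the residual, which is the essence of what a SLIC stage has to achieve, albeit across analog and digital domains in real hardware.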
For 5G IABs, I’ve observed that SLIC can increase the IAB efficiency to as high as 100%, meaning IABs provide the same capacity as regular mmWave sites. 5G IABs with SLIC have been developed, and leading operators such as Verizon and AT&T have already completed their testing and trials and are gearing up for large-scale commercial deployments in 2021 and beyond.
In Closing
Unlike 4G relays, which were primarily used for coverage extension or rapid, short-term deployments (for example, to connect temporary healthcare facilities built to accommodate the rapid surge in Covid-19 hospitalizations), IABs with SLIC should be considered an integral part of operators’ network designs. In addition, operators have to decide on an optimal mix of IAB and donor sites that provides adequate capacity while minimizing the overall deployment cost.
Mobilizing mmWave bands was one of the major achievements of 5G. However, their smaller coverage footprint could be a challenge, requiring hyper-dense deployments. The biggest hurdle for such deployments is quick and cost-effective backhaul, which solutions such as IABs address. Further, SLIC techniques maximize the efficiency of those IABs.
5G has seen unprecedented traction; many flagship devices are already in the market, and many more are on the way, including the much-rumored and anticipated 5G iPhone. After the excitement of limited initial launches, as operators start large-scale deployments, the basic question they face is whether to focus on coverage or capacity. Well, the right answer is both, but that is easier said than done, especially for operators such as Verizon and AT&T that have limited low- and mid-band (aka Sub-6 GHz) spectrum.
In a series of articles, I will discuss this dilemma and explore the solutions the industry is working on to address it effectively, especially the ones, such as Integrated Access Backhaul (IAB), that have shown early promise, and the many innovations that not only enable such solutions but also make them efficient. This is the first article in the series.
When launching a 5G network, the easiest approach is to utilize sub-6 GHz bands, if you have access to them, and provide a basic coverage layer. That is exactly what Sprint (now part of T-Mobile) in the US and many operators outside the US did. However, the amount of bandwidth available in the sub-6 GHz spectrum is limited, so the capacity of those networks will quickly be used up, especially if the growth of 5G continues as predicted. There is every indication that it will; for example, contrary to what many expected, 5G deployment in the US has not been affected by the Covid-19 pandemic. This means those operators will soon have to move to the bandwidth-rich high-band spectrum, i.e., millimeter wave (mmWave) bands. These bands have more than ten times the available spectrum of sub-6 GHz and are critical to delivering on the promise of 5G: multi-gigabit user speeds and the extreme capacity to offer new services, be it fixed wireless access to homes and offices, massive IoT, mission-critical services, or new user experiences on a massive scale.
Operators such as Verizon and AT&T, which did not have access to enough Sub-6 GHz spectrum, leapfrogged and took the bold step of launching 5G with mmWave spectrum. This spectrum is far different in many aspects from the others the mobile industry has used so far.
<<Side note: If you would like to know more about mmWave bands, check out my article – Is mmWave just another band for 5G?>>
The biggest differences between Sub-6 GHz and mmWave bands are coverage and indoor penetration. Because of their RF properties, mmWave bands have a smaller coverage footprint and do not penetrate solid objects such as walls. Although this was long known to experts, it came almost as a shock to uninformed general industry observers. Operators, especially Verizon, got a lot of flak from the media on this. Some even doubted the feasibility of mmWave bands. Thanks to extensive field tests, any lingering doubts have now been duly resolved. In fact, almost all global regions are now working toward allocating mmWave spectrum for 5G.
By virtue of its smaller footprint, mmWave will need more sites than Sub-6 GHz to provide similar coverage. For example, simulations run by Kumu Networks estimate that the 26 GHz spectrum will need seven to eight times more sites than the 3.5 GHz spectrum, as shown in the figure below:
The ideal 5G deployment strategy for operators is to utilize sub-6 GHz to provide expansive city- and country-wide coverage, and to utilize dense deployments of mmWave, as shown in the figure, in high-traffic dense urban and urban areas, and even in pockets of suburban areas, to provide extreme capacity. Because of the density and the large amount of spectrum available, mmWave clusters will provide magnitudes higher capacity than sub-6 GHz clusters. Additionally, such dense deployments are much easier with mmWave because of its smaller coverage footprint.
Many operators are working with city governments and utilities to deploy mmWave sites on lampposts, which should provide good densification. Studies have shown that such deployments could provide excellent results, supporting a large number of subscribers with a huge amount of capacity and resulting in an excellent user experience. The FCC, being proactive, has been working to streamline regulations for the deployment of such outdoor sites.
Clearly, lampposts, and in some cases rooftops, are the ideal spots for mmWave installations because they readily have access to power, which is one of the two key requirements for a new site. However, the other requirement, backhaul, is a far different story. Since these are high-capacity sites, they need fiber or another high-bandwidth means of backhaul. The first issue is that there may not be fiber drops near all the lampposts. Even if there are, bringing fiber to each post is not only extremely time-consuming and very expensive but also hard to manage and maintain on an ongoing basis. This means the industry has to look for alternate solutions that are cost-effective and easy to install while offering bandwidth and latency similar to fiber.
Realizing this, the industry body 3GPP has been working on an interesting solution called Integrated Access Backhaul (IAB). IABs are being standardized in Rel. 16 and further enhanced in Rel. 17. Rel. 16 is expected to be finalized in July of this year, followed by Rel. 17 in 2021.
<<Side note: If you would like to know more about 3GPP standardization and Rel 17, please check this article series – The Chronicles of 3GPP Rel. 17.>>
IABs use wireless links for both backhaul and access (i.e., regular user traffic). Evidently, they will need a large amount of licensed spectrum to offer fiber-like backhaul performance. But that raises a lot of questions, such as: Don’t IABs decrease the available spectrum for access? How would that affect network capacity? Can you still deliver on the grand promises of 5G? And many more.
All those are valid questions and concerns. What if I say that there are ways to make and deploy IABs without compromising on the available spectrum? More like having the cake and eating it too, yes, that is possible! How, you ask? Well, you will have to wait for my next article to find out!
Also, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
One of the exciting features the recently finalized 3GPP Rel. 16 brings to 5G is the support for Integrated Access Backhaul (IAB). IABs have the potential to be a game-changer, especially for millimeter Wave (mmWave) deployments by solving the key challenge of backhaul. However, the traditional design of IABs offers low efficiency. In this article, I will take a deep dive into IABs, their deployment configurations, and most importantly, the techniques needed to improve their efficiency.
Side Note: If you would like to learn more about 3GPP Rel. 16, check out this article “3GPP Rel. 16–Stage set for the next phase of 5G, but who is leading?”
What are IABs and how do they work?
IABs are cell sites that use wireless connectivity for both user traffic (access) and backhaul. IABs’ predecessors, relays, have been around since the 4G days; IABs are essentially improved and rechristened relays. If you have heard of the Sprint “Magic Box,” then you have already heard about relays and, to some extent, IABs as well.
So far, relays were used primarily to extend coverage in places where it was challenging or uneconomical to deploy traditional base stations with fiber or ethernet backhauls. They were also useful when connectivity needs were immediate and temporary. A great use case was the recent COVID-19 crisis when temporary healthcare facilities with full connectivity had to be built very quickly. There are many such applications, for example, indoor deployments in retail stores, shopping malls, etc., where operators do not have access to fiber.
However, with expanded capabilities, IABs have a much bigger role to play in 5G, especially for mmWave deployments, which have gotten a bad rap for their smaller coverage footprint. IABs allow operators to rapidly deploy mmWave sites and expand coverage by solving the nagging backhaul issue.
IABs are deployed just like any other mmWave sites, of course without requiring pesky fiber runs. As shown in the figure below, IABs connect to donor sites the same way smartphones or any other devices do. The main donor sites will need high-capacity fiber backhaul. One or more IABs can connect to a single donor site. There can also be multi-hop deployments, meaning IABs can act as donors to other IABs. Each IAB can connect to multiple donor sites or IABs, providing redundancy. This configuration also lends itself very well to a mesh architecture in the future. IABs are transparent to devices, meaning devices connect to IABs just as they would to any regular base station.
IABs are ideal for mmWave deployments
As I explained in my previous article, mmWave 5G deployments need a dense cluster of sites to provide good outdoor coverage. Since bringing backhaul to all these sites is cumbersome and expensive, using IABs for such deployments is ideal. For example, in city centers, there could be a handful of donor sites with fiber backhaul connecting to clusters of IABs around them. With such an approach, operators can provide much broader coverage with far fewer fiber runs, in a very short time. The savings and ease of installation are quite obvious.
It should be noted that unlike regular sites, IABs do not add new capacity. They instead share the capacity of the donor site much more efficiently across a much larger coverage area. Since the mmWave band has lots of spectrum, capacity may not be a limitation. Ultimately, the level of data traffic and the amount of spectrum operators have access to will decide the appropriate mix of donor sites and IABs.
One of the issues with IABs is interference. Since donors and IABs use the same spectrum, they might interfere with each other. But thanks to the smaller coverage footprint of mmWave bands, the interference is relatively minimal, compared to traditional bands. Another big advantage of mmWave bands is the support for beamforming and beamsteering techniques. These techniques allow the signal (beam) between all the nodes to be very narrow and highly directional, which further reduces interference.
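To get a feel for how much those narrow beams help, here is a quick back-of-the-envelope calculation in Python. It uses the standard textbook approximation for the half-power beamwidth of a uniform linear antenna array; the element counts are purely illustrative and not tied to any specific product.

```python
import math

# Approximate half-power beamwidth (HPBW) of a uniform linear array at
# broadside: HPBW ~ 0.886 * lambda / (N * d) radians, where the element
# spacing d is expressed in wavelengths. Illustrative numbers only.
def hpbw_degrees(n_elements: int, spacing_wavelengths: float = 0.5) -> float:
    return math.degrees(0.886 / (n_elements * spacing_wavelengths))

for n in (4, 16, 64):
    print(f"{n:2d} elements -> beam roughly {hpbw_degrees(n):.1f} degrees wide")
```

The larger the array, which is practical at mmWave thanks to the tiny antennas, the narrower the beam, and the less energy spills toward neighboring nodes.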
Performance challenges of IABs
The biggest challenge of IABs is their lower efficiency. Since they use wireless links on both sides (toward the donor and toward the user), they have to either use separate spectrum for each side or time-share between the sides. In both cases, efficiency is reduced: the first case uses twice the spectrum, and the latter allows only one side to be active at any time. Let me explain the reasons.
If the same spectrum is used for both sides, there will be huge self-interference: the transmitter on one side feeds into the receiver on the other, raising the interference so high that the signal from actual users is drowned out and can't be heard. So, the spectrum for the two sides must be different. Since operators are often short on spectrum, they cannot afford this configuration. Even if they could, there are many complexities, such as the need for frequency planning, the inability to support mobile IABs, complications in handover between the two frequencies, and many more.
Hence, almost every deployment utilizes an alternate approach called Half-Duplex, in which the two sides are turned ON alternately. The IAB ON/OFF timing has to be synchronized across the network to avoid interference. The situation is even more complicated in multi-hop deployments.
The best way to understand the performance of IABs is to simulate a typical system and analyze various scenarios. Kumu Networks, a leader in relay technology, did exactly that. Here is a quick overview of what they found.
They simulated a typical city intersection, as shown in the figure here. They put a fiber-fed donor at a city intersection and a cluster of IABs along the streets, some connected directly, others in multi-hops. The aggregate throughput is calculated for the entire system with one, two, and multiple hops.
This chart shows the performance of the system, plotting the aggregate throughput of all users vs. the number of hops. The red line represents the traditional Half-Duplex configuration we just discussed. With this configuration, the throughput goes down significantly as the number of hops in the system increases. This is because the more hops there are, the smaller the time slice each IAB gets, and the lower the throughput.
You also see a blue line on the chart. This represents the Full-Duplex configuration, for which the throughput slightly increases and stabilizes even when more hops are added. Obviously, Full-Duplex is the most desired configuration.
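To build intuition for why the red line falls, consider a toy model (my own simplification, not Kumu's simulation) in which every wireless link supports the same normalized rate when active. In half-duplex, the hops along a path must take turns; in full-duplex, they all run at the same time.

```python
# Toy model of end-to-end throughput over an IAB chain.
C = 1.0  # normalized capacity of each wireless link when it is active

def half_duplex_throughput(hops: int) -> float:
    # A node can only listen or talk at any instant, so the hops on the
    # path must take turns; the end-to-end rate collapses to C / hops.
    return C / hops

def full_duplex_throughput(hops: int) -> float:
    # Both sides of every IAB are active simultaneously on the same
    # spectrum, so the chain sustains (roughly) the full link rate.
    return C

for h in range(1, 5):
    print(f"{h} hop(s): half-duplex {half_duplex_throughput(h):.2f}, "
          f"full-duplex {full_duplex_throughput(h):.2f}")
```

The real simulation accounts for scheduling overhead and varying link quality, but the basic shape of the two curves follows directly from this time-sharing arithmetic.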
Now, what is Full-Duplex? As the name suggests, it keeps both sides of the IAB switched ON all the time, while using the same spectrum. With this configuration, there is no need for additional spectrum and no time-sharing, and hence no loss of efficiency. But didn't we just discuss why this is not possible because of self-interference?
Well, what if I say that there are techniques to effectively cancel that self-interference? I know you are intrigued by this and want to know more. But for that, you will have to wait for my next article. So, be on the lookout!
Many 5G operators are quickly realizing that Integrated Access Backhauls (IABs) are an ideal solution to expand 5G coverage. This is even more important for operators such as Verizon and AT&T, who are primarily utilizing millimeter Wave (mmWave) bands for 5G. As I explained in my earlier articles, traditional techniques only allow half-duplex IAB operation, which severely limits their usability. The SeLf Interference Cancellation (SLIC) technique enables full-duplex IAB operation and offers full capacity and efficiency. In essence, not just IABs, but IABs with SLIC are the most efficient and hassle-free way to expand 5G mmWave coverage.
Side note: If you would like to learn more about IABs, and how to deploy hyperdense mmWave networks, please check out the other articles in the IABs article series.
What is self-interference, and why is it a challenge?
The traditional configuration for deploying IABs is half-duplex, where the donor and access (user) links timeshare the same spectrum, thus significantly reducing the efficiency. The full-duplex mode, where both the links are ON at the same time, is not possible as the links interfere with each other—the transmitter of one link feeding into the receiver of the other. This “self-interference” makes both the links unusable and the IAB dysfunctional.
So, let's look at how to address this self-interference. As shown in the figure, an IAB has two sets of antennas, one for the donor link and another for the access link. The best option to reduce self-interference is to isolate the two antennas/links. Based on years of work on the cousins of IABs, repeaters and relays, we know that for the full-duplex mode to work, this isolation needs to be 110-120 dB.
Locating the donor and access antennas far apart from each other, or separating them with a solid obstruction, could yield significant isolation. However, since we would like to keep the IAB unit small and compact, with integrated antennas, there is a limit to how much isolation you can achieve this way.
The mmWave bands have many advantages over sub-6 GHz bands in achieving such isolation. Their antennas are small, so isolating them is relatively easy. Since they also have a smaller coverage footprint, the interference they spew into the other link is relatively small. That is why I think IABs are ideal for mmWave bands. If you would like to know more about this, check out the earlier articles.
Lab and field testing done by Kumu Networks, a leading player in this space, indicates that for mmWave IABs, the isolation achievable through intelligent antenna separation is as high as 70 dB. That means the remaining 40-50 dB has to come from some other means. That is where SLIC comes into play.
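The resulting isolation budget is simple arithmetic, but worth writing down explicitly. Here is a rough sketch using the figures quoted above (illustrative, not a design specification):

```python
# Back-of-the-envelope isolation budget for a full-duplex mmWave IAB,
# using the figures from the text above.
required_isolation_db = 110   # lower end of the 110-120 dB target
antenna_isolation_db = 70     # achievable via intelligent antenna separation

slic_gap_db = required_isolation_db - antenna_isolation_db
print(f"Digital SLIC must contribute ~{slic_gap_db} dB (up to ~50 dB "
      f"for the 120 dB target)")
```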
How does SLIC work?
To explain interference cancellation in simple words: you create a signal that is directly opposite to the interfering signal and inject it into the receiver. This opposite signal negates the interference, leaving behind only the desired signal.
Interference cancellation can be implemented either in the analog domain or the digital domain, each in a different section of the IAB. Analog SLIC is typically done at the RF Front End (RFFE) subsystem, while digital SLIC is implemented in or around the modem subsystem.
Side note: If you would like to know more technical details on self-interference cancellation, please check this YouTube video.
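To make the idea concrete, here is a toy digital-domain illustration in Python (my own sketch, not Kumu's implementation). The key insight is that the IAB knows exactly what it transmitted, so it can estimate the leakage path and subtract a replica of the interference from what its receiver hears. All signal levels below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
desired = 0.01 * rng.standard_normal(n)   # weak signal from the actual user
tx = rng.standard_normal(n)               # the IAB's own (known) transmission
leak = 0.3 * tx                           # self-interference leakage path
rx = desired + leak                       # what the receiver actually hears

# Estimate the leakage coefficient from the known transmit samples
# (a simple least-squares fit), then inject the "opposite" signal.
h_est = np.dot(rx, tx) / np.dot(tx, tx)
cleaned = rx - h_est * tx

def power_db(x):
    return 10 * np.log10(np.mean(x ** 2))

residual = cleaned - desired  # leftover interference after cancellation
print(f"Interference suppressed by ~{power_db(leak) - power_db(residual):.0f} dB")
```

Even this naive estimate recovers tens of dB of suppression in simulation. Production SLIC has to deal with nonlinearities, multipath reflections of the transmit signal, and hardware impairments that this sketch ignores.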
Again, when it comes to mmWave IABs, because of their RF characteristics, almost all of the needed additional 40-50 dB of isolation can be achieved only through digital SLIC. Here are the frequency response charts of a commercial-grade mmWave digital SLIC IP block developed by Kumu Networks. This response is for a 28 GHz, 400 MHz mmWave system, and as evident, it can reduce the interference, i.e., increase the isolation, by 40-50 dB.
SLIC enables full-duplex IABs
Here is a chart that further illustrates the importance of SLIC in enabling full-duplex operation of IABs.
It plots the IAB efficiency against the amount of isolation. The efficiency here is measured as the total IAB throughput compared to the throughput of a regular site with a fiber backhaul. As can be seen, an IAB in full-duplex mode is more efficient than half-duplex if the isolation is 90 dB or more. And with 120 dB of isolation, an IAB can provide the same amount of capacity as a regular mmWave site. It is pretty clear that SLIC is a must to make IABs really useful for 5G.
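A simple Shannon-capacity model reproduces the shape of this chart. The numbers below are my own illustrative assumptions (a 30 dB link SNR and self-interference sitting 100 dB above the noise floor before any isolation), not the parameters behind the actual chart:

```python
import math

SNR_DB = 30.0             # assumed link SNR without self-interference
TX_OVER_NOISE_DB = 100.0  # assumed self-interference level before isolation

def capacity(sinr_db: float) -> float:
    return math.log2(1 + 10 ** (sinr_db / 10))

def full_duplex_efficiency(isolation_db: float) -> float:
    residual_db = TX_OVER_NOISE_DB - isolation_db   # leftover SI vs. noise
    noise_plus_si = 1 + 10 ** (residual_db / 10)    # noise normalized to 1
    sinr_db = SNR_DB - 10 * math.log10(noise_plus_si)
    return capacity(sinr_db) / capacity(SNR_DB)

# Half-duplex: each side active half the time, but no self-interference.
for iso in (70, 90, 110, 120):
    print(f"{iso} dB isolation: full-duplex efficiency "
          f"{full_duplex_efficiency(iso):.2f} vs half-duplex 0.50")
```

With these assumed numbers, full-duplex overtakes half-duplex somewhere below 90 dB of isolation and approaches the efficiency of a fiber-fed site near 120 dB, matching the qualitative story of the chart.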
When will IABs with SLIC be available?
Well, there are two parts to that question. Let's look at the second part first. SLIC is not a new concept. In fact, it is available in products shipping today. For example, Kumu Networks' LTE relays that support SLIC are already deployed by many operators. Kumu has also developed the core IP for 5G mmWave digital SLIC, which is currently being evaluated by many of its customers. As mentioned before, the frequency chart showing the interference cancellation is from that same IP block.
Now, regarding the first part: 3GPP Rel. 16, which introduced IABs, was finalized only a few months ago, in June 2020. It usually takes 9-12 months for a new standard to be supported in commercial products. Verizon and AT&T are already testing IABs and have publicly disclosed that they will start deploying them in their networks in 2021.
Final thoughts
In this series of articles, we took a very close look at 5G IABs, especially for mmWave deployments. The first article examined why hyper-densification of mmWave sites is a must for 5G operators, the second explained how IABs address the main challenge of cost-effective backhaul, and this article illustrated why SLIC is essential for highly efficient, full-duplex operation of IABs.
5G mmWave IABs are a powerful combination of a well-understood concept, proven technology, and an ideal spectrum band. No wonder the industry is really excited about their introduction. The finalization of 3GPP Rel. 16 has set IAB commercialization in motion, and operators can't wait to deploy them in their networks.
IoT Device Security
As the awareness of the transformative nature of 5G is increasing, the industry is slowly waking up to the enormous challenge of securing not only the networks, but also all the things these networks connect and the vital data they carry. When it comes to the Internet of things (IoT), the challenges of security couldn’t be bigger, and the stakes involved couldn’t be higher. The spread of IoT in homes, enterprises, industries, governments, and other places is making wireless networks the backbone of the country’s critical infrastructure. Safeguarding it against potential threats is a basic national security need.
With 5G set to usher in industry 4.0—the next industrial revolution, governments across the globe are understandably taking a keen interest in how 5G is deployed in their countries. There has naturally been a lot of emphasis on its security aspects. The current focus has primarily been on the network infrastructure side. Many countries, such as the USA, Australia, and New Zealand, have put restrictions on buying equipment from certain network infrastructure vendors such as Huawei and ZTE. As stated by these governments, their concerns are regarding the lack of clarity about the ownership and control of these vendors. While these concerns are valid, focusing only on the infrastructure side is not sufficient. It might even be more dangerous because it might give a false sense of security.
Infrastructure-focused security is insufficient
Network infrastructure is only one part of the story. Telecommunications is often described as a "two-to-tango" affair, as it needs both infrastructure and devices to make the magic happen. So, to have foolproof security, one needs to cover both ends of the wireless link, especially for IoT. Securing only the network side would be akin to fortifying the front door while keeping the back door ajar. Let me illustrate this with a real-life scenario. Consider something as benign as traffic lights, which at the very outset don't seem to need strong security. But what if somebody hacked into and turned off all the traffic lights in a major metropolitan area? That would surely bring the city to a screeching halt, resulting in major disruption and even loss of life. The impact could be even worse if power meters were hacked. And it would be an outright catastrophe if critical systems, such as the national power grid, were attacked, bringing the whole country to its knees.
When it comes to IoT devices, conventional wisdom is to secure only the most expensive and sophisticated pieces of equipment. However, often, simple devices such as utility meters are more vulnerable to attacks because they lack strong hardware and software capabilities to employ powerful security mechanisms. And they can cause huge disruptions.
IoT device security is a must
IoT devices are the weakest link in providing comprehensive system-wide security, more so because IoT's supply chain and security considerations are far different and much more nuanced than those of smartphones. The development and commercialization of smartphones are always under the purview of a handful of large, reputed organizations such as device OEMs, OS providers, and chipset providers. The IoT device ecosystem, in contrast, is highly fragmented, with a large number of relatively unknown players. Usually, large players such as Qualcomm and Intel provide cellular IoT chipsets. A different set of companies uses those chipsets to make integrated IoT modules. Finally, a third set of companies uses those modules to create IoT end-user devices. Each of these players adds their own hardware and software components to the device during different stages of development. Because of this, IoT devices are far more vulnerable than smartphones.
Address IoT device security during procurement
It is evident that IoT users have to be extremely vigilant regarding the security and integrity of the entire supply chain. This includes close scrutiny of the origin of the modules and devices, as well as a detailed evaluation of the reputation, business processes and practices, long-term viability, and reliability of the module and device vendors. Because of the high stakes involved, there is also a possibility of malicious third parties infiltrating the supply chain and compromising devices without the knowledge of the vendors. Case in point: the much-publicized Bloomberg Businessweek report about allegedly tampered motherboards vividly exposed the possibility of such a vulnerability. Although the allegations in that case have been neither fully corroborated nor debunked, the episode shows that such vulnerabilities are entirely plausible.
It is abundantly clear that the more precautions IoT users take during the procurement and deployment phases, the better it is. Because of the sheer volume, and the long life of IoT devices, it is virtually impossible to quickly rectify or replace them after the security vulnerabilities or infiltrations are identified.
The time to secure IoT devices is now!
Looking beyond the current focus on 5G smartphones, 5G Massive IoT will be upon us in no time. Building upon the solid foundation of LTE IoT, Massive IoT, as the name suggests, will connect anything that can and needs to be connected. This will span homes, enterprises, industries, and critical city, state, and national infrastructure, including transportation, smart grids, emergency services, and more. Further, with the introduction of Mission Critical Services, the reach of 5G is going to be even broader and deeper. All this means the security challenges and stakes are only going to get bigger and more significant.
So, it is imperative for the cellular industry, and all of its stakeholders to get out of the infrastructure-centric mentality and focus on comprehensive, end-to-end security. Every IoT device needs to be secured, no matter how small, simple, or insignificant it seems, because the system is only as secure as its weakest link. The time to address device security is right now, while the networks are being built, and the number of devices is relatively small and manageable.
Nowadays, security and privacy are on everybody's mind. Hardly a day goes by without news of security breaches at major institutions. Most of the time, the reporting is focused on the cloud or network infrastructure, hardly ever on devices. However, when it comes to cellular IoT, devices are the most vulnerable, as I explained in my previous article. IoT devices, being very simple, are usually much easier to hack into, and can compromise the whole system.
The IoT device ecosystem is unique and far different from that of smartphones in many aspects. Because of that, the security challenges are also different, and many of them relate to a unit called the IoT module, which is at the heart of any IoT device. To really understand the scope and impact of these challenges, it is important to look closely at the market landscape of the entire cellular IoT ecosystem. It is even more relevant now, considering that today's 4G LTE cellular IoT will evolve into 5G Massive IoT.
Unique device ecosystem, much different from smartphones
The cellular IoT device ecosystem has far different considerations, especially from the security and privacy perspectives. The ecosystem includes modem chipset providers, many of whom are the same as those of smartphones, as well as a few smaller players. Cellular IoT also has a different category of vendors, called module providers. They take the barebones chipsets and add their own software and hardware to develop modules with standard interfaces and such. Device vendors develop IoT devices largely based on these modules. Modules absorb the connectivity and operator-certification complexity so that device vendors can concentrate on developing use case-specific devices. Essentially, modules are a key link in the value chain between chipset providers and IoT device vendors.
Chipset and device market landscape
In the device ecosystem, the chipset market is dominated by the same large and well-known smartphone modem vendors, such as Qualcomm, Intel, MediaTek, and Huawei (HiSilicon), along with IoT-focused players such as Sequans and Altair. They provide a full range of solutions with varying degrees of advanced features, including single-mode and multimode options for eMTC and NB-IoT, with support for 3G, 2G, GPS, onboard processing, and so on. Apart from advanced features, overall cost is a major consideration for the industry.
The cellular IoT device ecosystem is very large and diverse. The vendors are usually small and possess expertise in specific use cases. They don't necessarily have the skillset or scale to justify designing devices directly off the IoT chipsets. That's where module vendors come in. Traditionally, IoT vendors were mostly from the US and Europe. However, there has recently been a surge in vendors from China, many of whom are completely unknown outside the country. Many of them have taken cues from, and duplicated, device and module designs from traditional vendors. The proliferation of Chinese vendors is primarily due to the Chinese government's concerted effort and heavy investment in IoT in the country. The government's well-funded, large IoT projects, coupled with considerable subsidies provided by operators such as China Mobile and China Telecom, have created an ideal environment for these companies to flourish. The recently awarded 5G contracts are a great example of how the Chinese government and operators support Chinese vendors. These companies, emboldened by their success in China, are now pursuing global opportunities. Since they are leveraging the investments and subsidies availed in China, they can be extremely price-competitive in global markets.
IoT module market landscape
IoT modules are the “bridge of trust” between the well-known chipset vendors and the unknown device vendors. Module vendors also work with the regulators and cellular operators for certification, which addresses a significant hurdle for device vendors. The certification ensures smooth and rapid deployment of these devices in the field. As evident, the selection of module vendors is key to ensure device and system security.
The module vendor market comprises a mix of existing and emerging players. Some, such as Gemalto (Siemens M2M at the time), Sierra Wireless (plus its acquisitions of Sony Ericsson M2M and Wavecom), and Telit (plus its acquisition of Motorola M2M), have been around since the 2G days. Others, such as U-Blox, entered the market during 3G and the early part of 4G, leveraging their mobile expertise. Finally, there are the emerging module vendors from China, who, just like the IoT device vendors in the country, have grown at a fast pace with substantial government support and operator subsidies. There is a long list of such players. A few among them, such as Quectel, SIMCom, Longsung, Fibocom, and Neoway, are eyeing global markets. Many others may be watching how these initial players fare before stepping out themselves.
Ecosystem challenges
Anybody who has looked closely at the IoT market realizes that its biggest challenge is relatively low margins across the board, be it chipsets, modules, or devices. Considering that module vendors are relatively small compared to the chipset, infrastructure, cloud, or application vendors, they don't have a lot of leverage, resulting in an extreme margin squeeze. In such a situation, increasing market share becomes crucial, putting even more pressure on pricing. This is exactly where the government-funded projects and operator subsidies that Chinese vendors enjoy at home start to matter and alter the landscape. Because of that support, their pricing can be artificially low, reaching predatory levels.
Speaking to sources in the industry reveals that there is indeed a race to the bottom in module pricing. If it persists, there is a real danger of non-Chinese players becoming financially unviable. This is of grave concern, especially as we get ready to move to 5G. Supporting 5G will need huge upfront investments, and the payoff period could be very long. If these companies can't earn enough profit, they can't afford to invest in 5G and may, in the worst case, exit the market.
What do these challenges mean for the cellular IoT Industry?
If you feel like you have seen this movie before, you are not wrong! If you examine the turn of events in the cellular infrastructure market during the late 90s and early 2000s, the situation is almost identical. During that time, major American and European cellular infrastructure vendors failed to anticipate such a threat and were unable to compete with emerging Chinese rivals that were allegedly supported by their government. Many vendors, such as Motorola, Lucent, Siemens, Ericsson, and Nokia, with decades of experience and success, had to perish, merge, or downsize. Chinese upstarts such as Huawei and ZTE found a ripe market, quickly took market share, grew exponentially, and became dominant players.
Why is the comparison with the past relevant, and why is it a security concern? Well, IoT devices are the weakest link in the security of the overall system. The industry needs to be at least as concerned about IoT device vendors as it is about infrastructure vendors, if not more.
What happens if we don't heed the lessons of the past? What are the implications for the security and privacy of IoT networks? I will explore those questions in my next article. So, be on the lookout!
In my previous articles here, and here, I explained the rationale for increased focus on device security and its challenges. The threats are more acute, especially from unknown foreign vendors offering predatory pricing. After reading the articles, a few people questioned me about the ills of such a situation and even suggested that the fierce competition will keep the pricing low and vendors in check. In this article, I will explore whether such short-term thinking will help or hurt the industry in the long-term and examine some what-if scenarios. I will also draw parallels to some historical lessons, and finally, offer suggestions on how the IoT ecosystem could protect itself.
Learning from history
The best parallel to what is happening in the IoT vendor space is the plight of American and European cellular infrastructure vendors during the 3G transition, in the late 90s and early 2000s. I vividly remember it because I was in the midst of it, working for one such company. The world was slowly moving from 2G to 3G. The infrastructure behemoths, mostly American and European companies including Lucent, Motorola, Nortel, Nokia, Siemens, and Alcatel, were trying to get their customers to move to 3G quickly. However, they soon faced unprecedented headwinds from then-unknown Chinese companies named Huawei and ZTE, offering extremely low pricing. It was alleged that their low pricing was not only because of their lower costs but, more importantly, because of support from their government. American and European vendors, confident in their decades of heritage and experience, never took these players seriously. But alas, because of the dot-com bust and intense price pressure, many of those behemoths folded in no time. Others cobbled themselves together to survive, but as much smaller shadows of their former selves. Only two of them remain in business, and that largely because of the US market, where Chinese vendors are not allowed. From an ecosystem perspective, there are far fewer choices of vendors globally, and even fewer in the US.
So, what can we learn from this harrowing experience? Well, simply making decisions on cost alone might be very attractive in the short run, but might have negative long-term consequences. Once the landscape changes, it cannot be put back.
Perils of inaction now
If this practice of offering artificially low prices on IoT devices and modules, enabled by Chinese government subsidies, goes unchecked, none of the non-Chinese vendors will be able to sustain such low margins, and they will edge toward bankruptcy or exit the market. Very soon, there wouldn't be anybody of repute left.
In such a situation, the IoT needs of critical infrastructure such as the power grid, smart cities, national security installations, and others will have no option but to rely on unknown suppliers without any proven track record or reputation. The case would be similar for large enterprises, industrial complexes, and the like, where IoT devices are a basic staple. Confidence in the security of IoT devices should be unquestionable and not even up for debate. Consider 5G Massive IoT, which will build on the solid foundation of 4G IoT. Additionally, going forward, sharing of spectrum between defense and civilian cellular networks is going to be the norm. An early example of such an arrangement is CBRS, which allows sharing of spectrum between the US Navy and cellular operators. Any security breach in such deployments could expose critical military operations, including radar and satellite communication systems, to sabotage.
Generally, there are risks in relying on a group of suppliers all coming from the same region or country. What if trade wars flare up, resulting in high tariffs, or even worse, import/export bans, similar to the recent US ban on Huawei? In such a case, the whole critical infrastructure could come to a screeching halt. Such a vulnerability also hands the foreign country a huge advantage in any trade negotiations.
Many of the Chinese vendors are very small, without any public, reliable information on their background, ownership, business, objectives, or motives. What if they plan to conquer the market now with low pricing, and then raise prices exorbitantly once the competition has diminished? Even worse, what if they have ulterior motives? No matter how much these companies vouch for their authenticity and business objectives, unless they open themselves to close scrutiny, or better yet, list on reputed stock exchanges in the US or Europe, it is extremely hard to be convinced of their authenticity. If you consider the headwinds Huawei is facing, even with its significant brand recognition, the path for unknown IoT companies will be even harder, if not virtually impossible.
How to ensure device security
Historically, utilities and many critical national infrastructure providers have been very conservative in their vendor selection. They make their vendors go through an extreme, multi-level vetting process covering both technical and financial viability. They should continue this practice and add evaluation of overall ecosystem health, long-term impacts, and supplier diversity. Private enterprises should take the cue from them and be very careful in their vendor selection as well. The assessment should also include import bans, trade wars, and other such unlikely yet catastrophic scenarios.
IoT users should evaluate the lifetime cost of ownership of their IoT devices, instead of just the initial cost. IoT devices typically have a very long life, extending beyond ten years in some cases. Over such a long time, the cost of maintenance, timely upgrades, and quick fixes for security flaws can exceed the original procurement cost of the device. Additionally, these institutions should examine and understand the motivation behind predatory pricing and act with a long-term point of view.
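As a simple illustration of why the sticker price can mislead, here is a toy lifetime-cost comparison. All figures are hypothetical, chosen only to show how a decade of upkeep can swamp the initial saving:

```python
# Toy lifetime cost-of-ownership comparison (all figures hypothetical).
def lifetime_cost(unit_price, annual_upkeep, annual_security_cost, years=10):
    # upkeep = maintenance and upgrades; security modeled as a yearly line item
    return unit_price + years * (annual_upkeep + annual_security_cost)

cheap = lifetime_cost(unit_price=8, annual_upkeep=2, annual_security_cost=3)
reputable = lifetime_cost(unit_price=14, annual_upkeep=1.5, annual_security_cost=1)
print(f"cheap module: ${cheap}/device, reputable module: ${reputable}/device")
```

In this made-up example, the module that costs almost twice as much upfront ends up markedly cheaper over its deployed life.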
As a last resort, the government and regulators should look at putting safeguards in place for the procurement of critical infrastructure. The focus should not just be on the network but equally, if not more, on the devices. For example, the US government banned some vendors from supplying cellular network infrastructure. A case could be made for similar safeguards for devices used in critical applications as well.
The biggest step IoT users, be they government agencies or private enterprises, can take is to create an environment that nurtures diverse, strong, reputable, and reliable players who value security.
The Federal Communications Commission (FCC) will vote on Friday to virtually block Huawei’s access to the U.S. market, but this rare bipartisan action only protects one element of America’s digital infrastructure. In reality, the likeliest and most susceptible security vulnerabilities aren’t well understood by policymakers, and we’re at the beginning of a very long fight.
In the $2.4 trillion telecom sector, the dawn of 5G is more than a buzzword. It’s truly a new era full of great promise, as well as great danger. But our policymakers’ focus has only been on the big companies with name recognition, without attention paid to the less prominent ones that might pose much larger security risks.
Huawei and ZTE (another major Chinese manufacturer up for the FCC’s vote, but which doesn’t get the same publicity) are easy targets for the uninformed masses who fear all things China. Meanwhile, the national security threat from other Chinese-subsidized and foreign-controlled telecom companies is potentially more vast and insidious than our leaders in Washington, DC understand and acknowledge.
There's been no mention by politicians, in news media, or on social media of the security risks posed by devices or cellular modules, the mini-computers that make up the brains of the Internet of Things (IoT). There will be 43 billion of these in the world by 2023, and consequently they are the favored target for hackers. Unlike phones or chipsets, these modules are untraceable once embedded in devices. These elements are so critical in connected infrastructure that if a hostile state or player gains control with intent to attack the U.S., the scale of destruction is far more horrific to imagine than with a compromised smartphone or social media account.
Unauthorized access to your iPhone or Facebook enables spying. But access to an IoT device enables direct action in the real world. Shutting off power to Washington, DC. Turning off traffic lights in Manhattan. Pumping the brakes on autonomous cars in San Francisco. Stopping heat in winter to homes in Minnesota. Interfering with medical devices in Florida.
Forget the compromised security of smartphones. A compromised module – one of dozens that’ll be in every American home within the next few years – could mean literal life or death.
Five of the top ten IoT module manufacturers are Chinese, and they rake in 71 percent of the industry's revenue, using the same government backing and the Huawei playbook to stifle competition in the U.S. and Europe. China's heavy investment in IoT in the country, coupled with considerable government subsidies, allows Sunsea, Fibocom, and Quectel to be extremely price-competitive in global markets.
Industry insiders have been vocal in sharing stories of these companies slashing module prices below reasonable production costs. Driving out competition with a questionable pricing structure – and the consequent potential for future manipulation of affordability and availability – adds another layer to the concerns regarding 5G security.
It’s arguable that Chinese vendors Sunsea, Fibocom and Quectel are clones of Huawei, especially since they’ve effectively cornered the global market for the most critical components in the IoT. That’s why it’s important for politicians and security experts to glance up from their research on Huawei to better understand the implications of U.S. reliance on Chinese IoT manufacturers.
The U.S. government shouldn’t ban a company just for being China-based, nor target one just for being in the business of telecommunications or technology. Not every tech company in China is a stooge for the government with unreserved, evil intent. In fact, companies like Quectel and Fibocom thrive in good part due to legitimate innovation, amazing engineers and good quality.
Nonetheless, the FCC will vote on Friday on Huawei and ZTE. We must hope that this is just a first salvo in making 5G and the Internet of Things secure, with more investigation and possible action to come. If the Trump Administration truly wants to protect the American people from foreign interference via smart devices, the FCC and Congress need to be more strategic in looking at potential threats beyond the flashiest names.
The millions of IoT devices we use, knowingly or unknowingly, make our modern societies function. They include utility meters and traffic lights, and they even connect to the national grid. 5G is elevating their use to even higher levels and making them an integral part of the country's critical infrastructure.
But that also is making that infrastructure more vulnerable to security threats. Reps. Mike Gallagher and Raja Krishnamoorthi of the U.S. House Select Committee on China understand this threat and are rightly sounding alarm bells. It’s fascinating how these seemingly benign and almost invisible IoT devices can be such a grave threat.
IoT devices are an integral part of the national critical infrastructure
The U.S. IoT market is massive, estimated at $199B in 2024, according to Statista. IoT technology is found in almost any connected device for individual or industrial use. Since IoT devices manage and control the country's critical assets, including power, water, natural gas, and many industries (even more so with 5G IoT), they are part of the national critical infrastructure.
Imagine the havoc the sudden collapse of the national grid or large-scale disruption of utilities can create. Such catastrophes can bring the country to a screeching halt, threaten lives, and cause lasting damage.
Despite its critical role, IoT security hasn't gotten the attention it deserves from regulators and governments. It was considered a "business risk" to be managed by the industry. Fortunately, that is starting to change. The recent letters from the congressmen to the FCC, the Department of Defense, and the Treasury Department regarding cellular connectivity modules used in IoT devices indicate that lawmakers are now treating this as a national security issue.
Vulnerabilities of IoT devices
When it comes to cellular IoT devices, the biggest threat is the security of the connectivity module (aka IoT module) on which they are built. This module is the gatekeeper, which controls all the data going in and out of the device. If the module is compromised, the whole device, and in many cases all the systems it connects to, are compromised.
Connectivity modules could have many vulnerabilities. There could be backdoors built into the hardware or software when modules ship from the factory (enabling so-called "Zero Day" attacks), or vulnerabilities introduced during the numerous upgrades modules receive over their more than ten years of lifespan. These upgrades are similar to the ones our smartphones receive but are usually executed automatically.
Because of prohibitive costs, operators can’t examine and verify all the devices and their firmware updates. No matter who and how these vulnerabilities are created, they can be exploited by bad actors. If those bad actors are state-sponsored, the risk is even higher.
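As an illustration of the kind of check that often gets skipped, here is a minimal sketch of digest pinning for firmware updates: the device refuses any image whose hash is not on a vendor-published list. This is my own simplified example, not any vendor's actual mechanism; a real deployment would verify public-key signatures on the images rather than pin raw digests.

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for known-good firmware
# releases, obtained from the vendor over a separate, trusted channel.
TRUSTED_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_trusted(image: bytes) -> bool:
    # Reject any over-the-air update whose digest isn't pinned.
    return hashlib.sha256(image).hexdigest() in TRUSTED_DIGESTS

candidate = b"firmware-blob-received-over-the-air"  # placeholder payload
print("apply update" if is_trusted(candidate) else "reject update")
```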
As FBI Director Christopher Wray mentioned in his recent testimony, “Hackers are positioning on American infrastructure in preparation to wreak havoc and cause real-world harm to American citizens and communities.”
The attackers can stay dormant for a long time and attack at a time of their choosing. Hence, it wouldn’t be wrong to say that any device with such vulnerabilities can become a ticking national security timebomb.
IoT security: A tragedy of the commons
IoT is a largely low-margin, low-revenue (per subscription) business with a highly cost-competitive market. Most operators manage security as a business risk. They invest just enough to protect against fraud and liability. National security probably never makes it to their priority list.
Considering the complexity, cost, and potential risks involved, the responsibility of ensuring the security of IoT devices, from a national security perspective, rests squarely on the regulators and the government. The simple and highly reliable approach to achieve that seems to be establishing a fully trusted supply chain comprising local players and players from trusted national partners.
This is where things get complicated. According to Counterpoint Research, almost a quarter of the US cellular connectivity module market is controlled by one Chinese company, Quectel. More alarmingly, a large portion of the IoT modules used in FirstNet, the cellular network used by first responders, are also Chinese.
And that’s precisely why these congressmen are concerned and asking relevant US departments to intervene. As opined by many law experts, Chinese laws require all Chinese companies “to support, provide assistance, and cooperate in national intelligence work.”
So, then the question arises: Is the Huawei-like approach of totally banning these companies the right strategy? If not, are there any other remedies available? What are the pitfalls? All these questions need to be addressed before taking any substantive action. Look out for my next article for details on them and possible answers.
Always Connected PCs (ACPCs)
Have you heard the phrase "converting poison into medicine"? Well, that's kind of what is happening to the PC industry now. Let me explain. Not too long ago, the rise of powerful smartphones and tablets, primarily powered by ARM processors, decimated the PC market. Interestingly, the tenets of smartphones (always connected, long battery life, thin and lightweight) that caused the downfall of PCs are now bringing life back into them. The introduction of ultra-thin laptops and 2-in-1s is helping PCs get their mojo back. In early December 2018, Qualcomm announced a major step in this smartphonification of laptops. Its new Snapdragon 8cx compute platform, the world's first built on 7nm for PCs, not only embodies all those hallmark characteristics of a smartphone but also promises performance that will meet or exceed that of traditional Intel x86 processors. Most importantly, the Snapdragon 8cx will run the full Windows 10 Enterprise version and will natively run browsers and many other applications.
Qualcomm dipped their toes into the PC market by creating a new category, aptly named Always Connected PC (ACPC), using repurposed mobile SoCs. They started with the Snapdragon 835 and, very recently, the Snapdragon 850. These chips were originally built for Android and later optimized for Windows 10 and computing devices. They ran a restricted Windows version and offered limited performance, mainly because applications ran through ARM-to-x86 translators. They were good enough for light and simple tasks such as browsing, video, etc., but not ready for processor-intensive apps or enterprise-grade use cases. But the story is completely different for the newly announced Snapdragon 8cx.
Qualcomm said that the Snapdragon 8cx is purpose-built from the ground up for computing and Windows 10; supposedly, they have been working on it since 2015! The Snapdragon 8cx does share its architecture with, and was announced at the same time as, the flagship Snapdragon 855 mobile SoC. This will naturally attract skepticism that, just like the previous versions, this platform might also be a slightly tweaked mobile SoC. However, when you look closely at the significant differences between the building blocks of the two, it is quite clear that the Snapdragon 8cx is a different breed. For example, the 8cx has the much more powerful Kryo 495 CPU vs. the 485 on the Snapdragon 855, and the clocking configuration of its eight CPU cores is different as well. The Snapdragon 8cx has the more advanced Adreno 680 Extreme GPU vs. the 640 in the mobile SoC. It also has features that are only found in high-end enterprise laptops, such as support for dual HDR 4K displays, up to 16 GB of RAM, NVMe SSD, UFS 3.0, and many more. Most importantly, during the launch event, Microsoft confirmed Windows 10 Enterprise support for the Snapdragon 8cx, which is a strong vote of confidence in the platform. Additionally, many popular applications, including the Chrome, Firefox, Microsoft Edge, and Internet Explorer browsers, as well as Gameloft, Hulu, and others, run in native mode, and a wide range of apps are optimized for ARM on Windows.
When you combine these features with the trendsetting X24 LTE modem that provides up to 2 Gbps peak speed, Quick Charge 4, advanced audio capabilities with the aptX HD codec, and the hallmark ARM traits of multiday battery life and always-on connectivity, I think there is no question that the Snapdragon compute platform and ARM architecture are ready for primetime and well-equipped to challenge the dominance of Intel x86-based platforms in performance computing. Qualcomm's claim that Snapdragon 8cx performance is comparable to a competitor (supposedly an Intel Core i5) at twice the battery life should send a chill down Intel's spine.
Qualcomm confirmed that the Snapdragon 8cx can be integrated with the X50 modem for 5G connectivity, but for some reason it didn't make that a major selling point. It looks like they are worried about 5G overshadowing the compute story, or perhaps there will be laptops that do not support 5G. Qualcomm is tight-lipped about the reasons. In my view, although the X24 modem has excellent performance, an ACPC with 5G is the ultimate ACPC. After all, it's the "connected" PC; why not supersize it and make it the best in all aspects? Also, the huge capacity gains and efficiency improvements of 5G will enable operators to offer very attractive "always on" unlimited plans.
Coming back to the competitive landscape, ultra-thin PCs are the most profitable tier for Intel, and it has had a good run with them so far. Some devices, such as Microsoft's Surface Pro and HP's Folio, have shown that Intel Core i5 processors can be designed into attractive fanless laptops with long battery life. However, most other Intel x86-based laptops fall well short. With Snapdragon 8cx-based laptops planned to hit the market during the second half of 2019, amidst the busy back-to-school and holiday seasons, it will be interesting to see how the Qualcomm and Intel platforms compete and perform. Come 2020, this will very quickly turn into not just a processor battle but also a 5G battle.
With 5G, the ACPC battle gets even more interesting. Based on Qualcomm's comments, it seems they will have 5G-based ACPCs in the market in early 2020, if not late 2019. Intel has announced its own 5G connected laptop plans with Sprint. Knowing x86 performance and Intel's delayed 5G modems, it will be a tall order for Intel to beat the battery life and more mature 5G connectivity of Qualcomm ACPCs. With connected, ultra-thin, long battery-life laptops continuing to gain popularity and Qualcomm catching up in performance, Intel must adapt to the extremely fast pace of innovation that smartphonification is bringing to the PC industry to compete effectively.
A bunch of recent events, including the announcements of the Microsoft Surface Pro X and Samsung Galaxy Book S, signal a turning point in the largely stagnant laptop market. These devices, dubbed always-on, always-connected PCs (ACPCs), bring the hallmark characteristics of smartphones to laptops while also providing enterprise-class computing performance. As a long-time observer and industry analyst, I strongly believe that ACPCs are set to transform laptops and redefine personal computing.
After revolutionizing portable personal computing in the late 1980s and ’90s, laptops have not changed much. Of course, they have become a bit thinner, lighter and more powerful. But considering that you still need to carry the charger and look for Wi-Fi or other connectivity wherever you go, you can’t call those incremental improvements a big leap. These incremental steps look even smaller when compared to the speed at which smartphones have evolved.
ACPCs completely change the outlook for laptops and accelerate the pace of innovation. They are always on, connected to LTE or 5G, can run a full day without needing a recharge and provide performance at par with or better than today’s bulky laptops. All of this is made possible by a new breed of processors with micro-architecture similar to the ones used in smartphones.
Smartphone Revolution Powered By Arm Processors
Ever since their debut in the early 2000s, smartphones have been dominating the personal computing space. They have rapidly grown in both performance and influence. Almost all of today’s smartphones are powered by processors with a micro-architecture designed by the British company Arm Holdings. Smartphone players such as Apple and Qualcomm use processor cores designed by Arm.
(Full disclosure: Qualcomm is a client of my company, Tantra Analyst.)
These processors have been proven to be power-efficient. Designed primarily for portable devices, they seem to have previously focused more on power consumption than processing capability. But the evolution of these processors and the optimizations from the original equipment manufacturers (OEMs) have dramatically improved their performance in recent years. This has set Arm processors up for performance-focused devices such as laptops, PCs and even servers.
Laptops Have Survived The Test Of Time
Laptops have defied many predictions of their ultimate demise. First, netbooks were supposed to kill laptops, but they ended up being just a fad. Then tablets were supposed to replace them. But tablets never scaled up.
The way I see it, the biggest trait of laptops, which made them stand strong against these odds, was their ability to be a productivity and content creation tool — be it for personal and consumer-type use cases or enterprise ones. The basic needs for such use cases are excellent performance and support for thousands of existing Windows applications.
Writing The Next Chapter Of Laptops
The first attempt at making the Windows operating system (OS) compatible with Arm processors was circa 2012, called Windows RT, designed for tablets. But it turned out to be a dud, mainly because it couldn’t run existing applications. Its makers, Microsoft and Qualcomm, still believing in the concept, doubled their efforts. This round made sure Windows 10 and all those existing applications would work flawlessly on Arm processors used in ACPCs.
It is debatable whether ACPCs are a new category or an existing yet transformed laptop category. Some OEMs, such as Lenovo, Samsung, and Asus, are continuing with traditional clamshells, whereas others like Microsoft are trying the 2-in-1 model with detachable displays that convert to fully functional tablets.
I think it is telling that many PC vendors have introduced ACPCs. I believe that the attractiveness of bringing the smartphone-like battery life and user experience to laptops, the proliferation of 5G, along with a strong commitment from Microsoft and the entire PC ecosystem makes it clear that ACPCs are the future of laptops.
What’s Inside The ACPCs?
ACPCs are powered by Qualcomm Snapdragon platforms. The first-generation devices used optimized versions of Snapdragon SD835 and SD850. But the latest ones, including Samsung Galaxy Book S and Surface Pro X, use purpose-built Snapdragon 8cx (Pro X uses a modified version of 8cx chip called SQ1). Snapdragon 8cx has a powerful CPU and GPU, as well as strong artificial intelligence capability.
I've seen many popular browser, video game platform, and media player developers porting their applications to run natively on Arm processors. Likewise, many enterprise vendors have ported their applications to Windows on Arm. Adobe announced that its drawing and painting applications will be available on ACPCs. And according to Microsoft, the Surface Pro X offers three times higher performance than the previous-generation Surface Pro 6, which used a conventional x86 processor. So, there is no question in my mind that ACPCs are now primed for running the high-performance workloads of consumers as well as enterprises.
The progress of ACPCs may be slower than some might have expected, but it takes time to transform an industry with more than three decades of history. I believe an Arm micro-architecture that is ready for performance-focused computing has repercussions beyond laptops, as there could be many more applications and use cases.
What This Means For Marketers
Because of the stagnant market, it seems that marketers have gradually reduced their attention to laptops and, instead, moved their strategies toward media more suited for smartphones. I believe ACPCs will drastically change that equation. Marketers will likely need to quickly pivot their marketing plans and spend. Specifically, the 2-in-1 model almost creates a new category of devices, and marketers will be well served if they capitalize on this growing popularity and devise their marketing plans around them.
We are at the turning point of personal computing, and at the dawn of a new era with devices powered by Arm micro-architecture. It will be interesting to watch it unfold, especially for an analyst and a keen industry observer like me.
The fun of being an analyst is that you get to test new gadgets firsthand and share your opinions without any inhibitions. It also comes with a sense of responsibility towards your readers. I got my Microsoft Surface Pro X about two weeks ago and have been using it as my daily driver ever since. My verdict – it is an excellent productivity notebook for a pro user like me, who extensively uses office applications, browsing, videos, and social media. Beyond that, it also signals the dawn of a new class of always-on, always-connected notebooks (aka ACPCs) that will redefine personal computing.
<<Side note: If you would like to know more about ACPCs, please check out my earlier articles here and here>>
Easy set-up
I bought a 16GB/256GB Pro X model with a keyboard and stylus. The Windows set-up was a breeze. The impressive part was the ease of enabling cellular connectivity, just like on a smartphone: push the nano-SIM in, a couple of clicks, and you are ready to go. I have been using connected laptops since 2008, the 3G days, and it was always a pain to transfer a subscription from one laptop to another. Although I didn't utilize it, a user-removable SSD drive is another neat feature. The best part of this machine is its always-ON feature, just like a smartphone: you come in front of it, your face is recognized, and it is ready to go. Additionally, OneDrive allowed me to move files from my old laptop seamlessly.
Ever since setting it up, I have been using it as my primary computer: working in my home office, meeting with clients, bringing it to my son's karate and other classes, etc. Thanks to the Snapdragon/SQ1 processor, the Pro X is so thin and light that carrying it around is extremely convenient.
A solid productivity machine
The biggest strength of the Pro X is that it is a great workhorse, and using it is a joy! Its bright display is beautiful, and its thin bezels fit a full 13" screen in a small form factor. Coming from a 13.3" laptop, I felt right at home. I am a power user of many Microsoft Office tools, including Word, Excel, PowerPoint, and Outlook. The user experience was snappy and super responsive, even when multitasking with lots of documents, spreadsheets, and presentations. Switching between windows of the same app or between different apps was very smooth.
I use emails on Outlook as my to-do list—keeping many email windows (more than 15) open till the action items in them are dealt with. My previous laptops had issues dealing with this, especially when the laptop was put to sleep and turned back on. Many times Outlook would become unresponsive, requiring restarts. But Outlook on Pro X has been pretty stable so far.
A lot of my work happens in the browser, and Chrome is my favorite. I usually have more than ten tabs open, spanning multiple Gmail accounts; local, national, and international news sites with video feeds and ads; Tweetdeck and Twitter pages; a Yahoo Finance page; multiple forums I regularly follow; WhatsApp web; Google Sheets and Google Photos that I share with my wife; Facebook; and others. I also use tabs as my to-do list. My kids call me crazy when they see how many tabs I use. Surprisingly, the user experience was smooth even with that many tabs open. As you might know, Chrome currently runs in emulation mode. Microsoft recently announced a beta of its Edge browser that runs natively on ARM processors (i.e., on the SQ1), which should further improve performance and battery life. I am thinking of migrating to Edge to evaluate the experience myself.
So, all in all, I was very impressed with the workload the Pro X could handle; it has proved itself a solid machine.
A perfect companion for travel and offsite work – battery life and connectivity
The biggest differentiation of ACPCs such as the Pro X, as touted by Microsoft, Qualcomm, and Arm, is their more-than-a-full-day battery life. I really experienced it with the Pro X: I would always have at least 10-20% of battery left after a full day of work (8-9 hours), using a mix of Wi-Fi and cellular connectivity. I bet I could eke out even more with optimized screen brightness and connectivity settings.
The Pro X transformed how I go out for meetings and travel. With my old laptop, I would always bring the charger to avoid battery anxiety, which necessitated carrying a bag. And once I decided to carry the bag, I would throw in lots of "just-in-case" items that I hardly use. But with the Pro X, voila! No anxiety, no charger, no bag, and none of the other junk! This thing is so sleek, light, and stylish that I carry it like a notebook, with a nice stylus and handwriting converter to boot! Additionally, with fast charging, its battery can go from 0 to 100% in a little over an hour.
For a road-warrior like me, integrated cellular connectivity is a no brainer. It is such a relief that I am always connected, no matter where — no need to search for Wi-Fi, no worries of security and privacy, etc. Also, no need to use my phone’s hotspot and worry about its battery running out.
What about gaming and other incompatible apps?
This is the most frequent question I encountered when carrying or using the Pro X in public. Well, I am not a gamer, and, it turns out, I don’t use any x86 apps that lack 32-bit versions (only 32-bit x86 apps run in emulation on the Pro X). So, I am not the best person to pass judgment on that.
There have been reports of people having trouble running games on this device. That has actually worked in my favor! Ever since I opened the Pro X package, my teenage son has had his eye on this thing, always tinkering with it. I think he tried a few of his favorite games, such as Minecraft, Fortnite, and CS:GO. I have a feeling either they didn’t work or he didn’t like the experience, because after the first couple of days, he went back to his powerful gaming rig. Obviously, the Pro X is no match for his purpose-built, beefy desktop.
What are the misses?
I think the biggest miss is its steep price tag. Even the most basic configuration with just the keyboard costs $1,100 plus tax. So, this is no mainstream computer; it is targeted at those who value its premium design and features.
Despite the premium cost, I was surprised that there was no cellular data plan included. I would have expected Microsoft to bundle at least a few months, if not a year, of data to let consumers evaluate the always-connected experience.
The Pro X is literally a notebook, not a laptop. As with any Surface Pro, it is almost impossible to use on your lap.
Heralding the ACPC era
Many people might review the Pro X like any other expensive gadget, on its merits and misses. However, the relevance of the Pro X goes far beyond this one product. Its performance conclusively proves that ACPCs are real and can deliver on the promises their proponents Qualcomm, Microsoft, and Arm have been making for the last two years. The Pro X also shows these companies’ strong commitment to the ACPC concept. As mentioned, the Pro X is not a mainstream device, but it heralds a new era of personal computing, and I am sure more cost-effective options will soon make Arm-based ACPCs mainstream.
Qualcomm, during its annual Tech Summit in Maui, Hawaii, unveiled a comprehensive portfolio of platforms for Always-On, Always-Connected PCs (ACPCs) to cover the full spectrum of tiers and use cases. This announcement further solidifies the industry’s move toward ACPCs, led by Qualcomm, Microsoft, and Arm.
Side note: If you would like to know more about ACPCs, please check out my earlier articles here, here and here.
A broad portfolio of offerings
The Snapdragon 8cx, announced at the same event last year, was the first real ACPC platform to bring Arm chips into the performance and enterprise computing space. Since then, the 8cx has powered a handful of devices, including the trend-setting Microsoft Surface Pro X, the stylish Samsung Galaxy Book S, and the first 5G-capable ACPC from Lenovo. Many other designs are in the pipeline.
While the Snapdragon 8cx was targeted at the premium and high-performance segment, the newly announced Snapdragon 8c and Snapdragon 7c give OEMs the choice to address the other tiers of the highly competitive laptop space. The tiering is based on CPU, GPU, and DSP performance, Artificial Intelligence (AI) and Machine Learning (ML) capabilities, and cellular connectivity speeds. However, Qualcomm never forgets to emphasize that even with tiering, all the platforms squarely deliver on ACPCs’ famed promise of smartphone-like ultra-thin form factors, multiday battery life, and excellent connectivity, without compromises. This promise is attractive for any tier, and that’s why almost every major PC OEM has embraced ACPCs.
Snapdragon 8c for everyday laptops
The key aspect of the Snapdragon 8c is enabling sub-$800, highly capable consumer and enterprise ACPCs that excel at high-productivity workloads as well as top-notch entertainment and multimedia. The 8c is a beast, sporting a 7nm octa-core Kryo 490 CPU, an Adreno 675 GPU, 4-channel LPDDR4x memory, support for NVMe SSD and UFS 3.0, a dedicated Hexagon AI/ML Tensor Accelerator, an integrated Snapdragon X24 LTE modem, and many other impressive features.
The Snapdragon 8c offers 30% higher system performance than its predecessor, the Snapdragon 850, more than 6 Trillion Operations Per Second (TOPS) of AI/ML performance, and up to 2 Gbps of cellular speed.
You can get more detailed specifications of this platform here.
Snapdragon 7c for entry-level ACPCs
The primary focus of the Snapdragon 7c is to bring the ACPC experience even to cost-conscious, entry-level laptops that are highly functional at a sub-$400 price point. The 7c sports an 8nm octa-core Kryo 468 CPU, an Adreno 618 GPU, 2-channel LPDDR4x memory, robust AI/ML support unheard of at this tier, and an integrated Snapdragon X15 LTE modem, among other things.
It offers 25% higher performance than competing solutions in the entry tier, more than 5 TOPS AI/ML, and up to 800 Mbps of cellular speed.
You can get the detailed specifications of this platform here.
Busting the myths of portability
Until now, portability in computing has always meant a complex trade-off among weight and size, performance, battery life, and cost. If you wanted a thin and portable computing device, the only option was a tablet, and you had to be content with limited performance and crippled functionality, without support for a productivity OS such as Windows 10. On the other hand, if you wanted robust performance and long battery life, you had to cope with large, bulky devices and extended battery packs. If you wanted a combination of these features, you had to be ready for a hefty price tag.
But with ACPCs, you get an uncompromised experience without such tradeoffs: Arm architecture that offers superior battery life and performance, full Windows 10 support for unhindered productivity, and an integrated cellular modem for always-on connectivity. All of that comes in a thin, lightweight, and very attractive form factor, just like your smartphone.
ACPCs are essentially aligning the computing industry with the smartphone industry. That will bring the smartphone industry’s hallmark of rapid innovation to computing. Together, both will benefit from large economies of scale, cost efficiency, and a huge ecosystem of OEMs, app developers, consumers, and enterprise players. That, in turn, has the potential to revitalize the stagnant and uninteresting laptop market and bring it much-needed excitement and growth.
In other words, ACPCs are set to challenge the status quo of Intel’s x86 architecture and revolutionize the laptop/personal computing market.
In closing
Qualcomm’s announcement expanding the reach of ACPCs illustrates how the “Windows on Snapdragon” concept that Qualcomm, Microsoft, and Arm envisioned a few years ago is slowly but steadily coming to fruition. The comprehensive portfolio of platforms will pave the way for making ACPCs mainstream, bringing their benefits to all market segments, not just the premium tier.
It will be interesting to see how the tussle between deeply rooted traditional x86 architecture and the disruptive Arm architecture unfolds and shapes the laptops and personal computing space.
While smartphones get all the 5G attention, market trends are aligning for a quiet revolution in 5G-enabled laptops (5GPCs) and other non-smartphone computing devices. The world’s first 5GPC, Lenovo’s Yoga 5G, was introduced at CES 2020, kick-starting the process. Although always-connected, always-on laptops (ACPCs) have been around for some time, their widespread adoption has been constrained mainly by restrictive and expensive data pricing. The extreme capacity and improved efficiency of 5G, which allow operators to offer attractive pricing, combined with the remarkable improvement in ACPC performance, have the potential to push the 5GPC market into high gear.
5G Offers The Best Network Technology For ACPCs
5G traction has been beyond anybody’s expectations. As of the end of 2019, 348 operators were investing in 5G and 61 operators had already commenced 5G services. The operators who have launched are steadily expanding their coverage. The introduction of dynamic spectrum sharing (DSS) — which allows 5G to use the 4G spectrum, expected commercially in the second half of 2020 — will substantially improve coverage. Thanks to the diligent work of regulators around the world, 5G has over 10 times more spectrum than 4G in many cases. That includes all the bands: higher (e.g., millimeter wave), middle (e.g., 2.5 and 3.5 GHz) and lower (e.g., 600 MHz).
Although 5G’s super-high speeds get all the attention, the biggest advantage of 5G is its extreme capacity, thanks to all that spectrum. That means cellular operators have the opportunity, more than ever, to experiment with new pricing and data plans. We already see glimpses of that in the true unlimited data plans for smartphones and fixed wireless access (FWA) services and plans. I strongly believe that 5GPCs will be a worthy addition to the new horizons operators will explore with 5G.
For the operators pouring billions of dollars into 5G network build-out, the sooner and the more users they get on that network, the better. The abundant capacity of the 5G network allows operators to move laptop users into a new usage paradigm: from today’s “data sipping, only turning on the cellular connection when needed, always conscious of hitting the data limit” mindset to the “anywhere, anytime, worry-free” paradigm.
5G also allows true service bundling: a single contract and attractive pricing for smartphones, FWA, laptops, and other connected devices. While reducing costs for users, this will increase the overall average revenue per user (ARPU) for operators. Bundled pricing brings service stickiness and builds long-term customer relationships. Operators could also work with 5GPC OEMs to bundle connectivity into the device cost, at least for the first months or year of 5G service. As a seasoned ACPC user, I know that once you experience the liberation from hunting for hotspots and worrying about their safety, you will hardly go back, as long as the cost of that experience is reasonable.
5GPCs Will Be The Best ACPCs
ACPCs have continuously improved their performance and are now ready to serve as productivity, enterprise, and performance laptops. For example, the recently announced world’s first 5GPC from Lenovo offers high performance and 24-hour battery life. (Full disclosure: The laptop is powered by the Qualcomm Snapdragon 8cx, and Qualcomm is a client of mine.) With a 5GPC, you can work from virtually anywhere without worrying about being near a power outlet or a Wi-Fi hot spot. The data speeds with 5G should be far better than any regular hot spot would provide.
With today’s traditional laptops and their shorter battery life, even if you have cellular connectivity, the untethered experience is limited because you always have to think about charging options. The extremely long battery life of ACPCs makes them truly untethered. Not being tethered, physically or wirelessly, is an exhilarating experience, and it is logical to think people would be willing to spend a little more for this higher perceived value.
5GPCs will be particularly attractive for enterprises. There are many reasons for this, and the biggest one is security. One of the main security risks for enterprises is their employees connecting laptops to unknown, unsecured Wi-Fi hot spots. With 5GPCs, IT departments will be certain that their employees will always be connected to a secure known 5G network. The potential costs of lost data or security breaches would certainly outweigh any minimal increase in the cost of 5G cellular connectivity. Also, 5GPCs bring many other benefits to enterprises: Integrated GPS allows reliable asset tracking and security mechanisms such as geofencing; being always on, laptops will always be up to date with the latest security patches and updates. Of course, the increase in employee productivity by being reliably connected all the time with excellent speeds goes without saying.
5GPCs will bring much-needed excitement to the largely stagnant laptop market. If managed properly, the 5GPC trend has the potential to create a new full replacement cycle, which might last for years.
All the stars are aligning for 5GPC to be an attractive market for the industry. 5GPCs have the performance to make the best use of 5G and provide a differentiated experience. Both consumers and enterprises will benefit enormously from 5GPCs. Cellular operators can utilize 5G’s extreme capacity to offer services that make true anywhere, always-connected, fully untethered experiences possible. But it will only be a reality if they can offer attractive and innovative pricing and data plans. With major 5GPC device announcements trickling in and operators looking to expand their 5G offerings, it will be interesting to see how the story of 5GPCs plays out.
For the last few weeks, while the influencer world was busy testing and reviewing the Samsung Galaxy S20 and Galaxy Z Flip smartphones, I was diligently using and testing another equally important and impressive Samsung product: the Galaxy Book S, the latest always-on, always-connected PC (ACPC). My verdict? It defines what portable laptops are meant to be. But being an analyst, I can’t stop myself from giving the rundown on why I think so and how it provides a glimpse of the future of laptops.
Purchasing and setting up Book S
The Galaxy Book S comes in only one configuration: the Snapdragon 8cx processor, 8GB of LPDDR4X RAM, and a 256GB SSD (with a MicroSD slot supporting up to 1TB), running Windows 10 Home. I bought mine on the Samsung website. Ordering was a breeze, although Samsung may confuse buyers by showing only Verizon and Sprint as supported carriers. I bought the Verizon version, paying in full ($999 + tax). However, it came factory unlocked, and it worked perfectly fine with Sprint, T-Mobile, and Google Fi. I am reasonably sure it would work with AT&T as well. I have asked Samsung whether the Verizon and Sprint versions are different SKUs with any major differences, such as supported spectrum bands, carrier aggregation combinations, etc. I am yet to hear back (I will update this article if I do in a reasonable time). I believe Samsung is artificially limiting the device’s reach and market opportunity by listing only two operators, even though it works with virtually any operator. This matters because other laptops in this category support only certain operators; for example, the HP Spectre works only with AT&T and T-Mobile.
The setup was easy. I did have an issue with the keyboard backlight not working, which was resolved with a Windows update. The backlight has three levels, which is nice, but the first level is dim enough that you might mistake it for not working except in low-light situations.
Incredibly thin and light, with extremely long battery life – perfect for travel or the office
I have used a lot of laptops in my professional life, and that is an understatement. This is by far the thinnest, lightest laptop that did everything I wanted while providing the longest battery life. The official dimensions can be found here. My workloads are primarily productivity-focused. As I explained in my earlier article, I use more than 15 email windows and multiple sessions of Microsoft Office applications, including Word, Excel, and PowerPoint, and I usually have more than 20 browser tabs open at a time. The Samsung Galaxy Book S, with its Snapdragon 8cx processor, never struggled under this load. There is something to be said about the new Chromium-based Microsoft Edge browser, which comes as the default. It is fast and stable and supports Chrome extensions, so I never miss my previous favorite, Chrome! Edge provides native ARM64 support, so its battery life versus Chrome, which runs in 32-bit emulation mode, is beyond compare on the Snapdragon compute platform.
The Galaxy Book S is a perfect companion for a road warrior like me. Unfortunately, thanks to COVID-19, my travel has been severely curtailed. During the limited travel I did with the Galaxy Book S, I never carried its charger for single-day trips or in-town meetings. That means no backpacks or other bags to carry, just the Book S, like a notebook. At the end of each of those days, I still had more than 30-40% of the battery remaining. Truly remarkable.
Without travel, I have converted the Galaxy Book S into my home workstation. With an external 32” WQHD (1440p) monitor, mouse, and keyboard, all connected through a USB-C hub, I almost forget that it is a laptop; such is the user experience!
The Galaxy Book S always gets compliments on its thinness and weight, whether I use it in meetings or at my son’s karate class. Many wonder how one could fit a fan into such a thin chassis. Some of my curious IT friends have even searched for the fan and vents! The kicker is telling them that it has no fan or vents, thanks to the Qualcomm Snapdragon 8cx processor inside.
The secret behind the incredible size and battery life of the Galaxy Book S
The biggest challenge laptop designers face is the tradeoff between size (thinner and lighter) and performance and battery life, and designers seem to have reached a saturation point in that tradeoff. It all boils down to the thermal characteristics of today’s processors: the higher the performance, the more power is used and the more heat is generated. There are two options to manage this heat: either use a fan and proper ventilation, or throttle the performance. Most of today’s laptops, even ones such as the MacBook Air, use fans, which make them big and bulky while also increasing power consumption. Premium sleek devices, such as the older-generation Microsoft Surface line-up, use throttling, which compromises the user experience. For increasing battery life, the only option is adding bigger batteries, which increases weight.
Now comes the Snapdragon 8cx compute platform used in the Samsung Galaxy Book S. It is built using the best of Qualcomm’s mobile heritage, combined with the performance you’d expect of a PC. Based on the Arm architecture, it offers performance similar to an x86-based Core i5 while producing minimal heat in an extremely power-efficient way. So, without fans or cooling constraints, and without the need for bigger, heavier batteries, device designers can build extremely thin, light, high-performance laptops such as the Samsung Galaxy Book S, whose battery life is measured in days, not hours.
Galaxy Book S vs. Surface Pro X
Since I have reviewed and have been using the Microsoft Surface Pro X for the last few months, a comparison between the two is another question I am often asked. Well, I like them both. They have some common uses, but for many others, one is more suited than the other. For example, as I explained in my article, the Pro X can be off-balance when you try using it on your lap, whereas the Galaxy Book S proved a perfect fit for such use. As a detachable 2-in-1, the Pro X is ideal if you also like to use your device as a tablet with a stylus. The Galaxy Book S is a clamshell design that is more suitable as a daily driver or a workstation, easily connected through USB-C docks and such. Although the Galaxy Book S has less RAM (8GB vs. 16GB), I haven’t seen that affect my productivity apps much; if you use more graphics- and processor-intensive applications, the difference might be more apparent. Of course, the Pro X with all its accessories costs upwards of $1,500, whereas the Galaxy Book S is around $1,000. I currently use both devices. All my content is on OneDrive, and since both are always connected, I can seamlessly switch between the two, no matter where I am.
The biggest remaining concern with ACPCs is app compatibility. More apps are being ported to run natively on ARM64, though some applications, such as certain games and video editors, are still incompatible. It is worth noting, though, that most of those demanding applications don’t run well on other thin-and-light notebooks either. The other concern for some is high cellular data pricing, but operators now have bundled options where one can get reasonably priced unlimited add-on data plans.
A glimpse of the future
The Samsung Galaxy Book S is only the second ACPC based on the Snapdragon 8cx, and it supports best-in-class 4G LTE connectivity with peak speeds of up to 1.2 Gbps. But we are at the dawn of 5G, which promises multi-gigabit user speeds, extreme capacity, and lower latency. 5G ACPCs (aka 5GPCs) will be the best devices to utilize this unprecedented connectivity everywhere, as I have explained here. The Book S gives a glimpse of what those 5GPCs have to offer in the years to come. In fact, the world’s first 5GPC has already been announced, and many are on the horizon. I can’t wait to get my hands on those!
It is bliss, as an engineer, to witness a whopping 2 Gbps speed on a live commercial network using an off-the-shelf device. That was my experience a few weeks ago, using the new Lenovo Flex 5G on Verizon’s live mmWave network in San Diego. It is even more amusing considering that I had tested 9.6 Kbps (yes, kilobits per second) speeds on 2G networks only two decades ago, and tens of Mbps only a few years ago.
The Flex 5G is the world’s first 5GPC, powered by the Qualcomm Snapdragon 8cx 5G compute platform with the Snapdragon X55 5G Modem-RF system. It represents what an ideal productivity 5GPC should be: ultra-high-speed mmWave and sub-6GHz 5G connectivity, the famed long battery life of Always Connected PCs (ACPCs), robust performance, and a lightweight, fanless design, all enabled by the Snapdragon processor.
It is a perfect device for a user like me: a professional who is always on the move and needs top-notch connectivity and a light, high-performing laptop, without the hassle of constantly looking for Wi-Fi hotspots and power outlets.
Immediately after buying the Flex 5G, I couldn’t stop myself from testing and tweeting my initial thoughts. I used it extensively as my daily driver and travel companion for more than a month, and I came out very impressed.
Side note: If you would like to know more about ACPCs, including reviews of the Microsoft Surface Pro X and the Samsung Galaxy Book S, check out my other articles in this series.
Solid and highly functional build
Built in Lenovo’s popular Yoga style (in fact, this laptop is called the ‘Lenovo Yoga 5G’ outside the U.S.), the Flex 5G’s aluminum and magnesium body looks sleek and stylish. At 2.9 lbs., it is slightly heavier than the other ACPCs I have used (Surface Pro X and Galaxy Book S), but you don’t really feel the difference when carrying it around, as it is still very light and portable. I especially liked its rubbery back and sides, which offer a satisfyingly firm grip when holding it and stability when it is placed on uneven surfaces. This came in very handy during my recent RV trip with the family: the Flex 5G would sit firmly no matter where I placed it, on the seat, on the table, or anywhere else, even when driving on bumpy roads.
Blazing fast 5G connectivity
The Flex 5G’s claim to fame is its 2 Gbps 5G mmWave speed. Unlike many peak-speed claims, you can actually get that speed when standing close to the base station! Generally, as you move away from the base station and as network load increases, speeds drop to hundreds of Mbps, though that is still notably better than 4G and better than most home networks. I did extensive testing on Verizon’s 5G UWB (mmWave) live network in San Diego and was blown away by the speed.
When I tested, Verizon had two sites in San Diego, though they seem to have added two more recently. The coverage is limited to a couple of blocks around those sites. Most of my testing was near the University Heights site. I could get speeds in excess of 1 Gbps more than a block away, as long as there was line of sight (LoS). I would get decent speeds even without LoS, but the connection would quickly drop to 4G LTE when I moved behind buildings or other major obstructions. Thanks to the Flex 5G’s dual connectivity, though, handoffs in and out of 5G coverage were seamless. I have included screen captures of some of the test results. Verizon has good 4G coverage offering high speeds in the area, which was a big plus.
I did some speed-test comparisons between the Flex 5G and the Samsung Galaxy S20, which also utilizes the Snapdragon X55 5G Modem-RF system. Generally, the speeds on the Flex 5G were slightly higher and the coverage a little better than on the S20. I would attribute that to the laptop having better antennas (probably with higher gain), better antenna spacing, and fewer near-field obstructions such as the hand and other body parts.
During the testing, I discovered that Ookla, Netflix’s Fast, and other speed-test sites would not show full speed when run in browsers (Edge, Chrome, and Firefox); the speeds topped out at 600-700 Mbps. The Windows 10 apps, however, showed the full gigabit speeds. This confused me a bit. When asked, Ookla could not give any specific reason for this behavior and suggested always using the app for accurate results. This indicates that browsers are not yet optimized to utilize such high speeds, which might create user-experience challenges if not addressed soon.
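For readers who want to sanity-check browser results the same way, here is a minimal sketch of measuring raw download throughput outside the browser, using only the Python standard library. The test URL is a placeholder of my own; substitute any large file hosted on a fast server.

```python
import time
import urllib.request

# Placeholder URL for illustration only; replace with a real large test file.
TEST_URL = "https://example.com/largefile.bin"

def measure_throughput_mbps(url, chunk_size=1 << 20, max_bytes=500 << 20):
    """Download up to max_bytes in 1 MiB chunks and return average speed in Mbps."""
    total = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        while total < max_bytes:
            chunk = resp.read(chunk_size)
            if not chunk:  # server closed the stream or file ended
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total * 8) / (elapsed * 1_000_000)  # bytes -> bits -> Mbps

print(f"{measure_throughput_mbps(TEST_URL):.0f} Mbps")
```

A raw download like this bypasses the browser’s JavaScript and rendering overhead, which is one plausible reason native apps reported higher numbers in my testing.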
Days-long battery life
The Flex 5G, just like the other ACPCs I have reviewed, lives up to its promise of long battery life. It sports a 4-cell 60Wh battery, slightly bigger than those of comparable Yoga laptops. This is made possible by the Qualcomm Snapdragon 8cx 5G compute platform, which is so thermally efficient that devices using it need no fan or other specialized cooling, freeing up space and weight margin. This also helps the Flex 5G remain lighter than comparable models.
Instead of testing Lenovo’s claimed 26 hours of video playback, I tested the laptop under my typical productivity use: multiple email tabs, lots of browser tabs, Microsoft 365, Zoom and other conference-call apps, YouTube, audio/podcast recording and editing, and more. I got more than two days of battery life from a single charge while doing these things, with the laptop connected primarily through Wi-Fi and occasional cellular use. The battery lasted even longer during my limited travels, as the usage was lower, though it was always on a cellular connection. I wish I could have done more testing during travel, but COVID-19 didn’t allow it. Had I been traveling as usual to the major cities and areas where Verizon and other operators are deploying 5G, I could have fully utilized the benefits of 5G connectivity.
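For perspective, a quick back-of-the-envelope calculation (my own arithmetic, not a figure from Lenovo) shows what that 26-hour playback claim implies about average power draw from the 60Wh battery:

```python
# What Lenovo's claimed 26-hour video playback implies for average power draw.
battery_wh = 60       # battery capacity in watt-hours
claimed_hours = 26    # Lenovo's claimed video playback time

print(f"Implied average draw: {battery_wh / claimed_hours:.1f} W")  # ~2.3 W
```

An average draw in the low single-digit watts for the whole system is smartphone territory, which is exactly the point of bringing a mobile-derived platform into a laptop.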
Performance tuned for productivity
The Flex 5G is a perfect machine for productivity. I found its processing power more than adequate for all my usage (mentioned above), and even with all these applications running, it never got hot. I am not a gamer, nor do I use any graphics-intensive applications, so I cannot speak to application compatibility or performance for those needs. It is worth noting, though, that such thin, lightweight laptops are not targeted at those users anyway.
One revelation was how accustomed I have become to the absence of fan noise during my more than eight months of using Snapdragon-powered ACPCs. A couple of weeks ago, when I had to use a buddy’s laptop, its fan noise was so distracting it drove me crazy. Once you experience the pure silence of these ACPCs, it’s hard to go back to traditional devices with loud, heavy fans.
The Flex 5G comes with Windows 10 Pro and a free year of Microsoft 365 Personal. It has 2x2 802.11ac Wi-Fi with MU-MIMO (aka Wi-Fi 5), which performs excellently. I was especially impressed with the quality of the onboard microphone. While moderating a 5G panel at the recent IWCE Virtual event, my headset broke at the last minute, so I had to use the laptop mic, and it sounded remarkably good.
Some misses and room for improvement
Despite the excellent overall experience, there are some misses. The 256GB SSD is rather small for a premium productivity laptop, made worse by the lack of upgrade options: the SSD is soldered to the board and not field-replaceable, and there is no MicroSD slot. For its thickness and weight, Lenovo could have provided a full-sized USB-A port in addition to, or instead of, one of the two USB-C ports. Also, the laptop currently supports only Verizon 5G connectivity in the United States (the unlocked version works only in 4G mode with other operators).
Verizon’s extremely limited 5G coverage leaves a lot to be desired. mmWave needs dense deployment of sites, as I explained in my earlier article, and I hope Verizon densifies its network soon. Verizon will also soon enable Dynamic Spectrum Sharing (DSS), which allows 5G to use the existing 4G spectrum and will tremendously help expand 5G coverage rapidly, although with limited 4G spectrum, gigabit speeds will not be possible. The Snapdragon X55 inherently supports DSS. Verizon also needs to improve its customer support for ACPCs. I had some issues activating the device, and the frontline reps had no clue where to redirect me; it took a few tries and a couple of hours to reach the right person and get my service going.
The Lenovo Flex 5G is available for $1,399 on the Verizon website (though it shows as $1,699 on the Lenovo webpage for some reason), which is $200-$300 higher than comparable thin, lightweight premium productivity laptops. Considering that it is the first of its kind and you are future-proofing your investment, it might be worthwhile for many mobile professionals like me. A lot also depends on how quickly 5G coverage improves, and how soon we start traveling and moving around again like before.
In closing
The Lenovo Flex 5G lives up to its billing as the world’s first 5GPC and shows what a 5GPC should be. It delivers on all the characteristics of a Snapdragon-powered ACPC: a sleek fanless design, lightweight build, and multi-day battery life, crowned with ultra-high-speed mmWave 5G connectivity. The device’s 5G usability is currently somewhat limited by Verizon’s coverage; however, Verizon is working hard to add more mmWave sites and bring DSS, which should substantially expand coverage. The Flex 5G delivers a great computing experience now, and it will only get better as 5G coverage grows.
To read more reviews like this and get up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
FTC vs. Qualcomm Antitrust Trial
The ongoing saga between FTC and Qualcomm
It is unbelievable when one of the world’s richest companies complains that it is an undue burden to pay for the innovations that power its high-margin products. But it sure looks like a well-orchestrated war on innovation with sinister motives when a government agency such as the FTC (Federal Trade Commission) joins hands with it in beating down a supplier one-tenth its size that is a proven technology pioneer.
I am talking about the trial underway between the FTC and Qualcomm in the U.S. District Court in San Jose, California. I am not a lawyer but a passionate engineer who was part of the 2G, 3G, 4G, and now 5G transitions. I know first-hand what it takes to conceive, build, and deploy wireless technologies. Here are my thoughts on this legal tussle and its potential consequences.
Wireless communication, especially for broadband data, is a fascinating invention in that it is largely invisible, literally and metaphorically. Unlike beautiful smartphone screens, artful industrial designs, or clever apps, wireless has been an enigma, attracting little attention or appreciation. You only realize its importance when out of coverage. Oh, the agony, the insecurity, and the fear of missing out! The device is called a smart “phone” for a reason: without the “phone” functionality, most of those smarts have little value!
“Wireless data” is the defining technology of the smartphone, not just another feature
Why am I explaining the importance of wireless data? In the current FTC trial, the Commission’s lawyers and witnesses put forward two complaints: 1) licensing fees should be based on the modem’s price, not the device’s, and 2) Qualcomm’s licensing fees are too high. On the first, wireless data is the fundamental and defining technology of any smartphone. It is also a misconception that wireless data technology is contained only within the “modem” block. In reality, the functionality is the result of a comprehensive system design that makes the smartphone work as a complete device, with all its subsystems and software. Additionally, the design includes complex interactions with numerous infrastructure and network elements (radio, core, and cloud) to function as a well-orchestrated system. So, it would be disingenuous and utterly ridiculous to limit the value of all this technology to a small percentage of the price of a modem.
On the licensing-fees argument, fees should be determined by the value the technology imparts to the overall usefulness of the device, not correlated with a single isolated part. Also, the valuation of wireless technology should be market-driven, not arbitrarily or subjectively determined by the FTC or another regulatory authority. If you accept the notion of regulatory price-fixing, why stop with Intellectual Property (IP)? Why not also regulate the price of smartphones? Looking at recent price increases, that may not be a bad idea after all! Jokes aside, as witnessed by the spectacular proliferation of smartphones over the last decade, market pricing of wireless technology IP has benefited the mobile industry and consumers.
The value of Qualcomm’s IP has been accepted by most of the industry, as illustrated by more than 300 negotiated licenses. Moreover, after a lengthy investigation by and negotiations with the Chinese regulator, the NDRC (National Development and Reform Commission), Qualcomm agreed to a settlement that included rates deemed fair by the Chinese agency. It is telling that even Chinese OEMs agree the licensing rates are fair, despite having far thinner margins and much smaller scale than Apple, which makes most of the mobile industry’s profits (almost 90% by some estimates). So Apple’s subjective claim that “license fees are too high” doesn’t pass the sniff test. It is also interesting to note that many of the FTC’s witnesses in the trial, such as Huawei, Apple, and Intel, are Qualcomm’s arch-rivals.
Will the FTC case against Qualcomm help or harm consumers?
Let’s examine the premise of this case and how it relates to the FTC’s mission, which is to ensure fair competition so that consumers benefit from wider choices and lower prices.
When you look at the U.S. smartphone market, there are two dominant players; the others are smaller, emerging players. I believe any negative action by the FTC will further exacerbate this situation by eliminating these smaller players. Wireless innovation is extremely hard, time-consuming, and capital-intensive. Qualcomm invests billions of dollars in R&D every year, much of it very early, years before a market even exists, which means significant risk. For example, Qualcomm has been investing in 5G since 2014, and commercial devices will only start entering the market in 2019 and 2020. For a company like Qualcomm, the only way to recoup such large, ongoing investments is to license its technology to as many smartphone OEMs as possible. Moreover, most of these OEMs don’t have the money to do their own R&D; they rely on Qualcomm’s innovations to compete cost-effectively with the big OEMs. This creates a vibrant, highly competitive marketplace that offers consumers a wider range of choices and affordable prices, the FTC’s ultimate goal. A great example is 4G LTE, which enabled many new and very innovative smartphone OEMs to enter the market; they are growing stronger and are expected to be formidable competitors in 5G. The virtuous cycle repeats as Qualcomm reinvests large portions of its licensing revenue back into R&D to offer a continuous stream of innovations.
In the absence of an entity like Qualcomm, most OEMs would be deprived of new technologies. Only a few big OEMs would be able to invest billions in technology development, and it’s unlikely that these vertically integrated players would share much of their technology with others. Most other OEMs could not afford to invest on their own and would probably exit the market. This outcome would be the opposite of the FTC’s mission. If you don’t believe this, look at how aggressively Apple, Samsung, and Huawei have been trying to vertically integrate by either acquiring or building as much of their own technology as possible.
Beware of the consequences
Any attempt to trivialize or delegitimize Qualcomm’s IP and its role in the industry will have a long-lasting impact not only on the smartphone market but on the entire tech industry. If the FTC undermines companies’ ability to earn rewards for their investments, or worse, arbitrarily caps the value of their technology, it will discourage American innovation and severely curtail the flow of capital to those innovations. Small and medium-sized companies, the backbone of this innovation engine, will be the most affected. So, in essence, this trial may (unwittingly?) amount to a war on the American innovation engine, and a negative outcome will ultimately hurt American consumers by decimating competition and choice in the marketplace; that is the antithesis of the FTC’s very existence and charter.
Analyzing the long-term impacts of the FTC’s activist litigation
Amid the chaos of allegations, counter-allegations, scores of testimonies, rebuttals, and cross-examinations, I humbly request that Judge Koh and the FTC pause for a moment and ponder this question: “If Qualcomm loses this case, who will win?” No, it’s not the FTC; the real winner would be China, in the form of its proxy Huawei (and to a lesser extent, Apple).
In my previous article, I explained how the FTC’s activist attempt to fight Qualcomm will result in reduced competition, limited choice, and increased prices, ultimately doing great harm to consumers and the industry. This is clearly against the FTC’s sworn mission and the very reason for its existence. But the importance of this case goes beyond the FTC; it goes directly to a core purpose of the United States government itself, which is to protect the lives, assets, and interests of the citizens of this country. Today, technological advances define the future of countries, and rightly so, the U.S. government has made the protection of its intellectual property one of its main objectives. However, the FTC’s actions run summarily against that objective.
Qualcomm is a well-oiled innovation engine
As the trial progressed, a lot of interesting facts came to light. It is undeniably clear that Qualcomm has been and continues to be a well-oiled innovation engine, efficiently cranking out technologies and products. In testimony on Friday, January 25th, 2019, Christopher Johnson of Bain & Company reluctantly spilled the beans about the competitive analysis Bain did for Intel, benchmarking investments, execution, and productivity between Intel and Qualcomm, especially in the development of wireless technologies and products. Bain’s analysis showed that Qualcomm’s investment in SoCs (Systems on Chip) was comparable to Intel’s but produced three times as many products. The report also showed that Qualcomm invested much more than Intel in developing wireless technologies and modems, which are at the heart of all mobile devices and networks.
With Qualcomm’s strong performance, no wonder weaker modem chipset players couldn’t compete and quickly folded. Companies such as Broadcom (which had consolidated assets from Renesas and Beceem), ST-Ericsson, and Texas Instruments exited the business; others, such as Infineon, were bought by bigger companies like Intel. As a result, the majority of smartphone OEMs, be they newer ones such as Apple, Samsung, LG, and a whole slew of Chinese OEMs, or legacy OEMs such as Motorola, Sony, and Blackberry, ultimately ended up using Qualcomm’s chipsets. In other words, Qualcomm’s strong market position was primarily the result of its clear vision, incredibly talented engineers, and military-precision execution. However, this position didn’t give it the market power alleged by the FTC or make it immune to competition. As proven time and again, the highly competitive mobile market only rewards winners and harshly punishes those that stumble; Nokia’s spectacular fall from its peak is a great example. Specific to Qualcomm, the failure of the Snapdragon 810 chipset, which came after the blockbuster Snapdragon 800, made many OEMs quickly abandon Qualcomm and take their business elsewhere. In the fast-changing mobile industry, “market power” is a misnomer; only companies with the right foresight, investment, and execution survive and thrive.
Down payment for the next-gen technologies
When analyzing the value of cellular IP and modem chipsets, conventional wisdom might be to consider only a company’s share of contributions to the current generation and evaluate accordingly. However, many fail to understand that wireless technology is not static; it is a series of evolutions (each “G,” or generation), with multiple releases within each. For OEMs to be successful, the key is a steady stream of technologies and solutions feeding multiple generations of products. That means the price they pay for today’s technology also includes a down payment on the next generation of technologies they will need down the road. For example, when OEMs were selling 3G devices in 2006 and 2007, Qualcomm’s R&D engineers were already working on 4G, funded in large part by licensing revenue from all of those OEMs’ devices. And when 4G was growing exponentially in 2014 and 2015, Qualcomm was already heavily reinvesting in 5G. Essentially, Qualcomm has acted like an R&D design house for the entire smartphone industry ever since 2G. It is a virtuous cycle of innovation and reinvestment, one generation after another.
What happens if this cycle of innovation and re-investment is disrupted?
If Qualcomm loses this trial and its ability to recoup investments by licensing technology at market prices is severely curtailed, it will undeniably have to reduce investment in risky new technologies. Remember that 5G is still in its infancy, and the industry has a long way to go to achieve its promise of changing the world. As articulated in trial testimony, it is not just the investment that matters; Qualcomm’s vision, brain trust, and execution would also be severely hampered. Damage to Qualcomm would create a big void that no other American company may be able to fill, since any public company would face the same challenge of being unable to recoup its investments with fair returns. Not many companies in the U.S. have the expertise, and fewer still the efficient horizontal business model of Qualcomm, as made amply clear by Bain’s analysis.
China’s premier technology provider, Huawei, would be more than happy to fill this void, with tacit support from the Chinese government. Unlike publicly traded American companies, Huawei is free from worries about access to capital for investment, and it is not particularly concerned about returning a profit to investors. Remember that advanced information technology is at the top of the “Made in China 2025” goals set out by the Chinese government. Capitalizing on its current momentum, Huawei would willingly take the world’s R&D crown, and the FTC would unwittingly be handing over the tiara on a silver platter.
The irony is that other parts of the U.S. government, such as the U.S. Department of Justice, are busy pressuring other governments to keep Huawei at bay over security concerns. The DOJ has even criminally charged Huawei with IP violations, among other charges. Yet the FTC is upholding Huawei as a key, credible witness in undermining Qualcomm, the crown jewel of U.S. innovation. What could you call this travesty? The tragedy of democracy? The lethargy of bureaucracy? Whatever you call it, it is indeed a national disgrace.
It’s been more than a month since arguments rested in the FTC vs. Qualcomm case. Every passing day increases the anxiety of people on both sides of the issue. The media is rife with rumors, leaks, and loud calls for the U.S. government to intervene on national security grounds and take CFIUS-like action.
FTC vs. Qualcomm might seem like any other antitrust case, but in reality the outcome could potentially jeopardize U.S. national security. Qualcomm is the undisputed leader in technologies and R&D that power cellular systems such as 3G, 4G and now 5G. Telecommunication networks are the plumbing that connects the country, and cellular technology is its brain. Any country that wants to control its destiny should own that technology, or at the very least, have significant influence in steering the evolution of its capabilities. If the FTC case seriously damages Qualcomm, China’s Huawei will claim its place and be the global champion of cellular technology.
But, you might ask, hasn’t the government already addressed this issue by banning Huawei in the U.S.? Well, that would be akin to shutting off one faucet in a house while water is free to flow through all of the others. There is much more to cellular technology than just the network infrastructure. Let me explain.
What it takes to be a leader in cellular technology
To be a leader in cellular technology, one needs deep, end-to-end system expertise: years of experience designing new wireless systems, standardizing them, building and enabling a large ecosystem to commercialize them, and continuously evolving them after launch. Very few companies possess such capabilities; most specialize in one or a few specific areas. For example, companies like Apple focus on devices, while others like Ericsson and Nokia focus on network infrastructure.
The leading companies with complete systems expertise are Qualcomm and Huawei. (Of course, there is also Samsung; I will discuss that in a later article.) Let’s take a closer look at these leaders, starting with Huawei. The rise of Huawei is worthy of a business-school case study. It has meticulously built its businesses, allegedly with strong financial and bureaucratic support from the Chinese government. Huawei realized the importance of cellular technology and standardization and started very early, in the 2G days. It initially focused on infrastructure products, then strategically expanded into smartphones, and subsequently developed its own platforms for the modem, application processor, and neural processor, and reportedly even its own operating system, among other key technologies. Huawei owns virtually all the key technologies in the cellular value chain and is also a force to be reckoned with in 5G standardization. No wonder Huawei is considered the crown jewel and role model for the Chinese government’s global technology ambitions.
On the other side is Qualcomm, which to uninformed eyes might look like any other chipset supplier that could easily be dispensed with and replaced. Upon closer inspection, however, one realizes that it is a systems engineering company with deep, unmatched end-to-end wireless competence. Qualcomm gained valuable experience leading the successful commercialization of 2G, 3G, and 4G, and the intensity with which it almost single-handedly drove the acceleration of 5G has clearly shown its capabilities. For 5G, Qualcomm co-developed the full system architecture and design from the ground up, including fundamental technologies and algorithms. Its R&D teams also built complete prototype systems to develop, test, and perfect the technologies the company contributed to 3GPP to define and standardize 5G. Because of its unwavering focus on engineering and technology rather than glitzy consumer marketing and branding, Qualcomm isn’t a household name, unlike many of its competitors.
Some might then ask: why only Qualcomm? Why can’t other U.S. giants that are much larger and have greater financial wherewithal take on Huawei? When it comes to the mobile industry, other than Qualcomm, only two companies come close: Apple and Intel. Let’s look at them more closely.
Although Apple is the profit leader in smartphones, reportedly raking in almost 80% of all mobile industry profits, it is pretty thin on the cellular technology front. Its strategy has been to optimize existing technologies and bring them into its vertically integrated devices and closed ecosystem. Apple is more focused on developing proprietary technologies that improve the user experience and increase the appeal of its devices. Despite being a dominant smartphone player since the 3G days, Apple hasn’t brought any groundbreaking innovations to the cellular ecosystem or cellular standards, and it is never on the leading edge of cellular technology adoption either. Specifically with 5G, it is more than a year behind almost every other major smartphone OEM, including smaller players such as Xiaomi, Vivo, and Oppo, and far behind rivals Samsung and Huawei. Short of using its bounty of more than $200 billion to buy another wireless technology leader (which could run into serious antitrust scrutiny), Apple would find it very hard, if not impossible, to compete with Huawei in the 5G+ technology race. Even if it developed the necessary competence, Apple’s vertical-integration strategy would likely lead it to keep all IP to itself rather than license it to others. I really don’t see the company making a U-turn and becoming the cellular technology torchbearer for the country.
Then there’s Intel, which has ruled the PC industry for many decades. Perhaps because of its apathy toward the cellular industry in its early days (Intel sold its division that built processors for early smartphones to Marvell), the company has never become a force to be reckoned with in wireless. Intel’s heavy bet on WiMAX didn’t pan out, instead putting the company years behind in LTE. Even after buying Infineon, a strong modem player of yesteryear, the company still seems to be struggling in wireless. Intel did score a major victory last year by claiming 100% of iPhone modem share, albeit while only offering the performance of Qualcomm’s previous generation of modems. To date, Intel’s 5G story is not promising either; it seems to be almost a year and two generations behind its peers. Apple’s recent aggressive stance in growing its own modem competence doesn’t bode well for Intel either. Also, I have serious doubts about Intel’s end-to-end system capabilities. As a result, I believe Intel is in no position to compete with Huawei.
The bottom line is, Qualcomm is the only safe bet for the U.S. to maintain its edge in 5G and beyond.
What happens if Qualcomm is weakened by an adverse FTC trial ruling?
Qualcomm’s (and the U.S.’s) fate hangs in the balance, pending the outcome of the FTC trial. One might wonder what would happen if Qualcomm were to lose this case. Qualcomm’s licensing business, which generates the bulk (two-thirds) of its profits, might be seriously impacted. Without going into hypothetical scenarios, one thing is certain: the company’s ability to invest in fundamental cellular technology development would be severely curtailed. Its virtuous cycle of developing technology and plowing profits back into future R&D would come to a screeching halt. U.S. dominance of cellular technology would likely decline rapidly and eventually end. With a strong market presence and the Chinese government’s backing, Huawei would be virtually unstoppable and would exert significant influence on the definition of future cellular technologies… and it’s doubtful that it would have the U.S.’s interests and needs at heart.
Most affected would be smaller OEMs. Without substantial resources or access to cutting-edge technology IP and advanced, high-performance platforms from Qualcomm, they would not be able to compete in the premium tier against vertical players like Apple, Huawei, and Samsung. The premium smartphone market would become an even greater duopoly in the U.S. (Apple and Samsung) and an oligopoly outside it (those two plus Huawei). It’s no wonder that both Apple and Huawei are strong supporters of (and collaborators with) the FTC’s case.
In the end, the real losers will be consumers, who will have no choice but to bend to the whims of these increasingly powerful vertical players… vendors that have already shown a strong affinity for increasing smartphone prices.
So, for the U.S. government, the time to act is now. I hope that saner instincts will prevail, resulting in actions that will protect, preserve, and propel U.S. technology, innovation, and the country’s vital communication infrastructure.
While the final decision in the FTC vs. Qualcomm case has been pending for the last two months, new developments have put the very premise of the FTC’s case in question. The details revealed during the Apple vs. Qualcomm trial and the ensuing settlement are making the pillars of the FTC case crumble. Everybody is eagerly awaiting the FTC’s next move and wondering how all of this will affect Judge Koh’s final decision, if she eventually has to give one.
One might ask, “What is the relevance of the Apple vs. Qualcomm litigation to the FTC case?” Well, Apple was one of the key witnesses and a major force behind the FTC case. The underlying principles, claims, and counterclaims are the same in the two cases, so much so that Apple’s main arguments during its case with Qualcomm were almost verbatim those put forward in the FTC trial. The two cases are undeniably intertwined, and the result of one will affect the other.
FTC’s claims are in serious jeopardy
At a very high level, the majority of the FTC’s allegations can be combined into three claims:
- Qualcomm’s licensing practices are not compliant with FRAND (Fair, Reasonable and Non-Discriminatory) terms, and that has harmed the cellular industry, including Apple
- Licensing at the device level is not justified
- Qualcomm’s alleged market power, combined with its licensing policies, has harmed competitors such as Intel
Let’s evaluate the merits of each of these claims, especially in the wake of the settlement and the new information it has brought to light.
Apple was one of the strongest forces behind the FTC’s case against Qualcomm. The documents revealed during the Apple vs. Qualcomm case show that the ultimate motivation behind Apple’s litigation (including the FTC case) was to reduce its royalty costs; there was no alleged harm. Even during the trial, the FTC failed to produce any concrete evidence of harm to the industry caused by Qualcomm’s licensing practices. Now, Apple signing a long-term licensing contract as part of the settlement clearly shows that Qualcomm’s licensing practices are indeed fair and market-driven. Furthermore, the more than one hundred other licensing contracts Qualcomm has signed with OEMs, including majors such as Samsung and LG, prove this point as well. All of this debunks the FTC’s first claim.
As became very apparent during the trial, licensing at the device level is a decades-old industry norm. All Intellectual Property (IP) holders practice it because it is the most efficient and practical way to capture the value of IP. Stipulating a cap on the maximum device price used for license-fee calculations makes the practice even more meaningful and fair. As disclosed during the trial, Qualcomm’s licensing fees are up to 5 percent of the wholesale price of the phone, with a device price cap of $400. This license covers a portfolio of more than 130,000 Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs). For reference, in another related case between Apple and Qualcomm in San Diego, the jury awarded Qualcomm $1.41 per device for just three non-SEPs. That is a far cry from the $7.50 per iPhone that Apple was paying before the dispute started. So again, the FTC’s second claim has no merit. On a side note, if you would like to know more about patents and licensing, check out my explainer articles here: Part-1 and Part-2.
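To make the cap arithmetic concrete, here is a minimal sketch of the fee structure disclosed at trial. The function name and example prices are my own illustration, and the 5%/$400 figures describe the ceiling, not any OEM’s negotiated rate.

```python
def royalty(wholesale_price, rate=0.05, cap=400):
    """Illustrative ceiling: the rate applies to the wholesale price, capped at $400."""
    return min(wholesale_price, cap) * rate

print(royalty(1000))  # 20.0 -> a $1,000 flagship pays 5% of the capped $400, not of $1,000
print(royalty(300))   # 15.0 -> a $300 phone is below the cap, so 5% of $300
```

The cap is the key point: no matter how expensive the device, the per-unit fee ceiling under this structure is $20, and negotiated rates (like the $7.50 per iPhone cited above) can sit well below it.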
There was no dearth of drama on the day Apple and Qualcomm settled the dispute. The settlement news broke while the opening statements were still being presented in court. Qualcomm's stock shot up by record levels immediately after the settlement. Mere hours after the settlement news, Intel announced its decision to exit the 5G smartphone modem business. Some might think that Intel's decision to quit proves the FTC's claim of harm to competitors. However, closer scrutiny reveals a different story.
By Intel's own admission, the reason for its decision was Apple signing a multiyear modem supply deal with Qualcomm as part of the settlement. As publicly discussed in many forums, the most likely reason for Apple to ditch Intel in favor of Qualcomm was the realization that Intel wouldn't be able to meet Apple's hefty 5G modem needs. This is a major miss by Intel, considering that it is currently the sole modem supplier for Apple's latest iPhone. Its inability to deliver the right modem solution for such a large and almost guaranteed opportunity points to a profound and fundamental flaw in Intel's operations and execution strategy. By all counts, 5G was a level playing field for Intel and everybody else in the race, including Qualcomm, and Intel failed to deliver. It is reasonable to argue that the same was true with 4G LTE. That means whatever harm the FTC claimed Intel suffered in 4G LTE was because of Intel's own inability to deliver, not because of Qualcomm's alleged market power or licensing policies. This shows that the FTC's third claim is completely flawed as well.
Who stands to benefit from the FTC trial now?
With Apple and Qualcomm settling, and Intel exiting the 5G smartphone modem market and mulling strategic options for its modem business, the question arises, "Who stands to benefit now from the continuation of the FTC case?" The surprising answer is China's Huawei, which was the FTC's third collaborator along with Apple and Intel. It is an unfortunate and disgraceful situation that an arm of the US government is directly helping a foreign entity against a US company heralded as the country's 5G leader. This is even more ironic and embarrassing considering that the US government has virtually banned Huawei for national security reasons!
What could be the possible outcome?
With all the major claims of the FTC discredited, its case is in serious jeopardy. As Judge Koh noted during the closing stages of the trial, this case is very complex, with a huge amount of evidence to examine. The hurried summary judgment that Judge Koh gave in the early part of the trial, the radical remedy that the FTC is seeking, and the recent developments complicate the case even further.
The FTC didn't make a strong case to begin with, and it looks even weaker now. That means it is almost impossible for Judge Koh to give a judgment that would permanently alter a cellular IP licensing regimen practiced for decades. In my view, the only sensible option for the FTC now is to settle with Qualcomm and save face, especially considering that anything else will help Huawei. I am sure Judge Koh would be happy with that outcome as well. Any other decision will surely be challenged in the appellate court and most likely be overturned.
The telecom industry is still digesting the surprising and far-reaching decision by Judge Koh of the U.S. Northern California District Court. The expansive court order is as hard to digest as it is to comprehend. If you thoroughly read it (yes, I have, all 233 pages!), it seems that Judge Koh had made up her mind long before the trial and hand-picked specific points from testimonies, evidence, and circumstances to suit her narrative. However, the battle rages on: Qualcomm is appealing the decision at the U.S. Ninth Circuit Court of Appeals. Meanwhile, the company is requesting a stay from Judge Koh until the appeal is heard. I think this is a mere formality, as I expect Judge Koh to reject the stay request. If and when that happens, Qualcomm will request a stay from the Ninth Circuit Court. While all of these court proceedings play out over the next few months, if not years, it is important to consider the havoc this decision and the possible denial of a stay might cause in the market. It is even more crucial because we are at a critical juncture in the global 5G race, and this decision will affect how different companies, and perhaps more importantly, different countries progress.
In my previous article, I briefly touched upon the question of who might benefit from an adverse decision against Qualcomm. Since that fear has become a reality, a more detailed discussion and evaluation of some what-if scenarios is in order.
At the very outset, there is no question that Huawei and China are the biggest beneficiaries. With this legal quagmire, the attention of Qualcomm's executives and many of its engineers may be divided between trying to prevail in the legal fight and making great technology. This distraction gives Huawei (and, in turn, China) a leg up, allowing it to strengthen its position in 5G. When you dig a little deeper, you realize that if Qualcomm's request for a stay is not granted, the situation gets even more dire.
What happens if the stay request is denied?
As I discussed in my previous article, licensing revenues are the lifeblood of Qualcomm's virtuous cycle of technology development, commercialization, and monetization. Judge Koh's order threw a monkey wrench into that cycle, exposing almost all of Qualcomm's licensing contracts to renegotiation risk. Based on news reports, it seems that the recent deals with Apple and Samsung could be safe for some time; but I can't imagine either of those behemoths not trying to use the court's decision to eke out more concessions from Qualcomm. If you remember, during a separate trial, Qualcomm produced documentary evidence showing how Apple intentionally tried to harm Qualcomm's licensing business. The bottom line is that every one of Qualcomm's licensing contracts could be up for grabs. The company's much-publicized, recent licensing spat with LG offers a glimpse of how convoluted and long these renegotiations could get.
Let's look at the biggest block of the licensing lot: the Chinese OEMs, which bring in a large portion of Qualcomm's licensing revenue. Just like LG, all of these OEMs buy chipsets from Qualcomm. That means, just as LG is trying to do, they might also ask for chipset-based licensing. But most of them, if not all, license Qualcomm's full portfolio, including cellular SEPs (Standard Essential Patents), non-cellular SEPs (e.g., Wi-Fi and Bluetooth), and non-SEPs (NEPs). However, the court order only applies to cellular SEPs. Given Judge Koh's ruling, how would you negotiate a licensing deal that spans all these different kinds of patents? It would seem that the only option would be for Qualcomm and its licensees to examine more than 130,000 patents, one by one, and license them on an a la carte basis. As one could imagine, that would be a herculean task. Taking this insanity further, many of these are system-level patents, which means they may cover more than just the modem or any single chip, and span different parts of the system and software. For example, consider MIMO, an important feature of 4G and 5G: the technology covers not just the modem but also RFICs and antennas, phones, and network equipment. Would patents related to MIMO be licensed based on the pricing of modems, RFICs, antennas, or base stations? Also, different vendors produce these components. So, would all those vendors have to get licenses for cellular SEPs? So many complex questions with few clear answers!
If your head is not yet spinning with the complexity, consider this absurdity: Qualcomm would still be free to license all patents other than cellular SEPs at the device level. This means there might be cases wherein the prices of non-SEPs would be higher than those of SEPs, which at some level defies logic! The point is, licensing could get so complex that it might take years to agree on how to structure meaningful contracts. As a side note, look for my next article on the range of absurdities this court order is causing. Also, if you would like to know more about cellular licensing, please read my articles here, here and here.
The real threat of 5G investments getting strangled
Amid the uncertainty of lengthy negotiations and the complexity of restructuring contracts, it is highly likely that many OEMs would be tempted to stop paying royalties. This would be similar to what Huawei is doing during its negotiations with Qualcomm now, and to what Apple did until it settled with Qualcomm back in March 2019. Such a large-scale disruption could mean that the revenue stream that feeds Qualcomm's R&D engine would go dry. The direct casualty of such an outcome would be the development of 5G, and America's leadership in it. As you might know, we are only in the early stages of 5G. A lot of what 5G promises is still under development. All of that requires billions of dollars of investment and multiple years of sustained development, with a long lead time for revenue generation. Any interruption to Qualcomm's licensing revenue could directly impact its ability to create those inventions and, in turn, the development of 5G. The world would be at the mercy of China for the future of 5G and for the technologies it is expected to deliver, such as Industry 4.0.
Handing a powerful lever to China in the trade war
The fact that a large portion of Qualcomm's licensing revenue comes from Chinese OEMs has huge significance while the United States is in a bitter trade war with China. As is evident from recent developments, both countries will use whatever leverage they have to get the upper hand. In such a climate, this considerable revenue stream of a strategic American company will surely be weaponized and used as a bargaining chip by China in the broader trade negotiations. It is no secret that the Chinese government wields considerable influence over these OEMs. If you think about it, this is a potent tool, not only for trade negotiations but also for severely hurting America's prospects for 5G leadership.
Whose interest is the FTC fighting for?
It is abundantly clear that the real and biggest beneficiaries of the FTC's and Judge Koh's actions are neither the American people nor American companies, but ironically, China and Chinese companies. And this, to the detriment of American 5G leadership and at the expense of an American technology company that has been hailed as a 5G leader by the U.S. government itself. This is exactly why the U.S. Department of Justice voluntarily tried to impress upon Judge Koh that she be cognizant of the implications of her decision for America's national interests.
On a closing note, I would point those who value free markets and fair competition to the recently finalized 5G infrastructure contracts in China. Huawei won the lion's share of these contracts, clearly showing how the Chinese government protects its companies. Who is there to protect the American companies? Far from protecting its own national interests, a U.S. government agency is effectively fighting tooth and nail to hurt a legitimate American company and help the Chinese. What an irony!
Last week's remarkable decision by a three-judge panel of the United States Court of Appeals for the Ninth Circuit (appellate court) finally brings some common sense to the FTC's bizarre antitrust case against Qualcomm. The appellate court granted Qualcomm's request to stay the ruling of the United States District Court for the Northern District of California (lower court), which had far-reaching implications for the entire U.S. patent regimen.
Side note: If you are new to the subject and would like to understand the background, please read my previous articles here, here, here, here and here.
What did the appellate court say?
The court order must have sounded like music to Qualcomm's ears. Even Qualcomm could not have written it better! Don't be confused by the title of the court order, which says "partial stay"; Qualcomm actually got all of what it requested, and then some. The tone, the language, the arguments, the selection of phrases and words, the precedents cited, the direct denunciation of the lower court's decision: everything screams a thumping Qualcomm victory.
First, it says that the application of the Sherman Act (antitrust law) to the case is not accurate, as private businesses have discretion over whom they deal with. That means Qualcomm is free to license its Standard Essential Patents (SEPs) to whomever it chooses, effectively negating the lower court's order mandating the licensing of SEPs to rival chipmakers on an exhaustive basis.
Second, it acknowledges that there is a stark difference of opinion between the two governmental agencies tasked with enforcing antitrust laws: the FTC and the Department of Justice (DOJ). This is in complete contrast to the lower court's abject disregard for DOJ's request to conduct additional briefings before imposing remedies, and to be mindful of the effects of broad and far-reaching remedies that alter market dynamics and jeopardize national security.
Third, it clearly states that the appellate court is satisfied with Qualcomm's argument that its practice of licensing only to device OEMs and charging royalties at the device level doesn't violate any antitrust laws. This again is the opposite of one of the key rulings of the lower court. The appellate court even mentions the extraordinary step taken by sitting FTC commissioner Maureen K. Ohlhausen in publicly expressing her dissent from the theory urged in the complaint and adopted by the lower court.
Fourth, it says that it agrees with Qualcomm's strong argument that implementing the lower court ruling before the appeal is decided would do irreparable harm to its business. This was one of the easiest things to grasp for anybody with even a hint of knowledge of the licensing and wireless business. The lower court's complete disregard for such logical reasoning was appalling to keen observers of this case like me.
Finally, the appellate court concludes that the difference of opinion between the FTC and all the other relevant government agencies, including DOJ, the Department of Defense, and the Department of Energy, warrants that the stay be granted. It further points out that these government agencies have opined that the lower court's adverse action against Qualcomm threatens national security and "has the effect of harming rather than benefiting consumers."
If you feel like you have heard these arguments before, you are right. These are the same arguments I put forward in my previous articles here, here, here, here and here.
What’s next?
The biggest kicker in the appellate court's order is its ridicule of the lower court's order as ".. a trailblazing application of the antitrust laws or instead an improper excursion beyond the outer limits of the Sherman Act.."
To be sure, lower courts are supposed to apply the law based on precedent, not be trailblazers!
Further, the appeal hearing is scheduled for Jan 2020, much quicker than the usual timeline. The tone of the appellate court order, and the decisive and unambiguous way in which the panel struck down all the major aspects of the lower court's assertions, strongly suggest that an overturn of its ruling is imminent. The urgency in scheduling the appeal hearing also indicates the importance the appellate court attaches to this case. Qualcomm filed its long opening brief with the court on Aug 24th, 2019.
Final thoughts
This appellate court decision was a long time coming. Actually, the whole trial was a series of bizarre turns of events: from the judge arbitrarily limiting the evidence period to March 2018 and excluding pertinent evidence thereafter, to the strange explanation for summarily discounting the defendant's in-court live testimony (the judge felt the witnesses looked "prepared"), to using an extremely narrowly defined potential violation to justify an extremely broad and industry-altering remedy. But fortunately, saner instincts have finally prevailed, and justice is being served the right way, albeit delayed. Now all eyes are on the Jan 2020 hearing.
Qualcomm got a reprieve when the United States Court of Appeals for the Ninth Circuit stayed the decision of the United States District Court for the Northern District of California (DC) in its antitrust case. Immediately after the stay, Qualcomm filed its opening brief (175 pages long), which was followed by a flurry of supporting Amicus Briefs (each more than 40 pages) from different companies, the U.S. government, a retired circuit court judge, and groups of experts. All of them criticize DC's ruling; two chose to remain neutral, while all the others were strongly in favor of Qualcomm.
<<Side note: If you would like to know more about Ninth Circuit court ruling, and the complete FTC vs. Qualcomm saga, check out this article series.>>
Principal arguments
The briefs supporting Qualcomm strongly condemn DC’s ruling. Their arguments can be summed up into three major themes:
- DC either misunderstood or misapplied the US antitrust laws, as well as the precedents. The proponents claim that Qualcomm's licensing approach, its "No license No chips" policy, and its alleged "higher licensing prices" don't violate the Sherman Act. Also, Qualcomm's decision to license only to device OEMs is not against the Fair, Reasonable, and Non-Discriminatory (FRAND) principles of Standards Development Organizations (SDOs). Additionally, they claim that neither the FTC nor the court showed apparent consumer harm.
- The remedies imposed by DC are very broad and far-reaching. The ruling applies to every aspect of Qualcomm's licensing business, including all of its global contracts; in many cases, those are even outside the purview of the FTC or DC. For example, contracts with Chinese OEMs for devices to be sold only in China are beyond the FTC's authority.
- The ruling creates widespread disruption to a decades-old licensing regimen that has proven to encourage innovation, be efficient, and be easy to implement. If licensing based on the Smallest Salable Patent-Practicing Unit (SSPPU) becomes mandatory, that will put almost every existing licensing deal that doesn't use SSPPU up for renegotiation. The proponents claim that because many patents span multiple functional units, DC's ruling will create an unfathomable mess of who licenses whom, at what rate, and how.
The focus of each Amicus Brief
All the briefs came with a heavy dose of relevant precedents. Since the supporters are from different fields, each of them stressed different parts of the argument, as highlighted in the sections below:
U.S. Department of Justice (DoJ):
One of DoJ's main points is that an alleged "unreasonably high royalty" is not anti-competitive; on the contrary, it quotes precedent that high royalties enable "risk-taking that produces innovation and economic growth."
DoJ also emphasizes that a Sherman Act violation requires "harm to competition" and not just "harm to competitors," as alleged by DC. DoJ ridicules DC's "misunderstanding" of antitrust law, and also reminds it of CFIUS's action to block the attempted takeover of Qualcomm for national security reasons.
Judge Paul R. Michel (Ret.) – Served on Circuit Court for more than 20 years
Judge Michel states that SSPPU is a mere tool to avoid jury confusion. He argues that since this was a bench trial, and because of the sheer number of complex patents (~140,000) covering multiple functional units, the use of SSPPU does not make any sense.
The judge also points to the disastrous outcomes when the SSPPU was mandatorily applied to IEEE standards 802.11ah and ai, which were ultimately rejected by ANSI (American National Standards Institute).
A group of 20 antitrust and patent law professors and experts
These experts, including the retired chief judge of the federal circuit court of appeals (Randall R. Rader) who came up with the SSPPU concept, point out that antitrust law needs actual proof of harm (e.g., economic analysis), not just "Per Se" or "theory-driven" arguments. They condemn DC for using the discredited theory of Prof. Shapiro (without using his name) and simplistic documentary evidence, such as emails, instead of concrete economic evidence to establish anti-competitive conduct.
They draw an interesting parallel between the decade-long antitrust crusade against IBM, launched in the closing days of the Johnson administration, and the case against Qualcomm, filed during the last days of the Obama administration. They point out that DoJ learned its lesson about the ill effects of antitrust overreach, which pushed IBM, an American technology jewel, to the brink of bankruptcy, and warn against repeating the mistake.
International Center for Law & Economics (ICLE)
ICLE, a group that includes many antitrust and economics experts, opines that this "case is a prime—and potentially disastrous—example of how the unwarranted reliance on inadequate inferences of anticompetitive effect lead to judicial outcomes utterly at odds with Supreme Court precedent."
Further, ICLE quotes a relevant prior judgment that seems to uproot the crux of DC's argument: "The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system."
Cause of Action Institute (CoA)
CoA, a non-partisan government oversight group, comes down rather heavily on both DC and FTC. It reiterates the words of a sitting FTC commissioner who called this trial “a product of judicial alchemy, which is both bad law and bad public policy.”
Further, CoA asserts that the FTC exceeded its statutory authority in at least four ways, including that DC's "injunction violates due process and is unenforceable for vagueness."
Alliance of U.S. Startups & Inventors for Jobs (USIJ)
USIJ states that the cellular industry is one of the most competitive, dynamic, and thriving markets, and that there is no need for regulatory or judicial interference. Instead, it suggests that FRAND complaints and other concerns can be better resolved using contract and patent law rather than antitrust law; the latter, it says, would be akin to using a hammer instead of a scalpel.
It warns that DC's ruling will discourage companies from participating in standardization, which would itself be anticompetitive and would harm consumers.
InterDigital
InterDigital emphasizes that antitrust law shouldn't trump innovation, and it points out how the law is being misused to make inventors "accept sub-FRAND royalties." It also cautions that antitrust overreach will weaken innovative US companies and allow their leadership to be replaced by foreign companies supported by their governments, which may not have the US's best interests at heart.
InterDigital doesn’t specifically mention whether it supports Qualcomm or not.
Dolby
Dolby comes out strongly in favor of preserving patent holders' flexibility in deciding where in the value chain they license. It insists that this allows innovators to maximize returns on their huge investments and fairly compensates them for the risks.
Dolby faults DC for misinterpreting the FRAND commitments to SDOs and notes that there is no mandatory requirement to license at any specific level or to any specific providers. It also highlights the confusion and havoc that would be created if the well-established end-product-based licensing, practiced across many industries, were altered in any way.
Dolby only asks for the reversal of DC's summary judgment instructing Qualcomm to license to rival chipmakers.
Nokia
Nokia points out the difficulties in licensing at the component level, how patents often cover more than a single functional unit, and why SSPPU is not applicable at all. While highlighting these inconsistencies in DC's decision, it remains neutral.
In closing
There is a striking commonality between what Qualcomm claimed in its brief and all the Amicus Briefs coming from this diverse set of experts and, in some cases, competitors such as InterDigital. That suggests there indeed is a strong case to be made against DC's ruling. As I pointed out in my earlier article, the appellate court seems to agree with many of these assertions, as can be gleaned from the stay ruling. I would be highly surprised if the appellate court doesn't overturn many of the draconian rulings of DC.
Also, in response to Qualcomm's brief, the FTC is expected to file its own sometime in October or November, and any Amicus Briefs supporting it will follow soon after. Come back to my column here for the latest developments and what they mean.
The stage is set for the Feb 13th, 2020, hearing of the FTC vs. Qualcomm antitrust case at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). In preparation, the FTC, Qualcomm, and many interested parties have filed briefs for and against the decision of the United States District Court for the Northern District of California (lower court).
In the briefs, the FTC's subtle change in tactics caught my eye. It seems to have changed its "hero" argument: it is now making Qualcomm's alleged breach of FRAND (Fair Reasonable and Non-Discriminatory) commitments to Standard Setting Organizations (SSOs) its main argument, while treading lightly on its earlier key, albeit discredited, "surcharge on competitors" theory. Is this a sign of the FTC losing confidence in its case? Its FRAND breach argument also seems to be on shaky ground.
<<Side Note: If you would like to understand the history of this case, please refer to my earlier articles on the subject>>
I spent many hours meticulously reading through all the briefs (~1500 pages). They are complex, with lots of legal jargon, illustrations, and citations. Here is a high-level summary of the arguments and my opinions on their effectiveness.
The hypothetical “surcharge on competitors” argument
The FTC and its supporters are still relying on the theory put forward by Prof. Carl Shapiro, and they have provided torturous examples and illustrations in support. However, this theory was rejected by the US Court of Appeals for the District of Columbia Circuit in a separate case, United States vs. AT&T. The court's rejection, as stated, was based on the evidence of actual market performance. Interestingly, the two cases have a lot of similarities. Just like in the AT&T case, the FTC's arguments are based only on theory, without any empirical study of actual market conditions. Moreover, developments in the market completely debunk Prof. Shapiro's theory. Unfortunately, those developments could not be included in the trial as evidence, because they happened outside the discovery period.
According to the theory, Qualcomm allegedly abused its monopoly power to create an imaginary surcharge on competitors, making their chipsets more expensive. In reality, around 2016, Apple, which had exclusively been using Qualcomm's chipsets, also started using Intel's chipsets. This fact virtually nullifies the monopoly power allegation. To a large extent, it also disproves the claim that the alleged surcharge was disincentivizing competitors. Alas! None of this mattered in the trial because of a stringent discovery timeline.
The FTC claims that this imaginary surcharge reduced competitors' profits and hampered their investment in R&D. That seems like a ridiculous argument when you consider that those competitors are behemoths like Intel, and the OEMs are giants like Apple. Looking at all these contradictions, it is clear why the FTC is not pushing this argument as hard as it did in the lower court.
Is “harm to competitors” the same as “harm to the competitive process?”
To claim antitrust law violations, prosecutors must prove harm to the competitive process. The FTC argues that Intel being late with CDMA and LTE chipsets, and players such as Broadcom and ST Ericsson exiting the market, prove harm to competition. Many experts, including the US Department of Justice (DoJ), counter that such instances, as well as companies making less profit, show harm to competitors, but not necessarily to the competitive process.
During the trial in the lower court, ample evidence was presented to explain the reasons behind the problems competitors faced, none of them instigated by Qualcomm. For example, documents presented by Intel's strategy consultant Bain and Company attributed Intel's delay to faulty execution; an executive from ST Ericsson opined that they couldn't execute fast enough to keep up with Qualcomm and rapidly lost market share, which resulted in their exit.
The reasons for competitors not faring well in CDMA and being late in LTE were pretty clear to keen industry observers like me. Regarding CDMA, not many chipset vendors were interested in that market, as they thought the opportunity was small and fast diminishing. There were only a couple of large CDMA operators (circa 2006), and with LTE on the horizon, vendors thought CDMA would quickly disappear. Hence they never invested in it. Much to their chagrin, CDMA thrived for many years, allowing Qualcomm to enjoy a monopoly. Ultimately, Intel acquired a small vendor, Via Telecom, in 2015 to get CDMA expertise. On the LTE front, nobody foresaw the exponential growth of LTE smartphones. Qualcomm, because of its early investment and cellular standards leadership in LTE, surged ahead, leaving others in perpetual catch-up mode. Even after the LTE market stabilized, Qualcomm's chipsets had superior performance.
Alleged practice of “license for chips” policy
The FTC claims that it has factually proven Qualcomm's alleged "license for chips" policy, under which Qualcomm would only sell its highly coveted chips if the OEMs signed a license agreement. Qualcomm disagrees. In my view, the FTC's evidence is pretty scant and unconvincing. It includes a few emails with some text that alludes to such an intention, and in many of these emails the main topic of discussion seems to be something unrelated. There were a couple of testimonies from Qualcomm's OEM customers mentioning how they "felt" the overhang of this policy during negotiations, but they didn't have any tangible evidence. There was only one concrete instance, a mail with a veiled threat, and the evidence presented in response showed that Qualcomm's top management swiftly dealt with it and condemned any such practice by its lower ranks.
Another of the FTC's claims concerns an agreement between Qualcomm and Apple, through which Qualcomm paid Apple for a commitment to use its chipsets in a majority of Apple's devices. The FTC alleges that this amounts to Qualcomm indirectly subsidizing licensing fees, which violates antitrust law; this also feeds the imaginary surcharge-on-competitors argument. Qualcomm counters that, as stated in the contract, the payment was to compensate Apple for the expenses it would incur in modifying its designs to incorporate Qualcomm chipsets, and was a traditional volume discount. When the contract was signed, Apple was already the market leader with multiple successful iPhone models and was using a different vendor's chipset. That would indicate Qualcomm didn't possess any monopoly power over Apple. The contract and the payment were also revocable, and Apple ultimately revoked them. So, it is questionable whether the payment can be treated as a subsidy.
Is FRAND commitment “duty to deal?”
Now to the new "hero" argument. The FTC claims that Qualcomm's FRAND commitments to the US-based SSOs bind it to license its Standard Essential Patents (SEPs) to rival chip vendors (aka a duty to deal). The SSOs in question are ATIS (Alliance for Telecommunications Industry Solutions) and TIA (Telecom Industry Association). The argument is that Qualcomm's decision not to license to rival chipmakers is a violation of antitrust law. Many of the third parties on the FTC's side overwhelmingly support this argument as well, for obvious reasons. On the surface, this seems like a simple and compelling argument. But it has multiple facets.
<<Side Note: If you would like to understand SEP and the patents process, refer to this article series>>
First, do these commitments mean holders have to license the patents, or is it enough to provide access to them? Second, does a FRAND violation, if proven, amount to an antitrust violation, which is usually a much higher bar? Third, and more interesting: are the patents practiced by the chipsets or by the end devices (e.g., smartphones)? If the latter, then licensing (and any violation) occurs only at the device level, so there is no real need to license to chipset vendors. Fourth, consider the policies and practices of the biggest SSO, ETSI (European Telecommunications Standards Institute), which are considered the gold standard for SSOs. Interestingly, in its decades of history, ETSI has never compelled its members to license to rival chipset vendors or at the chip/component level. Many of the current SEP holders, such as Nokia, Ericsson, and others, strongly supported this approach during the trial. I have merely scratched the surface of this argument. Since it is now the FTC's main argument, it needs close scrutiny, which I will provide in my next article.
If you have been following this case and feel that you have heard these arguments before, you are right! Both sides made these arguments in the lower court and are still sticking to them, except for the FTC's subtle change. It will be interesting to see how the Ninth Circuit weighs them. I will be in court to witness and report it. Make sure to follow my updates on Twitter @MyTechMusings.
As promised in my previous article, here is a detailed discussion of the FTC's FRAND (Fair Reasonable And Non-Discriminatory) argument in its antitrust case against Qualcomm. The FTC argues that Qualcomm's agreement to the FRAND requirements of Standards Setting Organizations (SSOs) binds it to license its patents to all applicants, and that Qualcomm declining to license its Standard Essential Patents (SEPs) to rival chipset vendors therefore amounts to an antitrust violation. The FRAND requirements are more nuanced than they appear to an untrained eye. I will dig deeper, try to decipher the arguments, and examine the industry's practices over more than two decades.
<<Side Note: If you would like to know the full history of this case, please refer to my article series. >>
What does FRAND commitment to SSOs mean?
The SSOs in question here are TIA (Telecommunications Industry Association), which developed the CDMA standards, and ATIS (The Alliance for Telecommunications Industry Solutions), which developed the LTE standards. Both organizations require their members to sign an IPR policy document, which includes the FRAND requirements.
TIA has a 24-page IPR Policy document. The most relevant portions to this case are on pages 8 and 9:
(2) (b) A license under any Essential Patent(s), the license rights which are held by the undersigned Patent Holder, will be made available to all applicants under terms and conditions that are reasonable and non-discriminatory, which may include monetary compensation, and only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard
The first part of this section is pretty straightforward, but the closing clause about "the practice of ... the Standard" is what is at issue here. In layman's terms, it means the patent holder agrees to license for the practice of the standard; in other words, to license the applicants whose products practice the standard. Qualcomm argues that devices, not chipsets, practice the standards. It points to the actual language of the standards as evidence. It is customary for standards to state, "UE (User Equipment, aka device) shall do this," or "Base station shall do that," and so on. The standards never state, "Chipset shall do this or that." Considering that, Qualcomm argues, it is not required to license SEPs to chipset vendors, but only to device vendors. To that effect, it also points out that it has never sued any chipset vendor for patent infringement.
Now, let's look at the ATIS IPR policy, which is governed by the Patent Policy adopted by ATIS and set forth in the "Operating Procedures for ATIS Forums and Committees," a 26-page document. The most relevant portions are on pages 10 and 11:
“…Statement from patent holder
Prior to approval of such a proposed ANS, ATIS shall receive from the identified party or a party authorized to make assurances on its behalf, in written or electronic form (b) assurance that a license to such essential patent claim(s) will be made available to applicants desiring to utilize the license for the purpose of implementing the standard. (i) under reasonable terms and conditions that are demonstrably free of any unfair discrimination…"
Again, pointing to the key phrase "implementing the standard," Qualcomm argues that chipsets don't implement the standard; the devices do. So, there is no need for it to license to chipset vendors!
Is a violation of an SSO commitment a violation of US antitrust law?
Even if you grant that the SSO IPR policies were violated, the question becomes, "Does that amount to a violation of US antitrust law?" One argument is that the alleged FRAND violation is a commercial matter and can easily be dealt with through contract and patent law, instead of policy tools such as antitrust law. In his Amicus Brief in support of Qualcomm, the Hon. Judge Paul R. Michel (Ret.) of the US circuit court gave a compelling analogy: "as a general proposition, the hammer of antitrust law is not needed to resolve FRAND disputes when more precise scalpels of contract and patent law are effective."
Even the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) panel, while granting Qualcomm’s request for a stay, ridiculed the lower court’s ruling as “… a trailblazing application of the antitrust laws or …an improper excursion beyond the outer limits of the Sherman Act..”
Precedents and other considerations
3GPP (3rd Generation Partnership Project), the cellular specifications group, prefers all the SSOs across the world to have consistent IPR policies. ETSI (European Telecommunications Standards Institute) is one of the major players among the SSOs that are the organizational partners of 3GPP. There has been much discussion at ETSI regarding component-level licensing, such as licensing to chipset vendors, but ETSI has never stated that it supports or requires its members to offer component-level licensing. So, the lower court's decision creates an inconsistency among ATIS, ETSI, and the other SSOs, with impacts that go far beyond this case.
<<Side Note: If you would like to learn more about 3GPP’s organizational structure and operational procedures, please refer to this article series.>>
More than two decades of cellular patent licensing history prove that device-level licensing works smoothly and efficiently. Although the discussions related to this case are mostly about modem chipsets, a typical device has hundreds of different components. If licensing were brought down to the component level, it would be a logistical and legal nightmare for OEMs to understand and negotiate separate licenses with all those vendors, as I explained in this article. Also, probably every existing cellular IPR contract would have to be rewritten.
Final thoughts
So far, there have been only a few minor cases in the telecom industry regarding the violation of FRAND commitments. FTC’s case against Qualcomm is the first major case where its relevance to antitrust law is being tested. The decision of this trial will be a defining moment in the “component vs. device-level” licensing debate. Qualcomm seems to have strong arguments, and the earlier Ninth Circuit panel agreed with most of them. But now the appeals hearing has a new panel of judges, which brings a new set of uncertainties to the case. As promised before, I will be there in person to witness the appeals hearing of this historic case. Be sure to follow my Twitter feed @MyTechMusings for the latest.
The title best describes the current situation after the recent hearing in the more-than-yearlong saga between the FTC and Qualcomm. On Feb 13th, 2020, a three-judge panel of the US Court of Appeals for the Ninth Circuit (Ninth Circuit) heard Qualcomm's appeal to reverse the ruling of the US District Court of Northern California (lower court). During the hearing, the panel asked the FTC a lot of skeptical questions regarding its position, arguments, and precedents, probed Qualcomm's stance, and almost snubbed the US Department of Justice (DoJ). Although the judges appeared confused in the beginning, they seemed to have grasped the main points by the end. Based on the verbal and non-verbal communications of the judges, Qualcomm definitely had a more positive day than the FTC.
<<Side note: If you would like to understand the history of the case, please refer to the article series “FTC vs. Qualcomm Antitrust Trial”>>
I was fortunate enough to be in the court to witness the hearing. The appeals panel consisted of three judges: Judge Callahan, Judge Rawlinson, and Judge Murphy III. Being in front of them, I was able to observe lots of non-verbal cues, such as subtle changes in mood and facial expressions, inaudible grunts, and how keenly they were listening to each side's arguments, which many people watching online might have missed.
With only about 50 minutes allocated to the hearing, both parties focused only on the main points. What caught my eye was that during Qualcomm's arguments, the judges were mostly in listening mode, only prodding Qualcomm for clarifications. But during the FTC's time, they were more skeptical, often questioning and challenging the FTC counsel's assertions, and mostly in "so what" mode. This is unlike other appeals cases, where the appellant (Qualcomm in this case) usually faces more scrutiny.
<<Side note: Please refer to my articles here and here for more details on the arguments at play in the case>>
Duty to deal
The FTC massively hurt its case by conceding that Judge Koh had erred in citing the Aspen Skiing case as the precedent for the "duty to deal," i.e., the ruling that Qualcomm has a duty to license its patents to competitors. Judge Callahan even went to the extent of saying that the house of cards, i.e., the FTC's case, starts to fall if the Aspen card is pulled out. Qualcomm obviously had a field day with it, quoting the lower court's own characterization of the "duty to deal" as one leg of a three-legged stool; with that leg gone, the case couldn't stand (literally and figuratively). The FTC's alternate precedents, the Caldera and United Shoe Company cases, and its argument about Qualcomm breaching FRAND commitments to Standards Setting Organizations (SSOs) didn't seem to impress the panel. So, I am positive that this ruling will be reversed.
“No license no chips” policy
This argument confused the heck out of the judges. Multiple times, Judge Callahan asked and confirmed that Qualcomm was not accused of a "no chips, no license" policy, which obviously would be anticompetitive conduct. She even suggested that Judge Koh of the lower court was probably confused about that as well! In other words, she didn't think "no license, no chips" was anti-competitive. There was a clear difference of opinion between the FTC's and Qualcomm's counsels on how OEMs expressed their views on the policy. The FTC said that many witnesses from smartphone OEMs had testified to paying higher royalties because of the risk of not getting chips. Qualcomm countered that there was only one such witness, from one OEM, in a non-monopoly market. To my recollection from attending those hearings, OEMs mostly expressed that they felt such a policy existed, but never showed any evidence of Qualcomm practicing it. So the panel will obviously have to look at the actual testimonies to make its determination. The discussion was not about whether this policy itself was illegal, but about Qualcomm allegedly using it to create the alleged surcharge on competitors.
Surcharge on competitors
If the "no license, no chips" discussion was confusing, the torturous surcharge hypothesis knocked the wind out of the judges! Judge Murphy even said that he was having a hard time keeping up with all these things. I don't blame them. Most of the FTC's time was spent making the judges understand what the FTC calls a surcharge, how it affects competition in its view, and so on. As expected, the panel challenged this claim from multiple angles (precedents, market evidence, harm to competition versus competitors, etc.) and tried to poke holes in the FTC's position.
Here are some notable questions and challenges. Judge Rawlinson asked, "… what would be wrong with that (higher royalty fees), doesn't the Supreme court say that patent holders have the right to price their patents, what would be anticompetitive about that?" and "..What case says that it is anti-competitive to move (cost) from chip to patent?" Judge Callahan asked, "Why did the OEMs say it's unfair because they have to buy a license anyway?"; "..who is a Goliath here, Apple is more of a Goliath than Qualcomm"; "..your argument that Qualcomm's licensing fees increase rival's cost doesn't make sense to me…"; "There seems to be….. a conflation of profitable and anti-competitive (one means the other)."; "… weren't there multiple competitors enter the …market successfully beginning around 2015, leading to a precipitous decline in Qualcomm's market (share)?" Judge Murphy III asked, "…why don't we let OEMs exercise their right in patent law to file (cases for) predatory pricing, abuse of monopoly, etc. (instead of antitrust law)?" And these were mere samples.
The panel was unconvinced and will most likely remain so even after examining the documents.
Chip volume incentives or royalty discount
This issue was not discussed as much as the others but was used as a basis for other arguments. The FTC claims that Qualcomm's volume discount to Apple was exclusionary and anti-competitive. Qualcomm, during its rebuttal, argued that the license and the chipset supply are two separate contracts, and it doesn't make sense to combine them. Again, this is another issue where the judges will have to look at the documentation and decide.
Is the “Threat to national security” argument justified?
This is the first time that DoJ and the FTC have been on opposite sides of a case. Qualcomm ceded five minutes of its time to DoJ, whose major claim is that the lower court's global and expansive remedy harms national security. Judge Murphy seemed hostile toward DoJ and asked whether it had any market analysis or financial evidence to prove the claim. DoJ counsel, although startled by the question, came back with a reasonable explanation: the basis for the case was 3G and 4G, but applying the remedy to 5G would negatively affect the country's standing in 5G. With 5G being such a crucial technology for many aspects of the country, DoJ and other government departments (the Department of Defense and the Department of Energy) are convinced that implementing the ruling would harm the country. The FTC counsel was quick to capitalize on Judge Murphy's assertion and discount the security concern as a simple abstraction without any supporting studies.
I am not sure whether the panel will consider the security question seriously.
What does all this mean?
You have to consider that the hearing is only one part, albeit an extremely important one, of resolving the case. The court will examine all the briefs and case documentation before making a final decision. One could argue that the cues from the hearing may be overblown; for example, all those questions and challenges could just be the judges probing both parties to fully understand their stances. However, specific things, such as the judges' difficulty in fully grasping the FTC's argument and understanding its point of view, clearly indicate that they don't believe those arguments and are not taking them at face value. It also suggests that the FTC's arguments are not as robust as the lower court thought they were.
From Qualcomm's perspective, after a clear win with the stay, this hearing turned out to be very positive. The FTC had a major initial setback with the Aspen Skiing concession, but at least it made the panel understand its arguments; whether the panel agrees with them is a separate matter. In my view, Judge Callahan and Judge Rawlinson seem to be aligned with Qualcomm's arguments, and Judge Murphy seems to be neutral or slightly aligned with the FTC's. Ultimately, as Judge Murphy III succinctly put it, "anticompetitive behavior is illegal… hyper-competitive behavior is not… this case asks us to draw the line between the two." In other words, the judges have to decide whether Qualcomm's behavior was anticompetitive or hyper-competitive.
What’s next?
There is no fixed timing for the Ninth Circuit's decision; the expectation is six to twelve months. The decision doesn't have to be unanimous, meaning only two of the three judges have to agree.
In terms of possible outcomes, the panel could completely knock down all the lower court's rulings, fully uphold them, or do anything in between. That is, it could agree with some parts of the ruling and reverse the others, or make a determination on some and send the others back to the lower court to reconsider. No matter what the panel's decision is, either party can request a full panel review, which involves all the 20+ judges at the Ninth Circuit, and further knock on the Supreme Court's door. If Qualcomm loses, especially on the claims that affect its licensing policy, I am sure it will go to the Supreme Court. On the other hand, if the FTC loses, it might ask for the full panel review and let the case go after that.
As it stands today, I think Qualcomm is in a pretty good position and more likely to win than the FTC.
Please make sure to sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter to get updates on this trial as well as the telecom industry at large.
The United States Court of Appeals for the Ninth Circuit (Ninth Circuit) handed down a landmark decision in favor of Qualcomm on Aug 11th, 2020, in the long-running antitrust case brought by the FTC. This was a highly anticipated outcome in the multi-year saga, which saw fortunes go back and forth between the parties. The detailed opinion written by Judge Callahan, on behalf of the panel of three judges, is a telltale account of how the FTC mischaracterized Qualcomm's business model, and how the United States District Court for the Northern District of California (lower court) misjudged the case. The ruling vacated all the decisions of the lower court, including the partial summary judgment. I spoke to Don Rosenberg, EVP and General Counsel of Qualcomm, who of course was quite pleased with the outcome. He said, "we felt vindicated by the appeals court's ruling and are looking forward to continue bringing path-breaking innovation like 5G to life."
The Ninth Circuit's decision is relevant not just for this case; it clarifies a whole slew of long-standing issues and will set a defining precedent for IPR licensing, especially from an antitrust point of view.
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well expected outcome
The recent developments in the case had led me to predict such a ruling. The Ninth Circuit's stay of the lower court's decision, the language used in that order, the tone of the in-person hearing, and the deep skepticism the panel showed in its questioning made it amply clear which direction the panel was tilting.
The case indeed had a lot of unusual and rather interesting turns of events from beginning to end. It was filed in the last days of the previous administration, with only a few FTC commissioners in office. One of those commissioners, who was opposed to the move, wrote a scathing opinion in The Wall Street Journal publicly disparaging the case. The new incoming FTC chair recused himself, which left the case on autopilot with FTC staff in charge. The instigators, major supporters, and witnesses moved away from the case midway: Apple and Huawei settled their licensing disputes with Qualcomm, and Intel exited the modem market. The US Department of Justice, which shares antitrust responsibility with the FTC, went strongly against the FTC; it even became a party to the hearing and pleaded against the case. But the biggest surprise for me was the ferocity with which the Ninth Circuit tore down and reversed every decision of the lower court, including the summary judgment.
Highlights of the ruling
This indeed was a complex technical case, in which the judges had to quickly develop a full understanding of the industry. Rosenberg highlighted the challenges appellate court judges face: "They have to work on the record that somebody else has created for them, including lots of documentary evidence, witness testimony, lower court's assertions and more." He added, "considering that, the judges did an amazing job, cutting through the noise and really getting to the core issues and opine on them." The interesting thing I found reading through the more than 50-page ruling is how it summarized and reduced the case to five key questions:
- Whether Qualcomm's "no license, no chips" policy amounts to "anticompetitive conduct against OEMs" and an "anticompetitive practice in patent license negotiations"
- Whether Qualcomm's refusal to license rival chipmakers violates both its FRAND commitments and an antitrust duty to deal under § 2 of the Sherman Act
- Whether Qualcomm's "exclusive deals" with Apple "foreclosed a 'substantial share' of the modem chip market" in violation of both Sherman Act provisions
- Whether Qualcomm's royalty rates are "unreasonably high" because they are improperly based on its market share and handset price instead of the value of its patents
- Whether Qualcomm's royalties, in conjunction with its "no license, no chips" policy, "impose an artificial and anticompetitive surcharge" on its rivals' sales, "increasing the effective price of rivals' modem chips" and resulting in anticompetitive exclusivity
The panel decided that the FTC and the lower court were wrong on all counts. Rosenberg said that the opinion gave very logical, persuasive, point-by-point arguments, with relevant citations, to refute all those assertions. Here are some excerpts from the opinion:
“…OEM-level licensing policy, .. was not an anticompetitive violation of the Sherman Act.”
"…to the extent Qualcomm breached any of its FRAND commitments, the remedy for such a breach was in contract or tort law…"
"…"no license, no chips" policy did not impose an anticompetitive surcharge on rivals…"
“…We now hold that the district court went beyond the scope of the Sherman Act…”
"Thus, it [Qualcomm] does not "compete"—in the antitrust sense—against OEMs like Apple and Samsung in these product markets. Instead, these OEMs are Qualcomm's customers…"
“…OEM level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,”
“…while Qualcomm’s policy toward OEMs is “no license, no chips,” its policy toward rival chipmakers could be characterized as “no license, no problem…”
“…even if we were to accept the district court’s conclusion that Qualcomm royalty rates are unreasonable, we conclude that the district court’s surcharging theory still fails as a matter of law and logic.”
“…neither the Sherman Act nor any other law prohibits companies from (1) licensing their SEPs independently from their chip sales; (2) limiting their chip customer base to licensed OEMs…”
“…Our job is not to condone or punish Qualcomm for its success, but rather to assess whether the FTC has met its burden under the rule of reason … We conclude that the FTC has not met its burden…”
What this means for the industry
This indeed was a landmark decision with far-ranging consequences. It clears the clouds of uncertainty that had been hanging over Qualcomm's licensing business for a long time. It will also be a welcome decision for many other patent holders and licensors. The precedent this case has set will be used to resolve patent-related antitrust issues for a long time to come. Here are some of the specific takeaways I think are relevant:
- Device-level licensing is not anti-competitive
- FRAND and patent violations are outside the purview of antitrust law and are better handled under contract law
- One company's royalties do not have to be in line with the rates other companies charge
- A surcharge on competitors may have to be direct; at the least, "effective surcharges" derived from complex inferencing do not hold up
Rosenberg said, “Qualcomm’s novel licensing model and its policies have now gone through intense global litigation and have successfully proven themselves. Now we are more confident and working hard to innovate and to expand the reach of 5G and bring its benefits to the world.”
What is next for the case?
The FTC has not commented on its next steps. It does have a couple of options. It could ask for what is called an “en banc hearing,” in which the whole Ninth Circuit bench (or a major part of it) is asked to hear the case. But for that to happen, a majority of the judges would have to vote to agree to the hearing. Even after the en banc hearing, either party could knock on the doors of the Supreme Court and ask whether it would be willing to hear the case.
But, keeping all the theoretical options aside, I think the unanimous verdict and ferocious opinion, coupled with the fact that all of the lower court’s decisions were vacated, make it far less likely that the FTC will keep pushing the case. Since the instigators and supporters have also moved on, there is no incentive for anybody to keep it going. The FTC might ask for an en banc hearing anyway as a face-saving step, as that does not require significant effort on its side. But since en banc is a large effort, in which many other judges would have to spend a lot of time and energy to fully understand such a highly technical and complex case before giving any verdict, I doubt the court will grant it. Hence, I am confident that, in many respects, this is the end of the road for the case.
As we await the FTC’s response, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Right before the deadline passed, as expected, the Federal Trade Commission (FTC) took another swing at Qualcomm by filing a request to reconsider the recent appellate court decision. But to everybody’s surprise, FTC Chair and Trump appointee Joseph J. Simons, coming out of recusal, authorized that filing.
This request will again set in motion activities at The United States Court of Appeals for the Ninth Circuit (Ninth Circuit). After a few more weeks of action, I believe this case will eventually go into the history books as a great precedent for antitrust law in the realm of patents and licensing. Interestingly, Apple, the alleged instigator of this case, is already using this precedent to fight its case against Epic Games!
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well-expected action by the FTC, but not by its chair
Even after the emphatic rebuke from the unanimous Ninth Circuit panel, the FTC was widely expected to file this request, called en banc, as I predicted in my earlier article. There are many reasons for it: First, it doesn’t require much effort; only a short brief needs to be submitted. Second, even in the unlikely event that the request is accepted, the rehearing will be short, with minimal participation from the FTC. Third, the FTC would not like to appear as if it has given up on the case.
The most surprising thing was FTC Chairman Simons siding with two other commissioners, resulting in a 3-2 vote in favor of en banc. He was recused from the case until May 2020 because his previous employer, Paul Weiss Rifkind Wharton & Garrison, advised Qualcomm on its unsuccessful bid to buy NXP Semiconductors. Since he is a Trump appointee, and the FTC case was filed in the waning hours of the Obama administration, even without the full commission in office, it was widely assumed that he would be against the case. Additionally, the administration’s Department of Justice (DoJ), Department of Defense, and a few other departments are also against the case, and in an unusual move, the DoJ inserted itself into the Ninth Circuit hearing and argued against the FTC.
The reasons behind Simons’s vote are not clear. Trump’s tweets about government agencies not acting against tech companies might have prompted him to show some action, albeit on the wrong target. Since this was an easy move for the FTC, he might have simply gone along with the FTC staff during this last step of the case. Or maybe he actually believes in the case? We can only speculate. The FTC taking the full 45 days available to file the request was also interesting. Maybe it is taking a more critical look at the case. As you may know, because of the 2-2 tie at the commission, the FTC staff was running the show until now.
How does en banc work?
En banc is a process through which either of the parties requests the entire bench of the Ninth Circuit to reconsider the case. If you recollect, the earlier appeal was heard by a three-member panel. Now, the full bench of 29 judges, minus any recusals, will vote on the request. If the majority votes to accept the request, the case will be assigned to another panel of 11 judges for a rehearing. The rehearing is expected to be short, only requiring Qualcomm to submit a reply to the FTC’s en banc brief. No new evidence, and typically no physical hearing.
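To make the vote arithmetic concrete, here is a minimal sketch of the acceptance rule described above, assuming a simple majority of non-recused active judges is required. The 29-judge bench figure comes from the case; the vote counts and recusal numbers in the example are hypothetical.

```python
# Toy model of the Ninth Circuit en banc vote described above. The
# 29-judge bench comes from the article; vote counts and recusals
# below are hypothetical, for illustration only.

def en_banc_granted(yes_votes: int, active_judges: int = 29, recused: int = 0) -> bool:
    """A rehearing is granted only if a majority of the non-recused
    active judges vote to accept the request."""
    eligible = active_judges - recused
    return yes_votes > eligible / 2

# With two recusals, 27 judges are eligible, so 14 yes votes are needed.
print(en_banc_granted(yes_votes=13, recused=2))  # False
print(en_banc_granted(yes_votes=14, recused=2))  # True
```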
The bar for a rehearing is quite high. Historically, less than one percent of requests have been accepted. Only cases that are consequential for precedent, that contradict previous rulings, or that resolve previous contradictions within the circuit are accepted. The bench’s view of whether the panel correctly applied the appropriate laws is also a crucial consideration.
What is FTC arguing?
The 83-page brief filed by the FTC relies on many of the same arguments presented earlier in the case. Here are a few things that are new and worth noting:
- Argues that the Ninth Circuit panel only examined the applicability of antitrust law to patents and licensing, and opined that it does not apply, a conclusion the FTC obviously disputes
- Points out that the panel did not disagree with any of District Judge Koh’s findings, and hence argues they must be true. Further, the brief refers to them as “facts,” which I think is a big leap of faith
- Relies heavily on the United Shoe and Microsoft antitrust cases and attempts to draw strong parallels between them and Qualcomm. Clearly, the FTC has learned its lesson and moved away from the Aspen Skiing case!
- Argues that Qualcomm’s royalties are inflated by its chip monopoly since, as claimed unsuccessfully before, its peers’ licensing revenues are much lower.
Side note: If you would like to know more about patent evaluation and how major companies rank in terms of cellular patents, check out this article series and this Tantra’s Mantra podcast.
What’s next and what does all this mean?
As mentioned, the next step is the bench vote and, if it passes, the panel rehearing. The voting usually takes a week or two, and if the rehearing ensues, Qualcomm will have 21 days to reply, followed by a few more weeks for the hearing. So, the whole thing should be relatively short, maybe a couple of months.
It is not clear how the rehearing would be executed. Everything will be at the discretion of the panel. It may revisit the full case or only some aspects of it, and likewise consider full or partial remedies if it comes to that.
Consider that two sets of Ninth Circuit judges have already sided with Qualcomm—one panel of three granting the stay, and another panel of three issuing the unanimous decision that completely reversed the District Court’s ruling, including the summary judgment—combined with a compelling 53-page opinion written by Judge Callahan. Given all that, it is highly unlikely that the bench will vote for a rehearing. Note that to vote yes, the judges would have to rule against the judgment of six of their colleagues. Also, if it goes to a rehearing, the new panel would have to study this highly complex case in depth to come to any reasonable conclusion.
Other than the fact that this is an important case for royalties, licensing, and antitrust, areas that, with 5G, affect a large portion of the economy, every other aspect of the case points to a No vote.
If the FTC’s request is rejected, or if it loses the rehearing, it still has the option to go to the Supreme Court. In fact, it can approach the Supreme Court even during the en banc process.
Considering how far the case has come, my money is on the en banc request getting rejected. In the unlikely case of it going to a rehearing, I have a strong feeling that the panel’s decision will be reaffirmed. If either of these happens, I think it would be futile for the FTC to go to the Supreme Court, and I seriously doubt it will try, as there are many negative consequences and long-term risks, with little chance of success.
As we await the en banc decision, if you would like more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
After pointlessly fighting tooth and nail for almost two years, the FTC will now be forced to end the case after the latest setback at The United States Court of Appeals for the Ninth Circuit (Ninth Circuit). The Ninth Circuit’s well-expected en banc denial, following a series of upsets, put the final nail in the coffin. After the direct, clear, and very short seven-line opinion, I am certain that the FTC will not even imagine knocking on the doors of the U.S. Supreme Court.
This decision clears all the clouds hovering around Qualcomm, the country’s 5G crown jewel. It will also have a long-lasting impact not only on Qualcomm’s licensing business and policies but also on the technology industry and its innovations as a whole.
Side note: If you would like to know the full background of the case, follow this FTC vs. Qualcomm article series.
A wave of setbacks for the FTC
After some initial success at the United States District Court for the Northern District of California (US District Court), the FTC has seen constant setbacks, and at times very harsh rebukes, at the Ninth Circuit.
First, the three-judge panel unanimously accepted Qualcomm’s request for the stay, with a ruling that almost ridiculed the US District Court’s decision. The panel characterized it as “…a trailblazing application of the antitrust laws or … an improper excursion beyond the outer limits of the Sherman Act…”
Second, when another three-judge appeals panel heard the case, its pointed questioning of the FTC’s confusing arguments made it amply clear which way the panel was leaning.
Third, the actual unanimous judgment shredded the US District Court’s decision, completely reversing and throwing it out, including the initial summary judgment. The opinion written by Judge Callahan was a telltale account of how US District Court Judge Lucy Koh misapplied the antitrust laws.
Finally, this wholesale denial of the en banc request was yet another strong strike against the FTC’s unfounded fascination with continuing the unworthy prosecution of a free and very successful American enterprise. It indeed quashed the hopes of those who thought the surprise move of FTC Chairman Joseph J. Simons, a Trump appointee, to authorize the en banc request had brought life back into the case.
In retrospect, the case has gone through a whole slew of US Federal judges—six panel judges and, to some extent, the full Ninth Circuit bench of more than 25 judges. But the only sympathizer for the FTC in the US legal system seems to be Judge Lucy Koh of the US District Court. As an observer who attended almost all the court hearings, I found her handling of the case to be bizarre. Examples of her strange handling include: artificially limiting the discovery period, which skewed the case; clinging to hypotheses such as the “tax on the competitor,” which were rejected by other courts and judges; and rejecting the testimony of all of Qualcomm’s executives, including that of its highly respected and revered founder and industry veteran, Dr. Irwin Jacobs.
A series of unfortunate events
As I have indicated many times in my earlier articles, this case had a lot of oddities right from the beginning, and they continued throughout the proceedings. The case was filed in the last days of the previous administration, with only a partial commission present. A sitting FTC commissioner publicly criticized the case in a harsh rebuke in The Wall Street Journal. When the full commission was constituted, the Chairman recused himself from the case, leaving the commission tied, with two commissioners supporting and two opposing. That left the case to run almost on autopilot, managed by the FTC staff. Apple, which was one of the alleged instigators and a major witness in the case, settled with Qualcomm and ended its active support.
Many U.S. Government agencies opposed the FTC’s action. The U.S. Department of Justice, which shares responsibility with the FTC on antitrust matters, vehemently opposed the case and even took the unprecedented step of arguing against it at the appeals hearing. Many legal scholars, previous FTC commissioners, and judges also opined against the case.
What’s next?
Although the FTC has the theoretical option of knocking on the door of the US Supreme Court, I don’t think this series of setbacks and strong rebukes leaves it any option other than to close the case and move on. Had the appeals decision not been unanimous, had it not been a complete reversal, or had the en banc request been accepted, there would have been some justification to continue. Without any of those, it would be utterly foolish for the FTC to continue the case and waste even more taxpayer money.
If there were any doubts, the Ninth Circuit’s unambiguous en banc opinion, a mere seven lines long, makes things pretty clear. That is the shortest court document I have ever seen and analyzed; many run to a hundred pages or more. This decision clears all the doubts around Qualcomm’s licensing policies and the industry-standard practice of licensing to OEMs. That means the practice of calculating licensing fees based on the price of the device (with caps, of course) is completely valid and legal. The case establishes a pretty significant precedent for licensing practices and the applicability of antitrust laws. It will have a long-lasting impact on not only the cellular industry but almost the entire technology industry and beyond. With 5G set to transform almost every industry on the planet, the repercussions of the case are impossible to overstate. Look for a detailed article on this from me soon.
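For readers curious about the mechanics of device-based royalties, here is a minimal sketch of how such a capped fee structure works. The rate and cap values are made-up placeholders for illustration, not Qualcomm’s actual terms.

```python
# Hypothetical illustration of a device-price-based royalty with a cap,
# the licensing practice the ruling validates. Rate and cap are made-up
# numbers, not any company's actual terms.

def device_royalty(device_price: float, rate: float = 0.05, price_cap: float = 400.0) -> float:
    """Royalty = rate x device price, with the price capped so the fee
    stops growing beyond a certain device price."""
    return rate * min(device_price, price_cap)

print(device_royalty(250.0))   # 12.5 -- below the cap, scales with price
print(device_royalty(1000.0))  # 20.0 -- capped at rate * price_cap
```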
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Demystifying the semiconductor techs that move the industry forward
The tech industry has seen a blistering pace of innovation and market dominance. Global equity markets are swayed by how Apple, Amazon, Google, Facebook, Microsoft, Netflix, Intel, and Nvidia perform. Seven of the top ten companies in the S&P 500 and three of the top ten Dow Jones Industrial Average components are tech companies. The meteoric rise of these giants was primarily fueled by unprecedented advancements in computing, especially mobile computing. So much so that the global economic future is guided by consumers’ mobile-first experiences and the technologies that enable those experiences.
In a series of articles, I will explore the history and evolution of computing, i.e., semiconductor technologies: how they have shaped our present and how they will define our future. Additionally, I will provide my commentary on some of the critical industry events that have influenced this evolution, and analyze how developments now underway could drastically alter the future the industry has collectively envisioned.
Semiconductor technology evolution – a tale of two architectures
When you look at the evolution of semiconductor technology and architectures, there are two clear paths. First, Intel’s x86 architecture, which dominates the server, desktop, and laptop computing space. Second, the architecture of Arm Ltd of the U.K., which controls almost all of the mobile computing space. Historically, x86’s primary focus has been performance, sometimes at the expense of power consumption. Arm, on the other hand, has been feverishly focused on lower power consumption, at the cost of limited performance.
However, both companies are trying to evolve their architectures to improve on both the performance and power-consumption axes. Intel’s latest x86 laptop processors have improved much over their predecessors in terms of battery life. Arm processors have improved by leaps and bounds in performance over the years, rivaling even Intel in personal computing processors, while still maintaining their low-power heritage. Currently, these architectures have limited overlap in terms of use cases and markets. But the turf war between them has been brewing for some time and is about to get brutal pretty quickly. Apple moving from Intel’s x86 processors to its own Arm-based M1 processor for Mac laptops is a good indication of that.
The future of technology will run on Arm
There is no doubt the future will be dictated by the mobile-first experiences that users are accustomed to and expect from everything tech, everywhere. That means almost everything will be mobile, untethered, and wireless. 5G is providing an even bigger impetus and extending that trend beyond the consumer segment to industrial markets as well. All this means that untethered devices, from simple consumer gadgets to large machines in factories, will run on batteries, which in turn means lower power consumption is going to be of paramount importance.
Arm’s inherently low-power architecture will surely be the choice for the untethered world. Although Arm only dominates the mobile compute world today, its processing capabilities are evolving rapidly, and with thousands of innovative companies working on its technology, it is on track to expand beyond that space. Take the server market, where Intel x86 has near-complete domination: Arm is making a play, as even there power consumption is becoming a challenge and big cloud companies are looking for low-power solutions. Industrial IoT, automotive, Edge-cloud, and many other segments are ripe for digital transformation and are good candidates for Arm adoption.
Arm’s “horizontal” business model
Unlike Intel, which has a vertical model of developing the architecture and fabricating its own processors, Arm has adopted a “horizontal” business model. It develops the architecture and processor technology and licenses them in different flavors to semiconductor companies. Because of this model, Arm has enabled thousands of big and small companies, including giants such as Apple, Samsung, Qualcomm, and Microsoft, to make market-leading and even market-defining products based on its architecture. If you are using a consumer electronics product that has some sort of processor in it, it is most likely based on Arm technology.
Arm’s horizontal business model is one of the key reasons behind the tech boom. While Arm focuses on continually improving the architecture and developing a strong roadmap, its large partner ecosystem focuses on developing processors and end products. The software ecosystem develops services to best exploit these technologies and products, creating an endless cycle of innovation that has fueled the tech boom.
Recent developments at Arm
The recent announcement of Nvidia buying Arm from its owner SoftBank came as a shock to many who were part of this innovation cycle. The move has the potential to completely upend the whole ecosystem and may require significant realignment. Interestingly, Nvidia competes with almost all of Arm’s major customers in some shape or form. Additionally, Nvidia and Arm have quite different strategies, approaches, target market segments, and customer bases, which makes the deal even more nerve-wracking for the ecosystem.
As is evident, this is a multifaceted issue, with numerous primary, secondary, and tertiary impacts on Arm’s future as well as on its huge ecosystem. In a series of articles, I will analyze all those dimensions closely and present my thoughts on the subject. So, be on the lookout!
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Right after the last Nvidia quarterly earnings release, Jim Cramer, host of CNBC’s Mad Money, spoke to Jensen Huang, CEO of Nvidia, regarding the deal with Arm. Most of his questions were softballs, but what caught my attention was Jensen’s comment that Arm was not a must for Nvidia’s success, but a nice-to-have. That got me thinking and made me take a deeper dive into the rationale for the merger. Here are some of my thoughts on why Nvidia needs Arm more than vice versa.
Nvidia’s announcement of its intent to acquire Arm from SoftBank has brought Arm out of the shadows and into the limelight. Arm has always been a silent performer, quietly powering the modern smartphone revolution. Its inner workings have been an enigma for many industry observers. And now, many are scrambling to understand what Arm does and how Nvidia’s buyout will affect the semiconductor industry, the competitive landscape, and the future of tech at large. If you do not yet know the importance of Arm, consider this: almost any tech gadget you can think of, be it a simple IoT device, a game console, a smartphone, or even a modern car, has been touched by Arm technology in some shape or form. Its importance and reach are only going to expand as the whole world moves toward untethered and low-power computing, as I explained in my earlier article here. Hence, the impact of its buyout by Nvidia on the industry is going to be outsized and impossible to overstate.
Side note: You can read the full article series here.
Arm’s licensing model
To scrutinize Nvidia’s rationale effectively, one has to really understand Arm’s business model, especially its licensing model. In simple terms, Arm is the design house of power-efficient processors (aka cores) for the entire tech industry. It makes money by licensing those technologies in different forms. It offers three types of licenses—Processor, Optimized Processor, and Architecture. Let us look at each of these more closely.
The first, the Processor License, is simply permission to use processor cores designed by Arm. Licensees cannot change Arm’s designs but are free to implement them however they like in their own solutions. For example, Qualcomm, Samsung, and Huawei have this type of license. They combine multiple types of Arm cores (e.g., CPU, GPU, or other types, and in some cases, different sizes of cores) alongside other proprietary cores to make their semiconductor systems-on-a-chip (SoCs). They also optimize the cores to achieve greater performance and to differentiate from other SoCs. You might have heard about how the Qualcomm Snapdragon, Samsung Exynos, and (Huawei) HiSilicon Kirin platforms perform differently. That difference exists because each company uses and optimizes Arm cores differently. So, such a license is for players that have the technical and financial wherewithal to do such optimizations.
The second, the Optimized Processor License, is a bit more involved and detailed: Arm provides not only the basic processor design but also optimizations to achieve a certain level of guaranteed performance. This license is well suited to companies that do not have the capability to implement and optimize a design themselves, for example, smaller IoT chipset providers. This is probably Arm’s most popular option, with thousands of licensees.
The third, the Architecture License, sometimes referred to as an Instruction Set Architecture (ISA) License or simply an Instruction Set License, is the most minimalistic option. ISA licensees only get access to Arm’s instruction set and can design their own cores that run those instructions. Apple is such a licensee. Its A-series processors used in iPhones and iPads and the new M1 processor used in Macs are designed by Apple but run Arm’s instruction set. Nvidia, Google, Microsoft, Qualcomm, and Tesla also hold architecture licenses.
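To recap the three tiers in one place, here is a small data-structure sketch of what each license grants and who typically holds it, per the descriptions above. The wording of the grants is my paraphrase, not Arm’s contract language.

```python
# Toy summary of the three Arm license tiers described above. Tier names
# and example licensees follow the article; the "grants" wording is my
# paraphrase, not Arm's actual contract language.

from dataclasses import dataclass

@dataclass
class ArmLicense:
    name: str
    grants: str
    example_licensees: list[str]

LICENSE_TIERS = [
    ArmLicense("Processor",
               "Use Arm-designed cores as-is; integrate and optimize them in your own SoC",
               ["Qualcomm", "Samsung", "Huawei"]),
    ArmLicense("Optimized Processor",
               "Arm-designed cores plus optimizations for a guaranteed performance level",
               ["smaller IoT chipset providers"]),
    ArmLicense("Architecture (ISA)",
               "Access to the instruction set only; design your own cores that run it",
               ["Apple", "Nvidia", "Google", "Microsoft", "Qualcomm", "Tesla"]),
]

for tier in LICENSE_TIERS:
    print(f"{tier.name}: {tier.grants}")
```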
Why is Nvidia buying Arm?
The reasons Nvidia has given for buying Arm can be grouped into three categories of benefits: 1) using Arm’s vast ecosystem to distribute Nvidia’s Intellectual Property (IP); 2) investing in the Arm architecture to consolidate and expand Nvidia’s reach in the data center market; 3) co-inventing the Edge-cloud with Arm’s and Nvidia’s technologies.
In general terms, these reasons seem very attractive and complementary, benefiting both companies and their shareholders. They seem to benefit the industry at large as well, by giving others access to Nvidia’s market-leading graphics IP and accelerating the growth of data center and Edge-cloud markets. However, when you remove the covers and dig a bit deeper, there are quite a few peculiarities to consider.
First—Nvidia distributing its IP to the Arm ecosystem: from a business model perspective, Nvidia and Arm could not be more dissimilar. Arm is a pure-play licensing company that derives most, if not all, of its revenues from licensing. That means it is a neutral player across the whole ecosystem because it licenses its technology to all, and does not compete with any of its customers. On the other hand, to my knowledge the only thing Nvidia licenses is its CUDA software, and at no charge. One reason CUDA is free is because it only runs on Nvidia GPUs. Nvidia makes most of its money from its highly differentiated, high-margin GPU hardware and integrated software. Given this lucrative revenue stream, it is hard to fathom Nvidia’s willingness to license its GPU IP to Arm’s ecosystem, which would diminish its differentiation and destroy those sky-high margins. This could be particularly problematic, as some Arm licensees are in the process of developing products for the data center, where Nvidia makes most of its money. Nvidia’s licensing revenues and margins, like Arm’s, would be a pittance compared to Nvidia’s existing product revenues and margins. Unless there is another more plausible explanation where margins and revenue stream are not sacrificed, it is hard for anybody to buy this argument.
Second—helping Arm expand into the data center market: this seems like a novel idea, and the significant financial and other resource infusions Nvidia can make could certainly accelerate Arm’s current trajectory. However, the “Arm for the data center” effort is already well underway, mainly because the data center service providers themselves have realized the importance of power-efficient processing, for financial as well as environmental (carbon footprint) reasons. Cloud giants such as Amazon, Google, and Facebook have reportedly been working on their own in-house, Arm-based platforms. Arm already seems to have the financial and market support it needs. On the contrary, Nvidia, with its high-performance but energy-guzzling GPUs, will need low-power CPUs to complement (and improve) its portfolio, especially as the data center market becomes extremely energy conscious. Additionally, it is likely Arm, with its decades-long experience in low-power design, that can teach Nvidia a trick or two to help reduce the power consumption of its GPU designs. So, although Nvidia’s resources might help Arm, it seems Nvidia needs Arm equally, if not more.
Third—co-inventing the Edge-cloud: unlike Arm in the data center market, this ship sailed some time ago. Power-efficient design is a basic necessity for edge compute, and that is one of the reasons Arm is at the center of this universe. Thousands of small and large companies, including the cloud titans, are investing in and developing technologies for the Edge-cloud. Nvidia will be a noteworthy addition to that ecosystem, but only one of many such players. Also, with power efficiency at a premium for Edge-cloud use cases and workloads, Nvidia has to pivot from its performance-only design philosophy to more power-efficient architectures. In this market, Arm will be of greater value to Nvidia than the other way around.
Upon closer examination of the three main reasons Nvidia cites for the acquisition, one seems unconvincing, and the other two seem to run counter to Nvidia’s logic, because it appears Nvidia would benefit more from Arm than vice versa. Moreover, Arm and its customers are already on the path Nvidia proposes to put Arm on. But if the merger goes through, Arm, instead of being a neutral supplier with no conflicts of interest with its customers, would become both a technology supplier to and a competitor of its customers in the Cloud, the Edge-cloud, PCs, automotive, and AI. This dichotomy might affect Arm’s vast ecosystem and its unwavering support for the architecture. Also, Arm has developed its architecture and its business with significant input from its ecosystem. Ecosystem players would likely be disincentivized to share their input with a competitor, Nvidia-Arm. Nvidia’s resources, it seems, would not come without opportunity cost to Arm.
I am sure you are aware of the news reports citing many concerned ecosystem players reaching out to the FTC and other antitrust agencies about the acquisition. You might even be wondering what these players, including behemoths like Google, Samsung, Qualcomm, and even Apple, are worried about. Well, that is the topic of my next article… so be on the lookout!
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
The Chronicles of 3GPP Rel. 17
Have you ever felt the joy and elation of being part of something that you have only been observing, reading, writing about, and admiring for a long time? Well, I experienced that when I became a member of 3GPP (3rd Generation Partnership Project) and attended RAN (Radio Access Network) plenary meeting #84 last week in the beautiful city of Newport Beach, California. The RAN group is primarily responsible for developing the wireless, or radio interface, specifications.
The timing couldn’t have been more perfect. This specific meeting was, in fact, the kick-off of 3GPP Rel. 17 discussions. I have written extensively about 3GPP and its processes on RCR Wireless News; you can read all of those articles here. Attending the first-ever meeting of a new release was indeed very exciting. I will chronicle the journey of Rel. 17 through a series of articles here on RCR Wireless News, and this is the first one. I will report on the developments and discuss what they mean for the wireless industry as well as the many other industries 5G is set to touch and transform. If you are a standards and wireless junkie, get on board and enjoy the ride.
3GPP Rel. 17 is coming at an interesting time: after the much-publicized and accelerated Rel. 15 that introduced 5G, and Rel. 16 that laid a solid foundation for taking 5G beyond mobile broadband. Naturally, the interest now is in what more 5G can do. The Rel. 17 kick-off meeting, as expected, was a symposium of great ideas and a long wish list from prominent 3GPP members. Although many members submitted proposals, only a few, selected through a lottery system, got the opportunity to present at the meeting. Nokia, KPN, Qualcomm, the Indian SSO (Standards Setting Organization), and a few others were among those who presented. I saw two clear themes in most of the proposals: first, keeping enough of 3GPP’s time and resources free to address urgent needs stemming from the nascent 5G deployments; second, addressing the needs of the new verticals and industries that 5G enables.
Rel. 17 work areas
There were a lot of common subjects across the proposals. All of those were consolidated into four main work areas during the meeting:
- Topics for which the discussion can start in June 2019
  - The main topics in this group include mid-tier devices such as wearables without extreme speeds or latency, small data exchange during the inactive state, D2D enhancements going beyond V2X for relay-kind of deployments, support for mmWave above 52.6 GHz, Multi-SIM, multicast/broadcast enhancements, and coverage improvements
- Topics for which the discussion can start in September 2019
  - These include Integrated Access Backhaul (IAB), unlicensed spectrum support and power-saving enhancements, eMTC/NB-IoT in NR improvements, data collection for SON and AI considerations, high-accuracy and 3D positioning, etc.
- Topics with broad agreement that can be directly proposed as Work Items or Study Items in future meetings
  - 1024 QAM and others
- Topics that don’t have wider interest, or were proposed by a single member or only a few members
As the chair emphasized many times, the objective of forming these work areas was only to facilitate discussions among the members to come to a common understanding of what is needed. The division into June and September timeframes was purely logistical and doesn’t imply any priority between the two groups. Many of the September work areas would be enhancements to items still being worked on in Rel. 16, and spacing them out better spreads the workload. Based on how the discussions pan out, the work areas could become candidates for Work Items or Study Items at the December 2019 plenary meeting.
Two specific topics caught my attention: first, making 5G even more suitable for XR (AR, VR, etc.), and second, AI. The first one makes perfect sense, as XR evolution will have even more stringent latency requirements and will need distributed processing capability between the device and the edge-cloud. However, I am not so sure about AI. I don’t know how much scope there is to standardize AI, as it doesn’t necessarily require interoperability between devices of different vendors. Also, I doubt companies would be interested in standardizing AI algorithms, which would minimize their competitive edge.
Apart from the technical discussions, there were questions and concerns regarding the US Government order banning Huawei. This was the first major RAN plenary meeting after the executive order imposing the ban was issued. From the discussions, it seemed like “business as usual.” We will know the real effects when the detailed discussions start in the coming weeks.
On a closing note, many compare the standardization process to watching a glacier move. On the contrary, I found it very interesting and amusing, especially how the consensus process among competitors and collaborators works. The meeting was always lively, with a lot of arguments and counter-arguments. We will see whether my view changes in the future! So, tune in to updates from future Rel. 17 meetings to hear about the progress.
I just returned from a whirlwind session of 3GPP RAN Plenary #86, held at the beautiful beach town of Sitges in Spain. The meeting finalized a comprehensive package with more than 30 Study and Work Items (SIs and WIs) for Rel. 17. With a mix of new capabilities and significant improvements to existing features, Rel. 17 is set to define the future of 5G. It is expected to be completed by mid- to end-2021.
<<Side note: If you would like to understand more about how 3GPP works, read my series “Demystifying Cellular Standards and Licensing.”>>
Although the package looks like a laundry list of features, it gives a window into the strategies and capabilities of different member companies. Some are keen on investing in new, path-breaking technologies, while others are looking to optimize existing features or are working on fringe or very specific areas.
The Rel. 17 SIs and WIs can be divided into three main categories.
Blazing a new trail
These are the most important new concepts being introduced in Rel. 17 that promise to expand 5G’s horizon.
XR (SI) – The objective is to evaluate and adopt improvements that make 5G even better suited for AR, VR, and MR. It includes evaluating distributed architectures harnessing the power of edge-cloud and device capabilities to optimize latency, processing, and power. Lead (aka Rapporteur) – Qualcomm
NR up to 71 GHz (SI and WI) – This is in the new section because of a twist. The WI is to extend the current NR waveform up to 71 GHz, and the SI is to explore new, more efficient waveforms for the 52.6 – 71 GHz band. Lead – Qualcomm and Intel
NR-Light (SI) – The objective is to develop cost-effective devices with capabilities that lie between full-featured NR and Low Power Wireless Access (e.g., NB-IoT/eMTC), for example, devices that support tens to hundreds of Mbps vs. multi-gigabit speeds. The typical use cases are wearables, Industrial IoT (IIoT), and others. Lead – Ericsson
Non-Terrestrial Network (NTN) support for NR & NB-IoT/eMTC (WI) – A typical NTN is a satellite network. The objective is to address verticals such as mining and agriculture, which mostly operate in remote areas, as well as to enable global asset management, transcending continents and oceans. Lead – MediaTek and Eutelsat
Perfecting the concepts introduced in Rel. 16
Rel. 16 was a short release with an aggressive schedule. It improved upon Rel. 15 and brought in some new concepts. Rel. 17 aims to make those new concepts well-rounded.
Integrated Access & Backhaul – IAB (WI) – Enable cost-effective and efficient deployment of 5G by using wireless for both access and backhaul, for example, using relatively low-cost and readily available millimeter wave (mmWave) spectrum in IAB mode for rapid 5G deployment. Such an approach is especially useful in regions where fiber is not feasible (hilly areas, emerging markets). Lead – Qualcomm
Positioning (SI) – Achieve centimeter-level accuracy, based only on cellular connectivity, especially indoors. This is a key feature for wearables, IIoT, and Industry 4.0 applications. Lead – CATT (NYU)
Sidelink (WI) – Expand use cases from V2X-only to public safety, emergency services, and other handset-based applications by reducing power consumption and latency and improving reliability. Lead – LG
Small data transmission in “Inactive” mode (WI) – Enable such transmissions without going through the full connection setup, to minimize power consumption. This is extremely important for IIoT use cases such as sensor updates, and also for smartphone chat apps such as WhatsApp, QQ, and others. Lead – ZTE
IIoT and URLLC (WI) – Evaluate and adopt any changes that might be needed to use the unlicensed spectrum for these applications and use cases. Lead – Nokia
Fine-tuning the performance of basic features introduced in Rel. 15
Rel. 15 introduced 5G, with a primary focus on enhanced Mobile Broadband (eMBB). Rel. 16 enhanced many of the eMBB features, and Rel. 17 is now trying to optimize them even further, especially based on the learnings from early 5G deployments.
Further enhanced MIMO – FeMIMO (WI) – This improves the management of beamforming and beamsteering and reduces associated overheads. Lead – Samsung
Multi-Radio Dual Connectivity – MRDC (WI) – Mechanism to quickly deactivate unneeded radio when user traffic goes down, to save power. Lead – Huawei
Dynamic Spectrum Sharing – DSS (WI) – DSS had a major upgrade in Rel 16. Rel 17 is looking to facilitate better cross-carrier scheduling of 5G devices to provide enough capacity when their penetration increases. Lead – LG
Coverage Extension (SI) – Since many of the spectrum bands used for 5G are higher than those used for 4G (even in sub-6 GHz), this will look into extending 5G coverage to balance the difference between the two. Lead – China Telecom and Samsung
Along with these, many other SIs and WIs, including Multi-SIM, RAN Slicing, Self-Organizing Networks, QoE Enhancements, NR-Multicast/Broadcast, UE power saving, etc., were adopted into Rel. 17.
Other highlights of the plenary
Unlike previous meetings, there were more delegates from non-cellular companies this time, and they participated very actively in the discussions as well. For example, a representative from Bosch was a passionate proponent of automotive needs in the Sidelink enhancements. I have spoken with people who facilitate the discussions between 3GPP and the industry body 5G Automotive Association (5GAA). This is an extremely welcome development, considering that 5G will transform these industries. Incorporating their needs at the grassroots level, during the standards definition phase, allows the ecosystem to build solutions that are market-ready for rapid deployment.
There was a rare, very contentious debate in a joint session between the RAN and SA groups. The debate was whether to set the RAN SI and WI completion timeline at 15 months, as currently planned, or extend it to 18 months. The reason for the latter is that TSG-SA is late with Rel. 16 completion and is consequently lagging on Rel. 17. Setting an 18-month completion target for RAN would allow SA to catch up and align both groups to finish Rel. 17 simultaneously. However, RAN, which runs a tight ship, is not happy with the delay. Even after a lengthy discussion, the issue remains unresolved.
<<Side Note: If you would like to know the organization of different 3GPP groups, including TSGs, check out my previous article “Who are the unsung heroes that create the standards?” >>
It would be remiss of me not to mention the excellent project management skills exhibited by the RAN chair, Mr. Balazs Bertenyi of Nokia Bell Labs. Without his firm, yet logical and unbiased, decision making, it would have been impossible to finalize all these things in a short span of four days.
In closing
Rel. 17 is a major release in the evolution of 5G that will expand its reach and scope. It will 1) enable new capabilities for applications such as XR; 2) create new categories of devices with NR-Light; 3) bring 5G to new realms such as satellites; and 4) make possible the Massive IoT and Mission-Critical Services vision set out at the beginning of 5G, while also improving on the excellent start 5G has gotten with Rel. 15 and eMBB. I, for one, feel fortunate to witness its transformation from concept to completion.
With the COVID-19 novel coronavirus creating havoc and upsetting everybody’s plans, the question on the minds of many people who follow standards development is, “How will it affect the 5G evolution timeline?” The question is even more relevant for Rel. 16, which is expected to be finalized by June 2020. I talked at length about this with two key leaders of the industry body 3GPP—Mr. Balazs Bertenyi, the Chair of the RAN TSG, and Mr. Wanshi Chen, Chair of the RAN1 Working Group (WG). The message from both was that Rel. 16 will be delivered on time. The Rel. 17 timeline was a different story, though.
<<Side note: If you would like to know more about 3GPP TSGs and WGs, refer to my article series “Demystifying Cellular Patents and Licensing.” >>
3GPP meetings are spread throughout the year. Many of them are large conference-style gatherings involving hundreds of delegates from across the world. WG meetings happen almost monthly, whereas TSG meetings are held quarterly. The meetings are usually distributed among major member countries and regions, including the US, Europe, Japan, and China. In the first half of this year, there were WG meetings scheduled in Greece in February, and in Korea, Japan, and Canada in April, as well as a TSG meeting in Jeju, South Korea, in March. But because of the virus outbreak, all those face-to-face meetings were canceled and replaced with online meetings and conference calls. As it stands now, the next face-to-face meetings will take place in May, subject to developments in the virus situation.
Since 3GPP runs on consensus, the lack of face-to-face meetings certainly raises concerns about the progress that can be made, as well as the possible effect on timelines. However, the duo of Mr. Bertenyi and Mr. Wanshi are working diligently to keep the well-oiled standardization machine going. Mr. Bertenyi told me that although face-to-face meetings are the best and most efficient option, 3GPP is making elaborate arrangements to replace them with virtual means. They have adopted a two-step approach: 1) further expand the ongoing email-based discussions; 2) hold multiple simultaneous conference calls mimicking the actual meetings. “We have worked with the delegates from all participant countries to come up with a few convenient four-hour time slots, and will run simultaneous online meetings/conference calls and collaborative sessions to facilitate meaningful interaction,” said Bertenyi. “We have stress-tested our systems to ensure their robustness to support a large number of participants.”
Mr. Wanshi, who leads the largest working group, RAN1, says that they have already completed a substantial part of the Rel. 16 work and have achieved the functional freeze. So, the focus is now on the RAN2 and RAN3 groups, whose work is in full swing. The current schedule is to achieve what is called the ASN.1 freeze in June 2020. This milestone establishes a stable specification baseline from which vendors can start building commercial products.
It’s reasonable to say that, notwithstanding any further disturbances, Rel. 16 will be finalized on time. However, things are less certain for Rel. 17. Mr. Bertenyi stated that based on the meeting cancellations, it seems inevitable that the Rel. 17 completion timeline will shift by three months, to September 2021.
It goes without saying that these plans are based on the current state of the outbreak. If the situation changes substantially, all the plans will be up in the air. I will keep monitoring the developments and report back. Please make sure to sign up for our monthly newsletter at TantraAnalyst.com/Newsletter to get the latest on standardization and the telecom industry at large.
It is election time at 3GPP, and last week was the ballot for the chairmanship of the prestigious RAN Technical Specification Group (TSG). Dr. Wanshi Chen of Qualcomm came out the winner after a hard-fought race. I caught up with Wanshi right after the win to congratulate him and discuss his vision for the group as well as the challenges and opportunities that lie ahead. Here is a quick primer on the 3GPP ballot process, along with highlights from my discussion with Wanshi.
Side note: If you would like to know more about 3GPP Rel. 17, please check out the earlier articles in the series.
3GPP TSGs and elections
As I have explained in my article series “Demystifying Cellular Patents and Licensing,” 3GPP has three TSGs, responsible for the radio access network, the core network, and services and system aspects, aptly named TSG-RAN, TSG-CN, and TSG-SA. Among these, TSG-RAN is probably the biggest in terms of size, scope, and number of activities. It is managed by one chair and three vice-chairs. The chair ballot was last week (starting March 16th, 2021), and the vice-chair ballot is happening as this article is being published.
The primary objective of the RAN chair is to ensure all the members work collaboratively to develop next-generation standards through 3GPP’s marquee consensus-based, impartial approach. The chair position carries a lot of clout and prestige. The chairmanship represents the collective confidence of the entire 3GPP community, and the chair provides vision and leadership to the entire industry. That leadership is especially crucial now, when the industry is at the critical juncture of taking 5G beyond conventional cellular broadband to many new industries and markets.
For the candidates, the 3GPP election is a long-drawn process, starting more than a year before the actual ballot. The credibility and competence of the individual candidates, as well as of the companies they represent, are put to the test. Although delegates vote as individuals in a secret ballot, the competitive positioning between member companies, and sometimes regional dynamics, may play an important role.
During the actual election, a winner is decided if any candidate gets more than 71% of the votes in either the first or the second round. If not, a third, run-off round ensues, and whoever gets a simple majority there wins the race. This time, there were four candidates in the fray: Wanshi Chen of Qualcomm, Matthew Baker of Nokia, Richard Burbidge of Intel, and Xu Xiaodong of China Mobile. The election did go to the third, run-off round, where Wanshi Chen won against Matthew Baker by a comfortable margin.
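To illustrate the round structure just described, here is a minimal sketch of the ballot logic: a supermajority of more than 71% wins outright in round one or two; otherwise, a run-off is decided by simple majority. The candidate names and vote counts in the example are illustrative only, since the actual tallies are secret.

```python
# Minimal sketch of the 3GPP ballot rounds described above. The 71%
# threshold and run-off rule come from the article; candidate names and
# vote counts are purely illustrative.

def run_election(rounds: list[dict[str, int]]) -> str:
    """rounds: one {candidate: votes} tally per round, run-off last."""
    # Rounds 1 and 2: a candidate wins outright with more than 71%.
    for tally in rounds[:2]:
        total = sum(tally.values())
        leader, votes = max(tally.items(), key=lambda kv: kv[1])
        if votes > 0.71 * total:
            return leader
    # Run-off round: simple majority between the remaining candidates.
    return max(rounds[-1].items(), key=lambda kv: kv[1])[0]

# No supermajority in rounds 1 or 2, so the run-off decides.
print(run_election([
    {"A": 40, "B": 35, "C": 15, "D": 10},
    {"A": 45, "B": 40, "C": 15},
    {"A": 55, "B": 45},
]))  # "A"
```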
New chair’s vision for the next phase of 5G
Dr. Wanshi Chen is a prolific inventor, a researcher, and a seasoned standards leader. He has been part of 3GPP for the last 13 years. He is currently the Chair of the RAN1 Working Group and was a vice-chair of the same group before that. RAN1 is one of the largest working groups within 3GPP, with up to 600 delegates. Wanshi has successfully presided over the group during critical times. For example, he took over the RAN1 chairmanship right after the 5G standardization acceleration and was instrumental in finalizing 3GPP Rel. 15 in record time. Following that, he also played a key role in finishing Rel. 16 on time as planned, despite the enormous workload and the unprecedented disruptions caused by the onset of the Covid-19 pandemic.
The change of guard at the RAN TSG is happening at a crucial time for 5G, when it is set to transform many verticals and industries beyond smartphones. 3GPP has already set a solid foundation with Rel. 16, Rel. 17 development is in full swing, and Rel. 18 is being conceptualized. The next chair will have a unique opportunity to shape the next phase of 5G. Wanshi said, “Industry always looks to 3GPP for leadership in exploring the new frontiers, providing the vision, and developing technologies and specifications to pave the way for the future. It is critical for 3GPP to maintain a fine balance between the traditional and newer vertical domains and evolve as a unified global standard by considering inputs from all regions of the world.”
Entering new markets and new domains is always fraught with challenges and uncertainties. However, “Such transitions are not new to 3GPP,” says Wanshi, “We worked across the aisle and revolutionized mobile broadband with 4G, and standardized 5G in a record time. I am excited to be leading the charge and extremely confident of our ability to band together as an industry and proliferate 5G everywhere.”
It is indeed interesting to note that Qualcomm was also at the helm of the RAN TSG when 5G was accelerated. Lorenzo Casaccia, Qualcomm’s VP of Technical Standards and another veteran of 3GPP, said, “The primary task of the chair is to foster consensus among all member companies, and facilitating the continued expansion of 5G, and potentially formulating initial plans toward the industry’s 6G vision.” He added, “Having known Wanshi for years, I am extremely confident of his abilities to lead 3GPP toward that vision.”
The tenure of the chair is two years, but usually people serve two consecutive terms, totaling four years. That means Wanshi will have a minimum of two years and a maximum of four years to show his magic, starting from June 2021. I wish him all the best in his new position. I will be closely watching him, as well as 3GPP, as 5G moves into its next phase.
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
The twin events of 3GPP RAN Plenary #92e and the Rel. 18 workshop are starting to shape the future of 5G. The plenary substantially advanced Rel. 17 development, and the workshop kick-started the Rel. 18 work. Amidst these two, 3GPP also approved “5G Advanced” as the marketing name for releases 18 and beyond. Being a 3GPP member, I had a front-row seat to witness all the interesting discussions and decisions.
With close to 200 global operators already live with the first phase of 5G, and almost every cellular operator either planning, trialing, or deploying their first 5G networks, the stage is set for the industry to focus on the next phase of 5G.
Solid progress on Rel. 17, projects mostly on track
The RAN Plenary #92e was yet another virtual meeting, with discussions held through a mix of emails and WebEx conference sessions. It was also the first official meeting for the newly elected TSG RAN chair, Dr. Wanshi Chen of Qualcomm, and the three vice-chairs: Hu Nan of China Mobile, Ronald Borsato of AT&T, and Axel Klatt of Deutsche Telekom.
Most of the plenary time was spent discussing various aspects of Rel. 17, which has a long list of features and enhancements. For easy reference and better understanding, I (not 3GPP) divide them into three major categories, as below:
New concepts:
Enhancements for better eXtended Reality (XR), mmWave support up to 71 GHz, new connection types such as NR Reduced Capability (RedCap, aka NR-Light), and Non-Terrestrial Network (NTN) support for NR and NB-IoT/eMTC.
Improving Rel. 16 features:
Enhanced Integrated Access & Backhauls (IAB), improved precise positioning and Sidelink support, enhanced IIoT and URLLC functionality including unlicensed spectrum support, and others.
Fine-tuning Rel. 15 features:
Further enhanced MIMO (FeMIMO), Multi-Radio Dual Connectivity (MRDC), Dynamic Spectrum Sharing (DSS) enhancements, Coverage Extension, Multi-SIM, RAN Slicing, Self-Organizing Networks (SON), QoE Enhancements, NR-Multicast/Broadcast, UE power saving, and others.
For details on these features please refer to my article series “The Chronicles of 3GPP Rel. 17.”
There was a lot of good progress on many of these features at the plenary. All the leads reaffirmed the timelines agreed upon in the previous plenary. It was also decided that all the meetings in 2021 will be virtual; face-to-face meetings will hopefully resume in 2022.
3GPP RAN TSG meeting schedule (Source: 3gpp.org)
Owing to the workload and the difficulties of virtual meetings, the possibility of down-scoping some features was also discussed. These include some aspects of FeMIMO and IIoT/URLLC. Many delegates agreed that it is better to focus on a robust definition of only certain parts of the features rather than diluted full specifications. The impact of this down-scoping on performance is not fully known at this point. The discussion is ongoing, and a final decision will be taken during the next RAN plenary, #93e, in September 2021.
The dawn of 5G Advanced
Releases 18 and beyond were officially christened 5G Advanced in May 2021 by 3GPP’s governing body, the Project Coordination Group (PCG). This is in line with the tradition set by HSPA and LTE, where the evolutionary steps were given “Advanced” suffixes. The 5G Advanced naming was an important and necessary decision to demarcate the steps in the evolution and to rein in over-enthusiastic marketing folks from jumping early to 6G.
The 5G Advanced standardization process was kickstarted at the 3GPP virtual workshop held between Jun 28th and July 2nd, 2021. The workshop attracted a lot of attention, with more than 500 submissions from more than 80 companies, and more than 1200 delegates attending the event.
The submissions were initially divided into three groups. According to the TSG RAN chair, Dr. Wanshi, the submissions were distributed almost equally among the groups:
- eMBB (enhanced Mobile Broadband) evolution
- Non-eMBB evolution
- Cross-functionalities for both eMBB and non-eMBB driven evolution
After the weeklong discussions (over emails and conference calls), the plenary converged on 17 topics of interest: 13 general topics, three sets of topics specific to RAN Working Groups (WGs) 1-3, and one set for RAN WG4. Most of the topics are substantial enhancements to features introduced in Rel. 16 and 17, such as MIMO, uplink, mobility, precise positioning, etc. They also include evolution of the network topology, eXtended Reality (XR), Non-Terrestrial Networks, broadcast/multicast services, Sidelink, RedCap, and others.
The relatively new concepts that caught my attention are Artificial Intelligence (AI)/Machine Learning (ML), Full and Half Duplex operations, and network energy savings. These have the potential to set the stage for entirely new evolution possibilities, and even 6G.
Wireless networks are extremely complex, highly dynamic, and vastly heterogeneous. There can hardly be a better approach than using AI/ML to solve the hard wireless challenges. For example, a cognitive RAN could herald a new era in networking.
Full-duplex IABs with interference cancellation broke the decades-old practice of separating uplink and downlink in either the frequency or time domain. Applying similar techniques to the entire system has the potential to bring the next level of performance to wireless networks.
Reducing energy consumption has emerged as one of the existential challenges of our times because of its impact on climate change. With 5G transforming almost every industry, it indeed is a worthy effort to reduce energy use. The mobile industry, with the “power-efficient” approach embedded in its DNA, has a lot to teach the larger tech industry in that regard.
In terms of the topics of discussion, Dr. Chen said that he cannot emphasize enough that they are not “Work Items” or “Study Items.” He further added that the list is a great starting point, but much discussion to rationalize and prioritize it is needed, which will start from the next plenary, scheduled for Sep 13th, 2021.
For the full list of Rel. 18/5G Advanced topics, please check this 3GPP post.
In closing
The events in the last few weeks have surely started to define and shape the future evolution of 5G. With Rel. 16 commercialization starting soon, Rel. 17 standardization nearing completion, and Rel. 18 activities getting off the ground, there will be a lot of exciting developments to look forward to in the near future. So, stay tuned.
Demystifying Cellular Patents and Licensing
Editor’s note: This is the first in a series of articles that explores the sometimes obtuse process of standardizing, patenting and licensing cellular technologies.
Answers to the questions you always wondered but were afraid to ask
Patents spark joy in the eyes of innovators! Patents not only recognize innovators' hard work but also provide financial incentives to keep inventing and continue to make the world a better place. Unfortunately, patent licensing, often referred to as Intellectual Property Rights (IPR) licensing, has recently gotten a bad rap. The whole IPR regime seems mystical, veiled under a shroud of confusion, misinformation, and of course, controversies. But tearing that shroud reveals the fascinating metamorphosis of abstract concepts developing into technologies that transform people's lives. This process, in turn, creates significant value for the inventors.
I have been exposed to cellular IPR throughout my career, and I thought I understood it well. But my research into the various aspects of the IPR journey, including creation, evaluation, and licensing, was a real eye-opener, even for me. In a series of articles, I will take you through that same amazing journey, demystifying the myths, the misunderstandings, and the misinterpretations. I will use the standardization of 4G, which has run its full course, and that of 5G, which is ongoing, as the vehicles for our journey. So, get on board, buckle up, and enjoy the ride!
Organizations that build cellular standards
It all starts at the International Telecommunication Union (ITU), a specialized agency of the United Nations. For any new generation of standard (aka “G”), the ITU comes up with a set of minimum performance requirements. Any technology that meets those requirements can be given that specific “G” moniker. For 4G, these requirements were called IMT-Advanced, and for 5G, they are called IMT-2020. In the earlier days of 4G, there were two technologies that got the moniker. One of them, developed by IEEE, was called WiMAX, which no longer exists. The other was developed by the 3rd Generation Partnership Project (3GPP), the most important and visible global cellular specifications organization.
3GPP, as the name suggests, was formed during the 3G days and has been carrying the mantle ever since. 3GPP is a combination of seven telecommunications Standard Development Organizations (SDOs), representing telecom ecosystems in different geographical regions. For example, the Alliance for Telecommunications Industry Solutions (ATIS) represents the USA; the European Telecommunications Standards Institute (ETSI) represents Europe, and so on. In essence, 3GPP is a true representation of the entire global cellular ecosystem.
3GPP develops specifications that are then affirmed as standards by the SDOs in their respective regions. 3GPP's specifications are published as a series of Releases. For example, Release 10 (Rel. 10) had the specifications that met the ITU requirements for 4G (IMT-Advanced). 3GPP sometimes also gives marketing names to a set of these releases. For example, Rel. 8 and 9 were named Long Term Evolution (LTE), Rel. 10-12 were named LTE Advanced, and so on. Rel. 15 includes the specifications needed to meet the 5G (IMT-2020) requirements.
To summarize, ITU stipulates the requirements for any “G,” 3GPP develops the specifications that meet those requirements, and the SDOs affirm those specifications as standards in their respective regions.
How the standards building process works
With that many organizations and their representatives involved, standards development is a long, arduous, and systematic process. 3GPP has several specification working groups focused on different parts of the cellular system and its interworking, including the radio network, core network, devices, and others. The members of these groups are representatives of different SDOs.
Now coming to the actual process itself: the ITU requirements act as goals for 3GPP. The effort starts off with members bringing their proposals, i.e., their innovations, to achieve the set goals. For example, for 4G, one of the proposals was a set of techniques to use OFDMA for high-performance mobile broadband. These proposals are presented in each of the relevant groups, and there are usually multiple proposals for any given problem. All of them are discussed, closely scrutinized, and hotly debated. Ultimately, winning ideas emerge through a consensus process. One of the members of the group is then nominated to be the editor, who distills the winning ideas into a working document. That document is continuously edited and refined in a series of meetings and, when stable, is published as the first draft of the specification. Publishing the first draft is a major milestone for any release. Companies usually start designing their commercial products based on the first draft.
The refinement process continues for a long time even after the first draft; it is akin to how the software “bug fixing” and update process works. Members continuously submit contributions, aka bug-fixes, to refine the draft. Typically, these contributions are substantially higher in volume than the initial proposals, because the initial proposals are radically new concepts or innovations, whereas the later contributions can be trivial, such as editorial corrections. Once all the bug-fixing is done, the final specification is released.
As is evident, for any new innovation to be accepted and included in the standard, it has to go through rigorous vetting and withstand intense scrutiny by peers and competitors. This means inclusion is an explicit recognition by the industry that the said technology is a superior solution to the given problem.
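For readers who think in code, here is a minimal sketch of the lifecycle described above, from member proposals to the final release. The stage names and data structures are my own illustration of the flow, not 3GPP terminology or tooling.

```python
# Hypothetical model of the specification lifecycle described above.
from dataclasses import dataclass, field

STAGES = ["proposals", "working_document", "first_draft", "final_specification"]

@dataclass
class Specification:
    name: str
    stage: str = "proposals"
    proposals: list = field(default_factory=list)      # competing innovations
    contributions: list = field(default_factory=list)  # post-draft "bug fixes"

    def advance(self):
        # Move to the next stage once the group reaches consensus.
        self.stage = STAGES[STAGES.index(self.stage) + 1]

spec = Specification("IMT-Advanced radio interface")
spec.proposals += ["OFDMA downlink", "SC-FDMA uplink"]  # members' ideas
spec.advance()  # editor distills winning ideas into a working document
spec.advance()  # stable document becomes the first draft; product design starts
spec.contributions += ["editorial fix", "corner-case correction"]
spec.advance()  # bug-fixing done, final specification released
print(spec.stage)  # final_specification
```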
3GPP contributions and record-keeping
3GPP is a highly bureaucratic organization, with a robust and well-established administrative and record-keeping system. But for historical reasons, the system is not equally rigorous throughout the process. For example, record-keeping is nominal until the creation of the first draft. The proposals, ideas, and contributions presented during that time are just tagged as “considered” or “treated,” without any specific recognition. However, the record-keeping gets very structured and rigorous after the first draft. The bug-fixing contributions that are adopted into the specification are tagged with more official-sounding names such as “approved,” no matter whether they are trivial or significant. These uneven record-keeping and naming practices have spawned some simplistic, amateurish, and deeply flawed IPR evaluation methods. More on this in later articles.
Nonetheless, 3GPP specification development is a consensus-based, democratic process by design. This necessitates collaboration among members who often have opposing interests. This approach has indeed made 3GPP a great success, enabling the cellular industry to excel and thrive.
With a basic understanding of the organizations and processes in place, we are now well equipped for the next part of our IPR journey: understanding how developing standards is a system-design endeavor solving end-to-end problems, not just a collection of disparate technologies, as we are given to believe. And that's exactly what my next blog in the series will explore. Be on the lookout!
In my previous article in the series, I described the organizations and the process of creating cellular standards. I explained how it is an almost magical process, where scores of industry players, many of whom are staunch competitors, come together in a consensus-based approach to approve new standards. In this article, I will delve into the specifics of how patents, often referred to as Intellectual Property Rights (IPR), are created, valued, licensed, and administered.
Cellular patents are created during the standardization process
The cellular standardization process is primarily a quest to find the best solutions to a system-wide problem. The winning innovations borne out of that process create valuable patents. You can be sure that almost all the ideas presented as candidates for standardization hit the patent offices in various countries before coming to 3GPP. The value of those innovations, and thereby the patents, increases dramatically when they are accepted and incorporated into standards. Inclusion in the standard is also the stamp of approval that the innovation is the best of the crop, as it has won over other competing ideas, as I explained in my previous article.
Another important aspect, especially relevant to cellular patents, is that the innovations presented to standards are solutions to end-to-end system problems. This means those ideas are not specific to just the device or the network, but comprehensive solutions that touch many parts of the system. So, many times, it is very hard to delineate the applicability of those ideas to only one part or section of the system. For example, the MIMO (Multiple Input Multiple Output) technique needs a complete handshake between the device and the network to work. Additionally, many patents might touch many subsystems within the device or the network, which further complicates the effort to isolate their relevance to specific parts. For example, consider how power management and optimization in a smartphone works: it makes the AP, modem, and other subsystems wake up or go to sleep in sync. That innovation might touch all those subsystems in the phone.
All patents are not created equal
Thousands of patents go into building cellular wireless systems, be it devices, radio infrastructure, or core networks. At a very basic level, these patents can be divided into two categories: Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs or NEPs). SEPs are those which are absolutely necessary to build a standard-compliant product and cannot be circumvented. Hence, they are highly valued. On the other hand, non-SEPs are relevant to standards but may not be necessary for the basic functioning of standard-compliant products and can be designed around. For example, for 4G LTE devices, patents that define using OFDMA for cellular connectivity are SEPs, whereas patents that improve the battery life of the devices could be considered non-SEPs.
3GPP and the Standard Development Organizations (SDOs) strongly encourage early disclosure of IPR that members consider essential, or potentially essential, for standards. Further, they require SEPs to be licensed on fair, reasonable, and non-discriminatory (FRAND) terms. There are no such licensing requirements for non-SEPs.
While 3GPP or SDOs make FRAND compliance for SEPs mandatory, they don’t enforce or regulate any specific monetary value for them. They consider the licensing to be a commercial transaction outside their purview, and hence let the market forces decide their worth.
How to value patents?
According to some estimates, there were 250,000 active patents covering smartphones in 2012. As I write this article in 2019, I am sure that number has grown even bigger. The issue then becomes how to determine the value of these patents, and how best to license and administer them.
With the sheer number of patents involved, it is impossible to manage licensing on an individual-patent basis. It is even more impractical to license them at the subsystem or component level because, as mentioned before, it is hard to delineate their applicability to a specific part. So, it indeed is a hard problem to solve. Since cellular standards have been around for a few decades now, it is worthwhile to examine how licensing has been dealt with historically.
In the 2G days, when the cellular markets started expanding, there were a handful of well-established large players such as Ericsson, Nokia, Motorola, Nortel, Alcatel, Siemens, and others. These players not only developed the technologies but also had their own device and network infrastructure offerings. Since it was a small group of players, and all of them needed each other's technology to make their products, they resorted to a simple method of bartering, also known as cross-licensing. Some industry observers and participants accused them of artificially inflating the value of their patents to make it very hard for any new players to enter the market.
With the advent of 3G, Qualcomm appeared on the scene with a unique horizontal business model. Qualcomm's core business was to invent advanced mobile technology, make it accessible to the ecosystem through licensing, and enable everyone to build compelling products based on its technology (Qualcomm initially invested in infrastructure, mobile device, and service provider businesses, which it eventually divested). Qualcomm's licensing made the initial investment more reasonable and the technologies accessible for the OEMs, which significantly reduced the entry barrier. The rise of Apple, Samsung, and LG, as well as scores of Chinese OEMs, can be attributed to it.
Taking the market-forces approach, Qualcomm decided to license its full portfolio, including tens of thousands of patents, for a percentage of the wholesale selling price of the phone. It put a cap on the fee when phone prices started getting higher. Qualcomm decided to license the IPR to the phone OEMs because that's where the full value of its innovations is realized. Apparently, this was also the approach all the patent holders of that time, including Ericsson, Nokia, and others, practiced, as attested by some of these companies during the FTC vs. Qualcomm trial. This practice has continued until now and has withstood challenges all over the world. Of course, there have been challenges and changes to the actual fees charged, but the approach has remained largely intact.
Usually, the actual licensing rates are confidential between licensors and licensees. We got some details during Qualcomm's court cases around the world. What we know as of now is that Qualcomm charges 3.25% of the device wholesale price for its SEPs, and 5% for the full portfolio including both SEPs and non-SEPs. The device price base is capped at a maximum of $400.
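To see what those terms mean in practice, here is a minimal sketch of the royalty arithmetic, assuming only the publicly reported figures above; the function name and the phone prices are my own illustration.

```python
def portfolio_royalty(wholesale_price, rate=0.0325, cap=400.0):
    """Royalty = rate x min(wholesale price, cap).

    Defaults reflect the publicly reported terms above: 3.25% for the
    SEP-only license (use rate=0.05 for the full portfolio), with the
    device price base capped at $400.
    """
    return rate * min(wholesale_price, cap)

# A $999 flagship pays on the capped $400 base; a $150 entry-level
# phone pays on its actual price.
print(portfolio_royalty(999.0))  # 0.0325 * 400 = $13.00
print(portfolio_royalty(150.0))  # 0.0325 * 150 = $4.875
```

Note how the cap makes the fee regressive in price: the flagship's $13 royalty works out to only about 1.3% of its $999 wholesale price.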
There are others in the industry, such as Apple, who are attempting to change this decades-old approach and proposing a new one, sometimes referred to as Smallest Saleable Patent Practicing Unit (SSPPU) pricing. Their argument is that most of Qualcomm's SEPs' value is in the modem, and hence the licensing fee should be based on the price of the modem and not the phone. Obviously, Qualcomm disagrees, and both are fighting it out in courtrooms around the world.
Being an engineer myself, I know that when designing a solution, engineers don't constrain it to a specific unit, subsystem, or part. Instead, they come up with the best solution that effectively solves the problem. Often, by virtue of such an approach, the solution involves the full system, as I explained in the two examples earlier. So, in my view, limiting the value to a specific unit is a simplistic, impractical approach that grossly undervalues the monetizing ability of innovations. Hence, I believe the current approach should continue, letting market forces decide the actual price.
The raging court battles between Apple and Qualcomm over licensing are underway now, and we will see what the courts decide. In the next article, I will look at some of these recent battles between the two behemoths, what their basis was, how they affected the IPR landscape, and more. Please be on the lookout.
The statement “All patents are not created equal” seems like a cliché, but it is absolutely true! The differences between patents are multi-dimensional and much more nuanced than what meets the eye. I touched upon this briefly in my previous article. There is no denying that, going forward, patents will play an increasingly bigger role in cellular, pitting not only companies against each other but also countries against one another for superiority and leadership in technology. Hence, it is imperative that we understand how patents are differentiated, and how their value changes based on their importance.
Let me start with a simple illustration. Consider today's cars, which have lots of different technologies and hence patents. When you compare the patents for the car engine to, say, the patents for the doors, the difference in relative importance is pretty clear. If you look at the standards for building a car, the patents for both the engine and the door are probably listed as essential, i.e., SEPs (Standard Essential Patents). However, the patent related to the engine is at the core of the vehicle's basic functionality. The patent for the door, although essential, is clearly less significant. Another way to look at this is that without the idea of building the engine, there is not even a need for the idea of doors. That means the presence of one is the reason for the other's existence. The same concepts also apply to cellular technology and devices. Some patents are invariably more important than others. For example, if you consider the 5G standard, the patents that cover Scalable OFDMA are fundamental to 5G. They are the core of 5G's famed flexibility to support multi-gigabit speeds, very low latency, and extremely high reliability. You can't compare the value of such a patent to another one that might increase the speed by a few kilobits in a rare use case. Both patents, although SEPs, are far apart in terms of value and importance.
On a side note, if you would like to know more about SEPs, check out my earlier article here.
That brings us to another classic challenge of patent evaluation: patent counting. Counting is the most simplistic and easy-to-understand measure: whoever has the most patents is the leader! Well, just like most simple approaches, counting has a big issue: it is highly unreliable. Let me again explain with an illustration. Consider one person having 52 pennies and a second person having eight quarters. If we apply simple counting as the metric, the first person seems to be the winner, which couldn't be further from the truth: 52 pennies are worth $0.52, while eight quarters are worth $2.00. Applying the same concept to cellular patents, it would be foolish to call somebody a technology leader purely based on the number of patents they own, unless you know what those patents are.
When you look at the 5G standard, it has thousands of SEPs. If you count patents for Scalable OFDMA and similar fundamental, core SEPs with the same weight as minor SEPs that define peripheral and insignificant protocols, you would be highly undervaluing the building blocks of the technology. So, simply counting without understanding the importance of the patents is a very flawed way to gauge technology leadership (a simple weighted comparison is sketched below). Also, the process of designating a certain patent as an SEP is nuanced as well, which makes the system vulnerable to rigging and manipulation, resulting in artificially inflated SEP counts. I will cover this in later articles. This potential for inflating the numbers further exacerbates the problem of patent counting.
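Here is a toy illustration of why raw counts mislead, echoing the pennies-vs-quarters example; the portfolios and importance weights are entirely hypothetical.

```python
# Two hypothetical portfolios: A has many minor patents, B has a few core ones.
portfolio_a = {"core": 2, "peripheral": 50}   # 52 patents in total
portfolio_b = {"core": 8, "peripheral": 0}    # 8 patents in total

# Assumed relative importance of each category (invented for illustration).
weights = {"core": 25, "peripheral": 1}

def raw_count(portfolio):
    return sum(portfolio.values())

def weighted_value(portfolio):
    return sum(weights[category] * n for category, n in portfolio.items())

print(raw_count(portfolio_a), raw_count(portfolio_b))            # 52 8
print(weighted_value(portfolio_a), weighted_value(portfolio_b))  # 100 200
```

By raw count, portfolio A looks more than six times "stronger"; once importance is weighted in, B is worth twice as much, just like the quarters.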
In conclusion, it is amply clear that all patents are not created equal, and simplistic patent counting is not the best measure of somebody's technology prowess. One has to go deeper and understand the patents' importance to realize their value. In my next articles, I will discuss the key patents that define 5G and explore alternative methods for patent evaluation that are possibly more robust and logical. In the meantime, beware, and don't be fooled by entities claiming to be leaders because of the sheer volume of their patent portfolios.
Demystifying cellular patents and licensing – Part 4
3GPP is a mystic organization that many seem to know, but few truly understand. The key players of this efficient and well-regarded organization often work without fanfare or public recognition. But no more! As part of this article series, I go behind the doors, explore the organization, meet the hard-working people, and lay bare the details of its inner workings.
Side note: if you would like to understand the cellular standardization process, please read my previous articles in the series here, here, and here.
“3GPP is a membership-driven organization. Any company interested in telecommunications can join, through one of its SDOs (Standard Development Organizations),” said Mr. Balazs Bertenyi of Nokia Corporation, the current chair of TSG-RAN and a 3GPP veteran. “One of the important aspects of 3GPP is that a large portion of its working-level office bearers are members themselves and are elected by the other fellow members.”
I became a proud member of 3GPP through the American SDO, ATIS, earlier this year.
3GPP organization structure
3GPP consists of three layers, as shown in the schematic: the Project Coordination Group (PCG) at the top, which is more ceremonial; three Technical Specification Groups (TSGs) in the middle, each responsible for a specific part of the network; and multiple Working Groups (WGs) at the bottom, where the actual standards development occurs. There are many ad-hoc groups formed within each of these as well. All these groups meet regularly, as shown in the example meeting cycle.
Inner workings of WGs and the unsung heroes
Let's start with the WGs, specifically the ones that are part of TSG-RAN. Being an RF engineer, I hold these closest to my heart. However, this discussion applies equally to the other TSGs/WGs as well. There are six WGs within TSG-RAN, each with one chair and two vice-chairs.
The best way to understand a group's workings is to analyze how a fundamental 5G feature such as Scalable OFDMA would be standardized. There might be a few proposals from different member companies. The WGs have to evaluate these proposals in detail, run simulations for various scenarios to understand the performance, the pros and cons, competitive benefits, and so on. They have to decide on the best solution and develop standards to implement it across the system. As is evident, the WG chair must facilitate the discussion in an orderly, fair, and impartial way, and let the group reach a consensus decision. As you can imagine, this task is a combination of science and art: bringing people together through collaboration and personal relationships, and making sure they arrive at meaningful conclusions, all while under tremendous time pressure.
In such a situation, WG members expect the chair to be fair, balanced, and trustworthy. Many times, the companies the members represent are bitter competitors with diametrically opposed interests, each trying to push their views and assertions for adoption. “It is quite a task bringing these parties together for a consensus-based agreement, in the true spirit of 3GPP,” says Mr. Bertenyi. “It requires deep technical knowledge, a lot of patience, empathy, leadership, and ability to find common ground to be a successful WG chair.” That is why 3GPP's process of electing chairpersons through a ballot, instead of nomination, makes perfect sense.
The members of the WG vote to elect somebody they trust and respect to lead the group. Before the new officer takes over, his or her employer has to formally sign a support letter declaring that the officer will get all the support needed from the company to successfully undertake the duties of a neutral chair. “From then on the elected officer stops being a delegate for his company, and becomes a neutral facilitator working in the interest of 3GPP and the industry,” added Mr. Bertenyi. “Being a chair, I have presided over many decisions that were not supported by my company but were the best way forward in a given dispute. I have seen it often happen in WGs as well. For example, I saw Wanshi Chen, chair of RAN-1, do the same many times.”
The WG members are primarily inventors trying to develop solutions for difficult technological challenges. The WG chairs are at the forefront of this effort, and by virtue of that, it is not uncommon for them to be prolific inventors themselves and be a party to a large number of patents. This, in fact, proves that they are worthy of the leadership role they are given.
“It wouldn’t be untrue to say that the hard-working WG chairs are truly unsung heroes of 3GPP, and they deserve much respect and accolades,” says Mr. Bertenyi. “I am extremely proud to be working with all the chairs of our RAN WGs—Wanshi Chen of Qualcomm heading RAN-1, Richard Burbidge of Intel heading RAN-2, Gino Masini of Ericsson heading RAN-3, Xutao Zhou of Samsung heading RAN-4, Jacob John of Motorola heading RAN-5, Jurgen Hoffman of Nokia heading RAN-6.”
Responsibilities of TSG and PCG
While the WGs are the workhorses, the TSGs set the direction and manage resource allocation and the on-time delivery of specifications.
There are three TSGs: one each for the radio network and the core network, and a third for systems work. Each TSG has a chair and three vice-chairs, all elected by the members. They provide direction based on market conditions and needs. For example, the decision to accelerate the 5G timeline in 2016 was taken by TSG-RAN. The chairs are usually accomplished experts and excellent managers. I witnessed how effectively Mr. Bertenyi conducted the recent RAN#84 plenary while being fair, cheerful, and decisive at the same time.
The PCG is, on record, the highest decision-making body, dealing mostly with non-technical project-management issues. It is chaired by the partner SDOs on a rotational basis. It provides oversight, formally adopts the TSG work items, and ratifies election results and resource commitments.
Elections and leadership tenure
As mentioned, all the working-level 3GPP office bearers are duly elected by fellow 3GPP members in a completely transparent ballot process. The standard tenure of each office bearer is two years, but they are often reelected for a second term based on their performance, as recognition of their effective leadership. Many times, members start in a vice-chair position and move up to the chair level, again based on performance.
In closing
3GPP is a truly democratic, consensus-based organization. Its structure and culture, which encourage collaboration even among bitter business rivals, have made it a premier standards development organization. The well-managed cellular technology roadmap and the success of the mobile industry at large are a testament to 3GPP's systematic and broad-based approach.
Quick note: I will be attending the next RAN-1 WG meeting, scheduled for Aug 26-30th, 2019, in Prague, Czech Republic. So, stay tuned for the 3GPP Rel. 16 and Rel. 17 progress report.
While the 5G race rages on, so does the race to be perceived as the technology leader in 5G. This race transcends companies, industries, regions, and even countries. No major country, be it a new power such as China or existing leaders such as the US and Europe, wants to be seen as a laggard. In this global contest, 5G patents and IPR (Intellectual Property Rights) are the most visible battleground. With so many competing entities and interests, it indeed is hard to separate substance from noise. But one profound truth prevails amid all the chaos: quality of inventions always beats quantity.
The fierce competition to be the leader has made companies invest substantially in innovating new technology as well as playing a key role in standards development. Since the leadership battles are also fought in the public domain, the claims of leadership have been relegated to simplistic number counting, such as how many patents one has or, much worse, how many contributions one has submitted to the standards. In the past, there have been many reports dissecting these numbers in many ways and claiming one or another company to be the leader.
The awakening – Quality matters
Fortunately, there now seems to be some realization of the perils of this simplistic approach to a complex issue. There have been recent reports about why quality, not quantity, matters. For example, last month, the well-known Japanese media house Nikkei published this story based on the analysis of Patent Result, a Tokyo-based research company. Even the chair of the 3GPP RAN group, Mr. Balazs Bertenyi, published a blog highlighting how technology leadership goes much beyond simple numbers.
Ills of contributions counting
One might ask, what's wrong with number counting; after all, isn't it simple and easy to understand? Well, simple is not always the best choice for complex issues. Let me illustrate this with a realistic example. One can easily create the illusion of technology leadership by creating a large number of standards contributions. The standards body 3GPP, being a member-run organization, has an open policy for contributions. As I explained in the first article of this “Demystifying cellular patents” series, there is ample opportunity to goose up the number of contributions during the “bug-fix” stage, when the standard is being finalized. Theoretically, any 3GPP member can make an unlimited number of contributions, as long as nobody opposes them. Since 3GPP is also a consensus-driven organization, its members are hesitant to oppose fellow members' contributions unless they are harmful. It's an open question whether anybody has exploited this vulnerability; if one looks closely, they might find instances of it. Nonetheless, the possibility exists, and hence the mere number of contributions can't be an indicator of anything important, let alone technology leadership.
<<Side note: You can read all the articles in the series to understand the 3GPP standardization process here.>>
In his blog, Mr. Bertenyi says, “…In reality, flooding 3GPP standards meetings with contributions is extremely counterproductive...” It unnecessarily increases the workload on the standards working groups and extends the timelines, while reducing the focus on the contributions that really matter.
So what matters? Again, Mr. Bertenyi explains, “…The efficiency and success of the standards process are measured in output, not input. It is much more valuable to provide focused and well-scrutinized quality input, as this maximizes the chances of coming to high-quality technical agreements and results.”
Contrasting quantity with quality
Another flawed approach is measuring technology prowess by counting the number of patents a company holds. Unlike mere contributions, the number of patents has some value. However, this number can't be the only, or even a meaningful, measure of leadership. What matters is the specific technology those patents bring to the table, that is, how important they are to the core functioning of the system. The Nikkei article, which is based on Patent Result's analysis, sheds light on this subject.
Patent Result did a detailed analysis of the patents filed in the U.S. by major technology companies, including Huawei, Intel, Nokia, Qualcomm, and many others. It assessed the quality of the patents according to a set of criteria, including originality, actual technological applications, and versatility. The resulting ranking based on patent quality was far different from the ranking based on the number of patents.
Some might ask, isn't the SEP (Standard Essential Patent) designation supposed to separate the essential, i.e., important ones from the non-important ones? Well, in 3GPP, SEP designation is a self-declaration, and because of that, there is ample scope for manipulation. That process is a major issue in itself, and a story for another day! So, if something is an SEP, it doesn't necessarily mean it is valuable. In my previous article, “All patents are not created equal,” I compared and contrasted two SEPs in a car: one for the engine and another for its fancy doors. While both are “essential” to making a car, the importance of the first is magnitudes higher than the second. In the same vein, you couldn't call a company with a large number of “car-door” patents a leader over somebody with fewer but more important “car-engine” patents.
So, the bottom line is, when it comes to patents, quality beats quantity any day of the week, every time!
As I discussed in my previous articles, the industry is finally waking up to the fact that when it comes to patents, quality matters much more than quantity. There is also the realization that simplistic approaches, such as counting standards contributions or the mere number of patents, don't give an accurate picture of technology leadership. At the same time, assessing the quality of patents has been a challenge. While the gold standard, in my view, is market-based valuation, new quality-assessment metrics and methods are emerging. These are designed to consider many aspects, such as how fundamental and market-impacting the inventions are, how wide the reach of the patents is, how many other patents are derived from them, etc., and to come up with a quality score. I will explore many of them as part of this article series; here is a discussion of the first one on the list.
<<Side note: You can read the previous articles in the series here. >>
Patent Asset Index™ by LexisNexis® Patent Sight®
Patent Sight is a leading patent analytics and valuation firm based in Germany. Its services are utilized by many leading institutions in the world, including the European Commission. Patent Sight has developed a unique methodology that considers the importance of a patent in the hierarchy of technologies, its geographical coverage, and other parameters to produce a score called the Patent Asset Index. This index allows industry as well as general audiences not only to understand the comparative value of the patents that various companies hold but also to rank them in terms of technology leadership.
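To give a feel for how such a composite score behaves, here is a purely hypothetical sketch in the spirit of the parameters named above (importance within the technology hierarchy, geographical coverage). The actual Patent Asset Index methodology is proprietary; the formula, factor names, and numbers below are invented for illustration only.

```python
# Hypothetical composite patent-quality score (NOT the real Patent Asset Index).
def patent_score(forward_citations, market_coverage):
    # Importance proxy: how often later patents build on this one.
    technology_relevance = 1 + forward_citations
    # market_coverage: assumed share (0.0-1.0) of the relevant world
    # market where the patent is in force.
    return technology_relevance * market_coverage

def portfolio_index(patents):
    # A portfolio's index is the sum of its patents' scores, so one
    # widely cited, broadly filed patent can outweigh dozens of narrow,
    # never-cited ones.
    return sum(patent_score(cites, coverage) for cites, coverage in patents)

core_heavy = [(40, 0.9)]        # one foundational, broadly filed patent
count_heavy = [(0, 0.2)] * 20   # twenty minor, narrowly filed patents

print(portfolio_index(core_heavy))   # (1 + 40) * 0.9 = 36.9
print(portfolio_index(count_heavy))  # 20 * (1 * 0.2) = ~4.0
```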
Here are some of the Patent Sight charts regarding 4G and 5G patents, presented at a recent webinar hosted by Gene Quinn of IPWatchDog, where William Mansfield of Patent Sight shared them. The first chart shows the number of patents filed by some of the top cellular companies between 2000 and 2018. As is evident, if quantity were the only metric, one could say that companies such as Qualcomm, Huawei, Nokia, LG, and Samsung are far ahead of the others.
Now let’s look at the Patent Asset index chart of the same companies:
Under this assessment, the scene is vastly different. Qualcomm is still in the lead, but there is a drastic change in the ranking and the relative standings of the others. Qualcomm is far ahead of its peers, followed by Samsung as a distant second, then LG, Nokia, and InterDigital. Surprisingly, Huawei, which was neck and neck with Qualcomm in terms of the sheer number of patents, is much further behind.
Why do quality vs. quantity comparisons matter?
Unquestionably, patents are born out of important innovations. However, as I have explained in this article, all patents are not created equal. Also, when it comes to cellular patents, there is a widely believed myth that Standard Essential Patents (SEPs), as the name suggests, are extremely important and core to the technology. However, because of 3GPP's self-declaration policy, this designation is not as reliable as it seems and is highly susceptible to abuse. For example, companies with deep pockets that are interested in boosting their patent profiles might invest large sums in developing non-core patents and declaring them as SEPs. That's why quality indicators such as the Patent Asset Index and other such approaches are important tools to assess the relative value of patent portfolios. In the next articles, I will discuss other indicators and the specific parameters and methodologies involved in quality determination. So be on the lookout!
As a keen industry observer, I have watched with awe the attention patents (aka IPR, Intellectual Property Rights) have recently gotten. And that has everything to do with the importance 5G has gained. Most of the stakeholders now realize that IPR leadership indeed means technology leadership. But the issue many do not understand is how to determine IPR leadership. A lot of them, especially the gullible media, falsely believe that owning a large number of patents represents leadership, no matter how insignificant those patents are. I have been on a crusade to squash that myth and have written many articles and published a few podcasts to that effect. Gladly though, many are realizing this now and speaking out. I came across one such report, titled “5G Technological Leadership,” published by the well-known US think tank, the Hudson Institute.
Infrastructure is only one of the many 5G challenges
The report recognizes the confusion the 5G policy discussion in the US is mired in, and how misdirected the strategy discussions have been. It rightly points out that the well-publicized issues of the lack of 5G infrastructure vendor diversity, as well as the size and speed of 5G deployments, are only small and easy-to-understand parts of the multifaceted 5G ecosystem. The authors of the report, Adam Mossoff and Urška Petrovčič, strongly suggest that it would be wrong for policymakers to focus only on these aspects. I could not agree more.
How to determine technology leadership?
A much more important aspect of 5G is the ownership of the foundational and core technologies that underpin its transformative ability. With 5G being a key element of the future of almost every industry on the planet, whoever owns those core technologies will not only win the 5G race but will also wield unassailable influence over the global industry and the larger economy.
As mentioned earlier, technology leadership stems from IPR ownership. This is not lost on companies and countries that aspire to be technology leaders. This is clearly visible in the number of 5G patents filed by various entities. And that brings us to the critical question “Does having a large number of patents bring technology leadership?”
Patent counting is an unreliable method
It is heartening to hear the report decisively say that patent counting is an unreliable method to determine 5G leadership, and that it would be a mistake to use it as such. Further, the report asserts that the determination boils down to the quality of those patents, not their quantity. The quality of patents here means how fundamental and important they are to the functioning of 5G systems.
Side note: Please check out these two articles (Article 1, Article 2) to understand how to determine the quality of patents.
The misguided focus on patent quantity has made many companies and even countries pursue options that are on the fringes of what is considered ethical. For example, the report attributes the recent rise in 5G patents filed by Chinese individuals and companies to the government's direct subsidies for filing patents, not necessarily to an increase in innovation. There might be other unscrupulous reasons too, such as companies over-declaring Standard Essential Patents to achieve broad coverage or to avoid unknowingly violating disclosure requirements, and others.
As I have discussed in my previous articles and podcasts, the standards-making body 3GPP’s honor-based system has enough loopholes for bad actors to goose up their patent count without adding much value or benefit.
The Hudson Institute report quotes an important point raised by the UK Supreme court—Reliance on patent counting also risks creating “perverse incentives,” wherein companies are incentivized to merely increase the number of patents, instead of focusing on innovation.
All this boils down to one single fact: when it comes to patents, quality is much more important than quantity.
In closing
After the initial misguided focus on the quantity of patents as a measure of technology leadership, the realization of the importance of patent quality is slowly sinking in. As awareness of the transformational impact of 5G spreads, awareness of the importance of the quality of 5G patents is growing as well. The Hudson Institute, being a think tank and an influential public policy organization, is rightly pointing out the key issues that are either missing or misdirected in the national technology policy debate. This is especially true for the 5G patent quality discussion. I hope the policymakers and the industry take notice and reward companies with high-quality patents while penalizing the manipulators.
If you would like to read more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Samsung Networks
The virtualization of cellular networks has been ongoing for some time. But virtualizing the Radio Access Network (RAN) has always been an enigma and was the final frontier for the trend. The rising star of the 5G infrastructure business, Samsung, jumped onto the virtualized RAN (vRAN) bandwagon with its announcement yesterday. I think this will prove to be another turning point in moving the industry from the decades-old “custom hardware + integrated software” approach toward the modern, efficient, and flexible vRAN architecture.
What is vRAN and why does it matter?
Ever since the dawn of the cellular industry, radio networks have been thought of as the most complex part of the equation, mainly because of the dynamic nature of wireless links, compounded by the challenges of mobility. The “custom hardware + integrated software” approach proved to be the winning combination to tame that complexity. The resulting operator lock-in, and the huge entry barrier it created for new entrants, made the established infrastructure players wholeheartedly embrace that approach. As cellular technology moved from 2G to 3G, 4G, and now 5G, the complexity of radio networks grew exponentially, keeping the approach intact.
But things are rapidly changing. Thanks to the accelerated growth of computing, it is now indeed possible to break this combination and use commercial off-the-shelf (COTS) hardware with disaggregated software. This new approach is called vRAN.
The advantages of vRAN are obvious. It allows flexibility and drastically reduces entry barriers for new players, which leads to an expanded ecosystem. Operators will be able to choose the best hardware and software from different players and deploy the best-performing systems. All this choice increases competition and substantially reduces costs, while increasing the pace of innovation.
Samsung’s 5G vRAN offerings
Samsung has announced full, end-to-end vRAN offerings for 5G (and 4G). These include virtual Central Unit (vCU), virtual Digital Unit (vDU), and existing Radio Units (RU). According to the press release, vCU was already commercialized in April 2019, and the full system was demonstrated to customers in April 2020. Samsung’s vCU and vDUs run on Intel x86 based COTS servers.
Let me explain the role of these units without going into too much detail. vCUs are responsible for non-real-time functions, such as radio resource management, ciphering, retransmission, etc. On the other hand, vDUs contain the real-time functions related to the actual delivery of data to the device through the RUs. RUs convert digital signals into wireless waves. A single vCU can typically manage multiple vDUs, and a single vDU can connect to multiple RUs.
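For readers who think in code, here is a minimal sketch of that hierarchy; the class and field names are my own illustration, not Samsung's software or any O-RAN API.

```python
# Hypothetical model of the vCU -> vDU -> RU hierarchy described above.
from dataclasses import dataclass, field

@dataclass
class RadioUnit:
    """RU: converts digital signals into wireless waves at a cell site."""
    site: str

@dataclass
class VirtualDU:
    """vDU: real-time functions delivering data to devices via its RUs."""
    rus: list = field(default_factory=list)

@dataclass
class VirtualCU:
    """vCU: non-real-time functions (radio resource management,
    ciphering, retransmission) spanning multiple vDUs."""
    vdus: list = field(default_factory=list)

# One vCU typically manages multiple vDUs; each vDU connects to multiple RUs.
vcu = VirtualCU(vdus=[
    VirtualDU(rus=[RadioUnit("site-1"), RadioUnit("site-2")]),
    VirtualDU(rus=[RadioUnit("site-3")]),
])
print(len(vcu.vdus), sum(len(vdu.rus) for vdu in vcu.vdus))  # 2 vDUs, 3 RUs
```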
“Our vRAN solutions can deliver the same reliability and performance as that of today’s legacy systems,” said Alok Shah, Vice President, Networks Strategy, BD, & Marketing at Samsung Electronics, “while bringing flexibility and cost benefits of virtualization to our customers.”
Another important aspect of the announcement is support for Dynamic Spectrum Sharing (DSS), which allows 5G to utilize the 4G spectrum. This is extremely crucial, especially for operators who have limited low- or mid-band 5G spectrum. Shah mentioned that they have put a lot of emphasis on ensuring smooth DSS interworking between the new 5G vRAN and legacy 4G systems.
A significant step for the industry
Samsung made everybody's head turn when it won a significant share of the 5G market in the USA, beating long-term favorites such as Ericsson and Nokia. This came on the heels of its 5G wins in South Korea and a strong 4G performance in a hyper-competitive and large market like India. Additionally, Samsung's strong financial position gives it a distinct advantage over its traditional rivals.
So, when such a strong player adopts a new trend, the industry takes notice. Until now, the vRAN vendor ecosystem consisted primarily of smaller disruptive players, such as Mavenir, Altiostar, and Parallel Wireless. Major tech players such as Facebook, Intel, Google, Qualcomm, and others are largely observing the developments from the outside. Nokia, another major legacy vendor, recently announced its 5G vRAN offerings as well, with general availability slated for 2021. Samsung's announcement makes vRAN much more real, and its future that much brighter. Also, Samsung, being a challenger, has much more to gain from vRAN than its legacy competitors such as Ericsson, Nokia, and Huawei.
vRAN also opens the possibility of Open RAN, in which vCUs, vDUs, and RUs from different vendors can work with each other, providing even more flexibility for operators. Although Samsung didn't specifically mention this in the PR, Shah confirmed that the use of standardized open interfaces makes their vRAN system inherently open. He also pointed to their growing portfolio of Open RAN-compliant solutions, developed through multiple collaborations with US operators. Open RAN and vRAN have gotten even more attention and importance because of the geopolitical issues surrounding the US ban on Huawei and the associated national security concerns.
Side note: If you would like to learn more about Open RAN architecture and its relevance to addressing the U.S. government’s concerns with Huawei, listen to this Tantra’s Mantra podcast episode.
The generational shift, which requires a major overhaul of network infrastructure, is a perfect opportunity for operators to pursue new technologies and a new approach. However, the move to vRAN will be gradual. Greenfield 5G operators such as Dish Network in the USA might start off with vRAN; some US operators looking at building out 5G on the new mid-band spectrum might use vRAN for that as well, as might enterprises building private networks. The migration of larger legacy networks will happen gradually, over a longer period of time.
In closing
After a long period of skepticism, it seems the market forces are aligning for vRAN. Because of its enormous benefits in terms of flexibility and cost-efficiency, there is a lot of interest in it, along with strong support from large industry players. In such a situation, Samsung's announcement has the potential to be a turning point in moving the industry toward vRAN. In my view, Samsung, with its end-to-end virtualized portfolio and solid financial position, is strongly positioned to exploit that move. For a keen industry observer like me, it will be fascinating to watch how the vRAN saga unfolds.
For more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
While the media is abuzz with the news of Samsung's foldable smartphones, being a network engineer at heart, I am more excited about Verizon and Samsung's recent announcement of the successful completion of 5G virtual RAN (vRAN) trials using the C Band spectrum. Verizon's adoption of vRAN for its network build, and Samsung's support for advanced features such as Massive MIMO (mMIMO) in its vRAN portfolio, bode very well for rapid 5G expansion in the USA. I recently spoke to Bill Stone, VP of technology development and planning at Verizon, and Magnus Ojert, VP and GM at Samsung's Network Business, regarding the announcement as well as the progress of C Band 5G deployments.
The joint trial
The trials were conducted over Verizon's live networks in Texas, Connecticut, and Massachusetts. Since the spectrum is still being cleared for use, Verizon had to get special clearance from the FCC. The trials used Samsung's containerized, cloud-native, fully virtualized RAN software and hardware solutions supporting the 64T64R mMIMO configuration. This configuration is extremely important to Verizon for reasons I will explain later in the article. This trial is yet another critical milestone in Verizon's race to build its C Band 5G network.
Verizon’s race to deploy C Band 5G network
After spending $53B on the C Band auction, Verizon is in a race against itself and its competition to put the new spectrum to use. It needs to have a robust network in place before strong 5G demand outpaces the capacity of its current network. As many of you might know, Verizon is currently using the Dynamic Spectrum Sharing (DSS) technique to opportunistically use its 4G spectrum for 5G, along with focused mmWave deployments. Verizon also needs an expansive coverage footprint to effectively compete against T-Mobile, which is capitalizing on the spectrum trove it got through the Sprint acquisition.
Verizon is busy like a beehive: signing deals with tower companies, prepping sites for deployment, working closely with its vendors, running many trials, and so on. Owning a significant portion of the fiber backhaul to sites is helping Verizon expedite the buildout. Stone confirmed that vRAN will be the mainstay of its C Band deployments, and that Verizon is firmly on the path to transitioning to virtual and Open RAN across the entire network. This will give Verizon more flexibility, agility, and cost-efficiency in enabling new services in the future, especially during the later phases of 5G, when the service expands beyond the smartphone and mobile broadband market. He added that trials like this one are a great step in that direction. Although its vRAN equipment supports open interfaces, the initial deployments will be single-vendor only. I think this “single-vendor vRAN first, multi-vendor Open RAN later” strategy is a smart one that many operators will adopt.
The most interesting C Band development the whole industry is watching is how Verizon's plan to reuse its AWS band (1.7 GHz) site grid for C Band (3.5 GHz) will pan out. According to Stone, one way Verizon is looking to compensate for C Band's smaller coverage footprint is to use the 64T64R antenna configuration. He expects this to improve the uplink coverage, which is the limiting factor. He added that the initial results from the trial are very encouraging.
That coverage benefit will necessitate the rather expensive 64T64R configuration across most of Verizon's outdoor macro sites. Verizon is also looking at small cells, indoor solutions, and other options to provide comprehensive coverage. Stone aptly said “all of the above” is his mantra when it comes to using these options to expand coverage. Considering that a robust network and coverage are Verizon's key differentiators, there is not much margin for error in its C Band deployments.
Samsung leading with its mMIMO and vRAN portfolio
After scoring a surprise win by getting a substantial share of Verizon's 5G contract, Samsung has been consolidating its position by continuously expanding its RAN portfolio. Ojert emphasized that they are working very closely with Verizon for a speedy and successful C Band rollout.
Side note: To know more about Samsung’s network business, please listen to this Tantra’s Mantra podcast interview of Alok Shah, VP Samsung Networks.
Being a disruptor, Samsung has been an early adopter of vRAN and Open RAN architectures. It understands that the key success factor for these new architectures is providing performance that meets or exceeds that of legacy networks. 64T64R has almost become a litmus test of whether the new approaches can evolve to support complex features such as mMIMO.
There have already been commercial deployments of legacy networks supporting 64T64R. Hence, it becomes a de facto bar for any new large-scale vRAN deployments. The telecom industry is hard at work to make it a reality. Verizon’s plan to use it to close the coverage gap of the C Band makes it almost mandatory for all its vendors.
Running these trials on live networks, and at multiple locations at that, makes a great proof point for the readiness of Samsung's gear for large-scale deployments. Ojert emphasized that, being a major supplier for cutting-edge 5G networks in Korea that use similar spectrum, Samsung well understands the characteristics of the band. He added that Samsung will utilize its entire portfolio of solutions, including small cells, indoor solutions, and others, in helping Verizon build a robust network.
C Band commercial deployments and service
The FCC is expected to clear up to 60 megahertz of the total of up to 200 megahertz of C Band spectrum later this year. Verizon is projecting to have C Band 5G service in the initial 46 markets in the first quarter of 2022, covering up to 100 million people. It will expand that as additional spectrum is cleared, reaching an estimated 175 million people by 2024.
The initial deployments will be based on the Rel. 15 version of 5G, with the ability to firmware-upgrade to Rel. 16 and beyond for services such as URLLC, as well as the Stand-Alone configuration.
C Band (along with its mmWave) spectrum indeed is a potent option for Verizon to substantially expand 5G services, effectively compete, and prepare for the strong evolution of 5G. It will be interesting to watch how the rollout will change the market landscape.
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
As an eventful 2021, which witnessed 5G becoming mainstream despite all the challenges, comes to a close, the analyst part of my mind is reviewing and examining the major disruptions 5G brought to the cellular market. The rise of Samsung, mostly known for its flagship Galaxy phones and shiny consumer electronics, as a global 5G infrastructure leader really dawned on me as a key one.
As a keen industry observer, I have been tracking Samsung Networks for a long time. A little more digging and research revealed how systematically it charted a path from its solid home base in Korea to its disruptive debut in the USA, followed by expanding its influence in Europe and other advanced markets, all the while building a comprehensive 5G technology and product portfolio.
In this article, I will try to follow its growth steps in the last two years and explore how it is well-positioned to lead in the upcoming 5G expansion.
Strong presence at home and early success in India built the Samsung foundation
Korean operators like Korea Telecom and SK Telecom have always been at the bleeding edge of cellular technology, going back to the 3G days. As their key supplier, Samsung’s technology prowess has been a significant enabler of these operators’ leadership, especially in 4G and 5G. That has also helped Samsung stay ahead of the curve.
Samsung’s first major international debut was in India in 2013, supporting Reliance Jio, a new cellular player that turned the Indian cellular and broadband market upside down. Samsung learned valuable lessons there about deploying very large-scale, expansive cellular networks.
The leadership at home combined with the experience in India provided Samsung a solid foundation for the next phase of its global expansion.
Disruptive debut in the USA that changed the infra landscape
U.S. cellular industry observers lamenting the lack of 5G infra vendor diversity were pleasantly surprised when Samsung won a large share of Verizon’s contract to build the world’s first 5G network. That was a major disruption for two reasons. First, Samsung virtually replaced a well-established player, Nokia. And second, it was Verizon, for whom the network is not just a differentiation tool but the company’s pride. Verizon entrusting Samsung with the deployment of its high-profile, business-critical, first 5G network speaks volumes about Samsung’s technical expertise and product superiority.
Over the years, Samsung has scored many key 5G wins in the U.S., including early 5G-ready Massive MIMO deployments for Sprint (now T-Mobile), CBRS-compliant solutions for AT&T, and 4G and 5G network solutions for UScellular.
These U.S. wins were the result of a well-planned strategy, executed with surgical precision. Samsung started 5G work in the U.S. as early as 2017 with testing and trials. In fact, Samsung was the first to receive FCC approval for its 5G infra solution, in 2018, quickly followed by outdoor and indoor 5G home routers.
It’s not just about initial contract wins and delivering on the promise. Samsung has been consistently collaborating with operators to demonstrate, trial and deploy new and advanced 5G features such as 64T64R Massive MIMO and virtualized RAN, C Band support, indoor solutions, small cells and more.
In other words, Samsung has fully established itself as a major infra player in the lucrative and critical U.S. market. The rapid deployment of 5G, even in rural areas, and the impending rip and replace of Chinese infrastructure for national security reasons bode well for Samsung’s growth prospects in the country.
Samsung methodically expands into Europe, Japan and elsewhere
After finding success in the high-stakes U.S. market, Samsung signed a contract with Telus of Canada in 2020. Canada was a straightforward expansion, and going after other advanced markets, such as Europe and Japan, was a natural next step.
Europe is one of the most competitive and challenging markets to win. Not only is it home to two well-established infra players, Ericsson and Nokia, but it is also the biggest market outside China for Huawei and ZTE. Samsung has seen early success with some of the key players in Europe. For example, it successfully completed a trial with Deutsche Telekom in the Czech Republic, potentially giving Samsung access to DT’s extensive footprint in the region. Recently, Vodafone UK selected Samsung as the vRAN and Open RAN partner for its sizable commercial deployment, and Samsung is collaborating with Orange on Open RAN in France. Getting into these leading operators in the region is a significant accomplishment. In my view, with other players such as Telefonica being very keen on vRAN and Open RAN, entry there is only a matter of time.
Even with these wins, it is still early. The company’s strategy in Europe is still unfolding. A significant tailwind for Samsung is the heightened national security concern, which has significantly slowed the traction of Chinese players. Additionally, onerous U.S. restrictions have seriously crippled Huawei.
Japan has always been a highly advanced market. So far, it has been dominated by local players such as NEC and Fujitsu. Spreading its wings there, Samsung has been collaborating with KDDI on 5G since 2019. It also got into the other major operator, NTT DoCoMo, earlier this year with a contract to supply O-RAN-compliant solutions.
Comprehensive technology and product portfolio that fueled all this growth
5G has always been characterized as a race. That means the first to market and the leaders will emerge as winners taking a large share of the value created by 5G. Interestingly, it has played out as such so far. The investments in 5G are so large that once companies establish leadership and ecosystem relationships, it is extremely hard to change or displace them.
Realizing that, Samsung invested big and early in 5G technology development. Being both a network and device supplier, it can utilize that investment across a much broader portfolio. Samsung conducted pioneering 5G testing and field trials as early as 2017 and 2018 in Japan with KDDI. When many in the industry were still debating the ability of mmWave to support mobility, Samsung, collaborating with SK Telecom, demonstrated successful 5G video streaming in a race car moving at 130 mph. Samsung was also the industry’s first to introduce mmWave base stations with integrated antennas, significantly simplifying deployment.
In emerging areas such as edge cloud, Samsung is already working with major cloud providers such as Microsoft and IBM and chipset players such as Marvell.
Currently, Samsung has one of the most comprehensive portfolios of network solutions, software stacks and tools, with support for all commercial 5G bands, including both Sub-6 GHz and mmWave, with advanced features such as Massive MIMO, for indoor and outdoor deployments, for new architectures such as vRAN and Open RAN, for public or private networks, and so on.
One of Samsung’s major advantages over its infra competitors is the financial strength that comes from being part of a huge industrial conglomerate. In businesses like 5G, where investments are large, risks are high and payback times are long, such financial strength can be the difference between winning and going out of business.
In closing
Samsung Networks’ journey from its humble beginnings in Korea to a global 5G infrastructure leader is fascinating. It invested heavily to become a technology leader and combined that leadership with meticulous planning and execution to reach the top of the 5G infra business.
It is still early days for both 5G and Samsung. It will be interesting to watch how Samsung can utilize this early lead to capture even bigger opportunities created by the expanding 5G’s reach and new sectors such as Industrial IoT.
In the meantime, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
Samsung Networks held its mid-year analyst day last week, giving an update on their progress on the vRAN/Open RAN front, the Dish deployment, and the opportunities they see in the Private Networks space. I was among a few key analysts they invited to their offices in Dallas for the meeting. I came out of the meeting well informed about their strategy and future path, which is following the trajectory I discussed in my earlier articles here.
Strong vRAN/Open RAN progress
Since launching its vRAN portfolio, Samsung has steadily expanded its sphere of influence in North America, Europe, and Asia. Although its surprising debut at Verizon was with legacy products, Samsung Networks has used its market-leading vRAN/Open RAN portfolio as leverage to expand its reach, including in Verizon’s C Band deployments and with newer customers, regions, and markets. Having both legacy and vRAN support makes it an ideal partner for any operator, be it one continuing with the legacy approach for faster deployment and expansion of 5G, one looking to utilize newer architectures to build future-proof networks, or one looking to bridge between the two.
The chart below captures the continuing successes Samsung Networks has witnessed in the last couple of years.
As Verizon’s VP Bill Stone explained to me during a recent interview, a significant portion of its C Band deployment is vRAN. An operator like Verizon, which considers its network a differentiator, putting full faith in Samsung’s vRAN portfolio speaks to the product’s quality and maturity.
Vodafone UK partnered with Samsung Networks to commercialize its first Open RAN site and has plans to expand it to more than 2,500 sites. You can read more about this in my earlier article here.
To clarify, people often confuse vRAN with Open RAN. vRAN is the virtualization of RAN functions so that they can run on commercial off-the-shelf (COTS) hardware. In contrast, Open RAN is building a system with hardware and software components, with open interfaces, from different vendors. vRAN is firmly on its way to becoming mainstream. However, there are still challenges and lingering questions about Open RAN. That’s why the progress of early Open RAN adopters such as Dish interests everybody in the industry.
Samsung’s recent announcement regarding 2G support for vRAN was interesting. I knew that there are still some 2G markets out there, but I was surprised by the size of this market, as illustrated in the chart below:
This option of supporting 2G on the same Open RAN platform will help operators efficiently support the remaining customers and eventually transition them to 4G/5G while using the same underlying hardware. From the business side, this option will help Samsung Networks break into new customers, especially in Europe.
Powering America’s first-ever Open RAN network with Dish
Nothing illustrates a vendor’s leadership better than one of the world’s most watched new 5G operators, fully committed to Open RAN, launching its network with that vendor as the primary infra supplier. Dish has a long list of firsts: the first fully cloud-native vRAN and Open RAN network in the US; the first multi-vendor Open RAN network in the US; the first to use the public cloud for its deployment, and more.
As evident from many auctions, public disclosures, and this study by Allnet Insights & Analytics, Dish has a mix of many different spectrum bands with highly variable characteristics. They include bands from 600 MHz to 28 GHz, bandwidths ranging from 5 MHz to 20 MHz, paired (FDD) and unpaired (TDD and supplemental downlink) licenses, and licenses in the crosshairs of satellite broadband operators. Dish has embarked on a unique journey: becoming the first major greenfield countrywide cellular provider in the US in decades while adopting a brand-new architecture such as Open RAN. Additionally, it has tight regulatory timelines to meet. In such a scenario, it needs a reliable, versatile, financially strong infra partner with a solid product portfolio. Above all, it needs a vendor fully committed to Open RAN. Dish seems to have found such a partner in Samsung Networks.
To be clear, it is still very early days for Dish and Open RAN. The whole industry is watching their progress closely.
Finding a foothold in the private networks market
Private Networks is one of the most hyped concepts in the cellular industry today. Indeed, 5G Private Networks have great prospects with Industry 4.0 and other futuristic trends. But based on my interactions with many players in the space, customers’ real needs seem to be plain-vanilla mobile broadband connectivity. In many cases, be it large warehouses, educational institutions, or enterprises with sprawling campuses, cellular Private Networks will be needed for use cases requiring seamless mobility, expanded coverage (indoor and outdoor), increased capacity, and in some cases, higher security. And these will complement Wi-Fi networks.
During the event, Samsung Networks explained how they are addressing these immediate and prospective long-term needs of the market, with examples of early successes. These include deployments at Howard University in the USA, a relationship with NTT East in Japan, and the latest collaboration with Naver Cloud in South Korea.
Naver has also deployed an indoor commercial 5G Private Network in its office. The network, covering a sizeable multi-story building, serves a fleet of autonomous robots. These robots work as office assistants, providing convenience services such as delivering packages, coffee, and lunch boxes to Naver employees throughout the building. All the robots are controlled by Naver’s cloud-based AI. The need for 5G instead of Wi-Fi stems from mobility, low-latency, coverage, and capacity requirements.
Mobility is needed for reliable connectivity with hand-offs when robots are moving around. Low latency is required to connect robots to cloud AI for seamless operations. Extended coverage and capacity are needed to ensure the connectivity of robots is not degraded by the traffic from all the other office machines, including computers, printers, network drives, and others.
Naver and Samsung are planning to market such concepts and services to other customers.
In closing
The analyst meeting provided other analysts and me with a good understanding of Samsung Networks’ current traction in vRAN/Open RAN and an overview of their strategy for the future.
It seems Samsung Networks is well poised to expand its market with its vRAN/Open RAN portfolio, along with support for legacy architectures. With Dish being a bellwether for Open RAN, the industry is very closely watching its success and its collaboration with Samsung Networks.
Private Networks is an emerging concept for 5G with great potential. Samsung Networks seems to have scored some early partners and deployment wins.
The 5G infrastructure market expansion is exciting, and Samsung seems to have gotten a good head start. It will be interesting to see how it evolves, especially with the fears of global recession looming.
Meanwhile, to read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Samsung Networks’ news cycle started weeks before the much-awaited Mobile World Congress 2022, making its mark in Europe. The cycle continued, with many more announcements coming right before, during and after the event. The notable ones were: building a solid coalition to streamline virtualized RAN (vRAN) and open RAN, expansion into the red-hot private networks domain and traction in public safety deployments.
All this points to Samsung Networks evolving from its initial disruptor role to a market and thought leadership role, tracking the trajectory I had detailed last year in this article.
Building comprehensive, interoperable vRAN/open RAN ecosystem
As I had explained in my recent Forbes article, the biggest challenge of new architectures like vRAN and open RAN is stitching together a system with disparate pieces from many different companies. Most of these pieces, by definition, are generic and commercial off-the-shelf (COTS). In such cases, it is an arduous task for operators and system integrators to ensure these pieces interwork seamlessly and operate as a single system. Moreover, this system has to meet or exceed the performance of legacy architectures. Understanding this challenge, Samsung Networks is taking charge to innovate and build a comprehensive ecosystem of vRAN/open RAN players with fully interoperable solutions.
The announced coalition consists of well-known brands with proven track records. It has cloud infra players such as Dell and HPE, chipset giants such as Intel, and cloud software platform players such as Red Hat and Wind River. I wouldn’t be surprised if the roster grows in the near future with additional partners such as Qualcomm, Marvell and the hyperscalers.
The primary objective of the coalition is to develop fully interoperable, deployment-ready, pre-tested, and pre-integrated vRAN and Open RAN solutions. Anybody who has done system integration knows that even though, in theory, standards-compliant products should interwork, during actual deployments, nasty surprises always spring up. This collaboration is designed to remove that exact element of surprise and make deployments seamless, predictable, and cost-effective.
By joining hands with Samsung Networks, all these players who are leaders in their respective domains have recognized the leadership and growing influence of the company.
CBRS and Private Networks deployments
Private Networks have attracted a lot of attention lately. There has been much news regarding deployment plans, commitments, and trials. Samsung Networks was among the first to deploy an actual commercial Private Network on the campus of Howard University.
On the second day of MWC, Samsung Networks announced that NTT East selected it as the partner for Private Network deployments in the eastern region of Japan. This followed the successful completion of 5G Standalone (SA) network testing by the two companies. 5G SA is a crucial feature for Private Networks, especially for delivering massive IoT and mission-critical services to enterprises, large industries, and others.
In the USA, CBRS shared spectrum is touted as the ticket to Private Networks. After a somewhat slow start, CBRS deployments have been picking up pace in the last couple of years. During MWC, Samsung announced a collaboration with Avista Edge Inc. for an interesting use case of the CBRS spectrum. Avista Edge is a last-mile, fixed wireless access (FWA) technology provider with an innovative approach to delivering broadband. As part of the deal, Avista Edge will offer broadband services to rural communities through electric utilities and Internet Service Providers. Samsung will provide its OnGo-certified Massive MIMO radios and compact core network to Avista Edge.
Right after MWC, Samsung also announced another CBRS deal, with Mercury Broadband in collaboration with t3 Broadband. Mercury Broadband is a rural broadband provider, and t3 Broadband is an engineering services company. Samsung will provide its 64T64R Massive MIMO radios and baseband units for more than 500 FWA sites across Kansas, Missouri, and Indiana. The network is expected to expand to additional states through 2025.
Public safety partnership and new mmWave use case
Samsung Networks and the Canadian operator TELUS announced the country’s first Mission Critical Push-to-X (MCPTX) deployment, serving first responders, public safety workers, and others. It will be deployed over TELUS’s 4G and 5G networks and has already been trialed with select customers. Broader commercial availability is expected in the later part of 2022.
Samsung Networks’ MCPTX solution packs a comprehensive suite of tools, offering real-time audio and video communication between first responders, priority access in congested networks during natural disasters, connected ambulances, and vehicular traffic controls.
In an interesting use case of mmWave, Samsung Networks signed a deal with all three Korean operators to provide high-capacity mmWave backhaul for the subway Wi-Fi system in Seoul. Seoul is one of the most connected cities in the world, and data consumption there continues to grow. The system will provide high-capacity backhaul to Wi-Fi access points in subway stations and trains, allowing users to enjoy extreme speeds, capacity, and a better broadband experience while in transit. This set-up was successfully trialed in September 2021.
In closing
After impressive 5G rollouts in the USA over the years, including its most recent Verizon C-band deployment, Samsung Networks is set to establish a solid foothold in Europe. Further, it is becoming a recognized leader in vRAN/Open RAN, and is widening its appeal to rural players and private network providers around the globe.
Its announcements at MWC 2022 provided solid proof of its expansion strategy and early success. I’ll be interested to see how Samsung Networks grows and tracks the trajectory outlined in my 2021 article.
Prakash Sangam is the founder and principal at Tantra Analyst, a leading boutique research and advisory firm. He is a recognized expert in 5G, Wi-Fi, AI, Cloud and IoT. To read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
MWC 2023 turned out to be a graduation party for Samsung Networks, from a market disruptor to a mature, reliable and confident 5G infrastructure leader. This was evident from the flurry of announcements made around the event, including its own, as well as from operators and other ecosystem partners.
The announcement season actually started late last year when the Dell’Oro Group crowned Samsung Networks the vRAN/open RAN market leader. To top that, during MWC, Samsung Networks announced its next-gen vRAN 3.0, as well as many collaborations and partnerships.
To my credit, Samsung Networks followed the trajectory I outlined in this article in 2021. It has meticulously built and expanded its global footprint and created a sizeable ecosystem of partners that are technology and market leaders in their respective domains.
Next-gen infrastructure solutions
Unlike other large infrastructure vendors such as Ericsson, Huawei and Nokia, Samsung was an early and enthusiastic adopter of the vRAN/open RAN architecture. Being a challenger and a disrupter made that decision easy: it didn’t have any sacred cows to protect, i.e., legacy contracts and relationships. That gave it a considerable head start that it continues to maintain.
The vRAN/open RAN transition is shaping up to be a two-step process: first, a disaggregated, cloud-native, single-vendor, fully virtualized RAN (vRAN) with open interfaces, followed by a truly open, multi-vendor RAN. Many of Samsung Networks’ competitors are still on the first step, deploying their first commercial base stations. In contrast, Samsung Networks has already moved on to the second step (more on this later).
Samsung Networks announced its next-gen solutions, dubbed vRAN 3.0, which bring many performance optimizations and significant power savings. On the performance side, a key feature supports up to 200 MHz of bandwidth with the 64T64R massive MIMO configuration, which covers almost the entire mid-band spectrum of U.S. operators. On the power side, the software optimizes the usage and sleep cycles of CPU cores to match user traffic, thereby minimizing power consumption. These software-only features (with the proper hardware provisioning) exemplify the benefits of a disaggregated vRAN approach, where new capabilities can be rapidly developed and deployed.
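To make the core-sleep idea concrete, here is a minimal sketch with entirely hypothetical numbers. Samsung has not published its algorithm, so treat this purely as an illustration of the concept of matching awake CPU cores to traffic load:

import math

TOTAL_CORES = 32            # hypothetical vDU server core count
LOAD_CAPACITY_PER_CORE = 50  # hypothetical traffic units one core can process

def cores_needed(traffic_load: float) -> int:
    """Return the number of cores to keep awake for the given load."""
    active = math.ceil(traffic_load / LOAD_CAPACITY_PER_CORE)
    return min(max(active, 1), TOTAL_CORES)  # always keep at least one core awake

# A made-up day's traffic profile: quiet overnight, busy in the evening.
for hour, load in [(3, 40), (9, 600), (13, 900), (20, 1500)]:
    awake = cores_needed(load)
    print(f"{hour:02d}:00  load={load:5}  cores awake={awake:2}  "
          f"sleeping={TOTAL_CORES - awake:2}")

At 3 a.m. a single core stays awake while 31 sleep; at the evening peak nearly all cores are active. The power savings come from the sleeping cores, exactly the behavior the vRAN 3.0 description points to.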
Also part of vRAN 3.0 is the Samsung Cloud Orchestrator. It streamlines the onboarding, deployment and operation processes, making it easier for operators to manage thousands of cell sites from a unified platform.
Although large parts of vRAN/open RAN are software-defined, the key radio technologies still reside in hardware. That is where Samsung Networks has a strong differentiation. It is the only major network vendor that can design, develop and manufacture 4G and 5G network chipsets in-house.
Strong operator traction and contract wins
Samsung Networks’ collaboration with Dish Wireless is notable on many levels. Dish Wireless is one of the biggest open RAN greenfield deployments. Its trust in keeping Samsung Networks as a primary vendor says a lot. It is also a multi-vendor deployment, wherein Samsung Networks is integrating its own as well as Fujitsu’s radio units (RU) into the network. Interestingly, Marc Rouanne, EVP and chief network officer of Dish Wireless, joined Samsung Networks’ analyst briefing at MWC and showered lavish praise on their work together, especially on system integration, the Achilles’ heel of open RAN.
Vodafone has been a great success story for Samsung Networks. After successfully launching the U.K. network with the famous Golden Cluster and integrating NEC radios, both companies are now extending their collaboration to Germany and Spain.
In Japan, Samsung Networks’ relationship with KDDI has grown tremendously. Leading up to MWC, they announced the completion of network slicing trials, followed by a commercial 5G open RAN deployment along with Fujitsu (for the RU) and a contract for a 5G standalone core network, a first for Samsung outside Korea.
A recent Dell’Oro report identified North America and Asia-Pacific as the growth drivers for vRAN/open RAN. Although Europe is a laggard, even that region’s revenue is expected to top $1 billion by 2027. Apart from the above announcements, Samsung Networks has announced many operator engagements and contract wins across these three regions over the years. So, geographically, Samsung Networks is placing its bets in the right places.
Expanding the partner ecosystem
Success in the infrastructure business is decided by the company you keep and the partnerships you nourish. That is even more true with vRAN/open RAN, where networks are cloud-native, software-defined, and multi-vendor, with open interfaces.
There was a long list of partner announcements around MWC 2023. The cloud platform provider VMware is working with Samsung Networks on the Dish deployment. Another provider, Red Hat, announced a study showing significant power savings for operators when its platform and Samsung Networks’ RAN work together.
Cloud computing provider Dell Technologies announced, through a blog by its 5G head of marketing Scott Heinlein, a collaboration to integrate Samsung Networks’ vCU and vDU with its PowerEdge servers.
Finally, Intel, in its announcement, confirmed that Samsung had validated its 4th Gen Intel Xeon Scalable processors for the core network.
Again, these are just the MWC 2023 announcements. There were many more in the last few years.
In summary, through its differentiated solutions, strong operator traction and robust partnerships, Samsung Networks has graduated from a credible disrupter to a reliable, mature infrastructure player, especially for vRAN/open RAN. That was vividly on display, in all its glory, at MWC 2023 through its proven track record and its product, operator, and partner announcements. I can’t wait to see how its next chapter unfolds as global networks transition to new architectures.
Meanwhile, if you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast. If you want to know more about the vRAN/open RAN market, check out these articles.
Samsung recently opened the doors to its North American Samsung Networks Innovation Center in Plano, TX, further boosting its presence in the region. This state-of-the-art facility, supported by development centers and well-equipped labs, not only helps Samsung Networks and its partners to optimize, test and showcase their 5G products and services, but it also signifies the company’s strong commitment to support the needs of customers and build new partnerships in the region.
I got to tour the Innovation Center and the labs firsthand a couple of weeks ago and was impressed by the facilities. The opening of the Innovation Center is even more opportune considering that we are at the cusp of the second phase of 5G, driven primarily by architectures like vRAN/open RAN, new business propositions like private networks, and new and exciting use cases such as Industrial IoT, URLLC and XR. This center will be a valuable asset for Samsung Networks and its customers and partners in experiencing new technologies in real life, ultimately helping make those technologies mainstream.
This is yet another step in the remarkable global growth of Samsung Networks in the 5G era, which I have documented in the article series here.
Showcase of the best of Samsung Networks’ technology
The front end of the expansive Samsung facilities is the Innovation Center, which houses many live demonstration areas highlighting various technologies and use cases. The current set-up includes demos of vRAN/open RAN with network orchestration, fixed wireless access (FWA) with both FR1 (sub-6 GHz) and live FR2 (mmWave) systems, a private network with low-latency IIoT use cases, XR and others.
The most impressive for me was the Radio Wall of Fame — a vast display of Samsung Networks’ radios deployed (and ready to be deployed) in the Americas, supporting a wide range of the spectrum, output power, form factors, bandwidths, bands and band combinations, MIMO configurations and more. It is awe-inspiring that in a short span, Samsung Networks has developed almost all the configurations desired by customers in the Americas.
Optimizing and perfecting technologies for the Americas
The hallmark of any successful infrastructure player is to “think global and act local,” as markets are won by best addressing the specific needs of local and regional customers, which might often be disparate. Like other major cellular infra players, most of Samsung Networks’ core development happens offshore. But most, if not all, the customization and optimization happens in the country, including the crucial lab and field testing.
The best example of this localization is the fact that Samsung supports the spectrum bands and band combinations needed by U.S. operators, including the unique shared CBRS band. There are an estimated 10,000-plus possible band combinations defined by 3GPP, many of which are necessary in the USA. “Supporting and testing all the band combinations operators require is an arduous task, and that’s precisely where our well-equipped labs come into play,” says Vinay Mahendra, director of engineering, Networks Business, Samsung Electronics America. “The combinations are tested for compliance, optimized for performance, and can be demonstrated to operators at this facility before deploying them in the field.” This applies to many other local needs, such as configurations, deployment scenarios, and use cases. The new Plano Innovation Center is the showcase, and the existing labs there and elsewhere in the country serve as the brains and plumbing.
Testing ground for partners
A 5G network is an amalgamation of different vendors, and seamless interoperability between them is a basic need. This need reaches a new level of complexity with vRAN/open RAN, where software and hardware are disaggregated and might come from different vendors. A typical multi-vendor open RAN network could have different RU, DU, and CU vendors, cloud orchestration and solution providers, chip and cloud providers, etc. Integrating all those hardware and software pieces and making the system work together is no small task. It requires close collaboration among vendors, ensuring the system is thoroughly tested and pre-certified, so that disruptions and issues in the field, and hence time and costs, can be minimized. That’s exactly the role of the Innovation Center and the labs.
The next phase of 5G will be driven by non-traditional applications, services and use cases, such as IIoT, mission-critical services, XR, private networks, and many others that we haven’t even imagined yet. Those must be developed, tested, perfected, and showcased before being offered on commercial networks. Being a market leader, Samsung, with its partners, is in the driver’s seat to enable these from the network side. Again, a task cut out for its Innovation Center.
In closing
Samsung Networks’ Innovation Center in the U.S. is opening at a critical juncture, when 5G is ready for its next phase in the country, exploring new deployment models, architectures and use cases. The center and the adjoining labs will serve as a centerpiece for the company and its partners to develop and commercialize that next phase. It will help Samsung Networks showcase its innovations and partner technologies and demonstrate the company’s commitment to its customers in the region.
I am looking forward to seeing new technologies and concepts being demonstrated there.
If you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
5G Integrated Access Backhaul (IAB)
5G is the hottest trend now, so much so that even the Covid-19 pandemic, which has badly ravaged the global economy, could not stop its meteoric rise. Apple’s announcement of 5G support across its portfolio cemented 5G’s market success. With 5G device shipments expected to grow substantially in 2021, the industry’s focus is naturally on expanding coverage and delivering on the promise of gigabit speeds and extreme capacity.
However, that is easier said than done, especially for the new mmWave bands, which have a smaller coverage footprint. Leading 5G operators such as Verizon and AT&T have gotten a bad rap because of their limited 5G coverage. One technology option is integrated access backhaul (IAB) with self-interference cancellation (SLIC), which enables operators to deploy hyper-dense networks and quickly expand coverage.
mmWave Bands And Network Densification
Undeniably, making mmWave bands viable for mobile communication is one of the biggest innovations of 5G. That has opened a wide swath of spectrum, almost a tenfold increase, for 5G. However, because of their RF characteristics, mmWave bands have a much smaller coverage footprint. According to some studies, mmWave might need seven times the sites or more to provide the same coverage as traditional Sub-6GHz bands. So, to make the best use of mmWave bands, hyper-dense deployments are needed. Operators are trying to use lampposts and utility posts for deployment to achieve such density.
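A rough back-of-envelope model shows where such multipliers come from: coverage area scales with the square of the cell radius, so the site-count ratio between two bands is simply the squared ratio of their radii. The radii below are illustrative assumptions, not field measurements:

SUB6_CELL_RADIUS_M = 800    # assumed mid-band (e.g., 3.5 GHz) cell radius
MMWAVE_CELL_RADIUS_M = 300  # assumed 28 GHz cell radius

def site_multiplier(r_large: float, r_small: float) -> float:
    """How many small-radius sites cover the area of one large-radius site."""
    return (r_large / r_small) ** 2

mult = site_multiplier(SUB6_CELL_RADIUS_M, MMWAVE_CELL_RADIUS_M)
print(f"~{mult:.1f}x more mmWave sites for the same coverage area")
# With these assumed radii the multiplier lands near 7x, in line with
# the studies cited above.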
The biggest challenge for hyper-dense deployment is providing rapid and cost-effective backhaul. Backhaul is a significant portion of the CAPEX and OPEX of any site. With the large number of sites needed for mmWave, bringing fiber to each of them is an even harder, more time-consuming and overly expensive process. A good solution is to incorporate IABs, which use wireless links for backhaul instead of fiber runs. IABs, an advanced version of the relays used in 4G, are being introduced in 3GPP Rel. 16 of 5G.
In a typical deployment, there would be one fiber backhaul site, called a donor, say at a crossroad, and a series of IABs installed on lampposts along the connecting roads in a cascade configuration. IABs can act as donors to other IABs as well, providing redundancy. They can also connect directly to devices, which will be beneficial now and in the future.
Drawbacks Of Traditional Relays And IABs
While IABs seem like an ideal solution, they do have challenges. The biggest one is their lower efficiency. I’ve observed that it can be as low as 60% during high-traffic load scenarios. This means you will need almost double the IABs to provide the same capacity as regular mmWave sites.
IABs can be deployed in two configurations based on how the spectrum is used for both of its sides (access and backhaul): using the same spectrum on both sides, or using a different spectrum for each side.
Using the same spectrum on both sides creates significant interference between the two sides (known as self-interference) and reduces efficiency. Using different spectrum for each side requires double the amount of spectrum, which also drastically reduces spectral efficiency. Operators are always spectrum-constrained; hence, in most cases, they cannot afford this configuration. Moreover, it creates mobility issues and leads to other complexities, such as frequency planning that needs to be maintained and managed on an ongoing basis.
So, in my opinion, the best approach is to use the same spectrum for both sides and try to eliminate or minimize the self-interference.
SLIC Maximizes IAB Efficiency
SLIC is a technique to cancel the interference caused by both links using the same spectrum. It involves generating a signal that is directly opposite to the undesired signal and injecting it to cancel the interference. For example, for the access link, the signal from the backhaul link is the undesired signal, and vice versa. This technique has been known in theory for a long time, but thanks to recent technological advances, it is now possible to implement it in actual products. In fact, there are already products in the market for 4G networks that implement SLIC.
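To see the principle in numbers, here is a toy sketch. The 99% accuracy of the canceller's interference estimate is an arbitrary assumption, chosen to show that even an imperfect "opposite" signal yields roughly 40 dB of suppression:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
desired = rng.normal(size=n)              # signal from the user
interference = 100 * rng.normal(size=n)   # much stronger self-interference

received = desired + interference
# The canceller injects an estimate of the opposite signal. Here the
# estimate is assumed to be 99% accurate, leaving a 1% residual.
residual = received - 0.99 * interference

suppression_db = 20 * np.log10(np.std(interference) / np.std(residual - desired))
print(f"interference suppressed by ~{suppression_db:.0f} dB")  # ~40 dB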
For 5G IABs, I’ve observed that SLIC can increase the IAB efficiency to as high as 100%, meaning IABs provide the same capacity as regular mmWave sites. 5G IABs with SLIC have been developed, and leading operators such as Verizon and AT&T have already completed their testing and trials and are gearing up for large-scale commercial deployments in 2021 and beyond.
In Closing
Unlike 4G relays, which were primarily used for coverage extension or rapid, short-term deployments (for example, to connect temporary health care facilities built for accommodating rapid surge in Covid-19 hospitalizations), operators should consider IABs with SLIC as an integral part of their network design. In addition, operators have to decide on an optimal mix of IAB and donor sites so that it provides adequate capacity while minimizing the overall deployment cost.
Mobilizing mmWave bands was one of the major achievements of 5G. However, their smaller coverage footprint could be a challenge, requiring hyper-dense deployments. The biggest hurdle for such deployments is quick and cost-effective backhaul solutions such as IABs. Further, SLIC techniques maximize the efficiency of those IABs.
5G has seen unprecedented traction; many flagship devices are already in the market, and many more are on the way, including the much-rumored and anticipated 5G iPhone. After the excitement of limited initial launches, as operators start large-scale deployments, the basic question they face is whether to focus on coverage or capacity. Well, the right answer is both, but that is easier said than done, especially for operators such as Verizon and AT&T that have limited low- and mid-band (aka sub-6 GHz) spectrum.
In a series of articles, I will discuss this dilemma and explore the solutions the industry is working on to effectively address it, especially those, such as Integrated Access Backhaul (IAB), that have shown early promise, and the many innovations that not only enable such solutions but also make them efficient. This is the first article in the series.
When launching a 5G network, the easiest approach is to utilize sub-6 GHz bands, if you have access to them, and provide a basic coverage layer. That is exactly what Sprint (now part of T-Mobile) in the US and many operators outside the US did. However, the amount of bandwidth available in the sub-6 GHz spectrum is limited, and hence the capacity of those networks will quickly be used up, especially if the growth of 5G continues as predicted. There is every indication that it will; for example, contrary to what many people expected, 5G deployment in the US has not been affected by the Covid-19 pandemic. This means those operators will soon have to move to the bandwidth-rich high-band spectrum, i.e., millimeter wave (mmWave) bands. These bands have more than ten times the available spectrum of sub-6 GHz and are critical to delivering on the promise of 5G: multi-gigabit user speeds and the extreme capacity to offer new services, be it fixed wireless access to homes and offices, massive IoT, mission-critical services, or bringing new user experiences on a massive scale.
Operators such as Verizon and AT&T, which did not have access to enough sub-6 GHz spectrum, leapfrogged and took the bold step of launching 5G with mmWave spectrum. This spectrum is far different in many respects from anything the mobile industry has used so far.
<<Side note: If you would like to know more about mmWave bands, check out my article – Is mmWave just another band for 5G?>>
The biggest differences between sub-6 GHz and mmWave bands are coverage and indoor penetration. Because of their RF properties, mmWave bands have a smaller coverage footprint and do not penetrate solid objects such as walls. Although this was long known to experts, it came almost as a shock to uninformed general industry observers. Operators, especially Verizon, got a lot of flak from the media on this. Some even doubted the feasibility of mmWave bands. Thanks to extensive field tests, any lingering doubts are now duly resolved. In fact, almost all global regions are now working toward allocating mmWave spectrum for 5G.
By virtue of their smaller footprint, mmWave bands will need more sites than sub-6 GHz to provide similar coverage. For example, simulations run by Kumu Networks estimate that the 26 GHz spectrum will need seven to eight times more sites than the 3.5 GHz spectrum, as shown in the figure below:
The ideal 5G deployment strategy for operators is to utilize sub-6 GHz to provide expansive, city- and country-wide coverage, and to utilize dense deployments of mmWave, as shown in the figure, in high-traffic dense urban and urban areas, and even in pockets of suburban areas, to provide extreme capacity. Because of the density and the large amount of spectrum available, a mmWave cluster will provide orders of magnitude higher capacity than sub-6 GHz clusters. Additionally, such dense deployments are much easier with mmWave because of its smaller coverage footprint.
Many operators are working with city governments and utilities to deploy mmWave sites on lampposts, which should provide good densification. Studies have shown that such deployments could provide excellent results, supporting a large number of subscribers with a huge amount of capacity, resulting in an excellent user experience. The FCC, being proactive, has been working to streamline regulations for the deployment of such outdoor sites.
Clearly, lampposts, and in some cases rooftops, are the ideal spots for mmWave installations because they readily have access to power, one of the two key requirements for a new site. However, the other requirement, backhaul, is a far different story. Since these are high-capacity sites, they need fiber or other high-bandwidth means of backhaul. The first issue is that there may not be fiber drops near all the lampposts. Even if there are, bringing fiber to each post is not only extremely time-consuming and very expensive but also hard to manage and maintain on an ongoing basis. This means the industry has to look for alternate, cost-effective, easy-to-install solutions that offer bandwidth and latency similar to fiber.
Realizing this, the industry body 3GPP has been working on an interesting solution called Integrated Access Backhaul (IAB). IABs are being standardized in Rel. 16, and further enhanced in Rel. 17. Rel. 16 is expected to be finalized in July of this year and followed by Rel 17 in 2021.
<<Side note: If you would like to know more about 3GPP standardization and Rel 17, please check this article series – The Chronicles of 3GPP Rel. 17.>>
IABs use wireless links for both backhaul and access (i.e., regular user traffic). Evidently, they will need a large amount of licensed spectrum to offer fiber-like backhaul performance. But that raises a lot of questions, such as: “Don’t IABs decrease the available spectrum for access? How would that affect network capacity? Can you still deliver on the grand promises of 5G?” and many more.
All those are valid questions and concerns. What if I say that there are ways to make and deploy IABs without compromising on the available spectrum? More like having the cake and eating it too, yes, that is possible! How, you ask? Well, you will have to wait for my next article to find out!
Also, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
One of the exciting features the recently finalized 3GPP Rel. 16 brings to 5G is the support for Integrated Access Backhaul (IAB). IABs have the potential to be a game-changer, especially for millimeter Wave (mmWave) deployments by solving the key challenge of backhaul. However, the traditional design of IABs offers low efficiency. In this article, I will take a deep dive into IABs, their deployment configurations, and most importantly, the techniques needed to improve their efficiency.
Side Note: If you would like to learn more about 3GPP Rel. 16, check out this article “3GPP Rel. 16–Stage set for the next phase of 5G, but who is leading?”
What are IABs and how do they work?
IABs are cell sites that use wireless connectivity for both user traffic (access) and backhaul. IABs’ predecessors, relays, have been around since the 4G days. IABs are essentially improved and rechristened relays. If you have heard of the Sprint “Magic Box,” then you have already heard about relays and, to some extent, IABs as well.
So far, relays were used primarily to extend coverage in places where it was challenging or uneconomical to deploy traditional base stations with fiber or ethernet backhauls. They were also useful when connectivity needs were immediate and temporary. A great use case was the recent COVID-19 crisis when temporary healthcare facilities with full connectivity had to be built very quickly. There are many such applications, for example, indoor deployments in retail stores, shopping malls, etc., where operators do not have access to fiber.
However, with expanded capabilities, IABs have a much bigger role to play in 5G, especially for mmWave deployments, which have gotten a bad rap for their smaller coverage footprint. IABs allow operators to rapidly deploy mmWave sites and expand coverage by solving the vexing backhaul issue.
IABs are deployed just like any other mmWave sites, of course without requiring pesky fiber runs. As shown in the figure below, IABs connect to donor sites in the same way smartphones or any other devices do. The main donor sites will need high-capacity fiber backhaul. One or more IABs can connect to a single donor site. There can also be multi-hop deployments, meaning IABs can act as donors to other IABs. Each IAB can connect to multiple donors or IABs, providing redundancy. This configuration also lends itself very well to a mesh architecture in the future. IABs are transparent to devices, meaning devices connect to IABs just as they would to any regular base station.
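For readers who like to see the structure spelled out, here is a minimal sketch of the topology just described, with hypothetical node names: a fiber-fed donor, cascaded IABs, and a redundant link, with a breadth-first search computing each IAB's hop count to the nearest donor:

from collections import deque

links = {                       # undirected wireless links
    "donor1": ["iab_a", "iab_b"],
    "iab_a":  ["donor1", "iab_c"],
    "iab_b":  ["donor1", "iab_c"],        # iab_c has two parents (redundancy)
    "iab_c":  ["iab_a", "iab_b", "iab_d"],
    "iab_d":  ["iab_c"],                  # a deeper, multi-hop IAB
}

def hops_to_donor(links, donors):
    """Breadth-first search from the fiber-fed donors outward."""
    dist = {d: 0 for d in donors}
    queue = deque(donors)
    while queue:
        node = queue.popleft()
        for neighbor in links.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

print(hops_to_donor(links, donors={"donor1"}))
# {'donor1': 0, 'iab_a': 1, 'iab_b': 1, 'iab_c': 2, 'iab_d': 3}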
IABs are ideal for mmWave deployments
As I explained in my previous article, mmWave 5G deployments need a dense cluster of sites to provide good outdoor coverage. Since bringing backhaul to all these sites is cumbersome and expensive, using IABs for such deployments is ideal. For example, in city centers, there could be a handful of donor sites with fiber backhaul, connecting to clusters of IABs around them. Evidently, with such an approach, operators could provide much broader coverage with far fewer fiber runs, in a very short time. The savings and ease of installation are quite obvious.
It should be noted that unlike regular sites, IABs do not add new capacity. They instead share the capacity of the donor site much more efficiently across a much larger coverage area. Since the mmWave band has lots of spectrum, capacity may not be a limitation. Ultimately, the level of data traffic and the amount of spectrum operators have access to will decide the appropriate mix of donor sites and IABs.
One of the issues with IABs is interference. Since donors and IABs use the same spectrum, they might interfere with each other. But thanks to the smaller coverage footprint of mmWave bands, the interference is relatively minimal, compared to traditional bands. Another big advantage of mmWave bands is the support for beamforming and beamsteering techniques. These techniques allow the signal (beam) between all the nodes to be very narrow and highly directional, which further reduces interference.
Performance challenges of IABs
The biggest challenge of IABs is their lower efficiency. Since they use wireless links for both sides (toward the donor and toward the user), they have to either use separate spectrum for each side or time-share the same spectrum between the sides. In both cases, efficiency is reduced: the first case uses twice the spectrum, and the latter allows only one side to be active at any time. Let me explain the reasons.
If the same spectrum is used for both sides simultaneously, there will be huge self-interference: the transmitter of one side feeds into the receiver of the other side, making the interference so high that the signal from actual users is drowned out and can’t be heard. So, the spectrum for the two sides must be different. Since operators are often short on spectrum, they cannot afford this configuration. Even if they could, there are many complexities, such as the need for frequency planning, the inability to support mobile IABs, confusion in handovers between the two frequencies, and many more.
Hence, almost every deployment utilizes an alternate approach called half-duplex, in which the sides are turned ON alternately. The IAB ON/OFF timing has to be synchronized across the network to avoid interference. The situation is even more complicated if there are multi-hop deployments.
The best way to understand the performance of IABs is to simulate a typical system and analyze various scenarios. Kumu Networks, a leader in relay technology, did exactly that. Here is a quick overview of what they found out.
They simulated a typical city intersection, as shown in the figure here. They put a fiber-fed donor at a city intersection and a cluster of IABs along the streets, some connected directly, others in multi-hops. The aggregate throughput is calculated for the entire system with one, two, and multiple hops.
This chart shows the performance of the system, plotting the aggregate throughput of all users in the system against the number of hops. The red line in the chart represents the traditional half-duplex configuration we just discussed. With this configuration, the throughput goes down significantly as the number of hops in the system increases. This is because the more hops there are, the smaller the time slice each IAB gets, and the lower the throughput.
You also see a blue line on the chart. This represents the Full-Duplex configuration, for which the throughput slightly increases and stabilizes even when more hops are added. Obviously, Full-Duplex is the most desired configuration.
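To make the shape of those two curves intuitive, here is a crude toy model, not Kumu's simulation: assume a fixed single-link capacity (the 2 Gbps figure is an arbitrary assumption), let every extra half-duplex hop add one more time-shared retransmission of the same data, and let full-duplex keep every link ON:

LINK_CAPACITY_GBPS = 2.0  # assumed capacity of a single mmWave link

def half_duplex_throughput(hops: int) -> float:
    # Each extra hop re-transmits the same data in its own time slice,
    # so the air-time share (and throughput) shrinks as 1/(hops + 1).
    return LINK_CAPACITY_GBPS / (hops + 1)

def full_duplex_throughput(hops: int) -> float:
    # With self-interference cancelled, all links stay ON, so throughput
    # stays pinned near the single-link capacity regardless of hops.
    # (The real simulation shows a slight increase; this model is flat.)
    return LINK_CAPACITY_GBPS

for hops in range(1, 5):
    print(f"hops={hops}: half-duplex={half_duplex_throughput(hops):.2f} Gbps, "
          f"full-duplex={full_duplex_throughput(hops):.2f} Gbps")

The half-duplex numbers fall off exactly the way the red line does, while the full-duplex numbers hold steady like the blue line.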
Now, what is full duplex? As the name suggests, it means keeping both sides of the IAB switched ON all the time while using the same spectrum. With this configuration, there is no need for additional spectrum, no more time-sharing, and hence no more reduced efficiency. But didn’t we just discuss why this is not possible because of self-interference?
Well, what if I say that there are techniques to effectively cancel that self-interference? I know you are intrigued by this and want to know more. But for that, you will have to wait for my next article. So, be on the lookout!
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Many 5G operators are quickly realizing that Integrated Access Backhauls (IABs) are an ideal solution to expand 5G coverage. This is even more important for operators such as Verizon and AT&T, which are primarily utilizing millimeter wave (mmWave) bands for 5G. As I explained in my earlier articles, traditional techniques only allow half-duplex IAB operation, which severely limits usability. The SeLf Interference Cancellation (SLIC) technique enables full-duplex IAB operation, offering full capacity and efficiency. In essence, not just IABs, but IABs with SLIC are the most efficient and hassle-free way to expand 5G mmWave coverage.
Side note: If you would like to learn more about IABs, and how to deploy hyperdense mmWave networks, please check out the other articles in the IABs article series.
What is self-interference, and why is it a challenge?
The traditional configuration for deploying IABs is half-duplex, where the donor and access (user) links timeshare the same spectrum, thus significantly reducing the efficiency. The full-duplex mode, where both the links are ON at the same time, is not possible as the links interfere with each other—the transmitter of one link feeding into the receiver of the other. This “self-interference” makes both the links unusable and the IAB dysfunctional.
So, let’s look at how to address this self-interference. As shown in the figure, an IAB has two sets of antennas, one for the donor link and another for the access link. The best option to reduce self-interference is to isolate the two antennas/links. Based on years of work on the cousins of IABs (repeaters and relays), we know that for full-duplex mode to work, this isolation needs to be 110-120 dB.
Locating the donor and access antennas far apart or separating them with a solid obstruction could yield significant isolation. However, since we would like to keep the IAB unit small and compact, with integrated antennas, there is a limit to how much isolation you can achieve this way.
The mmWave bands have many advantages over sub-6 GHz bands in achieving such isolation. Their antennas are small, so isolating them is relatively easy. Since they also have a smaller coverage footprint, the interference they spew into the other link is relatively small. That is why I think IABs are ideal for mmWave bands. If you would like to know more about this, check out the earlier articles.
The lab and field testing done by Kumu Networks, a leading player, indicates that for mmWave IABs, the isolation achievable through intelligent antenna separation is as high as 70 dB. That means the remaining 40-50 dB has to come from some other means. That is where SLIC comes into play.
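Since isolation contributions on the same signal path add directly in dB, the remaining budget is simple arithmetic. Here it is as a minimal sketch, using the figures quoted above as illustrative targets, not a guaranteed design:

REQUIRED_ISOLATION_DB = 110   # low end of the 110-120 dB full-duplex target
antenna_separation_db = 70    # reported achievable via smart antenna design
digital_slic_db = 45          # midpoint of the 40-50 dB digital SLIC range

total_db = antenna_separation_db + digital_slic_db  # dB gains add on one path
shortfall_db = max(0, REQUIRED_ISOLATION_DB - total_db)

print(f"total isolation: {total_db} dB, shortfall: {shortfall_db} dB")
# total isolation: 115 dB, shortfall: 0 dB -> full duplex is within reach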
How does SLIC work?
To explain interference cancellation in simple words: you create a signal that is directly opposite to the interfering signal and inject it into the receiver. This opposite signal negates the interference, leaving behind only the desired signal.
The interference cancellation can be implemented either in the analog domain or the digital domain, each at a different section of the IAB. Analog SLIC is typically done at the RF front-end (RFFE) subsystem, while digital SLIC is implemented in or around the modem subsystem.
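As an illustration of the digital flavor, here is a hedged sketch that reduces the leakage channel to a single complex tap and estimates it with a standard least-mean-squares (LMS) adaptive filter. Real cancellers model multi-tap, nonlinear channels, and all the numbers below are arbitrary; this only demonstrates the principle:

import numpy as np

rng = np.random.default_rng(1)
n = 50_000
tx = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # own transmit
desired = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # weak user signal
h_leak = 0.8 * np.exp(1j * 0.3)                                   # unknown leakage tap

rx = h_leak * tx + desired   # received = leaked self-interference + user signal

# The IAB knows its own transmitted samples (tx), so it can adaptively
# estimate the leakage tap and subtract the reconstructed interference.
h_est = 0j
mu = 0.01                                # LMS step size
for k in range(n):
    err = rx[k] - h_est * tx[k]          # residual after cancellation
    h_est += mu * err * np.conj(tx[k])   # LMS weight update

residual = rx - h_est * tx
before = np.mean(np.abs(h_leak * tx) ** 2)
after = np.mean(np.abs(residual - desired) ** 2)
print(f"cancellation: {10 * np.log10(before / max(after, 1e-30)):.0f} dB")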
Side note: If you would like to know more technical details on self-interference cancellation, please check this YouTube video.
Again, when it comes to mmWave IABs, because of their RF characteristics, almost all of the needed additional 40-50 dB of isolation can be achieved through digital SLIC alone. Here are the frequency response charts of a commercial-grade mmWave digital SLIC IP block developed by Kumu Networks. This response is for a 28 GHz, 400 MHz mmWave system, and as evident, it can reduce the interference, i.e., increase the isolation, by 40-50 dB.
SLIC enables full-duplex IABs
Here is a chart that further illustrates the importance of SLIC in enabling full-duplex operation of IABs.
It plots the IAB efficiency against the amount of isolation. The efficiency here is measured as the total IAB throughput compared to the throughput of a regular site with fiber backhaul. As can be seen, an IAB in full-duplex mode is more efficient than half-duplex if the isolation is 90 dB or more. And with 120 dB of isolation, an IAB can provide the same amount of capacity as a regular mmWave site. It is pretty clear that SLIC is a must to make IABs truly useful for 5G.
When will IABs with SLIC be available?
Well, there are two parts to that question. Let’s look at the second part first. SLIC is not a new concept. In fact, it is available in products shipping today. For example, Kumu Networks’ LTE relays that support SLIC are already deployed by many operators. Kumu has also developed the core IP for 5G mmWave digital SLIC, which is currently being evaluated by many of its customers. As mentioned before, the frequency chart showing the interference cancellation is from that same IP block.
Now, regarding the first part: 3GPP Rel. 16, which introduced IABs, was finalized only a few months ago, in June 2020. It usually takes 9-12 months for a new standard to be supported in commercial products. Verizon and AT&T are already testing IABs and have publicly disclosed that they will start deploying them in their networks in 2021.
Final thoughts
In this series of articles, we took a very close look at 5G IABs, especially for mmWave deployments. The first article examined why hyper-densification of mmWave sites is a must for 5G operators, the second article explained how IABs address the main challenge of cost-effective backhaul, and this article illustrated why SLIC is a basic need for highly efficient, full-duplex operation of IABs.
5G mmWave IABs are a powerful combination of a well-understood concept, proven technology, and an ideal spectrum band. No wonder the industry is really excited about their introduction. The finalization of 3GPP Rel. 16 has set IAB commercialization in motion, and operators can’t wait to deploy them in their networks.
For more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
IoT Device Security
As awareness of the transformative nature of 5G increases, the industry is slowly waking up to the enormous challenge of securing not only the networks but also all the things these networks connect and the vital data they carry. When it comes to the Internet of Things (IoT), the security challenges couldn’t be bigger, and the stakes involved couldn’t be higher. The spread of IoT in homes, enterprises, industries, governments, and other places is making wireless networks the backbone of the country’s critical infrastructure. Safeguarding them against potential threats is a basic national security need.
With 5G set to usher in Industry 4.0, the next industrial revolution, governments across the globe are understandably taking a keen interest in how 5G is deployed in their countries. There has naturally been a lot of emphasis on its security aspects. The current focus has primarily been on the network infrastructure side. Many countries, such as the USA, Australia, and New Zealand, have put restrictions on buying equipment from certain network infrastructure vendors such as Huawei and ZTE. As stated by these governments, their concerns are about the lack of clarity regarding the ownership and control of these vendors. While these concerns are valid, focusing only on the infrastructure side is not sufficient. It might even be dangerous, because it can give a false sense of security.
Infrastructure-focused security is insufficient
Network infrastructure is only one part of the story. Telecommunications is often described as a “two-to-tango” business, as it needs both infrastructure and devices to make the magic happen. So, to have foolproof security, one needs to cover both ends of the wireless link, especially for IoT. Securing only the network side would be akin to fortifying the front door while keeping the back door ajar. Let me illustrate this with a real-life scenario. Consider something as benign as traffic lights, which at first glance don’t seem to need strong security. But what if somebody hacked into and turned off all the traffic lights in a major metropolitan area? That would surely bring the city to a screeching halt, resulting in major disruption and even loss of life. The impact could be worse still if power meters were hacked. And it would be an outright catastrophe if critical systems, such as the national power grid, were attacked, bringing the whole country to its knees.
When it comes to IoT devices, conventional wisdom is to secure only the most expensive and sophisticated pieces of equipment. However, often, simple devices such as utility meters are more vulnerable to attacks because they lack strong hardware and software capabilities to employ powerful security mechanisms. And they can cause huge disruptions.
IoT device security is a must
IoT devices are the weakest link in providing comprehensive, system-wide security, more so because IoT’s supply chain and security considerations are far different and much more nuanced than those of smartphones. The development and commercialization of smartphones are always under the purview of a handful of large, reputed organizations such as device OEMs, OS providers, and chipset providers. The IoT device ecosystem, in contrast, is highly fragmented, with a large number of relatively unknown players. Usually, large players such as Qualcomm and Intel provide cellular IoT chipsets. A different set of companies uses those chipsets to make integrated IoT modules. Finally, a third set of companies uses those modules to create IoT end-user devices. Each of these players adds its own hardware and software components to the device during different stages of development. Because of this, IoT devices are far more vulnerable than smartphones.
Address IoT device security during procurement
It is evident that IoT users have to be extremely vigilant regarding the security and integrity of the entire supply chain. This includes close scrutiny of the origin of the modules and devices, as well as a detailed evaluation of the reputation, business processes and practices, long-term viability, and reliability of the module and device vendors. Because of the high stakes involved, there is also the possibility of malicious third parties infiltrating the supply chain and compromising devices without even the vendors’ knowledge. Case in point: the much-publicized Bloomberg Businessweek report about allegedly tampered motherboards vividly exposed the possibility of such a vulnerability. Although the allegations in that case have been neither fully corroborated nor debunked, they underscore that such attack vectors are entirely plausible.
It is abundantly clear that the more precautions IoT users take during the procurement and deployment phases, the better it is. Because of the sheer volume, and the long life of IoT devices, it is virtually impossible to quickly rectify or replace them after the security vulnerabilities or infiltrations are identified.
The time to secure IoT devices is now!
Looking beyond the current focus on 5G smartphones, 5G Massive IoT will be upon us in no time. Building upon the solid foundation of LTE IoT, Massive IoT, as the name suggests, will connect anything that can and needs to be connected. This will span homes, enterprises, industries, and critical city, state, and national infrastructure, including transportation, smart grids, emergency services, and more. Further, with the introduction of Mission Critical Services, the reach of 5G is going to be even broader and deeper. All this means the security challenges and stakes are only going to get bigger and more significant.
So, it is imperative for the cellular industry, and all of its stakeholders to get out of the infrastructure-centric mentality and focus on comprehensive, end-to-end security. Every IoT device needs to be secured, no matter how small, simple, or insignificant it seems, because the system is only as secure as its weakest link. The time to address device security is right now, while the networks are being built, and the number of devices is relatively small and manageable.
Nowadays, security and privacy are on everybody’s mind. Hardly a day goes by without news of security breaches at major institutions. Most of the time, the reporting is focused on the cloud or network infrastructure, hardly ever on devices. However, when it comes to cellular IoT, devices are the most vulnerable, as I explained in my previous article. IoT devices, being very simple, are usually much easier to hack into, and a compromised device can compromise the whole system.
The IoT device ecosystem is unique and far different from that of smartphones in many respects. Because of that, the security challenges are also different, and many of them relate to a unit called the IoT module, which is at the heart of any IoT device. To really understand the scope and impact of these challenges, it is important to look closely at the market landscape of the entire cellular IoT ecosystem. It is even more relevant now, considering that today’s 4G LTE cellular IoT will evolve into 5G Massive IoT.
Unique device ecosystem, much different from smartphones
The cellular IoT device ecosystem has far different considerations, especially from the security and privacy perspectives. The ecosystem includes modem chipset providers, many of whom are the same as those for smartphones, as well as a few smaller players. Cellular IoT also has a different category of vendors, called module providers. They take the barebones chipsets and add their own software and hardware to develop modules with standard interfaces. Device vendors develop IoT devices largely based on these modules. Modules abstract away the connectivity and operator-certification complexity so that device vendors can concentrate on developing use case-specific devices. Essentially, modules are a key link in the value chain between chipset providers and IoT device vendors.
Chipset and device market landscape
In the device ecosystem, the chipset market is dominated by the same large and well-known smartphone modem vendors, such as Qualcomm, Intel, MediaTek, and Huawei (HiSilicon), along with IoT-focused players such as Sequans and Altair. They provide a full range of solutions with varying degrees of advanced features, including single-mode and multimode options for eMTC and NB-IoT, with support for 3G, 2G, GPS, onboard processing, and so on. Apart from the advanced features, overall cost is a major consideration for the industry.
The cellular IoT device-vendor ecosystem is very large and diverse. The vendors are usually small and possess expertise in specific use cases. They don’t necessarily have the skillset or scale to justify designing devices directly on top of IoT chipsets. That’s where module vendors come in. Traditionally, IoT vendors were mostly from the US and Europe. However, there has recently been a surge of vendors from China who are completely unknown outside the country. Many of them have taken cues from, and duplicated, device and module designs from traditional vendors. The proliferation of Chinese vendors is primarily due to the Chinese government’s concerted effort and heavy investment in IoT in the country. The Chinese government’s well-funded, large IoT projects, coupled with considerable subsidies provided by operators such as China Mobile and China Telecom, have created an ideal environment for these companies to flourish. The recently awarded 5G contracts are a great example of how the Chinese government and operators support Chinese vendors. These companies, emboldened by their success in China, are now pursuing global opportunities. Since they leverage the investments and subsidies availed in China, they can be extremely price-competitive in global markets.
IoT module market landscape
IoT modules are the “bridge of trust” between the well-known chipset vendors and the unknown device vendors. Module vendors also work with regulators and cellular operators on certification, which removes a significant hurdle for device vendors and ensures smooth, rapid deployment of devices in the field. Evidently, the selection of module vendors is key to ensuring device and system security.
The module vendor market comprises a mix of established and emerging players. Some, such as Gemalto (Siemens M2M at the time), Sierra Wireless (plus its acquisitions of Sony Ericsson M2M and Wavecom), and Telit (plus its acquisition of Motorola M2M), have been around since the 2G days. Others, such as U-Blox, entered the market during 3G and the early part of 4G, leveraging their mobile expertise. Finally, there are the emerging module vendors from China, who, just like the IoT device vendors in the country, have grown at a fast pace with substantial government support and operator subsidies. There is a long list of such players. A few among them, such as Quectel, SIMCom, Longsung, Fibocom, and Neoway, are eyeing global markets. Many others may be watching how the initial players fare in their endeavor before stepping out themselves.
Ecosystem challenges
Anybody who has looked closely at the IoT market realizes that its biggest challenge is relatively low margins across the board, be it chipsets, modules, or devices. Since module vendors are relatively small compared to the chipset, infrastructure, cloud, or application vendors, they don’t have much leverage, resulting in an extreme margin squeeze. In such a situation, increasing market share becomes crucial, putting even more pressure on pricing. This is exactly where the government-funded projects and operator subsidies that the Chinese vendors enjoy at home start to matter and alter the landscape. Because of that support, their pricing can be artificially low, reaching predatory levels.
Speaking to sources in the industry reveals that there is indeed a race to the bottom when it comes to module pricing. If this persists, there is a real danger of non-Chinese players becoming financially unviable. This is of grave concern, especially as we get ready to move to 5G. Supporting 5G will need huge upfront investments, and the payoff period could be very long. If these companies can’t earn enough profit, they can’t afford to invest in 5G and, in the worst case, may exit the market altogether.
What do these challenges mean for the cellular IoT Industry?
If you feel like you have seen this movie before, you are not wrong! If you examine the turn of events in the cellular infrastructure market during the late 90s and early 2000s, the situation is almost identical. During that time, major American and European cellular infrastructure vendors failed to anticipate such a threat and were unable to compete with emerging Chinese rivals that were allegedly supported by their government. Many American and European vendors, such as Motorola, Lucent, Siemens, Ericsson, and Nokia, with decades of experience and successful existence, perished, merged, or downsized. Chinese upstarts such as Huawei and ZTE found a ripe market, quickly took market share, grew exponentially, and became dominant players.
Why is the comparison with the past relevant, and why is it a security concern? Well, IoT devices are the weakest link in the security of the overall system. The industry needs to be at least as concerned about the security of IoT vendors as it is about infrastructure vendors, if not more.
What happens if we don’t heed the lessons of the past? What are the implications for the security and privacy of IoT networks? I will explore those questions in my next article. So, be on the lookout!
In my previous articles here, and here, I explained the rationale for an increased focus on device security and its challenges. The threats are especially acute from unknown foreign vendors offering predatory pricing. After reading the articles, a few people questioned me about the ills of such a situation and even suggested that fierce competition will keep pricing low and vendors in check. In this article, I will explore whether such short-term thinking helps or hurts the industry in the long term and examine some what-if scenarios. I will also draw parallels to some historical lessons and, finally, offer suggestions on how the IoT ecosystem could protect itself.
Learning from history
The best parallel to what is happening in the IoT vendor space is the situation of American and European cellular infrastructure vendors during the 3G transition, in the late 90s and early 2000s. I remember it vividly because I was in the midst of it, working for one such company. The world was slowly moving from 2G to 3G. The infrastructure behemoths, mostly American and European companies including Lucent, Motorola, Nortel, Nokia, Siemens, and Alcatel, were trying to get their customers to move to 3G quickly. However, they soon faced unprecedented headwinds from then-unknown Chinese companies named Huawei and ZTE, offering extremely low pricing. It was alleged that their low pricing was not only because of their lower costs but, more importantly, because of support from their government. American and European vendors, confident in their decades of heritage and experience, never took these players seriously. But alas, because of the dot-com bust and intense price pressure, many of those behemoths folded in no time. Others cobbled together deals to survive, but as much smaller shadows of their former selves. Only two among them remain in business, and that is largely because of the US market, where Chinese vendors are not allowed. From the ecosystem perspective, there are far fewer choices of vendors globally, and even fewer in the US.
So, what can we learn from this harrowing experience? Well, making decisions on cost alone might be very attractive in the short run but can have severe long-term consequences. Once the landscape changes, it cannot be put back.
Perils of inaction now
If this practice of offering artificially low prices on IoT devices and modules, underwritten by Chinese government subsidies, goes unchecked, non-Chinese vendors will be unable to sustain such low margins and will edge toward bankruptcy or exit the market. Very soon, there would be hardly anybody of repute left.
In such a situation, the IoT needs of critical infrastructure, such as the power grid, smart cities, national security installations, and others, would have no option but to rely on unknown suppliers without any proven track record or reputation. The case would be similar for large enterprises, industrial complexes, and other settings where IoT devices are a basic staple. Confidence in the security of IoT devices should be unquestionable and not even up for debate. Consider 5G Massive IoT, which will build on the solid foundation of 4G IoT. Additionally, going forward, sharing of spectrum between defense and civilian cellular networks is going to be the norm. An early example of such an arrangement is CBRS, which allows sharing of spectrum between the US Navy and cellular operators. Any security breach in such deployments could expose critical military operations, including radar and satellite communication systems, to sabotage.
More generally, there are risks in relying on a group of suppliers all coming from the same region or country. What if trade wars flare up, resulting in high tariffs or, even worse, import/export bans, similar to the recent US ban on Huawei? In such a case, the whole critical infrastructure could come to a screeching halt. Such vulnerability also hands the foreign country a huge advantage in any trade negotiations.
Many of the Chinese vendors are very small, without any public, reliable information on their background, ownership, business, objectives, or motives. What if they plan to conquer the market now with low pricing and raise prices exorbitantly once the competition has diminished? Even worse, what if they have ulterior motives? No matter how much these companies vouch for their authenticity and business objectives, unless they open themselves to close scrutiny or, better yet, list on reputable stock exchanges in the US or Europe, it is extremely hard to be convinced of their authenticity. If you consider the headwinds that Huawei is facing, even with its significant brand recognition, the path for unknown IoT companies will be even harder, if not virtually impossible.
How to ensure device security
Historically, utilities and many critical national infrastructure providers have been very conservative in their vendor selection. They make their vendors go through an extreme, multi-level vetting process covering both technical and financial viability. They should continue this practice and include evaluation of overall ecosystem health, long-term impacts, and supplier diversity. Private enterprises should take the cue from them and be very careful in their vendor selection as well. The assessment should also include import bans, trade wars, and other such unlikely yet catastrophic scenarios.
IoT users should evaluate the lifetime cost of ownership of their IoT devices instead of just the initial cost. IoT devices typically have a very long life, exceeding ten years in some cases. Over such a long time, the cost of maintenance, timely upgrades, and quick fixes to security flaws can exceed the original procurement cost of the device. Additionally, these institutions should examine and understand the motivation behind predatory pricing and act with a long-term point of view.
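To illustrate why the lifetime view matters, here is a minimal sketch of the comparison. Every figure below is a hypothetical assumption, chosen only to show the mechanics, not vendor data:

```python
# Hypothetical lifetime-cost comparison of two IoT modules. Every number
# here is an illustrative assumption, not real vendor data.

def lifetime_cost(unit_price, annual_maintenance, patch_cost,
                  patches_per_year, years=10):
    """Total cost of owning one module over its deployed life."""
    return unit_price + years * (annual_maintenance
                                 + patches_per_year * patch_cost)

low_cost_module = lifetime_cost(unit_price=8.00, annual_maintenance=3.00,
                                patch_cost=1.50, patches_per_year=4)
trusted_module = lifetime_cost(unit_price=12.00, annual_maintenance=1.50,
                               patch_cost=0.50, patches_per_year=2)

print(f"Cheaper-upfront module over 10 years: ${low_cost_module:.2f}")
print(f"Pricier trusted module over 10 years: ${trusted_module:.2f}")
# Under these assumptions, the module that looks 50% more expensive at
# procurement is by far the cheaper one over a ten-year deployment.
```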
As a last resort, the government and regulators should look at putting safeguards in place for the procurement of critical infrastructure. The focus should be not just on the network but equally, if not more, on devices. For example, the US government banned some vendors from supplying cellular network infrastructure. A case could be made for similar safeguards for devices used in critical applications as well.
The biggest step IoT users, be they government agencies or private enterprises, can take is to create an environment that nurtures diverse, strong, reputable, and reliable players who value security.
The Federal Communications Commission (FCC) will vote on Friday to virtually block Huawei’s access to the U.S. market, but this rare bipartisan action only protects one element of America’s digital infrastructure. In reality, the likeliest and most susceptible security vulnerabilities aren’t well understood by policymakers, and we’re at the beginning of a very long fight.
In the $2.4 trillion telecom sector, the dawn of 5G is more than a buzzword. It’s truly a new era full of great promise, as well as great danger. But our policymakers’ focus has only been on the big companies with name recognition, without attention paid to the less prominent ones that might pose much larger security risks.
Huawei and ZTE (another major Chinese manufacturer up for the FCC’s vote, but which doesn’t get the same publicity) are easy targets for the uninformed masses who fear all things China. Meanwhile, the national security threat from other Chinese-subsidized and foreign-controlled telecom companies is potentially more vast and insidious than our leaders in Washington, DC understand and acknowledge.
There has been no mention by politicians, in news media, or on social media of the security risks posed by devices or cellular modules, the mini-computers that serve as the brains of the Internet of Things (IoT). There will be 43 billion IoT devices in the world by 2023, and consequently they are a favored target for hackers. Unlike phones or chipsets, these modules are untraceable once embedded in devices. These elements are so critical to connected infrastructure that if a hostile state or actor gained control of them with intent to attack the U.S., the scale of destruction would be far more horrific than anything possible with a compromised smartphone or social media account.
Unauthorized access to your iPhone or Facebook enables spying. But access to an IoT device enables direct action in the real world. Shutting off power to Washington, DC. Turning off traffic lights in Manhattan. Slamming the brakes on autonomous cars in San Francisco. Stopping heat in winter to homes in Minnesota. Interfering with medical devices in Florida.
Forget the compromised security of smartphones. A compromised module – one of dozens that’ll be in every American home within the next few years – could mean literal life or death.
Five of the top ten IoT module manufacturers are Chinese, and they rake in 71 percent of the industry’s revenue, using the same government backing and the Huawei playbook to stifle competition in the U.S. and Europe. China’s heavy investment in IoT, coupled with considerable government subsidies, allows Sunsea, Fibocom and Quectel to be extremely price-competitive in global markets.
Industry insiders have been vocal in sharing stories of these companies slashing module prices below reasonable production costs. Driving out competition with a questionable pricing structure – and the consequent potential for future manipulation of affordability and availability – adds another layer to the concerns regarding 5G security.
It’s arguable that Chinese vendors Sunsea, Fibocom and Quectel are clones of Huawei, especially since they’ve effectively cornered the global market for the most critical components of the IoT. That’s why it’s important for politicians and security experts to glance up from their research on Huawei and better understand the implications of U.S. reliance on Chinese IoT manufacturers.
The U.S. government shouldn’t ban a company just for being China-based, nor target one just for being in the business of telecommunications or technology. Not every tech company in China is a stooge for the government with unreserved, evil intent. In fact, companies like Quectel and Fibocom thrive in good part due to legitimate innovation, amazing engineers and good quality.
Nonetheless, the FCC will vote on Friday on Huawei and ZTE. We must hope that this is just a first salvo in making 5G and the Internet of Things secure, with more investigation and possible action to come. If the Trump Administration truly wants to protect the American people from foreign interference via smart devices, the FCC and Congress need to be more strategic in looking at potential threats beyond the flashiest names.
The millions of IoT devices we use, knowingly or unknowingly, make our modern societies function. They include utility meters and traffic lights, and they even connect to the national grid. 5G is elevating their use to even higher levels and making them an integral part of the country’s critical infrastructure.
But that also is making that infrastructure more vulnerable to security threats. Reps. Mike Gallagher and Raja Krishnamoorthi of the U.S. House Select Committee on China understand this threat and are rightly sounding alarm bells. It’s fascinating how these seemingly benign and almost invisible IoT devices can be such a grave threat.
IoT devices are an integral part of the national critical infrastructure
The U.S. IoT market is massive, estimated at $199B in 2024, according to Statista. IoT technology is found in almost any connected device for individual or industrial use. Since IoT devices manage and control the country’s critical assets, including power, water, natural gas, and many industries, and will do so even more with 5G IoT, they are part of the national critical infrastructure.
Imagine the havoc the sudden collapse of the national grid or large-scale disruption of utilities can create. Such catastrophes can bring the country to a screeching halt, threaten lives, and cause lasting damage.
Despite its critical role, IoT security hasn’t gotten the attention it deserves from regulators and governments. It was considered a “business risk” to be managed by the industry. Fortunately, that is starting to change. The recent letters from the congressmen to the FCC, the Department of Defense, and the Treasury Department regarding cellular connectivity modules used in IoT devices indicate that lawmakers are now treating this as a national security issue.
Vulnerabilities of IoT devices
When it comes to cellular IoT devices, the biggest threat is the security of the connectivity module (aka IoT module) on which they are built. This module is the gatekeeper, which controls all the data going in and out of the device. If the module is compromised, the whole device, and in many cases all the systems it connects to, are compromised.
Connectivity modules can have many vulnerabilities. Backdoors could be built into the hardware or software when modules are shipped from the factory (enabling so-called “zero-day” attacks) or introduced during the numerous upgrades modules receive over their ten-plus-year lifespan. These upgrades are similar to the ones our smartphones receive but are usually executed automatically.
Because of prohibitive costs, operators can’t examine and verify all the devices and their firmware updates. No matter who creates these vulnerabilities or how, they can be exploited by bad actors. If those bad actors are state-sponsored, the risk is even higher.
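One concrete defense against tampered upgrades is to accept only firmware that is cryptographically signed by a trusted vendor key. The sketch below is a minimal illustration of that idea using an Ed25519 signature check; it is not any specific vendor’s mechanism, and real modules layer this with secure boot and hardware roots of trust. The function and variable names are hypothetical.

```python
# Minimal sketch: accept a firmware image only if it carries a valid
# signature from the vendor's public key. Illustrative only; real IoT
# modules use vendor-specific secure-boot and signed-update schemes.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def firmware_is_trusted(image: bytes, signature: bytes,
                        vendor_pubkey: bytes) -> bool:
    """Return True only if `image` was signed with the vendor's key."""
    key = ed25519.Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Hypothetical update flow on the device:
# if not firmware_is_trusted(new_image, sig, VENDOR_PUBKEY):
#     abort_update("rejecting unsigned or tampered firmware")
```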
As FBI Director Christopher Wray mentioned in his recent testimony, “Hackers are positioning on American infrastructure in preparation to wreak havoc and cause real-world harm to American citizens and communities.”
The attackers can stay dormant for a long time and attack at a time of their choosing. Hence, it wouldn’t be wrong to say that any device with such vulnerabilities can become a ticking national security timebomb.
IoT security: A tragedy of the commons
IoT is a largely low-margin, low-revenue (per subscription) business with a highly cost-competitive market. Most operators manage security as a business risk. They invest just enough to protect against fraud and liability. National security probably never makes it to their priority list.
Considering the complexity, cost, and potential risks involved, the responsibility of ensuring the security of IoT devices, from a national security perspective, rests squarely on the regulators and the government. The simple and highly reliable approach to achieve that seems to be establishing a fully trusted supply chain comprising local players and players from trusted national partners.
This is where things get complicated. According to Counterpoint Research, almost a quarter of the US cellular connectivity module market is controlled by one Chinese company, Quectel. More alarmingly, a large portion of the IoT modules used in FirstNet, the cellular network used by first responders, are also Chinese.
And that’s precisely why these congressmen are concerned and asking relevant US departments to intervene. As opined by many law experts, Chinese laws require all Chinese companies “to support, provide assistance, and cooperate in national intelligence work.”
So, then the question arises: Is the Huawei-like approach of totally banning these companies the right strategy? If not, are there any other remedies available? What are the pitfalls? All these questions need to be addressed before taking any substantive action. Look out for my next article, which will explore these questions and possible answers.
Always Connected PCs (ACPCs)
Have you heard the phrase “converting poison into medicine”? Well, that is kind of what is happening to the PC industry now. Let me explain. Not too long ago, the rise of powerful smartphones and tablets, primarily powered by ARM processors, decimated the PC market. Interestingly, the tenets of smartphones (always connected, long battery life, thin and lightweight) that caused the downfall of PCs are now bringing life back into them. The introduction of ultra-thin laptops and 2-in-1s is helping PCs get their mojo back. In early December 2018, Qualcomm announced a major step in this smartphonification of laptops. Its new Snapdragon 8cx compute platform, the world’s first 7nm PC platform, not only embodies all those hallmark characteristics of a smartphone but also promises performance that meets or exceeds that of traditional Intel x86 processors. Most importantly, the Snapdragon 8cx will run the full Windows 10 Enterprise version and will natively run browsers and many other applications.
Qualcomm dipped its toes into the PC market by creating a new category, aptly named the Always Connected PC (ACPC), using repurposed mobile SoCs. It started with the Snapdragon 835 and, very recently, the Snapdragon 850. These chips were built for Android and later optimized for Windows 10 and computing devices. They ran a restricted Windows version and offered limited performance, mainly because x86 applications had to run through instruction translation on ARM. They were good enough for light and simple tasks such as browsing and video, but not ready for processor-intensive apps or enterprise-grade use cases. The story is completely different for the newly announced Snapdragon 8cx.
Qualcomm says the Snapdragon 8cx is purpose-built from the ground up for computing and Windows 10. Supposedly, the company has been working on it since 2015! The Snapdragon 8cx does share its architecture with, and was announced at the same time as, the flagship Snapdragon 855 mobile SoC. This will naturally attract skepticism that, just like the previous versions, this platform might also be a slightly tweaked mobile SoC. However, when you look closely at the significant differences between the building blocks of the two, it is quite clear that the Snapdragon 8cx is a different breed. For example, the 8cx has the much more powerful Kryo 495 CPU vs. the 485 on the Snapdragon 855. The clocking configuration of the CPU’s eight cores is different as well. The Snapdragon 8cx has the more advanced Adreno 680 Extreme GPU vs. the 640 in the mobile SoC. The Snapdragon 8cx also has features that are only found in high-end enterprise laptops, such as support for dual HDR 4K displays, up to 16 GB of RAM, NVMe SSD, UFS 3.0, and many more. Most importantly, during the launch event, Microsoft confirmed Windows 10 Enterprise support for the Snapdragon 8cx, which is a strong vote of confidence in the platform. Additionally, many popular applications, including the Chrome, Firefox, Microsoft Edge, and Internet Explorer browsers, as well as Gameloft, Hulu, and others, run in native mode, and a wide range of apps are optimized for Windows on ARM.
When you combine these features with the trendsetting X24 LTE modem that provides up to 2 Gbps peak speed, Quick Charge 4, advanced audio capabilities with the aptX HD codec, and the hallmark ARM features of multiday battery life and always-on connectivity, I think there is no question that the Snapdragon compute platform and ARM architecture are ready for primetime and well-equipped to challenge the dominance of Intel x86-based platforms in performance computing. Qualcomm’s claim that Snapdragon 8cx performance is comparable to a competitor (supposedly an Intel Core i5) at twice the battery life should send a chill down Intel’s spine.
Qualcomm confirmed that the Snapdragon 8cx can be integrated with the X50 modem for 5G connectivity, but for some reason it didn’t make this a major selling point. It looks like the company is worried about 5G overshadowing the compute story, or perhaps some laptops will simply not support 5G. Qualcomm is tight-lipped about the reasons. In my view, although the X24 modem has excellent performance, an ACPC with 5G is the ultimate ACPC. After all, it’s the “connected” PC; why not supersize it and make it the best in all aspects? Also, the huge capacity gains and efficiency improvements of 5G will enable operators to offer very attractive “always on” unlimited plans.
Coming back to the competitive landscape, ultra-thin PCs are the most profitable tier for Intel, and it has had a good run with them so far. Some devices, such as Microsoft’s Surface Pro and HP’s Folio, have shown that Intel Core i5 processors can be designed into attractive fanless laptops with long battery life. However, most other Intel x86-based laptops fall well short. With Snapdragon 8cx-based laptops planned for the second half of 2019, amid the busy back-to-school and holiday seasons, it will be interesting to see how the Qualcomm and Intel platforms compete and perform. Come 2020, this will quickly turn into not just a processor battle but also a 5G battle.
With 5G, the ACPC battle gets even more interesting. Based on Qualcomm’s comments, it seems it will have 5G-based ACPCs in the market in early 2020, if not late 2019. Intel has announced its own 5G connected laptop plans with Sprint. Given x86 power characteristics and Intel’s delayed 5G modems, it will be a tall order for Intel to beat the battery life and more mature 5G connectivity of Qualcomm ACPCs. With connected, ultra-thin, long-battery-life laptops continuing to gain popularity and Qualcomm catching up in performance, Intel must adapt to the extremely fast pace of innovation that smartphonification is bringing to the PC industry in order to compete effectively.
A spate of recent events, including the announcements of the Microsoft Surface Pro X and Samsung Galaxy Book S, points to a turning point in the largely stagnant laptop market. These devices, dubbed always-on, always-connected PCs (ACPCs), bring the hallmark characteristics of smartphones to laptops while also providing enterprise-class computing performance. As a long-time observer and an industry analyst, I strongly believe that ACPCs are set to transform laptops and redefine personal computing.
After revolutionizing portable personal computing in the late 1980s and ’90s, laptops have not changed much. Of course, they have become a bit thinner, lighter and more powerful. But considering that you still need to carry the charger and look for Wi-Fi or other connectivity wherever you go, you can’t call those incremental improvements a big leap. These incremental steps look even smaller when compared to the speed at which smartphones have evolved.
ACPCs completely change the outlook for laptops and accelerate the pace of innovation. They are always on, connected to LTE or 5G, can run a full day without needing a recharge and provide performance at par with or better than today’s bulky laptops. All of this is made possible by a new breed of processors with micro-architecture similar to the ones used in smartphones.
Smartphone Revolution Powered By Arm Processors
Ever since their debut in the early 2000s, smartphones have been dominating the personal computing space. They have rapidly grown in both performance and influence. Almost all of today’s smartphones are powered by processors with a micro-architecture designed by the British company Arm Holdings. Smartphone players such as Apple and Qualcomm use processor cores designed by Arm.
(Full disclosure: Qualcomm is a client of my company, Tantra Analyst.)
These processors have been proven to be power-efficient. Designed primarily for portable devices, they seem to have previously focused more on power consumption than processing capability. But the evolution of these processors and the optimizations from the original equipment manufacturers (OEMs) have dramatically improved their performance in recent years. This has set Arm processors up for performance-focused devices such as laptops, PCs and even servers.
Laptops Have Survived The Test Of Time
Laptops have defied many predictions of their ultimate demise. First it was netbooks that were said to be laptop killers, but they ended up being just a fad. Then it was tablets that were supposed to replace laptops, but they never scaled up.
The way I see it, the biggest trait of laptops, which made them stand strong against these odds, was their ability to be a productivity and content creation tool — be it for personal and consumer-type use cases or enterprise ones. The basic needs for such use cases are excellent performance and support for thousands of existing Windows applications.
Writing The Next Chapter Of Laptops
The first attempt at making the Windows operating system (OS) compatible with Arm processors was circa 2012, with Windows RT, designed for tablets. But it turned out to be a dud, mainly because it couldn’t run existing applications. Its makers, Microsoft and Qualcomm, still believing in the concept, doubled down on their efforts. This round made sure Windows 10 and all those existing applications would work flawlessly on the Arm processors used in ACPCs.
It is debatable whether ACPCs are a new category or an existing yet transformed laptop category. Some OEMs, such as Lenovo, Samsung and Asus, are continuing with traditional clamshells, whereas others, like Microsoft, are trying out the 2-in-1 model with detachable displays that convert into fully functional tablets.
I think it is telling that many PC vendors have introduced ACPCs. I believe that the attractiveness of bringing the smartphone-like battery life and user experience to laptops, the proliferation of 5G, along with a strong commitment from Microsoft and the entire PC ecosystem makes it clear that ACPCs are the future of laptops.
What’s Inside The ACPCs?
ACPCs are powered by Qualcomm Snapdragon platforms. The first-generation devices used optimized versions of Snapdragon SD835 and SD850. But the latest ones, including Samsung Galaxy Book S and Surface Pro X, use purpose-built Snapdragon 8cx (Pro X uses a modified version of 8cx chip called SQ1). Snapdragon 8cx has a powerful CPU and GPU, as well as strong artificial intelligence capability.
I’ve seen many popular browser, video game platform and media player developers porting their applications to run natively on Arm processors. Likewise, many enterprise vendors have ported their applications to Windows on Arm. Adobe announced that its drawing and painting applications will be available on ACPCs. And according to Microsoft, the Surface Pro X offers three times higher performance than the previous-generation Surface Pro 6, which used a conventional x86 processor. So, there is no question in my mind that ACPCs are now primed for running the high-performance workloads of consumers as well as enterprises.
The progress of ACPCs may be slower than some might have expected, but it takes time to transform an industry with more than three decades of history. I believe an Arm micro-architecture that is ready for performance-focused computing has repercussions beyond laptops, as there could be many more applications and use cases.
What This Means For Marketers
Because of the stagnant market, it seems that marketers have gradually reduced their attention to laptops and, instead, moved their strategies toward media more suited for smartphones. I believe ACPCs will drastically change that equation. Marketers will likely need to quickly pivot their marketing plans and spend. Specifically, the 2-in-1 model almost creates a new category of devices, and marketers will be well served if they capitalize on this growing popularity and devise their marketing plans around them.
We are at the turning point of personal computing, and at the dawn of a new era with devices powered by Arm micro-architecture. It will be interesting to watch it unfold, especially for an analyst and a keen industry observer like me.
The fun of being an analyst is that you get to test new gadgets firsthand and share your opinions without any inhibitions. It also comes with a sense of responsibility towards your readers. I got my Microsoft Surface Pro X about two weeks ago and have been using it as my daily driver ever since. My verdict – it is an excellent productivity notebook for a pro user like me, who extensively uses office applications, browsing, videos, and social media. Beyond that, it also signals the dawn of a new class of always-on, always-connected notebooks (aka ACPCs) that will redefine personal computing.
<<Side note: If you would like to know more about ACPCs, please check out my earlier articles here and here>>
Easy set-up
I bought a 16GB/256GB Pro X model with a keyboard and stylus. The Windows setup on it was a breeze. The impressive part was the ease of enabling cellular connectivity, just like on a smartphone: push the nano-SIM in, a couple of clicks, and you are ready to go. I have been using connected laptops since 2008, the 3G days, and it was always a pain to transfer a subscription from one laptop to another. Although I didn’t utilize it, a user-removable SSD drive is another neat feature. The best part of this machine is its always-on behavior, just like a smartphone’s: you come in front of it, your face is recognized, and it is ready to go. Additionally, OneDrive let me move files from my old laptop seamlessly.
Ever since setting it up, I have been using it as my primary computer: working in my home office, meeting with clients, bringing it to my son’s karate and other classes, and so on. Thanks to the Snapdragon/SQ1 processor, the Pro X is so thin and light that carrying it around is extremely convenient.
A solid productivity machine
The biggest characteristic of the Pro X is that it is a great workhorse, and using it is a joy! Its bright display is beautiful, and its thin bezels fit a full 13” screen in a small form factor. Coming from my 13.3” laptop, I felt right at home. I am a power user of many Microsoft Office tools, including Word, Excel, PowerPoint, and Outlook. The user experience was snappy and super responsive, even when multitasking with lots of documents, spreadsheets, and presentations. Switching between windows of the same app or between different apps was very smooth.
I use emails on Outlook as my to-do list—keeping many email windows (more than 15) open till the action items in them are dealt with. My previous laptops had issues dealing with this, especially when the laptop was put to sleep and turned back on. Many times Outlook would become unresponsive, requiring restarts. But Outlook on Pro X has been pretty stable so far.
A lot of my work happens in the browser, and Chrome is my favorite. I usually have more than ten tabs open, spanning multiple Gmail accounts; local, national, and international news sites with video feeds and ads; Tweetdeck and Twitter pages; the Yahoo Finance page; multiple forums I regularly follow; WhatsApp Web; Google Sheets and Google Photos that I share with my wife; Facebook; and others. I also use tabs as my to-do list. My kids call me crazy when they see how many tabs I use. Surprisingly, the user experience was smooth even with that many tabs open. As you might know, Chrome currently runs in emulation mode. Microsoft recently announced the beta of its Edge browser that runs natively on ARM processors (i.e., on the SQ1), which should further improve performance and battery life. I am thinking of migrating to Edge and evaluating the experience myself.
So, all in all, I was very impressed with the workload Pro X could take and proved itself as a solid machine.
A perfect companion for travel and offsite work – battery life and connectivity
The biggest differentiation of ACPCs such as the Pro X, as touted by Microsoft, Qualcomm, and Arm, is their more-than-a-full-day battery life. I really experienced it while using the Pro X. I would always have at least 10-20% of battery left after a full day of work (8-9 hours). That was using a mix of Wi-Fi and cellular connectivity. I bet I could eke out even more with optimized screen brightness and connectivity settings.
The Pro X transformed how I go out for meetings and travel. I would always bring the charger with my old laptop to avoid battery anxiety, which necessitated carrying a bag. Once I decided to take the bag, I would throw in lots of “just-in-case” items that I hardly use. But with the Pro X, voila! No anxiety, no charger, no bag, and none of the other junk! This thing is so sleek, light, and stylish that I carry it like a notebook, with a nice stylus and handwriting converter to boot! Additionally, with fast charging, its battery can go from 0 to 100% in a little over an hour.
For a road-warrior like me, integrated cellular connectivity is a no brainer. It is such a relief that I am always connected, no matter where — no need to search for Wi-Fi, no worries of security and privacy, etc. Also, no need to use my phone’s hotspot and worry about its battery running out.
What about gaming and other incompatible apps?
This is the most frequent question I encountered when carrying or using the Pro X in public. Well, I am not a gamer, and, it turns out, I don’t use any x86 apps that lack 32-bit versions (only 32-bit x86 apps can run, emulated, on the Pro X). So, I am not the best person to pass judgment on that.
There have been reports of people having trouble running games on it. That has actually worked in my favor! Ever since I opened the Pro X package, my teenage son has had his eye on this thing, always tinkering with it. I think he tried a few of his favorite games, such as Minecraft, Fortnite, and CS:GO. I have a feeling either they didn’t work, or he didn’t like the user experience, because after the first couple of days, he went back to his powerful gaming rig. Obviously, the Pro X is no match for his purpose-built, beefy desktop.
What are the misses?
I think the biggest miss is its steep price tag. Even the most basic configuration with only the keyboard would cost $1,100 plus tax. So, this is no mainstream computer but targeted toward those who value its premium design and features.
Despite the premium cost, I was surprised that there was no cellular data plan included. I would have expected Microsoft to bundle at least a few months, if not a year, of data to let consumers evaluate the always-connected experience.
The Pro X is literally a notebook, not a laptop. As with any Surface Pro, it is almost impossible to use on your lap.
Heralding the ACPC era
Many people might review the Pro X like any other expensive gadget, on its merits and misses. However, the relevance of the Pro X goes far beyond this one product. Its performance conclusively proves that ACPCs are real and can deliver on the promises their proponents Qualcomm, Microsoft, and Arm have been making for the last two years. The Pro X also shows these companies’ strong commitment to the ACPC concept. As mentioned, the Pro X is not a mainstream device, but it heralds a new era of personal computing, and I am sure more cost-effective options will soon make Arm-based ACPCs mainstream.
Qualcomm, during its annual Tech Summit in Maui, Hawaii, unveiled a comprehensive portfolio of platforms for Always-On, Always-Connected PCs (ACPCs) to cover the full spectrum of tiers and use cases. This announcement further solidifies the industry’s move toward ACPCs, led by Qualcomm, Microsoft, and Arm.
<<Side note – If you would like to know more about ACPCs, please check out my earlier articles here, here and here. >>
A broad portfolio of offerings
The Snapdragon 8cx, announced at the same event last year, was the first real ACPC platform that brought Arm chips into the performance and enterprise computing space. Since then, the 8cx has powered a handful of devices, including the trend-setting Microsoft Surface Pro X, the stylish Samsung Galaxy Book S, and the first 5G-capable Lenovo ACPC. Many other designs are in the pipeline.
While the Snapdragon 8cx was targeted at the premium and high-performance segment, the newly announced Snapdragon 8c and Snapdragon 7c offer OEMs the choice to address the other tiers of the highly competitive laptop space. The tiering is based on CPU, GPU, and DSP performance, Artificial Intelligence (AI) and Machine Learning (ML) capabilities, and cellular connectivity speeds. However, Qualcomm never forgets to emphasize that even with tiering, all the platforms squarely deliver on the ACPCs’ famed promise of smartphone-like ultra-thin form factor, multiday battery life, and excellent connectivity, without any compromises. This promise is attractive for any tier, and that’s why almost every major PC OEM has embraced ACPCs.
Snapdragon 8c for everyday laptops
The key aspect of the Snapdragon 8c is enabling sub-$800, highly capable consumer and enterprise ACPCs that excel in high-productivity workloads as well as top-notch entertainment and multimedia performance. The 8c is a beast, sporting a 7nm octa-core Kryo 490 CPU, an Adreno 675 GPU, 4-channel LPDDR4x memory, support for NVMe SSD and UFS 3.0, a dedicated Hexagon AI/ML Tensor Accelerator, an integrated Snapdragon X24 LTE modem, and many other impressive features.
Snapdragon 8c offers 30% higher system performance than its predecessor—Snapdragon 850, more than 6 Trillion Operations Per Second (TOPS) AI/ML, and up to 2 Gbps of cellular speed.
You can get more detailed specifications of this platform here.
Snapdragon 7c for entry-level ACPCs
The primary focus of the Snapdragon 7c is to bring the ACPC experience even to cost-conscious, entry-level laptops: highly functional machines with a sub-$400 price point. The 7c sports an 8nm octa-core Kryo 468 CPU, an Adreno 618 GPU, 2-channel LPDDR4x memory, robust AI/ML support unheard of at this tier, and an integrated Snapdragon X15 LTE modem, among other things.
It offers 25% higher performance than competing solutions in the entry tier, more than 5 TOPS AI/ML, and up to 800 Mbps of cellular speed.
You can get the detailed specifications of this platform here.
Busting the myths of portability
Until now, portability in computing always meant a complex trade-off between weight and size, performance, battery life, and cost. If you wanted a thin and portable computing device, the only option was a tablet, and you had to be content with limited performance and crippled functionality, without support for a productivity OS such as Windows 10. On the other hand, if you wanted robust performance and long battery life, you had to cope with large, bulky devices and extended battery packs. If you wanted a combination of these features, you had to be ready for a hefty price tag.
But with ACPCs, you get an uncompromised experience without those tradeoffs: an Arm architecture that offers superior battery life and performance, full Windows 10 support for unhindered productivity, and an integrated cellular modem for always-on connectivity. All of that comes together in a thin, lightweight, and very attractive form factor, just like your smartphone.
The ACPCs are essentially aligning the computing industry with the smartphone industry. That will bring the smartphone industry’s hallmark of rapid innovation to the computing industry. Together both will benefit from the large economies of scale, cost-efficiency, and a huge ecosystem of OEMs, app developers, consumers, and enterprise players. That, in turn, has the potential to revitalize the stagnant and uninteresting laptop market and bring it much needed excitement and growth.
In other words, ACPCs are set to challenge the status quo of Intel’s x86 architecture and revolutionize the laptop/personal computing market.
In closing
Qualcomm’s announcement expanding the reach of ACPCs illustrates how the “Windows on Snapdragon” concept that Qualcomm, Microsoft, and Arm envisioned a few years ago is slowly but steadily coming to fruition. The comprehensive portfolio of platforms will pave the way for making ACPCs mainstream, bringing their benefits to all market segments, not just for the premium tier.
It will be interesting to see how the tussle between deeply rooted traditional x86 architecture and the disruptive Arm architecture unfolds and shapes the laptops and personal computing space.
While smartphones are all the rage in 5G, the market trends are aligning for a quiet revolution of 5G-enabled laptops (5GPCs) and other non-smartphone computer devices. The world’s first 5GPC, Lenovo’s Yoga 5G, was introduced at CES 2020, kick-starting the process. Although always-connected, always-on laptops (ACPCs) have been around for some time, their widespread adoption has been constrained mainly because of restrictive and expensive data pricing. The extremely high capacity and improved efficiency of 5G, which allows operators to offer attractive pricing combined with the remarkable improvement in the performance of ACPCs, has the potential to push the 5GPC market into high gear.
5G Offers The Best Network Technology For ACPCs
5G traction has been beyond anybody’s expectations. As of the end of 2019, 348 operators were investing in 5G and 61 operators had already commenced 5G services. The operators who have launched are steadily expanding their coverage. The introduction of dynamic spectrum sharing (DSS) — which allows 5G to use the 4G spectrum, expected commercially in the second half of 2020 — will substantially improve coverage. Thanks to the diligent work of regulators around the world, 5G has over 10 times more spectrum than 4G in many cases. That includes all the bands: higher (e.g., millimeter wave), middle (e.g., 2.5 and 3.5 GHz) and lower (e.g., 600 MHz).
Although 5G’s super-high speeds get all the attention, the biggest advantage of 5G is its extreme capacity, thanks to all that spectrum. That means cellular operators have the opportunity, more than ever, to experiment with new pricing and data plans. We already see glimpses of that in the true unlimited data plans for smartphones and fixed wireless access (FWA) services and plans. I strongly believe that 5GPCs will be a worthy addition to the new horizons operators will explore with 5G.
For the operators pouring billions of dollars into 5G network build-out, the sooner and the more users they get on that network, the better. The abundant capacity of the 5G network allows operators to move laptop users into a new usage paradigm: from today’s “data sipping, only turning on the cellular connection when needed, always conscious of hitting the data limit” mindset to the “anywhere, anytime, worry-free” paradigm.
5G also allows true service bundling: a single contract and attractive pricing for smartphones, FWA, laptops, and other connected devices. This, while reducing the cost for users, will increase the overall average revenue per user (ARPU) for operators. Bundled pricing brings service stickiness and builds long-term customer relationships. Operators could also work with 5GPC device OEMs to bundle the connectivity into the device cost, at least for the first months or year of 5G service. As a seasoned ACPC user, I know that once you experience the liberation from hunting for hot spots and from constant worries about their safety, you will hardly go back, as long as the cost of that experience is reasonable.
5GPCs Will Be The Best ACPCs
ACPCs have been continuously improving their performance and are now ready to serve as productivity, enterprise, and performance laptops. For example, the recently announced world’s first 5GPC by Lenovo offers high performance and 24-hour battery life. (Full disclosure: The laptop is powered by Qualcomm Snapdragon 8cx, and Qualcomm is a client of mine.) With a 5GPC, you can work from virtually anywhere without worrying about being near a power outlet or a Wi-Fi hot spot. The data speeds with 5G should be far better than any regular hot spot would provide.
With today’s traditional laptops and their shorter battery life, even if you have cellular connectivity, the untethered experience is limited because you always have to think about charging options. The extremely long battery life of ACPCs makes them truly untethered. Not being tethered physically or wirelessly is an exhilarating experience. And it is logical to think people would be willing to spend a little more for this higher perceived value.
5GPCs will be particularly attractive for enterprises. There are many reasons for this, and the biggest one is security. One of the main security risks for enterprises is their employees connecting laptops to unknown, unsecured Wi-Fi hot spots. With 5GPCs, IT departments can be certain that their employees will always be connected to a secure, known 5G network. The potential costs of lost data or security breaches would certainly outweigh any minimal increase in the cost of 5G cellular connectivity. 5GPCs also bring many other benefits to enterprises: integrated GPS allows reliable asset tracking and security mechanisms such as geofencing, and being always on, laptops will always be up to date with the latest security patches and updates. Of course, the increase in employee productivity from being reliably connected all the time at excellent speeds goes without saying.
5GPCs will bring much-needed excitement to the largely stagnant laptop market. If managed properly, the 5GPC trend has the potential to create a new full replacement cycle, which might last for years.
All the stars are aligning for 5GPC to be an attractive market for the industry. 5GPCs have the performance to make the best use of 5G and provide a differentiated experience. Both consumers and enterprises will benefit enormously from 5GPCs. Cellular operators can utilize 5G’s extreme capacity to offer services that make true anywhere, always-connected, fully untethered experiences possible. But it will only be a reality if they can offer attractive and innovative pricing and data plans. With major 5GPC device announcements trickling in and operators looking to expand their 5G offerings, it will be interesting to see how the story of 5GPCs plays out.
For the last few weeks, while the influencer world was busy testing and reviewing the Samsung Galaxy S20 and Galaxy Z Flip smartphones, I was diligently using and testing another equally important and impressive Samsung product, the Galaxy Book S, the latest always-on, always-connected PC (ACPC). My verdict? It defines what portable laptops are meant to be. However, being an analyst, I can’t stop myself from giving the rundown on why I think so and how it provides a glimpse of the future of laptops.
Purchasing and setting up Book S
The Galaxy Book S comes in only one configuration: the Snapdragon 8cx processor, 8GB LPDDR4X RAM, and a 256GB SSD (with a MicroSD slot supporting up to 1TB), running Windows 10 Home. I bought mine on the Samsung website. Ordering was a breeze, although Samsung may confuse buyers by showing only Verizon and Sprint as the supported carriers. I bought the Verizon version by paying in full ($999 + tax). However, it came factory unlocked, and it worked perfectly fine with Sprint, T-Mobile, and Google Fi. I am reasonably sure it would work with AT&T as well. I have sought clarification from Samsung on whether the Verizon and Sprint versions are different SKUs with any major differences, such as supported spectrum bands, carrier aggregation combinations, etc. I am yet to hear back from them (I will update this article if I do in a reasonable time). Surprisingly, I believe Samsung is artificially limiting the product’s reach and market opportunity by showing only two operators, even though it works with virtually any operator. This is important because other laptops in this category support only certain operators. For example, the HP Spectre works only with AT&T and T-Mobile.
The set-up was easy. I did have an issue with the keyboard backlight not working, which was resolved with a Windows update. The backlight has three levels, which is nice, but the first level is dim enough that you might mistake it for not working except in low-light situations.
Incredibly thin and light, with extremely long battery life – perfect for travel or the office
I have used a lot of laptops in my professional life, and that is an understatement. This is by far the thinnest, lightest laptop that did everything I wanted while providing the longest battery life. The official dimensions can be found here. My workloads are primarily productivity-focused. As I explained in my earlier article, I use more than 15 email windows and multiple sessions of Microsoft Office applications, including Word, Excel, and PowerPoint, and I usually have more than 20 browser tabs open at a time. The Samsung Galaxy Book S with its Snapdragon 8cx processor never struggled under this load. There is something to be said about the new Chromium-based Microsoft Edge browser, which comes as the default. It is fast and stable and supports Chrome extensions, so I never miss my previous favorite, the Chrome browser! Edge provides native ARM64 support, so on the Snapdragon compute platform its battery life and performance are beyond compare versus Chrome, which runs in 32-bit emulation mode.
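As an aside for the technically curious: on Windows 10 on ARM, you can verify for yourself whether a given app runs natively or under emulation. Below is a minimal sketch of my own (not something Samsung or Microsoft provides) that queries the documented IsWow64Process2 Windows API via Python’s ctypes; which process ID you pass in is up to you.

    # Minimal sketch: check whether a Windows process runs natively (ARM64)
    # or under 32-bit x86 emulation, using the documented IsWow64Process2 API.
    # Illustrative only; constants are from the Windows SDK headers.
    import ctypes
    from ctypes import wintypes

    IMAGE_FILE_MACHINE_UNKNOWN = 0x0000   # reported when the process is native
    PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

    def is_emulated(pid: int) -> bool:
        kernel32 = ctypes.windll.kernel32
        handle = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
        process_machine = wintypes.USHORT()
        native_machine = wintypes.USHORT()
        kernel32.IsWow64Process2(handle,
                                 ctypes.byref(process_machine),
                                 ctypes.byref(native_machine))
        kernel32.CloseHandle(handle)
        # UNKNOWN means "not running under WOW64", i.e., a native process.
        return process_machine.value != IMAGE_FILE_MACHINE_UNKNOWN

Running this against an Edge process on the Book S should report native, while an x86-only browser would show up as emulated, which is consistent with the battery-life gap I observed.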
The Galaxy Book S is a perfect companion for a road warrior like me. However, thanks to COVID-19, my travel is severely curtailed. During the limited travel I did with the Galaxy Book S, I never carried its charger for single-day trips or in-town meetings. That means no backpacks or other bags to carry, just the Book S, like a notebook. At the end of each of those days, more than 30-40% of the battery still remained. Truly remarkable.
Without travel, I have converted the Galaxy Book S into my home workstation. With an external 32-inch WQHD (1440p) monitor, mouse, and keyboard, all connected through a USB-C hub, I almost forget that it is a laptop; such is the user experience!
The Galaxy Book S always gets compliments about its thinness and weight, whether I use it in meetings or at my son’s karate class. Many wonder how one could fit a fan in such a thin chassis. Some of my curious IT friends even tried to search for the fan and vents! The kicker is telling them that it has no fan or vents, thanks to the Qualcomm Snapdragon 8cx processor inside.
The secret behind the incredible size and battery life of the Galaxy Book S
The biggest challenge laptop designers face is the tradeoff between size (thinner and lighter) and performance and battery life. Designers seem to have reached a saturation point in that tradeoff. It all boils down to the thermal characteristics of today’s processors: the higher the performance, the more power is used and the more heat is generated. There are two options to manage this heat: either use a fan and proper ventilation, or throttle the performance. Most of today’s laptops, even ones such as the MacBook Air, utilize fans, which makes them big and bulky while also increasing the power consumed. Premium sleek devices, such as Microsoft’s older-generation Surface line-up, use throttling, which compromises the user experience. As for increasing battery life, the only option is adding bigger batteries, which increases weight.
Now comes the Snapdragon 8cx compute platform used in the Samsung Galaxy Book S. Built using the best of Qualcomm’s mobile heritage combined with the performance you’d expect of a PC, it is based on the Arm architecture, offering performance similar to an x86-based Core i5. The Snapdragon 8cx provides consistently high performance with minimal heat production in an extremely power-efficient way. So, without fans or cooling constraints, and without the need for bigger, heavier batteries, device designers can develop extremely thin, light, high-performance laptops, such as Samsung’s Galaxy Book S, whose battery life is measured in days, not hours.
Galaxy Book S vs. Surface Pro X
Since I have reviewed and have been using the Microsoft Surface Pro X for the last few months, a comparison between the two is a question I am often asked. Well, I like them both. They have some common uses, but there are many where one is better suited than the other. For example, as I explained in my article, the Pro X can be off-balance when you try using it on your lap, whereas the Galaxy Book S proved to be a perfect fit for such uses. As a detachable 2-in-1, the Pro X is ideal if you also like to use your device as a tablet with the stylus. The Galaxy Book S is a clamshell design that is more suitable as a daily driver or a workstation, easily connected through USB-C docks and such. Although the Galaxy Book S has less RAM (8GB vs. 16GB), I haven’t seen that affect my productivity apps much. But if you are using more graphics- and processor-intensive applications, the difference might be more apparent. Of course, the Pro X with all the accessories costs upwards of $1,500, whereas the Galaxy Book S is around $1,000. I currently use both devices. All my content is on OneDrive, and since both are always connected, I can seamlessly switch between the two, no matter where I am.
The biggest concern with ACPCs remains app compatibility. More apps are being ported to run natively on ARM64, though some applications, like certain games and video editors, are still incompatible. It is worth noting, though, that most of those demanding applications don’t run well on other thin-and-light notebooks either. The other concern for some is high cellular data pricing, but operators now have bundled options where one can get reasonably priced unlimited add-on data plans.
A glimpse of the future
The Samsung Galaxy Book S is only the second ACPC based on the Snapdragon 8cx, and it supports best-in-class 4G LTE connectivity, with peak speeds up to 1.2Gbps. But we are at the dawn of 5G, which promises multi-gigabit user speeds, extreme capacity, and lower latency. 5G ACPCs (aka 5GPCs) will be the best devices to utilize this unprecedented connectivity everywhere, as I have explained here. The Book S gives a glimpse of what those 5GPCs have to offer in the years to come. In fact, the world’s first 5GPC has already been announced, and many are on the horizon. I can’t wait to get my hands on those!
It is bliss, as an engineer, to witness a whopping 2 Gbps speed on a live commercial network, using an off-the-shelf device. And that was my experience a few weeks ago, using the new Lenovo Flex 5G on Verizon’s live mmWave network in San Diego. It is even more amusing considering that I had tested 9.6 Kbps (yes, kilobits per second) speeds on 2G networks only two decades ago, and tens of Mbps only a few years ago.
The Flex 5G is the world’s first 5G PC, powered by the Qualcomm Snapdragon 8cx 5G compute platform with the Snapdragon X55 5G Modem-RF system. It represents what an ideal productivity 5G PC should be: ultra-high-speed mmWave and sub-6 GHz 5G connectivity, the famed long battery life of Always Connected PCs (ACPCs), robust performance, and a lightweight, fanless design, all enabled by the Snapdragon processor.
It is a perfect device for a user like me: a professional who is always on the move and needs top-notch connectivity and a light, high-performing laptop, without the hassle of constantly looking for Wi-Fi hotspots and power outlets.
Immediately after buying the Flex 5G, I couldn’t stop myself from testing and tweeting my initial thoughts. I used it extensively as my daily driver and travel companion for more than a month, and I came out very impressed.
Side note: If you would like to know more about ACPCs, including reviews of the Microsoft Surface Pro X and the Samsung Galaxy Book S, check out my other articles in this series.
Solid and highly functional build
Built in Lenovo’s popular Yoga style (in fact, this laptop is called the ‘Lenovo Yoga 5G’ outside the U.S.), the Flex 5G’s aluminum and magnesium body looks sleek and stylish. At 2.9 lbs., it is slightly heavier than the other ACPCs I have used (Surface Pro X and Galaxy Book S), but you really don’t feel much of a difference when carrying it around, as it is still very light and portable. I especially liked its rubbery back and sides, which offer a satisfyingly firm grip when holding it, and stability when it is placed on uneven surfaces. This came in very handy during my recent RV trip with the family. The Flex 5G would sit firmly no matter where I placed it, on the seat, on the table, or anywhere else, even when driving on bumpy roads.
Blazing fast 5G connectivity
The Flex 5G’s claim to fame is its 2 Gbps 5G mmWave speed. Unlike many peak speed claims, you can actually get that speed when standing close to the base station! But generally, when you move away from the base station and when the network load increases, speeds will move to hundreds of Mbps, though still notably better than 4G and better than most home networks. I did extensive testing on Verizon’s 5G UWB (mmWave) live network in San Diego and was blown away by the speed.
When I tested, Verizon had two sites in San Diego, but they seem to have added two more recently. The coverage is limited to a couple of blocks around those sites. Most of my testing was near the University Heights site. I could get speeds in excess of 1 Gbps more than a block away, as long as there was line of sight (LoS). I would get decent speeds even without LoS, but the connection would quickly drop to 4G LTE when I moved behind buildings or major obstructions. Thanks to the Flex 5G’s dual connectivity, though, the handoffs in and out of 5G coverage were seamless. I have included screen captures of some of the test results. Verizon has good 4G coverage offering high speeds in the area, which was a big plus.
I did some speed test comparisons between the Flex 5G and the Samsung Galaxy S20, which also utilizes the Snapdragon X55 5G Modem-RF system. Generally, the speeds on the Flex 5G were slightly higher, and the coverage a little better, than on the S20. I would attribute that to the laptop having better antennas (probably with higher gain), better antenna spacing, and fewer near-end obstructions such as hands and other body parts.
During the testing, I discovered that Ookla, Netflix’s Fast, and other speed test sites will not show the full speed when run in browsers (Edge, Chrome, and Firefox). The speeds topped out at 600-700 Mbps, but Windows 10 apps showed the full gigabit speeds. This confused me a bit. When I checked, Ookla could not give any specific reason for this behavior and suggested always using the app for accurate results. This indicates that browsers are not yet optimized to utilize such high speeds, which might create user experience challenges if not addressed soon.
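For readers who want to reproduce this kind of comparison without the browser bottleneck, a native script is one option. Here is a minimal sketch using the community speedtest-cli Python package (my choice for illustration; any native measurement tool would do), which runs a few passes against the nearest Ookla server and prints the results in Mbps.

    # Minimal sketch: scripted speed measurement outside the browser.
    # Assumes the community "speedtest-cli" package (pip install speedtest-cli).
    import speedtest

    def measure(runs: int = 3) -> None:
        st = speedtest.Speedtest()
        st.get_best_server()  # pick the lowest-latency Ookla server
        for i in range(runs):
            down_mbps = st.download() / 1e6  # results come back in bits/second
            up_mbps = st.upload() / 1e6
            print(f"Run {i + 1}: down {down_mbps:.0f} Mbps, "
                  f"up {up_mbps:.0f} Mbps, ping {st.results.ping:.0f} ms")

    if __name__ == "__main__":
        measure()

Averaging a few runs also smooths out the cell-load variation I saw near the University Heights site.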
Days-long battery life
The Flex 5G, just like the other ACPCs I have reviewed, lives up to its promise of long battery life. It sports a 4-cell 60Wh battery, slightly bigger than those of comparable Yoga laptops. This is made possible by the Qualcomm Snapdragon 8cx 5G compute platform, which is so thermally efficient that devices utilizing it don’t need a fan or any other specialized cooling, freeing up space and weight margin. This also helps the Flex 5G remain lighter than other comparable models.
Instead of testing Lenovo’s claimed 26 hours of video playback time, I tested the laptop for my typical productivity use. This included multiple email tabs, lots of browser tabs, Microsoft 365, Zoom and other conference call apps, YouTube, audio/podcast recording and editing, and more. I got more than two days of battery life from a single charge while doing these things. The laptop was connected primarily through Wi-Fi, with occasional cellular use. The battery lasted even longer during my limited travels, as the usage was lower, even though it was always on a cellular connection. I wish I had done more testing during travel, but Covid-19 didn’t allow it. Since I often travel to most of the major cities and areas where Verizon and other operators are deploying 5G, I could have fully utilized the benefits of 5G connectivity.
Performance tuned for productivity
The Flex 5G is a perfect machine for productivity. I found its processing power to be more than adequate for all of my usage (mentioned above). Even with all these applications running, it never got hot. I am not a gamer, nor do I use any graphics-intensive applications, so I cannot speak to application compatibility or performance for those needs. It is worth noting that such thin, lightweight laptops are not targeted at such users anyway.
One revelation was how accustomed I have gotten to the absence of fan noise during my more than eight months of using Snapdragon-powered ACPCs. A couple of weeks ago, when I had to use a buddy’s laptop, its fan noise was so distracting that it drove me crazy. Once you experience the pure silence of these ACPCs, it’s hard to go back to traditional devices with loud, heavy fans.
The Flex 5G comes with Windows 10 Pro and one year of free Microsoft 365 Personal. It has 2×2 802.11ac Wi-Fi with MU-MIMO (aka Wi-Fi 5), which performs excellently. I was especially impressed with the quality of the on-board microphone. While moderating a 5G panel at the recently held IWCE Virtual event, my headset broke at the last minute, so I had to use the laptop mic, and I was really impressed by how good it sounded.
Some misses and room for improvement
Despite the excellent overall experience, there are some misses too. The 256GB SSD is rather small for a premium productivity laptop. It is even worse considering that there are no upgrade options: the SSD is not field-replaceable (it is soldered to the board), and there is no microSD slot. For its thickness and weight, Lenovo could have provided a full-sized USB-A port, in addition to, or instead of, one of the two USB-C ports. Also, it currently supports only Verizon for 5G connectivity in the United States (the unlocked version works only in 4G mode with other operators).
Verizon’s extremely limited 5G coverage leaves a lot to be desired. mmWave needs dense deployment of sites, as I explained in my earlier article, and I hope Verizon densifies its network soon. Verizon will also soon enable Dynamic Spectrum Sharing (DSS), a feature that allows 5G to use the existing 4G spectrum and will tremendously help expand 5G coverage rapidly; the Snapdragon X55 inherently supports DSS. However, with limited 4G spectrum, gigabit speeds will not be possible. Verizon also needs to improve its customer support system for ACPCs. I had some issues activating the device, and the frontline reps had no clue where to redirect me. It took a few tries and a couple of hours to get to the right person and get my service going.
The Lenovo Flex 5G is available for $1,399 on the Verizon website (but shows as $1,699 on the Lenovo webpage for some reason), which is anywhere from $200-$300 higher than comparable thin, lightweight premium productivity laptops. Considering that this is the first of its kind, and that you are futureproofing your investment, it might be worthwhile for many mobile professionals like me. A lot also depends on how quickly the 5G coverage improves, and how soon we start traveling and moving around again like before.
In closing
The Lenovo Flex 5G lives up to its promise as the world’s first 5G PC and shows what a 5G PC should be. It delivers on all the characteristics of a Snapdragon-powered ACPC: a sleek fanless design, a lightweight build, and multi-day battery life, crowned with ultra-high-speed mmWave 5G connectivity. The device’s 5G usability is currently somewhat limited by Verizon’s coverage. However, Verizon is working hard to add more mmWave sites and bring in DSS, which should substantially expand coverage. The Flex 5G delivers a great computing experience now, and it will only get better as 5G coverage grows.
To read more reviews like this and to get up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
FTC vs. Qualcomm Antitrust Trial
The ongoing saga between FTC and Qualcomm
It is unbelievable when one of the world’s richest companies complains that it is an undue burden to pay for the innovations that power its high-margin products. But it sure looks like a well-orchestrated war on innovation with sinister motives when a government agency such as the FTC (Federal Trade Commission) joins hands with that company in beating down its much smaller (by 10x) supplier, a proven technology pioneer.
I am talking about the trial that is underway between the FTC and Qualcomm in the U.S. District Court in San Jose, California. I am not a lawyer, but a passionate engineer who was part of the 2G, 3G, 4G, and now 5G transitions. I know first-hand what it takes to conceive, build, and deploy wireless technologies. Here are my thoughts on this legal tussle and its potential consequences.
Wireless communication, especially for broadband data, is a fascinating invention that is largely invisible, literally and metaphorically. Unlike beautiful smartphone screens, artful industrial designs, or clever apps, wireless has been an enigma, attracting little attention or appreciation. You only realize its importance when out of coverage! Oh, the agony, the insecurity, and the fear of missing out! The device is called a smart “phone” for a reason: without the “phone” functionality, most of those smarts have little value!
“Wireless data” is the defining technology of the smartphone, not just another feature
Why am I explaining the importance of wireless data? In the current FTC trial, the Commission’s lawyers and witnesses put forward two complaints: 1) Licensing fees should be based on the modem’s price, not that of the device, and 2) Qualcomm’s licensing fees are too high. Looking at the first, wireless data is the fundamental and defining technology of any smartphone. Also, it is a misconception to think that wireless data technology is only contained within the “modem” block. In reality, the functionality is the result of a comprehensive system design that makes the smartphone work as a complete device, with all subsystems and software in it. Additionally, the design includes complex interactions with numerous infrastructure and network (radio, core, and cloud) elements to function as a well-orchestrated system. So, it would be disingenuous and utterly ridiculous to limit the value of all of this technology to a small percentage of the price of a modem.
On the licensing fees argument, fees should be determined by the value the technology imparts to the overall usefulness of the device, not correlated with a single isolated part. Also, the valuation of wireless technology should be market-driven, not arbitrarily or subjectively determined by the FTC or another regulatory authority. If you accept the notion of regulatory price-fixing, then why stop with Intellectual Property (IP)? Why not also regulate the price of smartphones? If you look at the recent price increases, it may not be a bad idea after all! Jokes aside, as witnessed by the spectacular proliferation of smartphones over the last decade, market pricing of wireless technology IP has benefited the mobile industry and consumers.
The value of Qualcomm’s IP has been accepted by most of the industry, as illustrated by more than 300 negotiated licenses. Moreover, after a lengthy investigation by and negotiations with the Chinese regulator, the NDRC (National Development and Reform Commission), Qualcomm agreed to a settlement that included rates deemed fair by the Chinese agency. It is telling that even Chinese OEMs agree that the licensing rates are fair, despite these OEMs having far thinner margins and much smaller scale than Apple, which makes most of the mobile industry’s profits (almost 90% by some estimates). So, Apple’s subjective claim that “license fees are too high” doesn’t pass the sniff test. It is also interesting to note that many of the FTC’s witnesses in the trial, such as Huawei, Apple, and Intel, are Qualcomm’s arch-rivals.
Will the FTC case against Qualcomm help or harm consumers?
Let’s examine the premise of this case and how it relates to FTC’s mission, which is to ensure fair competition so that consumers benefit from wider choices and lower prices.
When you look at the US smartphone market, there are two dominant players, and others are smaller, emerging players. I believe any negative action by FTC will further exacerbate this situation by eliminating these smaller players. Wireless innovation is extremely hard, time-consuming, and capital intensive. Qualcomm invests billions of dollars in R&D every year. A lot of this investment is done very early, years before a market even exists, which means there are significant risks involved. For example, Qualcomm has been investing in 5G since 2014, and commercial devices will only start entering the market in 2019 and 2020. For a company like Qualcomm, the only way to recoup such large, ongoing investments is to license its technology to as many smartphone OEMs as possible. Moreover, most of these OEMs don’t have the money to do their own R&D, and they rely on Qualcomm’s innovations to cost-effectively compete with the big OEMs. This creates a vibrant, highly competitive marketplace that offers consumers a wider range of choices and affordable prices, the ultimate goal of FTC. A great example of this is 4G LTE, which enabled many new and very innovative smartphone OEMs to enter the market. They are growing stronger and are expected to be formidable competitors in 5G. The virtuous cycle repeats as Qualcomm reinvests large portions of its licensing revenue back into R&D to offer a continuous stream of innovations.
In the absence of an entity like Qualcomm, most OEMs would be deprived of new technologies. Only a few big OEMs would be able to invest billions into technology development, and it’s unlikely that these vertically-integrated players would share most of their technology with others. Most other OEMs would not be able to afford to invest on their own and probably exit the market. This outcome would be the opposite of the FTC’s mission. If you don’t believe this, look at how aggressively Apple, Samsung, and Huawei have been trying to vertically integrate by either acquiring or building as much of their own technology as possible.
Beware of the consequences
Any attempt to trivialize or delegitimize Qualcomm’s IP and its role in the industry will have a long-lasting impact, not only on the smartphone market but on the entire tech industry. If the FTC undermines companies’ ability to earn rewards for their investments, or worse, arbitrarily caps the value of their technology, it will discourage American innovation and severely curtail the flow of capital to those innovations. Small and medium-sized companies, the backbone of this innovation engine, will be the most affected. So, in essence, this trial may (unwittingly?) amount to a war on the American innovation engine, and a negative outcome will ultimately hurt American consumers by decimating competition and choice in the marketplace; this is the antithesis of the FTC’s very existence and charter.
Analyzing the long term impacts of FTC’s activist litigation
In all the chaos of allegations, counter-allegations, scores of testimonies, rebuttals, cross-examinations, and more, I humbly request that Judge Koh and the FTC pause for a moment and ponder this question: “If Qualcomm loses this case, who will win?” No, it’s not the FTC; the real winner would be China, in the form of its proxy Huawei (and to a lesser extent, Apple).
In my previous article, I explained how the FTC’s activist attempt to fight Qualcomm will result in reduced competition, limited choice, and increased prices, and will ultimately do great harm to consumers and the industry. This is clearly against the FTC’s sworn mission and the very reason for its existence. But the importance of this case goes far beyond the FTC; it goes directly to the core purpose of the United States government itself, which is to protect the lives, the assets, and the interests of the citizens of this great country. Today, technological advances define the future of countries. Rightly so, the U.S. government has made the protection of its intellectual property one of its main objectives. However, the FTC’s actions run directly against that objective.
Qualcomm is a well-oiled innovation engine
As the trial progressed, a lot of interesting facts came to light. It is undeniably clear that Qualcomm has been and continues to be a well-oiled innovation engine, efficiently cranking out technologies and products. In testimony on Friday, Jan 25th, 2019, Christopher Johnson of Bain & Company reluctantly spilled the beans on the competitive analysis the firm did for Intel. Bain benchmarked investments, execution, and productivity between Intel and Qualcomm, especially pertaining to the development of wireless technologies and products. Bain’s analysis showed that Qualcomm’s investment in SoCs (Systems on Chip) was comparable to Intel’s but produced three times as many products. The report also showed that Qualcomm invested much more than Intel in developing wireless technologies and modems, which are at the heart of all mobile devices and networks.
With Qualcomm’s strong performance, it is no wonder weaker modem chipset players couldn’t compete and quickly folded. For example, companies such as Broadcom (which had consolidated assets from Renesas and Beceem), ST-Ericsson, and Texas Instruments exited the business. Other players, such as Infineon, were bought by bigger companies like Intel. As a result, the majority of smartphone OEMs, be they newer ones such as Apple, Samsung, LG, and a whole slew of Chinese OEMs, or legacy OEMs such as Motorola, Sony, Blackberry, and others, ultimately ended up using Qualcomm’s chipsets. In other words, Qualcomm’s strong market position was primarily the result of its clear vision, incredibly talented engineers, and military-precision execution. However, this position didn’t give Qualcomm the market power alleged by the FTC or make it immune to competition. As proven time and again, the highly competitive mobile market only rewards winners and harshly punishes those that stumble. Nokia’s spectacular demise from its peak is a great example of this. Specific to Qualcomm, the failure of the Snapdragon 810 chipset, which came after the blockbuster Snapdragon 800, made many OEMs quickly abandon Qualcomm and take their business elsewhere. In the fast-changing mobile industry, market power is a misnomer, and only the companies that have the right foresight, investment, and execution survive and thrive.
Down payment for the next-gen technologies
When analyzing the value of cellular IP and modem chipsets, conventional wisdom might be to consider only a company’s share of contributions to the current generation and to evaluate accordingly. However, many fail to understand that wireless technology is not static but a series of evolutions, with multiple releases within each evolution (G, or generation). For OEMs to be successful, the key is to leverage a steady stream of technologies and solutions to feed multiple generations of products. That means the price they are paying for today’s technology also includes a down payment for the next generation of technologies they will need down the road. For example, when OEMs were selling 3G devices in 2006 and 2007, Qualcomm’s R&D engineers were already working on 4G technologies, funded in large part by licensing revenue from all of those OEMs’ devices. And when 4G was growing exponentially in 2014 and 2015, Qualcomm was already heavily re-investing in 5G. Essentially, Qualcomm has acted like an R&D design house for the entire smartphone industry ever since 2G. It is a virtuous cycle of innovation and re-investment, one generation after another.
What happens if this cycle of innovation and re-investment is disrupted?
If Qualcomm loses this trial, and its ability to recoup investments through licensing technology at market prices is severely curtailed, Qualcomm will undeniably have to reduce investment in risky new technologies. Remember that 5G is still in its infancy, and the industry still has a long way to go to achieve its promise of changing the world. As articulated in testimonies in the trial, it is not just the investment that matters; Qualcomm’s vision, brain trust, and execution will also be severely hampered. Damage to Qualcomm will create a big void that no other American company may be able to fill, and any public company would be faced with the same challenge of not being able to recoup its investments with fair returns. There are not many companies in the U.S. that have the expertise, and fewer still, the efficient horizontal business model of Qualcomm, as made amply clear by Bain’s analysis.
China’s premier technology provider, Huawei, would be more than happy to fill this void, with tacit support from the Chinese government. Unlike publicly traded American companies, Huawei enjoys freedom from worries about access to capital for investment, and it is not particularly worried about returning a profit to investors. Remember that advanced information technology is among the top “Made in China 2025” goals set out by the Chinese government. Capitalizing on its current momentum, Huawei would willingly take the world’s R&D crown. And the FTC would unwittingly be handing over the tiara on a silver platter.
The irony is that other parts of the U.S. government, for example, the U.S. Department of Justice, are busy pressuring other governments to keep Huawei at bay over security concerns. The DOJ has even criminally charged Huawei with IP violations and other offenses. Yet the FTC is upholding Huawei as its key, credible witness in undermining Qualcomm, the crown jewel of U.S. innovation. What could you call this travesty? The tragedy of democracy, the lethargy of bureaucracy? No matter what you call it, this is indeed a national disgrace.
It has been more than a month since arguments rested in the FTC vs. Qualcomm case. Every passing day increases the anxiety of people on both sides of the issue. The media is rife with rumors, leaks, and loud calls for the U.S. Government to intervene for national security reasons and take CFIUS-like action.
FTC vs. Qualcomm might seem like any other antitrust case, but in reality the outcome could potentially jeopardize U.S. national security. Qualcomm is the undisputed leader in technologies and R&D that power cellular systems such as 3G, 4G and now 5G. Telecommunication networks are the plumbing that connects the country, and cellular technology is its brain. Any country that wants to control its destiny should own that technology, or at the very least, have significant influence in steering the evolution of its capabilities. If the FTC case seriously damages Qualcomm, China’s Huawei will claim its place and be the global champion of cellular technology.
But, you might ask, hasn’t the government already addressed this issue by banning Huawei in the U.S.? Well, that would be akin to shutting off one faucet in a house while water is free to flow through all of the others. There is much more to cellular technology than just the network infrastructure. Let me explain.
What it takes to be a leader in cellular technology
To be a leader in cellular technology, one needs deep, end-to-end system expertise. One needs years of experience designing new wireless systems, standardizing them, building and enabling a large ecosystem to commercialize them, and continuously evolving them after they launch. Very few companies possess such capabilities; most specialize in one or a few specific areas. For example, companies like Apple focus on devices, while others like Ericsson and Nokia focus on network infrastructure.
The leading companies that have complete systems expertise are Qualcomm and Huawei. (Of course, there is also Samsung; I will discuss that in a later article.) Let’s take a closer look at these leaders, starting with Huawei. The rise of Huawei is worthy of a business school case study. It has meticulously built its businesses, allegedly with strong financial and bureaucratic support from the Chinese Government. Huawei realized the importance of cellular technology and standardization and started very early, in the 2G days. It initially focused on infrastructure products, then strategically expanded into smartphones, and subsequently developed its own platforms for the modem, application processor, and neural processor, and reportedly even its own operating system, among other key technologies. Huawei owns virtually all the key technologies in the cellular value chain and is also a force to be reckoned with in 5G standardization. No wonder Huawei is considered the crown jewel and a role model for the Chinese government’s global technology ambitions.
On the other side is Qualcomm, which to uninformed eyes might look like any other chipset supplier that can easily be dispensed with and replaced. However, upon closer inspection, one realizes that it is a systems engineering company with deep and unmatched end-to-end wireless competence. Qualcomm gained valuable experience leading the successful commercialization of 2G, 3G, and 4G. The intensity with which the company almost single-handedly drove the acceleration of 5G has clearly shown its capabilities. For 5G, Qualcomm co-developed the full system architecture and design from the ground up, including fundamental technologies and algorithms. Qualcomm’s R&D teams also built complete prototype systems to develop, test, and perfect the technologies that the company contributed to 3GPP to define and standardize 5G. Because of its unwavering focus on engineering and technology instead of glitzy consumer marketing and branding, Qualcomm isn’t a household name, unlike many of its competitors.
Some might then ask: why only Qualcomm? Why can’t other U.S. giants, which are much larger and have greater financial wherewithal, take on Huawei? When it comes to the mobile industry, other than Qualcomm, there might only be two other companies that could come close: Apple and Intel. Let’s look at them more closely.
Although Apple is the profit leader in smartphones, reportedly raking in almost 80% of all mobile industry profits, it is pretty thin on the cellular technology front. Instead, its strategy has been to optimize existing technologies and bring them into its vertically integrated devices and closed ecosystem. Apple is indeed more focused on developing proprietary technologies that improve the user experience and increase the appeal of its devices. Despite being a dominant smartphone player since the 3G days, Apple hasn’t brought any groundbreaking innovations to the cellular ecosystem or cellular standards. The company is never on the leading edge of cellular technology adoption either. Specifically, with 5G, it is more than a year behind almost every other major smartphone OEM, including smaller players such as Xiaomi, Vivo, and Oppo, and far behind rivals Samsung and Huawei. Short of using its bounty of more than $200 billion to buy another wireless technology leader (which could run into serious antitrust scrutiny), Apple would find it very hard, if not impossible, to compete with Huawei in the 5G+ technology race. Even if it developed the necessary competence, Apple’s vertical integration strategy would likely lead it to keep all IP to itself and not license it to others. I really don’t see the company making a U-turn and becoming the cellular technology torchbearer for the country.
Then there’s Intel, which has ruled the PC industry for many decades. Perhaps because of its apathy toward the cellular industry in its early days (Intel sold the division that built processors for early smartphones to Marvell), the company has never succeeded in becoming a force to be reckoned with in wireless. Intel’s heavy bet on WiMAX didn’t pan out, instead putting the company years behind in LTE. Even after buying Infineon, a strong modem player of yesteryear, the company still seems to be struggling in wireless. Intel did score a major victory last year by claiming 100% of iPhone modem share, albeit while only offering the performance of Qualcomm’s previous generation of modems. To date, Intel’s 5G wireless story is not promising either. It seems to be almost one year and two generations behind its peers. Apple’s recent aggressive stance in growing its own modem competence doesn’t bode well for Intel either. Also, I have lots of doubts about Intel’s end-to-end system capabilities. As a result, I believe Intel is in no position to compete with Huawei.
The bottom line is, Qualcomm is the only safe bet for the U.S. to maintain its edge in 5G and beyond.
What happens if Qualcomm is weakened by an adverse FTC trial ruling?
Qualcomm’s (and the U.S.’s) fate is hanging in the balance, pending the outcome of the FTC Trial. One might wonder what would happen if Qualcomm were to lose this case. Qualcomm’s licensing business, which generates the bulk (2/3) of its profits, might be seriously impacted. Without going into hypothetical scenarios, one thing would be certain: the company’s ability to invest in fundamental cellular technology development would be severely curtailed. Its virtuous cycle of technology development and plowing profits back into future technology R&D would come to a screeching halt. U.S. dominance of cellular technology would likely rapidly decline, and eventually end. With strong market presence and the Chinese Government’s backing, Huawei would be virtually unstoppable and would exert significant influence on the definition of future of cellular technologies… and it’s doubtful that it would have the U.S.’s interests and needs at heart.
Most affected would be smaller OEMs. Without substantial resources, or access to cutting-edge technology IP and advanced, high-performance platforms from Qualcomm, they would not be able to compete in the premium tier against vertical players like Apple, Huawei, and Samsung. The premium smartphone market in the U.S. would become an even greater duopoly (Apple and Samsung) and oligopoly outside the U.S. (the former two plus Huawei). It’s no wonder that both Apple and Huawei are strong supporters of (and collaborators with) the FTC’s case.
In the end, the real losers will be consumers, who will have no choice but to bend to the whims of these increasingly powerful vertical players… vendors that have already shown a strong affinity for increasing smartphone prices.
So, for the U.S. government, the time to act is now. I hope that saner instincts will prevail, resulting in actions that will protect, preserve, and propel U.S. technology, innovation, and the country’s vital communication infrastructure.
While the final decision in the FTC vs. Qualcomm case has been pending for the last two months, new developments have put the very premise of the FTC’s case in question. The details revealed during the Apple vs. Qualcomm trial and the ensuing settlement are making the pillars of the FTC case crumble. Everybody is eagerly awaiting the FTC’s next move and wondering how all of this will affect Judge Koh’s final decision, if she eventually has to give one.
One might ask, “What is the relevance of the Apple vs. Qualcomm litigation to the FTC case?” Well, Apple was one of the key witnesses and a major force behind the FTC case. The underlying principles, claims, and counterclaims are the same in both, so much so that Apple’s main arguments presented during its case with Qualcomm were almost verbatim what was put forward in the FTC trial. So, the two cases are undeniably intertwined, and the result of one will affect the other.
FTC’s claims are in serious jeopardy
At a very high level, the majority of the FTC’s allegations can be combined into three claims:
- Qualcomm’s licensing practices are not compliant with FRAND (Fair, Reasonable, and Non-Discriminatory) terms, and that has harmed the cellular industry, including Apple
- Licensing at the device level is not justified
- Qualcomm’s alleged market power, combined with its licensing policies, has harmed competitors such as Intel
Let’s evaluate the merits of each of these claims, especially in the wake of the settlement and the new information it has brought to light.
Apple was one of the strongest forces behind the FTC’s case against Qualcomm. The documents revealed during the Apple vs. Qualcomm case show that the ultimate reason behind Apple’s litigation (including the FTC case) was to reduce its royalty cost; there was no real harm. Even during the trial, the FTC failed to produce any concrete evidence of harm to the industry caused by Qualcomm’s licensing practices. Now, Apple signing a long-term licensing contract as part of the settlement clearly shows that Qualcomm’s licensing practices are indeed fair and market-driven. Furthermore, the more than one hundred other licensing contracts Qualcomm has signed with OEMs, including majors such as Samsung and LG, prove this point as well. All of this debunks the FTC’s first claim.
As became very apparent during the trial, licensing at the device level is a decades-old industry norm. All Intellectual Property (IP) holders practice it because it is the most efficient and practical way to capture the value of IP. Stipulating a cap on the maximum device price used for license fee calculations makes the practice even more reasonable and fair. As disclosed during the trial, Qualcomm’s licensing fees are up to 5 percent of the wholesale price of the phone, with a device price cap of $400. This license includes a portfolio of more than 130,000 Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs). For reference, in another related case between Apple and Qualcomm in San Diego, the jury awarded Qualcomm $1.41 per device for just three non-SEPs. That is a far cry from the $7.50 per iPhone that Apple was paying before the dispute started. So again, the FTC’s second claim has no merit. On a side note, if you would like to know more about patents and licensing, check out my explainer articles here: Part-1 and Part-2.
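To make the effect of the price cap concrete, here is a small illustrative calculation. The 5 percent rate and the $400 cap are the figures disclosed at trial; the device prices below are hypothetical examples of my own.

    # Illustrative per-device royalty math: the rate applies to the wholesale
    # price, but the price base is capped at $400 (figures from the trial).
    RATE = 0.05
    PRICE_CAP = 400.0

    def royalty(wholesale_price: float) -> float:
        """License fee per device: rate times the capped wholesale price."""
        return RATE * min(wholesale_price, PRICE_CAP)

    for price in (150, 400, 1000):
        print(f"${price} device -> ${royalty(price):.2f} per-device fee")
    # A $1,000 flagship owes at most 5% x $400 = $20, not 5% x $1,000 = $50.

Notice how the cap decouples the fee from flagship price inflation, which is why the per-device numbers discussed at trial stay in the tens of dollars.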
There was no dearth of drama on the day Apple and Qualcomm settled the dispute. The settlement news broke while the opening statements were still being presented in court. Qualcomm’s stock shot up by record levels immediately after the settlement. Mere hours after the settlement news, Intel announced its decision to exit the 5G smartphone modem business. Some might think that Intel’s decision to quit proves the FTC’s claim of harm to competitors. However, closer scrutiny reveals a different story.
By Intel’s own admission, the reason for its decision was Apple signing a multiyear modem supply deal with Qualcomm as part of the settlement. As publicly discussed in many forums, the most likely reason for Apple to ditch Intel in favor of Qualcomm was the realization that Intel wouldn’t be able to meet Apple’s hefty 5G modem needs. This indeed is a major miss by Intel, considering that it is currently the sole modem supplier for Apple’s latest iPhone. Its inability to deliver the right modem solution for such a large and almost guaranteed opportunity clearly shows a profound and fundamental flaw in Intel’s operations and execution strategy. By all counts, 5G was a level playing field for Intel and everybody else in the race, including Qualcomm, and Intel failed to deliver. In that light, it is reasonable to argue that the same was true with 4G LTE. That means whatever harm the FTC claimed Intel suffered in 4G LTE was because of Intel’s own inability to deliver, not because of Qualcomm’s alleged market power or licensing policies. This shows that the FTC’s third claim is completely flawed as well.
Who stands to benefit from FTC trial now?
With Apple and Qualcomm settling, and Intel exiting the 5G smartphone modem market and mulling strategic options for its modem business, the question arises: “Who stands to benefit now from the continuation of the FTC case?” The surprising answer is China’s Huawei, which was the FTC’s third collaborator along with Apple and Intel. It is an unfortunate and disgraceful situation when an arm of the US government is directly helping a foreign entity against a US company heralded as the country’s 5G leader. This is even more ironic and embarrassing considering that the US government has virtually banned Huawei for national security reasons!
What could be the possible outcome?
With all of the FTC’s major claims discredited, its case is in serious jeopardy. As Judge Koh noted during the closing stages of the trial, this case is very complex, with a huge amount of evidence to examine. The hurried summary judgment that Judge Koh gave in the early part of the trial, the radical remedy that the FTC is seeking, and the recent developments complicate the case even further.
The FTC didn’t make a strong case to begin with, and it looks even weaker now. That means it is almost impossible for Judge Koh to give a judgment that would permanently alter a cellular IP licensing regimen that has been practiced for decades. In my view, the only viable option for the FTC now is to settle with Qualcomm and save face, especially considering that any other outcome will help Huawei. I am sure Judge Koh would be happy with that outcome as well. Any other decision will surely be challenged in the appellate court and most likely be overturned.
The telecom industry is still digesting the surprising and far-reaching decision by Judge Koh of the U.S. Northern California District Court. The expansive court order is as hard to digest as it is to comprehend. If you thoroughly read it (yes, I have, all 233 pages!), it seems that Judge Koh had already made up her mind long before the trial and hand-picked specific points from testimonies, evidence, and circumstances to suit her narrative. However, the battle rages on: Qualcomm is appealing the decision at the U.S. Ninth Circuit Court of Appeals. Meanwhile, the company is requesting a stay from Judge Koh until the appeal is heard. I think this is a mere formality, as I expect Judge Koh to reject the stay request. If and when that happens, Qualcomm will request a stay from the Ninth Circuit Court. While all of these court proceedings play out over the next few months, if not years, it is important to consider the havoc this decision, and the possible denial of a stay, might cause in the market. It is even more crucial because we are at a critical juncture in the global 5G race, and this decision will affect how different companies, and perhaps more importantly, different countries, progress.
In my previous article, I had briefly touched upon the question “who might benefit from an adverse decision against Qualcomm.” Since that fear has become a reality now, a more detailed discussion and evaluation of some what-if scenarios is in order.
At the very outset, there is no question that Huawei and China are the biggest beneficiaries. With this legal quagmire, the attention of Qualcomm’s executives and many of its engineers may be divided between trying to prevail in the legal fight and making great technology. This distraction gives Huawei (and in turn, China) a leg up, allowing it to strengthen its position in 5G. When you dig a little deeper, you realize that if Qualcomm’s request for a stay is not granted, the situation gets even more dire.
What happens if the stay request is denied?
As I discussed in my previous article, licensing revenues are the lifeblood of Qualcomm’s virtuous cycle of technology development, commercialization, and monetization. Judge Koh’s order threw a monkey wrench into that cycle, exposing almost all of Qualcomm’s licensing contracts to renegotiation risk. Based on news articles, it seems that the recent deals with Apple and Samsung could be safe for some time, but I can’t imagine those two behemoths not trying to use the court’s decision to eke out more concessions from Qualcomm. If you remember, during a separate trial, Qualcomm produced documentary evidence showing how Apple intentionally tried to harm Qualcomm’s licensing business. The bottom line is that every one of Qualcomm’s licensing contracts could be up for grabs. The company’s much-publicized recent licensing spat with LG offers a glimpse of how convoluted and long these renegotiations could get.
Let’s look at the biggest block of the licensing lot: the Chinese OEMs, which bring in a large portion of Qualcomm’s licensing revenue. Just like LG, all of these OEMs buy chipsets from Qualcomm. That means, just as LG is trying to do, they might also ask for chipset-based licensing. But most of them, if not all, license Qualcomm’s full portfolio, including cellular SEPs (Standard Essential Patents), non-cellular SEPs (e.g., Wi-Fi and Bluetooth), and non-SEPs (NEPs). However, the court order applies only to cellular SEPs. Given Judge Koh’s ruling, how would you negotiate a licensing deal that spans all these different kinds of patents? It would seem that the only option would be for Qualcomm and its licensees to examine more than 130,000 patents, one by one, and license them on an a la carte basis. As one can imagine, that would be a herculean task. Taking this insanity further, many of these are system-level patents, which means they may cover more than just the modem or any single chip, spanning different parts of the system and software. For example, consider MIMO, an important feature of 4G and 5G: the technology covers not just the modem but also RFICs and antennas, phones, and network equipment. Would patents related to MIMO be licensed based on the pricing of modems, RFICs, antennas, or base stations? Also, different vendors produce these components. So, would all those vendors have to get licenses for cellular SEPs? So many complex questions with few clear answers!
If your head is not yet spinning with the complexity, consider this absurdity: Qualcomm would still be free to license all patents other than cellular SEPs at the device level. This means there could be cases wherein the prices of non-SEPs would be higher than those of SEPs, which at some level defies logic! The point is, licensing could get so complex that it might take years to agree on how to structure meaningful contracts. As a side note, look for my next article on the range of absurdities this court order is causing. Also, if you would like to know more about cellular licensing, please read my articles here, here and here.
The real threat of 5G investments getting strangled
During the uncertainty of lengthy negotiations and the complexity of restructuring contracts, it is highly likely that many OEMs would be tempted to stop paying royalties. This would be similar to what Huawei is doing during its negotiations with Qualcomm now, and to what Apple did until it settled with Qualcomm back in March 2019. Such a large-scale disruption could mean that the revenue stream that feeds Qualcomm’s R&D engine would run dry. The direct casualty of such an outcome would be the development of 5G, and America’s leadership in it. As you might know, we are only in the early stages of 5G. A lot of what 5G promises is still under development. All of that requires billions of dollars of investment and multiple years of sustained development, with a long lead time for revenue generation. Any interruption to Qualcomm’s licensing revenue could directly impact the company’s ability to create those inventions and advance the development of 5G. The world would be at the mercy of China for the future of 5G, and for delivering technologies for Industry 4.0 and beyond.
Handing a powerful lever to China in the trade war
The fact that a large portion of Qualcomm’s licensing revenue comes from Chinese OEMs has huge significance when the United States is in a bitter trade war with China. As is evident from recent developments, both countries will use whatever leverage they have to get the upper hand. In such a case, this considerable revenue stream of a strategic American company will surely be weaponized and used as a bargaining chip by China in the broader trade negotiations. It is no secret that the Chinese government wields considerable influence over these OEMs. If you think about it, this is a potent tool, not only for trade negotiations but also for severely hurting America’s prospects for 5G leadership.
Whose interest is FTC fighting for?
It is abundantly clear that the real and biggest beneficiaries of the FTC’s and Judge Koh’s actions are neither the American people nor American companies, but, ironically, China and Chinese companies. And all this to the detriment of American 5G leadership, and at the expense of an American technology company that has been hailed as a 5G leader by the U.S. Government itself. This is exactly why the U.S. Department of Justice voluntarily tried to impress upon Judge Koh that she be cognizant of the implications of her decision for America’s national interests.
On a closing note, to those who value free markets and fair competition, I would like to point to the recently finalized 5G infrastructure contracts in China. Huawei won the lion’s share of these contracts, clearly showing how the Chinese government protects its companies. Who is there to protect American companies? Far from protecting its own national interests, a U.S. government agency is effectively fighting tooth and nail to hurt a legitimate American company and help the Chinese. What an irony!
Last week’s remarkable decision by a three-judge panel of the United States Court of Appeals for the Ninth Circuit (appellate court) finally brings some common sense into FTC’s bizarre antitrust case against Qualcomm. The appellate court granted Qualcomm’s request to stay the ruling of the United States District Court for the Northern District of California (lower court), which had far-reaching implications for the entire U.S. patent regime.
Side note: If you are new to the subject and would like to understand the background, please read my previous articles here, here, here, here and here.
What did the appellate court say?
The court order must have sounded like music to Qualcomm’s ears. Even they could not have written it better! Don’t be confused by the title of the court order, which says “partial stay”: Qualcomm actually got all of what it requested, and then some. The tone, the language, the arguments, the selection of phrases and words, the precedents cited, the direct denunciation of the lower court’s decision, everything screams a thumping Qualcomm victory.
First, it says that the application of the Sherman Act (antitrust law) to the case is not accurate, as private businesses have discretion over whom they deal with. That means Qualcomm is free to license its Standard Essential Patents (SEPs) to whomever it chooses, effectively negating the lower court’s order of mandatory, exhaustive licensing of SEPs to rival chipmakers.
Second, it acknowledges that there is a stark difference of opinion between the two governmental agencies tasked with enforcement of antitrust laws: FTC and the Department of Justice (DOJ). This is in complete contrast to the lower court’s abject disregard for DOJ’s request to conduct additional briefings before imposing remedies, and to be considerate of the effects of broad and far-reaching remedies that alter market dynamics and jeopardize national security.
Third, it clearly states that the appellate court is satisfied with Qualcomm’s argument that its practice of licensing only to device OEMs and charging royalties at the device level doesn’t violate any antitrust laws. This is again the opposite of one of the key rulings of the lower court. The appellate court even goes on to mention the extraordinary step taken by the sitting FTC commissioner, Maureen K. Ohlhausen, of publicly expressing her dissent from the theory urged in the complaint and adopted by the lower court.
Fourth, it says that it also agrees with Qualcomm’s strong argument that implementing the lower court ruling before the appeal decision would do irreparable harm to its business. This was one of the easiest things to understand for anybody with even a hint of knowledge of the licensing and wireless business. The lower court’s complete disregard for such logical reasoning was appalling to keen observers of this case like me.
Finally, the appellate court concludes that the difference of opinion between FTC and all the other relevant government agencies, including DOJ, the Department of Defense, and the Department of Energy, warrants granting the stay. It further points out that these government agencies have opined that the lower court’s adverse action against Qualcomm threatens national security and “has the effect of harming rather than benefiting consumers.”
If you feel like you have heard these arguments before, you are right. These are the same arguments I put forward in my previous articles here, here, here, here and here.
What’s next?
The biggest kicker in the appellate court’s order is its characterization of the lower court’s ruling as “…a trailblazing application of the antitrust laws or instead an improper excursion beyond the outer limits of the Sherman Act…”
To be sure, the lower courts are supposed to implement the law based on precedent, not be trailblazers!
Further, the appeal hearing is scheduled for Jan 2020, much quicker than the usual timeline. The tone of the appellate court’s order, and the decisive and unambiguous way in which the panel struck down all the major aspects of the lower court’s assertions, strongly suggest that an overturn of its ruling is imminent. The urgency in scheduling the appeal hearing also indicates the importance the appellate court attaches to this case. Qualcomm filed its long opening brief with the court on Aug 24th, 2019.
Final thoughts
This appellate court decision was a long time coming. Actually, the whole trial was a series of bizarre turns of events: from the judge arbitrarily limiting the evidence period to March 2018 and excluding pertinent evidence thereafter, to the strange explanation for summarily discounting the defendant’s in-court live testimony (the judge felt that the witnesses looked “prepared”), to using an extremely narrowly defined potential violation to justify an extremely broad and industry-altering remedy. But fortunately, saner heads have finally prevailed, and justice is being served the right way, albeit delayed. Now all eyes are on the Jan 2020 hearings.
Qualcomm got a reprieve when the United States Court of Appeals for the Ninth Circuit stayed the decision of the United States District Court for the Northern District of California (DC) in its antitrust case. Immediately after the stay, Qualcomm filed its opening brief (175 pages long), which was followed by a flurry of supporting Amicus Briefs (each more than 40 pages) from different companies, the U.S. government, a retired circuit court judge, and groups of experts. While all of them criticized DC’s ruling, two chose to be neutral; all the others were strongly in favor of Qualcomm.
<<Side note: If you would like to know more about Ninth Circuit court ruling, and the complete FTC vs. Qualcomm saga, check out this article series.>>
Principal arguments
The briefs supporting Qualcomm strongly condemn DC’s ruling. Their arguments can be summed up into three major themes:
- DC either misunderstood or misapplied US antitrust laws, as well as precedent. The proponents claim that Qualcomm’s licensing approach, its “No license, no chips” policy, and its alleged “higher licensing prices” don’t violate the Sherman Act. Also, Qualcomm’s decision to license only to device OEMs is not against the Fair, Reasonable, and Non-Discriminatory (FRAND) principles of Standards Development Organizations (SDOs). Additionally, they claim the FTC and the court did not show apparent consumer harm.
- The remedies imposed by DC are very broad and far-reaching. The ruling applies to every aspect of Qualcomm’s licensing business, including all of its global contracts; in many cases, those are even outside the purview of FTC or DC. For example, contracts with Chinese OEMs for devices to be sold only in China are beyond FTC’s authority.
- The ruling creates widespread disruption to the decades-old licensing regime that has proven to encourage innovation, be efficient, and be easy to implement. If licensing based on the Smallest Salable Patent-Practicing Unit (SSPPU) becomes mandatory, that will put almost every existing licensing deal that doesn’t use SSPPU up for renegotiation. The proponents claim that because many patents span multiple functional units, DC’s ruling will create an unfathomable mess of who licenses whom, at what rate, and how.
The focus of each Amicus Brief
All the briefs came with a heavy dose of related precedent. Since the supporters are from different fields, each of them stressed different parts of the argument, as highlighted in the sections below:
U.S. Department of Justice (DoJ):
One of DoJ’s main points is that an alleged “unreasonably high royalty” is not anti-competitive; on the contrary, it quotes precedent that high royalties enable “risk-taking that produces innovation and economic growth.”
DoJ also emphasizes that a Sherman Act violation requires “harm to competition” and not just “harm to competitors,” as alleged by DC. DoJ ridicules DC’s “misunderstanding” of antitrust law, and also reminds it of CFIUS’s action to block the takeover of Qualcomm for national security reasons.
Judge Paul R. Michel (Ret.) – Served on the Circuit Court for more than 20 years
Judge Michel states that SSPPU is a mere tool to avoid jury confusion. He argues that since this was a bench trial, and because of the sheer number of complex patents (~140,000) that cover multiple functional units, the use of SSPPU does not make any sense.
The judge also points to the disastrous outcomes when SSPPU was mandatorily applied to the IEEE standards 802.11ah and 802.11ai, which were ultimately rejected by ANSI (American National Standards Institute).
A group of 20 antitrust and patent law professors and experts
These experts, including a retired chief judge of the federal circuit court of appeals (Randall R. Rader), who came up with the SSPPU concept, point out that antitrust law needs actual proof of harm (e.g., economic analysis), not just “per se” or “theory-driven” arguments. They condemn DC for using the discredited theory of Prof. Shapiro (without using his name) and simplistic documentary evidence, such as emails, instead of concrete economic evidence, to establish anti-competitive conduct.
They draw an interesting parallel between the decade-long antitrust crusade against IBM, launched in the closing days of the Johnson administration, and the one against Qualcomm, filed during the last days of the Obama administration. They point out that DoJ learned its lesson about the ill effects of antitrust overreach, which nearly pushed IBM, an American technology jewel, into bankruptcy, and warn against repeating it.
International Center for Law & Economics (ICLE)
ICLE, a group that includes many antitrust and economics experts, opines that this “case is a prime—and potentially disastrous—example of how the unwarranted reliance on inadequate inferences of anticompetitive effect lead to judicial outcomes utterly at odds with Supreme Court precedent.”
Further, ICLE quotes one of the previous relevant judgments, which seems to uproot the crux of DC’s argument: “The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system.”
Cause of Action Institute (CoA)
CoA, a non-partisan government oversight group, comes down rather heavily on both DC and FTC. It reiterates the words of a sitting FTC commissioner who called this trial “a product of judicial alchemy, which is both bad law and bad public policy.”
Further, CoA asserts that FTC exceeded its statutory authority in at least four ways, including the reasons that DC’s “injunction violates due process and is unenforceable for vagueness.”
Alliance of U.S. Startups & Inventors for Jobs (USIJ)
USIJ states that the cellular industry is one of the most competitive, dynamic, and thriving markets, and that there is no need for regulatory or judicial interference. Instead, it suggests that FRAND complaints and other concerns can be better resolved using contract and patent law rather than antitrust law; the latter, it says, would be akin to using a hammer instead of a scalpel.
It warns that DC’s ruling will stop companies from participating in standardization, which would itself be anticompetitive and harm consumers.
InterDigital
InterDigital emphasizes that antitrust law shouldn’t trump innovation, and it points out how the law is being misused to make inventors “accept sub-FRAND royalties.” It also cautions that antitrust overreach will weaken innovative US companies and let their leadership be taken over by foreign companies supported by their governments, which may not have the US’s best interests at heart.
InterDigital doesn’t specifically mention whether it supports Qualcomm or not.
Dolby
Dolby comes out strongly in favor of preserving patent holders’ flexibility in deciding where in the value chain they license. It insists that this allows innovators to maximize returns on their huge investments and fairly compensates them for the risks.
Dolby faults DC for misinterpreting the FRAND commitments to SDOs and suggests that there are no mandatory requirements to license at any specific level or to any specific providers. It also highlights the confusion and havoc it would create if the well-established end-product-based licensing, practiced across many industries, were altered in any way.
Dolby only asks for the reversal of DC’s summary judgment instructing Qualcomm to license to rival chipmakers.
Nokia
Nokia points out the difficulties in licensing at the component level, how patents cover more than a single functional unit, and how SSPPU is not applicable at all. While highlighting these inconsistencies in DC’s decision, it remains neutral.
In closing
There is a striking commonality between what Qualcomm has claimed in its brief and all the Amicus Briefs coming from this diverse set of experts, and in some cases competitors such as InterDigital. That suggests that there indeed is a strong case to be made against DC’s ruling. As I pointed out in my earlier article, the appellate court seems to agree with many of these assertions, as can be gleaned from the stay ruling. I would be highly surprised if the appellate court doesn’t overturn many of the draconian rulings of DC.
Also, in response to Qualcomm’s brief, FTC is expected to file its own brief sometime in October or November, and any Amicus Briefs supporting it will follow soon after. Come back to my column here for the latest developments and what they mean.
The stage is set for the Feb 13th, 2020, hearing of the FTC vs. Qualcomm antitrust case at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). In preparation, FTC, Qualcomm, and many interested parties have filed their briefs for and against the decision by the United States District Court for the Northern District of California (lower court).
In the briefs, FTC’s subtle change in tactics caught my eye. They seem to have changed their “hero” argument. They are now trying to make Qualcomm’s alleged breach of FRAND (Fair, Reasonable, and Non-Discriminatory) commitments to Standard Setting Organizations (SSOs) their main argument, while treading lightly on their earlier key, albeit discredited, “surcharge on competitors” theory. Is it a sign of FTC losing confidence in its case? Also, their FRAND breach argument seems to be on shaky ground.
<<Side Note: If you would like to understand the history of this case, please refer to my earlier articles on the subject>>
I spent many hours meticulously reading through all the briefs (~1500 pages). They are complex, with lots of legal jargon, illustrations, and citations. Here is a high-level summary of the arguments and my opinions on their effectiveness.
The hypothetical “surcharge on competitors” argument
FTC and its supporters are still relying on the theory put forward by Prof. Carl Shapiro, and they have provided torturous examples and illustrations to back it. However, this theory was rejected by the US Court of Appeals for the District of Columbia Circuit in a separate case, United States vs. AT&T. The court’s rejection, as stated, was based on the evidence of actual market performance. Interestingly, the two cases have many similarities. Just like in the AT&T case, FTC’s arguments are based only on theory, without any empirical study of actual market conditions. Moreover, developments in the market completely debunk Prof. Shapiro’s theory. Unfortunately, those developments could not be included in the trial as evidence, because they happened outside the discovery period.
According to the theory, Qualcomm allegedly abused its monopoly power to create an imaginary surcharge on competitors, making their chipsets more expensive. In reality, around 2016, Apple, which had been exclusively using Qualcomm’s chipsets, also started using Intel’s chipsets. This fact virtually nullifies the monopoly power allegation. To a large extent, it also disproves the claim that the alleged imaginary surcharge was disincentivizing competitors. Alas! None of this mattered in the trial because of the stringent discovery timeline.
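To make the alleged mechanism concrete, here is a minimal worked sketch of the surcharge arithmetic as I understand the theory; the symbols and numbers below are my own hypothetical illustration and do not come from the case record or from Prof. Shapiro’s actual model:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical illustration of the alleged "surcharge" mechanism.
% r_F: FRAND royalty per device; s: alleged above-FRAND surcharge;
% p_Q, p_R: chip prices of Qualcomm and a rival, respectively.
% The OEM owes the royalty r_F + s regardless of whose chip it buys:
\[
\underbrace{p_Q + (r_F + s)}_{\text{all-in cost, Qualcomm chip}}
\qquad \text{vs.} \qquad
\underbrace{p_R + (r_F + s)}_{\text{all-in cost, rival chip}}
\]
% The theory posits that Qualcomm can return s to OEMs that buy its
% chips (e.g., through chip pricing or incentives), so the effective
% comparison allegedly becomes:
\[
p_Q + r_F \qquad \text{vs.} \qquad p_R + r_F + s
\]
% Example: with p_Q = p_R = USD 30, r_F = USD 10, and s = USD 5, the
% rival's chip ends up looking USD 5 costlier at identical chip prices.
\end{document}

In this framing, the entire dispute boils down to whether such a rebate-funded gap s ever existed in practice, and whether it harmed competition itself rather than merely individual competitors.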
FTC claims that this imaginary surcharge reduced competitors’ profits and hampered their investment in R&D. That seems like a ridiculous argument when you consider that those competitors are behemoths like Intel, and the OEMs are giants like Apple. Looking at all these contradictions, it is clear why FTC is not pushing this argument as hard as it did in the lower court.
Is “harm to competitors” the same as “harm to the competitive process?”
To claim antitrust law violations, prosecutors must prove harm to the competitive process. FTC is arguing that Intel being late with CDMA and LTE chipsets, and players such as Broadcom and ST-Ericsson exiting the market, prove harm to competition. Many experts, including the US Department of Justice (DoJ), argue that such instances, as well as companies making less profit, show harm to competitors, but not necessarily to the competitive process.
During the trial in the lower court, ample evidence was presented to explain the reasons behind the problems competitors faced, none of them instigated by Qualcomm. For example, documents presented by Intel’s strategy consultant Bain and Company attributed Intel’s delay to faulty execution; an executive from ST-Ericsson opined that they couldn’t execute fast enough to keep up with Qualcomm and rapidly lost market share, which resulted in their exit.
The reasons for competitors not faring well in CDMA and being late to LTE were pretty clear to keen industry observers like me. Regarding CDMA, not many chipset vendors were interested in that market, as they thought the opportunity was small and fast diminishing. There were only a couple of large CDMA operators (circa 2006), and with LTE on the horizon, vendors thought CDMA would quickly disappear. Hence, they never invested in it. Much to their chagrin, CDMA thrived for many years, allowing Qualcomm to enjoy a monopoly. Ultimately, Intel acquired a small vendor, Via Telecom, in 2015 to get CDMA expertise. On the LTE front, nobody foresaw the exponential growth of LTE smartphones. Qualcomm, because of its early investment and cellular standards leadership in LTE, surged ahead, leaving others in perpetual catch-up mode. For example, even after the LTE market stabilized, Qualcomm’s chipsets had superior performance.
Alleged practice of “license for chips” policy
FTC claims that it has factually proven Qualcomm’s alleged “license for chips” policy, under which Qualcomm would only sell its highly coveted chips if the OEMs signed a license agreement. Qualcomm disagrees. In my view, FTC’s evidence is pretty scant and unconvincing. It includes a few emails with some text that alludes to such an intention, although in many of these emails the main topic of discussion seems to be something unrelated. There were a couple of testimonies from Qualcomm’s OEMs mentioning how they “felt” the overhang of this policy during negotiations, but they didn’t have any tangible evidence. There was only one concrete instance: an email with a veiled threat. But the evidence presented in response showed that Qualcomm’s top management swiftly dealt with it and condemned any such practice by its lower ranks.
Another of FTC’s claims concerns an agreement between Qualcomm and Apple, through which Qualcomm paid Apple for a commitment to use its chipsets in a majority of Apple’s devices. FTC alleges that this amounts to Qualcomm indirectly subsidizing licensing fees, which violates antitrust law. This is also part of the imaginary surcharge-on-competitors argument. Qualcomm claims that, as stated in the contract, the payment was to compensate Apple for the expenses it would incur in modifying its designs to incorporate Qualcomm chipsets, and was a traditional volume discount. When the contract was signed, Apple was already the market leader with multiple successful iPhone models and was using a different vendor’s chipset. That would indicate Qualcomm didn’t possess any monopoly power over Apple. The contract and the payment were revocable, and Apple ultimately did revoke them. So, it is questionable whether the payment can be treated as a subsidy.
Is a FRAND commitment a “duty to deal”?
Now to the new “hero” argument. FTC claims that Qualcomm’s FRAND commitment to the US-based SSOs binds it to license its Standard Essential Patents (SEPs) to rival chip vendors (aka duty to deal). The SSOs in question are ATIS (Alliance for Telecommunications Industry Solutions) and TIA (Telecommunications Industry Association). The argument is that Qualcomm’s decision not to license to rival chipmakers is a violation of antitrust law. Many of the third parties on FTC’s side overwhelmingly support this argument as well, for obvious reasons. On the surface, this seems like a simple and compelling argument. But it has multiple facets.
<<Side Note: If you would like to understand SEP and the patents process, refer to this article series>>
First, do these commitments mean holders have to license the patents, or is it enough to provide access to them? Second, does a FRAND violation, if true, amount to an antitrust violation, which is usually a much higher bar? Third, and more interesting: are the patents practiced by the chipsets or by the end devices (e.g., smartphones)? If the latter, then licensing, and any violation, occurs only at the device level, so there is no real need to license to chipset vendors. Fourth, consider the policies and practices of the biggest SSO, ETSI (European Telecommunications Standards Institute). ETSI’s policies are considered the gold standard for SSOs. Interestingly, in its decades of history, ETSI has never compelled its members to license to rival chipset vendors or at the chip/component level. Many of the current SEP holders, such as Nokia, Ericsson, and others, strongly supported this approach during the trial. I have merely scratched the surface of this argument. Since this is now FTC’s main argument, it indeed needs close scrutiny, which I will do in my next article.
If you have been following this case and feel that you have heard these arguments before, you are right! Both sides made these arguments in the lower court and are still sticking to them, except for FTC’s subtle change. It will be interesting to see how the Ninth Circuit considers these arguments. I will be in court to witness and report it. Make sure to follow my updates on Twitter @MyTechMusings.
As promised in my previous article, here is a detailed discussion of FTC’s FRAND (Fair, Reasonable, and Non-Discriminatory) argument in its antitrust case against Qualcomm. FTC argues that Qualcomm’s agreement to the FRAND requirements of Standards Setting Organizations (SSOs) binds it to license patents to all applicants, and that Qualcomm declining to license its Standard Essential Patents (SEPs) to rival chipset vendors amounts to an antitrust violation. The FRAND requirements are more nuanced than they appear to an untrained eye. I will dig deeper and try to decipher the arguments, as well as examine the industry’s practices over more than two decades.
<<Side Note: If you would like to know the full history of this case, please refer to my article series. >>
What does FRAND commitment to SSOs mean?
The SSOs in question here are TIA (Telecommunications Industry Association), which developed CDMA standards, and ATIS (The Alliance for Telecommunications Industry Solutions), which developed LTE standards. Both organizations require their members to mandatorily sign the IPR policy document, which includes the FRAND requirements.
TIA has a 24-page IPR Policy document. The most relevant portions to this case are on pages 8 and 9:
(2) (b) A license under any Essential Patent(s), the license rights which are held by the undersigned Patent Holder, will be made available to all applicants under terms and conditions that are reasonable and non-discriminatory, which may include monetary compensation, and only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard
The first part of this section is pretty straightforward. But the final clause, “only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard,” is what is at issue here. In layman’s terms, this means the patent holder agrees to give a license for the practice of the standard; in other words, licenses to the applicants whose products practice the standard. Qualcomm argues that devices, and not chipsets, practice the standards. They point to the actual language of the standards as evidence. It is customary for the standards to state, “UE (User Equipment, aka device) shall do this,” or “Base station shall do that,” etc. The standards never state, “Chipset shall do this or that.” Considering that, Qualcomm argues, it is not required to license SEPs to chipset vendors, but only to device vendors. To that effect, Qualcomm also points out that it has never sued any chipset vendor for patent infringement.
Now, let’s look at the ATIS IPR policy, which is governed by the Patent Policy as adopted by ATIS and as set forth in the “Operating Procedures for ATIS Forums and Committees,” a 26-page document. The most relevant portions are on pages 10 and 11:
“…Statement from patent holder
Prior to approval of such a proposed ANS, ATIS shall receive from the identified party or a party authorized to make assurances on its behalf, in written or electronic form (b) assurance that a license to such essential patent claim(s) will be made available to applicants desiring to utilize the license for the purpose of implementing the standard. (i) under reasonable terms and conditions that are demonstrably free of any unfair discrimination…”
Again, pointing to the phrase “implementing the standard,” Qualcomm argues that, as stated in the policy, chipsets don’t implement the standard; the devices do. So, there is no need for it to license to chipset vendors!
Is a violation of an SSO commitment a violation of US antitrust law?
Even if you consider that the SSO IPR policies were violated, the question becomes: does that amount to a violation of US antitrust law? One argument is that the alleged FRAND violation is a commercial matter and can easily be dealt with through contract and patent law, instead of policy tools such as antitrust law. In his Amicus Brief in support of Qualcomm, Hon. Judge Paul R. Michel (Ret.) of the US circuit court gave a compelling analogy: “as a general proposition, the hammer of antitrust law is not needed to resolve FRAND disputes when more precise scalpels of contract and patent law are effective.”
Even the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) panel, while granting Qualcomm’s request for a stay, ridiculed the lower court’s ruling as “…a trailblazing application of the antitrust laws or…an improper excursion beyond the outer limits of the Sherman Act…”
Precedence and other considerations
3GPP (3rd Generation Partnership Project), the cellular specifications group, prefers that all SSOs across the world have consistent IPR policies. ETSI (European Telecommunications Standards Institute) is one of the major players among the seven SSOs that are the organizational partners of 3GPP. There has been much discussion at ETSI regarding the issue of component-level licensing, such as licensing to chipset vendors. But ETSI has never stated that it supports or requires its members to offer component-level licensing. So, the lower court’s decision creates inconsistency between ATIS, ETSI, and other SSOs, whose impact goes far beyond this case.
<<Side Note: If you would like to learn more about 3GPP’s organizational structure and operational procedures, please refer to this article series.>>
More than two decades of cellular patent licensing history prove that device-level licensing works smoothly and efficiently. Although the discussions related to this case are mostly about modem chipsets, typical devices have hundreds of different components. If licensing were brought down to the component level, it would be a logistical and legal nightmare for OEMs to understand and negotiate separate licenses with all those vendors, as I explained in this article. Also, probably every existing cellular IPR contract would have to be rewritten.
Final thoughts
So far, there have been only a few minor cases in the telecom industry regarding the violation of FRAND commitments. FTC’s case against Qualcomm is the first major case where FRAND’s relevance to antitrust law is being tested. The decision of this trial will be a defining moment in the “component vs. device-level” licensing debate. Qualcomm seems to have strong arguments, and the earlier Ninth Circuit panel agreed with most of them. But the appeals hearing now has a new panel of judges, which brings a new set of uncertainties to the case. As promised before, I will be there in person to witness the appeals hearing of this historic case. Be sure to follow my Twitter feed @MyTechMusings for the latest.
The title best describes the current situation after the recent hearing in the more-than-yearlong saga between FTC and Qualcomm. On Feb 13th, 2020, a three-judge panel of the US Court of Appeals for the Ninth Circuit (Ninth Circuit) heard Qualcomm’s appeal to reverse the ruling of the US District Court for the Northern District of California (lower court). During the hearing, the panel asked FTC a lot of skeptical questions regarding its position, arguments, and precedents, probed Qualcomm’s stance, and almost snubbed the US Department of Justice (DoJ). Although the judges appeared confused in the beginning, they seemed to have gotten the main points toward the end. Based on the verbal and non-verbal communications of the judges, Qualcomm definitely had a more positive day than FTC.
<<Side note: If you would like to understand the history of the case, please refer to the article series “FTC vs. Qualcomm Antitrust Trial”>>
I was fortunate enough to be in the court to witness the hearing. The appeals panel consisted of three judges: Judge Callahan, Judge Rawlinson, and Judge Murphy III. Being in front of them, I was able to observe lots of their non-verbal cues, such as subtle changes in mood and facial expressions, inaudible grunts, and how keenly they were listening to each side’s arguments, which many people watching online might have missed.
With only about 50 minutes allocated to the hearing, both parties focused only on the main points. What caught my eye was that during Qualcomm’s arguments, the judges were more in listening mode, only prodding Qualcomm for clarifications. But during FTC’s time, they were more skeptical, often questioning and challenging FTC counsel’s assertions, and mostly in “so what” mode. This is unlike other appeals cases, where usually the appellants (Qualcomm in this case) face more scrutiny.
<<Side note: Please refer to my articles here and here for more details on the arguments at play in the case>>
Duty to deal
FTC massively hurt its case by conceding that Judge Koh had erred in citing the Aspen Skiing case as the precedent for “duty to deal,” i.e., the ruling that Qualcomm has a duty to license its patents to competitors. Judge Callahan even went to the extent of saying that the house of cards, i.e., FTC’s case, starts to fall if the Aspen card is pulled out. Qualcomm obviously had a field day with it, quoting the lower court’s argument that “duty to deal” was one leg of the three-legged stool, and that with it gone, the case couldn’t stand (literally and figuratively). FTC’s alternate precedents of the Caldera and United Shoe cases, and its argument about Qualcomm breaching FRAND commitments to Standards Setting Organizations (SSOs), didn’t seem to impress the panel. So, I am positive that this ruling will be reversed.
“No license, no chips” policy
This argument confused the heck out of the judges. Multiple times, Judge Callahan asked and confirmed that Qualcomm was not accused of a “no chips, no license” policy, which obviously would be antitrust conduct. She even suggested that Judge Koh of the lower court was probably confused about that as well! In other words, she didn’t think “no license, no chips” was anti-competitive. There was a clear difference of opinion between FTC’s and Qualcomm’s counsels on how OEMs expressed their views on the policy. FTC said that many witnesses from smartphone OEMs had given testimony about paying higher royalties because of the risk of not getting chips. On the other hand, Qualcomm said that there was only one witness, from one OEM, in a non-monopoly market. To my recollection from attending those hearings, OEMs mostly expressed that they felt such a policy existed, but never showed any evidence of Qualcomm practicing it. So, obviously, the panel will have to look at the actual testimonies to make its determination. The discussion was not about whether this policy itself was legal, but about its alleged use in creating the surcharge on competitors.
Surcharge on competitors
If the “no license, no chips” discussion was confusing, the torturous surcharge hypothesis knocked the wind out of the judges! Judge Murphy even said that he was having a hard time keeping up with all these things! I don’t blame them. Most of FTC’s time was spent making the judges understand what FTC calls a surcharge, how it affects competition in FTC’s view, etc. As expected, the panel challenged this claim from multiple angles (precedent, market evidence, harm to competition versus competitors, etc.) and tried to poke holes in FTC’s position.
Here are some notable questions and challenges. Judge Rawlinson asked, “…what would be wrong with that (higher royalty fees), doesn’t the Supreme Court say that patent holders have the right to price their patents, what would be anticompetitive about that?” and “…what case says that it is anti-competitive to move (cost) from chip to patent?” Judge Callahan asked, “Why did the OEMs say it’s unfair because they have to buy a license anyway?”; “…who is a Goliath here, Apple is more of a Goliath than Qualcomm”; “…your argument that Qualcomm’s licensing fees increase rival’s cost doesn’t make sense to me…”; “There seems to be… a conflation of profitable and anti-competitive (one means the other).”; and “…weren’t there multiple competitors entering the …market successfully beginning around 2015, leading to a precipitous decline in Qualcomm’s market (share)?” Judge Murphy III asked, “…why don’t we let OEMs exercise their right in patent law to file (cases for) predatory pricing, abuse of monopoly, etc. (instead of antitrust law)?” And these were mere samples.
The panel was unconvinced and most likely will remain so even after looking at the documents.
Chip volume incentives or royalty discount
This issue was not discussed as much as the others but was used as a basis for other arguments. FTC claims that Qualcomm’s volume discount to Apple is exclusionary and anti-competitive. Qualcomm, during its rebuttal, argued that licensing and chipset sales are two separate contracts and that it doesn’t make sense to combine them. Again, this is another issue where the judges will have to look at the documentation and decide.
Is the “Threat to national security” argument justified?
This is the first time that DoJ and FTC have been on opposite sides of a case. Qualcomm ceded five minutes of its time to DoJ. DoJ’s major claim is that the lower court’s global and expansive remedy harms national security. Judge Murphy seemed hostile toward DoJ and asked whether it had any market analysis or financial evidence to prove the claim. DoJ counsel, although startled by the question, came back with a reasonable explanation: the basis for the case was 3G and 4G, but applying the remedy to 5G would negatively affect the country’s standing in 5G. With 5G being such a crucial technology for many aspects of the country, DoJ and other government departments (the Department of Defense and the Department of Energy) are convinced that implementing the ruling will harm the country. FTC counsel was quick to capitalize on Judge Murphy’s assertion and discount the security concern as a simple abstraction without any supporting studies.
I am not sure whether the panel will consider the security question seriously.
What does all this mean?
You have to consider that the hearing is only one part, albeit an extremely important one, of resolving the case. The court will examine all the briefs and case documentation before making a final decision. One could argue that the cues from the hearing may be overblown; for example, all those questions and challenges could just be the judges probing both parties to completely understand their stances. However, specific things, such as the judges’ difficulty in fully grasping FTC’s argument and understanding its point of view, clearly indicate that they don’t believe those arguments and are not taking them at face value. It also suggests that FTC’s arguments are not as robust as the lower court thought they were.
From Qualcomm’s perspective, after a clear win with the stay, this hearing turned out to be very positive. FTC had a major initial setback because of the Aspen Skiing reversal, but at least made the panel understand its arguments; whether the panel agrees with them is a separate matter. In my view, Judge Callahan and Judge Rawlinson seem to be aligned with Qualcomm’s arguments, and Judge Murphy seems to be neutral or slightly aligned with FTC’s. Ultimately, as Judge Murphy III succinctly put it, “anticompetitive behavior is illegal… hyper-competitive behavior is not… this case asks us to draw the line between the two.” Meaning, the judges have to decide whether Qualcomm’s behavior is anticompetitive or hyper-competitive.
What’s next?
There is no fixed timing for the Ninth Circuit’s decision. The expectation is six to twelve months. The decision doesn’t have to be unanimous; only two of the three judges have to agree.
In terms of outcome possibilities, the panel could completely knock down all the lower court’s rulings, fully uphold them, or do anything in between; it could agree with some parts of the ruling and reverse the others, or make a determination on some and send the others back to the lower court to reconsider. No matter what the panel’s decision is, either party can request a full-bench (en banc) review at the Ninth Circuit, and further knock on the Supreme Court’s door. If Qualcomm loses, especially on the claims that affect its licensing policy, I am sure it will go to the Supreme Court. On the other hand, if FTC loses, it might ask for the full-bench review and let the case go after that.
As it stands today, I think Qualcomm is in a pretty good situation and more likely to win than the FTC.
Please make sure to sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter to get updates on this trial as well as the telecom industry at large.
The United States Court of Appeals for the Ninth Circuit (Ninth Circuit) delivered a landmark decision in favor of Qualcomm on Aug 11th, 2020, in the long-running antitrust case brought by FTC. This was a highly anticipated outcome in the multi-year saga, which saw fortunes go back and forth between the parties. The detailed opinion written by Judge Callahan, representing the panel of three judges, is a telltale account of how FTC mischaracterized Qualcomm’s business model, and how the United States District Court for the Northern District of California (lower court) misjudged the case. The ruling vacated all the decisions of the lower court, including the partial summary judgment. I spoke to Don Rosenberg, EVP and General Counsel of Qualcomm, who of course was quite pleased with the outcome. He said, “we felt vindicated by the appeals court’s ruling and are looking forward to continue bringing path-breaking innovation like 5G to life.”
The Ninth Circuit’s decision is not just relevant to this case; it clarifies a whole slew of long-standing issues and will set a defining precedent for IPR licensing in the future, especially from an antitrust point of view.
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well-expected outcome
The recent developments in the case had made me predict such a ruling. The Ninth Circuit’s stay of the lower court’s decision, the language used in that order, the tone of the in-person hearing, and the deep skepticism the panel showed in its questioning made it amply clear which direction the panel was tilting.
The case indeed had a lot of unusual and rather interesting turns of events from beginning to end. It was filed in the last days of the previous administration, with only a few FTC commissioners in office. One of those commissioners, who was opposed to the move, wrote a scathing opinion in The Wall Street Journal, publicly disparaging the case. The new incoming chair of FTC recused himself from the case, which left it on autopilot with FTC staff taking charge. The instigators, major supporters, and witnesses moved away from the case midway: Apple and Huawei settled their licensing disputes with Qualcomm, and Intel exited the modem market. The US Department of Justice, which shares antitrust responsibility with FTC, went strongly against FTC; it even became a party to the hearing and pleaded against the case. But the biggest surprise for me was the ferocity with which the Ninth Circuit tore down and reversed every decision of the lower court, including the summary judgment.
Highlights of the ruling
This indeed was a complex technical case, where the judges had to quickly develop a full understanding of the industry. Rosenberg highlighted the challenges appellate court judges face: “They have to work on the record that somebody else has created for them, including lots of documentary evidence, witness testimony, lower court’s assertions and more.” He added, “considering that, the judges did an amazing job, cutting through the noise and really getting to the core issues and opine on them.” The interesting thing I found reading through the more than 50-page ruling is how it summarized and reduced the case to five key questions:
- Whether Qualcomm’s “no license, no chips” policy amounts to “anticompetitive conduct against OEMs” and an “anticompetitive practice in patent license negotiations”
- Whether Qualcomm’s refusal to license rival chipmakers violates both its FRAND commitments and an antitrust duty to deal under § 2 of the Sherman Act
- Whether Qualcomm’s “exclusive deals” with Apple “foreclosed a ‘substantial share’ of the modem chip market” in violation of both Sherman Act provisions
- Whether Qualcomm’s royalty rates are “unreasonably high” because they are improperly based on its market share and handset price instead of the value of its patents
- Whether Qualcomm’s royalties, in conjunction with its “no license, no chips” policy, “impose an artificial and anticompetitive surcharge” on its rivals’ sales, “increasing the effective price of rivals’ modem chips” and resulting in anticompetitive exclusivity
The panel decided that FTC and the lower court were wrong on all counts. Rosenberg said that the opinion gave very logical, persuasive, point-by-point arguments, with relevant citations, to refute all those assertions. Here are some excerpts from the opinion:
“…OEM-level licensing policy, .. was not an anticompetitive violation of the Sherman Act.”
“…to the extent Qualcomm breached any of its FRAND commitments, the remedy for such a breach was in contract or tort law…”
“…‘no license, no chips’ policy did not impose an anticompetitive surcharge on rivals…”
“…We now hold that the district court went beyond the scope of the Sherman Act…”
“Thus, it [Qualcomm] does not ‘compete’—in the antitrust sense—against OEMs like Apple and Samsung in these product markets. Instead, these OEMs are Qualcomm’s customers…”
“…OEM level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,”
“…while Qualcomm’s policy toward OEMs is “no license, no chips,” its policy toward rival chipmakers could be characterized as “no license, no problem…”
“…even if we were to accept the district court’s conclusion that Qualcomm royalty rates are unreasonable, we conclude that the district court’s surcharging theory still fails as a matter of law and logic.”
“…neither the Sherman Act nor any other law prohibits companies from (1) licensing their SEPs independently from their chip sales; (2) limiting their chip customer base to licensed OEMs…”
“…Our job is not to condone or punish Qualcomm for its success, but rather to assess whether the FTC has met its burden under the rule of reason … We conclude that the FTC has not met its burden…”
What this means for the industry
This indeed was a landmark decision with far-reaching consequences. It surely clears the clouds of uncertainty that were hanging over Qualcomm’s licensing business for a long time. It will also be a welcome decision for many other patent holders and licensors. The precedent this case has set will be used for resolving patent-related antitrust issues for a long time to come. Here are some of the specific takeaways I think are relevant:
- Device-level licensing is not anti-competitive
- FRAND and patent violations are outside the purview of antitrust law and are better handled under contract law
- One company’s royalties do not have to be in line with the rates other companies charge
- A surcharge on competitors may have to be direct; at the least, “effective surcharges” derived from complex inferencing do not hold up
Rosenberg said, “Qualcomm’s novel licensing model and its policies have now gone through intense global legal litigation and have successfully proven themselves. Now we are more confident and working hard to innovate and to expand the reach of 5G and bring its benefits to the world.”
What is next for the case?
The FTC has not commented on its next steps. It does have a couple of options. It could ask for what is called an “en banc hearing,” in which the whole Ninth Circuit bench (or a major part of it) is asked to hear the case. But for that to happen, a majority of the judges would have to vote to agree to the hearing. Even after the en banc hearing, either party could knock on the doors of the Supreme Court and ask whether it would be willing to hear the case.
But, keeping all the theoretical options aside, I think the unanimous verdict and ferocious opinion, coupled with the fact that all of the lower court’s decisions were vacated, make it very unlikely that FTC will keep pushing the case further. Since the instigators and supporters have also moved on, there is no incentive for anybody to keep it going. FTC might ask for an en banc hearing anyway as a face-saving step, as that does not require significant effort on its side. But since en banc is a large effort, and many other judges would have to spend a lot of time and energy to fully understand such a highly technical and complex case to give any verdict, I doubt the court will grant it. Hence, I am confident that, in many respects, this is the end of the road for the case.
As we await the FTC’s response, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Right before the deadline passed, as expected, the Federal Trade Commission (FTC) took another swing at Qualcomm by filing a request to reconsider the recent appellate court decision. But to everybody’s surprise, the FTC Chair and Trump appointee Joseph J. Simons, coming out of recusal, authorized that decision.
This request will again set in motion activities at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). After a few more weeks of action, I believe this case will eventually go into the history books as a great precedent for antitrust law in the realm of patents and licensing. Interestingly, Apple, the alleged instigator of this case, is already using this precedent to fight its case against Epic Games!
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well-expected action by FTC, but not by its chair
Even after the emphatic rebuke from the unanimous Ninth Circuit panel, FTC was well expected to file this request, called en banc, as I predicted in my earlier article. There are many reasons for it: First, it doesn’t require much effort; only a short brief needs to be submitted. Second, even in the unlikely event that the request is accepted, the rehearing will be short, with minimal participation from FTC. Third, FTC would not like to appear as if it has given up on the case.
The most surprising thing was FTC Chairman Simons siding with the other two commissioners, resulting in a 3-2 vote in favor of en banc. He was recused from the case till May 2020 because his previous employer, Paul Weiss Rifkind Wharton & Garrison, advised Qualcomm on its unsuccessful bid to buy NXP Semiconductors. Since he is a Trump appointee, and the FTC case was filed in the wee hours of the Obama administration, even without the full commission in office, it was widely assumed that he would be against the case. Additionally, the administration’s Department of Justice (DoJ), Department of Defense, and a few other departments are also against the case; in an unusual move, DoJ forced itself into the Ninth Circuit hearing and argued against FTC.
The reasons behind Simons’ vote are not clear. Trump tweeting about government agencies not acting against tech companies might have prompted him to show some action, albeit on the wrong target. Since this was an easy move for FTC, he may have simply thought of going along with FTC staff during the last step of this case. Or maybe he actually believes in the case? We can only speculate. FTC taking the full 45 days available to file the request was also interesting; maybe they are taking a more critical look at the case. As you may know, because of the 2-2 tie at the commission, FTC staff has been running the show till now.
How does en banc work?
En banc is a process through which either of the parties requests the entire bench of the Ninth Circuit to reconsider the case. If you recollect, the earlier appeal was heard by a three-member panel. Now, the full bench of 29 judges, minus any recusals, will take a vote on the request. If the majority votes to accept the request, the case will be assigned to another panel of 11 judges for a rehearing. The rehearing is expected to be short, only requiring Qualcomm to submit a reply to FTC’s en banc brief; no new evidence, and typically no physical hearing.
The rehearing has quite a high bar. Historically, less than one percent of such requests have been accepted. Only cases that are consequential for precedent, or that contradict previous rulings or resolve previous contradictions in the circuit, are accepted. Also, the bench’s view of whether the panel correctly applied the appropriate laws is a crucial consideration.
What is FTC arguing?
The 83-page brief filed by FTC relies on many of the same arguments it presented earlier in the case. Here are a few things that are new and worth noting:
- It argues that the Ninth Circuit panel only examined the applicability of antitrust law to patents and licensing, and opined that it does not apply, a conclusion with which FTC obviously disagrees
- It points out that the panel did not disagree with any of District Judge Koh’s findings, and hence they must be true; further, FTC refers to them as “facts,” which I think is a big leap of faith
- It relies heavily on the United Shoe and Microsoft antitrust cases and attempts to draw strong parallels between them and Qualcomm’s case. Clearly, FTC has learned its lesson and moved away from the Aspen Skiing case!
- It argues that Qualcomm’s royalties are inflated because of its chip monopoly, because, as claimed unsuccessfully before, its peers’ licensing revenues are much lower
Side note: If you would like to know more about patent evaluation and how major companies rank in terms of cellular patents, check out this article series and this Tantra’s Mantra podcast.
What’s next and what does all this mean?
As mentioned, the next step is the bench vote, and if it is a yes, the panel rehearing. The voting usually takes a week or two, and if the rehearing ensues, Qualcomm will have 21 days to reply, followed by a few more weeks for the hearing. So, the whole thing should be relatively short, maybe a couple of months.
It is not clear how the rehearing would be executed. Everything will be at the discretion of the panel. It may revisit the full case or only some aspects of it, and similarly impose full or partial remedies if it comes to that.
Consider that two sets of Ninth Circuit judges have already sided with Qualcomm: one panel of three granted the stay, and another panel of three gave a unanimous decision that completely reversed the District Court’s ruling, including the summary judgment, in a compelling 53-page opinion written by Judge Callahan. Given all that, it is highly unlikely that the bench will vote for a rehearing. Note that to vote yes, the judges would have to rule against the judgment of six of their colleagues. Also, if it goes to a rehearing, the new panel would have to study this highly complex case in depth to come to any reasonable conclusion.
Other than the fact that this is an important case for royalties, licensing, and antitrust, affecting a large portion of the 5G economy, every other aspect of the case points to a “no” vote.
If FTC’s request is rejected, or if it loses the rehearing, it still has the option of going to the Supreme Court. In fact, it can approach the Supreme Court even during the en banc process.
Considering how far the case has come, my money is on the en banc request getting rejected. In the unlikely case of it going to a rehearing, I have a strong feeling that the panel’s decision will be reaffirmed. If either of these happens, I think it would be futile for FTC to go to the Supreme Court, and I seriously doubt it will try, as there are many negative consequences and long-term risks, with little chance of success.
As we await the en banc decision, if you would like more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
After pointlessly fighting tooth and nail for almost two years, FTC will now be forced to end the case after the latest setback at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). The Ninth Circuit’s well-expected en banc denial, following a series of upsets, put the final nail in the coffin. After the direct, clear, and very short seven-line opinion, I am certain that FTC will not even imagine knocking on the doors of the U.S. Supreme Court.
This decision clears the clouds hovering over Qualcomm, the country’s 5G crown jewel. It will also have a long-lasting impact not only on Qualcomm’s licensing business and policies but also on the technology industry and innovation as a whole.
Side note: If you would like to know the full background of the case, follow this FTC vs. Qualcomm article series.
A wave of setbacks for the FTC
After some initial success at the United States District Court for the Northern District of California (US District Court), the FTC has seen constant setbacks, and at times very harsh rebukes, at the Ninth Circuit.
First, the three-judge panel unanimously granted Qualcomm’s request for a stay, with a ruling that all but ridiculed the US District Court’s decision, characterizing it as “…a trailblazing application of the antitrust laws or … an improper excursion beyond the outer limits of the Sherman Act…”
Second, when another three-judge appeals panel heard the case, its pointed questioning of the FTC’s confusing arguments made it amply clear which way the panel was leaning.
Third, the actual unanimous judgment shredded the US District Court’s decision, completely reversing and vacating it, including the initial summary judgment. The opinion, written by Judge Callahan, was a telltale account of how US District Court Judge Lucy Koh misapplied the antitrust laws.
Finally, this wholesale denial of the en banc request was yet another strong strike against the FTC’s unfounded fascination with continuing the unworthy prosecution of a free and very successful American enterprise. It quashed the hopes of those who thought the surprise move of FTC Chairman Joseph J. Simons, a Trump appointee, to authorize the en banc request had breathed life back into the case.
In retrospect, the case has gone through a whole slew of US federal judges: the six judges of the panels and, to some extent, the full Ninth Circuit bench of more than 25 judges. But the only sympathizer for the FTC in the US legal system seems to be Judge Lucy Koh of the US District Court. As an observer who attended almost all the court hearings, I found her handling of the case bizarre. Examples of her strange approach include artificially limiting the discovery period, which skewed the case; clinging to hypotheses such as the “tax on the competitor,” which were rejected by other courts and judges; and rejecting the testimony of all of Qualcomm’s executives, including that of its highly respected and revered founder and industry veteran, Dr. Irwin Jacobs.
A series of unfortunate events
As I have indicated many times in my earlier articles, this case had a lot of oddities right from the beginning, and they continued throughout the proceedings. The case was filed in the last days of the previous administration, with only a partial commission present. A sitting FTC commissioner publicly criticized the case in a harsh rebuke published in The Wall Street Journal. When the full commission was constituted, the Chairman recused himself from the case, leaving the commission deadlocked, with two commissioners supporting the case and two opposing it. That left the case running almost on autopilot, managed by FTC staff. Apple, which was one of the alleged instigators and a major witness in the case, settled with Qualcomm and ended its active support.
Many U.S. Government agencies opposed the FTC’s action. The U.S. Department of Justice, which shares responsibility for and partners with the FTC on antitrust matters, vehemently opposed the case and even took the unprecedented step of testifying against it at the appeals hearing. Many legal scholars, previous FTC commissioners, and former Ninth Circuit judges also opined against the case.
What’s next?
Although the FTC has the theoretical option of knocking on the door of the US Supreme Court, I don’t think this series of setbacks and strong rebukes leaves it any option other than to close the case and move on. Had the appeals decision not been unanimous, had it not been a complete reversal, or had the en banc request been accepted, there would have been some justification for continuing. Without any of those, it would be utterly stupid for the FTC to continue the case and waste even more taxpayer money.
If it had any doubts, the Ninth Circuit’s unambiguous en banc opinion, a mere seven lines long, makes things pretty clear. That is the shortest court document I have ever seen and analyzed; many run to a hundred pages or more. This decision clears all the doubts around Qualcomm’s licensing policies and the industry-standard practice of licensing to OEMs. That means the practice of calculating licensing fees based on the price of the device (with caps, of course) is completely valid and legal. The case establishes a pretty significant precedent for licensing practices and the applicability of antitrust laws. It will have a long-lasting impact not only on the cellular industry but on almost the entire technology industry and beyond. With 5G set to transform almost every industry on the planet, the repercussions of the case are impossible to overstate. Look for a detailed article on this from me soon.
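To make the device-price royalty mechanics concrete, here is a minimal sketch. The 5% rate and $400 cap are purely illustrative assumptions for the sake of the example, not Qualcomm’s actual terms.

```python
def device_royalty(device_price: float,
                   royalty_rate: float = 0.05,
                   price_cap: float = 400.0) -> float:
    """Illustrative per-device royalty: a percentage of the device's selling
    price, with the price capped so royalties on premium devices don't grow
    without bound. The rate and cap here are assumed placeholders."""
    capped_price = min(device_price, price_cap)
    return capped_price * royalty_rate

# A $1,200 flagship and a $200 budget phone under the same assumed terms:
print(device_royalty(1200.0))  # price capped at $400 -> 20.0
print(device_royalty(200.0))   # below the cap        -> 10.0
```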
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Demystifying semiconductor techs that move industry forward
The tech industry has seen a blistering pace of innovation and market dominance. Global equity markets sway on how Apple, Amazon, Google, Facebook, Microsoft, Netflix, Intel, and Nvidia perform. Seven of the top ten companies in the S&P 500 and three of the top ten Dow Jones Industrial Average components are tech companies. The meteoric rise of these giants was primarily fueled by unprecedented advancements in computing, especially mobile computing. So much so that the global economic future is guided by consumers’ mobile-first experiences and the technologies that enable those experiences.
In a series of articles, I will explore the history and evolution of computing, i.e., semiconductor technologies: how they have shaped our present and will define our future. Additionally, I will provide commentary on some of the critical industry events that have influenced this evolution, and analysis of how developments now underway could change course and drastically alter the future the industry has collectively envisioned.
Semiconductor technology evolution – a tale of two architectures
When you look at the evolution of semiconductor technologies and architectures, there are two clear paths. First is Intel’s x86 architecture, which dominates the server, desktop, and laptop computing space. Second is the architecture of Arm Ltd. of the U.K., which controls almost all of the mobile computing space. Historically, x86’s primary focus has been performance, sometimes at the expense of power consumption. Arm, on the other hand, has been feverishly focused on low power consumption, at the cost of limited performance.
However, both companies are trying to evolve their architectures to improve along both the performance and power-consumption axes. Intel’s latest x86 laptop processors have improved much over their predecessors in terms of battery life. Arm processors have improved by leaps and bounds in performance over the years, rivaling even Intel in personal computing processors while still maintaining their low-power heritage. Currently, these architectures have limited overlap in terms of use cases and markets. But the turf war between them has been brewing for some time and is about to get brutal pretty quickly. Apple moving from Intel’s x86 processors to its own Arm-based M1 processor for Mac laptops is a good indication of that.
The future of technology will run on Arm
There is no doubt the future will be dictated by the mobile-first experiences users have become accustomed to and now expect from everything tech, everywhere. That means almost everything will be mobile, untethered, and wireless. 5G is providing an even bigger impetus and extending that trend beyond the consumer segment to industrial segments as well. All this means that untethered devices, from simple consumer gadgets to large machines in factories, will run on batteries, which in turn means that low power consumption will be of paramount importance.
Arm’s inherently low-power architecture will surely be the choice for the untethered world. Although Arm dominates only the mobile compute world today, its processing capabilities are evolving rapidly, and with thousands of innovative companies working on its technology, it is on track to expand beyond that space. Take the server market, where Intel’s x86 has near-complete domination: Arm is trying to make a play, as even there power consumption is becoming a challenge and big cloud companies are looking for low-power solutions. Industrial IoT, automotive, edge-cloud, and many other segments are ripe for digital transformation and are good candidates for Arm adoption.
Arm’s “horizontal” business model
Unlike Intel, which has a vertical model of developing architecture and fabricating its own processors, Arm has adopted a “horizontal” business model. It develops the architecture and processor technology and licenses them in different flavors to semiconductor companies. Because of this model, Arm has enabled thousands of big and small companies including giants such as Apple, Samsung, Qualcomm, Microsoft, and others, to make market-leading and even market-defining products based on its architecture. If you are using any consumer electronics product that has some sort of processor in it, most likely it is based on Arm technology.
Arm’s horizontal business model is one of the key reasons behind the tech boom. While Arm focuses on continually improving the architecture and developing a strong roadmap, its large partner ecosystem focuses on developing processors and end products. The software ecosystem develops services to best exploit these technologies and products, creating an endless cycle of innovation that has fueled the tech boom.
Recent developments at Arm
The recent announcement of Nvidia buying Arm from its owner Softbank came as a shock to many who were part of this innovation cycle. This move has the potential to completely upend the whole ecosystem and may require significant realignment. Interestingly, Nvidia competes with almost all of Arm’s major customers in some shape or form. Additionally, Nvidia and Arm have quite different strategies, approaches, target market segments, and customer base, which makes it even more nerve-wracking for the ecosystem.
As is evident, this is a multifaceted issue, with numerous primary, secondary, and tertiary impacts on Arm’s future as well as its huge ecosystem. In a series of articles, I will analyze all those dimensions closely and present my thoughts on the subject. So, be on the lookout!
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Right after the last Nvidia quarterly earnings release, Jim Cramer, host of CNBC’s Mad Money, spoke to Jensen Huang, CEO of Nvidia, regarding the deal with Arm. Most of his questions were softballs, but what caught my attention was Jensen’s comment that Arm was not a must for Nvidia’s success, but a nice-to-have. That got me thinking and made me take a deeper dive into the rationale for the merger. Here are some of my thoughts on why Nvidia needs Arm more than vice versa.
Nvidia’s announcement of its intent to acquire Arm from Softbank has brought Arm out of the shadows and into the limelight. Arm has always been a silent performer, quietly powering the modern smartphone revolution. Its inner workings have been an enigma to many industry observers. And now, many are scrambling to understand what Arm does, and how Nvidia’s buyout will affect the semiconductor industry, the competitive landscape, and the future of tech at large. If you do not yet appreciate the importance of Arm, consider this: almost any tech gadget you can think of, be it a simple IoT device, a game console, a smartphone, or even a modern car, has been touched by Arm technology in some shape or form. Its importance and reach are only going to expand as the whole world moves toward untethered, low-power computing, as I explained in my earlier article here. Hence, the impact of its buyout by Nvidia on the industry is going to be outsized and impossible to overstate.
Side note: You can read the full article series here.
Arm’s licensing model
To scrutinize Nvidia’s rationale effectively, one has to really understand Arm’s business model, especially its licensing model. In simple terms, Arm is the design house of power-efficient processors (aka cores) for the entire tech industry. It makes money by licensing those technologies in different forms. It offers three types of licenses—Processor, Optimized Processor, and Architecture. Let us look at each of these more closely.
The first, the Processor License, is simply the permission to use processor cores designed by Arm. Licensees cannot change Arm’s designs but are free to implement them however they like in their own solutions. For example, Qualcomm, Samsung, and Huawei have this type of license. They combine multiple types of Arm cores (e.g., CPU, GPU, or other types, and in some cases, different sizes of cores) alongside other proprietary cores to make their semiconductor systems-on-a-chip (SoCs). They also optimize the cores to achieve greater performance and to differentiate from other SoCs. You might have heard about how the Qualcomm Snapdragon, Samsung Exynos, and (Huawei) HiSilicon Kirin platforms perform differently. That difference exists because each company uses and optimizes Arm cores differently. So, such a license is for players that have the technical and financial wherewithal to do such optimizations.
The second, Optimized Processor License, is a bit more involved and detailed, where Arm not only provides the basic processor design but also, optimizations to achieve a certain level of guaranteed performance. This license is well-suited to companies that do not have the capabilities to implement and optimize a design, for example, smaller IoT chipset providers. This is probably Arm’s most popular option, with thousands of licensees.
The third, the Architecture License, is also sometimes referred to as an Instruction Set Architecture (ISA) License or simply an Instruction Set License, and is the most minimalistic option. ISA licensees only get access to Arm’s instruction set and can design their own cores that run those instructions. Apple is such a licensee. Its A-series processors used in iPhones and iPads and the new M1 processor used in Macs are designed by Apple but run Arm’s instruction set. Nvidia, Google, Microsoft, Qualcomm, and Tesla also possess architecture licenses.
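Since the three license tiers differ mainly in what the licensee receives and what it may build, a simple data model can summarize them. This is a minimal sketch of my own reading of the tiers described above, not Arm’s actual contract structure; all field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ArmLicense:
    """Illustrative summary of Arm's three license tiers, per the text above."""
    name: str
    gets_arm_core_design: bool   # receives ready-made, Arm-designed cores
    arm_optimized: bool          # Arm also supplies performance optimizations
    can_design_own_cores: bool   # may build custom cores on the Arm ISA

PROCESSOR = ArmLicense("Processor", True, False, False)
OPTIMIZED = ArmLicense("Optimized Processor", True, True, False)
ARCHITECTURE = ArmLicense("Architecture (ISA)", False, False, True)

# Examples from the article: Qualcomm, Samsung, and Huawei hold Processor
# licenses; Apple designs its A-series and M1 cores under an ISA license.
for lic in (PROCESSOR, OPTIMIZED, ARCHITECTURE):
    print(lic)
```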
Why is Nvidia buying Arm?
The reasons Nvidia has given for buying Arm can be grouped into three categories of benefits: 1) using Arm’s vast ecosystem to distribute Nvidia’s Intellectual Property (IP); 2) investing in the Arm architecture to consolidate and expand its reach in the data center market; and 3) co-inventing the Edge-cloud with Arm’s and Nvidia’s technologies.
In general terms, these reasons seem very attractive and complementary, benefiting both companies and their shareholders. They seem to benefit the industry at large as well, by giving others access to Nvidia’s market-leading graphics IP and accelerating the growth of data center and Edge-cloud markets. However, when you remove the covers and dig a bit deeper, there are quite a few peculiarities to consider.
First, Nvidia distributing its IP to the Arm ecosystem: from a business model perspective, Nvidia and Arm could not be more dissimilar. Arm is a pure-play licensing company that derives most, if not all, of its revenues from licensing. That makes it a neutral player across the whole ecosystem: it licenses its technology to all and does not compete with any of its customers. On the other hand, to my knowledge, the only thing Nvidia licenses is its CUDA software, and at no charge. One reason CUDA is free is that it only runs on Nvidia GPUs. Nvidia makes most of its money from its highly differentiated, high-margin GPU hardware and integrated software. Given this lucrative revenue stream, it is hard to fathom Nvidia’s willingness to license its GPU IP to Arm’s ecosystem, which would diminish its differentiation and destroy those sky-high margins. This could be particularly problematic, as some Arm licensees are developing products for the data center, where Nvidia makes most of its money. Nvidia’s licensing revenues and margins, like Arm’s, would be a pittance compared to Nvidia’s existing product revenues and margins. Unless there is another, more plausible explanation in which margins and the revenue stream are not sacrificed, this argument is hard for anybody to buy.
Second, helping Arm expand into the data center market: this seems like a novel idea, and the significant financial and other resource infusions Nvidia could make would certainly accelerate Arm’s current trajectory. However, the “Arm for the data center” effort is already well underway, mainly because the data center service providers themselves have realized the importance of power-efficient processing, for financial as well as environmental (carbon footprint) reasons. Cloud giants such as Amazon, Google, and Facebook have reportedly been working on their own in-house, Arm-based platforms. Arm already seems to have the financial and market support it needs. On the contrary, Nvidia, with its high-performance but energy-guzzling GPUs, will need low-power CPUs to complement (and improve) its portfolio, especially as the data center market becomes extremely energy conscious. Additionally, it is likely Arm, with its decades of experience in low-power design, that can teach Nvidia a trick or two to help reduce the power consumption of its GPU designs. So, although Nvidia’s resources might help Arm, it seems Nvidia needs Arm equally, if not more.
Third, co-inventing the Edge-cloud: unlike the data center market, this ship sailed some time ago. Power-efficient design is a basic necessity for edge compute, and it is one of the reasons Arm is at the center of this universe. Thousands of small and large companies, including the cloud titans, are investing in and developing technologies for the Edge-cloud. Nvidia will be a noteworthy addition to that ecosystem, but only one of many players. Also, with power at a premium for Edge-cloud use cases and workloads, Nvidia has to pivot from its performance-first design philosophy to more power-efficient architectures. In this market, Arm will be of greater value to Nvidia than the other way around.
Upon closer examination of the three main reasons Nvidia cites for the acquisition, one seems unconvincing, and the other two run counter to Nvidia’s logic because it appears Nvidia would benefit more from Arm than vice versa. Moreover, Arm and its customers are already on the very path Nvidia proposes to help Arm along. And if the merger goes through, Arm, instead of being a neutral supplier with no conflicts of interest with its customers, would become both a technology supplier and a competitor to its customers in the Cloud, the Edge-cloud, PCs, the automotive industry, and AI. This dichotomy might affect Arm’s vast ecosystem and its unwavering support for the architecture. Also, Arm has developed its architecture and its business with significant input from its ecosystem, and ecosystem players would likely be disincentivized from sharing their input with a competitor, Nvidia-Arm. Nvidia’s resources, it seems, would not come without opportunity cost to Arm.
I am sure you are aware of news reports citing many concerned ecosystem players reaching out to the FTC and other antitrust agencies about the acquisition. You might even be wondering what these players, including behemoths like Google, Samsung, Qualcomm, and even Apple, are worried about. Well, that is the topic of my next article, so be on the lookout!
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
The Chronicles of 3GPP Rel. 17
Have you ever felt the joy and elation of becoming part of something you had long been observing, reading and writing about, and admiring? Well, I experienced that when I became a member of 3GPP (3rd Generation Partnership Project) and attended RAN (Radio Access Network) plenary meeting #84 last week in the beautiful city of Newport Beach, California. The RAN group is primarily responsible for developing the wireless, or radio interface, specifications.
The timing couldn’t have been more perfect. This specific meeting was, in fact, the kick-off of the 3GPP Rel. 17 discussions. I have written extensively about 3GPP and its processes on RCR Wireless News; you can read all of those articles here. Attending the first-ever meeting on a new release was indeed very exciting. I will chronicle the journey of Rel. 17 through a series of articles here on RCR Wireless News, and this is the first one. I will report on developments and discuss what they mean for the wireless industry as well as the many other industries 5G is set to touch and transform. If you are a standards and wireless junkie, get on board and enjoy the ride.
3GPP Rel. 17 is coming at an interesting time: after the much-publicized and accelerated Rel. 15 that introduced 5G, and Rel. 16 that laid a solid foundation for taking 5G beyond mobile broadband. Naturally, the interest now is in what more 5G can do. The Rel. 17 kick-off meeting, as expected, was a symposium of great ideas and a long wish list from prominent 3GPP members. Although many members submitted proposals, only a few, selected through a lottery system, got the opportunity to present at the meeting. Nokia, KPN, Qualcomm, the Indian SSO (Standard Setting Organization), and a few others were among those who presented. I saw two clear themes in most of the proposals: first, keeping enough of 3GPP’s time and resources free to address urgent needs stemming from the nascent 5G deployments; second, addressing the needs of the new verticals and industries that 5G enables.
Rel. 17 work areas
There were a lot of common subjects in the proposals. These were consolidated into four main work areas during the meeting:
- Topics for which discussion can start in June 2019: the main topics in this group include mid-tier devices such as wearables that need neither extreme speeds nor extremely low latency, small data exchange during the inactive state, D2D enhancements going beyond V2X for relay-type deployments, support for mmWave above 52.6 GHz, Multi-SIM, multicast/broadcast enhancements, and coverage improvements.
- Topics for which discussion can start in September 2019: these include Integrated Access Backhaul (IAB), unlicensed spectrum support and power-saving enhancements, eMTC/NB-IoT in NR improvements, data collection for SON and AI considerations, high-accuracy and 3D positioning, and others.
- Topics with broad enough agreement that they can be directly proposed as Work Items or Study Items in future meetings: 1024 QAM and others.
- Topics that don’t have wider interest or were proposed by only one or a few members.
As the chair emphasized many times, the objective of forming these work areas was only to facilitate discussions among the members and arrive at a common understanding of what is needed. The division into June and September timeframes was purely logistical and does not imply any priority between the two groups. Many of the September work areas would be enhancements to items still being worked on in Rel. 16, and spacing them out spreads the workload better. Based on how the discussions pan out, the work areas could become candidates for Work Items or Study Items at the December 2019 plenary meeting.
Two specific topics caught my attention: first, making 5G even more suitable for XR (AR, VR, etc.), and second, AI. The first makes perfect sense, as XR evolution will bring even more stringent latency requirements and will need distributed processing capability between the device and the edge-cloud. However, I am not so sure about AI. I don’t know how much scope there is to standardize AI, as it doesn’t necessarily require interoperability between devices of different vendors. Also, I doubt companies would be interested in standardizing AI algorithms, which would minimize their competitive edge.
Apart from the technical discussions, there were questions and concerns regarding the US Government order banning Huawei. This was the first major RAN plenary meeting after the executive order imposing the ban was issued. From the discussions, it seemed like “business as usual.” We will know the real effects when the detailed discussions start in the coming weeks.
On a closing note, many compare the standardization process to watching a glacier move. On the contrary, I found it very interesting, even amusing, especially how the consensus process works among companies that compete and collaborate at the same time. The meeting was always lively, with lots of arguments and counter-arguments. We will see whether my view changes in the future! So, tune in to updates from future Rel. 17 meetings to hear about the progress.
I just returned from a whirlwind session of 3GPP RAN Plenary #86, held in the beautiful beach town of Sitges, Spain. The meeting finalized a comprehensive package of more than 30 Study and Work Items (SIs and WIs) for Rel. 17. With a mix of new capabilities and significant improvements to existing features, Rel. 17 is set to define the future of 5G. It is expected to be completed by mid- to late 2021.
<<Side note: If you would like to understand more about how 3GPP works, read my series “Demystifying Cellular Patents and Licensing.”>>
Although the package looks like a laundry list of features, it offers a window into the strategies and capabilities of the different member companies. Some are keen on investing in new, path-breaking technologies, while others are looking to optimize existing features or are working on fringe or very specific areas.
The Rel. 17 SIs and WIs can be divided into three main categories.
Blazing new trail
These are the most important new concepts being introduced in Rel. 17 that promise to expand 5G’s horizon.
XR (SI) – The objective is to evaluate and adopt improvements that make 5G even better suited for AR, VR, and MR. It includes evaluating distributed architectures that harness the power of edge-cloud and device capabilities to optimize latency, processing, and power. Lead (aka Rapporteur) – Qualcomm
NR up to 71 GHz (SI and WI) – This is in the new section because of a twist: the WI extends the current NR waveform up to 71 GHz, while the SI explores new, more efficient waveforms for the 52.6–71 GHz band. Lead – Qualcomm and Intel
NR-Light (SI) – The objective is to enable cost-effective devices with capabilities that lie between full-featured NR and Low Power Wireless Access (e.g., NB-IoT/eMTC), for example, devices that support tens or hundreds of Mbps rather than multi-gigabit speeds. Typical use cases are wearables, Industrial IoT (IIoT), and others. Lead – Ericsson
Non-Terrestrial Network (NTN) support for NR & NB-IoT/eMTC (WI) – A typical NTN is a satellite network. The objective is to address verticals such as mining and agriculture, which mostly operate in remote areas, as well as to enable global asset management transcending continents and oceans. Lead – MediaTek and Eutelsat
Perfecting the concepts introduced in Rel. 16
Rel. 16 was a short release with an aggressive schedule. It improved upon Rel. 15 and brought in some new concepts. Rel. 17 aims to make those new concepts well-rounded.
Integrated Access & Backhaul – IAB (WI) – Enable cost-effective and efficient deployment of 5G by using wireless for both access and backhaul, for example, using relatively low-cost and readily available millimeter wave (mmWave) spectrum in IAB mode for rapid 5G deployment. Such an approach is especially useful in regions where fiber is not feasible (hilly areas, emerging markets). Lead – Qualcomm
Positioning (SI) – Achieve centimeter-level accuracy, based only on cellular connectivity, especially indoors. This is a key feature for wearables, IIoT, and Industry 4.0 applications. Lead – CATT (NYU)
Sidelink (WI) – Expand use cases from V2X-only to public safety, emergency services, and other handset-based applications by reducing power consumption and latency and improving reliability. Lead – LG
Small data transmission in “Inactive” mode (WI) – Enable such transmissions without going through the full connection setup, to minimize power consumption. This is extremely important for IIoT use cases such as sensor updates, and also for smartphone chat apps such as WhatsApp, QQ, and others. Lead – ZTE
IIoT and URLLC (WI) – Evaluate and adopt any changes that might be needed to use the unlicensed spectrum for these applications and use cases. Lead – Nokia
Fine-tuning the performance of basic features introduced in Rel. 15
Rel. 15 introduced 5G, with a primary focus on enhanced Mobile Broadband (eMBB). Rel. 16 enhanced many of the eMBB features, and Rel. 17 is now trying to optimize them even further, especially based on learnings from the early 5G deployments.
Further enhanced MIMO – FeMIMO (WI) – This improves the management of beamforming and beamsteering and reduces associated overheads. Lead – Samsung
Multi-Radio Dual Connectivity – MRDC (WI) – Mechanism to quickly deactivate unneeded radio when user traffic goes down, to save power. Lead – Huawei
Dynamic Spectrum Sharing – DSS (WI) – DSS had a major upgrade in Rel 16. Rel 17 is looking to facilitate better cross-carrier scheduling of 5G devices to provide enough capacity when their penetration increases. Lead – LG
Coverage Extension (SI) – Since many of the spectrum bands used for 5G are higher than those used for 4G (even within sub-6 GHz), this will look into extending 5G coverage to balance the difference between the two. Lead – China Telecom and Samsung
Along with these, many other SIs and WIs, including Multi-SIM, RAN Slicing, Self-Organizing Networks, QoE enhancements, NR multicast/broadcast, UE power saving, and others, were adopted into Rel. 17.
Other highlights of the plenary
Unlike previous meetings, there were more delegates from non-cellular companies this time, and they participated very actively in the discussions as well. For example, a representative from Bosch was a passionate proponent of automotive needs in the Sidelink enhancements. I also spoke with people who facilitate the discussion between 3GPP and the industry body 5G Automotive Association (5GAA). This is an extremely welcome development, considering that 5G will transform these industries. Incorporating their needs at the grassroots level, during the standards definition phase, allows the ecosystem to build solutions that are market-ready for rapid deployment.
There was a rare, very contentious debate in a joint session between the RAN and SA groups. The debate was whether to keep the RAN SI and WI completion timeline at 15 months, as currently planned, or extend it to 18 months. The reason for the latter is that TSG-SA is late with Rel. 16 completion and consequently lagging on Rel. 17. Setting an 18-month completion target for RAN would allow SA to catch up and align both groups to finish Rel. 17 simultaneously. However, RAN, which runs a tight ship, is not happy with the delay. Even after a lengthy discussion, the issue remains unresolved.
<<Side Note: If you would like to know the organization of different 3GPP groups, including TSGs, check out my previous article “Who are the unsung heroes that create the standards?” >>
It would be remiss of me not to mention the excellent project management skills exhibited by the RAN chair, Mr. Balazs Bertenyi of Nokia Bell Labs. Without his firm yet logical and unbiased decision-making, it would have been impossible to finalize all of this in the short span of four days.
In closing
Rel. 17 is a major release in the evolution of 5G that will expand its reach and scope. It will 1) enable new capabilities for applications such as XR; 2) create new categories of devices with NR-Light; 3) bring 5G to new realms such as satellites; and 4) make possible the Massive IoT and Mission-Critical Services vision set out at the beginning of 5G, all while improving on the excellent start 5G has gotten with Rel. 15 and eMBB. I, for one, feel fortunate to witness its transformation from concept to completion.
With the COVID-19 novel coronavirus creating havoc and upsetting everybody’s plans, the question on the minds of many people who follow standards development is, “How will it affect the 5G evolution timeline?” The question is even more relevant for Rel. 16, which is expected to be finalized by June 2020. I talked at length about this with two key leaders of the industry body 3GPP: Mr. Balazs Bertenyi, Chair of the RAN TSG, and Mr. Wanshi Chen, Chair of the RAN1 Working Group (WG). The message from both was that Rel. 16 will be delivered on time. The Rel. 17 timeline, though, is a different story.
<<Side note: If you would like to know more about 3GPP TSGs and WGs, refer to my article series “Demystifying Cellular Patents and Licensing.” >>
3GPP meetings are spread throughout the year. Many of them are large, conference-style gatherings involving hundreds of delegates from across the world. WG meetings happen almost monthly, whereas TSG meetings are held quarterly. The meetings are usually distributed among the major member regions, including the US, Europe, Japan, and China. In the first half of this year, there were WG meetings scheduled in Greece in February, and in Korea, Japan, and Canada in April, as well as TSG meetings in Jeju, South Korea in March. But because of the virus outbreak, all those face-to-face meetings were canceled and replaced with online meetings and conference calls. As it stands now, the next face-to-face meetings will take place in May, subject to how the virus situation develops.
Since 3GPP runs on consensus, the lack of face-to-face meetings certainly raises concerns about the progress that can be made, as well as the possible effect on timelines. However, the duo of Mr. Bertenyi and Mr. Wanshi are working diligently to keep the well-oiled standardization machine going. Mr. Bertenyi told me that although face-to-face meetings are the best and most efficient option, 3GPP is making elaborate arrangements to replace them with virtual means. They have adopted a two-step approach: 1) further expand the ongoing email-based discussions; 2) hold multiple simultaneous conference calls mimicking the actual meetings. “We have worked with the delegates from all participant countries to come up with a few convenient four-hour time slots, and will run simultaneous online meetings/conference calls and collaborative sessions to facilitate meaningful interaction,” said Bertenyi. “We have stress-tested our systems to ensure their robustness in supporting a large number of participants.”
Mr. Wanshi, who leads the largest working group, RAN1, says that they have already completed a substantial part of the Rel. 16 work and have achieved functional freeze. The focus is now on the RAN2 and RAN3 groups, where work is in full swing. The current schedule is to achieve what is called the ASN.1 freeze in June 2020. This milestone establishes a stable specification baseline from which vendors can start building commercial products.
So, it is reasonable to say that, notwithstanding any further disturbances, Rel. 16 will be finalized on time. Things are less certain for Rel. 17, however. Mr. Bertenyi stated that, based on the meeting cancellations, it seems inevitable that the Rel. 17 completion timeline will shift by three months, to September 2021.
It goes without saying that these plans are based on the current state of the outbreak. If the situation changes substantially, all the plans will be up in the air. I will keep monitoring developments and report back. Please make sure to sign up for our monthly newsletter at TantraAnalyst.com/Newsletter to get the latest on standardization and the telecom industry at large.
It is election time at the 3GPP, and last week was the ballot for the chairmanship of the prestigious RAN Technical Specification Group (TSG). Dr. Wanshi Chen of Qualcomm came out as a winner after a hard-fought race. I caught up with Wanshi right after the win to congratulate him and discuss his vision for the group as well as the challenges and opportunities that lie ahead. Here is a quick primer on the 3GPP ballot process and highlights from my discussion with Wanshi.
Side note: If you would like to know more about 3GPP Rel. 17, please check out the earlier articles in the series.
3GPP TSGs and elections
As I have explained in my article series “Demystifying Cellular Patents and Licensing,” 3GPP has three TSGs, responsible for the radio access network, the core network, and services and system aspects, and aptly named TSG-RAN, TSG-CN, and TSG-SA. Among these, TSG-RAN is probably the biggest in terms of size, scope, and number of activities. It is managed by one chair and three vice-chairs. The chair ballot was held last week (starting March 16th, 2021), and the vice-chair ballot is happening as this article is being published.
The primary objective of the RAN chair is to ensure all the members work collaboratively to develop next-generation standards through 3GPP’s marquee consensus-based, impartial approach. The chair position carries a lot of clout and prestige; the chairmanship represents the collective confidence of the entire 3GPP community, and the chair provides vision and leadership to the entire industry. RAN TSG chair leadership is especially crucial now, when the industry is at the critical juncture of taking 5G beyond conventional cellular broadband to many new industries and markets.
For the candidates, the 3GPP election is a long-drawn process, starting more than a year before the actual ballot. The credibility and competence of the individual candidates, as well as of the companies they represent, are put to the test. Although delegates vote as individuals in a secret ballot, competitive positioning between member companies, and sometimes regional dynamics, can play an important role.
During the actual election, a winner is declared if any candidate gets more than 71% of the votes in either the first or the second round. If not, a third, run-off round ensues, and whoever gets a simple majority there wins the race. This time, there were four candidates in the fray: Wanshi Chen of Qualcomm, Mathew Baker of Nokia, Richard Burbidge of Intel, and Xu Xiaodong of China Mobile. The election did go to the third, run-off round, where Wanshi Chen won against Mathew Baker by a comfortable margin.
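As a quick illustration of that ballot arithmetic, here is a minimal sketch. The vote counts are entirely hypothetical and the tie handling is simplified; this is only my reading of the two-threshold rule described above, not 3GPP’s official procedure.

```python
def elect_chair(round_results):
    """Illustrative sketch of the ballot rule described above: a candidate
    wins round 1 or 2 outright with more than 71% of the votes cast; if
    no one does, a third, run-off round is decided by simple majority."""
    for rnd, votes in enumerate(round_results, start=1):
        total = sum(votes.values())
        leader = max(votes, key=votes.get)
        share = votes[leader] / total
        if rnd <= 2 and share > 0.71:   # super-majority in rounds 1-2
            return leader, rnd
        if rnd == 3 and share > 0.50:   # simple majority in the run-off
            return leader, rnd
    return None, len(round_results)

# Hypothetical numbers only: no super-majority emerges, so the
# run-off round decides the winner.
rounds = [
    {"A": 300, "B": 250, "C": 150, "D": 100},  # round 1: leader at 37.5%
    {"A": 380, "B": 420},                      # round 2: leader at 52.5%
    {"A": 350, "B": 450},                      # run-off: leader at 56.3%
]
print(elect_chair(rounds))  # ('B', 3)
```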
New chair’s vision for the next phase of 5G
Dr. Wanshi Chen is a prolific inventor, a researcher, and a seasoned standards leader. He has been part of 3GPP for the last 13 years. He is currently the Chair of the RAN1 Working Group and was a vice-chair of the same group before that. RAN1 is one of the largest working groups within 3GPP, with up to 600 delegates. Wanshi has successfully presided over the group during critical times. For example, he took over the RAN1 chairmanship right after the acceleration of 5G standardization and was instrumental in finalizing 3GPP Rel. 15 in record time. Following that, he also played a key role in finishing Rel. 16 on time as planned, despite the enormous workload and the unprecedented disruptions caused by the onset of the Covid-19 pandemic.
The change of guard at the RAN TSG is happening at a crucial time for 5G, which is set to transform many verticals and industries beyond smartphones. 3GPP has already laid a solid foundation with Rel. 16; Rel. 17 development is in full swing, and Rel. 18 is being conceptualized. The next chair will have a unique opportunity to shape the next phase of 5G. Wanshi said, “Industry always looks to 3GPP for leadership in exploring the new frontiers, providing the vision, and developing technologies and specifications to pave the way for the future. It is critical for 3GPP to maintain a fine balance between the traditional and newer vertical domains and evolve as a unified global standard by considering inputs from all regions of the world.”
Entering new markets and new domains is always fraught with challenges and uncertainties. However, “Such transitions are not new to 3GPP,” says Wanshi. “We worked across the aisle and revolutionized mobile broadband with 4G, and standardized 5G in record time. I am excited to be leading the charge and extremely confident of our ability to band together as an industry and proliferate 5G everywhere.”
It is indeed interesting to note that Qualcomm was also at the helm of the RAN TSG when 5G was accelerated. Lorenzo Casaccia, Qualcomm’s VP of Technical Standards and another 3GPP veteran, said, “The primary task of the chair is to foster consensus among all member companies, facilitating the continued expansion of 5G, and potentially formulating initial plans toward the industry’s 6G vision.” He added, “Having known Wanshi for years, I am extremely confident of his abilities to lead 3GPP toward that vision.”
The tenure of the chair is two years, but people usually serve two consecutive terms, totaling four years. That means Wanshi will have a minimum of two years and a maximum of four years to show his magic, starting from June 2021. I wish him all the best in his new position and will be closely watching him, and 3GPP, as 5G moves into its next phase.
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
The twin events of 3GPP RAN Plenary #92e and the Rel. 18 workshop are starting to shape the future of 5G. The plenary substantially advanced Rel. 17 development, and the workshop kick-started the Rel. 18 work. Amid these two, 3GPP also approved “5G Advanced” as the marketing name for Release 18 and beyond. Being a 3GPP member, I had a front-row seat to witness all the interesting discussions and decisions.
With close to 200 global operators already live with the first phase of 5G, and almost every cellular operator either planning, trialing, or deploying their first 5G networks, the stage is set for the industry to focus on the next phase of 5G.
Solid progress on Rel. 17, projects mostly on track
RAN Plenary #92e was yet another virtual meeting, with discussions conducted through a mix of emails and WebEx conference sessions. It was also the first official meeting for the newly elected TSG RAN chair, Dr. Wanshi Chen of Qualcomm, and the three vice-chairs, Hu Nan of China Mobile, Ronald Borsato of AT&T, and Axel Klatt of Deutsche Telekom.
Most of the plenary time was spent discussing various aspects of Rel. 17, which has a long list of features and enhancements. For easy reference and better understanding, I (not 3GPP) divide them into three major categories, as below:
New concepts:
Enhancements for better eXtended Reality (XR), mmWave support up to 71 GHz, new connection types such as NR Reduced Capability (RedCap, aka NR-Light), and Non-Terrestrial Network (NTN) support for NR and NB-IoT/eMTC.
Improving Rel. 16 features
Enhanced Integrated Access & Backhauls (IAB), improved precise positioning and Sidelink support, enhanced IIoT and URLLC functionality including unlicensed spectrum support, and others.
Fine-tuning Rel. 15 features
Further enhanced MIMO (FeMIMO), Multi-Radio Dual Connectivity (MRDC), Dynamic Spectrum Sharing (DSS) enhancements, Coverage Extension, Multi-SIM, RAN Slicing, Self-Organizing Networks (SON), QoE Enhancements, NR-Multicast/Broadcast, UE power saving, and others.
For details on these features please refer to my article series “The Chronicles of 3GPP Rel. 17.”
There was a lot of good progress made on many of these features in the plenary. All the leads reaffirmed the timeline agreed upon in the previous plenary. It was also decided that all the meetings in 2021 will be virtual. The face-to-face meetings will hopefully start in 2022.
3GPP RAN TSG meeting schedule (Source: 3gpp.org)
Owing to the workload and the difficulties of virtual meetings, the possibility of down-scoping some features was also discussed, including some aspects of FeMIMO and IIoT/URLLC. Many delegates agreed that it is better to focus on robustly defining certain parts of these features rather than diluting the full specifications. The impact of this down-scoping on performance is not fully known at this point. The discussion is ongoing, and a final decision will be taken during the next RAN plenary, #93e, in September 2021.
The dawn of 5G Advanced
Releases 18 and beyond were officially christened 5G Advanced in May 2021 by 3GPP’s governing body, the Project Coordination Group (PCG). This is in line with the tradition set by HSPA and LTE, where the evolutionary steps were given “Advanced” suffixes. The 5G Advanced naming was an important and necessary decision to demarcate the steps in the evolution and to rein in over-enthusiastic marketing folks from jumping early to 6G.
The 5G Advanced standardization process was kick-started at the 3GPP virtual workshop held between June 28th and July 2nd, 2021. The workshop attracted a lot of attention, with more than 500 submissions from more than 80 companies and more than 1,200 delegates attending the event.
The submissions were initially divided into three groups. According to TSG RAN chair Dr. Wanshi Chen, the submissions were distributed almost equally among them:
- eMBB (enhanced Mobile Broadband) driven evolution
- Non-eMBB driven evolution
- Cross-functionalities for both eMBB and non-eMBB driven evolution
After the weeklong discussions (over email and conference calls), the plenary converged on 17 topics of interest, which include 13 general topics, three sets of topics specific to RAN Working Groups (WGs) 1-3, and one set for RAN WG4. Most of the topics are substantial enhancements to features introduced in Rel. 16 and Rel. 17, such as MIMO, uplink, mobility, and precise positioning. They also include the evolution of network topology, eXtended Reality (XR), Non-Terrestrial Networks, broadcast/multicast services, Sidelink, RedCap, and others.
The relatively new concepts that caught my attention are Artificial Intelligence (AI)/Machine Learning (ML), Full and Half Duplex operations, and network energy savings. These have the potential to set the stage for entirely new evolution possibilities, and even 6G.
Wireless networks are extremely complex, highly dynamic, and vastly heterogeneous. There can hardly be a better approach than AI/ML for solving the hard wireless challenges; a cognitive RAN, for example, could herald a new era in networking.
Full-duplex IABs with interference cancellation break the decades-old practice of separating uplink and downlink in either the frequency or the time domain. Applying similar techniques to the entire system has the potential to bring the next level of performance to wireless networks.
Reducing energy consumption has emerged as one of the existential challenges of our times because of its impact on climate change. With 5G transforming almost every industry, reducing energy use is indeed a worthy effort. The mobile industry, with a power-efficient approach embedded in its DNA, has a lot to teach the larger tech industry in that regard.
In terms of the topics of discussion, Dr. Wanshi said that he cannot emphasize enough that they are not yet “Work Items” or “Study Items.” He further added that the list is a great starting point, but much discussion is needed to rationalize and prioritize it, starting from the next plenary, scheduled for Sep 13th, 2021.
For the full list of Rel. 18/5G Advanced topics, please check this 3GPP post.
In closing
The events in the last few weeks have surely started to define and shape the future evolution of 5G. With Rel. 16 commercialization starting soon, Rel. 17 standardization nearing completion, and Rel. 18 activities getting off the ground, there will be a lot of exciting developments to look forward to in the near future. So, stay tuned.
Demystifying Cellular Patents and Licensing
Editor’s note: This is the first in a series of articles that explores the sometimes obtuse process of standardizing, patenting and licensing cellular technologies.
Answers to the questions you always wondered but were afraid to ask
Patents spark joy in the eyes of innovators! They not only recognize innovators’ hard work but also provide financial incentives to keep inventing and continue making the world a better place. Unfortunately, patent licensing, often referred to as Intellectual Property Rights (IPR) licensing, has recently gotten a bad rap. The whole IPR regime seems mystical, veiled under a shroud of confusion, misinformation, and, of course, controversy. But tearing that shroud reveals the fascinating metamorphosis of abstract concepts into technologies that transform people’s lives. This process, in turn, creates significant value for the inventors.
I have been exposed to cellular IPR throughout my career, and I thought I understood it well. But my research into the various aspects of the IPR journey, including patent creation, evaluation, and licensing, was a real eye-opener, even for me. In a series of articles, I will take you through that same amazing journey and dispel the myths, the misunderstandings, and the misinterpretations. I will use the standardization of 4G, which has run its full course, and of 5G, which is ongoing, as the vehicles for our journey. So, get on board, buckle up, and enjoy the ride!
Organizations that build cellular standards
It all starts at the International Telecommunication Union (ITU), a specialized agency of the United Nations. For any new generation of standard (aka “G”), the ITU comes up with a set of minimum performance requirements. Any technology that meets those requirements can be given that specific “G” moniker. For 4G, these requirements were called IMT-Advanced, and for 5G, they are called IMT-2020. In the early days of 4G, two technologies earned the moniker. One, developed by the IEEE, was WiMAX, which no longer exists. The other was developed by the 3rd Generation Partnership Project (3GPP), the most important and visible global cellular specifications organization.
3GPP, as the name suggests, was formed in the 3G days and has been carrying the mantle ever since. It is a combination of seven telecommunications Standard Development Organizations (SDOs) representing the telecom ecosystems of different geographical regions. For example, the Alliance for Telecommunications Industry Solutions (ATIS) represents the USA, the European Telecommunications Standards Institute (ETSI) represents Europe, and so on. In essence, 3GPP is a true representation of the entire global cellular ecosystem.
3GPP develops specifications that are then affirmed as standards by the SDOs in their respective regions. 3GPP’s specifications are published as a series of Releases. For example, Release 10 (Rel. 10) had the specifications that met the ITU requirements for 4G (IMT-Advanced). 3GPP sometimes also gives marketing names to sets of these releases. For example, Rel. 8-9 were named Long Term Evolution (LTE), Rel. 10-12 were named LTE Advanced, and so on. Rel. 15 includes the specifications needed to meet the 5G requirements.
To summarize, ITU stipulates the requirements for any “G,” 3GPP develops the specifications that meet those requirements, and the SDOs affirm those specifications as standards in their respective regions.
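For quick reference, here is that mapping in compact form, a minimal sketch that only restates the release and naming facts above; the variable names are my own.

```python
# Compact restatement of the mapping described above (illustrative only).
itu_requirements = {"4G": "IMT-Advanced", "5G": "IMT-2020"}

# 3GPP marketing names for release ranges, per the text above.
release_marketing_names = {
    (8, 9): "LTE",
    (10, 12): "LTE Advanced",  # Rel. 10 met the IMT-Advanced (4G) bar
}

first_5g_release = 15          # Rel. 15 meets the IMT-2020 requirements
```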
How the standards building process works
With so many organizations and representatives involved, standards development is a long, arduous, and systematic process. 3GPP has several specification working groups focused on different parts of the cellular system and their interworking, including the radio network, the core network, devices, and others. The members of these groups are representatives of the different SDOs.
Now, coming to the actual process: the ITU requirements act as goals for 3GPP. The effort starts with members bringing their proposals, i.e., their innovations, for achieving the set goals. For 4G, for example, one of the proposals covered techniques for using OFDMA for high-performance mobile broadband. These proposals are presented in each of the relevant groups, and there are usually multiple proposals for any given problem. All of them are discussed, closely scrutinized, and hotly debated. Ultimately, winning ideas emerge through a consensus process. One member of the group is then nominated to be the editor, and he or she distills the winning ideas into a working document. That document is continuously edited and refined in a series of meetings and, when stable, is published as the first draft of the specification. Publishing the first draft is a major milestone for any release; companies usually start designing their commercial products based on it.
The refinement process continues for a long time even after the first draft; it is akin to how the software bug-fixing and update process works. Members continuously submit contributions, aka bug fixes, to refine the draft. Typically, these contributions are substantially higher in volume than the initial proposals. This is because the latter are radically new concepts or innovations, whereas the former can be trivial, such as editorial corrections. Once all the bug-fixing is done, the final specification is released.
As is evident, for any new innovation to be accepted into the standard, it has to go through rigorous vetting and withstand intense scrutiny by peers and competitors. Inclusion is therefore an explicit recognition by the industry that the technology in question is a superior solution to the given problem.
3GPP contributions and record-keeping
3GPP is a highly bureaucratic organization with a robust and well-established administrative and record-keeping system. But for historical reasons, the system is not equally rigorous throughout the process. Record-keeping is nominal until the creation of the first draft: the proposals, ideas, and contributions presented during that time are just tagged as “considered” or “treated,” without any specific recognition. After the first draft, however, record-keeping becomes very structured and rigorous. The bug-fixing contributions that are adopted into the specification are tagged with more official-sounding names such as “approved,” no matter whether they are trivial or significant. These uneven record-keeping and naming practices have given rise to some simplistic, amateurish, and deeply flawed IPR evaluation methods. More on this in later articles.
Nonetheless, 3GPP specification development is a consensus-based, democratic process by design. It necessitates collaboration among members who often have opposing interests. This approach has indeed made 3GPP a great success and enabled the cellular industry to excel and thrive.
With a basic understanding of the organizations and processes in place, we are now well equipped for the next part of our IPR journey: understanding how developing standards is a system-design endeavor solving end-to-end problems, not just a collection of disparate technologies, as we are often led to believe. That’s exactly what the next blog in the series will explore. Be on the lookout!
In my previous article in the series, I described the organizations and the process behind creating cellular standards. I explained how it is an almost magical process, in which scores of industry players, many of them staunch competitors, come together in a consensus-based approach to approve new standards. In this article, I will delve into the specifics of how patents, often referred to as Intellectual Property Rights (IPR), are created, valued, licensed, and administered.
Cellular patents are created during the standardization process
The cellular standardization process is primarily a quest to find the best solutions to system-level problems. The winning innovations borne out of that process create valuable patents. You can be sure that almost all the ideas presented as candidates for standardization hit the patent offices in various countries before coming to 3GPP. The value of those innovations, and thereby the patents, increases dramatically when they are accepted and incorporated into standards. Inclusion in the standard is also a stamp of approval that the innovation is the best of the crop, having won over competing ideas, as I explained in my previous article.
Another important aspect, especially relevant to cellular patents, is that the innovations presented to standards are solutions to end-to-end system problems. Those ideas are not specific to just the device or the network but are comprehensive solutions that touch many parts of the system. So, it is often very hard to delineate the applicability of an idea to only one part or section of the system. For example, the MIMO (Multiple Input Multiple Output) technique requires a complete handshake between the device and the network to work. Additionally, many patents touch multiple subsystems within the device or the network, which further complicates any effort to isolate their relevance to specific parts. Consider how power management and optimization in a smartphone works, making the AP (application processor), modem, and other subsystems wake up or go to sleep in sync. That innovation might touch all those subsystems in the phone.
All patents not created equal
Thousands of patents go into building cellular wireless systems, be it devices, radio infrastructure, or core networks. At a very basic level, these patents can be divided into two categories: Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs or NEPs). SEPs are those that are absolutely necessary to build a standard-compliant product and cannot be circumvented; hence, they are highly valued. Non-SEPs, on the other hand, are relevant to standards but may not be necessary for the basic functioning of standard-compliant products and can be designed around. For example, for 4G LTE devices, patents that define using OFDMA for cellular connectivity are SEPs, whereas patents that improve the battery life of the devices could be considered non-SEPs.
3GPP and the Standard Development Organizations (SDOs) strongly encourage early disclosure of IPR that members consider essential, or potentially essential, for standards. Further, they require SEPs to be licensed on fair, reasonable, and non-discriminatory (FRAND) terms. There are no such licensing requirements for non-SEPs.
While 3GPP or SDOs make FRAND compliance for SEPs mandatory, they don’t enforce or regulate any specific monetary value for them. They consider the licensing to be a commercial transaction outside their purview, and hence let the market forces decide their worth.
How to value patents?
According to some estimates, there were 250,000 active patents covering smartphones in 2012. As I write this article in 2019, I am sure that number has grown even larger. The issue then becomes how to determine the value of these patents, and how best to license and administer them.
With the sheer number of patents involved, it is impossible to manage licensing on an individual-patent basis. It is even more impractical to license them at the subsystem or component level since, as mentioned before, it is hard to delineate their applicability to a specific part. So, it is indeed a hard problem to solve. Since cellular standards have been around for a few decades now, it is worthwhile to examine how licensing has historically been handled.
In the 2G days, when the cellular markets started expanding, there were a handful of well-established large players such as Ericsson, Nokia, Motorola, Nortel, Alcatel, and Siemens. These players not only developed the technologies but also had their own devices and network infrastructure offerings. Since it was a small group of players, and all of them needed each other's technology to make their products, they resorted to a simple method of bartering, also known as cross-licensing. Some industry observers and participants accused them of artificially inflating the value of their patents to make it very hard for any new players to enter the market.
With the advent of 3G, Qualcomm appeared on the scene with a unique horizontal business model. Qualcomm's core business was to invent advanced mobile technology, make it accessible to the ecosystem through licensing, and enable everyone to build compelling products based on its technology (Qualcomm initially invested in infrastructure, mobile device, and service provider businesses, which it eventually divested). Qualcomm's licensing made the initial investment more reasonable and the technologies accessible for the OEMs, which significantly reduced the entry barrier. The rise of Apple, Samsung, and LG, as well as scores of Chinese OEMs, can be attributed to it.
Taking the market-forces approach, Qualcomm decided to license its full portfolio, comprising tens of thousands of patents, for a percentage of the wholesale selling price of the phone. It put a cap on the fee base when phone prices started climbing. Qualcomm licenses its IPR to the phone OEMs because that is where the full value of its innovations is realized. Apparently, this was also the approach all the patent holders of that era practiced, including Ericsson and Nokia, as some of these companies attested during the FTC vs. Qualcomm trial. This practice has continued until now and has withstood challenges all over the world. Of course, there have been challenges and changes to the actual fees charged, but the approach has remained largely intact.
The actual licensing rates are usually confidential between the licensor and licensees, though some details emerged during Qualcomm's court cases around the world. What we know, for example, is that Qualcomm charges 3.25% of the device wholesale price for its SEPs, and 5% for the full portfolio including both SEPs and non-SEPs, with the device price base capped at a maximum of $400.
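To make the arithmetic concrete, here is a back-of-the-envelope sketch based on the publicly reported figures above; the device prices in the example are hypothetical:

```python
def royalty(wholesale_price: float, rate: float, cap: float = 400.0) -> float:
    """Per-device royalty: the rate applied to the capped price base."""
    return min(wholesale_price, cap) * rate

# A hypothetical $999 flagship and a $150 budget phone, using the reported rates.
for price in (999.0, 150.0):
    print(f"${price:.0f} phone: SEP-only ${royalty(price, 0.0325):.2f}, "
          f"full portfolio ${royalty(price, 0.05):.2f}")
# $999 phone: SEP-only $13.00, full portfolio $20.00
# $150 phone: SEP-only $4.88, full portfolio $7.50
```

Note how the cap works: a $999 device and a $400 device owe the same royalty, so the fee does not keep scaling with premium phone prices.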
Others in the industry, such as Apple, are attempting to change this decades-old approach and proposing a new one, sometimes referred to as Smallest Saleable Patent Practicing Unit (SSPPU) pricing. Their argument is that most of the value of Qualcomm's SEPs is in the modem, and hence the licensing fee should be based on the price of the modem, not the phone. Obviously, Qualcomm disagrees, and the two are fighting it out in courtrooms around the world.
Being an engineer myself, I know that when designing a solution, engineers don't constrain it to a specific unit, subsystem, or part. Instead, they come up with the best solution that effectively solves the problem. Often, by virtue of that approach, the solution involves the full system, as the two examples earlier illustrate. So, in my view, limiting the value to a specific unit is a simplistic, impractical approach that grossly undervalues the monetization potential of innovations. Hence, I believe the current approach should continue, and market forces should decide the actual price.
The court battles between Apple and Qualcomm over licensing are raging now, and we will see what the courts decide. In the next article, I will look at some of these recent battles between the two behemoths, their basis, how they affected the IPR landscape, and more. Please be on the lookout.
The statement "All patents are not created equal" seems like a cliché, but it is absolutely true! The differences between patents are multi-dimensional and much more nuanced than what meets the eye. I touched briefly upon this in my previous article. There is no denying that, going forward, patents will play an increasingly bigger role in cellular, pitting not only companies but also countries against one another for superiority and leadership in technology. Hence, it is imperative that we understand how patents are differentiated, and how their value changes based on their importance.
Let me start with a simple illustration. Consider today's cars, which incorporate lots of different technologies and hence patents. When you compare the patents for the car's engine to, say, the patents for its doors, the difference in relative importance is pretty clear. In a hypothetical standard for building a car, the patents for both the engine and the door would probably be listed as essential, i.e., SEPs (Standard Essential Patents). However, the engine patent is at the core of the vehicle's basic functionality, while the door patent, although essential, is clearly less significant. Another way to look at it: without the idea of building the engine, there is not even a need for the idea of doors. The presence of one is the reason for the other's existence.

The same concepts apply to cellular technology and devices. Some patents are invariably more important than others. For example, in the 5G standard, the patents that cover Scalable OFDMA are fundamental. They are the core of 5G's famed flexibility to support multi-gigabit speeds, very low latency, and extremely high reliability. You can't compare the value of such a patent to another that might increase the speed by a few kilobits in a rare use case. Both patents, while being SEPs, are far apart in value and importance.
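To make that flexibility concrete: in 5G NR, the subcarrier spacing scales as 15 kHz × 2^μ, and the slot duration shrinks proportionally, which is what lets a single air interface serve both wide-area coverage and ultra-low-latency use cases. A minimal sketch of that scaling:

```python
# 5G NR scalable numerology: subcarrier spacing doubles with each step of mu,
# and the 14-symbol slot duration halves, trading latency against robustness.
for mu in range(5):
    scs_khz = 15 * 2**mu   # subcarrier spacing in kHz
    slot_ms = 1.0 / 2**mu  # slot duration in milliseconds
    print(f"mu={mu}: {scs_khz:>3} kHz subcarrier spacing, {slot_ms:.4f} ms slot")
```

Running it shows the range, from 15 kHz subcarriers with 1 ms slots (good for coverage at low bands) up to 240 kHz subcarriers with 0.0625 ms slots (suited to mmWave and low-latency traffic).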
On a side note, if you would like to know more about SEPs, check out my earlier article here.
That brings us to another classic challenge of patent evaluation: patent counting. Counting is the most simplistic and easiest-to-understand measure; whoever has the most patents is the leader! Well, just like most simple approaches, counting has a big flaw: it is highly unreliable. Let me again explain with an illustration. Consider one person holding 52 pennies and another holding eight quarters. By simple counting, the first person seems to be the winner, yet he holds only $0.52 against the second person's $2.00, which couldn't be further from the truth. Applying the same concept to cellular patents, it would be foolish to call somebody a technology leader purely based on the number of patents they own, unless you know what those patents are.
The 5G standard has thousands of SEPs. If you give patents for Scalable OFDMA and other similarly fundamental, core SEPs the same weight as minor SEPs that define peripheral protocols and insignificant features, you grossly undervalue the building blocks of the technology. So, simply counting patents without understanding their importance is a deeply flawed way to assess technology leadership. Also, the process of designating a patent as an SEP is nuanced, which makes the system vulnerable to rigging and manipulation, resulting in artificially inflated SEP counts. I will cover this in later articles. This potential for inflating the numbers further exacerbates the problem of patent counting.
In conclusion, it is amply clear that all patents are not created equal, and simplistic patent counting is not a good measure of anybody's technology prowess. One has to go deeper and understand the patents' importance to gauge their value. In my next articles, I will discuss the key patents that define 5G and explore alternative evaluation methods that are possibly more robust and logical. In the meantime, beware, and don't be fooled by entities claiming leadership based on the sheer volume of their patent portfolios.
Demystifying cellular patents and licensing – Part 4
3GPP is a mystic organization that many seem to know, but few truly understand. The key players of this efficient and well-regarded organization often work without fanfare or public recognition. But no more! In this article series, I go behind the doors, explore the organization, meet the hard-working people, and lay bare the details of its inner workings.
Side note, if you would like to understand the cellular standardization process, please read my previous articles in the series here, here, and here.
“3GPP is a membership-driven organization. Any company interested in telecommunications can join, through one of its SDOs (Standard Development Organizations)” said Mr. Balazs Bertenyi of Nokia Corporation, the current chair of TSG-RAN and a 3GPP veteran. “One of the important aspects of 3GPP is that a large portion of its working-level office bearers are members themselves and are elected by the other fellow members.”
I became a proud member of 3GPP through the American SDO, ATIS, earlier this year.
3GPP organization structure
3GPP consists of three layers, as shown in the schematic: Project Coordination Group (PCG) at the top, which is more ceremonial; three Technical Specifications Groups (TSG) in the middle, each responsible for a specific part of the network; multiple Working Groups (WG) at the bottom, where the actual standards development occurs. There are many ad-hoc groups formed within each of these as well. All these groups meet regularly, as shown in the example meeting cycle.
Inner workings of WGs and the unsung heroes
Let's start with the WGs, specifically the ones under TSG-RAN. Being an RF engineer, I hold these closest to my heart. The discussion, however, applies equally to the other TSGs and WGs. There are six WGs within TSG-RAN, each with one chair and two vice-chairs.
The best way to understand a group's workings is to analyze how a fundamental 5G feature such as Scalable OFDMA would be standardized. There might be a few proposals from different member companies. The WGs have to evaluate these proposals in detail and run simulations for various scenarios to understand the performance, the pros and cons, the competitive benefits, and so on. They have to decide on the best solution and develop standards to implement it across the system. Evidently, the WG chair must facilitate the discussion in an orderly, fair, and impartial way and let the group reach a consensus decision. As you can imagine, this task is a combination of science and art: bringing people together through collaboration and personal relationships, and making sure they arrive at meaningful conclusions, all while under tremendous time pressure.
In such a situation, WG members expect the chair to be fair, balanced, and trustworthy. Many times, the companies the members represent are bitter competitors with diametrically opposite interests, each trying to push its views and assertions for adoption. "It is quite a task bringing these parties together for a consensus-based agreement, in the true spirit of 3GPP," says Mr. Bertenyi. "It requires deep technical knowledge, a lot of patience, empathy, leadership, and ability to find common ground to be a successful WG chair." That is why 3GPP's process of electing chairpersons through a ballot, instead of by nomination, makes perfect sense.
The members of the WG vote and elect somebody they trust and respect to lead the group. Before the new officer takes over, his or her employer has to formally sign a support letter declaring that the officer will get the company's full support to successfully undertake the duties of a neutral chair. "From then on the elected officer stops being a delegate for his company, and becomes a neutral facilitator working in the interest of 3GPP and the industry," added Mr. Bertenyi. "Being a chair, I have presided over many decisions that were not supported by my company but were the best way forward in a given dispute. I have seen it often happen in WGs as well. For example, I saw Wanshi Chen, chair of RAN-1, do the same many times."
The WG members are primarily inventors trying to develop solutions for difficult technological challenges. The WG chairs are at the forefront of this effort, and by virtue of that, it is not uncommon for them to be prolific inventors themselves and be party to a large number of patents. That, in fact, shows they are worthy of the leadership role they are given.
“It wouldn’t be untrue to say that the hard-working WG chairs are truly unsung heroes of 3GPP, and they deserve much respect and accolades,” says Mr. Bertenyi. “I am extremely proud to be working with all the chairs of our RAN WGs—Wanshi Chen of Qualcomm heading RAN-1, Richard Burbidge of Intel heading RAN-2, Gino Masini of Ericsson heading RAN-3, Xutao Zhou of Samsung heading RAN-4, Jacob John of Motorola heading RAN-5, Jurgen Hoffman of Nokia heading RAN-6.”
Responsibilities of TSG and PCG
While the WGs are the workhorses, the TSGs set the direction and manage resource allocation and the on-time delivery of specifications.
There are three TSGs: one each for the radio and core networks, and a third for systems work. Each TSG has a chair and three vice-chairs, all elected by the members. They provide direction based on market conditions and needs; for example, the decision to accelerate the 5G timeline in 2016 was taken by TSG-RAN. The chairs are usually accomplished experts and excellent managers. I witnessed how effectively Mr. Bertenyi conducted the recent RAN#84 plenary while being fair, cheerful, and decisive at the same time.
The PCG is, on record, the highest decision-making body, dealing mostly with non-technical project-management issues. It is chaired by the partner SDOs on a rotational basis. It provides oversight, formally adopts the TSG work items, and ratifies election results and resource commitments.
Elections and leadership tenure
As mentioned, all the working-level 3GPP office bearers are duly elected by fellow 3GPP members in a completely transparent ballot process. The standard tenure of each office bearer is two years, but they are often reelected for a second term in recognition of effective leadership. Many members start in a vice-chair position and move up to the chair level, again based on their performance.
In closing
3GPP is a truly democratic, consensus-based organization. Its structure and culture, which encourage collaboration even among bitter business rivals, have made it a premier standards development organization. The well-managed cellular technology roadmap and the success of the mobile industry at large are a testament to 3GPP's systematic and broad-based approach.
Quick note: I will be attending the next RAN-1 WG meeting, scheduled for Aug 26-30, 2019 in Prague, Czech Republic. So, stay tuned for the 3GPP Rel. 16 and Rel. 17 progress report.
While the 5G race rages on, so does the race to be perceived as the technology leader in 5G. This race transcends companies, industries, regions, and even countries. No major country, be it a new power such as China or existing leaders such as the US and Europe, wants to be seen as a laggard. In this global contest, 5G patents and IPR (Intellectual Property Rights) are the most visible battleground. With so many competing entities and interests, it is indeed hard to separate substance from noise. Yet one profound truth prevails amid all the chaos: quality of inventions always beats quantity.
The fierce competition for leadership has made companies invest substantially in innovating new technology as well as in playing a key role in standards development. Since the leadership battles are also fought in the public domain, claims of leadership have been reduced to simplistic number counting: how many patents one holds or, much worse, how many contributions one has submitted to the standards body. In the past, many reports have dissected these numbers in various ways and claimed one company or another to be the leader.
The awakening – Quality matters
Fortunately, there now seems to be some realization of the perils of this simplistic approach to a complex issue. There have been recent reports on why quality, not quantity, matters. For example, last month, the well-known Japanese media house Nikkei published this story based on the analysis of Patent Result, a Tokyo-based research company. Even the chair of the 3GPP RAN group, Mr. Balazs Bertenyi, published a blog highlighting how technology leadership goes far beyond simple numbers.
Ills of contributions counting
One might ask, what's wrong with number counting; after all, isn't it simple and easy to understand? Well, simple is not always the best choice for complex issues. Let me illustrate with a realistic example. One can easily create the illusion of technology leadership by generating a large number of standards contributions. The standards body 3GPP, being a member-run organization, has an open policy for contributions. As I explained in the first article of this "Demystifying cellular patents" series, there is ample opportunity to goose up the number of contributions during the "bug-fix" stage, when the standard is being finalized. Theoretically, any 3GPP member can make an unlimited number of contributions, as long as nobody opposes them. Since 3GPP is a consensus-driven organization, its members are hesitant to oppose fellow members' contributions unless they are harmful. It is an open question whether anybody has exploited this vulnerability; if one looks closely, they might find instances of it. Nonetheless, the possibility exists, and hence the sheer number of contributions can't be an indicator of anything important, let alone technology leadership.
<<Side note: You can read all the articles in the series to understand the 3GPP standardization process here.>>
In his blog, Mr. Bertenyi says, “…In reality, flooding 3GPP standards meetings with contributions is extremely counterproductive...” It unnecessarily increases the workload on the standards working groups and extends the timelines, while reducing the focus on the contributions that really matter.
So what matters? Again, Mr. Bertenyi explains, “…The efficiency and success of the standards process are measured in output, not input. It is much more valuable to provide focused and well-scrutinized quality input, as this maximizes the chances of coming to high-quality technical agreements and results.”
Contrasting quantity with quality
Another flawed approach is measuring technology prowess by counting the number of patents a company holds. Unlike mere contributions, the number of patents does carry some value. However, this number can't be the only, or even a meaningful, measure of leadership. What matters is the specific technology those patents bring to the table, that is, how important they are to the core functioning of the system. The Nikkei article, based on Patent Result's analysis, sheds light on this subject.
Patent Result did a detailed analysis of the patents filed in the U.S. by major technology companies, including Huawei, Intel, Nokia, Qualcomm, and many others. It assessed the quality of the patents according to a set of criteria, including originality, actual technological applications, and versatility. The resulting quality-based ranking was far different from the ranking by number of patents.
Some might ask, isn't the SEP (Standard Essential Patent) designation supposed to separate the essential, i.e., important patents from the unimportant ones? Well, in 3GPP, SEP designation is a self-declaration, which leaves ample scope for manipulation. That process is a major issue in itself, and a story for another day! So, something being an SEP doesn't necessarily mean it is valuable. In my previous article, "All patents are not created equal," I compared and contrasted two SEPs in a car: one for the engine and another for the fancy doors. While both are "essential" to make a car, the importance of the first is orders of magnitude higher than the second. In the same vein, you couldn't call a company with a large number of "car-door" patents a leader over somebody with fewer but more important "car-engine" patents.
So, the bottom line is, when it comes to patents, quality beats quantity any day of the week, every time!
As I discussed in my previous articles, the industry is finally waking up to the fact that, when it comes to patents, quality matters much more than quantity, and realizing that simplistic approaches such as counting standards contributions or the mere number of patents don't give an accurate picture of technology leadership. At the same time, assessing the quality of patents has been a challenge. While the gold standard, in my view, is market-based valuation, new quality-assessment metrics and methods are emerging. These consider aspects such as how fundamental and market-impacting the inventions are, how wide the reach of the patents is, and how many other patents are derived from them, and distill them into a quality score. I will explore many of these methods in this article series, starting with the first one on the list below.
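But first, as a purely hypothetical illustration of how such multi-factor scores are typically built (the factor names and weights below are my own inventions, not any vendor's actual methodology):

```python
# Toy multi-factor patent quality score. The factors and weights are entirely
# hypothetical; real methodologies use proprietary inputs and calibration.
def quality_score(citations: float, family_coverage: float, breadth: float,
                  weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted sum of factors, each pre-normalized to the 0..1 range."""
    return sum(w * f for w, f in zip(weights, (citations, family_coverage, breadth)))

# A heavily cited, widely filed "car-engine" patent vs. a narrow "car-door" one.
print(quality_score(0.9, 0.8, 0.7))  # 0.83
print(quality_score(0.2, 0.3, 0.4))  # 0.27
```

The point is that two portfolios of identical size can land far apart once factors beyond the raw count enter the score.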
<<Side note: You can read the previous articles in the series here. >>
Patent Asset Index™ by LexisNexis® Patent Sight®
Patent Sight is a leading patent analytics and valuation firm based in Germany. Its services are used by many leading institutions around the world, including the European Commission. Patent Sight has developed a unique methodology that considers the importance of a patent in the hierarchy of technologies, its geographical coverage, and other parameters to produce a score called the Patent Asset Index. This index allows industry as well as general audiences not only to understand the comparative value of the patents various companies hold but also to rank the companies in terms of technology leadership.
Here are some of the Patent Sight charts on 4G and 5G patents, presented at a recent webinar hosted by Gene Quinn of IPWatchDog, where William Mansfield of Patent Sight shared them. The first chart shows the number of patents filed by some of the top cellular companies between 2000 and 2018. As is evident, if quantity were the only metric, one could say that companies such as Qualcomm, Huawei, Nokia, LG, and Samsung are far ahead of the others.
Now let’s look at the Patent Asset index chart of the same companies:
Under this assessment, the scene is vastly different. Qualcomm is still in the lead, but there is a drastic change in the rankings and relative standings of the others. Qualcomm is far ahead of its peers, with Samsung a distant second, followed by LG, Nokia, and InterDigital. Surprisingly, Huawei, which was neck and neck with Qualcomm in sheer number of patents, falls much farther behind.
Why do quality vs. quantity comparisons matter?
Unquestionably, patents are born out of important innovations. However, as I have explained in this article, all patents are not created equal. Also, when it comes to cellular patents, there is a widely believed myth that Standard Essential Patents (SEPs), as the name suggests, are extremely important and core to the technology. Because of 3GPP's self-declaration policy, however, this designation is not as reliable as it seems and is highly susceptible to abuse. For example, companies with deep pockets that want to boost their patent profiles might invest large sums in developing non-core patents and declaring them as SEPs. That is why quality indicators such as the Patent Asset Index and similar approaches are important tools for assessing the relative value of patent portfolios. In the next articles, I will discuss other indicators and the specific parameters and methodologies involved in quality determination. So be on the lookout!
As a keen industry observer, I have watched with awe the attention patents, aka IPR (Intellectual Property Rights), have recently received. And that has everything to do with the importance 5G has gained. Most stakeholders now realize that IPR leadership indeed means technology leadership. But what many do not understand is how to determine IPR leadership. A lot of them, especially the gullible media, falsely believe that owning a large number of patents represents leadership, no matter how insignificant those patents are. I have been on a crusade to squash that myth and have written many articles and published a few podcasts to that effect. Gladly, many are realizing this now and speaking out. I came across one such report, titled "5G Technological Leadership," published by the well-known US think tank, the Hudson Institute.
Infrastructure is only one of the many 5G challenges
The report recognizes the confusion the 5G policy discussion in the US is mired in and how misdirected the strategy discussions have been. It rightly points out that the well-publicized issues of 5G infrastructure vendor diversity, as well as the size and speed of 5G deployments, are only small and easy-to-understand parts of the multifaceted 5G ecosystem. The authors of the report, Adam Mossoff and Urška Petrovčič, strongly suggest that it would be wrong for policymakers to focus only on these aspects. I could not agree more.
How to determine technology leadership?
A much more important aspect of 5G is the ownership of the foundational and core technologies that underpin its transformative ability. With 5G being a key element of the future of almost every industry on the planet, whoever owns those core technologies will not only win the 5G race but also wield unassailable influence on the global industry and the larger economy.
As mentioned earlier, technology leadership stems from IPR ownership. This is not lost on companies and countries that aspire to be technology leaders. This is clearly visible in the number of 5G patents filed by various entities. And that brings us to the critical question “Does having a large number of patents bring technology leadership?”
Patent counting is an unreliable method
It is heartening to see that the report decisively says patent counting is an unreliable method for determining 5G leadership and that it would be a mistake to use it as such. Further, the report asserts that the determination boils down to the quality of those patents, not their quantity. The quality of patents here means how fundamental and important they are to the functioning of 5G systems.
Side note: Please check out these two articles (Article 1, Article 2) to understand how to determine the quality of patents.
The misguided focus on patent quantity has made many companies and even countries pursue options on the fringes of what is considered ethical. For example, the report attributes the recent rise in 5G patents filed by Chinese individuals and companies to the government's direct subsidies for filing patents, not necessarily to an increase in innovation. There might be other unscrupulous motives too, such as companies over-declaring Standard Essential Patents to achieve broad coverage or to avoid unknowingly violating disclosure requirements.
As I have discussed in my previous articles and podcasts, the standards-making body 3GPP’s honor-based system has enough loopholes for bad actors to goose up their patent count without adding much value or benefit.
The Hudson Institute report quotes an important point raised by the UK Supreme Court: reliance on patent counting also risks creating "perverse incentives," wherein companies are incentivized to merely increase the number of patents instead of focusing on innovation.
All this boils down to one single fact—when it comes to patents, the quality of patents is much more important than quantity.
In closing
After the initial misguided focus on the quantity of patents as a measure of technology leadership, the realization of the importance of patent quality is slowly sinking in. As awareness of the transformational impact of 5G spreads, awareness of the importance of 5G patent quality is growing as well. The Hudson Institute, being a think tank and an influential public-policy organization, is rightly pointing out the key issues that are either missing from or misdirected in the national technology policy debate. This is especially true for the 5G patent quality discussion. I hope policymakers and the industry take notice and reward companies with high-quality patents while penalizing the manipulators.
If you would like to read more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Samsung Networks
The virtualization of cellular networks has been ongoing for some time. But virtualizing the Radio Access Network (RAN) has always been an enigma and was the final frontier for the trend. The rising star of the 5G infrastructure business, Samsung, jumped onto the virtualized RAN (vRAN) bandwagon with its announcement yesterday. I think this will prove to be another turning point in moving the industry from the decades-old "custom hardware + integrated software" approach toward the modern, efficient, and flexible vRAN architecture.
What is vRAN and why does it matter?
Ever since the dawn of the cellular industry, radio networks have been considered the most complex part of the equation, mainly because of the dynamic nature of wireless links, compounded by the challenges of mobility. The "custom hardware + integrated software" approach proved to be the winning combination for taming that complexity. The resulting operator lock-in, and the huge entry barrier it created for new entrants, made the established infrastructure players wholeheartedly embrace that approach. As cellular technology moved from 2G to 3G, 4G, and now 5G, the complexity of radio networks grew exponentially, keeping the approach intact.
But things are rapidly changing. Thanks to the accelerated growth of computing, it is now possible to break this combination and use commercial off-the-shelf (COTS) hardware with disaggregated software. This new approach is called vRAN.
The advantages of vRAN are obvious. It brings flexibility and drastically reduces entry barriers for new players, which expands the ecosystem. Operators can choose the best hardware and software from different players and deploy the best-performing systems. All this choice increases competition and substantially reduces costs, while increasing the pace of innovation.
Samsung’s 5G vRAN offerings
Samsung has announced full, end-to-end vRAN offerings for 5G (and 4G). These include the virtual Central Unit (vCU), the virtual Digital Unit (vDU), and its existing Radio Units (RUs). According to the press release, the vCU was commercialized back in April 2019, and the full system was demonstrated to customers in April 2020. Samsung's vCUs and vDUs run on Intel x86-based COTS servers.
Let me explain the role of these units without going into too much detail. vCUs are responsible for non-real-time functions, such as radio resource management, ciphering, and retransmission. vDUs contain the real-time functions related to the actual delivery of data to the device through the RUs. RUs convert digital signals into wireless waves. A single vCU can typically manage multiple vDUs, and a single vDU can connect to multiple RUs.
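To make the split concrete, here is a minimal sketch of that topology (the unit names and fan-out counts are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class RU:    # Radio Unit: converts digital signals to wireless waves
    name: str

@dataclass
class VDU:   # virtual Digital Unit: real-time data-delivery functions
    name: str
    rus: list = field(default_factory=list)

@dataclass
class VCU:   # virtual Central Unit: non-real-time functions
    name: str  # (radio resource management, ciphering, retransmission)
    vdus: list = field(default_factory=list)

# One vCU managing two vDUs, each fronting a handful of RUs.
vcu = VCU("vcu-1", [
    VDU("vdu-1", [RU("ru-1a"), RU("ru-1b")]),
    VDU("vdu-2", [RU("ru-2a"), RU("ru-2b"), RU("ru-2c")]),
])
print(sum(len(v.rus) for v in vcu.vdus), "RUs under", vcu.name)  # 5 RUs under vcu-1
```

The one-to-many fan-out at each level is what lets the non-real-time software be centralized and pooled, while the latency-critical functions stay close to the radios.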
“Our vRAN solutions can deliver the same reliability and performance as that of today’s legacy systems,” said Alok Shah, Vice President, Networks Strategy, BD, & Marketing at Samsung Electronics, “while bringing flexibility and cost benefits of virtualization to our customers.”
Another important aspect of the announcement is support for Dynamic Spectrum Sharing (DSS), which allows 5G to utilize the 4G spectrum. This is crucial, especially for operators with limited low- or mid-band 5G spectrum. Shah mentioned that they have put a lot of emphasis on ensuring smooth DSS interworking between the new 5G vRAN and legacy 4G systems.
A significant step for the industry
Samsung made everybody's head turn when it won a significant share of the 5G market in the USA, beating long-term favorites such as Ericsson and Nokia. This came on the heels of its 5G wins in South Korea and strong 4G performance in a hyper-competitive and large market like India. Additionally, Samsung's strong financial position gives it a distinct advantage over its traditional rivals.
So, when such a strong player adopts a new trend, the industry takes notice. Until now, the vRAN vendor ecosystem consisted primarily of smaller disruptive players, such as Mavenir, Altiostar, and Parallel Wireless. Major cloud and silicon players such as Facebook, Intel, Google, and Qualcomm have largely been observing the developments from the outside. Nokia, another major legacy vendor, recently announced its 5G vRAN offerings as well, with general availability slated for 2021. Samsung's announcement makes vRAN much more real, and its future that much brighter. Also, Samsung, being a challenger, has much more to gain from vRAN than legacy competitors such as Ericsson, Nokia, and Huawei.
vRAN also opens the door for Open RAN, in which vCUs, vDUs, and RUs from different vendors can work with each other, providing even more flexibility for operators. Although Samsung didn't specifically mention this in the PR, Shah confirmed that the use of standardized open interfaces makes their vRAN system inherently open. He also pointed to their growing portfolio of Open RAN compliant solutions, developed through multiple collaborations with US operators. Open RAN and vRAN have gained even more attention and importance because of the geopolitical issues surrounding the US ban on Huawei and the associated national security concerns.
Side note: If you would like to learn more about Open RAN architecture and its relevance to addressing the U.S. government’s concerns with Huawei, listen to this Tantra’s Mantra podcast episode.
The generational shift, which requires a major overhaul of network infrastructure, is a perfect opportunity for operators to pursue new technologies and a new approach. However, the move to vRAN will be gradual. Greenfield 5G operators such as Dish Network in the USA might start off with vRAN; some US operators building out 5G on new mid-band spectrum might use vRAN for that as well, as might enterprises building private networks. The migration of larger legacy networks will happen gradually, over a period of time.
In closing
After a long period of skepticism, it seems the market forces are aligning for vRAN. Because of its enormous benefits in flexibility and cost-efficiency, there is a lot of interest in it, along with strong support from large industry players. In such a situation, Samsung's announcement has the potential to be a turning point in moving the industry toward vRAN. In my view, Samsung, with its end-to-end virtualized portfolio and solid financial position, is strongly placed to exploit that move. For a keen industry observer like me, it will be fascinating to watch how the vRAN saga unfolds.
For more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
While the media is abuzz with news of Samsung's foldable smartphones, being a network engineer at heart, I am more excited about Verizon and Samsung's recent announcement of the successful completion of 5G virtual RAN (vRAN) trials using the C Band spectrum. Verizon's adoption of vRAN for its network build and Samsung's support for advanced features such as Massive MIMO (mMIMO) in its vRAN portfolio bode very well for rapid 5G expansion in the USA. I recently spoke to Bill Stone, VP of technology development and planning at Verizon, and Magnus Ojert, VP and GM at Samsung's Network Business, about the announcement as well as the progress of C Band 5G deployments.
The joint trial
The trials were conducted over Verizon's live networks in Texas, Connecticut, and Massachusetts. Since the spectrum is still being cleared for use, Verizon had to get special clearance from the FCC. The trials used Samsung's containerized, cloud-native, fully virtualized RAN software and hardware solutions supporting a 64T64R mMIMO configuration. This configuration is extremely important to Verizon for reasons I will explain later in the article. The trial is yet another critical milestone in Verizon's race to build its C Band 5G network.
Verizon’s race to deploy C Band 5G network
After spending $53B on the C Band auction, Verizon is in a race against itself and its competition to put the new spectrum to use. It needs a robust network in place before strong 5G demand outpaces the capacity of its current network. As many of you might know, Verizon currently uses the Dynamic Spectrum Sharing (DSS) technique to opportunistically use its 4G spectrum for 5G, along with focused mmWave deployments. Verizon also needs an expansive coverage footprint to compete effectively against T-Mobile, which is capitalizing on the spectrum trove it got through the Sprint acquisition.
Verizon is busy as a beehive: signing deals with tower companies, prepping sites for deployment, working closely with its vendors, running many trials, and so on. Owning a significant portion of the fiber backhaul to sites is helping Verizon expedite the buildout. Stone confirmed that vRAN will be the mainstay of the C Band deployments and that Verizon is firmly on the path to transitioning to virtual and Open RAN across its entire network. This will give Verizon more flexibility, agility, and cost-efficiency in enabling new services in the future, especially during the later phases of 5G, when services expand beyond the smartphone and mobile broadband market. He added that trials like this one are a great step in that direction. Although the vRAN equipment supports open interfaces, the initial deployments will be single-vendor only. I think this strategy of single-vendor vRAN followed by multi-vendor Open RAN is a smart one that many operators will adopt.
The C Band development the whole industry is watching most keenly is how Verizon's plan to reuse its AWS band (1.7 GHz) site grid for C Band (3.5 GHz) will pan out. According to Stone, one way Verizon is looking to compensate for C Band's smaller coverage footprint is by using the 64T64R antenna configuration. He expects this to improve the uplink coverage, which is the limiting factor. He added that the initial results from the trial are very encouraging.
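For a feel of why the grid reuse is challenging, consider free-space path loss, which grows with frequency: moving from 1.7 GHz to 3.5 GHz costs roughly 6 dB at the same site-to-user distance, a gap that the beamforming gain of a 64T64R array can help close. A back-of-the-envelope sketch (free-space only; real deployments add clutter, penetration loss, and antenna gains):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard Friis-derived formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Same 1 km cell edge on the existing AWS site grid, two carrier frequencies.
delta = fspl_db(1.0, 3500) - fspl_db(1.0, 1700)
print(f"Extra path loss at C Band vs. AWS: {delta:.1f} dB")  # ~6.3 dB
```

Since the frequency-dependent term is 20·log10(f), the gap is the same at any distance, which is why antenna gain, rather than simply densifying sites, is Verizon's first lever.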
The coverage benefit will necessitate the rather expensive 64T64R configuration across most of Verizon's outdoor macro sites. Verizon is also looking at small cells, indoor solutions, and other options to provide comprehensive coverage. He aptly said "All the above" is his mantra when it comes to using these options to expand coverage. Considering that a robust network and coverage are Verizon's key differentiators, there is not much margin for error in its C Band deployments.
Samsung leading with its mMIMO and vRAN portfolio
After scoring a surprise win by getting a substantial share of Verizon's 5G contract, Samsung has been consolidating its position by continuously expanding its RAN portfolio. Ojert emphasized that they are working very closely with Verizon for a speedy and successful C Band rollout.
Side note: To know more about Samsung’s network business, please listen to this Tantra’s Mantra podcast interview of Alok Shah, VP Samsung Networks.
Being a disruptor, Samsung has been an early adopter of vRAN and Open RAN architectures. It understands that the key success factor for these new architectures is delivering performance that meets or exceeds that of legacy networks. 64T64R has almost become a litmus test of whether the new approaches can evolve to support complex features such as mMIMO.
There have already been commercial deployments of legacy networks supporting 64T64R. Hence, it has become the de facto bar for any new large-scale vRAN deployment, and the telecom industry is hard at work to make it a reality. Verizon's plan to use it to close the C Band coverage gap makes it almost mandatory for all its vendors.
Running these trials on live networks, and at multiple locations at that, makes a great proof point for the readiness of Samsung's gear for large-scale deployments. Ojert emphasized that, being a major supplier for cutting-edge 5G networks in Korea that use similar spectrum, Samsung understands the characteristics of the band well. He added that Samsung will utilize its entire portfolio of solutions, including small cells, indoor solutions, and others, in helping Verizon build a robust network.
C Band commercial deployments and service
The FCC is expected to clear the first 60 MHz of the up to 200 MHz of C Band spectrum later this year. Verizon projects C Band 5G service in an initial 46 markets in the first quarter of 2022, covering up to 100 million people. It will expand that as additional spectrum is cleared, to reach an estimated 175 million people by 2024.
The initial deployments will be based on the Rel. 15 version of 5G, with the ability to upgrade via firmware to Rel. 16 and beyond, for services such as URLLC as well as the Standalone configuration.
The C Band spectrum, along with its mmWave holdings, is indeed a potent combination for Verizon to substantially expand 5G services, compete effectively, and prepare for 5G's strong evolution. It will be interesting to watch how the rollout changes the market landscape.
Meanwhile, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
As an eventful 2021, which witnessed 5G becoming mainstream despite all the challenges, comes to a close, the analyst part of my mind is reviewing the major disruptions 5G has brought to the cellular market. The rise of Samsung, mostly known for its flagship Galaxy phones and shiny consumer electronics, as a global 5G infrastructure leader dawned on me as a key one.
As a keen industry observer, I have been tracking Samsung Networks for a long time. A little more digging and research revealed how systematically it charted a path from its solid home base in Korea to a disruptive debut in the USA, followed by expanding influence in Europe and other advanced markets, all the while building a comprehensive 5G technology and product portfolio.
In this article, I will trace its growth over the last two years and explore how it is well positioned to lead in the upcoming 5G expansion.
Strong presence at home and early success in India built the Samsung foundation
Korean operators such as Korea Telecom and SK Telecom have always been at the bleeding edge of cellular technology, even from the 3G days. As their key supplier, Samsung's technology prowess has been a significant enabler of these operators' leadership, especially in 4G and 5G. That has also helped Samsung stay ahead of the curve.
Samsung’s first major international debut was in India in 2013, supporting Reliance Jio, a new cellular player that turned the Indian cellular and broadband market upside down. Samsung learned valuable lessons there about deploying very large-scale, expansive cellular networks.
The leadership at home combined with the experience in India provided Samsung a solid foundation for the next phase of its global expansion.
Disruptive debut in the USA that changed the infra landscape
U.S. cellular industry observers sulking about the lack of 5G infra vendor diversity were pleasantly surprised when Samsung won a large share of Verizon's contract to build the world's first 5G network. That was a major disruption for two reasons. First, Samsung virtually replaced a well-established player, Nokia. And second, it was Verizon, for whom the network is not just a differentiation tool but the company's pride. Verizon entrusting Samsung with the deployment of its high-profile, business-critical first 5G network speaks volumes about Samsung's technical expertise and product superiority.
Over the years, Samsung has scored many key 5G wins in the U.S., including early 5G-ready Massive MIMO deployments for Sprint (now T-Mobile), supplying CBRS-compliant solutions to AT&T, and 4G and 5G network solutions for US Cellular.
These U.S. wins were the result of a well-planned strategy, executed with surgical precision. Samsung started 5G work in the U.S. as early as 2017 with testing and trials. In fact, Samsung was the first to receive FCC approval for its 5G infra solution, in 2018, quickly followed by outdoor and indoor 5G home routers.
It's not just about the initial contract wins and delivering on the promise. Samsung has consistently collaborated with operators in demonstrating, trialing, and deploying new and advanced 5G features, such as 64T64R Massive MIMO and virtual RAN, C Band support, indoor solutions, small cells, and more.
In other words, Samsung has fully established itself as a major infra player in the lucrative and critical U.S. market. The rapid deployment of 5G, even in rural areas, and the impending rip-and-replace of Chinese infrastructure for national security reasons bode well for Samsung's growth prospects in the country.
Samsung methodically expands into Europe, Japan and elsewhere
After finding success in the high-stakes U.S. market, Samsung signed a contract with Telus of Canada in 2020. Canada was a simple expansion, and going after other advanced markets, such as Europe and Japan, was the natural next step.
Europe is one of the most competitive and challenging markets to win. Not only is it home to two well-established infra players, Ericsson and Nokia, but it is also the biggest market outside China for Huawei and ZTE. Samsung has seen early success with some of the key players in Europe. For example, it successfully completed a trial with Deutsche Telekom in the Czech Republic, potentially giving Samsung access to DT's extensive footprint in the region. Recently, Vodafone UK selected Samsung as its vRAN and Open RAN partner for a sizable commercial deployment, and Samsung is collaborating with Orange on Open RAN in France. Getting into these leading operators in the region is a significant accomplishment. In my view, with other players such as Telefonica very keen on vRAN and Open RAN, entry there is only a matter of time.
Even with these wins, it is still early, and the company's strategy in Europe is still unfolding. A significant tailwind for Samsung is the heightened national security concern, which has significantly slowed the traction of Chinese players. Additionally, onerous U.S. restrictions have seriously crippled Huawei.
Japan has always been among the most advanced markets. So far, it has been dominated by local players such as NEC and Fujitsu. Spreading its wings there, Samsung has been collaborating with KDDI on 5G since 2019. It also won over the other major operator, NTT DoCoMo, earlier this year with a contract to supply O-RAN compliant solutions.
Comprehensive technology and product portfolio that fueled all this growth
5G has always been characterized as a race, meaning the first to market and the leaders will emerge as winners, taking a large share of the value 5G creates. Interestingly, it has played out as such so far. The investments in 5G are so large that once companies establish leadership and ecosystem relationships, it is extremely hard to change or displace them.
Realizing that, Samsung invested big and early in 5G technology development. Being both a network and device supplier, it can spread that investment over a much broader portfolio. Samsung conducted pioneering 5G testing and field trials as early as 2017 and 2018 in Japan with KDDI. When many in the industry were still debating mmWave's ability to support mobility, Samsung, collaborating with SK Telecom, demonstrated successful 5G video streaming in a race car moving at 130 mph. Samsung was also the industry's first to introduce mmWave base stations with integrated antennas, significantly simplifying deployment.
In emerging areas such as edge cloud, Samsung is already working with major cloud providers such as Microsoft and IBM and chipset players such as Marvell.
Currently, Samsung has one of the industry's most comprehensive portfolios: network solutions, software stacks, and tools; support for all commercial 5G bands, both Sub-6 GHz and mmWave, with advanced features such as Massive MIMO; indoor and outdoor deployments; new architectures such as vRAN and Open RAN; and public as well as private networks.
One of Samsung's major advantages over its infra competitors is the financial strength that comes from being part of a huge industrial conglomerate. In businesses like 5G, where investments are large, risks are high, and payback times are long, such strength can make the difference between winning and going out of business.
In closing
Samsung Networks' journey from its humble beginnings in Korea to global 5G infrastructure leadership is fascinating. It has invested heavily to become a technology leader and has combined that leadership with meticulous planning and execution to become a global leader in the 5G infra business.
It is still early days for both 5G and Samsung. It will be interesting to watch how Samsung utilizes this early lead to capture the even bigger opportunities created by 5G's expanding reach and new sectors such as Industrial IoT.
In the meantime, for more articles like this, and for an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, and listen to our Tantra’s Mantra podcast.
Samsung Networks held its mid-year analyst day last week, giving an update on its progress on the vRAN/Open RAN front, the Dish deployment, and the opportunities it sees in the Private Networks space. I was among a few key analysts invited to their offices in Dallas for the meeting. I came out of the meeting well informed about their strategy and future path, which follows the trajectory I discussed in my earlier articles here.
Strong vRAN/Open RAN progress
Since launching its vRAN portfolio, Samsung has steadily expanded its sphere of influence in North America, Europe, and Asia. Although its surprise debut at Verizon was with legacy products, Samsung Networks has used its market-leading vRAN/Open RAN portfolio as leverage to expand its reach, including in Verizon's C Band deployments and with newer customers, regions, and markets. Supporting both legacy and vRAN makes Samsung an ideal partner for any operator, be it one continuing with the legacy approach for faster deployment and expansion of 5G, one looking to utilize newer architectures for building future-proof networks, or one looking to bridge the two.
The chart below captures the continuing successes Samsung Networks has witnessed in the last couple of years.
As Verizon's VP Bill Stone explained to me during a recent interview, a significant portion of Verizon's C Band deployment is vRAN. An operator like Verizon, which considers its network a differentiator, putting full faith in Samsung's vRAN portfolio speaks to the portfolio's quality and maturity.
Vodafone UK partnered with Samsung Networks to commercialize its first Open RAN site and has plans to expand it to more than 2,500 sites. You can read more about this in my earlier article here.
To clarify, people often confuse vRAN and Open RAN. vRAN is the virtualization of RAN functions so that they can run on commercial off-the-shelf (COTS) hardware. Open RAN goes further, building a system from hardware and software components from different vendors, connected through open interfaces. vRAN is firmly on its way to becoming mainstream; however, there are still challenges and lingering questions about Open RAN. That is why the progress of early Open RAN adopters such as Dish interests everybody in the industry.
Samsung's recent announcement regarding 2G support for vRAN was interesting. I knew that there are still some 2G markets out there, but I was surprised to see the size of this market, as illustrated in the chart below:
This option of supporting 2G on the same Open RAN platform will help operators efficiently serve their remaining 2G customers and eventually transition them to 4G/5G while using the same underlying hardware. On the business side, it will help Samsung Networks win new customers, especially in Europe.
Powering America’s first-ever Open RAN network with Dish
Nothing illustrates a vendor’s strength more than when one of the world’s most watched new 5G operators, fully committed to Open RAN, launches its network with that vendor as the primary infra partner. Dish has a long list of firsts: the first fully cloud-native vRAN and Open RAN network in the US; the first multi-vendor Open RAN network in the US; the first to use public cloud for its deployment, and more.
As evident from many auctions, public disclosures, and this study by Allnet Insights & Analytics, Dish has a mix of many different spectrum bands with highly variable characteristics. They include bands from 600 MHz to 28 GHz, bandwidths ranging from 5 MHz to 20 MHz, paired (FDD) and unpaired (TDD and supplemental downlink) spectrum, licenses in the crosshairs of satellite broadband operators, and so on. Dish has embarked on a unique journey: becoming the first major greenfield, countrywide cellular provider the US has seen in a few decades, while adopting a brand-new architecture such as Open RAN. Additionally, it has tight regulatory timelines to meet. In such a scenario, it needs a reliable, versatile, financially strong infra partner with a solid product portfolio. Above all, it needs a vendor fully committed to Open RAN. Dish seems to have found such a partner in Samsung Networks.
To be clear, it is still very early days for Dish and Open RAN. The whole industry is watching their progress closely.
Finding a foothold in the private networks market
Private Networks is one of the most hyped concepts in the cellular industry today. Indeed, 5G Private Networks have great prospects with Industry 4.0 and other futuristic trends. But based on my interactions with many players in the space, customers’ real needs seem to be plain-vanilla mobile broadband connectivity. In many cases, be it large warehouses, educational institutions, or enterprises with sprawling campuses, cellular Private Networks will be needed for use cases requiring seamless mobility, expanded coverage (indoor and outdoor), increased capacity, and in some cases, higher security. And these will complement Wi-Fi networks.
During the event, Samsung Networks explained how they are addressing these immediate and prospective long-term needs of the market, with examples of early successes. These include deployments at Howard University in the USA, a relationship with NTT East in Japan, and the latest collaboration with Naver Cloud in South Korea.
Naver has also deployed an indoor commercial 5G Private Network in its office. The network, covering a sizeable multi-story building, serves a fleet of autonomous robots. These robots work as office assistants, providing convenience services such as delivering packages, coffee, and lunch boxes to Naver employees throughout the building. All the robots are controlled by Naver’s cloud-based AI. The need for 5G instead of Wi-Fi stems from mobility, low latency, coverage, and capacity requirements.
Mobility is needed for reliable connectivity with hand-offs when robots are moving around. Low latency is required to connect robots to cloud AI for seamless operations. Extended coverage and capacity are needed to ensure the connectivity of robots is not degraded by the traffic from all the other office machines, including computers, printers, network drives, and others.
Naver and Samsung are planning to market such concepts and services to other customers.
In closing
The analyst meeting provided other analysts and me with a good understanding of Samsung Networks’ current traction in vRAN/Open RAN and an overview of their strategy for the future.
Samsung Networks seems well poised to expand its market with its vRAN/Open RAN portfolio, along with support for legacy architecture. With Dish being a bellwether for Open RAN, the industry is very closely watching its success and its collaboration with Samsung Networks.
Private Networks is an emerging concept for 5G with great potential. Samsung Networks seems to have scored some early partners and deployment wins.
The 5G infrastructure market expansion is exciting, and Samsung seems to have gotten a good head start. It will be interesting to see how it evolves, especially with fears of a global recession looming.
Meanwhile, to read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Samsung Networks’ news cycle started weeks before the much-awaited Mobile World Congress 2022, as the company made its mark in Europe. The cycle continued, with many more announcements coming right before, during and after the event. The notable ones were: building a solid coalition to streamline virtualized RAN (vRAN) and open RAN, expansion into the red-hot private networks domain and traction in public safety deployments.
All this points to Samsung Networks evolving from its initial disruptor role to a market and thought leadership role, tracking the trajectory I had detailed last year in this article.
Building comprehensive, interoperable vRAN/open RAN ecosystem
As I had explained in my recent Forbes article, the biggest challenge of new architectures like vRAN and open RAN is stitching together a system from disparate pieces made by many different companies. Most of these pieces, by definition, are generic and commercial off-the-shelf (COTS). In such a case, it is an arduous task for operators and system integrators to ensure these pieces interwork seamlessly and operate as a single system. Moreover, this system has to meet and exceed the performance of legacy architectures. Understanding this challenge, Samsung Networks is taking charge to innovate and build a comprehensive ecosystem of vRAN/open RAN players with fully interoperable solutions.
The announced coalition consists of well-known brands with proven track records. It has cloud infra players such as Dell and HPE, chipset giants such as Intel, and cloud software platform players such as Red Hat and Wind River. I wouldn’t be surprised if the roster grows with additional partners such as Qualcomm, Marvell and hyperscalers in the near future.
The primary objective of the coalition is to develop fully interoperable, deployment-ready, pre-tested, and pre-integrated vRAN and Open RAN solutions. Anybody who has done system integration knows that even though, in theory, standards-compliant products should interwork, during actual deployments, nasty surprises always spring up. This collaboration is designed to remove that exact element of surprise and make deployments seamless, predictable, and cost-effective.
By joining hands with Samsung Networks, all these players who are leaders in their respective domains have recognized the leadership and growing influence of the company.
CBRS and Private Networks deployments
Private Networks have attracted a lot of attention lately. There has been much news regarding deployment plans, commitments, and trials. Samsung Networks was among the first to deploy an actual commercial Private Network on the campus of Howard University.
On the second day of MWC, Samsung Networks announced that NTT East selected it as the partner for Private Network deployments in the eastern region of Japan. This followed the successful completion of 5G Standalone (SA) network testing by both companies. 5G SA is a crucial feature for Private Networks, especially for delivering massive IoT and mission-critical services to enterprises, large industries, and others.
In the USA, CBRS shared spectrum is touted as the ticket to Private Networks. After a somewhat slow start, CBRS deployments have been picking up pace in the last couple of years. During MWC, Samsung announced a collaboration with Avista Edge Inc. for an interesting use case of the CBRS spectrum. Avista Edge is a last-mile, fixed wireless access (FWA) technology provider with an innovative approach to delivering broadband. As part of the deal, Avista Edge will offer broadband services to rural communities through electric utilities and Internet Service Providers. Samsung will provide its OnGo Alliance-certified Massive MIMO radios and compact core network to Avista Edge.
Right after MWC, Samsung also announced another CBRS deal—with Mercury Broadband in collaboration with t3 Broadband. Mercury Broadband is a rural broadband provider, and t3 Broadband is an engineering services company. Samsung will provide its 6T64R Massive MIMO radios and baseband units for more than 500 FWA sites across Kansas, Missouri, and Indiana. The network is expected to expand to additional states through 2025.
Public safety partnership and new mmWave use case
Samsung Networks and the Canadian operator TELUS announced the country’s first Mission Critical Push-to-X (MCPTX) deployment, serving first responders, public safety workers, and others. It will be deployed over TELUS’s 4G and 5G networks and has already been trialed with select customers. Broader commercial availability is expected in the later part of 2022.
Samsung Networks’ MCPTX solution packs a comprehensive suite of tools, offering real-time audio and video communication between first responders, priority access in congested networks during natural disasters, connected ambulances, and vehicular traffic controls.
In an interesting use case of mmWave, Samsung Networks signed a deal with all three Korean operators to provide high-capacity mmWave backhaul to the subway Wi-Fi system in Seoul. Seoul is one of the most connected cities in the world, and data consumption continues to grow. The system will provide high-capacity backhaul to Wi-Fi access points in subway stations and trains, allowing users to enjoy extreme speeds, capacity, and a better broadband experience while in transit. This set-up was successfully trialed in September 2021.
In closing
After impressive 5G rollouts in the USA over the years, including its most recent Verizon C-band deployment, Samsung Networks is set to establish a solid foothold in Europe. Further, it is becoming a recognized leader in vRAN/Open RAN, and is widening its appeal to rural players and private network providers around the globe.
Its announcements at MWC 2022 provided solid proof of its expansion strategy and early success. I’ll be interested to see how Samsung Networks grows and tracks the trajectory outlined in my 2021 article.
Prakash Sangam is the founder and principal at Tantra Analyst, a leading boutique research and advisory firm. He is a recognized expert in 5G, Wi-Fi, AI, Cloud and IoT. To read articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
MWC 2023 turned out to be a graduation party for Samsung Networks, from a market disruptor to a mature, reliable and confident 5G infrastructure leader. This was evident from the flurry of announcements made around the event, including its own, as well as from operators and other ecosystem partners.
The announcement season actually started late last year when Dell’Oro Group crowned Samsung Networks the vRAN/open RAN market leader. To top that, during MWC, Samsung Networks announced its next-gen vRAN 3.0, as well as many collaborations and partnerships.
Gratifyingly, Samsung Networks followed the trajectory I outlined in this article in 2021. It has meticulously built and expanded its global footprint and created a sizeable ecosystem of partners that are technology and market leaders in their respective domains.
Next-gen infrastructure solutions
Unlike other large infrastructure vendors such as Ericsson, Huawei and Nokia, Samsung was an early and enthusiastic adopter of the vRAN/open RAN architecture. Being a challenger and a disrupter made that decision easy — it didn’t have any sacred cows to protect, i.e., legacy contracts and relationships. That gave it a considerable head start that it continues to maintain.
The vRAN/open RAN transition is shaping up to be a two-step process: first, a disaggregated, cloud-native, single-vendor, fully virtualized RAN (vRAN) with open interfaces, followed by a multi-vendor, truly open RAN. Many of Samsung Networks’ competitors are still on the first step, deploying their first commercial base stations. In contrast, Samsung Networks has already moved on to the second step (more on this later).
Samsung Networks announced its next-gen solutions, dubbed vRAN 3.0, which bring many performance optimizations and significant power savings. The former includes a key feature that supports up to 200 MHz of bandwidth in a 64T64R massive MIMO configuration, covering almost the entire mid-band spectrum of U.S. operators. The latter involves optimizing the usage and sleep cycles of CPU cores to match user traffic, thereby minimizing power consumption. These software-only features (with the proper hardware provisioning) exemplify the benefits of a disaggregated vRAN approach, where new capabilities can be rapidly developed and deployed.
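To make the core-sleep idea concrete, here is a minimal sketch of how a traffic-aware core-sleep policy could work. This is purely my illustration, not Samsung’s implementation; the core count, capacity units, headroom factor and hysteresis rule are all assumptions.

```python
import math

TOTAL_CORES = 32          # assumed cores available in a vDU server
CAPACITY_PER_CORE = 1.0   # assumed traffic units one awake core can serve
HEADROOM = 1.25           # keep 25% spare capacity for sudden spikes

def cores_to_keep_awake(load: float, awake: int) -> int:
    """Return how many cores should stay awake for the current traffic load."""
    needed = max(1, min(TOTAL_CORES, math.ceil(load * HEADROOM / CAPACITY_PER_CORE)))
    if needed > awake:      # wake cores immediately when traffic rises
        return needed
    if needed < awake - 1:  # put cores to sleep only after a clear drop
        return needed
    return awake            # hysteresis: ignore small fluctuations

print(cores_to_keep_awake(load=2.0, awake=32))   # overnight lull -> 3
print(cores_to_keep_awake(load=24.0, awake=3))   # daytime peak  -> 30
```

The hysteresis matters: toggling cores on every small fluctuation would waste more energy than it saves.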
Also part of vRAN 3.0 is the Samsung Cloud Orchestrator. It streamlines the onboarding, deployment and operation processes, making it easier for operators to manage thousands of cell sites from a unified platform.
Although large parts of vRAN/open RAN are software-defined, the key radio technologies still reside in hardware. That is where Samsung Networks has a strong differentiation. It is the only major network vendor that can design, develop and manufacture 4G and 5G network chipsets in-house.
Strong operator traction and contract wins
Samsung Networks’ collaboration with Dish Wireless is notable at many levels. Dish Wireless is one of the biggest open RAN greenfield deployments. Its trust in keeping Samsung Networks as a primary vendor says a lot. It is also a multi-vendor deployment, wherein Samsung Networks is integrating its own as well as Fujitsu’s radio units (RU) into the network. Interestingly, Marc Rouanne, EVP and chief network officer of Dish Wireless, joined Samsung Networks’ analyst briefing at MWC and showered lavish praise on their work together, especially on system integration, the Achilles heel of open RAN.
Vodafone has been a great success story for Samsung Networks. After successfully launching the U.K. network with the famous Golden Cluster and integrating NEC radios, both companies are now extending their collaboration to Germany and Spain.
In Japan, Samsung Networks’ relationship with KDDI has grown tremendously. Leading up to MWC, they announced the completion of network slicing trials, followed by a commercial 5G open RAN deployment along with Fujitsu (for RU) and a contract for a 5G standalone core network, a first for Samsung outside Korea.
A recent Dell’Oro report identified North America and Asia-Pacific as the growth drivers for vRAN/open RAN. Although Europe is a laggard, even that region’s revenue is expected to top $1 billion by 2027. Beyond the announcements above, Samsung Networks has secured many operator engagements and contract wins across these three regions over the years. So, geographically, Samsung Networks is placing its bets in the right places.
Expanding the partner ecosystem
Success in the infrastructure business is decided by the company you keep and the partnerships you nourish. That is even more true with vRAN/open RAN, where networks are cloud-native, software-defined, and multi-vendor, with open interfaces.
There was a long list of partner announcements around MWC 2023. The cloud platform provider VMware is working with Samsung Networks on the Dish deployment. Another provider, Red Hat, announced a study showing that operators can save significant power when its platform and Samsung Networks’ RAN work together.
Cloud computing provider Dell Technologies announced, through its 5G Head of Marketing Scott Heinlein’s blog, a collaboration to integrate Samsung Networks’ vCU and vDU with its PowerEdge servers.
Finally, Intel, in its announcement, confirmed that Samsung had validated its 4th Gen Intel Xeon Scalable processors for the core network.
Again, these are just the MWC 2023 announcements. There were many more in the last few years.
In summary, through its differentiated solutions, strong operator traction and robust partnerships, Samsung Networks has graduated from a credible disrupter to a reliable, mature infrastructure player, especially for vRAN/open RAN. This was vividly on display, in all its glory, at MWC 2023 through its proven track record and its product, operator, and partner announcements. I can’t wait to see how its next chapter unfolds as global networks transition to new architectures.
Meanwhile, if you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast. If you want to know more about the vRAN/open RAN market, check out these articles.
Samsung recently opened the doors to its North American Samsung Networks Innovation Center in Plano, TX, further boosting its presence in the region. This state-of-the-art facility, supported by development centers and well-equipped labs, not only helps Samsung Networks and its partners to optimize, test and showcase their 5G products and services, but it also signifies the company’s strong commitment to support the needs of customers and build new partnerships in the region.
I got to tour the Innovation Center and the labs firsthand a couple of weeks ago and was impressed by the facilities. The opening of the Innovation Center is even more opportune considering that we are at the cusp of the second phase of 5G, driven primarily by architectures like vRAN/open RAN, new business propositions like private networks, and new and exciting use cases such as Industrial IoT, URLLC and XR. This center will be a valuable asset for Samsung Networks and its customers and partners in experiencing new technologies in real life, ultimately helping make those technologies mainstream.
This is yet another step in the remarkable global growth of Samsung Networks in the 5G era, which I have documented in the article series here.
Showcase of the best of Samsung Networks’ technology
The front end of the expansive Samsung facilities is the Innovation Center, which houses many live demonstration areas highlighting various technologies and use cases. The current set-up includes demos of vRAN/open RAN with network orchestration, fixed wireless access (FWA) with both FR1 (sub-6 GHz) and live FR2 (mmWave) systems, a private network with low-latency IIoT use cases, XR and others.
The most impressive for me was the Radio Wall of Fame — a vast display of Samsung Networks’ radios deployed (and ready to be deployed) in the Americas, supporting a wide range of spectrum, output power, form factors, bandwidths, bands and band combinations, MIMO configurations and more. It is awe-inspiring that in such a short span, Samsung Networks has developed almost all the configurations desired by customers in the Americas.
Optimizing and perfecting technologies for the Americas
The hallmark of any successful infrastructure player is to “think global and act local,” as markets are won by best addressing the specific needs of local and regional customers, which can often be disparate. Like other major cellular infra players, most of Samsung Networks’ core development happens offshore. But most, if not all, of the customization and optimization happens in-country, including the crucial lab and field testing.
The best example of this localization is the fact that Samsung supports the spectrum bands and band combinations needed by U.S. operators, including the country’s unique shared CBRS band. There are an estimated 10,000-plus possible band combinations defined by 3GPP, many of which are necessary in the USA. “Supporting and testing all the band combinations operators require is an arduous task, and that’s precisely where our well-equipped labs come into play,” says Vinay Mahendra, director of engineering, Networks Business, Samsung Electronics America. “The combinations are tested for compliance, optimized for performance, and can be demonstrated to operators at this facility before deploying them in the field.” The same applies to many other local needs, such as configurations, deployment scenarios, and use cases. The new Plano Innovation Center is the showcase, and the existing labs there and elsewhere in the country serve as the brains and plumbing.
Testing ground for partners
A 5G network is an amalgamation of equipment from different vendors, and seamless interoperability between them is a basic need. vRAN/open RAN elevates this complexity to a new level, as software and hardware are disaggregated and might come from different vendors. A typical multi-vendor open RAN network could have different RU, DU, and CU vendors, cloud orchestration and solution providers, chip and cloud providers, etc. Integrating all those hardware and software pieces and making the system work together is no small task. It requires close collaboration among vendors, ensuring the system is thoroughly tested and pre-certified so that disruptions and issues in the field, and hence time and costs, are minimized. That’s exactly the role of the Innovation Center and the labs.
The next phase of 5G will be driven by non-traditional applications, services and use cases, such as IIoT, mission-critical services, XR, private networks, and many others that we haven’t even imagined yet. Those must be developed, tested, perfected, and showcased before being offered on commercial networks. Being a market leader, Samsung, with its partners, is in the driving seat to enable these from the network side. Again, a task cut out for its Innovation Center.
In closing
Samsung Networks’ Innovation Center in the U.S. is opening at a critical juncture, when 5G is ready for its next phase in the country, exploring new deployment models, architectures and use cases. The center and the adjoining labs will serve as a centerpiece for the company and its partners to develop and commercialize that next phase. It will help Samsung Networks showcase its innovations and partner technologies and underscore the company’s commitment to its customers in the region.
I am looking forward to seeing new technologies and concepts being demonstrated there.
If you want to read more articles like this and get an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
5G Integrated Access Backhaul (IAB)
5G is the hottest trend now, so much so that even the Covid-19 pandemic, which has badly ravaged the global economy, could not stop its meteoric rise. Apple’s announcement of 5G support across its portfolio cemented 5G’s market success. With 5G device shipments expected to grow substantially in 2021, the industry’s focus is naturally on expanding coverage and delivering on the promise of gigabit speeds and extreme capacity.
However, that is easier said than done, especially for the new mmWave bands, which have a smaller coverage footprint. Leading 5G operators such as Verizon and AT&T have gotten a bad rap because of their limited 5G coverage. One technology option is integrated access backhaul (IAB) with self-interference cancellation (SLIC), which enables operators to deploy hyper-dense networks and quickly expand coverage.
mmWave Bands And Network Densification
Undeniably, making mmWave bands viable for mobile communication is one of the biggest innovations of 5G. That has opened a wide swath of spectrum, almost a tenfold increase, for 5G. However, because of their RF characteristics, mmWave bands have a much smaller coverage footprint. According to some studies, mmWave might need seven times the sites or more to provide the same coverage as traditional Sub-6GHz bands. So, to make the best use of mmWave bands, hyper-dense deployments are needed. Operators are trying to use lampposts and utility posts for deployment to achieve such density.
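A quick back-of-the-envelope check shows why the site count balloons: for coverage, the number of sites scales with the inverse square of the cell radius. The sketch below is my own, with assumed radii picked only to roughly reproduce the “seven times” figure cited above.

```python
import math

def sites_needed(area_km2: float, cell_radius_km: float) -> float:
    """Rough site count to cover an area, treating each cell as a circle
    (ignores hex packing, overlap and capacity; coverage only)."""
    return area_km2 / (math.pi * cell_radius_km ** 2)

# Assumed radii: ~1.0 km for a sub-6GHz macro cell vs ~0.37 km for mmWave.
ratio = sites_needed(100, 0.37) / sites_needed(100, 1.0)
print(f"mmWave needs ~{ratio:.1f}x the sites")  # ~7.3x
```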
The biggest challenge for hyper-dense deployment is providing rapid and cost-effective backhaul. Backhaul is a significant portion of the CAPEX and OPEX of any site. With the large number of sites needed for mmWave, bringing fiber to each of them is an even harder, more time-consuming and overly expensive process. A good solution is to incorporate IABs, which use wireless links for backhaul instead of fiber runs. IABs, an advanced version of the relays used in 4G, are being introduced in 3GPP Rel. 16 of 5G.
In typical deployments, there would be one fiber-backhauled site, called a donor, say at a crossroads, and a series of IABs installed on lampposts along the connecting roads in a cascade configuration. IABs can act as donors to other IABs as well, providing redundancy. They also connect directly to devices, which is beneficial now and in the future.
Drawbacks Of Traditional Relays And IABs
While IABs seem like an ideal solution, they do have challenges. The biggest one is their lower efficiency. I’ve observed that it can be as low as 60% during high-traffic load scenarios. This means you will need almost double the IABs to provide the same capacity as regular mmWave sites.
IABs can be deployed in two configurations based on how the spectrum is used for both of its sides (access and backhaul): using the same spectrum on both sides, or using a different spectrum for each side.
Using the same spectrum on both sides creates significant interference between the two sides (known as self-interference) and reduces efficiency. Using a different spectrum requires double the amount of spectrum, which also drastically reduces efficiency. Operators are always spectrum-constrained. Hence, in most cases, they cannot afford this configuration. Moreover, this creates mobility issues and leads to other complexities such as frequency planning, which needs to be maintained and managed on an ongoing basis.
So, in my opinion, the best approach is to use the same spectrum for both sides and try to eliminate or minimize the self-interference.
SLIC Maximizes IAB Efficiency
SLIC is a technique to cancel the interference caused by both links using the same spectrum. It involves generating a signal that is directly opposite to the undesired signal, such as the interference, and using it to cancel that signal. For example, for the access link, the signal from the backhaul link is the undesired signal, and vice versa. This technique has been known in theory for a long time, but thanks to recent technological advances, it is now possible to implement it in actual products. In fact, there are already products for 4G networks in the market that implement SLIC.
For 5G IABs, I’ve observed that SLIC can increase the IAB efficiency to as high as 100%, meaning IABs provide the same capacity as regular mmWave sites. 5G IABs with SLIC have been developed, and leading operators such as Verizon and AT&T have already completed their testing and trials and are gearing up for large-scale commercial deployments in 2021 and beyond.
In Closing
Unlike 4G relays, which were primarily used for coverage extension or rapid, short-term deployments (for example, to connect temporary health care facilities built to accommodate a rapid surge in Covid-19 hospitalizations), IABs with SLIC should be considered an integral part of operators’ network design. In addition, operators have to decide on an optimal mix of IAB and donor sites that provides adequate capacity while minimizing the overall deployment cost.
Mobilizing mmWave bands was one of the major achievements of 5G. However, their smaller coverage footprint could be a challenge, requiring hyper-dense deployments. The biggest hurdle for such deployments is quick and cost-effective backhaul, and solutions such as IABs address it. Further, SLIC techniques maximize the efficiency of those IABs.
5G has seen unprecedented traction; many flagship devices are already in the market, and many more are on the way, including the much-rumored and anticipated 5G iPhone. After the excitement of limited initial launches, as operators start large-scale deployments, the basic question they face is whether to focus on coverage or capacity. Well, the right answer is both, but that is easier said than done, especially for operators such as Verizon and AT&T that have limited low- and mid-band (aka sub-6GHz) spectrum.
In a series of articles, I will discuss this dilemma and explore the solutions the industry is working on to effectively address it, especially the ones, such as Integrated Access Backhaul (IAB), that have shown early promise, and the many innovations that not only enable such solutions but also make them efficient. This is the first article in the series.
When launching a 5G network, the easiest thing is to utilize sub-6GHz bands, if you have access to them, and provide a basic coverage layer. That is exactly what Sprint (now part of T-Mobile) in the US and many operators outside the US did. However, the amount of bandwidth available in the sub-6GHz spectrum is limited, and hence the capacity of those networks will quickly be used up, especially if the growth of 5G continues as predicted. There is every indication that it will; for example, contrary to what many expected, 5G deployment in the US has not been affected by the Covid-19 pandemic. This means those operators will soon have to move to the bandwidth-rich high-band spectrum, i.e., millimeter wave (mmWave) bands. These bands offer more than ten times the spectrum of sub-6GHz and are critical to delivering on the promise of 5G: multi-gigabit user speeds and the extreme capacity to offer new services, be it fixed wireless access to homes and offices, massive IoT, Mission Critical Services, or new user experiences on a massive scale.
Operators such as Verizon and AT&T, who did not have access to enough sub-6GHz bands, leapfrogged and took the bold step of launching 5G with mmWave spectrum. This spectrum is far different, in many aspects, from anything the mobile industry has used so far.
<<Side note: If you would like to know more about mmWave bands, check out my article – Is mmWave just another band for 5G?>>
The biggest differences between sub-6GHz and mmWave bands are coverage and indoor penetration. Because of their RF properties, mmWave bands have a smaller coverage footprint and do not penetrate solid objects such as walls. Although experts had long known this, it came almost as a shock to uninformed general industry observers. Operators, especially Verizon, got a lot of flak from the media on this. Some even doubted the feasibility of mmWave bands. Thanks to extensive field tests, any lingering doubts have now been duly resolved. In fact, almost all global regions are now working toward allocating mmWave spectrum for 5G.
By virtue of their smaller footprint, mmWave bands will need more sites than sub-6GHz to provide similar coverage. For example, simulations run by Kumu Networks estimate that 26 GHz spectrum will need seven to eight times more sites than 3.5 GHz spectrum, as shown in the figure below:
The ideal 5G deployment strategy for operators is to utilize sub-6GHz to provide expansive, city- and country-wide coverage, and dense deployments of mmWave, as shown in the figure, in high-traffic dense urban and urban areas, and even in pockets of suburban areas, to provide extreme capacity. Because of the density and the large amount of spectrum available, a mmWave cluster will provide orders of magnitude higher capacity than sub-6GHz clusters. Additionally, such dense deployments are much easier with mmWave because of its smaller coverage footprint.
Many operators are working with city governments and utilities to deploy mmWave sites on lampposts, which should provide good densification. Studies have shown that such deployments can deliver excellent results, supporting a large number of subscribers with a huge amount of capacity, resulting in an excellent user experience. The FCC, being proactive, has been working to streamline regulations for the deployment of such outdoor sites.
Clearly, lampposts, and in some cases building tops, are the ideal spots for mmWave installations because they readily have access to power, one of the two key requirements for a new site. However, the other requirement, backhaul, is a far different story. Since these are high-capacity sites, they need fiber or other high-bandwidth means of backhaul. The first issue is that there may not be fiber drops near all the lampposts. Even if there are, bringing fiber to each post is not only extremely time-consuming and very expensive but also hard to manage and maintain on an ongoing basis. This means the industry has to look for alternative cost-effective, easy-to-install solutions that offer bandwidth and latency similar to fiber.
Realizing this, the industry body 3GPP has been working on an interesting solution called Integrated Access Backhaul (IAB). IABs are being standardized in Rel. 16 and further enhanced in Rel. 17. Rel. 16 is expected to be finalized in July of this year, followed by Rel. 17 in 2021.
<<Side note: If you would like to know more about 3GPP standardization and Rel 17, please check this article series – The Chronicles of 3GPP Rel. 17.>>
IABs use wireless links for both backhaul and access (i.e., regular user traffic). Evidently, they will need a large amount of licensed spectrum to offer fiber-like backhaul performance. But that raises a lot of questions, such as: “Don’t IABs decrease the available spectrum for access? How would that affect the network capacity? Can you still deliver on the grand promises of 5G?” and many more.
All those are valid questions and concerns. What if I say that there are ways to build and deploy IABs without compromising the available spectrum? More like having your cake and eating it too. Yes, that is possible! How, you ask? Well, you will have to wait for my next article to find out!
Also, for more articles like this, and an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
One of the exciting features the recently finalized 3GPP Rel. 16 brings to 5G is the support for Integrated Access Backhaul (IAB). IABs have the potential to be a game-changer, especially for millimeter Wave (mmWave) deployments by solving the key challenge of backhaul. However, the traditional design of IABs offers low efficiency. In this article, I will take a deep dive into IABs, their deployment configurations, and most importantly, the techniques needed to improve their efficiency.
Side Note: If you would like to learn more about 3GPP Rel. 16, check out this article “3GPP Rel. 16–Stage set for the next phase of 5G, but who is leading?”
What are IABs and how do they work?
IABs are cell sites that use wireless connectivity for both user traffic (access) and backhaul. IABs’ predecessors, relays, have been around since the 4G days; IABs are essentially improved and rechristened relays. If you have heard of the Sprint “Magic Box,” then you have already heard about relays and, to some extent, IABs as well.
So far, relays have been used primarily to extend coverage in places where it was challenging or uneconomical to deploy traditional base stations with fiber or ethernet backhaul. They were also useful when connectivity needs were immediate and temporary. A great use case was the recent Covid-19 crisis, when temporary healthcare facilities with full connectivity had to be built very quickly. There are many such applications, for example, indoor deployments in retail stores, shopping malls, etc., where operators do not have access to fiber.
However, with expanded capabilities, IABs have a much bigger role to play in 5G, especially for mmWave deployments, which have gotten a bad rap for their smaller coverage footprint. IABs allow operators to rapidly deploy mmWave sites and expand coverage by solving the nagging backhaul issue.
IABs are deployed just like any other mmWave sites, of course without requiring pesky fiber runs. As shown in the figure below, IABs connect to donor sites in the same way as smartphones or any other devices. The main donor sites will need high-capacity fiber backhaul. One or more IABs can connect to a single donor site. There can also be multi-hop deployments, meaning IABs can act as donors to other IABs. Each IAB can connect to multiple donor sites or IABs, providing redundancy. This configuration also lends itself very well to a mesh architecture in the future. IABs are transparent to devices, meaning devices connect to IABs just as they would to any regular base station.
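As a rough illustration of that topology, here is a minimal sketch, in code, of donors, cascaded IABs and redundant parent links. The structure and names are my own simplification, not anything defined by the 3GPP specification.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    """One cell site: a fiber-fed donor or a wirelessly backhauled IAB."""
    name: str
    is_donor: bool = False
    parents: list = field(default_factory=list)  # more than one = redundancy

    def hops_to_fiber(self) -> int:
        """Fewest wireless hops back to a fiber-fed donor."""
        if self.is_donor:
            return 0
        return 1 + min(p.hops_to_fiber() for p in self.parents)

donor = Site("donor", is_donor=True)
iab1 = Site("iab1", parents=[donor])        # single hop to fiber
iab2 = Site("iab2", parents=[donor, iab1])  # redundant paths: direct and via iab1

print(iab1.hops_to_fiber())  # 1
print(iab2.hops_to_fiber())  # 1 -- its direct link to the donor wins
```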
IABs are ideal for mmWave deployments
As I had explained in my previous article, mmWave 5G deployments need a dense cluster of sites to provide good outdoor coverage. Since bringing backhaul to all these sites is cumbersome and expensive, using IABs for such deployments is ideal. For example, in city centers, there could be a handful of donor sites with fiber backhaul connecting to clusters of IABs around them. Evidently, with such an approach, operators can provide much broader coverage with far fewer fiber runs, in a very short time. The savings and ease of installation are quite obvious.
It should be noted that unlike regular sites, IABs do not add new capacity. They instead share the capacity of the donor site much more efficiently across a much larger coverage area. Since the mmWave band has lots of spectrum, capacity may not be a limitation. Ultimately, the level of data traffic and the amount of spectrum operators have access to will decide the appropriate mix of donor sites and IABs.
One of the issues with IABs is interference. Since donors and IABs use the same spectrum, they might interfere with each other. But thanks to the smaller coverage footprint of mmWave bands, the interference is relatively minimal, compared to traditional bands. Another big advantage of mmWave bands is the support for beamforming and beamsteering techniques. These techniques allow the signal (beam) between all the nodes to be very narrow and highly directional, which further reduces interference.
Performance challenges of IABs
The biggest challenge of IABs is their lower efficiency. Since they use wireless links for both sides (towards the donor and towards the user), they have to either use separate spectrum or time-share between the sides. In both cases, efficiency is reduced: the first case uses twice the spectrum, and the latter allows only one side to be active at any time. Let me explain the reasons.
If the same spectrum is used for both sides, there will be huge self-interference, meaning the transmitter of one side feeds into the receiver of the other side, making the interference so high that the signal from actual users is drowned out and can’t be heard. So, the spectrum for the two sides must be different. Since operators are often short on spectrum, they cannot afford this configuration. Even if they could, there are many complexities, such as the need for frequency planning, the inability to support mobile IABs, confusion in handovers between the two frequencies, and many more.
Hence, almost every deployment utilizes an alternative approach called Half-Duplex, in which the sides are turned ON alternately. The IAB ON/OFF timing has to be synchronized across the network to avoid interference. The situation is even more complicated in multi-hop deployments.
The best way to understand the performance of IABs is to simulate a typical system and analyze various scenarios. Kumu Networks, a leader in relay technology, did exactly that. Here is a quick overview of what they found out.
They simulated a typical city intersection, as shown in the figure here. They put a fiber-fed donor at a city intersection and a cluster of IABs along the streets, some connected directly, others in multi-hops. The aggregate throughput is calculated for the entire system with one, two, and multiple hops.
This chart shows the performance of the system, plotting the aggregate throughput of all users in the system vs. the number of hops. The red line in the chart represents the traditional Half-Duplex configuration that we just discussed. With this configuration, the throughput goes down significantly as the number of hops in the system increases. This is because the more hops there are, the smaller the time slice each IAB gets, and the lower the throughput.
You also see a blue line on the chart. This represents the Full-Duplex configuration, for which the throughput slightly increases and stabilizes even when more hops are added. Obviously, Full-Duplex is the most desired configuration.
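The shape of those two curves can be reproduced with a toy airtime model. The sketch below is my own simplification, not Kumu’s simulation: it assumes every link runs at the same rate, that half-duplex splits airtime evenly along the chain, and it ignores the secondary effects that make the real full-duplex curve rise slightly.

```python
LINK_RATE = 1.0  # assumed per-link radio rate (say, in Gbps)

def half_duplex_throughput(hops: int) -> float:
    # An n-hop chain has n backhaul links plus the access link, all taking
    # turns on the same spectrum, so each gets roughly a 1/(n + 1) time slice.
    return LINK_RATE / (hops + 1)

def full_duplex_throughput(hops: int) -> float:
    # With self-interference cancelled, every link is active simultaneously,
    # so adding hops does not shrink anyone's airtime.
    return LINK_RATE

for hops in (1, 2, 4):
    print(hops, round(half_duplex_throughput(hops), 2), full_duplex_throughput(hops))
# 1 0.5 1.0 / 2 0.33 1.0 / 4 0.2 1.0 -- the red line falls, the blue one doesn't
```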
Now, what is Full-Duplex? As the name suggests, it keeps both sides of the IAB switched ON all the time while using the same spectrum. So, with this configuration, there is no need for additional spectrum and no more time-sharing, and hence no more reduced efficiency. But didn’t we just discuss why this is not possible because of self-interference?
Well, what if I say that there are techniques to effectively cancel that self-interference? I know you are intrigued by this and want to know more. But for that, you will have to wait for my next article. So, be on the lookout!
Meanwhile, for more articles like this, and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Many 5G operators are quickly realizing that Integrated Access Backhauls (IABs) are an ideal solution to expand 5G coverage. This is even more important for operators such as Verizon and AT&T, who are primarily utilizing millimeter wave (mmWave) bands for 5G. As I explained in my earlier articles, traditional techniques only allow half-duplex IAB operation, which severely limits usability. The SeLf Interference Cancellation (SLIC) technique enables full-duplex IAB operation, offering full capacity and efficiency. In essence, it is not just IABs, but IABs with SLIC, that are the most efficient and hassle-free way to expand 5G mmWave coverage.
Side note: If you would like to learn more about IABs, and how to deploy hyperdense mmWave networks, please check out the other articles in the IABs article series.
What is self-interference, and why is it a challenge?
The traditional configuration for deploying IABs is half-duplex, where the donor and access (user) links timeshare the same spectrum, thus significantly reducing the efficiency. The full-duplex mode, where both the links are ON at the same time, is not possible as the links interfere with each other—the transmitter of one link feeding into the receiver of the other. This “self-interference” makes both the links unusable and the IAB dysfunctional.
So, let’s look at how to address this self-interference. As shown in the figure, an IAB has two sets of antennas, one for the donor link and another for the access link. The best option to reduce self-interference is to isolate the two antennas/links. Based on years of work on the cousins of IABs, repeaters and relays, we know that for full-duplex mode to work, this isolation needs to be 110-120 dB.
Locating the donor and access antennas far apart from each other or separating them with a solid obstruction could yield significant isolation. However, since we would like to keep the IAB unit small and compact, with integrated antennas, there is a limit to how much separation you could achieve this way.
The mmWave bands have many advantages over sub-6GHz bands in achieving such isolation. Their antennas are small, so isolating them is relatively easy. Since they also have a smaller coverage footprint, the interference they spew into the other link is relatively smaller. That is why I think IABs are ideal for mmWave bands. If you would like to know more about this, check out the earlier articles.
The lab and field testing done by a leading player, Kumu Networks, indicates that for mmWave IABs, the isolation that can be achieved by intelligent antenna separation is as high as 70 dB. That means the remaining 40-50 dB has to come from some other means. That is where SLIC comes into play.
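The isolation budget implied by these numbers is simple arithmetic, as the snippet below spells out.

```python
ANTENNA_ISOLATION_DB = 70            # achievable via intelligent antenna separation
FULL_DUPLEX_TARGETS_DB = (110, 120)  # total isolation needed for full-duplex

for target in FULL_DUPLEX_TARGETS_DB:
    print(f"{target} dB target -> SLIC must contribute {target - ANTENNA_ISOLATION_DB} dB")
# 110 dB target -> SLIC must contribute 40 dB
# 120 dB target -> SLIC must contribute 50 dB
```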
How does SLIC work?
To explain interference cancellation in simple words: you create a signal that is directly opposite to the interfering signal and inject it into the receiver. This opposite signal negates the interference, leaving behind only the desired signal.
The interference cancellation can be implemented in either the analog domain or the digital domain, each in a different section of the IAB. Analog SLIC is typically done in the RF front-end (RFFE) subsystem, while digital SLIC is implemented in or around the modem subsystem.
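To show the principle of digital cancellation, here is a deliberately minimal sketch: a single-tap, noiseless least-squares version of my own. Real digital SLIC implementations are far more sophisticated; they run adaptively and must handle multipath delay spread and power-amplifier nonlinearities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
tx = rng.standard_normal(n)             # our own transmit signal (known to us)
h_leak = 0.8                            # unknown leakage gain into our receiver
wanted = 0.01 * rng.standard_normal(n)  # weak signal we actually want to hear
rx = h_leak * tx + wanted               # receiver swamped by self-interference

# Since we know exactly what we transmitted, estimate the leakage gain by
# least squares and subtract our own signal's contribution from the receive path.
h_est = (tx @ rx) / (tx @ tx)
cleaned = rx - h_est * tx

def power_db(x):
    return 10 * np.log10(np.mean(x ** 2))

# How many dB of self-interference the subtraction removed:
print(round(power_db(h_leak * tx) - power_db(cleaned - wanted)))
```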
Side note: If you would like to know more technical details on self-interference cancellation, please check this YouTube video.
Again, when it comes to mmWave IABs, because of their RF characteristics, almost all of the additional 40-50 dB of isolation needed can be achieved through digital SLIC alone. Here are the frequency response charts of a commercial-grade mmWave digital SLIC IP block developed by Kumu Networks. This response is for a 28 GHz, 400 MHz mmWave system, and as evident, it can reduce the interference, i.e., increase the isolation, by 40-50 dB.
SLIC enables full-duplex IABs
Here is a chart that further illustrates the importance of SLIC in enabling full-duplex operation of IABs.
It plots the IAB efficiency against the amount of isolation. The efficiency here is measured as the total IAB throughput compared to the throughput of a regular site with fiber backhaul. As can be seen, an IAB in full-duplex mode is more efficient than half-duplex if the isolation is 90 dB or more. And with 120 dB of isolation, an IAB can provide the same amount of capacity as a regular mmWave site. It is pretty clear that SLIC is a must to make IABs really useful for 5G.
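The break-even behavior the chart describes falls out of a simple Shannon-capacity argument: residual self-interference acts as extra noise on the full-duplex link. The toy model below is mine, not Kumu’s data; the link-budget numbers are assumptions picked so that the toy reproduces the crossover near 90 dB and the saturation near 120 dB.

```python
import math

TX_DBM, SIGNAL_DBM, NOISE_DBM = 15.0, -60.0, -90.0  # assumed link budget

def full_duplex_efficiency(isolation_db: float) -> float:
    """Capacity ratio of a full-duplex link vs. an interference-free one."""
    residual_dbm = TX_DBM - isolation_db             # leftover self-interference
    noise_mw = 10 ** (NOISE_DBM / 10) + 10 ** (residual_dbm / 10)
    sinr = 10 ** (SIGNAL_DBM / 10) / noise_mw
    snr = 10 ** ((SIGNAL_DBM - NOISE_DBM) / 10)      # no self-interference at all
    return math.log2(1 + sinr) / math.log2(1 + snr)

for iso in (90, 110, 120):
    print(iso, round(full_duplex_efficiency(iso), 2))
# 90 0.5 (break-even with half-duplex), 110 0.96, 120 1.0 (full capacity)
```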
When will IABs with SLIC be available?
Well, there are two parts to that question. Let’s look at the second part first. SLIC is not a new concept. In fact, it is available in products being shipped today. For example, Kumu Networks’ LTE relays that support SLIC are already deployed by many operators. Kumu has also developed the core IP for 5G mmWave digital SLIC, which is currently being evaluated by many of its customers. As mentioned before, the frequency chart showing the interference cancellation is from that same IP block.
Now, regarding the first part: 3GPP Rel. 16, which introduced IABs, was finalized only a few months ago, in June 2020. It usually takes 9-12 months for a new standard to be supported in commercial products. Verizon and AT&T are already testing IABs and have publicly disclosed that they will start deploying them in their networks in 2021.
Final thoughts
In a series of articles, we took a very close look at 5G IABs, especially for mmWave deployments. The first article examined why hyper-densification of mmWave sites is a must for 5G operators, the second article explained how IABs address the main challenge of cost-effective backhaul, and this article illustrated why SLIC is a basic need for highly efficient, full-duplex operation of IABs.
5G mmWave IABs are a powerful combination of a well-understood concept, proven technology, and an ideal spectrum band. No wonder the industry is really excited about their introduction. The finalization of 3GPP Rel. 16 has set IAB commercialization in motion, and operators can’t wait to deploy them in their networks.
For more articles like this, and an up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
IoT Device Security
As awareness of the transformative nature of 5G increases, the industry is slowly waking up to the enormous challenge of securing not only the networks but also all the things these networks connect and the vital data they carry. When it comes to the Internet of Things (IoT), the challenges of security couldn’t be bigger, and the stakes involved couldn’t be higher. The spread of IoT in homes, enterprises, industries, governments, and other places is making wireless networks the backbone of a country’s critical infrastructure. Safeguarding it against potential threats is a basic national security need.
With 5G set to usher in Industry 4.0, the next industrial revolution, governments across the globe are understandably taking a keen interest in how 5G is deployed in their countries. There has naturally been a lot of emphasis on its security aspects. The current focus has primarily been on the network infrastructure side. Many countries, such as the USA, Australia, and New Zealand, have put restrictions on buying equipment from certain network infrastructure vendors such as Huawei and ZTE. As stated by these governments, their concerns are about the lack of clarity regarding the ownership and control of these vendors. While these concerns are valid, focusing only on the infrastructure side is not sufficient. It might even be dangerous, as it could give a false sense of security.
Infrastructure-focused security is insufficient
Network infrastructure is only one part of the story. Telecommunications takes two to tango: it needs both infrastructure and devices to make the magic happen. So, to have foolproof security, one needs to cover both ends of the wireless link, especially for IoT. Securing only the network side would be akin to fortifying the front door while keeping the back door ajar. Let me illustrate this with a real-life scenario. Consider something as benign as traffic lights, which at the very outset don’t seem to need strong security. But what if somebody hacked into and turned off all the traffic lights in a major metropolitan area? That would surely bring the city to a screeching halt, resulting in major disruption and even loss of life. The impact could be even worse if power meters were hacked. It would be an outright catastrophe if critical systems, such as the national power grid, were attacked, bringing the whole country to its knees.
When it comes to IoT devices, conventional wisdom is to secure only the most expensive and sophisticated pieces of equipment. However, simple devices such as utility meters are often more vulnerable to attacks because they lack the hardware and software capabilities to employ powerful security mechanisms. And they can cause huge disruptions.
IoT device security is a must
IoT devices are the weakest link in providing comprehensive system-wide security, more so because IoT’s supply chain and security considerations are far different and much more nuanced than those of smartphones. Typically, the development and commercialization of smartphones are always under the purview of a handful of large, reputed organizations such as device OEMs, OS providers, and chipset providers. The IoT device ecosystem, in contrast, is highly fragmented, with a large number of relatively unknown players. Usually, large players such as Qualcomm and Intel provide cellular IoT chipsets. A different set of companies uses those chipsets to make integrated IoT modules. Finally, a third set of companies uses those modules to create IoT end-user devices. Each of these players adds its own hardware and software components to the device during different stages of development. Because of this, IoT devices are far more vulnerable than smartphones.
Address IoT device security during procurement
It is evident that IoT users have to be extremely vigilant regarding the security and integrity of the entire supply chain. This includes close scrutiny of the origin of the modules and devices, as well as a detailed evaluation of the reputation, business processes and practices, long-term viability and reliability of the module and device vendors. Because of the high stakes involved, there is also the possibility of malicious third parties infiltrating the supply chain and compromising devices even without the vendors’ knowledge. Case in point: the much-publicized Bloomberg Businessweek report about allegedly tampered motherboards vividly exposed the possibility of such a vulnerability. Although the allegations in that case have not been fully corroborated or debunked, they make clear that such vulnerabilities can exist.
It is abundantly clear that the more precautions IoT users take during the procurement and deployment phases, the better. Because of the sheer volume and long life of IoT devices, it is virtually impossible to quickly rectify or replace them after security vulnerabilities or infiltrations are identified.
The time to secure IoT devices is now!
Looking beyond the current focus on 5G smartphones, 5G Massive IoT will be upon us in no time. Building upon the solid foundation of LTE IoT, Massive IoT, as the name suggests, will connect anything that can and needs to be connected. This will span homes, enterprises, industries, and critical city, state, and national infrastructure, including transportation, smart grids, emergency services, and more. Further, with the introduction of Mission Critical Services, the reach of 5G is going to be even broader and deeper. All this means the security challenges and stakes are only going to get bigger and more significant.
So, it is imperative for the cellular industry and all of its stakeholders to get out of the infrastructure-centric mentality and focus on comprehensive, end-to-end security. Every IoT device needs to be secured, no matter how small, simple, or insignificant it seems, because the system is only as secure as its weakest link. The time to address device security is right now, while the networks are being built and the number of devices is relatively small and manageable.
Nowadays, security and privacy are on everybody’s mind. Hardly a day goes by without news of security breaches at major institutions. Most of the time, the reporting focuses on the cloud or network infrastructure, hardly ever on devices. However, when it comes to cellular IoT, devices are the most vulnerable, as I explained in my previous article. IoT devices, being very simple, are usually much easier to hack into and can compromise the whole system.
The IoT device ecosystem is unique and far different from that of smartphones in many aspects. Because of that, the security challenges are also different, and many of them are related to a unit called the IoT module, which is at the heart of any IoT device. To really understand the scope and impact of these challenges, it is important to look closely at the market landscape of the entire cellular IoT ecosystem. It is even more relevant now, considering that today’s 4G LTE cellular IoT will evolve into 5G Massive IoT.
Unique device ecosystem, much different from smartphones
The cellular IoT device ecosystem has far different considerations, especially from the security and privacy perspectives. The ecosystem includes modem chipset providers, many of whom are the same as those for smartphones, as well as a few smaller players. Cellular IoT also has a different category of vendors, called module providers. They take the barebones chipsets and add their own software and hardware to develop modules with standard interfaces. Device vendors develop IoT devices largely based on these modules. Modules abstract away the connectivity and operator-certification complexity so that device vendors can concentrate on developing use-case-specific devices. Essentially, modules are a key link in the value chain between chipset providers and IoT device vendors.
Chipset and device market landscape
In the device ecosystem, the chipset market is dominated by the same large and well-known smartphone modem vendors, such as Qualcomm, Intel, MediaTek, and Huawei (HiSilicon), along with IoT specialists such as Sequans and Altair. They provide a full range of solutions with varying degrees of advanced features, including single- and multi-mode options for eMTC and NB-IoT, with support for 3G, 2G, GPS, onboard processing and so on. Apart from advanced features, overall cost is a major consideration for the industry.
The cellular IoT device ecosystem is very large and diverse. The vendors are usually small and possess expertise in specific use cases. They don’t necessarily have the skillset or scale to justify designing devices directly off the IoT chipsets. That’s where module vendors come in. Traditionally, IoT vendors were mostly from the US and Europe. However, there has recently been a surge in vendors from China, many of whom are completely unknown outside the country. Many of them have taken cues from, and even duplicated, device and module designs from traditional vendors. The proliferation of Chinese vendors is primarily due to the Chinese government’s concerted effort and heavy investment in IoT in the country. The Chinese government’s well-funded large IoT projects, coupled with considerable subsidies provided by operators such as China Mobile and China Telecom, have created an ideal environment for these companies to flourish. The recently awarded 5G contracts are a great example of how the Chinese government and operators support Chinese vendors. These companies, emboldened by their success in China, are now pursuing global opportunities. Since they are leveraging the investments and subsidies availed in China, they can be extremely price-competitive in global markets.
IoT module market landscape
IoT modules are the “bridge of trust” between the well-known chipset vendors and the unknown device vendors. Module vendors also work with regulators and cellular operators on certification, which removes a significant hurdle for device vendors. The certification ensures smooth and rapid deployment of these devices in the field. As is evident, the selection of module vendors is key to ensuring device and system security.
The module vendor market comprises a mix of existing and emerging players. Some, such as Gemalto (Siemens M2M at the time), Sierra Wireless (plus acquisitions of Sony Ericsson M2M and Wavecom), and Telit (plus acquisition of Motorola M2M), have been around since the 2G days. Others, such as U-Blox, entered the market during 3G and the early part of 4G, leveraging their mobile expertise. Finally, there are the emerging module vendors from China, who, just like the IoT device vendors in the country, have grown at a fast pace with substantial government support and operator subsidies. There is a long list of such players. A few among them, such as Quectel, SIMCom, Longsung, Fibocom, and Neoway, are eyeing global markets. Many others may be watching how these initial players fare in their endeavor before stepping out themselves.
Ecosystem challenges
Anybody who has looked closely at the IoT market realizes that its biggest challenge is relatively low margins across the board, be it chipsets, modules, or devices. Considering that module vendors are relatively small compared to the chipset, infrastructure, cloud, or application vendors, they don’t have a lot of leverage, resulting in an extreme margin squeeze. In such a situation, increasing market share becomes crucial, putting even more pressure on pricing. This is exactly where the government-funded projects and operator subsidies that Chinese vendors enjoy at home start to matter and alter the landscape. Because of government support at home, their pricing can be artificially low, reaching predatory levels.
Conversations with industry sources reveal that there is indeed a race to the bottom in module pricing. If it persists, there is a real danger of non-Chinese players becoming financially unviable. This is of grave concern, especially as we get ready to move to 5G. Supporting 5G will need huge upfront investments, and the payoff period could be very long. If these companies can’t earn enough profit, they can’t afford to invest in 5G and may, in the worst case, exit the market.
What do these challenges mean for the cellular IoT Industry?
If you feel like you have seen this movie before, you are not wrong! If you examine the turn of events in the cellular infrastructure market during the late 90s and early 2000s, the situation is almost identical. During that time, major American and European cellular infrastructure vendors failed to anticipate such a threat and were unable to compete with emerging Chinese rivals that were allegedly supported by their government. Many American and European vendors, such as Motorola, Lucent, Siemens, Ericsson, and Nokia, with decades of experience and successful existence, had to perish, merge, or downsize. Chinese upstart vendors such as Huawei and ZTE found a ripe market, quickly took market share, grew exponentially, and became dominant players.
Why is the comparison with the past relevant, and why is it a security concern? Well, IoT devices are the weakest link in the security of the overall system. The industry needs to be as concerned about the security of IoT device vendors as it is about infrastructure vendors, if not more.
What happens if we don’t heed the lessons of the past? What are the implications for the security and privacy of IoT networks? I will explore those questions in my next article. So, be on the lookout!
In my previous articles, here and here, I explained the rationale for an increased focus on device security and its challenges. The threats are acute, especially from unknown foreign vendors offering predatory pricing. After reading the articles, a few people questioned the ills of such a situation and even suggested that fierce competition will keep pricing low and vendors in check. In this article, I will explore whether such short-term thinking will help or hurt the industry in the long term and examine some what-if scenarios. I will also draw parallels to some historical lessons and, finally, offer suggestions on how the IoT ecosystem could protect itself.
Learning from history
The best parallel to what is happening in the IoT vendor space is the situation of American and European cellular infrastructure vendors during the 3G transition, in the late 90s and early 2000s. I vividly remember it because I was amidst all of it, working for one such company. The world was slowly moving from 2G to 3G. The infrastructure behemoths, mostly American and European companies including Lucent, Motorola, Nortel, Nokia, Siemens, Alcatel, and others, were trying to get their customers to move to 3G quickly. However, they soon faced unprecedented headwinds from unknown Chinese companies named Huawei and ZTE, offering extremely low pricing. It was alleged that their low pricing stemmed not only from lower costs but, more importantly, from the support of their government. American and European vendors, confident because of their decades of heritage and experience, never took these players seriously. But alas, because of the dot-com bust and intense price pressure, many of those behemoths folded in no time. Others cobbled together mergers to survive, but as much smaller shadows of their former selves. Only two among them remain in business, and that largely because of the US market, where Chinese vendors are not allowed. From the ecosystem perspective, there are far fewer choices of vendors globally, and even fewer in the US.
So, what can we learn from this harrowing experience? Well, simply making decisions on cost alone might be very attractive in the short run, but might have negative long-term consequences. Once the landscape changes, it cannot be put back.
Perils of inaction now
If this practice of offering artificially low prices on IoT devices and modules, enabled by Chinese government subsidies, goes unchecked, non-Chinese vendors will not be able to sustain such low margins and will edge toward bankruptcy or exit the market. Very soon, there wouldn’t be anybody of repute left.
In such a situation, the IoT needs of critical infrastructure such as the power grid, smart cities, national security installations, and others will have no option but to rely on unknown suppliers without any proven track record or reputation. The case would be similar for large enterprises, industrial complexes, and the like, where IoT devices are a basic staple. Confidence in the security of IoT devices should be unquestionable and not even up for debate. Consider 5G Massive IoT, which will build on the solid foundation of 4G IoT. Additionally, going forward, sharing of spectrum between defense and civilian cellular networks is going to be the norm. An early example of such an arrangement is CBRS, which allows sharing of spectrum between the US Navy and cellular operators. Any security breach in such deployments could expose critical military operations, including radar and satellite communication systems, to sabotage.
Generally, there are risks with relying on a group of suppliers all coming from the same region or country. What if trade wars flare up, resulting in high tariffs, or even worse, import/export bans, similar to the recent US ban on Huawei? In such a case, the whole critical infrastructure could come to a screeching halt. Such vulnerability also hands the foreign country a huge advantage in any trade negotiations.
Many of the Chinese vendors are very small, without any public, reliable information on their background, ownership, business, objectives, or motives. What if they plan to conquer the market now with low pricing, and then increase prices exorbitantly after all the competition has diminished? Even worse, what if they have ulterior motives? No matter how much these companies vouch for their authenticity and business objectives, unless they open themselves to close scrutiny or, better yet, list on reputable stock exchanges in the US or Europe, it is extremely hard to be convinced of their authenticity. If you consider the headwinds that Huawei is facing, even with its significant brand recognition, the path for unknown IoT companies will be even harder, if not virtually impossible.
How to ensure device security
Historically, utilities and many critical national infrastructure providers have been very conservative in their vendor selection. They make their vendors go through an extreme, multi-level vetting process, covering both technical and financial viability. They should continue this practice and include evaluation of overall ecosystem health, long-term impacts, and diversity of suppliers. Private enterprises should take the cue from them and be very careful in their vendor selection as well. The assessment should also include import bans, trade wars, and other such unlikely yet catastrophic scenarios.
IoT users should evaluate the lifetime cost of ownership of their IoT devices instead of just the initial cost. IoT devices typically have a very long life, exceeding ten years in some cases. Over such a long time, the cost of maintenance, timely upgrades, and quick fixing of security flaws can exceed the original procurement cost of the device. Additionally, these institutions should examine and understand the motivation behind predatory pricing and act with a long-term point of view.
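To make that point concrete, here is a minimal back-of-the-envelope sketch in Python. All the figures are illustrative assumptions, not real market prices; the takeaway is only that a decade of upkeep can dwarf the sticker price.

```python
# Toy total-cost-of-ownership comparison for an IoT module.
# All numbers are hypothetical, for illustration only.

def lifetime_cost(unit_price, annual_maintenance,
                  patch_cost, patches_per_year, years=10):
    """Initial price plus a decade of maintenance and security patching."""
    return unit_price + years * (annual_maintenance + patch_cost * patches_per_year)

# A cheap module with costly upkeep vs. a pricier, better-supported one.
cheap = lifetime_cost(unit_price=10, annual_maintenance=4,
                      patch_cost=3, patches_per_year=2)
sturdy = lifetime_cost(unit_price=25, annual_maintenance=2,
                       patch_cost=1, patches_per_year=2)
print(cheap, sturdy)  # 110 vs. 65: the "cheaper" module costs more over ten years
```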
As a last resort, the government and regulators should look at putting safeguards in place for the procurement of critical infrastructure. The focus should not be just on the network but equally, if not more, on the devices as well. For example, the US government banned some vendors from supplying cellular network infrastructure. A case could be made for similar safeguards for devices in critical uses as well.
The biggest step the IoT users, be it government agencies or private enterprises, can take is to make sure to create an environment to nurture diverse, strong, reputable, and reliable players who value security.
The Federal Communications Commission (FCC) will vote on Friday to virtually block Huawei’s access to the U.S. market, but this rare bipartisan action only protects one element of America’s digital infrastructure. In reality, the likeliest and most susceptible security vulnerabilities aren’t well understood by policymakers, and we’re at the beginning of a very long fight.
In the $2.4 trillion telecom sector, the dawn of 5G is more than a buzzword. It’s truly a new era full of great promise, as well as great danger. But our policymakers’ focus has only been on the big companies with name recognition, without attention paid to the less prominent ones that might pose much larger security risks.
Huawei and ZTE (another major Chinese manufacturer up for the FCC’s vote, but which doesn’t get the same publicity) are easy targets for the uninformed masses who fear all things China. Meanwhile, the national security threat from other Chinese-subsidized and foreign-controlled telecom companies is potentially more vast and insidious than our leaders in Washington, DC understand and acknowledge.
There’s been no mention by politicians, in news media, or on social media about the security risks posed by devices or cellular modules – the mini-computers that make up the brains of the Internet of Things (IoT). There will be 43 billion of them in the world by 2023, and consequently they’re the favored target for hackers. Unlike phones or chipsets, these modules are untraceable once embedded in devices. These elements are so critical in connected infrastructure that if a hostile state or player gains control with intent to attack the U.S., the scale of destruction is far more horrific to imagine than that of a compromised smartphone or social media account.
Unauthorized access to your iPhone or Facebook enables spying. But access to an IoT device enables direct action in the real world. Shutting off power to Washington, DC. Turning off traffic lights in Manhattan. Pumping the brakes on autonomous cars in San Francisco. Stopping heat in winter to homes in Minnesota. Interfering with medical devices in Florida.
Forget the compromised security of smartphones. A compromised module – one of dozens that’ll be in every American home within the next few years – could mean literal life or death.
Five of the top ten IoT module manufacturers are Chinese, and they rake in 71 percent of the industry’s revenue using the same government backing and Huawei playbook to stifle competition in the U.S. and Europe. China’s heavy investment in IoT in the country – coupled with considerable government subsidies – allows Sunsea, Fibocom and Quectel to be extremely price-competitive in global markets.
Industry insiders have been vocal in sharing stories of these companies slashing module prices below reasonable production costs. Driving out competition with a questionable pricing structure – and the consequent potential for future manipulation of affordability and availability – adds another layer to the concerns regarding 5G security.
It’s arguable that Chinese vendors Sunsea, Fibocom and Quectel are clones of Huawei, especially since they’ve effectively cornered the global market for the most critical components in the IoT. That’s why it’s important for politicians and security experts to glance up from their research on Huawei to better understand the implications of U.S. reliance on Chinese IoT manufacturers.
The U.S. government shouldn’t ban a company just for being China-based, nor target one just for being in the business of telecommunications or technology. Not every tech company in China is a stooge for the government with unreserved, evil intent. In fact, companies like Quectel and Fibocom thrive in good part due to legitimate innovation, amazing engineers and good quality.
Nonetheless, the FCC will vote on Friday on Huawei and ZTE. We must hope that this is just a first salvo in making 5G and the Internet of Things secure, with more investigation and possible action to come. If the Trump Administration truly wants to protect the American people from foreign interference via smart devices, the FCC and Congress need to be more strategic in looking at potential threats beyond the flashiest names.
The millions of IoT devices we use, knowingly or unknowingly, make our modern societies function. These include utility meters, traffic lights, and even connections to the national grid. 5G is elevating their use to even higher levels and making them an integral part of the country’s critical infrastructure.
But that also is making that infrastructure more vulnerable to security threats. Reps. Mike Gallagher and Raja Krishnamoorthi of the U.S. House Select Committee on China understand this threat and are rightly sounding alarm bells. It’s fascinating how these seemingly benign and almost invisible IoT devices can be such a grave threat.
IoT devices are an integral part of the national critical infrastructure
The U.S. IoT market is massive, estimated to be $199B in 2024, according to Statista. IoT technology is found in almost any connected device for individual or industrial use. Since IoT devices manage and control the country’s critical assets, including power, water, natural gas, and many industries, and will do so even more with 5G IoT, they are part of the national critical infrastructure.
Imagine the havoc the sudden collapse of the national grid or large-scale disruption of utilities can create. Such catastrophes can bring the country to a screeching halt, threaten lives, and cause lasting damage.
Despite its critical role, IoT security hasn’t gotten the attention it deserves from regulators and governments. It was considered a “business risk” to be managed by the industry. Fortunately, that is starting to change. The recent letters from the congressmen to the FCC, the Department of Defense, and the Treasury Department regarding cellular connectivity modules used in IoT devices indicate that lawmakers are now treating this as a national security issue.
Vulnerabilities of IoT devices
When it comes to cellular IoT devices, the biggest threat is the security of the connectivity module (aka IoT module) on which they are built. This module is the gatekeeper, which controls all the data going in and out of the device. If the module is compromised, the whole device, and in many cases all the systems it connects to, are compromised.
Connectivity modules can have many vulnerabilities. Backdoors could be built into the hardware or the software when modules are shipped from the factory (enabling so-called “zero-day” attacks) or introduced during the numerous upgrades modules receive over their more-than-ten-year lifespan. These upgrades are similar to the ones our smartphones receive, but they are usually executed automatically.
Because of prohibitive costs, operators can’t examine and verify every device and firmware update. No matter who creates these vulnerabilities, or how, they can be exploited by bad actors. If those bad actors are state-sponsored, the risk is even higher.
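One common mitigation is cryptographically signed firmware, which can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual update mechanism; it assumes an Ed25519 public key provisioned into the module at the factory and a hypothetical flashing routine.

```python
# Minimal sketch: refuse any firmware image that was not signed by the
# manufacturer's key. Assumes the 'cryptography' package is installed and
# that a trusted public key was burned into the module at manufacture.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def apply_firmware_update(image: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Return True and flash the image only if its signature verifies."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, image)  # raises InvalidSignature if tampered
    except InvalidSignature:
        return False  # unsigned or modified image: do not flash
    # flash_to_module(image)  # hypothetical routine that writes the image
    return True
```

Of course, signed updates only help if the signing key itself is trustworthy, which is exactly why the provenance of the module vendor matters so much.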
As FBI Director Christopher Wray mentioned in his recent testimony, “Hackers are positioning on American infrastructure in preparation to wreak havoc and cause real-world harm to American citizens and communities.”
The attackers can stay dormant for a long time and attack at a time of their choosing. Hence, it wouldn’t be wrong to say that any device with such vulnerabilities can become a ticking national security timebomb.
IoT security: A tragedy of the commons
IoT is a largely low-margin, low-revenue (per subscription) business with a highly cost-competitive market. Most operators manage security as a business risk. They invest just enough to protect against fraud and liability. National security probably never makes it to their priority list.
Considering the complexity, cost, and potential risks involved, the responsibility of ensuring the security of IoT devices, from a national security perspective, rests squarely on the regulators and the government. The simple and highly reliable approach to achieve that seems to be establishing a fully trusted supply chain comprising local players and players from trusted national partners.
This is where things get complicated. According to Counterpoint Research, almost a quarter of the US cellular connectivity module market is controlled by one Chinese company, Quectel. More alarmingly, a large portion of the IoT modules used in FirstNet, the cellular network used by first responders, are also Chinese.
And that’s precisely why these congressmen are concerned and asking relevant US departments to intervene. As opined by many law experts, Chinese laws require all Chinese companies “to support, provide assistance, and cooperate in national intelligence work.”
So, then the question arises: Is the Huawei-like approach of totally banning these companies the right strategy? If not, are there any other remedies available? What are the pitfalls? All these questions need to be addressed before taking any substantive action. Look out for my next article for details on them and possible answers.
Always Connected PCs (ACPCs)
Have you heard the phrase “converting poison into medicine?” Well, that’s kind of what is happening to the PC industry now. Let me explain. Not too long ago, the rise of powerful smartphones and tablets, primarily powered by ARM processors, decimated the PC market. Interestingly, the tenets of smartphones – always connected, long battery life, thin and lightweight – that caused the downfall of PCs are bringing life back into them. The introduction of ultra-thin laptops and 2-in-1s is helping PCs get their mojo back. In early December 2018, Qualcomm announced a major step in this smartphonification of laptops. Their new Snapdragon 8cx compute platform, the world’s first built on 7nm, not only embodies all those hallmark characteristics of a smartphone but also promises performance that will meet or exceed that of traditional Intel x86 processors. Most importantly, the Snapdragon 8cx will run the full Windows 10 Enterprise version and will natively run browsers and many other applications.
Qualcomm dipped its toes into the PC market by creating a new category, aptly named Always Connected PC (ACPC), which used repurposed mobile SoCs. It started with the Snapdragon 835 and, very recently, the Snapdragon 850. These chips were built for Android, then optimized for Windows 10 and computing devices. They ran a restricted Windows version and offered limited performance, mainly because x86 applications had to run through translation on ARM. They were good enough for light and simple tasks such as browsing and video, but not ready for processor-intensive apps or enterprise-grade use cases. The story is completely different for the newly announced Snapdragon 8cx.
Qualcomm says the Snapdragon 8cx is purpose-built from the ground up for computing and Windows 10. Supposedly, they have been working on this since 2015! The Snapdragon 8cx does share its architecture with, and was announced at the same time as, the flagship Snapdragon 855 mobile SoC. This will naturally attract skepticism that, just like previous versions, this platform might also be a slightly tweaked mobile SoC. However, when you look closely at the significant differences between the building blocks of the two, it is quite clear that the Snapdragon 8cx is a different breed. For example, the 8cx has the much more powerful Kryo 495 CPU vs. the 485 on the Snapdragon 855, and the clocking configuration of the eight CPU cores is different as well. The Snapdragon 8cx has the more advanced Adreno 680 Extreme GPU vs. the 640 in the mobile SoC. It also has features that are only found in high-end enterprise laptops, such as support for dual HDR 4K displays, up to 16 GB of RAM, NVMe SSD, UFS 3.0, and many more. Most importantly, during the launch event, Microsoft confirmed Windows 10 Enterprise support for the Snapdragon 8cx, which is a strong vote of confidence in the platform. Additionally, many popular applications, including the Chrome, Firefox, Microsoft Edge, and Internet Explorer browsers as well as Gameloft, Hulu, and others, run natively, and a wide range of apps are optimized for Windows on ARM.
When you combine these features with the trendsetting X24 LTE modem that provides up to 2 Gbps peak speed, Quick Charge 4, advanced audio capabilities with the aptX HD codec, and the hallmark ARM features of multiday battery life and always-on connectivity, I think there is no question that the Snapdragon compute platform and ARM architecture are ready for primetime and well-equipped to challenge the dominance of Intel x86-based platforms in performance computing. Qualcomm’s claim that Snapdragon 8cx performance is comparable to a competitor (supposedly an Intel Core i5) and is delivered at twice the battery life should send a chill down Intel’s spine.
Qualcomm confirmed that the Snapdragon 8cx can be integrated with the X50 modem for 5G connectivity, but for some reason didn’t make it a major selling point. It looks like they are worried about 5G overshadowing the compute story, or perhaps there will be laptops that do not support 5G. Qualcomm is tight-lipped about the reasons. In my view, although the X24 modem has excellent performance, an ACPC with 5G is the ultimate ACPC one could have. After all, it’s the “connected” PC; why not supersize it and make it the best on all aspects? Also, the huge capacity gains and efficiency improvements of 5G will enable operators to offer very attractive “always on” unlimited plans.
Coming back to the competitive landscape, ultra-thin PCs are the most profitable tier for Intel, and they have had a good run with them so far. Some devices, such as Microsoft’s Surface Pro and HP’s Folio, have shown that Intel Core i5 processors can be designed into attractive fanless laptops with long battery life. However, most other Intel x86-based laptops fall far short. With Snapdragon 8cx-based laptops planned to hit the market during the second half of 2019, amidst the busy back-to-school and holiday seasons, it will be interesting to see how the Qualcomm and Intel platforms compete and perform. Come 2020, this will very quickly turn into not just a processor battle but also a 5G battle.
With 5G, the ACPC battle gets even more interesting. Based on Qualcomm’s comments, it seems they will have 5G-based ACPCs in the market in early 2020, if not late 2019. Intel has announced its own 5G connected laptop plans with Sprint. Knowing x86 performance and Intel’s delayed 5G modems, it will be a tall order for Intel to beat the battery life and more mature 5G connectivity of Qualcomm ACPCs. With connected, ultra-thin, long-battery-life laptops continuing to gain popularity and Qualcomm catching up in performance, Intel must adapt to the extremely fast pace of innovation that smartphonification is bringing to the PC industry to compete effectively.
A bunch of recent events, including the announcements of the Microsoft Surface Pro X and Samsung Galaxy Book S, point to a turning point in the largely stagnant laptop market. These devices, dubbed always-on, always-connected PCs (ACPCs), bring the hallmark characteristics of smartphones to laptops while also providing enterprise-class computing performance. As a long-time observer and an industry analyst, I strongly believe that ACPCs are set to transform laptops and redefine personal computing.
After revolutionizing portable personal computing in the late 1980s and ’90s, laptops have not changed much. Of course, they have become a bit thinner, lighter and more powerful. But considering that you still need to carry the charger and look for Wi-Fi or other connectivity wherever you go, you can’t call those incremental improvements a big leap. These incremental steps look even smaller when compared to the speed at which smartphones have evolved.
ACPCs completely change the outlook for laptops and accelerate the pace of innovation. They are always on, connected to LTE or 5G, can run a full day without needing a recharge and provide performance on par with or better than today’s bulky laptops. All of this is made possible by a new breed of processors with a micro-architecture similar to the one used in smartphones.
Smartphone Revolution Powered By Arm Processors
Ever since their debut in the early 2000s, smartphones have been dominating the personal computing space. They have rapidly grown in both performance and influence. Almost all of today’s smartphones are powered by processors with a micro-architecture designed by the British company Arm Holdings. Smartphone players such as Apple and Qualcomm use processor cores designed by Arm.
(Full disclosure: Qualcomm is a client of my company, Tantra Analyst.)
These processors have been proven to be power-efficient. Designed primarily for portable devices, they seem to have previously focused more on power consumption than processing capability. But the evolution of these processors and the optimizations from the original equipment manufacturers (OEMs) have dramatically improved their performance in recent years. This has set Arm processors up for performance-focused devices such as laptops, PCs and even servers.
Laptops Have Survived The Test Of Time
Laptops have defied many predictions of their ultimate demise. First, netbooks were said to be the laptop killers, but they ended up just being a fad. Then tablets were supposed to replace laptops, but they never scaled up.
The way I see it, the biggest trait of laptops, which made them stand strong against these odds, was their ability to be a productivity and content creation tool — be it for personal and consumer-type use cases or enterprise ones. The basic needs for such use cases are excellent performance and support for thousands of existing Windows applications.
Writing The Next Chapter Of Laptops
The first attempt at making the Windows operating system (OS) compatible with Arm processors came circa 2012 with Windows RT, designed for tablets. But it turned out to be a dud, mainly because it couldn’t run existing applications. Its makers, Microsoft and Qualcomm, still believing in the concept, doubled down on their efforts. This round made sure Windows 10 and all those existing applications would work flawlessly on the Arm processors used in ACPCs.
It is debatable whether ACPCs are a new category or an existing yet transformed laptop category. Some OEMs, such as Lenovo, Samsung, and Asus, are continuing with traditional clamshells, whereas others like Microsoft are trying out the 2-in-1 model with detachable displays that convert to fully functional tablets.
I think it is telling that many PC vendors have introduced ACPCs. I believe that the attractiveness of bringing the smartphone-like battery life and user experience to laptops, the proliferation of 5G, along with a strong commitment from Microsoft and the entire PC ecosystem makes it clear that ACPCs are the future of laptops.
What’s Inside The ACPCs?
ACPCs are powered by Qualcomm Snapdragon platforms. The first-generation devices used optimized versions of the Snapdragon SD835 and SD850. But the latest ones, including the Samsung Galaxy Book S and Surface Pro X, use the purpose-built Snapdragon 8cx (the Pro X uses a modified version of the 8cx chip called the SQ1). The Snapdragon 8cx has a powerful CPU and GPU, as well as strong artificial intelligence capability.
I’ve seen many popular browser, video game platform, and media player developers porting their applications to run natively on Arm processors. Likewise, many enterprise vendors have ported their applications to Windows on Arm. Adobe announced that its drawing and painting applications will be available on ACPCs. And according to Microsoft, the Surface Pro X offers three times higher performance compared to the previous-generation Surface Pro 6, which used a conventional x86 processor. So, there is no question in my mind that ACPCs are now primed for running the high-performance workloads of consumers as well as enterprises.
The progress of ACPCs may be slower than some might have expected, but it takes time to transform an industry with more than three decades of history. I believe an Arm micro-architecture that is ready for performance-focused computing has repercussions beyond laptops, as there could be many more applications and use cases.
What This Means For Marketers
Because of the stagnant market, it seems that marketers have gradually reduced their attention to laptops and, instead, moved their strategies toward media more suited for smartphones. I believe ACPCs will drastically change that equation. Marketers will likely need to quickly pivot their marketing plans and spend. Specifically, the 2-in-1 model almost creates a new category of devices, and marketers will be well served if they capitalize on this growing popularity and devise their marketing plans around them.
We are at the turning point of personal computing, and at the dawn of a new era with devices powered by Arm micro-architecture. It will be interesting to watch it unfold, especially for an analyst and a keen industry observer like me.
The fun of being an analyst is that you get to test new gadgets firsthand and share your opinions without any inhibitions. It also comes with a sense of responsibility towards your readers. I got my Microsoft Surface Pro X about two weeks ago and have been using it as my daily driver ever since. My verdict – it is an excellent productivity notebook for a pro user like me, who extensively uses office applications, browsing, videos, and social media. Beyond that, it also signals the dawn of a new class of always-on, always-connected notebooks (aka ACPCs) that will redefine personal computing.
<<Side note: If you would like to know more about ACPCs, please check out my earlier articles here and here>>
Easy set-up
I bought a 16GB/256GB Pro X model with a keyboard and stylus. The Windows setup on this was a breeze. The impressive part was the ease of enabling cellular connectivity, just like on a smartphone: push the nano-SIM in, a couple of clicks, and you are ready to go. I have been using connected laptops since the 2008/3G days, and it was always a pain to transfer a subscription from one laptop to another. Although I didn’t utilize it, a user-removable SSD drive is another neat feature. The best part of this machine is its always-on capability, just like a smartphone: you come in front of it, your face is recognized, and it is ready to go. Additionally, OneDrive allowed me to move files from my old laptop seamlessly.
Ever since setting it up, I have been using it as my primary computer for working in my home office, for meetings with clients, for bringing to my son’s karate and other classes, etc. Thanks to the Snapdragon/SQ1 processor, the Pro X is so thin and light that carrying it around is extremely convenient.
A solid productivity machine
The biggest strength of the Pro X is that it is a great workhorse, and using it is a joy! Its bright display is beautiful, and its thin bezels make a full 13” screen fit in a small form factor. Coming from my 13.3” laptop, I felt right at home. I am a power user of many of the Microsoft Office tools, including Word, Excel, PowerPoint, and Outlook. The user experience was snappy and super responsive, even when multitasking with lots of documents, spreadsheets, and presentations. Switching between windows of the same app or between different apps was very smooth.
I use emails on Outlook as my to-do list—keeping many email windows (more than 15) open till the action items in them are dealt with. My previous laptops had issues dealing with this, especially when the laptop was put to sleep and turned back on. Many times Outlook would become unresponsive, requiring restarts. But Outlook on Pro X has been pretty stable so far.
A lot of my work happens through the browser, and Chrome is my favorite. I usually have more than ten tabs open, spanning multiple Gmail accounts; local, national, and international news sites with video feeds and ads; TweetDeck and Twitter pages; the Yahoo Finance page; multiple forums that I regularly follow; WhatsApp Web; Google Sheets and Google Photos that I share with my wife; Facebook; and others. I also use tabs as my to-do list. My kids call me crazy when they see how many tabs I use. Surprisingly, the user experience was smooth even with that many tabs open. As you might know, Chrome currently runs in emulation mode. Microsoft recently announced the beta of its Edge browser that will run natively on ARM processors (i.e., on the SQ1), which should further improve performance and battery life. I am thinking of migrating to Edge and evaluating the experience myself.
So, all in all, I was very impressed with the workload the Pro X could take; it proved itself a solid machine.
A perfect companion for travel and offsite work – battery life and connectivity
The biggest differentiation of ACPCs such as the Pro X, as touted by Microsoft, Qualcomm, and Arm, is their more than a full day of battery life. I really experienced it while using the Pro X. I would always have at least 10-20% of battery left after a full day of work (8-9 hours), using a mix of Wi-Fi and cellular connectivity. I bet I could eke out even more with optimized screen brightness and connectivity settings.
The Pro X transformed how I go out for meetings and travel. I would always bring the charger with my old laptop to avoid battery anxiety, which necessitated carrying a bag. Once I decided to get the bag, I would throw in lots of “just-in-case” items that I hardly use. But with the Pro X, voila! No anxiety, no charger, no bag, and none of the other junk! This thing is so sleek, light, and stylish that I carry it like a notebook! And a nice stylus with a handwriting converter to boot! Additionally, with fast charging, its battery can go from 0 to 100% in a little over an hour.
For a road warrior like me, integrated cellular connectivity is a no-brainer. It is such a relief that I am always connected, no matter where — no need to search for Wi-Fi, no worries about security and privacy, and no need to use my phone’s hotspot and worry about its battery running out.
What about gaming and other incompatible apps?
This is the most frequent question I encountered when carrying or using the Pro X in public. Well, I am not a gamer, and, it turns out, I don’t use any of the x86 apps that lack 32-bit versions (only 32-bit x86 apps run in emulation on the Pro X). So, I am not the best person to give a judgment on that.
There have been reports of people having trouble running games on this machine. That has actually worked in my favor! Ever since I opened the Pro X package, my teenage son has had his eye on this thing, always tinkering with it. I think he tried a few of his favorite games, such as Minecraft, Fortnite, and CS:GO. I have a feeling either they didn’t work or he didn’t like the user experience, because, after the first couple of days, he went back to his powerful gaming rig. Obviously, the Pro X is no match for his purpose-built, beefy desktop.
What are the misses?
I think the biggest miss is its steep price tag. Even the most basic configuration with only the keyboard costs $1,100 plus tax. So, this is no mainstream computer; it is targeted toward those who value its premium design and features.
Despite the premium cost, I was surprised that there was no cellular data plan included. I would have expected Microsoft to bundle at least a few months, if not a year, of data to let consumers evaluate the always-connected experience.
The Pro X is literally a notebook, not a laptop. As with any Surface Pro, it is almost impossible to use it on your lap.
Heralding the ACPC era
Many people might review the Pro X like any other expensive gadget, on its merits and misses. However, the relevance of the Pro X goes far beyond this one product. Its performance conclusively proves that ACPCs are real and can deliver on the promises their proponents Qualcomm, Microsoft, and Arm have been making for the last two years. The Pro X also shows the strong commitment these companies have to the ACPC concept. As mentioned, the Pro X is not a mainstream device, but it heralds a new era of personal computing, and I am sure more cost-effective options will soon make Arm-based ACPCs mainstream.
Qualcomm, during its annual Tech Summit in Maui, Hawaii, unveiled a comprehensive portfolio of platforms for Always-On, Always-Connected PCs (ACPCs) to cover the full spectrum of tiers and use cases. This announcement further solidifies the industry’s move toward ACPCs, led by Qualcomm, Microsoft, and Arm.
<<Side note – If you would like to know more about ACPCs, please check out my earlier articles here, here and here. >>
A broad portfolio of offerings
The Snapdragon 8cx, announced at the same event last year, was the first real ACPC platform that brought Arm chips into the performance and enterprise computing space. Since then, the 8cx has powered a handful of devices, including the trend-setting Microsoft Surface Pro X, the stylish Samsung Galaxy Book S, and the first 5G-capable ACPC from Lenovo. Many other designs are in the pipeline.
While the Snapdragon 8cx was targeted at the premium and high-performance segment, the newly announced Snapdragon 8c and Snapdragon 7c offer OEMs the choice to address the other tiers in the highly competitive laptop space. The tiering is based on CPU, GPU, and DSP performance, Artificial Intelligence (AI) and Machine Learning (ML) capabilities, and cellular connectivity speeds. However, Qualcomm never forgets to emphasize that even with tiering, all the platforms squarely deliver on the ACPCs’ famed promise of smartphone-like ultra-thin form factor, multiday battery life, and excellent connectivity, without any compromises. This promise is attractive for any tier, and that’s why almost every major PC OEM has embraced ACPCs.
Snapdragon 8c for everyday laptops
The key aspect of the Snapdragon 8c is enabling sub-$800, highly capable consumer and enterprise ACPCs that excel at high-productivity workloads as well as top-notch entertainment and multimedia performance. The 8c is a beast, sporting a 7nm octa-core Kryo 490 CPU, Adreno 675 GPU, 4-channel LPDDR4x memory, support for NVMe SSD and UFS 3.0, a dedicated Hexagon AI/ML Tensor Accelerator, an integrated Snapdragon X24 LTE modem, and many other impressive features.
The Snapdragon 8c offers 30% higher system performance than its predecessor, the Snapdragon 850, more than 6 Trillion Operations Per Second (TOPS) of AI/ML performance, and up to 2 Gbps of cellular speed.
You can get more detailed specifications of this platform here.
Snapdragon 7c for entry-level ACPCs
The primary focus of the Snapdragon 7c is to bring the ACPC experience to cost-conscious, entry-level laptops. These laptops are highly functional, with a sub-$400 price point. The 7c sports an 8nm octa-core Kryo 468 CPU, Adreno 618 GPU, 2-channel LPDDR4x memory, robust AI/ML support unheard of at this tier, and an integrated Snapdragon X15 LTE modem, among other things.
It offers 25% higher performance than competing solutions in the entry tier, more than 5 TOPS of AI/ML performance, and up to 800 Mbps of cellular speed.
You can get the detailed specifications of this platform here.
Busting the myths of portability
Until now, portability in computing has always meant a complex trade-off between weight and size, performance, battery life, and cost. If you wanted a thin and portable computing device, the only option was to use a tablet and be content with limited performance and crippled functionality, without support for a productivity OS such as Windows 10. On the other hand, if you wanted robust performance and long battery life, you had to cope with large and bulky devices with extended battery packs. And if you wanted a combination of these features, you had to be ready for a hefty price tag.
But with ACPCs, you get an uncompromised experience without any tradeoffs: an Arm architecture that offers superior battery life and performance, full Windows 10 support for unhindered productivity, and an integrated cellular modem for always-on connectivity. All of that comes together in a thin, lightweight, and very attractive form factor, just like your smartphone.
The ACPCs are essentially aligning the computing industry with the smartphone industry. That will bring the smartphone industry’s hallmark of rapid innovation to the computing industry. Together both will benefit from the large economies of scale, cost-efficiency, and a huge ecosystem of OEMs, app developers, consumers, and enterprise players. That, in turn, has the potential to revitalize the stagnant and uninteresting laptop market and bring it much needed excitement and growth.
In other words, ACPCs are set to challenge the status quo of Intel’s x86 architecture and revolutionize the laptop/personal computing market.
In closing
Qualcomm’s announcement expanding the reach of ACPCs illustrates how the “Windows on Snapdragon” concept that Qualcomm, Microsoft, and Arm envisioned a few years ago is slowly but steadily coming to fruition. The comprehensive portfolio of platforms will pave the way for making ACPCs mainstream, bringing their benefits to all market segments, not just for the premium tier.
It will be interesting to see how the tussle between deeply rooted traditional x86 architecture and the disruptive Arm architecture unfolds and shapes the laptops and personal computing space.
While smartphones are all the rage in 5G, the market trends are aligning for a quiet revolution of 5G-enabled laptops (5GPCs) and other non-smartphone computer devices. The world’s first 5GPC, Lenovo’s Yoga 5G, was introduced at CES 2020, kick-starting the process. Although always-connected, always-on laptops (ACPCs) have been around for some time, their widespread adoption has been constrained mainly because of restrictive and expensive data pricing. The extremely high capacity and improved efficiency of 5G, which allows operators to offer attractive pricing combined with the remarkable improvement in the performance of ACPCs, has the potential to push the 5GPC market into high gear.
5G Offers The Best Network Technology For ACPCs
5G traction has been beyond anybody’s expectations. As of the end of 2019, 348 operators were investing in 5G and 61 operators had already commenced 5G services. The operators who have launched are steadily expanding their coverage. The introduction of dynamic spectrum sharing (DSS) — which allows 5G to use the 4G spectrum, expected commercially in the second half of 2020 — will substantially improve coverage. Thanks to the diligent work of regulators around the world, 5G has over 10 times more spectrum than 4G in many cases. That includes all the bands: higher (e.g., millimeter wave), middle (e.g., 2.5 and 3.5 GHz) and lower (e.g., 600 MHz).
Although 5G’s super-high speeds get all the attention, the biggest advantage of 5G is its extreme capacity, thanks to all that spectrum. That means cellular operators have the opportunity, more than ever, to experiment with new pricing and data plans. We already see glimpses of that in the true unlimited data plans for smartphones and fixed wireless access (FWA) services and plans. I strongly believe that 5GPCs will be a worthy addition to the new horizons operators will explore with 5G.
For the operators pouring billions of dollars into 5G network build-out, the sooner and the more users they get on that network, the better. The abundant capacity of the 5G network allows operators to move laptop users into a new usage paradigm: from today’s “data sipping, only turning on the cellular connection when needed, always conscious of hitting the data limit” mindset to the “anywhere, anytime, worry-free” paradigm.
5G also allows true service bundling: a single contract and attractive pricing for smartphones, FWA, laptops and other connected devices. This, while reducing the cost for users, will increase the overall average revenue per user (ARPU) for operators. Bundled pricing brings service stickiness and builds long-term customer relationships. Operators could also work with 5GPC device OEMs to bundle the connectivity as part of the device cost, for at least the first months or year of 5G service. As a seasoned ACPC user, I know that once you experience the liberation from hunting for hot spots and worrying about their safety, you will hardly go back, as long as the cost of that experience is reasonable.
5GPCs Will Be The Best ACPCs
ACPCs have been continuously improving their performance and are now ready to be productivity, enterprise and performance laptops. For example, the recently announced world’s first 5GPC by Lenovo offers high performance and 24-hour battery life. (Full disclosure: The laptop is powered by Qualcomm Snapdragon 8cx, and Qualcomm is a client of mine.) With a 5GPC, you can work from virtually anywhere without worrying about being near a power outlet or a Wi-Fi hot spot. The data speeds with 5G should be far better than any regular hot spot would provide.
With today’s traditional laptops that have shorter battery life, even if you had cellular connectivity, the untethered experience is limited because you have to always think of charging options. The extremely long battery life of ACPCs makes them truly untethered. Not being tethered physically or wirelessly is an exhilarating experience. And it is logical to think people would be willing to spend a little bit more for this higher perceived value.
5GPCs will be particularly attractive for enterprises. There are many reasons for this, and the biggest one is security. One of the main security risks for enterprises is their employees connecting laptops to unknown, unsecured Wi-Fi hot spots. With 5GPCs, IT departments will be certain that their employees will always be connected to a secure known 5G network. The potential costs of lost data or security breaches would certainly outweigh any minimal increase in the cost of 5G cellular connectivity. Also, 5GPCs bring many other benefits to enterprises: Integrated GPS allows reliable asset tracking and security mechanisms such as geofencing; being always on, laptops will always be up to date with the latest security patches and updates. Of course, the increase in employee productivity by being reliably connected all the time with excellent speeds goes without saying.
5GPCs will bring much-needed excitement to the largely stagnant laptop market. If managed properly, the 5GPC trend has the potential to create a new full replacement cycle, which might last for years.
All the stars are aligning for 5GPC to be an attractive market for the industry. 5GPCs have the performance to make the best use of 5G and provide a differentiated experience. Both consumers and enterprises will benefit enormously from 5GPCs. Cellular operators can utilize 5G’s extreme capacity to offer services that make true anywhere, always-connected, fully untethered experiences possible. But it will only be a reality if they can offer attractive and innovative pricing and data plans. With major 5GPC device announcements trickling in and operators looking to expand their 5G offerings, it will be interesting to see how the story of 5GPCs plays out.
For the last few weeks, while the influencer world was busy with testing and reviewing the Samsung Galaxy S20 and Galaxy Z Flip smartphones, I was diligently using and testing another equally important and impressive Samsung product—the Galaxy Book S—the latest always-on, always-connected PC (ACPC). My verdict? It defines what portable laptops are meant to be. However, being an analyst, I can’t stop myself from giving the rundown on why I think so and how it provides a glimpse of the future of laptops.
Purchasing and setting up Book S
The Galaxy Book S comes in only one configuration: the Snapdragon 8cx processor, 8GB of LPDDR4X RAM, and a 256GB SSD (with a microSD slot supporting up to 1TB), running Windows 10 Home. I bought mine on the Samsung website. Ordering was a breeze, although Samsung may confuse buyers by showing only Verizon and Sprint as the supported carriers. I bought the Verizon version, paying in full ($999 + tax). However, it came factory unlocked, and it worked perfectly fine with Sprint, T-Mobile, and Google Fi. I am reasonably sure it would work with AT&T as well. I have sought clarification from Samsung on whether the Verizon and Sprint versions are different SKUs with any major differences, such as supported spectrum bands, carrier aggregation combinations, etc. I am yet to hear back (I will update this article if I do in a reasonable time). I believe Samsung is artificially limiting the reach and the market opportunity by showing only two operators, even though the device works with virtually any operator. This is important because other laptops in this category support only certain operators. For example, the HP Spectre works only with AT&T and T-Mobile.
The setup was easy. I did have an issue with the keyboard backlight not working, which was resolved with a Windows update. The backlighting has three levels, which is nice, but the first level is dim enough that you might mistake it for not working except in low-light situations.
Incredibly thin and light, with extremely long battery life – perfect for travel or the office
I have used a lot of laptops in my professional life, and that is an understatement. This is by far the thinnest, lightest laptop that did everything I wanted while providing the longest battery life. The official dimensions can be found here. My workloads are primarily productivity-focused. As I explained in my earlier article, I use more than 15 email windows and multiple sessions of Microsoft Office applications, including Word, Excel, and PowerPoint, and I usually have more than 20 browser tabs open at a time. The Samsung Galaxy Book S, with its Snapdragon 8cx processor, never struggled under this load. There is something to be said about the new Chromium-based Microsoft Edge browser, which comes as the default. It is fast and stable and supports Chrome extensions, so I never miss my previous favorite, Chrome! Edge provides native ARM64 support, so its battery life performance on the Snapdragon compute platform is beyond compare versus Chrome, which runs in 32-bit emulation mode.
The Galaxy Book S is a perfect companion for a road warrior like me. However, thanks to COVID-19, my travel is severely curtailed. During the limited travel I did with the Galaxy Book S, I never carried its charger for single-day trips or in-town meetings. That means no backpacks, no other bags to carry, just the Book S, like a notebook. I ended each of those days with 30-40% of the battery still remaining. Truly remarkable.
Without travel, I have converted the Galaxy Book S into my home workstation. With an external 32-inch WQHD (1440p) monitor, mouse, and keyboard, all connected through a USB-C hub, I almost forget that it is a laptop, such is the user experience!
The Galaxy Book S always gets compliments on its thinness and weight, whether I use it in meetings or at my son’s karate class. Many wonder how one could fit a fan in such a thin chassis. Some of my curious IT friends even tried to search for the fan and vents! It is a kick to tell them that it has no fan or vents, thanks to the Qualcomm Snapdragon 8cx processor inside.
The secret behind the incredible size and battery life of the Galaxy Book S
The biggest challenge laptop designers face is the tradeoff between size (thinner and lighter) and performance and battery life, and designers seem to have reached a saturation point in that tradeoff. It all boils down to the thermal characteristics of today’s processors: the higher the performance, the more power used and the more heat generated. There are two options to manage this heat: either use a fan and proper ventilation or throttle the performance. Most of today’s laptops, even ones such as the MacBook Air, use fans, which make them big and bulky while also increasing power consumption. Premium sleek devices, such as the older generations of Microsoft’s Surface line-up, use throttling, which compromises the user experience. And in terms of increasing battery life, the only option is adding bigger batteries, which increases weight.
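To illustrate the reasoning, here is a toy Python model with made-up numbers (not measured figures for any real chip): in a fanless chassis, sustained clock speed is capped by how much heat the chassis can shed passively, so a more power-efficient core sustains higher clocks.

```python
# Toy thermal model, illustrative numbers only: a fanless chassis can
# dissipate a fixed number of watts, so sustained clock speed is limited
# by the chip's power draw per GHz.
def sustained_clock_ghz(chip_tdp_watts, watts_per_ghz, chassis_limit_watts):
    usable_watts = min(chip_tdp_watts, chassis_limit_watts)  # throttle point
    return usable_watts / watts_per_ghz

# Hypothetical power-hungry core vs. hypothetical efficient core,
# both in the same 7 W fanless chassis:
print(sustained_clock_ghz(15, 5, 7))  # 1.4 GHz sustained after throttling
print(sustained_clock_ghz(7, 2, 7))   # 3.5 GHz sustained, no throttling needed
```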
Now comes the Snapdragon 8cx compute platform used in the Samsung Galaxy Book S. It is built using the best of Qualcomm’s mobile heritage, combined with the performance you’d expect of a PC. It is based on Arm’s architecture, offering performance similar to an x86-based Core i5. The Snapdragon 8cx provides consistently high performance with minimal heat production in an extremely power-efficient way. So, without fans or cooling constraints, and without the need for bigger, heavier batteries, device designers can develop extremely thin, light, and high-performance laptops such as Samsung’s Galaxy Book S, whose battery life is measured in days, not hours.
Galaxy Book S vs. Surface Pro X
Since I have reviewed and have been using the Microsoft Surface Pro X for the last few months, a comparison between the two is a question I am often asked. Well, I like them both. They have some common uses, but there are many where one is more suited than the other. For example, as I explained in my article, the Pro X can be off-balance when you try using it on your lap, whereas the Galaxy Book S proved to be a perfect fit for such uses. As a detachable 2-in-1, the Pro X is ideal if you also like to use your device as a tablet with the stylus. The Galaxy Book S is a clamshell design that is more suitable as a daily driver or a workstation, easily connected through USB-C docks and such. Although the Galaxy Book S has less RAM (8GB vs. 16GB), I haven’t seen that affect my productivity apps much. But if you use more graphics- and processor-intensive applications, the difference might be more apparent. Of course, the Pro X with all the accessories costs upwards of $1,500, whereas the Galaxy Book S is around $1,000. I currently use both devices. All my content is on OneDrive, and since both are always connected, I can seamlessly switch between the two, no matter where I am.
The biggest concern with ACPCs remains app compatibility. More apps are being ported to run natively on ARM64, though some applications, such as certain games and video editors, are still incompatible. It is worth noting, though, that most of those demanding applications don't run well on other thin-and-light notebooks either. The other concern for some is high cellular data pricing, but operators now have bundled options where one can get reasonably priced unlimited add-on data plans.
A glimpse of the future
The Samsung Galaxy Book S is only the second ACPC based on the Snapdragon 8cx, and it supports best-in-class 4G LTE connectivity, with peak speeds up to 1.2Gbps. But we are at the dawn of 5G, which promises multi-gigabit user speeds, extreme capacity, and lower latency. 5G ACPCs (aka 5GPCs) will be the best devices to utilize this unprecedented connectivity everywhere, as I have explained here. The Book S gives a glimpse of what those 5GPCs have to offer in the years to come. In fact, the world's first 5GPC has already been announced, and many more are on the horizon. I can't wait to get my hands on those!
It is bliss, as an engineer, to witness a whopping 2 Gbps speed on a live commercial network, using an off-the-shelf device. That was my experience a few weeks ago, using the new Lenovo Flex 5G on Verizon's live mmWave network in San Diego. It is even more amusing considering that I had tested 9.6 Kbps (yes, kilobits per second) speeds on 2G networks only two decades ago, and tens of Mbps only a few years ago.
The Flex 5G is the world's first 5G PC, and it is powered by the Qualcomm Snapdragon 8cx 5G compute platform, using the Snapdragon X55 5G Modem-RF system. It represents what an ideal productivity 5G PC should be: ultra-high-speed mmWave and Sub-6GHz 5G connectivity, the famed long battery life of Always Connected PCs (ACPCs), robust performance, and a lightweight fanless design, all enabled by the Snapdragon processor.
It is a perfect device for a user like me: a professional who is always on the move and needs a light, high-performing laptop with top-notch connectivity, without the hassle of constantly looking for Wi-Fi hotspots and power outlets.
Immediately after buying the Flex 5G, I couldn’t stop myself from testing and tweeting my initial thoughts. I used it extensively as my daily driver and travel companion for more than a month, and I came out very impressed.
Side note: If you would like to know more about ACPCs, including reviews of the Microsoft Surface Pro X and the Samsung Galaxy Book S, check out my other articles in this series.
Solid and highly functional build
Built in Lenovo's popular Yoga style (in fact, this laptop is called the 'Lenovo Yoga 5G' outside the U.S.), the Flex 5G's aluminum and magnesium body looks sleek and stylish. At 2.9 lbs., it is slightly heavier than the other ACPCs I have used (Surface Pro X and Galaxy Book S), but you really don't feel much of a difference when carrying it around, as it is still very light and portable. I especially liked its rubbery back and sides, which offer a satisfyingly firm grip when holding it, and stability when placed on uneven surfaces. This came in very handy during my recent RV trip with the family. The Flex 5G would sit firmly, no matter where I placed it (on the seat, on the table, or anywhere else), even when driving on bumpy roads.
Blazing fast 5G connectivity
The Flex 5G's claim to fame is its 2 Gbps 5G mmWave speed. Unlike many peak-speed claims, you can actually get that speed when standing close to the base station! Generally, as you move away from the base station and as network load increases, speeds drop to hundreds of Mbps, though that is still notably better than 4G and better than most home networks. I did extensive testing on Verizon's 5G UWB (mmWave) live network in San Diego and was blown away by the speed.
When I tested, Verizon had two sites in San Diego, though they seem to have added two more recently. The coverage is limited to a couple of blocks around those sites. Most of my testing was near the University Heights site. I could get speeds in excess of 1 Gbps more than a block away, as long as there was line of sight (LoS). I would get decent speeds even without LoS, but the device would quickly drop to 4G LTE when I moved behind buildings or other major obstructions. Thanks to the Flex 5G's dual connectivity, the handoffs in and out of 5G coverage were seamless. I have included screen captures of some of the test results. Verizon has good 4G coverage offering high speeds in the area, which was a big plus.
I did some speed-test comparisons between the Flex 5G and the Samsung Galaxy S20, which also utilizes the Snapdragon X55 5G Modem-RF system. Generally, the speeds on the Flex 5G were slightly higher, and the coverage a little better, than on the S20. I would attribute that to the laptop having better antennas (probably with higher gain), better antenna spacing, and fewer near-end obstructions such as hands and other body parts.
During the testing, I discovered that Ookla, Netflix Fast, and other speed-test sites would not show full speed when run in browsers (Edge, Chrome, and Firefox); the speeds topped out at 600-700 Mbps. The Windows 10 apps, however, showed the full gigabit speeds. This confused me a bit. When I checked with Ookla, they could not give any specific reason for this behavior and suggested always using the app for accurate results. This indicates that browsers are not yet optimized to utilize such high speeds, which might create user-experience challenges if not addressed soon.
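For anyone who wants to reproduce this browser-vs-app comparison, below is a minimal sketch using the community speedtest-cli Python package, which runs the same Ookla measurement outside any browser. This illustrates the method only; it is not the exact tool I used for the numbers above.

```python
# Minimal app-level speed test via the speedtest-cli package
# (pip install speedtest-cli). Running outside the browser avoids
# any browser-side throughput ceiling. Library results are in bits/s.
import speedtest

st = speedtest.Speedtest()
st.get_best_server()                 # pick the lowest-latency Ookla server

down_mbps = st.download() / 1e6      # measured downlink, Mbps
up_mbps = st.upload() / 1e6          # measured uplink, Mbps

print(f"Down: {down_mbps:.0f} Mbps, Up: {up_mbps:.0f} Mbps, "
      f"Ping: {st.results.ping:.0f} ms")
```

Running this alongside a browser-based test on the same connection makes any browser bottleneck immediately visible.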
Days-long battery life
The Flex 5G, just like the other ACPCs I have reviewed, lives up to its promise of long battery life. It sports a 4-cell 60Wh battery, slightly bigger than those of comparable Yoga laptops. This is made possible by the Qualcomm Snapdragon 8cx 5G compute platform: it is so thermally efficient that devices utilizing it don't need a fan or any other specialized cooling, which frees up space and weight margin. This also helps the Flex 5G remain lighter than other comparable models.
Instead of testing Lenovo's claimed 26 hours of video playback time, I tested the laptop for my typical productivity use. This included multiple email tabs, lots of browser tabs, Microsoft 365, Zoom and other conference-call apps, YouTube, audio/podcast recording and editing, and more. I got more than two days of battery life from a single charge while doing all of this, with the laptop connected primarily through Wi-Fi and occasionally through cellular. The battery lasted even longer during my limited travels, as the usage was lighter, even though the laptop was always on a cellular connection. I wish I had done more testing during travel, but COVID-19 didn't allow it. Since I often travel to the major cities and areas where Verizon and other operators are deploying 5G, I could have fully utilized the benefits of 5G connectivity.
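As a back-of-the-envelope check on what "more than two days" implies, here is a quick sketch; the battery capacity is from the spec sheet, but the active-hours figure is my rough assumption, not a measurement.

```python
# Rough average power draw implied by the battery result.
# battery_wh is the spec-sheet capacity; active_hours assumes
# roughly 8 working hours per day over two days (my estimate).
battery_wh = 60.0            # 4-cell battery capacity, watt-hours
active_hours = 2 * 8         # two full working days of mixed use

avg_draw_w = battery_wh / active_hours
print(f"Implied average draw: {avg_draw_w:.1f} W")   # ~3.8 W
```

An average draw of under 4 W for a full productivity workload is exactly what a fanless design needs, and it is the crux of the ACPC story.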
Performance tuned for productivity
The Flex 5G is a perfect machine for productivity. I found its processing power to be more than adequate for all my usage (mentioned above). Even with all these applications running, it never got hot. I am not a gamer, nor do I use any high-intensity graphics applications, so I cannot speak to application compatibility or performance for those needs. It is worth noting that such thin, lightweight laptops are not targeted at such users anyway.
One revelation was how accustomed I have become to the absence of fan noise during my more than eight months of using Snapdragon-powered ACPCs. A couple of weeks ago, when I had to use a buddy's laptop, its fan noise was so distracting it drove me crazy. Once you experience the pure silence of these ACPCs, it is hard to go back to traditional devices with loud, heavy fans.
The Flex 5G comes with Windows 10 Pro and one year of free Microsoft 365 Personal. It has 2x2 802.11ac Wi-Fi with MU-MIMO (aka Wi-Fi 5), which performs excellently. I was especially impressed with the quality of the on-board microphone. While moderating a 5G panel at the recently held IWCE Virtual event, my headset broke at the last minute and I had to use the laptop mic; I was amazed by how good it sounded.
Some misses and room for improvement
Despite the excellent overall experience, there are some misses too. The 256 GB SSD is rather small for a premium productivity laptop. It is even worse considering that there are no upgrade options: the SSD is not field-replaceable (it is soldered to the board), and there is no microSD slot. For its thickness and weight, Lenovo could have provided a full-sized USB-A port, in addition to, or instead of, one of the two USB-C ports. Also, it currently supports only Verizon 5G connectivity in the United States (the unlocked version works only in 4G mode with other operators).
Verizon's extremely limited 5G coverage leaves a lot to be desired. mmWave needs dense deployment of sites, as I explained in my earlier article, and I hope Verizon builds them out soon. Verizon will also soon enable the Dynamic Spectrum Sharing (DSS) feature, which allows 5G to use the existing 4G spectrum and will tremendously help to rapidly expand 5G coverage; the Snapdragon X55 inherently supports DSS. But with limited 4G spectrum, gigabit speeds will not be possible. Verizon also needs to improve its customer support for ACPCs. I had some issues activating the device, and the frontline reps had no clue where to redirect me. It took a few tries and a couple of hours to get to the right person and get my service going.
The Lenovo Flex 5G is available for $1399 on the Verizon website (though it shows $1699 on the Lenovo webpage for some reason), which is anywhere from $200-$300 higher than comparable thin, lightweight premium productivity laptops. Considering that this is the first of its kind, and that you are futureproofing your investment, it might be worthwhile for many mobile professionals like me. A lot also depends on how quickly the 5G coverage improves, and how soon we start traveling and moving around again like before.
In closing
The Lenovo Flex 5G lives up to its billing as the world's first 5G PC and shows what a 5G PC should be. It delivers on all the characteristics of a Snapdragon-powered ACPC: a sleek fanless design, lightweight build, and multi-day battery life, crowned with ultra-high-speed mmWave 5G connectivity. The device's 5G usability is currently somewhat limited by Verizon's coverage. However, Verizon is working hard to add more mmWave sites and bring in DSS, which should substantially expand coverage. The Flex 5G delivers a great computing experience now, and it will only get better as 5G coverage grows.
To read more reviews like this as well as to get an up-to-date analysis of the latest mobile and tech industry news, sign-up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
FTC vs. Qualcomm Antitrust Trial
The ongoing saga between FTC and Qualcomm
It is unbelievable when one of the world's richest companies complains that it is an undue burden to pay for the innovations that power its high-margin products. But it sure looks like a well-orchestrated war on innovation with sinister motives when a government agency such as the FTC (Federal Trade Commission) joins hands with that company in beating down a supplier one-tenth its size that is a proven technology pioneer.
I am talking about the trial underway between the FTC and Qualcomm in the U.S. District Court in San Jose, California. I am not a lawyer; rather, I am a passionate engineer who was part of the 2G, 3G, 4G, and now 5G transitions. I know first-hand what it takes to conceive, build, and deploy wireless technologies. Here are my thoughts on this legal tussle and its potential consequences.
Wireless communication, especially for broadband data, is a fascinating invention in that it is largely invisible, literally and metaphorically. Unlike beautiful smartphone screens, artful industrial designs, or clever apps, wireless has been an enigma attracting little attention or appreciation. You only realize its importance when out of coverage! Oh, the agony, the insecurity, and the fear of missing out! The device is called a smart "phone" for a reason: without the "phone" functionality, most of those smarts have little value!
“Wireless data” is the defining technology of the smartphone, not just another feature
Why am I explaining the importance of wireless data? In the current FTC trial, the Commission’s lawyers and witnesses put forward two complaints: 1) Licensing fees should be based on the modem’s price, not that of the device, and 2) Qualcomm’s licensing fees are too high. Looking at the first, wireless data is the fundamental and defining technology of any smartphone. Also, it is a misconception to think that wireless data technology is only contained within the “modem” block. In reality, the functionality is the result of a comprehensive system design that makes the smartphone work as a complete device, with all subsystems and software in it. Additionally, the design includes complex interactions with numerous infrastructure and network (radio, core, and cloud) elements to function as a well-orchestrated system. So, it would be disingenuous and utterly ridiculous to limit the value of all of this technology to a small percentage of the price of a modem.
On the licensing fees argument, fees should be determined by the value the technology imparts to the overall usefulness of the device, not correlated with a single isolated part. Also, the valuation of wireless technology should be market-driven, not arbitrarily or subjectively determined by the FTC or another regulatory authority. If you accept the notion of regulatory price-fixing, then why stop with Intellectual Property (IP)? Why not also regulate the price of smartphones? If you look at the recent price increases, that may not be a bad idea after all! Jokes aside, as witnessed by the spectacular proliferation of smartphones over the last decade, market pricing of wireless technology IP has benefited the mobile industry and consumers.
The value of Qualcomm's IP has been accepted by most of the industry, as illustrated by more than 300 negotiated licenses. Moreover, after a lengthy investigation by, and negotiations with, the Chinese regulator, the NDRC (National Development and Reform Commission), Qualcomm agreed to a settlement that included rates deemed fair by the Chinese agency. It is telling that even Chinese OEMs agree that the licensing rates are fair, despite these OEMs having far thinner margins and much smaller scale than Apple, which makes most of the mobile industry's profits (almost 90% by some estimates). So, Apple's subjective claim that "license fees are too high" doesn't pass the sniff test. It is interesting to note that many of the FTC's witnesses in the trial, such as Huawei, Apple, and Intel, are Qualcomm's arch-rivals.
Will the FTC case against Qualcomm help or harm consumers?
Let’s examine the premise of this case and how it relates to FTC’s mission, which is to ensure fair competition so that consumers benefit from wider choices and lower prices.
When you look at the US smartphone market, there are two dominant players, and the others are smaller, emerging players. I believe any negative action by the FTC will further exacerbate this situation by eliminating these smaller players. Wireless innovation is extremely hard, time-consuming, and capital-intensive. Qualcomm invests billions of dollars in R&D every year. A lot of this investment is made very early, years before a market even exists, which means there are significant risks involved. For example, Qualcomm has been investing in 5G since 2014, and commercial devices will only start entering the market in 2019 and 2020. For a company like Qualcomm, the only way to recoup such large, ongoing investments is to license its technology to as many smartphone OEMs as possible. Moreover, most of these OEMs don't have the money to do their own R&D; they rely on Qualcomm's innovations to cost-effectively compete with the big OEMs. This creates a vibrant, highly competitive marketplace that offers consumers a wider range of choices and affordable prices, the ultimate goal of the FTC. A great example of this is 4G LTE, which enabled many new and very innovative smartphone OEMs to enter the market. They are growing stronger and are expected to be formidable competitors in 5G. The virtuous cycle repeats as Qualcomm reinvests large portions of its licensing revenue back into R&D to offer a continuous stream of innovations.
In the absence of an entity like Qualcomm, most OEMs would be deprived of new technologies. Only a few big OEMs would be able to invest billions in technology development, and it's unlikely that these vertically integrated players would share much of their technology with others. Most other OEMs would not be able to afford to invest on their own and would probably exit the market. This outcome would be the opposite of the FTC's mission. If you don't believe this, look at how aggressively Apple, Samsung, and Huawei have been trying to vertically integrate by either acquiring or building as much of their own technology as possible.
Beware of the consequences
Any attempt to trivialize or delegitimize Qualcomm's IP and its role in the industry will have a long-lasting impact, not only on the smartphone market but on the entire tech industry. If the FTC undermines companies' ability to earn rewards for their investments, or worse, arbitrarily caps the value of their technology, it will discourage American innovation and severely curtail the flow of capital to those innovations. Small and medium-sized companies that are the backbone of this innovation engine will be the most affected. So, in essence, this trial may (unwittingly?) amount to a war on the American innovation engine, and a negative outcome will ultimately hurt American consumers by decimating competition and choice in the marketplace; this is the antithesis of the FTC's very existence and charter.
Analyzing the long-term impacts of the FTC's activist litigation
In all the chaos of allegations and counter-allegations, scores of testimonies, rebuttals, cross-examinations, and more, I humbly request that Judge Koh and the FTC pause for a moment and ponder this question: "If Qualcomm loses this case, who will win?" No, it's not the FTC; the real winner would be China, in the form of its proxy Huawei (and to a lesser extent, Apple).
In my previous article, I explained how the FTC's activist attempt to fight Qualcomm will result in reduced competition, limited choice, and increased prices, and will ultimately do great harm to consumers and the industry. This is clearly against the FTC's sworn mission and the very reason for its existence. But the importance of this case goes far beyond the FTC; it goes directly to the core purpose of the United States government itself, which is to protect the lives, the assets, and the interests of the citizens of this great country. Today, technological advances define the future of countries. Rightly so, the U.S. government has made the protection of its intellectual property one of its main objectives. The FTC's actions, however, run squarely against that objective.
Qualcomm is a well-oiled innovation engine
As the trial progressed, a lot of interesting facts came to light. It is undeniably clear that Qualcomm has been and continues to be a well-oiled innovation engine, efficiently cranking out technologies and products. In testimony on Friday, Jan 25th, 2019, Christopher Johnson of Bain & Company reluctantly spilled the beans about the competitive analysis the firm did for Intel. They benchmarked investments, execution, and productivity between Intel and Qualcomm, especially pertaining to the development of wireless technologies and products. Bain's analysis showed that Qualcomm's investment in SoCs (Systems on Chip) was comparable to Intel's, yet produced three times as many products. The report also showed that Qualcomm invested much more than Intel in developing wireless technologies and modems, which are at the heart of all mobile devices and networks.
With Qualcomm's strong performance, it is no wonder that weaker modem chipset players couldn't compete and quickly folded. Companies such as Broadcom (which had consolidated assets from Renesas and Beceem), ST-Ericsson, and Texas Instruments exited the business. Others, such as Infineon, were bought by bigger companies like Intel. As a result, the majority of smartphone OEMs, be they newer ones such as Apple, Samsung, LG, and a whole slew of Chinese OEMs, or legacy OEMs such as Motorola, Sony, Blackberry, and others, ultimately ended up using Qualcomm's chipsets. In other words, Qualcomm's strong market position was primarily the result of its clear vision, incredibly talented engineers, and military-precision execution. However, this position didn't give it the market power alleged by the FTC or make it immune to competition. As proven time and again, the highly competitive mobile market only rewards winners and harshly punishes those that stumble. Nokia's spectacular demise from its peak is a great example. Specific to Qualcomm, the failure of the Snapdragon 810 chipset, which came after the blockbuster Snapdragon 800, made many OEMs quickly abandon Qualcomm and take their business elsewhere. In the fast-changing mobile industry, market power is fleeting, and only the companies with the right foresight, investment, and execution survive and thrive.
Down payment for the next-gen technologies
When analyzing the value of cellular IP and modem chipsets, conventional wisdom might be to consider only a company's share of contributions to the current generation and evaluate accordingly. However, many fail to understand that wireless technology is not static; it is a series of evolutions, with multiple releases within each evolution (each "G," or generation). For OEMs to be successful, the key is to leverage a steady stream of technologies and solutions that feed multiple generations of products. That means the price they pay for today's technology also includes a down payment on the next generation of technologies they will need down the road. For example, when OEMs were selling 3G devices in 2006 and 2007, Qualcomm's R&D engineers were already working on 4G technologies, funded in large part by licensing revenue from all of those OEMs' devices. And when 4G was growing exponentially in 2014 and 2015, Qualcomm was already heavily re-investing in 5G. Essentially, Qualcomm has acted like an R&D design house for the entire smartphone industry ever since 2G. It is a virtuous cycle of innovation and re-investment, one generation after another.
What happens if this cycle of innovation and re-investment is disrupted?
If Qualcomm loses this trial and its ability to recoup investments by licensing technology at market prices is severely curtailed, Qualcomm will undeniably have to reduce investment in risky new technologies. Remember that 5G is still in its infancy, and the industry still has a long way to go to achieve its promise of changing the world. As articulated in trial testimony, it is not just the investment that matters; Qualcomm's vision, brain trust, and execution would also be severely hampered. Damage to Qualcomm would create a big void that no other American company may be able to fill, since any public company would face the same challenge of not being able to recoup its investments with fair returns. There are not many companies in the U.S. that have the expertise, and fewer still that have Qualcomm's efficient horizontal business model, as made amply clear by Bain's analysis.
China's premier technology provider, Huawei, would be more than happy to fill this void, with tacit support from the Chinese government. Unlike publicly traded American companies, Huawei does not have to worry about access to capital for investment, nor is it particularly worried about returning a profit to investors. Remember that advanced information technology is at the top of the "Made in China 2025" goals set out by the Chinese government. Capitalizing on its current momentum, Huawei would willingly take the world's R&D crown. And the FTC would unwittingly be handing over the tiara on a silver platter.
The irony is that other parts of the U.S. government, for example the U.S. Department of Justice, are busy pressuring other governments to keep Huawei at bay over security concerns. The DOJ has even criminally charged Huawei with IP violations and other offenses. Yet the FTC is upholding Huawei as a key, credible witness in undermining Qualcomm, the crown jewel of U.S. innovation. What could you call this travesty? The tragedy of democracy, the lethargy of bureaucracy? No matter what you call it, this is indeed a national disgrace.
It has been more than a month since arguments rested in the FTC vs. Qualcomm case. Every passing day increases the anxiety of people on both sides of the issue. The media is rife with rumors, leaks, and loud calls for the U.S. Government to intervene for national security reasons and take CFIUS-like action.
FTC vs. Qualcomm might seem like any other antitrust case, but in reality the outcome could potentially jeopardize U.S. national security. Qualcomm is the undisputed leader in technologies and R&D that power cellular systems such as 3G, 4G and now 5G. Telecommunication networks are the plumbing that connects the country, and cellular technology is its brain. Any country that wants to control its destiny should own that technology, or at the very least, have significant influence in steering the evolution of its capabilities. If the FTC case seriously damages Qualcomm, China’s Huawei will claim its place and be the global champion of cellular technology.
But, you might ask, hasn’t the government already addressed this issue by banning Huawei in the U.S.? Well, that would be akin to shutting off one faucet in a house while water is free to flow through all of the others. There is much more to cellular technology than just the network infrastructure. Let me explain.
What it takes to be a leader in cellular technology
To be a leader in cellular technology, one needs deep, end-to-end system expertise: years of experience designing new wireless systems, standardizing them, building and enabling a large ecosystem to commercialize them, and continuously evolving them after launch. Very few companies possess such capabilities; most specialize in one or a few specific areas. For example, companies like Apple focus on devices, while others like Ericsson and Nokia focus on network infrastructure.
The leading companies with complete systems expertise are Qualcomm and Huawei (there is also Samsung, of course, which I will discuss in a later article). Let's take a closer look at these leaders, starting with Huawei. The rise of Huawei is worthy of a business school case study. It has meticulously built its businesses, allegedly with strong financial and bureaucratic support from the Chinese government. Huawei realized the importance of cellular technology and standardization very early, starting in the 2G days. It initially focused on infrastructure products, then strategically expanded into smartphones, and subsequently developed its own platforms for the modem, application processor, and neural processor, even reportedly its own operating system, along with other key technologies. Huawei owns virtually all the key technologies in the cellular value chain and is also a force to be reckoned with in 5G standardization. No wonder Huawei is considered the crown jewel and a role model for the Chinese government's global technology ambitions.
On the other side is Qualcomm, which to uninformed eyes might look like any other chipset supplier that could easily be dispensed with and replaced. Upon closer inspection, however, one realizes that it is a systems engineering company with deep, unmatched end-to-end wireless competence. Qualcomm gained valuable experience leading the successful commercialization of 2G, 3G, and 4G. The intensity with which the company almost single-handedly drove the acceleration of 5G has clearly shown its capabilities. For 5G, Qualcomm co-developed the full system architecture and design from the ground up, including fundamental technologies and algorithms. Qualcomm's R&D teams also built complete prototype systems to develop, test, and perfect the technologies that the company contributed to 3GPP to define and standardize 5G. Because of its unwavering focus on engineering and technology instead of glitzy consumer marketing and branding, Qualcomm isn't a household name, unlike many of its competitors.
Some might then ask: why only Qualcomm? Why can't other U.S. giants that are much larger and have greater financial wherewithal take on Huawei? When it comes to the mobile industry, other than Qualcomm, only two other companies might come close: Apple and Intel. Let's look at them more closely.
Although Apple is the profit leader in smartphones, reportedly raking in almost 80% of all mobile industry profits, it is pretty thin on the cellular technology front. Instead, its strategy has been to optimize existing technologies and bring them into its vertically integrated devices and closed ecosystem. Apple is indeed more focused on developing proprietary technologies that improve the user experience and increase the appeal of its devices. Despite being a dominant smartphone player since the 3G days, Apple hasn't brought any groundbreaking innovations to the cellular ecosystem or cellular standards. The company is never on the leading edge of cellular technology adoption, either. Specifically, with 5G, it is more than a year behind almost every other major smartphone OEM, including smaller players such as Xiaomi, Vivo, and Oppo, and far behind rivals Samsung and Huawei. Short of using its bounty of more than $200 billion to buy another wireless technology leader (which could run into serious antitrust scrutiny), Apple would find it very hard, if not impossible, to compete with Huawei in the 5G+ technology race. Even if it developed the necessary competence, Apple's vertical integration strategy would likely lead it to keep all IP to itself and not license it to others. I really don't see the company making a U-turn and becoming the cellular technology torchbearer for the country.
Then there's Intel, which has ruled the PC industry for many decades. Perhaps because of its apathy toward the cellular industry in its early days (Intel sold the division that built processors for early smartphones to Marvell), the company has never become a force to be reckoned with in wireless. Intel's heavy bet on WiMAX didn't pan out, instead putting the company years behind in LTE. Even after buying Infineon, a strong modem player of yesteryear, the company still seems to be struggling in wireless. Intel did score a major victory last year by claiming 100% of iPhone modem share, albeit while only offering the performance of Qualcomm's previous generation of modems. To date, Intel's 5G story is not promising either; it seems to be almost a year and two generations behind its peers. Apple's recent aggressive stance in growing its own modem competence doesn't bode well for Intel either. Also, I have serious doubts about Intel's end-to-end system capabilities. As a result, I believe Intel is in no position to compete with Huawei.
The bottom line is, Qualcomm is the only safe bet for the U.S. to maintain its edge in 5G and beyond.
What happens if Qualcomm is weakened by an adverse FTC trial ruling?
Qualcomm's (and the U.S.'s) fate hangs in the balance, pending the outcome of the FTC trial. One might wonder what would happen if Qualcomm were to lose this case. Qualcomm's licensing business, which generates the bulk (two-thirds) of its profits, might be seriously impacted. Without going into hypothetical scenarios, one thing is certain: the company's ability to invest in fundamental cellular technology development would be severely curtailed. Its virtuous cycle of developing technology and plowing profits back into future R&D would come to a screeching halt. U.S. dominance of cellular technology would likely decline rapidly, and eventually end. With a strong market presence and the Chinese government's backing, Huawei would be virtually unstoppable and would exert significant influence on the definition of future cellular technologies... and it is doubtful that it would have the U.S.'s interests and needs at heart.
Most affected would be the smaller OEMs. Without substantial resources, or access to cutting-edge technology IP and advanced, high-performance platforms from Qualcomm, they would not be able to compete in the premium tier against vertical players like Apple, Huawei, and Samsung. The premium smartphone market would become an even greater duopoly in the U.S. (Apple and Samsung) and an oligopoly outside the U.S. (those two plus Huawei). It's no wonder that both Apple and Huawei are strong supporters of (and collaborators with) the FTC's case.
In the end, the real losers will be consumers, who will have no choice but to bend to the whims of these increasingly powerful vertical players… vendors that have already shown a strong affinity for increasing smartphone prices.
So, for the U.S. government, the time to act is now. I hope that saner instincts will prevail, resulting in actions that will protect, preserve, and propel U.S. technology, innovation, and the country’s vital communication infrastructure.
While the final decision in the FTC vs. Qualcomm case has been pending for the last two months, new developments have put the very premise of the FTC's case in question. The details revealed during the Apple vs. Qualcomm trial, and the ensuing settlement, are making the pillars of the FTC case crumble. Everybody is eagerly waiting for the FTC's next move, and wondering how all of this will affect Judge Koh's final decision, if she eventually has to give one.
One might ask, "What is the relevance of the Apple vs. Qualcomm litigation to the FTC case?" Well, Apple was one of the key witnesses and a major force behind the FTC case. The underlying principles, claims, and counterclaims are the same between the two, so much so that Apple's main arguments presented during its case with Qualcomm were almost verbatim what was put forward in the FTC trial. So, the two cases are undeniably intertwined, and the result of one will affect the other.
FTC’s claims are in serious jeopardy
At a very high level, the majority of the FTC's allegations can be combined into three claims:
- Qualcomm's licensing practices are not compliant with FRAND (Fair, Reasonable and Non-Discriminatory) terms, and that has harmed the cellular industry, including Apple
- Licensing at the device level is not justified
- Qualcomm's alleged market power, combined with its licensing policies, has harmed competitors such as Intel
Let’s evaluate the merits of each of these claims, especially in the wake of the settlement and the new information it has brought to light.
Apple was one of the strongest forces behind the FTC's case against Qualcomm. The documents revealed during the Apple vs. Qualcomm case show that the ultimate motive behind Apple's litigation (including the FTC case) was to reduce its royalty cost; there was no actual harm. Even during the trial, the FTC failed to produce any concrete evidence of harm to the industry caused by Qualcomm's licensing practices. Now, Apple signing a long-term licensing contract as part of the settlement clearly shows that Qualcomm's licensing practices are indeed fair and market-driven. Furthermore, the more than one hundred other licensing contracts Qualcomm has signed with OEMs, including majors such as Samsung and LG, prove this point as well. All of this debunks the FTC's first claim.
As became very apparent during the trial, licensing at the device level is a decades-old industry norm. All Intellectual Property (IP) holders practice it because it is the most efficient and practical way to capture the value of IP. Stipulating a cap on the maximum device price used for license-fee calculations makes the practice even more meaningful and fair. As disclosed during the trial, Qualcomm's licensing fees are up to 5 percent of the wholesale price of the phone, with a device price cap of $400. This license includes a portfolio of more than 130,000 Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs). For reference, in another related case between Apple and Qualcomm in San Diego, the jury awarded Qualcomm $1.41 per device for just three non-SEPs. That jury valuation of just three patents puts into perspective the $7.50 per iPhone Apple was paying for the entire portfolio before the dispute started. So again, the FTC's second claim has no merit. On a side note, if you would like to know more about patents and licensing, check out my explainer articles here: Part-1 and Part-2.
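To make the effect of the cap concrete, here is a small arithmetic sketch of the disclosed terms; the sample wholesale prices are hypothetical, not figures from the trial.

```python
# Royalty arithmetic under the disclosed terms: up to 5% of the
# wholesale device price, with the price base capped at $400.
# The sample wholesale prices below are hypothetical.
RATE = 0.05
PRICE_CAP = 400.0

def royalty(wholesale_price: float) -> float:
    """Royalty owed: the rate applied to the capped price base."""
    return RATE * min(wholesale_price, PRICE_CAP)

for price in (150, 400, 1000):
    print(f"${price} phone -> ${royalty(price):.2f} royalty")
```

Note that a $1000 flagship owes the same $20 maximum as a $400 phone, so the effective rate falls as the device price rises.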
There was no dearth of drama on the day Apple and Qualcomm settled their dispute. The settlement news broke while the opening statements were still being presented in court. Qualcomm's stock shot up by record levels immediately after the settlement. Mere hours after the settlement news, Intel announced its decision to exit the 5G smartphone modem business. Some might think that Intel's decision to quit proves the FTC's claim of harm to competitors. However, closer scrutiny reveals a different story.
By Intel's own admission, the reason for its decision was Apple signing a multiyear modem supply deal with Qualcomm as part of the settlement. As publicly discussed in many forums, the most likely reason for Apple to ditch Intel in favor of Qualcomm was the realization that Intel wouldn't be able to meet Apple's hefty 5G modem needs. This is indeed a major miss by Intel, considering that it is currently the sole modem supplier for Apple's latest iPhone. Its inability to deliver the right modem solution for such a large and almost guaranteed opportunity suggests a profound and fundamental flaw in Intel's operations and execution strategy. By all counts, 5G was a level playing field for Intel and everybody else in the race, including Qualcomm, and Intel failed to deliver. It is reasonable to argue that the same was true with 4G LTE: whatever harm the FTC claimed Intel suffered in 4G LTE stemmed from Intel's inability to deliver, not from Qualcomm's alleged market power or licensing policies. This shows that the FTC's third claim is flawed as well.
Who stands to benefit from FTC trial now?
With Apple and Qualcomm settling, and Intel exiting the 5G smartphone modem market and mulling strategic options for its modem business, the question arises, "Who stands to benefit now from the continuation of the FTC case?" The surprising answer is China's Huawei, which was the FTC's third collaborator along with Apple and Intel. It is an unfortunate and disgraceful situation when an arm of the US government directly helps a foreign entity against a US company heralded as the country's 5G leader. This is even more ironic and embarrassing considering that the US government has virtually banned Huawei for national security reasons!
What could be the possible outcome?
With all the major claims of the FTC discredited, its case is in serious jeopardy. As Judge Koh noted during the closing stages of the trial, this case is very complex, with a huge amount of evidence to examine. The hurried summary judgment that Judge Koh gave in the early part of the trial, the radical remedy that the FTC is seeking, and the recent developments complicate the case even further.
The FTC didn't make a strong case to begin with, and it looks even weaker now. That means it is almost impossible for Judge Koh to give a judgment that would permanently alter a cellular IP licensing regime that has been practiced for decades. In my view, the only viable option for the FTC now is to settle with Qualcomm and save face, especially considering that any other outcome will help Huawei. I am sure Judge Koh would be happy with that outcome as well. Any other decision will surely be challenged in the appellate court and most likely be overturned.
The telecom industry is still digesting the surprising and far-reaching decision by Judge Koh of the U.S. Northern California District Court. The expansive court order is as hard to accept as it is to comprehend. If you read it thoroughly (yes, I have, all 233 pages!), it seems that Judge Koh had made up her mind long before the trial and hand-picked specific points from testimonies, evidence, and circumstances to suit her narrative. However, the battle rages on: Qualcomm is appealing the decision at the U.S. Ninth Circuit Court of Appeals. Meanwhile, the company is requesting a stay from Judge Koh until the appeal is heard. I think this is a mere formality, as I expect Judge Koh to reject the stay request. If and when that happens, Qualcomm will request a stay from the Ninth Circuit Court. While all of these court proceedings play out over the next few months, if not years, it is important to consider the havoc this decision, and the possible denial of a stay, might cause in the market. It is even more crucial because we are at a critical juncture in the global 5G race, and this decision will affect how different companies, and perhaps more importantly, different countries, progress.
In my previous article, I briefly touched upon the question of who might benefit from an adverse decision against Qualcomm. Since that fear has now become a reality, a more detailed discussion and evaluation of some what-if scenarios is in order.
At the very outset, there is no question that Huawei and China are the biggest beneficiaries. With this legal quagmire, the attention of Qualcomm's executives and many of its engineers will be divided between trying to prevail in the legal fight and building great technology. This distraction gives Huawei (and in turn, China) a leg up, allowing it to strengthen its position in 5G. When you dig a little deeper, you realize that if Qualcomm's request for a stay is not granted, the situation gets even more dire.
What happens if the stay request is denied?
As I discussed in my previous article, licensing revenues are the lifeblood of Qualcomm's virtuous cycle of technology development, commercialization, and monetization. Judge Koh's order threw a monkey wrench into that cycle, exposing almost all of Qualcomm's licensing contracts to renegotiation risk. Based on news reports, it seems that the recent deals with Apple and Samsung could be safe for some time; but I can't imagine either of those behemoths not trying to use the court's decision to eke out more concessions from Qualcomm. If you remember, during a separate trial, Qualcomm produced documentary evidence showing how Apple intentionally tried to harm Qualcomm's licensing business. The bottom line is, every one of Qualcomm's licensing contracts could be up for grabs. The company's much-publicized recent licensing spat with LG offers a glimpse of how convoluted and long these renegotiations could get.
Let's look at the biggest block of licensees: the Chinese OEMs, which bring in a large portion of Qualcomm's licensing revenue. Just like LG, all of these OEMs buy chipsets from Qualcomm. That means, just as LG is trying to do, they might also ask for chipset-based licensing. But most of them, if not all, license Qualcomm's full portfolio, including cellular SEPs (Standard Essential Patents), non-cellular SEPs (e.g. Wi-Fi and Bluetooth), and non-SEPs. However, the court order only applies to cellular SEPs. Given Judge Koh's ruling, how would you negotiate a licensing deal that spans all these different kinds of patents? It would seem the only option would be for Qualcomm and its licensees to examine more than 130,000 patents, one by one, and license them on an a la carte basis. As one can imagine, that would be a herculean task. Taking this insanity further, many of these are system-level patents, which means they may cover more than just the modem or any single chip, spanning different parts of the system and software. For example, consider MIMO, an important feature of 4G and 5G: the technology covers not just the modem but also RFICs, antennas, phones, and network equipment. Would patents related to MIMO be licensed based on the pricing of the modem, the RFIC, the antennas, or the base stations? Also, different vendors produce these components, so would all those vendors have to get licenses for cellular SEPs? So many complex questions with few clear answers!
If your head is not yet spinning with the complexity, consider this absurdity: Qualcomm would still be free to license all patents other than cellular SEPs at the device level. This means there might be cases wherein the prices of non-SEPs would be higher than those of SEPs, which at some level defies logic! The point is, licensing could get so complex that it might take years to agree on how to structure meaningful contracts. As a side note, look for my next article on the range of absurdities this court order is causing. Also, if you would like to know more about cellular licensing, please read my articles here, here and here.
The real threat of 5G investments getting strangled
Amid the uncertainty of lengthy negotiations and the complexity of restructuring contracts, it is highly likely that many OEMs would be tempted to stop paying royalties. This would be similar to what Huawei is doing during its current negotiations with Qualcomm, and to what Apple did until it settled with Qualcomm back in March 2019. Such a large-scale disruption could mean that the revenue stream feeding Qualcomm's R&D engine would go dry. The direct casualty of such an outcome would be the development of 5G, and America's leadership in it. As you might know, we are only in the early stages of 5G. A lot of what 5G promises is still under development, and all of it requires billions of dollars of investment and years of sustained development, with a long lead time for revenue generation. Any interruption to Qualcomm's licensing revenue could directly impact its ability to create those inventions. The world would be at the mercy of China for the future of 5G and for the technologies behind Industry 4.0 and beyond.
Handing a powerful lever to China in the trade war
The fact that a large portion of Qualcomm's licensing revenue comes from Chinese OEMs has huge significance while the United States is in a bitter trade war with China. As is evident from recent developments, both countries will use whatever leverage they have to get the upper hand. In such a climate, this considerable revenue stream of a strategic American company will surely be weaponized and used as a bargaining chip by China in the broader trade negotiations. It is no secret that the Chinese government wields considerable influence over these OEMs. If you think about it, this is a potent tool, not only for trade negotiations but also for severely hurting America's prospects for 5G leadership.
Whose interests is the FTC fighting for?
It is abundantly clear that the real and biggest beneficiaries of the FTC's and Judge Koh's actions are neither the American people nor American companies, but ironically, China and Chinese companies. And this comes to the detriment of American 5G leadership, at the expense of an American technology company that has been hailed as a 5G leader by the U.S. government itself. This is exactly why the U.S. Department of Justice voluntarily tried to impress upon Judge Koh that she be cognizant of the implications of her decision for America's national interests.
On a closing note, to those who value free markets and fair competition, I would like to point to the recently finalized 5G infrastructure contracts in China. Huawei won the lion's share of those contracts, clearly showing how the Chinese government protects its companies. Who is there to protect the American companies? Far from protecting its own national interests, a U.S. government agency is effectively fighting tooth and nail to hurt a legitimate American company and help the Chinese. What an irony!
Last week's remarkable decision by a three-judge panel of the United States Court of Appeals for the Ninth Circuit (the appellate court) finally brings some common sense to the FTC's bizarre antitrust case against Qualcomm. The appellate court granted Qualcomm's request to stay the ruling of the United States District Court for the Northern District of California (the lower court), which had far-reaching implications for the entire U.S. patent regime.
Side note: If you are new to the subject and would like to understand the background, please read my previous articles here, here, here, here and here.
What did the appellate court say?
The court order must have sounded like music to Qualcomm's ears. Even Qualcomm could not have written it better! Don't be confused by the title of the court order, which says "partial stay": Qualcomm actually got all of what it requested, and then some. The tone, the language, the arguments, the selection of phrases and words, the precedents cited, the direct denunciation of the lower court's decision, everything screams a thumping Qualcomm victory.
First, it says that the application of the Sherman Act (antitrust law) to the case is not accurate, as private businesses have discretion over whom they deal with. That means Qualcomm is free to license its Standard Essential Patents (SEPs) to whomever it chooses, effectively negating the lower court's order mandating the licensing of SEPs to rival chipmakers on an exhaustive basis.
Second, it acknowledges that there is a stark difference of opinion between the two governmental agencies tasked with enforcing antitrust laws: the FTC and the Department of Justice (DOJ). This is in complete contrast to the lower court's abject disregard for the DOJ's request to conduct additional briefings before imposing remedies, and to consider the effects of broad, far-reaching remedies that alter market dynamics and jeopardize national security.
Third, it clearly states that the appellate court is satisfied with Qualcomm's argument that its practice of licensing only to device OEMs and charging royalties at the device level doesn't violate antitrust laws. This is again the opposite of one of the key rulings of the lower court. The appellate court even mentions the extraordinary step taken by a sitting FTC commissioner, Maureen K. Ohlhausen, of publicly expressing her dissent from the theory urged in the complaint and adopted by the lower court.
Fourth, it says that the court also agrees with Qualcomm's strong argument that implementing the lower court ruling before the appeal is decided would do irreparable harm to its business. This was one of the easiest things to understand for anybody with even a hint of knowledge of the licensing and wireless business. The lower court's complete disregard for such logical reasoning was appalling to keen observers of this case like me.
Finally, the appellate court concludes that the difference of opinion between the FTC and all the other relevant government agencies, including the DOJ, Department of Defense, and Department of Energy, warrants granting the stay. It further points out that these government agencies have opined that the lower court's adverse action against Qualcomm threatens national security and "has the effect of harming rather than benefiting consumers."
If you feel like you have heard these arguments before, you are right. These are the same arguments I put forward in my previous articles here, here, here, here and here.
What’s next?
The biggest kicker in the appellate court's order is its open question of whether the lower court's order is "..a trailblazing application of the antitrust laws" or instead "an improper excursion beyond the outer limits of the Sherman Act.."
To be sure, lower courts are supposed to apply the law based on precedent, not be trailblazers!
Further, the appeal hearing is scheduled for Jan 2020, much quicker than usual timelines. The tone of the appellate court order, and the decisive and unambiguous way in which the panel struck down all the major aspects of the lower court's assertions, strongly suggest that an overturn of the ruling is likely. The urgency in scheduling the appeal hearing also indicates the importance the appellate court attaches to this case. Qualcomm filed its long opening brief with the court on Aug 24th, 2019.
Final thoughts
This appellate court decision was a long time coming. Actually, the whole trial was a series of bizarre turns of events: the judge arbitrarily limiting the evidence period to March 2018, excluding pertinent evidence thereafter; the strange explanation for summarily discounting the defendant's in-court live testimony because the judge felt the witnesses looked "prepared"; and the use of an extremely narrowly defined potential violation to justify an extremely broad, industry-altering remedy. But fortunately, saner sense has finally prevailed, and justice is being served the right way, albeit delayed. Now all eyes are on the Jan 2020 hearings.
Qualcomm got a reprieve when the United States Court of Appeals for the Ninth Circuit stayed the decision of the United States District Court for the Northern District of California (DC) in its antitrust case. Immediately after the stay, Qualcomm filed its opening brief (175 pages long), which was followed by a flurry of supporting amicus briefs (each more than 40 pages) from different companies, the U.S. government, a retired circuit court judge, and groups of experts. All of them criticize the DC's ruling; two choose to remain neutral on the outcome, while all the others are strongly in favor of Qualcomm.
Side note: If you would like to know more about the Ninth Circuit court ruling, and the complete FTC vs. Qualcomm saga, check out this article series.
Principal arguments
The briefs supporting Qualcomm strongly condemn the DC's ruling. Their arguments can be summed up into three major themes:
- The DC either misunderstood or misapplied US antitrust laws, as well as precedent. The proponents claim that Qualcomm's licensing approach, its "No license, no chips" policy, and its allegedly "higher licensing prices" don't violate the Sherman Act. Also, Qualcomm's decision to license only to device OEMs is not against the Fair, Reasonable and Non-Discriminatory (FRAND) principles of Standards Development Organizations (SDOs). Additionally, they claim neither the FTC nor the court showed apparent consumer harm.
- The remedies imposed by the DC are very broad and far-reaching. The ruling applies to every aspect of Qualcomm's licensing business, including all of its global contracts; in many cases, those are outside the purview of the FTC or the DC. For example, contracts with Chinese OEMs for devices to be sold only in China are beyond the FTC's authority.
- The ruling creates widespread disruption to a decades-old licensing regime that has proven to encourage innovation, and to be efficient and easy to implement. If licensing based on the Smallest Saleable Patent-Practicing Unit (SSPPU) becomes mandatory, that will put almost every existing licensing deal that doesn't use SSPPU up for renegotiation. The proponents claim that because many patents span multiple functional units, the DC's ruling will create an unfathomable mess of who licenses whom, at what rate, and how.
The focus of each Amicus Brief
All the briefs came with a heavy dose of relevant precedent. Since the supporters are from different fields, each of them stressed different parts of the argument, as highlighted in the sections below:
U.S. Department of Justice (DoJ):
One of the DoJ's main points is that an alleged "unreasonably high royalty" is not anti-competitive; on the contrary, it quotes from precedent that high royalties enable "risk-taking that produces innovation and economic growth."
The DoJ also emphasizes that a Sherman Act violation requires "harm to competition," not just "harm to competitors" as held by the DC. The DoJ ridicules the DC's "misunderstanding" of antitrust law, and also reminds it of CFIUS's action to block the takeover of Qualcomm for national security reasons.
Judge Paul R. Michel (Ret.) – Served on Circuit Court for more than 20 years
Judge Michel states that SSPPU is a mere tool to avoid jury confusion. He argues that since this was a bench trial, and because of the sheer number of complex patents (~140,000) that cover multiple functional units, the use of SSPPU does not make any sense.
The judge also points to the disastrous outcomes when the SSPPU was mandatorily applied to IEEE standards 802.11ah and ai, which were ultimately rejected by ANSI (American National Standards Institute).
A group of 20 antitrust and patent law professors and experts
These experts, including the retired chief judge of the Federal Circuit Court of Appeals (Randall R. Rader) who came up with the SSPPU concept, point out that antitrust law needs actual proof of harm (e.g., economic analysis), not just “per se” or “theory-driven” arguments. They condemn DC for using the discredited theory of Prof. Shapiro (without using his name) and simplistic documentary evidence, such as emails, instead of concrete economic evidence, to establish anticompetitive conduct.
They draw an interesting parallel between the decade-long antitrust crusade against IBM, launched in the closing days of the Johnson administration, and the case against Qualcomm, filed during the last days of the Obama administration. They point out that DoJ learned its lesson about the ill effects of antitrust overreach, which nearly pushed IBM, an American technology jewel, into bankruptcy, and warn against repeating it.
International Center for Law & Economics (ICLE)
ICLE, a group comprising many antitrust and economics experts, opines that this “case is a prime—and potentially disastrous—example of how the unwarranted reliance on inadequate inferences of anticompetitive effect lead to judicial outcomes utterly at odds with Supreme Court precedent.”
Further, ICLE quotes a relevant previous judgment that seems to undercut the crux of DC’s argument: “The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system.”
Cause of Action Institute (CoA)
CoA, a non-partisan government oversight group, comes down rather heavily on both DC and FTC. It reiterates the words of a sitting FTC commissioner who called this trial “a product of judicial alchemy, which is both bad law and bad public policy.”
Further, CoA asserts that FTC exceeded its statutory authority in at least four ways, including that DC’s “injunction violates due process and is unenforceable for vagueness.”
Alliance of U.S. Startups & Inventors for Jobs (USIJ)
USIJ states that the cellular industry is one of the most competitive, dynamic, and thriving markets, and that there is no need for regulatory or judicial interference. Instead, it suggests that FRAND complaints and other concerns can be better resolved using contract and patent law rather than antitrust law; the latter, it says, would be akin to using a hammer instead of a scalpel.
It warns that DC’s ruling will stop companies from participating in standardization, which would itself be anticompetitive and harm consumers.
InterDigital
InterDigital emphasizes that antitrust law shouldn’t trump innovation, and it points out how the law is being misused to make inventors “accept sub-FRAND royalties.” It also cautions that antitrust overreach will weaken innovative US companies and allow their leadership to be supplanted by foreign companies backed by their governments, which may not have the US’s best interests at heart.
InterDigital doesn’t specifically say whether it supports Qualcomm or not.
Dolby
Dolby comes out strongly in favor of preserving patent holders’ flexibility in deciding where in the value chain they license. It insists that this allows innovators to maximize returns on their huge investments and fairly compensates them for the risks.
Dolby faults DC for misinterpreting the FRAND commitments to SDOs and points out that there is no mandatory requirement to license at any specific level or to any specific providers. It also highlights the confusion and havoc that would ensue if the well-established end-product-based licensing, practiced across many industries, were altered in any way.
Dolby asks only for the reversal of DC’s summary judgment instructing Qualcomm to license rival chipmakers.
Nokia
Nokia points out the difficulties of licensing at the component level, how patents often cover more than a single functional unit, and why SSPPU is not applicable at all. While highlighting these inconsistencies in DC’s decision, it remains neutral.
In closing
There is a striking commonality between what Qualcomm has claimed in its brief and all the Amicus Briefs coming from this diverse set of experts and, in some cases, competitors such as InterDigital. That suggests there is indeed a strong case to be made against DC’s ruling. As I pointed out in my earlier article, the appellate court seems to agree with many of these assertions, as can be gleaned from the stay ruling. I would be highly surprised if the appellate court doesn’t overturn many of DC’s draconian rulings.
Also, in response to Qualcomm’s brief, FTC is expected to file its own briefing sometime in October or November, and any Amicus Briefs supporting it will follow soon after. Come back to my column here for the latest developments and what they mean.
The stage is set for the Feb 13th, 2020, hearing of the FTC vs. Qualcomm antitrust case at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). In preparation, FTC, Qualcomm, and many interested parties have filed their briefs for and against the decision of the United States District Court for the Northern District of California (lower court).
In the briefs, FTC’s subtle change in tactics caught my eye. It seems to have changed its “hero” argument: it is now making Qualcomm’s alleged breach of FRAND (Fair, Reasonable, and Non-Discriminatory) commitments to Standard Setting Organizations (SSOs) its main argument, while treading lightly on its earlier key, albeit discredited, “surcharge on competitors” theory. Is it a sign of FTC losing confidence in its case? Moreover, its FRAND breach argument seems to be on shaky ground.
<<Side Note: If you would like to understand the history of this case, please refer to my earlier articles on the subject>>
I spent many hours meticulously reading through all the briefs (~1,500 pages). They are complex, with lots of legal jargon, illustrations, and citations. Here is a high-level summary of the arguments and my opinions on their effectiveness.
The hypothetical “surcharge on competitors” argument
FTC and its supporters are still relying on the theory put forward by Prof. Carl Shapiro, and they have provided tortuous examples and illustrations of it. However, this theory was rejected by the US Court of Appeals for the District of Columbia Circuit in a separate case—United States vs. AT&T. The court’s rejection, as stated, was based on the evidence of actual market performance. Interestingly, the two cases have lots of similarities. Just like in the AT&T case, FTC’s arguments are based only on theory, without any empirical study of actual market conditions. Moreover, developments in the market completely debunk Prof. Shapiro’s theory. Unfortunately, those developments could not be included in the trial as evidence, because they happened outside the discovery period.
According to the theory, Qualcomm allegedly abused its monopoly power to create an imaginary surcharge on competitors, making their chipsets more expensive. In reality, around 2016, Apple, which had been exclusively using Qualcomm’s chipsets, also started using Intel’s chipsets. This fact virtually nullifies the monopoly power allegation. To a large extent, it also disproves the claim that the alleged imaginary surcharge was disincentivizing competitors. Alas! None of this mattered in the trial because of a stringent discovery timeline.
FTC claims that this imaginary surcharge reduced competitors’ profits and hampered their investment in R&D. That seems like a ridiculous argument when you consider that those competitors are behemoths like Intel, and the OEMs are giants like Apple. Looking at all these contradictions, it is clear why FTC is not pushing this argument as hard as it did in the lower court.
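For readers trying to visualize FTC’s framing, here is a toy arithmetic sketch of the alleged “effective surcharge,” as I read the theory. All the numbers, including the FRAND benchmark, are invented purely for illustration; none come from the trial record.

```python
# A toy illustration of the "effective surcharge" theory, as I read it.
# All numbers are hypothetical, invented for illustration only.

ROYALTY_PER_HANDSET = 20.0      # hypothetical per-handset royalty owed to the SEP holder
HYPOTHETICAL_FRAND_RATE = 10.0  # hypothetical "fair" royalty assumed by the theory
RIVAL_CHIP_LIST_PRICE = 30.0    # hypothetical list price of a rival's modem chip

# The OEM owes the royalty on every handset, regardless of whose chip is
# inside. The theory attributes the above-FRAND portion of that royalty
# to the rival's chip as an "effective surcharge."
alleged_surcharge = ROYALTY_PER_HANDSET - HYPOTHETICAL_FRAND_RATE
effective_rival_price = RIVAL_CHIP_LIST_PRICE + alleged_surcharge

print(f"Alleged above-FRAND surcharge per handset: ${alleged_surcharge:.2f}")
print(f"Rival chip's 'effective' price under the theory: ${effective_rival_price:.2f} "
      f"(vs. ${RIVAL_CHIP_LIST_PRICE:.2f} list)")
```

Even in this toy framing, everything hinges on the assumed FRAND benchmark—exactly the kind of theory-driven input, rather than market evidence, that critics of the argument point to.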
Is “harm to competitors” the same as “harm to the competitive process?”
For claiming antitrust law violations, prosecutors must prove harm to the competitive process. FTC argues that Intel being late with CDMA and LTE chipsets, and players such as Broadcom and ST-Ericsson exiting the market, prove harm to competition. Many experts, including the US Department of Justice (DoJ), argue that such instances, as well as companies making less profit, show harm to competitors, but not necessarily to the competitive process.
During the trial in the lower court, ample evidence was presented to explain the reasons behind the problems competitors faced—none instigated by Qualcomm. For example, documents presented by Intel’s strategy consultant Bain and Company attributed Intel’s delay to faulty execution, and an executive from ST-Ericsson opined that they couldn’t execute fast enough to keep up with Qualcomm and rapidly lost market share, which resulted in their exit.
The reasons for competitors not faring well in CDMA and being late to LTE were pretty clear to keen industry observers like me. Regarding CDMA, not many chipset vendors were interested in that market, as they thought the opportunity was small and fast diminishing. There were only a couple of large CDMA operators (circa 2006), and with LTE on the horizon, vendors thought CDMA would quickly disappear. Hence, they never invested in it. Much to their chagrin, CDMA thrived for many years, allowing Qualcomm to enjoy a monopoly. Ultimately, Intel acquired a small vendor—Via Telecom—in 2015 to get CDMA expertise. On the LTE front, nobody foresaw the exponential growth of LTE smartphones. Qualcomm, because of its early investment and cellular standards leadership in LTE, surged ahead, leaving others in perpetual catch-up mode. For example, even after the LTE market had stabilized, Qualcomm’s chipsets had superior performance.
Alleged practice of “license for chips” policy
FTC claims that it has factually proven Qualcomm’s alleged “license for chips” policy, under which Qualcomm would sell its highly coveted chips only if the OEM signed a license agreement. Qualcomm disagrees. In my view, FTC’s evidence is pretty scant and unconvincing. It includes a few emails with some text alluding to such an intention, though in many of them the main topic of discussion seems to be something unrelated. There were a couple of testimonies from Qualcomm’s OEMs mentioning how they “felt” the overhang of this policy during negotiations, but they didn’t have any tangible evidence. There was only one concrete instance—an email with a veiled threat. But the evidence presented in response showed that Qualcomm’s top management swiftly dealt with it and condemned any such practice by its lower ranks.
Another of FTC’s claims concerns an agreement between Qualcomm and Apple, through which Qualcomm paid Apple for a commitment to use its chipsets in a majority of Apple’s devices. FTC alleges that this amounts to Qualcomm indirectly subsidizing licensing fees, which violates antitrust law; this claim is also part of the imaginary surcharge-on-competitors argument. Qualcomm claims that, as stated in the contract, the payment was to compensate Apple for the expenses it would incur in modifying its designs to incorporate Qualcomm chipsets, and that it was a traditional volume discount. When the contract was signed, Apple was already the market leader with multiple successful iPhone models and was using a different vendor’s chipset. That would indicate Qualcomm didn’t possess any monopoly power over Apple. The contract and the payment were also revocable, and Apple ultimately revoked them. So, it is questionable whether the payment can be treated as a subsidy.
Is FRAND commitment “duty to deal?”
Now to the new “hero” argument. FTC claims that Qualcomm’s FRAND commitments to the US-based SSOs bind it to license its Standard Essential Patents (SEPs) to rival chip vendors (aka a duty to deal). The SSOs in question are ATIS (Alliance for Telecommunications Industry Solutions) and TIA (Telecommunications Industry Association). The argument is that Qualcomm’s decision not to license rival chipmakers is a violation of antitrust law. Many of the third parties on FTC’s side overwhelmingly support this argument as well, for obvious reasons. On the surface, this seems like a simple and compelling argument. But it has multiple facets.
<<Side Note: If you would like to understand SEP and the patents process, refer to this article series>>
First, do these commitments mean holders have to license the patents, or is it enough to provide access to them? Second, does a FRAND violation, if true, amount to an antitrust violation, which is usually a much higher bar? Third, and more interesting: are the patents practiced by the chipsets or by the end devices (e.g., smartphones)? If the latter, then licensing (and any violation) occurs only at the device level, so there is no real need to license chipset vendors. Fourth, consider the policies and practices of the biggest SSO—ETSI (European Telecommunications Standards Institute)—which are considered the gold standard for SSOs. Interestingly, in its decades of history, ETSI has never compelled its members to license rival chipset vendors or to license at the chip/component level. Many of the current SEP holders, such as Nokia and Ericsson, strongly supported this approach during the trial. I have merely scratched the surface of this argument; since it is now FTC’s main argument, it needs close scrutiny, which I will provide in my next article.
If you have been following this case and feel that you have heard these arguments before, you are right! Both sides made these arguments in the lower court and are still sticking to them, except for FTC’s subtle change. It will be interesting to see how the Ninth Circuit considers these arguments. I will be in court to witness and report on it. Make sure to follow my updates on Twitter @MyTechMusings.
As promised in my previous article, here is a detailed discussion of FTC’s FRAND (Fair, Reasonable, and Non-Discriminatory) argument in its antitrust case against Qualcomm. FTC argues that by agreeing to the FRAND requirements of Standards Setting Organizations (SSOs), Qualcomm is bound to license its patents to all applicants, and that Qualcomm declining to license its Standard Essential Patents (SEPs) to rival chipset vendors therefore amounts to an antitrust violation. The FRAND requirements are more nuanced than they appear to an untrained eye. I will dig deeper, try to decipher the arguments, and examine the industry’s practices over more than two decades.
<<Side Note: If you would like to know the full history of this case, please refer to my article series. >>
What does FRAND commitment to SSOs mean?
The SSOs in question here are TIA (Telecommunications Industry Association), which developed the CDMA standards, and ATIS (The Alliance for Telecommunications Industry Solutions), which developed the LTE standards. Both organizations require their members to sign an IPR policy document, which includes the FRAND requirements.
TIA has a 24-page IPR Policy document. The portions most relevant to this case are on pages 8 and 9:
(2) (b) A license under any Essential Patent(s), the license rights which are held by the undersigned Patent Holder, will be made available to all applicants under terms and conditions that are reasonable and non-discriminatory, which may include monetary compensation, and only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard
The first part of this section is pretty straightforward. But the final clause—“only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard”—is what is at issue here. In layman’s terms, it means the patent holder agrees to grant licenses for the practice of the standard; in other words, licenses to the applicants whose products practice the standard. Qualcomm argues that devices—and not chipsets—practice the standards. It points to the actual language of the standards as evidence. It is customary for the standards to state, “UE (User Equipment, aka device) shall do this,” or “Base station shall do that,” and so on. The standards never state, “Chipset shall do this or that.” Considering that, Qualcomm argues, it is required to license its SEPs only to device vendors, not to chipset vendors. To that effect, it also points out that it has never sued any chipset vendor for patent infringement.
Now, let’s look at the ATIS IPR policy, which is governed by the “Patent Policy as adopted by ATIS and as set forth in the Operating Procedures for ATIS Forums and Committees,” a 26-page document. The portions most relevant are on pages 10 and 11:
“…Statement from patent holder
Prior to approval of such a proposed ANS, ATIS shall receive from the identified party or a party authorized to make assurances on its behalf, in written or electronic form (b) assurance that a license to such essential patent claim(s) will be made available to applicants desiring to utilize the license for the purpose of implementing the standard. (i) under reasonable terms and conditions that are demonstrably free of any unfair discrimination…”
Again, looking at the operative phrase—“for the purpose of implementing the standard”—Qualcomm argues that chipsets don’t implement the standard; the devices do. So, there is no need for it to license chipset vendors!
Is a violation of an SSO commitment a violation of US antitrust law?
Even if you conclude that the SSO IPR policies were violated, the question becomes: does that amount to a violation of US antitrust law? One argument is that the alleged FRAND violation is a commercial matter and can easily be dealt with through contract and patent law, instead of a policy tool such as antitrust law. In his Amicus Brief in support of Qualcomm, Hon. Judge Paul R. Michel (Ret.) of the US circuit court offered a compelling analogy: “as a general proposition, the hammer of antitrust law is not needed to resolve FRAND disputes when more precise scalpels of contract and patent law are effective.”
Even the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) panel, while granting Qualcomm’s request for a stay, openly questioned whether the lower court’s ruling was “…a trailblazing application of the antitrust laws or…an improper excursion beyond the outer limits of the Sherman Act.”
Precedents and other considerations
3GPP (3rd Generation Partnership Project), the cellular specifications group, prefers that all SSOs across the world have consistent IPR policies. ETSI (European Telecommunications Standards Institute) is one of the major players among the eight SSOs that are the organizational partners of 3GPP. There has been much discussion at ETSI regarding component-level licensing, such as licensing to chipset vendors. But ETSI has never stated that it supports or requires its members to offer component-level licensing. So, the lower court’s decision creates an inconsistency among ATIS, ETSI, and the other SSOs, with impacts that go far beyond this case.
<<Side Note: If you would like to learn more about 3GPP’s organizational structure and operational procedures, please refer to this article series.>>
More than two decades of cellular patent licensing history prove that device-level licensing works smoothly and efficiently. Although the discussions in this case are mostly about modem chipsets, a typical device has hundreds of different components. If licensing were brought down to the component level, it would be a logistical and legal nightmare for OEMs to understand and negotiate separate licenses with all those vendors, as I explained in this article; the back-of-the-envelope sketch below gives a feel for the scale. Also, probably every existing cellular IPR contract would have to be rewritten.
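To get a rough sense of that scale, here is a minimal sketch. All the counts are hypothetical round numbers chosen only to illustrate the combinatorics, not data from the case or the industry.

```python
# Back-of-the-envelope comparison of device-level vs. component-level
# licensing. All counts below are hypothetical round numbers.

NUM_OEMS = 100               # hypothetical number of device OEMs
SEP_PORTFOLIOS = 30          # hypothetical number of cellular SEP portfolios
COMPONENTS_PER_DEVICE = 200  # hypothetical licensable components per device
LICENSORS_PER_COMPONENT = 5  # hypothetical patent holders per component

# Device-level: each OEM negotiates one license per SEP portfolio.
device_level_negotiations = NUM_OEMS * SEP_PORTFOLIOS

# Component-level: each OEM (or its suppliers) potentially negotiates a
# separate license for every component/licensor pair.
component_level_negotiations = (
    NUM_OEMS * COMPONENTS_PER_DEVICE * LICENSORS_PER_COMPONENT
)

print(f"Device-level negotiations:    {device_level_negotiations:,}")     # 3,000
print(f"Component-level negotiations: {component_level_negotiations:,}")  # 100,000
```

Even with these made-up numbers, the negotiation burden grows by more than an order of magnitude, which is the gist of the “logistical nightmare” concern.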
Final thoughts
So far, there have been only a few minor cases in the telecom industry regarding violations of FRAND commitments. FTC’s case against Qualcomm is the first major case where FRAND’s relevance to antitrust law is being tested. The decision in this trial will be a defining moment in the “component vs. device-level” licensing debate. Qualcomm seems to have strong arguments, and the earlier Ninth Circuit panel agreed with most of them. But the appeals hearing now has a new panel of judges, which brings a new set of uncertainties to the case. As promised before, I will be there in person to witness the appeals hearing of this historic case. Be sure to follow my Twitter feed @MyTechMusings for the latest.
The title best describes the situation after the recent hearing in the years-long saga between FTC and Qualcomm. On Feb 13th, 2020, a three-judge panel of the US Court of Appeals for the Ninth Circuit (Ninth Circuit) heard Qualcomm’s appeal to reverse the ruling of the US District Court for the Northern District of California (lower court). During the hearing, the panel asked FTC a lot of skeptical questions regarding its position, arguments, and precedents, probed Qualcomm’s stance, and almost snubbed the US Department of Justice (DoJ). Although the judges appeared confused in the beginning, they seemed to have grasped the main points toward the end. Based on the verbal and non-verbal communications of the judges, Qualcomm definitely had a more positive day than FTC.
<<Side note: If you would like to understand the history of the case, please refer to the article series “FTC vs. Qualcomm Antitrust Trial”>>
I was fortunate enough to be in the court to witness the hearing. The appeals panel consisted of three judges: Judge Callahan, Judge Rawlinson, and Judge Murphy III. Being in front of them, I was able to observe lots of their non-verbal cues—subtle changes in mood and facial expressions, inaudible grunts, how keenly they were listening to each side’s arguments—which many people watching online might have missed.
With only about 50 minutes allocated to the hearing, both parties focused only on the main points. What caught my eye was that during Qualcomm’s arguments, the judges were mostly in listening mode, prodding Qualcomm only for clarifications. But during FTC’s time, they were more skeptical, often questioning and challenging FTC counsel’s assertions, and mostly in a “so what” mode. This is unlike other appeals cases, where usually the appellant (Qualcomm in this case) faces more scrutiny.
<<Side note: Please refer to my articles here and here for more details on the arguments at play in the case>>
Duty to deal
FTC massively hurt its case by conceding that Judge Koh had erred in citing the Aspen Skiing case as the precedent for “duty to deal,” i.e., the ruling that Qualcomm has a duty to license its patents to competitors. Judge Callahan even went to the extent of saying that the house of cards, i.e., FTC’s case, starts to fall if the Aspen card is pulled out. Qualcomm obviously had a field day with it, quoting the lower court’s characterization of “duty to deal” as one leg of a three-legged stool; with that gone, the case couldn’t stand (literally and figuratively). FTC’s alternative precedents—the Caldera and United Shoe cases—and its argument that Qualcomm breached FRAND commitments to Standards Setting Organizations (SSOs) didn’t seem to impress the panel. So, I am positive that this ruling will be reversed.
“No license, no chips” policy
This argument confused the heck out of the judges. Multiple times, Judge Callahan asked and confirmed that Qualcomm was not accused of a “no chips, no license” policy, which obviously would be anticompetitive conduct. She even suggested that Judge Koh of the lower court was probably confused about that as well! In other words, she didn’t think “no license, no chips” was anticompetitive. There was a clear difference of opinion between FTC’s and Qualcomm’s counsels on how OEMs expressed their views on the policy. FTC said that many witnesses from smartphone OEMs had testified about paying higher royalties because of the risk of not getting chips. On the other hand, Qualcomm said that there was only one witness, from one OEM, in a non-monopoly market. From my recollection of attending the trial, OEMs mostly expressed that they felt such a policy existed, but never showed any evidence of Qualcomm practicing it. So, obviously, the panel will have to look at the actual testimonies to make its determination. There was no discussion of whether this policy itself was legal, only of whether it was used to create the alleged surcharge on competitors.
Surcharge on competitors
If the “no license, no chips” discussion was confusing, the tortuous surcharge hypothesis knocked the wind out of the judges! Judge Murphy even said that he was having a hard time keeping up with all of it. I don’t blame them. Most of FTC’s time was spent making the judges understand what FTC calls a surcharge, how it affects competition in its view, and so on. As expected, the panel challenged this claim from multiple angles—precedents, market evidence, harm to competition versus harm to competitors, etc.—and tried to poke holes in FTC’s position.
Here are some of the notable questions and challenges. Judge Rawlinson asked, “…what would be wrong with that (higher royalty fees), doesn’t the Supreme Court say that patent holders have the right to price their patents, what would be anticompetitive about that?” and “…What case says that it is anticompetitive to move (cost) from chip to patent?” Judge Callahan asked, “Why did the OEMs say it’s unfair because they have to buy a license anyway?”; “…who is a Goliath here, Apple is more of a Goliath than Qualcomm”; “…your argument that Qualcomm’s licensing fees increase rivals’ cost doesn’t make sense to me…”; “There seems to be…a conflation of profitable and anticompetitive (one means the other).”; “…weren’t there multiple competitors entering the…market successfully beginning around 2015, leading to a precipitous decline in Qualcomm’s market (share)?” Judge Murphy III asked, “…why don’t we let OEMs exercise their rights in patent law to file (cases for) predatory pricing, abuse of monopoly, etc. (instead of antitrust law)?” And these were mere samples.
The panel was unconvinced and will most likely remain so even after looking at the documents.
Chip volume incentives or royalty discount
This issue was not discussed as much as the others but was used as a basis for other arguments. FTC claims that Qualcomm’s volume discount to Apple was exclusionary and anticompetitive. Qualcomm, during its rebuttal, argued that the license and the chipset supply are two separate contracts, and it doesn’t make sense to combine them. Again, this is another issue where the judges will have to look at the documentation and decide.
Is the “Threat to national security” argument justified?
This is the first time that DoJ and FTC have been on opposite sides of a case. Qualcomm ceded five minutes of its time to DoJ. DoJ’s major claim is that the lower court’s global and expansive remedy harms national security. Judge Murphy seemed hostile toward DoJ and asked whether it had any market analysis or financial evidence to prove the claim. DoJ counsel, although startled by the question, came back with a reasonable explanation: the basis for the case was 3G and 4G, but applying the remedy to 5G would negatively affect the country’s standing in 5G. With 5G being such a crucial technology for so many aspects of the country, DoJ and other government departments (the Department of Defense and the Department of Energy) are convinced that implementing the ruling would harm the country. FTC counsel was quick to capitalize on Judge Murphy’s assertion and discount the security concern as a simple abstraction without any supporting studies.
I am not sure whether the panel will take the security question seriously.
What does all this mean?
You have to consider that the hearing is only one part, albeit an extremely important one, of resolving the case. The court will examine all the briefs and case documentation before making a final decision. One could argue that the cues from the hearing may be overblown; for example, all those questions and challenges could just be the judges probing both parties to fully understand their stances. However, specific signs, such as the judges’ difficulty in fully grasping FTC’s argument and their skepticism of its point of view, clearly indicate that they don’t believe those arguments and are not taking them at face value. It also suggests that FTC’s arguments are not as robust as the lower court thought they were.
From Qualcomm’s perspective, after a clear win with the stay, this hearing turned out to be very positive. FTC had a major initial setback because of the Aspen Skiing concession, but it at least made the panel understand its arguments; whether the panel agrees with them is a separate matter. In my view, Judge Callahan and Judge Rawlinson seem to be aligned with Qualcomm’s arguments, and Judge Murphy seems to be neutral or slightly aligned with FTC’s. Ultimately, as Judge Murphy III succinctly put it, “anticompetitive behavior is illegal… hyper-competitive behavior is not… this case asks us to draw the line between the two.” Meaning, the judges have to decide whether Qualcomm’s behavior is anticompetitive or merely hyper-competitive.
What’s next?
There is no fixed timing for the Ninth Circuit’s decision; the expectation is six to twelve months. The decision doesn’t have to be unanimous, meaning only two of the three judges have to agree.
In terms of possible outcomes, the panel could completely knock down all of the lower court’s rulings, fully uphold them, or do anything in between—agree with some parts of the ruling and reverse the others, or make a determination on some and send the others back to the lower court to reconsider. No matter what the panel decides, either party can request a full panel review, which involves all the 20+ judges of the Ninth Circuit, and further knock on the Supreme Court’s door. If Qualcomm loses, especially on the claims that affect its licensing policy, I am sure it will go to the Supreme Court. On the other hand, if FTC loses, it might ask for the full panel review and let the case go after that.
As it stands today, I think Qualcomm is in a pretty good situation and more likely to win than FTC.
Please make sure to sign up for our monthly newsletter at TantraAnalyst.com/Newsletter to get updates on this trial as well as the telecom industry at large.
The United States Court of Appeals for the Ninth Circuit (Ninth Circuit) handed down a landmark decision in favor of Qualcomm on Aug 11th, 2020, in the long-running antitrust case brought by FTC. This was a highly anticipated outcome in the multi-year saga, which saw fortunes go back and forth between the parties. The detailed opinion written by Judge Callahan, representing the panel of three judges, is a telltale account of how FTC mischaracterized Qualcomm’s business model and how the United States District Court for the Northern District of California (lower court) misjudged the case. The ruling vacated all the decisions of the lower court, including the partial summary judgment. I spoke to Don Rosenberg, EVP and General Counsel of Qualcomm, who, of course, was quite pleased with the outcome. He said, “We felt vindicated by the appeals court’s ruling and are looking forward to continuing to bring path-breaking innovation like 5G to life.”
The Ninth Circuit’s decision is not just relevant to this case; it clarifies a whole slew of long-standing issues and will set a defining precedent for IPR licensing, especially from an antitrust point of view.
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well-expected outcome
The developments in the case had led me to predict such a ruling. The Ninth Circuit’s stay of the lower court’s decision and the language used in that order, the tone of the in-person hearing, and the deep skepticism the panel showed in its questioning made it amply clear which direction the panel was tilting.
The case indeed had a lot of unusual and rather interesting turns of events from beginning to end. It was filed in the last days of the previous administration, with only a few FTC commissioners in office. One of those commissioners, who was opposed to the move, wrote a scathing opinion in The Wall Street Journal, publicly disparaging the case. The new incoming FTC chair recused himself from the case, which left it on autopilot with FTC staff in charge. The instigators, major supporters, and witnesses moved away from the case midway—Apple and Huawei settled their licensing disputes with Qualcomm, and Intel exited the modem market. The US Department of Justice, which shares antitrust responsibility with FTC, went strongly against FTC; it even became a party to the hearing and pleaded against the case. But the biggest surprise for me was the ferocity with which the Ninth Circuit tore down and reversed every decision of the lower court, including the summary judgment.
Highlights of the ruling
This was indeed a complex technical case, in which the judges had to quickly develop a full understanding of the industry. Rosenberg highlighted the challenges appellate court judges face: “They have to work on the record that somebody else has created for them, including lots of documentary evidence, witness testimony, lower court’s assertions and more.” He added, “Considering that, the judges did an amazing job, cutting through the noise and really getting to the core issues and opining on them.” The interesting thing I found reading through the more-than-50-page ruling is how it summarized and reduced the case to five key questions:
- Whether Qualcomm’s “no license, no chips” policy amounts to “anticompetitive conduct against OEMs” and an “anticompetitive practice in patent license negotiations”
- Whether Qualcomm’s refusal to license rival chipmakers violates both its FRAND commitments and an antitrust duty to deal under § 2 of the Sherman Act
- Whether Qualcomm’s “exclusive deals” with Apple “foreclosed a ‘substantial share’ of the modem chip market” in violation of both Sherman Act provisions
- Whether Qualcomm’s royalty rates are “unreasonably high” because they are improperly based on its market share and handset price instead of the value of its patents
- Whether Qualcomm’s royalties, in conjunction with its “no license, no chips” policy, “impose an artificial and anticompetitive surcharge” on its rivals’ sales, “increasing the effective price of rivals’ modem chips” and resulting in anticompetitive exclusivity
The panel decided that FTC and the lower court were wrong on all counts. Rosenberg said that the opinion gave very logical, persuasive, point-by-point arguments, with relevant citations, to refute all those assertions. Here are some excerpts from the opinion:
“…OEM-level licensing policy, .. was not an anticompetitive violation of the Sherman Act.”
“…to the extent Qualcomm breached any of its FRAND commitments, the remedy for such a breach was in contract or tort law…”
“…”no license, no chips” policy did not impose an anticompetitive surcharge on rivals…”
“…We now hold that the district court went beyond the scope of the Sherman Act…”
“Thus, it [Qualcomm] does not “compete”—in the antitrust sense—against OEMs like Apple and Samsung in these product markets. Instead, these OEMs are Qualcomm’s customers…”
“…OEM level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,”
“…while Qualcomm’s policy toward OEMs is “no license, no chips,” its policy toward rival chipmakers could be characterized as “no license, no problem…”
“…even if we were to accept the district court’s conclusion that Qualcomm royalty rates are unreasonable, we conclude that the district court’s surcharging theory still fails as a matter of law and logic.”
“…neither the Sherman Act nor any other law prohibits companies from (1) licensing their SEPs independently from their chip sales; (2) limiting their chip customer base to licensed OEMs…”
“…Our job is not to condone or punish Qualcomm for its success, but rather to assess whether the FTC has met its burden under the rule of reason … We conclude that the FTC has not met its burden…”
What this means for the industry
This was indeed a landmark decision with far-reaching consequences. It surely clears the clouds of uncertainty that had been hanging over Qualcomm’s licensing business for a long time. It will also be a welcome decision for many other patent holders and licensors. The precedent this case has set will be used to resolve patent-related antitrust issues for a long time to come. Here are some of the specific takeaways I think are relevant:
- Device-level licensing is not anticompetitive
- FRAND and patent violations are outside the purview of antitrust law and are better handled under contract law
- One company’s royalties do not have to be in line with the rates other companies charge
- A surcharge on competitors may have to be direct; at the least, “effective surcharges” derived from complex inferencing do not hold up
Rosenberg said, “Qualcomm’s novel licensing model and its policies have now gone through intense global litigation and have successfully proven themselves. Now we are more confident and working hard to innovate and to expand the reach of 5G and bring its benefits to the world.”
What is next for the case?
FTC has not commented on its next steps. It does have a couple of options. It could ask for what is called an “en banc hearing,” in which the whole Ninth Circuit bench (or a major part of it) is asked to hear the case. But for that to happen, a majority of the judges would have to vote to accept the rehearing. Even after an en banc hearing, either party could knock on the Supreme Court’s door and ask whether it would be willing to hear the case.
But, setting all the theoretical options aside, I think the unanimous verdict and the ferocious opinion, coupled with the fact that all of the lower court’s decisions were vacated, make it very unlikely that FTC will keep pushing the case further. Since the instigators and supporters have also moved on, there is no incentive for anybody to keep it going. FTC might ask for an en banc hearing anyway as a face-saving step, as that does not require significant effort on its part. But since an en banc rehearing is a large undertaking, requiring many other judges to spend a lot of time and energy to fully understand such a highly technical and complex case, I doubt the court will grant it. Hence, I am confident that, in many respects, this is the end of the road for the case.
As we await FTC’s response, for more articles like this and up-to-date analysis of the latest mobile and tech industry news, sign up for our monthly newsletter at TantraAnalyst.com/Newsletter, or listen to our Tantra’s Mantra podcast.
Right before the deadline passed, as expected, the Federal Trade Commission (FTC) took another swing at Qualcomm by filing a request to reconsider the recent appellate court decision. But to everybody’s surprise, FTC Chair and Trump appointee Joseph J. Simons, coming out of recusal, voted to authorize that filing.
This request will again set in motion activities at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). After a few more weeks of action, I believe, this case will eventually go into the history books as a defining precedent for antitrust law in the realm of patents and licensing. Interestingly, Apple, which was the alleged instigator of this case, is already using this precedent to fight its own case against Epic Games!
Side note: If you would like to know the full background of the case, refer to my earlier articles in the FTC vs. Qualcomm article series.
Well-expected action by FTC, but not by its chair
Even after the emphatic rebuke from the unanimous Ninth Circuit panel, FTC was well expected to file this request, called an en banc petition, as I predicted in my earlier article. There are many reasons for it. First, it doesn’t require much effort; only a short brief needs to be submitted. Second, even in the unlikely event that the request is accepted, the rehearing will be short, with minimal participation from FTC. Third, FTC would not like to appear as if it has given up on the case.
The most surprising thing was FTC chairman Simons siding with the other two commissioners, resulting in a 3-2 vote in favor of en banc. He had been recused from the case until May 2020 because his previous employer, Paul Weiss Rifkind Wharton & Garrison, advised Qualcomm on its unsuccessful bid to buy NXP Semiconductors. Since he is a Trump appointee, and the FTC case was filed in the waning hours of the Obama administration without even the full commission in office, it was widely assumed that he would be against the case. Additionally, the administration’s Department of Justice (DoJ), Department of Defense, and a few other departments are also against the case; in an unusual move, DoJ forced its way into the Ninth Circuit hearing and argued against FTC.
The reasons behind Simons’ vote are not clear. Trump’s tweets about government agencies not acting against tech companies might have prompted him to show some action, albeit on the wrong target. Since this was an easy move for FTC, he may have simply decided to go along with FTC staff for this last step of the case. Or maybe he actually believes in the case? We can only speculate. FTC taking the full 45 days available to file the request was also interesting; maybe it is taking a more critical look at the case. As you may know, because of the 2-2 tie at the commission, FTC staff had been running the show until now.
How does en banc work?
En banc is a process through which either party requests the entire bench of the Ninth Circuit to reconsider a case. If you recollect, the appeal was decided by a three-member panel. Now, the full bench of 29 judges, minus any recusals, will vote on the request. If the majority votes to accept it, the case will be assigned to a panel of 11 judges for a rehearing. The rehearing is expected to be short, requiring only that Qualcomm submit a reply to FTC’s en banc brief. There is no new evidence, and typically no physical hearing.
The rehearing has quite a high bar. Historically, less than one percent of requests have been accepted. Only cases that are consequential for precedent, contradict previous rulings, or resolve previous contradictions within the circuit are accepted. Also, the bench’s view of whether the panel correctly applied the appropriate laws is a crucial consideration.
What is FTC arguing?
The 83-page brief filed by FTC relies on many of the same arguments it presented earlier in the case. Here are a few things that are new and worth noting:
- Argues that the Ninth Circuit panel examined only whether antitrust law applies to patents and licensing, and opined that it does not—a conclusion with which FTC obviously disagrees
- Points out that the panel did not disagree with any of District Judge Koh’s findings and argues they must therefore be true; further, it refers to them as “facts,” which I think is a big leap of faith
- Relies heavily on the United Shoe and Microsoft antitrust cases and attempts to draw strong parallels between them and Qualcomm’s case. Clearly, FTC has learned its lesson and moved away from the Aspen Skiing case!
- Argues that Qualcomm’s royalties are inflated because of its chip monopoly since, as claimed unsuccessfully before, its peers’ licensing revenues are much lower.