Article Series

Demystifying Cellular Patents and Licensing

Answers to the questions you always wondered but were afraid to ask

These articles were originally written by Prakash Sangam for RCR Wireless

Patents spark joy in the eyes of innovators! Patents not only recognize innovators’ hard work but also provide financial incentives to keep inventing and continue to make the world a better place. Unfortunately, patent licensing, often referred to as Intellectual Property Rights (IPR) licensing, has recently gotten a bad rap. The whole IPR regime seems mystical, veiled under a shroud of confusion, misinformation, and, of course, controversies. But tearing that shroud reveals the fascinating metamorphosis of abstract concepts developing into technologies that transform people’s lives. This process, in turn, creates significant value for the inventors.

I have been exposed to cellular IPR throughout my entire career, and I thought I understood it well. But my research into the various aspects of the IPR journey, including creation, evaluation, and licensing, was a real eye-opener, even for me. In this series of articles, I will take you through that same amazing journey and dispel the myths, the misunderstandings, and the misinterpretations. I will use the standardization of 4G, which has run its full course, and that of 5G, which is ongoing, as the vehicles for our journey. So, get on board, buckle up, and enjoy the ride!

Organizations that build cellular standards

It all starts at the International Telecommunication Union (ITU), a specialized agency of the United Nations. For any new generation of standard (aka “G”), the ITU comes up with a set of minimum performance requirements. Any technology that meets those requirements can be given that specific “G” moniker. For 4G, these requirements were called IMT-Advanced, and for 5G, they are called IMT-2020. In the early days of 4G, two technologies earned the moniker. One, developed by the IEEE, was WiMAX, which no longer exists. The other was developed by the 3rd Generation Partnership Project (3GPP), the most important and visible global cellular specifications organization.

3GPP, as the name suggests, was formed during the 3G days and has been carrying the mantle ever since. 3GPP is a partnership of seven telecommunications Standard Development Organizations (SDOs), representing telecom ecosystems in different geographical regions. For example, the Alliance for Telecommunications Industry Solutions (ATIS) represents the USA, the European Telecommunications Standards Institute (ETSI) represents Europe, and so on. In essence, 3GPP is a true representation of the entire global cellular ecosystem.

3GPP develops specifications that are then affirmed as the relevant standards by the SDOs in their respective regions. 3GPP’s specifications are published as a series of Releases. For example, Release 10 (Rel. 10) had the specifications that met the ITU requirements for 4G (IMT-Advanced). 3GPP sometimes also gives marketing names to a set of these releases. For example, Rel. 8-9 were named Long Term Evolution (LTE), Rel. 10-12 were named LTE Advanced, and so on. Rel. 15 includes the specifications needed to meet the 5G requirements.
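
As a quick reference, here is a minimal sketch, in Python, of the release-to-name mapping described above. The groupings and labels are only the ones mentioned in this article (the “5G” label for Rel. 15 is informal), not an exhaustive 3GPP catalog.

```python
# Illustrative mapping of the 3GPP release groupings mentioned above to their
# marketing names and the ITU requirements they address. This is only the
# subset discussed in this article, not an exhaustive 3GPP catalog.
RELEASES = {
    "Rel. 8-9": {"marketing_name": "LTE", "itu_requirement": None},
    "Rel. 10-12": {"marketing_name": "LTE Advanced", "itu_requirement": "IMT-Advanced (4G)"},
    "Rel. 15": {"marketing_name": "5G", "itu_requirement": "IMT-2020 (5G)"},
}

def releases_meeting(requirement):
    """Return the release groupings associated with a given ITU requirement."""
    return [rel for rel, info in RELEASES.items()
            if info["itu_requirement"] == requirement]

if __name__ == "__main__":
    print(releases_meeting("IMT-2020 (5G)"))  # ['Rel. 15']
```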

To summarize, the ITU stipulates the requirements for any “G,” 3GPP develops the specifications that meet those requirements, and the SDOs affirm those specifications as standards in their respective regions.

How the standards-building process works

With so many organizations and their representatives involved, standards development is a long, arduous, and systematic process. 3GPP has several specification working groups focused on different parts of the cellular system and its interworking, including the radio network, core network, devices, and others. The members of these groups are delegates of member companies, which participate through their respective SDOs.

Now, coming to the actual process itself: the ITU requirements act as goals for 3GPP. The effort starts off with members bringing their proposals, i.e., their innovations, to achieve the set goals. For example, for 4G, one of the proposals was a set of techniques to use OFDMA for high-performance mobile broadband. These proposals are presented in each of the relevant groups, and there are usually multiple proposals for any given problem. All of them are discussed, closely scrutinized, and hotly debated. Ultimately, winning ideas emerge through a consensus process. One of the members of the group is then nominated to be the editor, who distills the winning ideas into a working document. That document is continuously edited and refined in a series of meetings and, when stable, is published as the first draft of the specification. Publishing the first draft is a major milestone for any release. Companies usually start designing their commercial products based on the first draft.

The refinement process continues long after the first draft; it is akin to how the software “bug fixing” and update process works. Members continuously submit contributions, aka bug fixes, to refine the draft. Typically, these contributions are substantially higher in volume than the initial proposals. This is because the initial proposals are radically new concepts or innovations, whereas the later contributions can be trivial, such as editorial corrections. Once all the bug fixing is done, the final specification is released.

As is evident, for any new innovation to be accepted and included in the standard, it has to go through rigorous vetting and withstand intense scrutiny by peers and competitors. This means that inclusion is an explicit recognition by the industry that the said technology is a superior solution to the given problem.

3GPP contributions and record-keeping

3GPP is a highly bureaucratic organization, with a robust and well-established administrative and record-keeping system. But for historical reasons, the system is not equally rigorous throughout the process. For example, record keeping is nominal until the creation of the first draft. The proposals, ideas, and contributions presented during that time are just tagged as “considered” or “treated,” without any specific recognition. However, record keeping becomes very structured and rigorous after the first draft. The bug-fixing contributions that are adopted into the specification are tagged with more official-sounding labels such as “approved,” regardless of whether they are trivial or significant. These uneven record-keeping and naming practices have given rise to some very simplistic, amateurish, and deeply flawed IPR evaluation methods. More on this in later articles.

Nonetheless, 3GPP specification development is a consensus-based, democratic process by design. This necessitates collaboration among members who often have opposing interests. This approach has indeed made 3GPP a great success, enabling the cellular industry to excel and thrive.

Cellular patents are created during the standardization process

The cellular standardization process is primarily a quest to find the best solutions to system-level problems. The winning innovations borne out of that process create valuable patents. You can be sure that almost all the ideas presented as candidates for standardization hit the patent offices in various countries before coming to 3GPP. The value of those innovations, and thereby the patents, dramatically increases when they are accepted and incorporated into standards. Inclusion in the standard is also a stamp of approval that the innovation is the best of the crop, as it has won over other competing ideas, as I explained in my previous article.

Another important aspect, especially relevant to cellular patents, is that the innovations presented to standards are solutions to end-to-end system problems. This means those ideas are not specific to just the device or the network, but comprehensive solutions that touch many parts of the system. So it is often very hard to delineate the applicability of those ideas to only one part or section of the system. For example, the MIMO (Multiple Input, Multiple Output) technique needs a complete handshake between the device and the network to work. Additionally, many patents might touch many subsystems within the device or the network, which further complicates the effort to isolate their relevance to specific parts. For example, consider how power management and optimization in a smartphone works, making the application processor (AP), modem, and other subsystems wake up or go to sleep in sync. That innovation might touch all those subsystems in the phone.

All patents are not created equal

Thousands of patents go into building cellular wireless systems, be it devices, radio infrastructure, or core networks. At a very basic level, these patents can be divided into two categories: Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs or NEPs). SEPs are those that are absolutely necessary to build a standard-compliant product and cannot be circumvented. Hence, they are highly valued. Non-SEPs, on the other hand, are relevant to standards but may not be necessary for the basic functioning of standard-compliant products and can be designed around. For example, for 4G LTE devices, patents that define using OFDMA for cellular connectivity are SEPs, whereas patents that improve the battery life of the devices could be considered non-SEPs.

3GPP and the Standard Development Organizations (SDOs) strongly encourage early disclosure of IPR that members consider essential, or potentially essential, for standards. Further, they also require SEPs to be licensed on fair, reasonable, and non-discriminatory (FRAND) terms. There are no such licensing requirements for non-SEPs.

While 3GPP and the SDOs make FRAND compliance for SEPs mandatory, they don’t enforce or regulate any specific monetary value for them. They consider licensing to be a commercial transaction outside their purview, and hence let market forces decide the patents’ worth.

How to value patents?

According to some estimates, there were 250,000 active patents covering smartphones in 2012. And as I write this article in 2019, I am sure that number has grown even bigger. The issue then becomes how to determine the value of these patents, and how best to license and administer them to others.

Given the sheer number of patents involved, it is impossible to manage licensing on an individual-patent basis. It is even more impractical to license them at the subsystem or component level because, as mentioned before, it is hard to delineate their applicability to a specific part. So it is indeed a hard problem to solve. Since cellular standards have been around for a few decades now, it is worthwhile to examine how licensing has historically been handled.

In the 2G days, when the cellular markets started expanding, there were a handful of well-established large players such as Ericsson, Nokia, Motorola, Nortel, Alcatel, Siemens, and others. These players not only developed the technologies but also had their own device and network infrastructure offerings. Since it was a small group of players, and all of them needed each other’s technology to make their products, they resorted to a simple method of bartering, also known as cross-licensing. Some industry observers and participants accused them of artificially inflating the value of their patents to make it very hard for any new players to enter the market.

With the advent of 3G, Qualcomm appeared on the scene with a unique horizontal business model. Qualcomm’s core business was to invent advanced mobile technologies, make them accessible to the ecosystem through licensing, and enable everyone to build compelling products based on its technology (Qualcomm initially invested in infrastructure, mobile device, and service provider businesses, which it eventually divested). Qualcomm’s licensing made the initial investment more reasonable and the technologies accessible for the OEMs, which significantly reduced the barrier to entry. The rise of Apple, Samsung, and LG, as well as scores of Chinese OEMs, can be attributed to it.

Taking the market-forces approach, Qualcomm decided to license its full portfolio, comprising tens of thousands of patents, for a percentage of the wholesale selling price of the phone. It put a cap on the fee when phone prices started getting higher. Qualcomm decided to license the IPR to the phone OEMs because that is where the full value of its innovations is realized. Apparently, this was also the approach that all the patent holders of that era, including Ericsson, Nokia, and others, practiced, as attested by some of these companies during the FTC vs. Qualcomm trial. This practice has continued to this day and has withstood challenges all over the world. Of course, there have been challenges and changes to the actual fees charged, but the approach has remained largely intact.

Usually, the actual licensing rates are confidential between licensors and licensees. We got some details during Qualcomm’s court cases around the world. What we know, as of now, is that Qualcomm charges 3.25% of the device wholesale price for its SEPs, and 5% for the full portfolio including both SEPs and non-SEPs. The device price base is capped at a maximum of $400.
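
To make the arithmetic concrete, here is a minimal sketch of how such a device-based royalty works under the publicly reported terms above: 3.25% for SEPs only, 5% for the full portfolio, with the price base capped at $400. The rates and cap come from the figures cited above; the $700 phone price and $50 modem price are purely hypothetical, and the closing comparison is only an illustration of the SSPPU argument discussed next, not anyone’s actual licensing formula.

```python
def device_based_royalty(wholesale_price, rate=0.0325, price_cap=400.0):
    """Royalty computed on the device wholesale price, with the base capped.

    The 3.25% (SEP-only) and 5% (full-portfolio) rates and the $400 cap
    are the publicly reported figures cited above.
    """
    base = min(wholesale_price, price_cap)
    return base * rate


if __name__ == "__main__":
    phone_price = 700.0  # hypothetical wholesale phone price, for illustration

    print(device_based_royalty(phone_price))             # ~13.0 (SEP-only)
    print(device_based_royalty(phone_price, rate=0.05))  # ~20.0 (full portfolio)

    # For contrast, the SSPPU argument (discussed below) would use a component
    # price as the base. A hypothetical $50 modem at the same 3.25% rate:
    print(device_based_royalty(50.0))                    # ~1.625
```

In other words, once the base is capped at $400, the SEP-only royalty works out to roughly $13 per device and the full-portfolio royalty to roughly $20, no matter how expensive the phone is.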

There are others in the industry, such as Apple, who are attempting to change this decades-old approach and proposing a new one, sometimes referred to as Smallest Saleable Patent Practicing Unit (SSPPU) pricing. Their argument is that most of the value of Qualcomm’s SEPs is in the modem, and hence the licensing fee should be based on the price of the modem and not the phone. Obviously, Qualcomm disagrees, and the two sides are fighting it out in courtrooms around the world.

Being an engineer myself, I know that when designing a solution, engineers don’t constrain themselves to a specific unit, subsystem, or part. Instead, they come up with the best solution that effectively solves the problem. Often, by virtue of such an approach, the solution involves the full system, as I explained in the two examples earlier. So, in my view, limiting the value to a specific unit is a very simplistic, impractical approach that grossly undervalues the monetization potential of innovations. Hence, I believe the current approach should continue, and market forces should decide what the actual price is.

The statement “All patents are not created equal” seems like a cliché, but it is absolutely true! The differences between patents are multi-dimensional and much more nuanced than what meets the eye. I touched upon this briefly in my previous article. There is no denying that, going forward, patents will play an increasingly bigger role in cellular, not only pitting companies against each other but also countries against one another for superiority and leadership in technology. Hence, it is imperative that we understand how patents are differentiated, and how their value changes based on their importance.

Let me start with a simple illustration. Consider today’s cars, which incorporate lots of different technologies and hence patents. When you compare the patents for the car’s engine to, say, the patents for its doors, the difference in relative importance is pretty clear. If you look at the standards for building a car, the patents for both the engine and the door are probably listed as essential, i.e., SEPs (Standard Essential Patents). However, the patent related to the engine is at the core of the vehicle’s basic functionality. The patent for the door, although essential, is clearly less significant. Another way to look at this is that without the idea of building the engine, there is not even a need for the idea of doors. That means the presence of one is the reason for the other’s existence. The same concepts also apply to cellular technology and devices. Some patents are invariably more important than others. For example, if you consider the 5G standard, the patents that cover Scalable OFDMA are fundamental to 5G. They are the core of 5G’s famed flexibility to support multi-gigabit speeds, very low latency, and extremely high reliability. You can’t compare the value of such a patent to another one that might increase the speed by a few kilobits in a rare use case. Both patents, although SEPs, are far apart in terms of value and importance.

That brings us to another classic challenge of patent evaluation: patent counting. Counting is the most simplistic and easy-to-understand measure: whoever has the most patents is the leader! Well, just like most simple approaches, counting has a big issue: it is highly unreliable. Let me again explain it with an illustration. Consider one person having 52 pennies and a second person having eight quarters. If we apply simple counting as the metric, the first person seems to be the winner, which couldn’t be farther from the truth: 52 pennies are worth $0.52, while eight quarters are worth $2.00. Applying the same concept to cellular patents, it would be foolish to call somebody a technology leader purely based on the number of patents they own, unless you know what those patents are.

When you look at the 5G standard, it has thousands of SEPs. If you count patents for Scalable OFDMA and other similarly fundamental, core SEPs with the same weight as minor SEPs that define peripheral and insignificant protocols and other things, you will be severely undervaluing the building blocks of the technology. So, simply counting, without understanding the importance of the patents to technology leadership, is deeply flawed. Also, the process of designating a certain patent as an SEP is nuanced as well, which makes the system vulnerable to rigging and manipulation, resulting in artificially inflated SEP counts. I will cover this in later articles. This potential for inflating the numbers further exacerbates the problem of patent counting.
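
To put the pennies-versus-quarters point in concrete terms, here is a toy sketch contrasting simple counting with value-weighted counting. The coin math mirrors the analogy above; the patent portfolios and the weights assigned to “fundamental” versus “peripheral” SEPs are invented purely for illustration and do not reflect any real company or valuation method.

```python
# Toy illustration of counting vs. value weighting. The coin math comes from
# the analogy above; the patent portfolios and weights are entirely made up.

def simple_count(portfolio):
    """Number of items, ignoring what each one is worth."""
    return sum(portfolio.values())

def weighted_value(portfolio, weights):
    """Sum of items multiplied by their (assumed) per-item worth."""
    return sum(weights[kind] * count for kind, count in portfolio.items())

if __name__ == "__main__":
    # Pennies vs. quarters, valued in cents: 52 vs. 200.
    coin_values = {"penny": 1, "quarter": 25}
    person_a = {"penny": 52}
    person_b = {"quarter": 8}
    print(simple_count(person_a), simple_count(person_b))        # 52 8
    print(weighted_value(person_a, coin_values),
          weighted_value(person_b, coin_values))                 # 52 200

    # The same idea with hypothetical patent portfolios: portfolio X "wins"
    # on raw count, but portfolio Y wins once fundamental SEPs are weighted
    # more heavily than peripheral ones (weights invented for illustration).
    patent_weights = {"fundamental_sep": 10, "peripheral_sep": 1}
    portfolio_x = {"fundamental_sep": 5, "peripheral_sep": 100}   # 105 patents
    portfolio_y = {"fundamental_sep": 40, "peripheral_sep": 10}   # 50 patents
    print(simple_count(portfolio_x), simple_count(portfolio_y))   # 105 50
    print(weighted_value(portfolio_x, patent_weights),
          weighted_value(portfolio_y, patent_weights))            # 150 410
```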

In conclusion, it is amply clear that all patents are not created equal, and simplistic patent counting is not the best measure of somebody’s technology prowess. One has to go deeper and understand the patents’ importance to realize their value. In my next articles, I will discuss the key patents that define 5G and explore alternative methods for patent evaluation that are possibly more robust and logical. In the meantime, beware, and don’t be fooled by entities claiming to be leaders because of the sheer volume of their patent portfolios.

3GPP is a mystic organization that many seem to know about, but few truly understand. The key players of this efficient and well-regarded organization often work without fanfare or public recognition. But no more! As part of this article series, I go behind the doors, explore the organization, meet the hard-working people, and lay bare the details of its inner workings.

As a side note, if you would like to understand the cellular standardization process, please read my previous articles in the series here, here, and here.

“3GPP is a membership-driven organization. Any company interested in telecommunications can join, through one of its SDOs (Standard Development Organizations),” said Mr. Balazs Bertenyi of Nokia Corporation, the current chair of TSG-RAN and a 3GPP veteran. “One of the important aspects of 3GPP is that a large portion of its working-level office bearers are members themselves and are elected by their fellow members.”

I became a proud member of 3GPP through the American SDO, ATIS, earlier this year.

3GPP organization structure

3GPP consists of three layers, as shown in the schematic: the Project Coordination Group (PCG) at the top, which is more ceremonial; three Technical Specification Groups (TSGs) in the middle, each responsible for a specific part of the network; and multiple Working Groups (WGs) at the bottom, where the actual standards development occurs. There are also many ad-hoc groups formed within each of these. All these groups meet regularly, as shown in the example meeting cycle.

Inner workings of WGs and the unsung heroes

Let’s start with the WGs, specifically the ones that are part of TSG-RAN. Being an RF engineer, I hold these closest to my heart. However, this discussion applies equally to the other TSGs and WGs as well. There are six WGs within TSG-RAN, each with one chair and two vice-chairs.

The best way to understand a group’s workings is to analyze how a fundamental 5G feature such as Scalable OFDMA would be standardized. There might be a few proposals from different member companies. The WGs have to evaluate these proposals in detail and run simulations for various scenarios to understand the performance, the pros and cons, the competitive benefits, and so on. They have to decide on the best solution and develop the standards to implement it across the system. As is evident, the WG chair must facilitate the discussion in an orderly, fair, and impartial way, and let the group reach a consensus decision. As you can imagine, this task is a combination of science and art: bringing people together through collaboration and personal relationships, and making sure they arrive at meaningful conclusions, all while under tremendous time pressure.

In such a situation, WG members expect the chair to be fair, balanced, and trustworthy. Many times, the companies the members represent are bitter competitors with diametrically opposed interests, each trying to push its own views and assertions for adoption. “It is quite a task bringing these parties together for a consensus-based agreement, in the true spirit of 3GPP,” says Mr. Bertenyi. “It requires deep technical knowledge, a lot of patience, empathy, leadership, and the ability to find common ground to be a successful WG chair.” That is why 3GPP’s process of electing chairpersons through a ballot, instead of by nomination, makes perfect sense.

The members of the WG vote and elect somebody they trust and respect to lead the group. Before the newly elected officer takes over, his or her employer has to formally sign a support letter declaring that the officer will get all the support from the company to successfully carry out the duties of a neutral chair. “From then on the elected officer stops being a delegate for his company, and becomes a neutral facilitator working in the interest of 3GPP and the industry,” added Mr. Bertenyi. “Being a chair, I have presided over many decisions that were not supported by my company but were the best way forward in a given dispute. I have seen it often happen in WGs as well. For example, I saw Wanshi Chen, chair of RAN-1, do the same many times.”

The WG members are primarily inventors trying to develop solutions for difficult technological challenges. The WG chairs are at the forefront of this effort, and by virtue of that, it is not uncommon for them to be prolific inventors themselves and be a party to a large number of patents. This, in fact, proves that they are worthy of the leadership role they are given.

“It wouldn’t be untrue to say that the hard-working WG chairs are truly unsung heroes of 3GPP, and they deserve much respect and accolades,” says Mr. Bertenyi. “I am extremely proud to be working with all the chairs of our RAN WGs—Wanshi Chen of Qualcomm heading RAN-1, Richard Burbidge of Intel heading RAN-2, Gino Masini of Ericsson heading RAN-3, Xutao Zhou of Samsung heading RAN-4, Jacob John of Motorola heading RAN-5, Jurgen Hoffman of Nokia heading RAN-6.”

Responsibilities of TSG and PCG

While the WGs are the workhorses, the TSGs set the direction and manage resource allocation and the on-time delivery of specifications.

There are three TSGs: one each for the radio and core networks, and a third for systems work. Each TSG has a chair and three vice-chairs, all elected by the members. They provide direction based on market conditions and needs. For example, the decision to accelerate the 5G timeline in 2016 was taken by TSG-RAN. The chairs are usually accomplished experts and excellent managers. I witnessed how effectively Mr. Bertenyi conducted the recent RAN#84 plenary while being fair, cheerful, and decisive at the same time.

The PCG is, on record, the highest decision-making body, dealing mostly with non-technical project-management issues. It is chaired by the partner SDOs on a rotational basis. It provides oversight, formally adopts the TSG work items, and ratifies election results and resource commitments.

Elections and leadership tenure

As mentioned, all the working-level 3GPP office bearers are duly elected by fellow 3GPP members in a completely transparent ballot process. The standard tenure of each office bearer is two years, but they are often reelected for a second term based on their performance, as recognition of their effective leadership. Many times, members start in a vice-chair position and move up to the chair level, again based on their performance.

In closing

3GPP is a truly democratic, consensus-based organization. Its structure and culture, which encourage collaboration even among bitter business rivals, have made it a premier standards development organization. The well-managed cellular technology roadmap and the success of the mobile industry at large are a testament to 3GPP’s systematic and broad-based approach.

While the 5G race rages on, so does the race to be perceived as the technology leader in 5G. This race transcends companies, industries, regions, and even countries. No major country, be it a new power such as China or existing leaders such as the US and Europe, wants to be seen as a laggard. In this global contest, 5G patents and IPR (Intellectual Property Rights) are the most visible battleground. With so many competing entities and interests, it is indeed hard to separate substance from noise. But one profound truth prevails amid all the chaos: the quality of inventions always beats quantity.

The fierce competition to be the leader has driven companies to invest substantially in innovating new technology as well as in playing a key role in standards development. Since the leadership battles are also fought in the public domain, claims of leadership have been relegated to simplistic number counting, such as how many patents one has or, much worse, how many contributions one has submitted to the standards. In the past, there have been many reports dissecting these numbers in many ways and claiming one or the other company to be the leader.

The awakening – Quality matters

Fortunately, there now seems to be some realization of the perils of this simplistic approach to a complex issue. There have been recent reports about why quality, not quantity, matters. For example, last month, the well-known Japanese media house Nikkei published this story based on the analysis of Patent Result, a Tokyo-based research company. Even the chair of the 3GPP RAN group, Mr. Balazs Bertenyi, published a blog highlighting how technology leadership goes far beyond simple numbers.

Ills of contributions counting

One might ask, what’s wrong with number counting; after all, isn’t it simple and easy to understand? Well, simple is not always the best choice for complex issues. Let me illustrate this with a realistic example. One can easily create the illusion of technology leadership by generating a large number of standards contributions. The standards body 3GPP, being a member-run organization, has an open policy for contributions. As I explained in the first article of this “Demystifying cellular patents” series, there is a lot of opportunity to goose up the number of contributions during the “bug-fix” stage, when the standard is being finalized. Theoretically, any 3GPP member can make an unlimited number of contributions, as long as nobody opposes them. Since 3GPP is also a consensus-driven organization, its members are hesitant to oppose a fellow member’s contributions unless they are harmful. It’s an open question whether anybody has exploited this vulnerability; if one looks closely, they might find instances of it. Nonetheless, the possibility exists, and hence the mere number of contributions can’t be an indicator of anything important, let alone technology leadership.

In his blog, Mr. Bertenyi says, “…In reality, flooding 3GPP standards meetings with contributions is extremely counterproductive...” It unnecessarily increases the workload on the standards working groups and extends the timelines, while reducing the focus on the contributions that really matter.

So what matters? Again, Mr. Bertenyi explains, “…The efficiency and success of the standards process are measured in output, not input. It is much more valuable to provide focused and well-scrutinized quality input, as this maximizes the chances of coming to high-quality technical agreements and results.”

Contrasting quantity with quality

Another flawed approach is measuring technology prowess by counting the number of patents a company holds. Unlike mere contributions, the number of patents has some value. However, this number can’t be the only, or even a meaningful, measure of leadership. What matters is the specific technology those patents bring to the table, that is, how important they are to the core functioning of the system. The Nikkei article, which is based on Patent Result’s analysis, sheds light on this subject.

Patent Result did a detailed analysis of the patents filed in the U.S. by major technology companies, including Huawei, Intel, Nokia, Qualcomm, and many others. It assessed the quality of the patents according to a set of criteria, including originality, actual technological applications, and versatility. Its ranking based on the quality of patents was far different from the ranking based on the number of patents.

Some might ask, isn’t the SEP (Standard Essential Patent) designation supposed to separate the essential, i.e., important, patents from the non-important ones? Well, in 3GPP, SEP designation is a self-declaration. Because of that, there is ample scope for manipulation. This process is a major issue in itself, and a story for another day! So, if something is an SEP, it doesn’t necessarily mean it is valuable. In my previous article, “All patents are not created equal,” I compared and contrasted two SEPs in a car: one for the engine and another for its fancy doors. While both are “essential” to making a car, the importance of the first is magnitudes higher than the second. In the same vein, you couldn’t call a company with a large number of “car-door” patents a leader over somebody who has fewer but more important “car-engine” patents.

So, the bottom line is, when it comes to patents, quality beats quantity any day of the week, every time!

As I discussed in my previous articles, the industry is finally waking up to the fact that when it comes to patents, quality matters much more than quantity. There is also a growing realization that simplistic approaches, such as counting standards contributions or the mere number of patents, don’t give an accurate picture of technology leadership. At the same time, assessing the quality of patents has been a challenge. While the gold standard, in my view, is market-based valuation, new quality-assessment metrics and methods are emerging. These are designed to consider many aspects, such as how fundamental and market-impacting the inventions are, how wide the reach of the patents is, and how many other patents are derived from them, and to come up with a quality score. I will explore many of them as part of this article series; here is a discussion of the first one on the list.

Patent Asset Index™ by LexisNexis® Patent Sight®

Patent Sight is a leading patent analytics and valuation firm based in Germany. Its services are utilized by many leading institutions around the world, including the European Commission. Patent Sight has developed a unique methodology that considers the importance of a patent in the hierarchy of technologies, its geographical coverage, and other parameters to produce a score called the Patent Asset Index. This index allows industry as well as general audiences not only to understand the comparative value of the patents that various companies hold but also to rank them in terms of technology leadership.
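
To illustrate the general idea of such a composite score, here is a hedged sketch that combines a technology-relevance factor and a market-coverage factor for each patent and sums the result over a portfolio. To be clear, this is not PatentSight’s actual, proprietary Patent Asset Index methodology; the factor names, the formula, and the sample numbers are all assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Patent:
    """Hypothetical per-patent inputs; not PatentSight's actual data model."""
    technology_relevance: float   # stand-in for how fundamental the invention is
    market_coverage: float        # stand-in for how broadly the patent is in force

def patent_score(p):
    # Illustrative composite: reward patents that are both important and
    # broadly protected. The multiplication is an assumption, not the
    # Patent Asset Index formula.
    return p.technology_relevance * p.market_coverage

def portfolio_index(portfolio):
    """Sum of per-patent scores over a portfolio."""
    return sum(patent_score(p) for p in portfolio)

if __name__ == "__main__":
    many_weak = [Patent(1.0, 1.0)] * 100       # 100 marginal, narrowly filed patents
    few_strong = [Patent(8.0, 3.0)] * 10       # 10 fundamental, broadly filed patents
    print(portfolio_index(many_weak))          # 100.0
    print(portfolio_index(few_strong))         # 240.0
```

Even this crude stand-in shows how a small portfolio of fundamental, broadly protected patents can outscore a much larger portfolio of marginal ones, which is exactly the kind of reordering visible in the charts discussed below.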

Here are some of the Patent Sight charts on 4G and 5G patents, presented at a recent webinar hosted by Gene Quinn of IPWatchDog, during which William Mansfield of Patent Sight shared them. The first chart shows the number of patents filed by some of the top cellular companies between 2000 and 2018. As is evident, if quantity alone were the metric, one could say that companies such as Qualcomm, Huawei, Nokia, LG, and Samsung are far ahead of the others.

Now let’s look at the Patent Asset Index chart of the same companies:

Under this assessment, the picture is vastly different. Qualcomm is still in the lead, but there is a drastic change in the ranking as well as in the relative standings of the others. Qualcomm is far ahead of its peers, followed by Samsung as a distant second, then LG, Nokia, and InterDigital. Surprisingly, Huawei, which was neck-and-neck with Qualcomm in terms of the sheer number of patents, is much further behind.

Why quality vs. quantity comparisons matter

Unquestionably, patents are born out of important innovations. However, as I have explained in this article, all patents are not created equal. Also, when it comes to cellular patents, there is a widely believed myth that Standard Essential Patents (SEPs), as the name suggests, are extremely important and core to the technology. However, because of 3GPP’s self-declaration policy, this designation is not as reliable as it seems and is highly susceptible to abuse. For example, companies with deep pockets that are interested in boosting their patent profiles might invest large sums in developing non-core patents and declaring them as SEPs. That’s why quality indicators such as the Patent Asset Index and other such approaches are important tools for assessing the relative value of patent portfolios.

Always Connected PCs (ACPCs)

These articles were originally written by Prakash Sangam for RCR Wireless and Forbes

Have you heard the phrase “converting poison into medicine”? Well, that’s kind of what is happening to the PC industry now. Let me explain. Not too long ago, the rise of powerful smartphones and tablets, primarily powered by ARM processors, decimated the PC market. Interestingly, the tenets of smartphones – always connected, long battery life, thin and lightweight – that caused the downfall of PCs are now bringing life back into them. The introduction of ultra-thin laptops and 2-in-1s is helping PCs get their mojo back. In early December 2018, Qualcomm announced a major step in this smartphonification of laptops. Its new Snapdragon 8cx compute platform, the world’s first 7nm PC platform, not only embodies all those hallmark characteristics of a smartphone but will also provide performance that meets or exceeds that of traditional Intel x86 processors. Most importantly, the Snapdragon 8cx will run the full Windows 10 Enterprise version and will natively run browsers and many other applications.

Qualcomm dipped its toes into the PC market by creating a new category, aptly named Always Connected PC (ACPC), which used repurposed mobile SoCs. It started with the Snapdragon 835 and, more recently, the Snapdragon 850. These chips were built for Android and later optimized for Windows 10 and computing devices. They ran a restricted Windows version and offered limited performance, mainly because x86 applications had to run through a translation layer on ARM. They were good enough for light and simple tasks such as browsing and video, but not ready for processor-intensive apps or enterprise-grade use cases. But the story is completely different for the newly announced Snapdragon 8cx.

Qualcomm said that the Snapdragon 8cx is purpose-built from the ground up for computing and Windows 10. Supposedly, they have been working on it since 2015! The Snapdragon 8cx does share its architecture with, and was announced at the same time as, the flagship Snapdragon 855 mobile SoC. This will naturally attract skepticism that, just like previous versions, this platform might also be a slightly tweaked version of the mobile SoC. However, when you look closely at the significant differences between the building blocks of the two, it is quite clear that the Snapdragon 8cx is indeed a different breed. For example, the 8cx has the much more powerful Kryo 495 CPU vs. the 485 on the Snapdragon 855, and the clocking configuration for the CPU’s eight cores is different as well. The Snapdragon 8cx has the more advanced Adreno 680 Extreme vs. the 640 in the mobile SoC. It also has features that are only found in high-end enterprise laptops, such as support for dual HDR 4K displays, up to 16 GB of RAM, NVMe SSDs, UFS 3.0, and many more. Most importantly, during the launch event, Microsoft confirmed Windows 10 Enterprise support for the Snapdragon 8cx, which indeed is a strong vote of confidence in the platform. Additionally, many popular applications, such as the Chrome, Firefox, Microsoft Edge, and Internet Explorer browsers, as well as Gameloft, Hulu, and others, run in native mode, and a wide range of apps are optimized for ARM on Windows.

When you combine these features with the trendsetting X24 LTE modem that provides up to 2 Gbps peak speed, Quick Charge 4, advanced audio capabilities with the aptX HD codec, as well as the hallmark ARM features of multiday battery life and always-on connectivity, I think there is no question that the Snapdragon compute platform and the ARM architecture are ready for primetime and well-equipped to challenge the dominance of Intel x86-based platforms in performance computing. Qualcomm’s claim that Snapdragon 8cx performance is comparable to a competitor’s (supposedly an Intel Core i5) and is delivered at twice the battery life should send a chill down Intel’s spine.

Qualcomm confirmed that the Snapdragon 8cx can be paired with the X50 modem for 5G connectivity, but for some reason it didn’t make that a major selling point. It looks like they are worried about 5G taking attention away from the compute story, or perhaps there will be laptops that do not support 5G. Qualcomm is tight-lipped about the reasons. In my view, although the X24 modem has excellent performance, an ACPC with 5G is the ultimate ACPC one could have. After all, it’s the “connected” PC; why not supersize it and make it the best in all aspects? Also, the huge capacity gains and efficiency improvements of 5G will enable operators to offer very attractive “always on” unlimited plans.

Coming back to the competitive landscape, ultra-thin PCs are the most profitable tier for Intel, and they have had a good run with them so far. Some devices, such as Microsoft’s Surface Pro and HP’s Folio, have shown that Intel Core i5 processors can be designed into attractive fanless laptops with long battery life. However, most other Intel x86-based laptops fall far short. With Snapdragon 8cx-based laptops planned to hit the market during the second half of 2019, amidst the busy back-to-school and holiday seasons, it will be interesting to see how the Qualcomm and Intel platforms compete and perform. Come 2020, this will very quickly turn into not just a processor battle but also a 5G battle.

With 5G, the ACPC battle gets even more interesting. Based on Qualcomm’s comments, it seems they will have 5G-based ACPCs in the market in early 2020, if not late 2019. Intel has announced its own 5G connected-laptop plans with Sprint. Knowing x86 performance and Intel’s delayed 5G modems, it will be a tall order for Intel to beat the battery life and more mature 5G connectivity of Qualcomm ACPCs. With connected, ultra-thin, long-battery-life laptops continuing to gain popularity and Qualcomm catching up in performance, Intel must adapt to the extremely fast pace of innovation that smartphonification is bringing to the PC industry in order to compete effectively.

A bunch of recent events, including the announcements of the Microsoft Surface Pro X and Samsung Galaxy Book S, point to a turning point in the largely stagnant laptop market. These devices, dubbed always-on, always-connected PCs (ACPCs), bring the hallmark characteristics of smartphones to laptops while also providing enterprise-class computing performance. As a long-time observer and an industry analyst, I strongly believe that ACPCs are set to transform laptops and redefine personal computing.

After revolutionizing portable personal computing in the late 1980s and ’90s, laptops have not changed much. Of course, they have become a bit thinner, lighter and more powerful. But considering that you still need to carry the charger and look for Wi-Fi or other connectivity wherever you go, you can’t call those incremental improvements a big leap. These incremental steps look even smaller when compared to the speed at which smartphones have evolved.

ACPCs completely change the outlook for laptops and accelerate the pace of innovation. They are always on, connected to LTE or 5G, can run a full day without needing a recharge and provide performance at par with or better than today’s bulky laptops. All of this is made possible by a new breed of processors with micro-architecture similar to the ones used in smartphones.

Smartphone Revolution Powered By Arm Processors

Ever since their debut in the early 2000s, smartphones have been dominating the personal computing space. They have rapidly grown in both performance and influence. Almost all of today’s smartphones are powered by processors with a micro-architecture designed by the British company Arm Holdings. Smartphone players such as Apple and Qualcomm build their processors on the Arm architecture.

(Full disclosure: Qualcomm is a client of my company, Tantra Analyst.)

These processors have been proven to be power-efficient. Designed primarily for portable devices, they seem to have previously focused more on power consumption than processing capability. But the evolution of these processors and the optimizations from the original equipment manufacturers (OEMs) have dramatically improved their performance in recent years. This has set Arm processors up for performance-focused devices such as laptops, PCs and even servers.

Laptops Have Survived The Test Of Time

Laptops have defied many predictions of their ultimate demise. First, it was netbooks that were supposed to kill laptops, but they ended up being just a fad. Then it was tablets that were supposed to replace laptops, but they never scaled up.

The way I see it, the biggest trait of laptops, which made them stand strong against these odds, was their ability to be a productivity and content creation tool — be it for personal and consumer-type use cases or enterprise ones. The basic needs for such use cases are excellent performance and support for thousands of existing Windows applications.

Writing The Next Chapter Of Laptops

The first attempt at making the Windows operating system (OS) compatible with Arm processors was circa 2012, with Windows RT, designed for tablets. But it turned out to be a dud, mainly because it couldn’t run existing applications. Its makers, Microsoft and Qualcomm, still believing in the concept, doubled down on their efforts. This round made sure Windows 10 and all those existing applications would work flawlessly on the Arm processors used in ACPCs.

It is debatable whether ACPCs are a new category or an existing yet transformed laptop category. Some OEMs, such as Lenovo, Samsung, and Asus, are continuing with traditional clamshells, whereas others, like Microsoft, are trying out the 2-in-1 model with detachable displays that convert to fully functional tablets.

I think it is telling that many PC vendors have introduced ACPCs. I believe that the attractiveness of bringing smartphone-like battery life and user experience to laptops, the proliferation of 5G, and a strong commitment from Microsoft and the entire PC ecosystem make it clear that ACPCs are the future of laptops.

What’s Inside The ACPCs?

ACPCs are powered by Qualcomm Snapdragon platforms. The first-generation devices used optimized versions of the Snapdragon 835 and 850. But the latest ones, including the Samsung Galaxy Book S and Surface Pro X, use the purpose-built Snapdragon 8cx (the Pro X uses a modified version of the 8cx chip called the SQ1). The Snapdragon 8cx has a powerful CPU and GPU, as well as strong artificial intelligence capability.

I’ve seen many popular browser, video game platform, and media player developers porting their applications to run natively on Arm processors. Likewise, many enterprise vendors have ported their applications to Windows on Arm. Adobe announced that its drawing and painting applications will be available on ACPCs. And according to Microsoft, the Surface Pro X offers three times higher performance than the previous-generation Surface Pro 6, which used a conventional x86 processor. So, there is no question in my mind that ACPCs are now primed for running the high-performance workloads of consumers as well as enterprises.

The progress of ACPCs may be slower than some might have expected, but it takes time to transform an industry with more than three decades of history. I believe that an Arm micro-architecture ready for performance-focused computing has repercussions beyond laptops, as there could be many other applications and use cases.

What This Means For Marketers

Because of the stagnant market, it seems that marketers have gradually reduced their attention to laptops and, instead, moved their strategies toward media more suited for smartphones. I believe ACPCs will drastically change that equation. Marketers will likely need to quickly pivot their marketing plans and spend. Specifically, the 2-in-1 model almost creates a new category of devices, and marketers will be well served if they capitalize on this growing popularity and devise their marketing plans around them.

We are at the turning point of personal computing, and at the dawn of a new era with devices powered by Arm micro-architecture. It will be interesting to watch it unfold, especially for an analyst and a keen industry observer like me.

The fun of being an analyst is that you get to test new gadgets firsthand and share your opinions without any inhibitions. It also comes with a sense of responsibility toward your readers. I got my Microsoft Surface Pro X about two weeks ago and have been using it as my daily driver ever since. My verdict: it is an excellent productivity notebook for a pro user like me, who extensively uses office applications, browsing, videos, and social media. Beyond that, it also signals the dawn of a new class of always-on, always-connected notebooks (aka ACPCs) that will redefine personal computing.

I bought the 16GB/256GB Pro X model with a keyboard and stylus. The Windows setup was a breeze. The impressive part was the ease of enabling cellular connectivity, just like on a smartphone: push the nano-SIM in, a couple of clicks, and you are ready to go. I have been using connected laptops since 2008, the 3G days, and it was always a pain to transfer a subscription from one laptop to another. Although I didn’t utilize it, the user-removable SSD drive is another neat feature. The best part of this machine is its always-on feature, just like a smartphone: you come in front of it, your face is recognized, and it is ready to go. Additionally, OneDrive allowed me to move files from my old laptop seamlessly.

Ever since setting it up, I have been using it as my primary computer for working in my home office, for meetings with clients, for bringing along to my son’s karate and other classes, etc. Thanks to the Snapdragon/SQ1 processor, the Pro X is so thin and light that carrying it around is extremely convenient.

A solid productivity machine

The biggest trait of the Pro X is that it is a great workhorse, and using it is a joy! Its bright display is beautiful, and its thin bezels make a full 13” screen fit in a small form factor. Coming from my 13.3” laptop, I felt right at home. I am a power user of many of the Microsoft Office tools, including Word, Excel, PowerPoint, and Outlook. The user experience was very snappy and super responsive, even when multitasking with lots of documents, spreadsheets, and presentations. Switching between windows of the same app or between different apps was very smooth.

I use email in Outlook as my to-do list, keeping many email windows (more than 15) open till the action items in them are dealt with. My previous laptops had issues dealing with this, especially when the laptop was put to sleep and woken back up. Many times, Outlook would become unresponsive, requiring restarts. But Outlook on the Pro X has been pretty stable so far.

A lot of my work happens in the browser, and Chrome is my favorite. I usually have more than ten tabs open, spanning multiple Gmail accounts; local, national, and international news sites with video feeds and ads; Tweetdeck and Twitter pages; the Yahoo Finance page; multiple forums that I regularly follow; WhatsApp Web; Google Sheets and Google Photos that I share with my wife; Facebook; and others. I also use tabs as my to-do list. My kids call me crazy when they see how many tabs I use. Surprisingly, the user experience was smooth even with that many tabs open. As you might know, Chrome currently runs in emulation mode. Microsoft recently announced a beta of its Edge browser that will run natively on Arm processors (i.e., on the SQ1), which should further improve performance and battery life. I am thinking of migrating to Edge to evaluate the experience myself.

So, all in all, I was very impressed with the workload the Pro X could take; it proved itself to be a solid machine.

A perfect companion for travel and offsite work – battery life and connectivity

The biggest differentiator of ACPCs such as the Pro X, as touted by Microsoft, Qualcomm, and Arm, is their more than full-day battery life. I really experienced it while using the Pro X. I would always have at least 10-20% of the battery left after a full day of work (8-9 hours), using a mix of Wi-Fi and cellular connectivity. I bet I could eke out even more with optimized screen brightness and connectivity settings.

The Pro X transformed how I go out for meetings and travel. With my old laptop, I would always bring the charger to avoid battery anxiety, which necessitated carrying a bag. And once I decided to take the bag, I would throw in lots of “just-in-case” items that I hardly ever use. But with the Pro X, voila! No anxiety, no charger, no bag, and none of the other junk! This thing is so sleek, light, and stylish that I carry it like a notebook, with a nice stylus and handwriting converter to boot! Additionally, with fast charging, its battery can go from 0 to 100% in a little over an hour.

For a road warrior like me, integrated cellular connectivity is a no-brainer. It is such a relief to be always connected, no matter where I am: no need to search for Wi-Fi, no worries about security and privacy, and no need to use my phone’s hotspot and worry about its battery running out.

What about gaming and other incompatible apps?

This is the most frequent question I encountered when carrying or using the Pro X in public. Well, I am not a gamer, and, it turns out, I don’t use any of the x86 apps that lack 32-bit versions, which are needed to run them on the Pro X. So, I am not the best person to pass judgment on that.

There have been reports of people having trouble running games on it. That has actually worked in my favor! Ever since I opened the Pro X package, my teenage son has had his eye on it, always tinkering with it. I think he tried a few of his favorite games, such as Minecraft, Fortnite, and CS:GO. I have a feeling either they didn’t work or he didn’t like the experience, because after the first couple of days, he went back to his powerful gaming rig. Obviously, the Pro X is no match for his purpose-built, beefy desktop.

What are the misses?

I think the biggest miss is its steep price tag. Even the most basic configuration with just the keyboard costs $1,100 plus tax. So, this is no mainstream computer; it is targeted toward those who value its premium design and features.

Despite the premium cost, I was surprised that there was no cellular data plan included. I would have expected Microsoft to bundle at least a few months, if not a year, of data to let consumers evaluate the always-connected experience.

The Pro X is literally a notebook, not a laptop. As with any Surface Pro, it is almost impossible to use it on your lap.

Heralding the ACPC era

Many people might review the Pro X like any other expensive gadget, on its merits and misses. However, the relevance of the Pro X goes far beyond this one product. Its performance conclusively proves that ACPCs are real and can deliver on the promises their proponents Qualcomm, Microsoft, and Arm have been making for the last two years. The Pro X also shows the strong commitment these companies have to the ACPC concept. As mentioned, the Pro X is not a mainstream device, but it heralds a new era of personal computing, and I am sure there will soon be more cost-effective options that make Arm-based ACPCs mainstream.

Qualcomm, during its annual Tech Summit in Maui, Hawaii, unveiled a comprehensive portfolio of platforms for Always-On, Always-Connected PCs (ACPCs) to cover the full spectrum of tiers and use cases. This announcement further solidifies the industry’s move toward ACPCs, led by Qualcomm, Microsoft, and Arm.

A broad portfolio of offerings

The Snapdragon 8cx, announced at the same event last year, was the first real ACPC platform that brought Arm chips into the performance and enterprise computing space. Since then, the 8cx has powered a handful of devices, including the trend-setting Microsoft Surface Pro X, the stylish Samsung Galaxy Book S, and the first 5G-enabled Lenovo ACPC. Many other designs are in the pipeline.

While the Snapdragon 8cx was targeted at the premium and high-performance segment, the newly announced Snapdragon 8c and Snapdragon 7c give OEMs the choice to address the other tiers in the highly competitive laptop space. The tiering is based on CPU, GPU, and DSP performance, Artificial Intelligence (AI) and Machine Learning (ML) capabilities, and cellular connectivity speeds. However, Qualcomm never forgets to emphasize that even with tiering, all the platforms squarely deliver on the ACPCs’ famed promise of a smartphone-like ultra-thin form factor, multiday battery life, and excellent connectivity, without any compromises. This promise is attractive for any tier, and that’s why almost every major PC OEM has embraced ACPCs.

Snapdragon 8c for everyday laptops

The key aspect of the Snapdragon 8c is enabling sub-$800, highly capable consumer and enterprise ACPCs that excel at high-productivity workloads as well as top-notch entertainment and multimedia performance. The 8c is a beast, sporting a 7nm octa-core Kryo 490 CPU, an Adreno 675 GPU, 4-channel LPDDR4x memory, support for NVMe SSDs and UFS 3.0, a dedicated Hexagon AI/ML Tensor Accelerator, an integrated Snapdragon X24 LTE modem, and many other impressive features.

The Snapdragon 8c offers 30% higher system performance than its predecessor, the Snapdragon 850; more than 6 Trillion Operations Per Second (TOPS) of AI/ML performance; and up to 2 Gbps of cellular speed.

Snapdragon 7c for entry-level ACPCs

The primary focus of the Snapdragon 7c is to bring the ACPC experience to even cost-conscious, entry-level laptops, making them highly functional at a sub-$400 price point. The 7c sports an 8nm octa-core Kryo 468 CPU, an Adreno 618 GPU, 2-channel LPDDR4x memory, robust AI/ML support unheard of at this tier, and an integrated Snapdragon X15 LTE modem, among other things.

It offers 25% higher performance than competing solutions in the entry tier, more than 5 TOPS of AI/ML performance, and up to 800 Mbps of cellular speed.

You can get the detailed specifications of this platform here.

Busting the myths of portability

Until now, portability in computing has always meant a complex trade-off between weight and size, performance, battery life, and cost. If you wanted a thin and portable computing device, the only option was to use a tablet and be content with limited performance and crippled functionality, without support for a productivity OS such as Windows 10. On the other hand, if you wanted robust performance and long battery life, you had to cope with large, bulky devices with extended battery packs. And if you wanted a combination of these features, you had to be ready for a hefty price tag.

But with ACPCs, you get an uncompromised experience without any tradeoffs: an Arm architecture that offers superior battery life and performance, full Windows 10 support for unhindered productivity, and an integrated cellular modem for always-on connectivity. All of that comes together in a thin, lightweight, and very attractive form factor, just like your smartphone.

The ACPCs are essentially aligning the computing industry with the smartphone industry. That will bring the smartphone industry’s hallmark of rapid innovation to the computing industry. Together, both will benefit from large economies of scale, cost-efficiency, and a huge ecosystem of OEMs, app developers, consumers, and enterprise players. That, in turn, has the potential to revitalize the stagnant and uninteresting laptop market and bring it much-needed excitement and growth.

In other words, ACPCs are set to challenge the status quo of Intel’s x86 architecture and revolutionize the laptop/personal computing market.

In closing

Qualcomm’s announcement expanding the reach of ACPCs illustrates how the “Windows on Snapdragon” concept that Qualcomm, Microsoft, and Arm envisioned a few years ago is slowly but steadily coming to fruition. The comprehensive portfolio of platforms will pave the way for making ACPCs mainstream, bringing their benefits to all market segments, not just the premium tier.

It will be interesting to see how the tussle between deeply rooted traditional x86 architecture and the disruptive Arm architecture unfolds and shapes the laptops and personal computing space.

While smartphones are all the rage in 5G, the market trends are aligning for a quiet revolution of 5G-enabled laptops (5GPCs) and other non-smartphone computing devices. The world's first 5GPC, Lenovo's Yoga 5G, was introduced at CES 2020, kick-starting the process. Although always-connected, always-on laptops (ACPCs) have been around for some time, their widespread adoption has been constrained mainly by restrictive and expensive data pricing. The extremely high capacity and improved efficiency of 5G, which allow operators to offer attractive pricing, combined with the remarkable improvement in ACPC performance, have the potential to push the 5GPC market into high gear.

5G Offers The Best Network Technology For ACPCs

5G traction has been beyond anybody’s expectations. As of the end of 2019, 348 operators were investing in 5G and 61 operators had already commenced 5G services. The operators who have launched are steadily expanding their coverage. The introduction of dynamic spectrum sharing (DSS) — which allows 5G to use the 4G spectrum, expected commercially in the second half of 2020 — will substantially improve coverage. Thanks to the diligent work of regulators around the world, 5G has over 10 times more spectrum than 4G in many cases. That includes all the bands: higher (e.g., millimeter wave), middle (e.g., 2.5 and 3.5 GHz) and lower (e.g., 600 MHz).

Although 5G’s super-high speeds get all the attention, the biggest advantage of 5G is its extreme capacity, thanks to all that spectrum. That means cellular operators have the opportunity, more than ever, to experiment with new pricing and data plans. We already see glimpses of that in the true unlimited data plans for smartphones and in fixed wireless access (FWA) services and plans. I strongly believe that 5GPCs will be a worthy addition to the new horizons operators will explore with 5G.

For the operators pouring billions of dollars into 5G network build-out, the sooner and the more users they get on that network, the better. The abundant capacity of the 5G network allows operators to move laptop users into a new usage paradigm: from today’s “data sipping, only turning on the cellular connection when needed, always conscious of hitting the data limit” mindset to the “anywhere, anytime, worry-free” paradigm.

5G also allows true service bundling: a single contract and attractive pricing for smartphones, FWA, laptops, and other connected devices. This, while reducing the cost for users, will increase the overall average revenue per user (ARPU) for operators. Bundled pricing brings service stickiness and builds long-term customer relationships. Operators could also work with 5GPC device OEMs to bundle the connectivity into the device cost, at least for the first months or year of 5G service. As a seasoned ACPC user, I know that once you experience the liberation of not hunting for hot spots and not constantly worrying about their safety, you will hardly ever go back, as long as the cost of that experience is reasonable.
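To make the bundling arithmetic concrete, here is a minimal sketch with purely hypothetical numbers; the plan prices and discount below are my own illustrative assumptions, not any operator’s actual pricing:

```python
# Minimal sketch of the bundling arithmetic, using purely hypothetical prices.
# None of these figures come from a real operator; they only illustrate how a
# bundle can lower the user's bill while raising the operator's ARPU.

SMARTPHONE_PLAN = 70.0          # hypothetical standalone smartphone plan ($/month)
LAPTOP_PLAN_STANDALONE = 30.0   # hypothetical standalone 5GPC data plan ($/month)
BUNDLE_DISCOUNT = 10.0          # hypothetical discount for putting both on one contract

# Today: many laptop users skip the cellular plan entirely, so the operator
# only earns the smartphone plan from them.
arpu_today = SMARTPHONE_PLAN

# With a bundle: the user adds the laptop plan at a discount.
bundled_bill = SMARTPHONE_PLAN + LAPTOP_PLAN_STANDALONE - BUNDLE_DISCOUNT
unbundled_bill = SMARTPHONE_PLAN + LAPTOP_PLAN_STANDALONE

print(f"Phone only (no 5GPC plan): operator earns ${arpu_today:.0f}/month")
print(f"Bundled phone + 5GPC:      operator earns ${bundled_bill:.0f}/month, "
      f"user saves ${unbundled_bill - bundled_bill:.0f} vs. separate plans")
```

Under these assumed numbers, the operator’s revenue from that subscriber rises from $70 to $90 per month, while the user pays $10 less than buying the two plans separately, which is exactly the win-win described above.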

5GPCs Will Be The Best ACPCs

ACPCs have been continuously improving their performance and are now ready to serve as productivity, enterprise, and performance laptops. For example, the recently announced world’s first 5GPC by Lenovo offers high performance and 24-hour battery life. (Full disclosure: The laptop is powered by the Qualcomm Snapdragon 8cx, and Qualcomm is a client of mine.) With a 5GPC, you can work from virtually anywhere without worrying about being near a power outlet or a Wi-Fi hot spot. The data speeds with 5G should be far better than any regular hot spot would provide.

With today’s traditional laptops and their shorter battery life, even if you have cellular connectivity, the untethered experience is limited because you always have to think about charging options. The extremely long battery life of ACPCs makes them truly untethered. Not being tethered, physically or wirelessly, is an exhilarating experience. And it is logical to think people would be willing to spend a little bit more for this higher perceived value.

5GPCs will be particularly attractive for enterprises. There are many reasons for this, and the biggest one is security. One of the main security risks for enterprises is their employees connecting laptops to unknown, unsecured Wi-Fi hot spots. With 5GPCs, IT departments can be certain that their employees will always be connected to a secure, known 5G network. The potential costs of lost data or security breaches would certainly outweigh any minimal increase in the cost of 5G cellular connectivity. 5GPCs also bring many other benefits to enterprises: integrated GPS allows reliable asset tracking and security mechanisms such as geofencing, and being always on means laptops will always be up to date with the latest security patches and updates. The increase in employee productivity from being reliably connected all the time at excellent speeds goes without saying.

5GPCs will bring much-needed excitement to the largely stagnant laptop market. If managed properly, the 5GPC trend has the potential to create a new full replacement cycle, which might last for years.

All the stars are aligning for 5GPC to be an attractive market for the industry. 5GPCs have the performance to make the best use of 5G and provide a differentiated experience. Both consumers and enterprises will benefit enormously from 5GPCs. Cellular operators can utilize 5G’s extreme capacity to offer services that make true anywhere, always-connected, fully untethered experiences possible. But it will only be a reality if they can offer attractive and innovative pricing and data plans. With major 5GPC device announcements trickling in and operators looking to expand their 5G offerings, it will be interesting to see how the story of 5GPCs plays out.

For the last few weeks, while the influencer world was busy testing and reviewing the Samsung Galaxy S20 and Galaxy Z Flip smartphones, I was diligently using and testing another equally important and impressive Samsung product, the Galaxy Book S, the latest always-on, always-connected PC (ACPC). My verdict? It defines what portable laptops are meant to be. However, being an analyst, I can’t stop myself from giving the rundown on why I think so and how it provides a glimpse of the future of laptops.

Purchasing and setting up Book S

The Galaxy Book S comes in only one configuration: the Snapdragon 8cx processor, 8GB of LPDDR4X RAM, and a 256GB SSD (with a MicroSD slot supporting up to 1TB), running Windows 10 Home. I bought mine on the Samsung website. Ordering was a breeze, although Samsung may confuse buyers by showing only Verizon and Sprint as the supported carriers. I bought the Verizon version, paying in full ($999 + tax). However, it came factory unlocked, and it worked perfectly fine with Sprint, T-Mobile, and Google Fi. I am reasonably sure it would work with AT&T as well. I have sought clarification from Samsung on whether the Verizon and Sprint versions are different SKUs with any major differences, such as the spectrum bands supported, carrier aggregation combinations, etc. I have yet to hear back (I will update this article if I do within a reasonable time). Surprisingly, Samsung is artificially limiting its reach and the market opportunity by listing only two operators, even though the laptop works with virtually any operator. This matters because other laptops in this category support only certain operators; the HP Spectre, for example, works only with AT&T and T-Mobile.

The set-up was easy. I did have an issue with the keyboard backlight not working, which was resolved with a Windows update. The backlighting has three levels, which is nice, but the first step is dim enough that you might mistake it for not working except in low-light situations.

Incredibly thin and light, with extremely long battery life – perfect for travel or the office

I have used a lot of laptops in my professional life, and that is an understatement. By far, this is the thinnest, lightest laptop that did everything I wanted while providing the longest battery life. The official dimensions can be found here. My workloads are primarily productivity-focused. As I explained in my earlier article, I keep more than 15 email windows and multiple sessions of Microsoft Office applications, including Word, Excel, and PowerPoint, open, and I usually have more than 20 browser tabs open at a time. The Samsung Galaxy Book S, with its Snapdragon 8cx processor, never struggled under this load. There is something to be said about the new Chromium-based Microsoft Edge browser, which comes as the default. It is fast, stable, and supports Chrome extensions, so I never miss my previous favorite, the Chrome browser! Edge runs natively on ARM64, so its battery-life performance on the Snapdragon compute platform is far better than that of Chrome, which runs in 32-bit emulation.
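For the technically curious, one way to check whether a given app (or, in this sketch, your Python interpreter) is running natively on ARM64 or under emulation on a Windows on Arm machine is the Win32 IsWow64Process2 call. This is a side sketch of my own, not part of the article or any vendor tooling:

```python
# Sketch: check whether the current process runs natively or under emulation
# on Windows on Arm, using the Win32 IsWow64Process2 API (Windows 10 1709+).
import ctypes
from ctypes import wintypes

IMAGE_FILE_MACHINE_UNKNOWN = 0x0    # reported when the process is NOT emulated
IMAGE_FILE_MACHINE_ARM64 = 0xAA64   # native machine value on a Snapdragon PC

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.IsWow64Process2.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(wintypes.USHORT),
    ctypes.POINTER(wintypes.USHORT),
]
kernel32.IsWow64Process2.restype = wintypes.BOOL

process_machine = wintypes.USHORT()
native_machine = wintypes.USHORT()
if kernel32.IsWow64Process2(kernel32.GetCurrentProcess(),
                            ctypes.byref(process_machine),
                            ctypes.byref(native_machine)):
    emulated = process_machine.value != IMAGE_FILE_MACHINE_UNKNOWN
    is_arm64 = native_machine.value == IMAGE_FILE_MACHINE_ARM64
    print(f"Native machine: 0x{native_machine.value:04X} "
          f"({'ARM64' if is_arm64 else 'other'})")
    print("This process is running", "under emulation." if emulated else "natively.")
else:
    print("IsWow64Process2 failed:", ctypes.get_last_error())
```

Emulated x86 builds carry translation overhead on every instruction, which is why native ARM64 apps such as the new Edge stretch the battery so much further.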

The Galaxy Book S is a perfect companion for a road warrior like me. However, thanks to COVID-19, my travel is severely curtailed. During the limited travel I did with the Galaxy Book S, I never carried its charger for single-day trips or in-town meetings. That means no backpacks, no other bags to carry, just the Book S, like a notebook. I ended each of those days with 30-40% of the battery still remaining. Truly remarkable.

Without travel, I have converted the Galaxy Book S into my home workstation. With an external 32-inch WQHD (1440p) monitor, mouse, and keyboard, all connected through a USB-C hub, I almost forget that it is a laptop; such is the user experience!

The Galaxy Book S always gets compliments on its thinness and weight, whether I use it in meetings or take it to my son’s karate class. Many wonder how one could fit a fan in such a thin chassis. Some of my curious IT friends even tried to search for the fan and vents! The kicker is telling them that it has no fan or vents, thanks to the Qualcomm Snapdragon 8cx processor inside.

The secret behind the incredible size and battery life of the Galaxy Book S

The biggest challenge laptop designers face is the tradeoff between size (thinner and lighter) versus performance and battery life. Designers seem to have reached a saturation point in that tradeoff. It all boils down to the thermal characteristics of today’s processors: the higher the performance, the more power is used and the more heat is generated. There are two options to manage this heat: either use a fan and proper ventilation, or throttle the performance. Most of today’s laptops, even ones such as the MacBook Air, use fans, which makes them big and bulky while also increasing the power consumed. Premium sleek devices, such as the older generation of Microsoft’s Surface line-up, use throttling, which compromises the user experience. As for increasing battery life, the only option is adding bigger batteries, which increases weight.

Now comes the Snapdragon 8cx compute platform used in the Samsung Galaxy Book S, built using the best of Qualcomm’s mobile heritage combined with the performance you’d expect of a PC. It is based on the Arm architecture, offering performance similar to an x86-based Core i5. The Snapdragon 8cx provides consistently high performance with minimal heat production in an extremely power-efficient way. So, without fans or cooling constraints, and without the need for bigger, heavier batteries, device designers can develop extremely thin, light, and high-performance laptops, such as Samsung’s Galaxy Book S, whose battery life is measured in days, not hours.

Galaxy Book S vs. Surface Pro X

Since I have reviewed and have been using the Microsoft Surface Pro X for the last few months, a comparison between the two is another question I am often asked. Well, I like them both. They have some common uses, but there are many where one is better suited than the other. For example, as I explained in my article, the Pro X can be off-balance when you try using it on your lap, whereas the Galaxy Book S proved a perfect fit for such uses. As a detachable 2-in-1, the Pro X is ideal if you also like to use your device as a tablet with a stylus. The Galaxy Book S is a clamshell design that is better suited as a daily driver or a workstation, easily connected through USB-C docks and the like. Although the Galaxy Book S has less RAM (8GB vs. 16GB), I haven’t seen that affect my productivity apps much. But if you use more graphics- and processor-intensive applications, the difference might be more apparent. Of course, the Pro X with all its accessories costs upwards of $1,500, whereas the Galaxy Book S is around $1,000. I currently use both devices. All my content is on OneDrive, and since both are always connected, I can seamlessly switch between the two, no matter where I am.

The biggest concern with ACPCs remains app compatibility. More apps are being ported to run natively on ARM64, though some applications, such as certain games and video editors, are still incompatible. It is worth noting, though, that most of those demanding applications don’t run well on other thin-and-light notebooks either. The other concern for some is high cellular data pricing, but operators now have bundled options where one can get reasonably priced unlimited add-on data plans.

A glimpse of the future

The Samsung Galaxy Book S is only the second ACPC based on the Snapdragon 8cx, and it supports best-in-class 4G LTE connectivity, with peak speeds up to 1.2 Gbps. But we are at the dawn of 5G, which promises multi-gigabit user speeds, extreme capacity, and lower latency. 5G ACPCs (aka 5GPCs) will be the best devices to utilize this unprecedented connectivity everywhere, as I have explained here. The Book S gives a glimpse of what those 5GPCs have to offer in the years to come. In fact, the world’s first 5GPC has already been announced, and many more are on the horizon. I can’t wait to get my hands on those!

IoT Device Security

These articles were originally written by Prakash Sangam for RCR Wireless and Enterprise IoT Insights.

As the awareness of the transformative nature of 5G is increasing, the industry is slowly waking up to the enormous challenge of securing not only the networks, but also all the things these networks connect and the vital data they carry. When it comes to the Internet of things (IoT), the challenges of security couldn’t be bigger, and the stakes involved couldn’t be higher. The spread of IoT in homes, enterprises, industries, governments, and other places is making wireless networks the backbone of the country’s critical infrastructure. Safeguarding it against potential threats is a basic national security need.

With 5G set to usher in Industry 4.0, the next industrial revolution, governments across the globe are understandably taking a keen interest in how 5G is deployed in their countries. There has naturally been a lot of emphasis on its security aspects. The current focus has primarily been on the network infrastructure side. Many countries, such as the USA, Australia, and New Zealand, have put restrictions on buying equipment from certain network infrastructure vendors such as Huawei and ZTE. As stated by these governments, their concerns are about the lack of clarity regarding the ownership and control of these vendors. While these concerns are valid, focusing only on the infrastructure side is not sufficient. It might even be more dangerous because it might give a false sense of security.

Infrastructure-focused security is insufficient

Network infrastructure is only one part of the story. Telecommunications is often described as a “two-to-tango” affair: it needs both infrastructure and devices to make the magic happen. So, to have foolproof security, one needs to cover both ends of the wireless link, especially for IoT. Securing only the network side would be akin to fortifying the front door while keeping the back door ajar. Let me illustrate this with a real-life scenario. Consider something as benign as traffic lights, which, at the outset, don’t seem to need strong security. But what if somebody hacked into and turned off all the traffic lights in a major metropolitan area? That would surely bring the city to a screeching halt, resulting in major disruption and even loss of life. The impact could be even worse if power meters were hacked, causing severe disruption. It would be an outright catastrophe if critical systems, such as the national power grid, were attacked, bringing the whole country to its knees.

When it comes to IoT devices, conventional wisdom is to secure only the most expensive and sophisticated pieces of equipment. However, often, simple devices such as utility meters are more vulnerable to attacks because they lack strong hardware and software capabilities to employ powerful security mechanisms. And they can cause huge disruptions.

IoT device security is a must

IoT devices are the weakest link in providing comprehensive, system-wide security. More so because IoT’s supply chain and security considerations are far different from, and much more nuanced than, those of smartphones. The development and commercialization of smartphones are under the purview of a handful of large, reputed organizations, such as device OEMs, OS providers, and chipset providers. The IoT device ecosystem, in contrast, is highly fragmented, with a large number of relatively unknown players. Usually, large players such as Qualcomm and Intel provide cellular IoT chipsets. A different set of companies uses those chipsets to make integrated IoT modules. Finally, a third set of companies uses those modules to create IoT end-user devices. Each of these players adds their own hardware and software components to the device during different stages of development. Because of this, IoT devices are far more vulnerable than smartphones.

Address IoT device security during procurement

It is evident that IoT users have to be extremely vigilant regarding the security and integrity of the entire supply chain. This includes close scrutiny of the origin of the modules and devices, as well as a detailed evaluation of the reputation, business processes and practices, long-term viability, and reliability of the module and device vendors. Because of the high stakes involved, there is also a possibility of malicious third parties infiltrating the supply chain and compromising devices without the knowledge of the vendors. Case in point: the much-publicized Bloomberg Businessweek report about allegedly tampered motherboards vividly exposed the possibility of such a vulnerability. Although the allegations in that case have been neither fully corroborated nor debunked, the episode underscores that such vulnerabilities can exist.

It is abundantly clear that the more precautions IoT users take during the procurement and deployment phases, the better. Because of the sheer volume and long life of IoT devices, it is virtually impossible to quickly rectify or replace them after security vulnerabilities or infiltrations are identified. The time to secure IoT devices is now!

Looking beyond the current focus on 5G smartphones, 5G Massive IoT will be upon us in no time. Building on the solid foundation of LTE IoT, Massive IoT, as the name suggests, will connect anything that can and needs to be connected. This will span homes, enterprises, industries, and critical city, state, and national infrastructure, including transportation, smart grids, emergency services, and more. Further, with the introduction of Mission Critical Services, the reach of 5G is going to be even broader and deeper. All this means the security challenges and stakes are only going to get bigger and more significant.

So, it is imperative for the cellular industry and all of its stakeholders to get out of the infrastructure-centric mentality and focus on comprehensive, end-to-end security. Every IoT device needs to be secured, no matter how small, simple, or insignificant it seems, because the system is only as secure as its weakest link. The time to address device security is right now, while the networks are being built and the number of devices is relatively small and manageable.

Nowadays, security and privacy are on everybody’s mind. Hardly a day goes by without news of security breaches at major institutions. Most of the time, the reporting is focused on the cloud or network infrastructure, hardly ever on devices. However, when it comes to cellular IoT, devices are the most vulnerable, as I explained in my previous article. IoT devices, being very simple, are usually much easier to hack into, and they can compromise the whole system.

The IoT device ecosystem is unique and far different from that of smartphones in many aspects. Because of that, the security challenges are also different, and many of them are related to a unit called the IoT module, which is at the heart of any IoT device. To really understand the scope and impact of these challenges, it is important to look closely at the market landscape of the entire cellular IoT ecosystem. It is even more relevant now, considering that today’s 4G LTE cellular IoT will evolve into 5G Massive IoT.

The cellular IoT device ecosystem has far different considerations, especially from the security and privacy perspectives. The ecosystem includes modem chipset providers, many of whom are the same as those for smartphones, as well as a few smaller players. Cellular IoT also has a different category of vendors, called module providers. They take the barebones chipsets and add their own software and hardware to develop modules with standard interfaces and the like. Device vendors then develop IoT devices largely based on these modules. Modules simplify the connectivity and operator-certification complexity so that device vendors can concentrate on developing use-case-specific devices. Essentially, modules are a key link in the value chain between chipset providers and IoT device vendors.

Chipset and device market landscape

In the device ecosystem, the chipset market is dominated by the same large and well-known modem vendors that serve smartphones, such as Qualcomm, Intel, MediaTek, and Huawei (HiSilicon), along with IoT-focused players such as Sequans, Altair, and others. They provide a full range of solutions with varying degrees of advanced features, including single- and multi-mode options for eMTC and NB-IoT, with support for 3G, 2G, GPS, onboard processing, and so on. Apart from the advanced features, overall cost is a major consideration for the industry.

The cellular IoT device ecosystem is very large and diverse. The vendors are usually small and possess expertise in specific use cases. They don’t necessarily have the skillset or scale to justify designing devices directly from IoT chipsets. That’s where module vendors come in. Traditionally, IoT vendors were mostly from the US and Europe. However, there has recently been a surge in vendors from China, who are largely unknown outside the country. Many of them have taken cues from, and duplicated, device and module designs from traditional vendors. The proliferation of Chinese vendors is primarily due to the Chinese government’s concerted effort and heavy investment in IoT in the country. The Chinese government’s well-funded, large IoT projects, coupled with considerable subsidies provided by operators such as China Mobile and China Telecom, have created an ideal environment for these companies to flourish. The recently awarded 5G contracts are a great example of how the Chinese government and operators support Chinese vendors. These companies, emboldened by their success in China, are now trying to pursue global opportunities. Since they are leveraging the investments and subsidies availed in China, they can be extremely price-competitive in global markets.

IoT module market landscape

IoT modules are the “bridge of trust” between the well-known chipset vendors and the unknown device vendors. Module vendors also work with regulators and cellular operators on certification, which removes a significant hurdle for device vendors. The certification ensures smooth and rapid deployment of these devices in the field. Evidently, the selection of module vendors is key to ensuring device and system security.

The module vendor market comprises a mix of established and emerging players. Some, such as Gemalto (then Siemens M2M), Sierra Wireless (which acquired Sony Ericsson M2M and Wavecom), and Telit (which acquired Motorola M2M), have been around since the 2G days. Others, such as u-blox, entered the market during 3G and the early part of 4G, leveraging their mobile expertise. Finally, there are the emerging module vendors from China, who, just like IoT device vendors in the country, have grown at a fast pace with substantial government support and operator subsidies. There is a long list of such players. A few among them, such as Quectel, SIMCom, Longsung, Fibocom, and Neoway, are eyeing global markets. Many others are likely watching how these initial players fare before stepping out themselves.

Ecosystem challenges

Anybody who has looked closely at the IoT market realizes that the biggest challenge is its relatively low margins across the board, be it chipsets, modules, or devices. Considering that module vendors are relatively small compared to the chipset, infrastructure, cloud, or application vendors, they don’t have a lot of leverage, resulting in an extreme margin squeeze. In such a situation, increasing market share becomes crucial, putting even more pressure on pricing. This is exactly where the government-funded projects and operator subsidies that the Chinese vendors enjoy at home start to matter and alter the landscape. Because of government support at home, their pricing can be artificially low, reaching predatory levels.

Conversations with industry sources reveal that there is indeed a race to the bottom when it comes to module pricing. If it persists, there is a real danger of non-Chinese players becoming financially unviable. This is of grave concern, especially as we get ready to move to 5G. Supporting 5G will need huge upfront investments, and the payoff period could be very long. If these companies can’t earn enough profit, they can’t afford to invest in 5G and may, in the worst case, exit the market.

What do these challenges mean for the cellular IoT Industry?

If you feel like you have seen this movie before, you are not wrong! If you examine the turn of events in the cellular infrastructure market during the late 90s and early 2000s, the situation is almost identical. During that time, major American and European cellular infrastructure vendors failed to anticipate such a threat and were unable to compete with emerging Chinese rivals that were allegedly supported by their government. Many American and European vendors, such as Motorola, Lucent, Siemens, Ericsson, and Nokia, with decades of experience and success, had to perish, merge, or downsize. Chinese upstarts such as Huawei and ZTE found a ripe market, quickly took market share, grew exponentially, and became dominant players.

Why is the comparison with the past relevant, and why is it a security concern? Well, IoT devices are the weakest link in the security of the overall system. The industry needs to be at least as concerned about the security of IoT vendors as it is about infrastructure vendors, if not more.

What happens if we don’t heed the lessons of the past? What are the implications for the security and privacy of IoT networks? I will explore those questions in my next article. So, be on the lookout!

In my previous articles, here and here, I explained the rationale for an increased focus on device security and its challenges. The threats are especially acute from unknown foreign vendors offering predatory pricing. After reading the articles, a few people questioned me about the ills of such a situation and even suggested that fierce competition will keep pricing low and vendors in check. In this article, I will explore whether such short-term thinking will help or hurt the industry in the long term and examine some what-if scenarios. I will also draw parallels to some historical lessons and, finally, offer suggestions on how the IoT ecosystem could protect itself.

Learning from history

The best parallel to what is happening in the IoT vendor space is the situation of American and European cellular infrastructure vendors during the 3G transition, in the late 90s and early 2000s. I vividly remember it because I was in the midst of it all, working for one such company. The world was slowly moving from 2G to 3G. The infrastructure behemoths, mostly American and European companies including Lucent, Motorola, Nortel, Nokia, Siemens, Alcatel, and others, were trying to get their customers to move to 3G quickly. However, they soon faced unprecedented headwinds from then-unknown Chinese companies named Huawei and ZTE, offering extremely low pricing. It was alleged that their low pricing was not only because of their lower costs but, more importantly, because of support from their government. American and European vendors, confident in their decades of heritage and experience, never took these players seriously. But alas, because of the dot-com bust and intense price pressure, many of those behemoths folded in no time. Others cobbled themselves together to survive, but as a much smaller shadow of their former selves. Only two among them remain in business, and even that is largely because of the US market, where Chinese vendors are not allowed. From the ecosystem perspective, there are far fewer vendor choices globally, and even fewer in the US.

So, what can we learn from this harrowing experience? Simply put, making decisions on cost alone might be very attractive in the short run but might have negative long-term consequences. Once the landscape changes, it cannot be put back.

Perils of inaction now

If this practice of offering artificially low prices on IoT devices and modules, enabled by Chinese government subsidies, goes unchecked, non-Chinese vendors will not be able to sustain such low margins and will edge toward bankruptcy or exit the market. Very soon, there would be hardly anybody of repute left.

In such a situation, the IoT needs of critical infrastructure, such as the power grid, smart cities, national security installations, and others, would have no option but to be met by unknown suppliers without any proven track record or reputation. The case would be similar for large enterprises, industrial complexes, and other settings where IoT devices are a basic staple. Confidence in the security of IoT devices should be unquestionable and not even up for debate. Consider 5G Massive IoT, which will build on the solid foundation of 4G IoT. Additionally, going forward, sharing of spectrum between defense and civilian cellular networks is going to be the norm. An early example of such an arrangement is CBRS, which allows sharing of spectrum between the US Navy and cellular operators. Any security breach in such deployments could expose critical military operations, including radar and satellite communication systems, to sabotage.

Generally, there are risks in relying on a group of suppliers all coming from the same region or country. What if trade wars flare up, resulting in high tariffs or, even worse, import/export bans similar to the recent US ban on Huawei? In such a case, the whole critical infrastructure could come to a screeching halt. Such a vulnerability also hands the foreign country a huge advantage in any trade negotiations.

Many of the Chinese vendors are very small, with no public, reliable information about their background, ownership, business, objectives, or motives. What if they plan to conquer the market now with low pricing and then raise prices exorbitantly once the competition has diminished? Even worse, what if they have ulterior motives? No matter how much these companies vouch for their authenticity and business objectives, unless they open themselves to close scrutiny or, better yet, list on reputable stock exchanges in the US or Europe, it is extremely hard to be convinced. If you consider the headwinds that Huawei is facing even with its significant brand recognition, the path for unknown IoT companies will be even harder, if not virtually impossible.

How to ensure device security

Historically, utilities and many critical national infrastructure providers have been very conservative in their vendor selection. They make their vendors go through an extreme, multi-level vetting process covering both technical and financial viability. They should continue this practice and add evaluation of overall ecosystem health, long-term impacts, and diversity of suppliers. Private enterprises should take the cue from them and be very careful in their vendor selection as well. The assessment should also include import bans, trade wars, and other such unlikely yet catastrophic considerations.

IoT users should evaluate the lifetime cost of ownership of their IoT devices, instead of just the initial cost. IoT devices typically have a very long life, extending to ten years in some cases. Over such a long time, the cost of maintenance, timely upgrades, and quick fixes for security flaws can exceed the original procurement cost of the device. Additionally, these institutions should examine and understand the motivation behind predatory pricing and act with a long-term point of view.
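To illustrate the point, here is a minimal lifetime-cost sketch with made-up figures; the device names, prices, and yearly costs are hypothetical assumptions, not data from any vendor:

```python
# Minimal lifetime cost-of-ownership sketch with hypothetical figures only.
# The point: the module that is cheaper upfront can cost more over a ten-year life.

DEVICE_LIFE_YEARS = 10

devices = {
    # name: (upfront cost $, estimated yearly maintenance/patching/support $)
    "Low-bid module, weak support": (20.0, 4.0),
    "Reputable module, strong support": (32.0, 1.5),
}

for name, (upfront, yearly) in devices.items():
    lifetime_cost = upfront + yearly * DEVICE_LIFE_YEARS
    print(f"{name}: upfront ${upfront:.0f}, 10-year total ${lifetime_cost:.0f}")
```

With these assumed figures, the module that is $12 cheaper at procurement ends up about $13 more expensive over its life, before even pricing in the risk of a breach.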

As a last resort, governments and regulators should look at putting safeguards in place for the procurement of critical infrastructure. The focus should not just be on the network but equally, if not more, on the devices. For example, the US government banned some vendors from supplying cellular network infrastructure. A case could be made for similar safeguards for devices used in critical applications as well.

The biggest step IoT users, be they government agencies or private enterprises, can take is to create an environment that nurtures diverse, strong, reputable, and reliable players who value security.

The Federal Communications Commission (FCC) will vote on Friday to virtually block Huawei’s access to the U.S. market, but this rare bipartisan action only protects one element of America’s digital infrastructure. In reality, the likeliest and most exploitable security vulnerabilities aren’t well understood by policymakers, and we’re at the beginning of a very long fight.

In the $2.4 trillion telecom sector, the dawn of 5G is more than a buzzword. It’s truly a new era full of great promise, as well as great danger. But our policymakers’ focus has only been on the big companies with name recognition, without attention paid to the less prominent ones that might pose much larger security risks.

Huawei and ZTE (another major Chinese manufacturer up for the FCC’s vote, but which doesn’t get the same publicity) are easy targets for the uninformed masses who fear all things China. Meanwhile, the national security threat from other Chinese-subsidized and foreign-controlled telecom companies is potentially more vast and insidious than our leaders in Washington, DC understand and acknowledge.

There’s been no mention by politicians, in the news media, or on social media of the security risks posed by devices or cellular modules, the mini-computers that make up the brains of the Internet of Things (IoT). There will be 43 billion of them in the world by 2023, and consequently they’re the favored target for hackers. Unlike phones or chipsets, these modules are untraceable once embedded in devices. These elements are so critical to connected infrastructure that if a hostile state or actor gains control of them with intent to attack the U.S., the scale of destruction is far more horrific to imagine than that of a compromised smartphone or social media account.

Unauthorized access to your iPhone or Facebook enables spying. But access to an IoT device enables direct action in the real world. Shutting off power to Washington, DC. Turning off traffic lights in Manhattan. Pumping the brakes on autonomous cars in San Francisco. Stopping heat in winter to homes in Minnesota. Interfering with medical devices in Florida.

Forget the compromised security of smartphones. A compromised module - one of dozens that’ll be in every American home within the next few years - could mean literal life or death.

Five of the top ten IoT module manufacturers are Chinese, and they rake in 71 percent of the industry’s revenue, using the same government backing and the Huawei playbook to stifle competition in the U.S. and Europe. China’s heavy investment in IoT in the country, coupled with considerable government subsidies, allows Sunsea, Fibocom and Quectel to be extremely price-competitive in global markets.

Industry insiders have been vocal in sharing stories of these companies slashing module prices below reasonable production costs. Driving out competition with a questionable pricing structure – and the consequent potential for future manipulation of affordability and availability – adds another layer to the concerns regarding 5G security.

It’s arguable that Chinese vendors Sunsea, Fibocom and Quectel are clones of Huawei, especially since they’ve effectively cornered the global market for the most critical components in the IoT. That’s why it’s important for politicians and security experts to glance up from their research on Huawei to better understand the implications of U.S. reliance on Chinese IoT manufacturers.

The U.S. government shouldn’t ban a company just for being China-based, nor target one just for being in the business of telecommunications or technology. Not every tech company in China is a stooge for the government with unreserved, evil intent. In fact, companies like Quectel and Fibocom thrive in good part due to legitimate innovation, amazing engineers and good quality.

Nonetheless, the FCC will vote on Friday on Huawei and ZTE. We must hope that this is just a first salvo in making 5G and the Internet of Things secure, with more investigation and possible action to come. If the Trump Administration truly wants to protect the American people from foreign interference via smart devices, the FCC and Congress need to be more strategic in looking at potential threats beyond the flashiest names.

FTC vs. Qualcomm Antitrust Trial

The ongoing saga between FTC and Qualcomm

It is unbelievable when one of the world’s richest companies complains that it is an undue burden to pay for the innovations that power its high-margin products. But it sure looks like a well-orchestrated war on innovation with sinister motives when a government agency such as the FTC (Federal Trade Commission) joins hands with that company in beating down a roughly ten-times-smaller supplier that is a proven technology pioneer.

I am talking about the trial underway between the FTC and Qualcomm in the U.S. District Court in San Jose, California. I am not a lawyer but a passionate engineer who was part of the 2G, 3G, 4G, and now 5G transitions. I know first-hand what it takes to conceive, build, and deploy wireless technologies. Here are my thoughts on this legal tussle and its potential consequences.

Wireless communication, especially for broadband data, is a fascinating invention that is largely invisible, literally and metaphorically. Unlike beautiful smartphone screens, artful industrial designs, or clever apps, wireless has been an enigma attracting little attention or appreciation. You only realize its importance when out of coverage! Oh, the agony, the insecurity, and the fear of missing out! The device is called a smart “phone” for a reason: without the “phone” functionality, most of those smarts have little value!

“Wireless data” is the defining technology of the smartphone, not just another feature

Why am I explaining the importance of wireless data? In the current FTC trial, the Commission’s lawyers and witnesses put forward two complaints: 1) Licensing fees should be based on the modem’s price, not that of the device, and 2) Qualcomm’s licensing fees are too high. Looking at the first, wireless data is the fundamental and defining technology of any smartphone. Also, it is a misconception to think that wireless data technology is only contained within the “modem” block. In reality, the functionality is the result of a comprehensive system design that makes the smartphone work as a complete device, with all subsystems and software in it. Additionally, the design includes complex interactions with numerous infrastructure and network (radio, core, and cloud) elements to function as a well-orchestrated system. So, it would be disingenuous and utterly ridiculous to limit the value of all of this technology to a small percentage of the price of a modem.
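To see why the royalty base matters so much, consider a deliberately simplified, hypothetical calculation; the prices and the rate below are illustrative assumptions of mine, not Qualcomm’s or anyone’s actual terms:

```python
# Hypothetical illustration of the royalty-base argument. The prices and the
# rate below are made-up; they are not Qualcomm's or anyone's actual terms.

device_price = 400.0           # assumed selling price of a smartphone ($)
modem_price = 20.0             # assumed price of the modem chip inside it ($)
royalty_rate_on_device = 0.03  # assumed 3% royalty on the device price

royalty = device_price * royalty_rate_on_device
equivalent_rate_on_modem = royalty / modem_price

print(f"Royalty of 3% on a ${device_price:.0f} device = ${royalty:.2f}")
print(f"The same ${royalty:.2f} measured against a ${modem_price:.0f} modem "
      f"= {equivalent_rate_on_modem:.0%} of the chip's price")
```

Framing the same hypothetical $12 as 3% of a device or as 60% of a chip is the rhetorical gap the two sides are arguing over; the point above is that the technology’s value lives in the whole system, not in the chip’s bill-of-materials price.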

On the licensing fees argument, fees should be determined by the value the technology imparts to the overall usefulness of the device, not correlated with a single isolated part. Also, the valuation of wireless technology should be market-driven, not arbitrarily or subjectively determined by the FTC or another regulatory authority. If you accept the notion of regulatory price-fixing, then why stop with Intellectual Property (IP)? Why not also regulate the price of smartphones? If you look at the recent price increases, it may not be a bad idea after all! Jokes aside, as witnessed by the spectacular proliferation of smartphones over the last decade, market pricing of wireless technology IP has benefited the mobile industry and consumers.

The value of Qualcomm’s IP has been accepted by most of the industry, as illustrated by more than 300 negotiated licenses. Moreover, after a lengthy investigation by and negotiations with the Chinese regulator, the NDRC (National Development and Reform Commission), Qualcomm agreed to a settlement that included rates deemed fair by the Chinese agency. It is telling that even Chinese OEMs agree that the licensing rates are fair, despite these OEMs having far thinner margins and much smaller scale than Apple, which makes most of the mobile industry’s profits (almost 90% by some estimates). So, Apple’s subjective claim that “license fees are too high” doesn’t pass the sniff test. It is interesting to note that many of the FTC’s witnesses in the trial, such as Huawei, Apple, and Intel, are Qualcomm’s arch-rivals.

Will the FTC case against Qualcomm help or harm consumers?

Let’s examine the premise of this case and how it relates to the FTC’s mission, which is to ensure fair competition so that consumers benefit from wider choices and lower prices.

When you look at the US smartphone market, there are two dominant players, and the others are smaller, emerging players. I believe any negative action by the FTC will further exacerbate this situation by eliminating these smaller players. Wireless innovation is extremely hard, time-consuming, and capital-intensive. Qualcomm invests billions of dollars in R&D every year. A lot of this investment is made very early, years before a market even exists, which means there are significant risks involved. For example, Qualcomm has been investing in 5G since 2014, and commercial devices will only start entering the market in 2019 and 2020. For a company like Qualcomm, the only way to recoup such large, ongoing investments is to license its technology to as many smartphone OEMs as possible. Moreover, most of these OEMs don’t have the money to do their own R&D, and they rely on Qualcomm’s innovations to cost-effectively compete with the big OEMs. This creates a vibrant, highly competitive marketplace that offers consumers a wider range of choices and affordable prices, the ultimate goal of the FTC. A great example of this is 4G LTE, which enabled many new and very innovative smartphone OEMs to enter the market. They are growing stronger and are expected to be formidable competitors in 5G. The virtuous cycle repeats as Qualcomm reinvests large portions of its licensing revenue back into R&D to deliver a continuous stream of innovations.

In the absence of an entity like Qualcomm, most OEMs would be deprived of new technologies. Only a few big OEMs would be able to invest billions in technology development, and it’s unlikely that these vertically integrated players would share most of their technology with others. Most other OEMs would not be able to afford to invest on their own and would probably exit the market. This outcome would be the opposite of the FTC’s mission. If you don’t believe this, look at how aggressively Apple, Samsung, and Huawei have been trying to vertically integrate by either acquiring or building as much of their own technology as possible.

Beware of the consequences

Any attempt to trivialize or delegitimize Qualcomm’s IP and its role in the industry will have a long-lasting impact not only on the smartphone market but on the entire tech industry. If the FTC undermines companies’ ability to earn rewards for their investments, or worse, arbitrarily caps the value of their technology, it will discourage American innovation and severely curtail the flow of capital to those innovations. Small and medium-sized companies, which are the backbone of this innovation engine, will be the most affected. So, in essence, this trial may (unwittingly?) amount to a war on the American innovation engine, and a negative outcome will ultimately hurt American consumers by decimating competition and choice in the marketplace; this is the antithesis of the FTC’s very existence and charter.

Analyzing the long term impacts of FTC’s activist litigation

In all the chaos of allegations, counter allegations, scores of testimonies, rebuttals, cross-examinations, and others, I humbly request that Judge Koh and the FTC pause for a moment and ponder this question: “If Qualcomm loses this case, who will win?” No, it’s not the FTC; the real winner would be China, in the form of its proxy Huawei (and to a lesser extent, Apple).

In my previous article, I explained how the FTC’s activist attempt to fight Qualcomm will result in reduced competition, limited choice, and increased prices, and will ultimately do great harm to consumers and the industry. This is clearly against the FTC’s sworn mission and the very reason for its existence. But the importance of this case goes far beyond the FTC; it goes directly to the core purpose of the United States government itself, which is to protect the lives, assets, and interests of the citizens of this great country. Today, technological advances define the future of countries. Rightly so, the U.S. government has made the protection of its intellectual property one of its main objectives. However, the FTC’s actions run directly counter to that objective.

Qualcomm is a well-oiled innovation engine

As the trial has progressed, a lot of interesting facts have come to light. It is undeniably clear that Qualcomm has been, and continues to be, a well-oiled innovation engine, efficiently cranking out technologies and products. In testimony on Friday, Jan. 25, 2019, Christopher Johnson of Bain & Company reluctantly spilled the beans on the competitive analysis the firm did for Intel. Bain benchmarked investments, execution, and productivity between Intel and Qualcomm, especially pertaining to the development of wireless technologies and products. Its analysis showed that Qualcomm’s investment in SoCs (Systems on Chip) was comparable to Intel’s but produced three times as many products. The report also showed that Qualcomm invested much more than Intel in developing wireless technologies and modems, which are at the heart of all mobile devices and networks.

With Qualcomm’s strong performance, it is no wonder that weaker modem chipset players couldn’t compete and quickly folded. For example, companies such as Broadcom (which had consolidated assets from Renesas and Beceem), ST-Ericsson, and Texas Instruments exited the business. Other players, such as Infineon, were bought by bigger companies like Intel. As a result, the majority of smartphone OEMs, be they newer ones such as Apple, Samsung, LG, and a whole slew of Chinese OEMs, or legacy OEMs such as Motorola, Sony, BlackBerry, and others, ultimately ended up using Qualcomm’s chipsets. In other words, Qualcomm’s strong market position was primarily the result of its clear vision, incredibly talented engineers, and military-precision execution. However, this position didn’t give it the market power alleged by the FTC or make it immune to competition. As proven time and again, the highly competitive mobile market only rewards winners and harshly punishes those that stumble. Nokia’s spectacular fall from its peak is a great example. Specific to Qualcomm, the failure of the Snapdragon 810 chipset, which came after the blockbuster Snapdragon 800, made many OEMs quickly abandon Qualcomm and take their business elsewhere. In the fast-changing mobile industry, market power is a misnomer; only the companies that have the right foresight, investment, and execution survive and thrive.

Down payment for the next-gen technologies

When analyzing the value of cellular IP and modem chipsets, conventional wisdom might be to consider only a company’s share of contributions to the current generation and evaluate accordingly. However, many fail to understand that wireless technology is not static; it is a series of evolutions, with multiple releases within each evolution (G, or generation). For OEMs to be successful, the key is to leverage a steady stream of technologies and solutions that feed multiple generations of products. That means the price they pay for today’s technology also includes a down payment on the next generation of technologies they will need down the road. For example, when OEMs were selling 3G devices in 2006 and 2007, Qualcomm’s R&D engineers were already working on 4G technologies, funded in large part by licensing revenue from all of those OEMs’ devices. And when 4G was growing exponentially in 2014 and 2015, Qualcomm was already heavily reinvesting in 5G. Essentially, Qualcomm has acted like an R&D design house for the entire smartphone industry ever since 2G. It is a virtuous cycle of innovation and reinvestment, one generation after another. What happens if this cycle of innovation and reinvestment is disrupted?

If Qualcomm loses this trial and its ability to recoup investments by licensing technology at market prices is severely curtailed, Qualcomm will undeniably have to reduce investment in risky new technologies. Remember that 5G is still in its infancy, and the industry has a long way to go to achieve its promise of changing the world. As articulated in testimony at the trial, it is not just the investment that matters; Qualcomm’s vision, brain trust, and execution would also be severely hampered. Damage to Qualcomm would create a big void that no other American company may be able to fill, and any public company would face the same challenge of not being able to recoup its investments with fair returns. There are not many companies in the U.S. with the expertise, and fewer still with the efficient horizontal business model of Qualcomm, as made amply clear by Bain’s analysis.

China’s premier technology provider, Huawei, would be more than happy to fill this void, with tacit support from the Chinese government. Unlike publicly traded American companies, Huawei enjoys freedom from worries about access to capital for investment, and it isn’t particularly worried about returning a profit to investors. Remember that advanced information technology is among the top “Made in China 2025” goals set out by the Chinese government. Capitalizing on its current momentum, Huawei would willingly take the world’s R&D crown. And the FTC would unwittingly be handing over the tiara on a silver platter.

The irony is that other parts of the U.S. government, for example, the U.S. Department of Justice, are busy pressuring other governments to keep Huawei at bay over security concerns. The DOJ has even criminally charged Huawei with IP theft and other offenses. Yet the FTC is holding up Huawei as a key, credible witness in undermining Qualcomm, the crown jewel of U.S. innovation. What could you call this travesty? The tragedy of democracy, the lethargy of bureaucracy? No matter what you call it, this is indeed a national disgrace.

It’s been more than a month since arguments rested in the FTC vs. Qualcomm case. Every passing day increases the anxiety of people on both sides of the issue. The media is rife with rumors, leaks, and loud calls for the U.S. government to intervene on national security grounds and take CFIUS-like action.

FTC vs. Qualcomm might seem like any other antitrust case, but in reality the outcome could potentially jeopardize U.S. national security. Qualcomm is the undisputed leader in technologies and R&D that power cellular systems such as 3G, 4G and now 5G. Telecommunication networks are the plumbing that connects the country, and cellular technology is its brain. Any country that wants to control its destiny should own that technology, or at the very least, have significant influence in steering the evolution of its capabilities. If the FTC case seriously damages Qualcomm, China’s Huawei will claim its place and be the global champion of cellular technology.

But, you might ask, hasn’t the government already addressed this issue by banning Huawei in the U.S.? Well, that would be akin to shutting off one faucet in a house while water is free to flow through all of the others. There is much more to cellular technology than just the network infrastructure. Let me explain.

What it takes to be a leader in cellular technology

To be a leader in cellular technology, one needs deep, end-to-end system expertise. One needs years of experience designing new wireless systems, standardizing them, building and enabling a large ecosystem to commercialize them, and continuously evolving them after they launch. Very few companies possess such capabilities; most specialize in one or a few specific areas. For example, companies like Apple focus on devices, while others like Ericsson and Nokia focus on network infrastructure.

The leading companies that have complete systems expertise are Qualcomm and Huawei. (Of course, there is also Samsung; I will discuss that in a later article.) Let’s take a closer look at these leaders, starting with Huawei. The rise of Huawei is worthy of a business school case study. It has meticulously built its businesses, allegedly with strong financial and bureaucratic support from the Chinese government. Huawei realized the importance of cellular technology and standardization and started very early, back in the 2G days. It initially focused on infrastructure products, then strategically expanded into smartphones, and subsequently developed its own platforms for modems, application processors, and neural processors, reportedly even its own operating system, along with other key technologies. Huawei owns virtually all key technologies in the cellular value chain and is also a force to be reckoned with in 5G standardization. No wonder Huawei is considered the crown jewel and a role model for the Chinese government’s global technology ambitions.

On the other side is Qualcomm, which to uninformed eyes might look like any other chipset supplier that can easily be dispensed with and replaced. However, upon closer inspection, one realizes that it is a systems engineering company with deep and unmatched end-to-end wireless competence. Qualcomm gained valuable experience leading the successful commercialization of 2G, 3G, and 4G. The intensity with which the company almost single-handedly drove the acceleration of 5G has clearly shown its capabilities. For 5G, Qualcomm co-developed the full system architecture and design from the ground up, including fundamental technologies and algorithms. Qualcomm’s R&D teams also built complete prototype systems to develop, test, and perfect the technologies that the company contributed to 3GPP to define and standardize 5G. Because of its unwavering focus on engineering and technology instead of glitzy consumer marketing and branding, Qualcomm isn’t a household name, unlike many of its competitors.

Some might then ask: why only Qualcomm? Why can’t other U.S. giants that are much larger and have greater financial wherewithal take on Huawei? When it comes to the mobile industry, other than Qualcomm, there might only be two other companies that could come close — Apple and Intel. Let’s look at them more closely.

Although Apple is the profit leader in smartphones, reportedly raking in almost 80% of all mobile industry profits, it is pretty thin on the cellular technology front. Instead, its strategy has been to optimize existing technologies and bring them into its vertically integrated devices and closed ecosystem. Apple is indeed more focused on developing proprietary technologies that improve user experience and increase the appeal of its devices. Despite being a dominant smartphone player since the 3G days, Apple hasn’t brought any groundbreaking innovations to the cellular ecosystem or cellular standards. The company is never on the leading edge of cellular technology adoption either. Specifically, with 5G, it is more than a year behind almost every other major smartphone OEM, including smaller players such as Xiaomi, Vivo, and Oppo, and far behind rivals Samsung and Huawei. Short of using its bounty of more than $200 billion to buy another wireless technology leader (which could run into serious antitrust scrutiny), Apple would find it very hard, if not impossible, to compete with Huawei in the 5G+ technology race. Even if it developed the necessary competence, Apple’s vertical integration strategy would likely make it keep all IP to itself and not license it to others. I really don’t see the company making a U-turn and becoming the cellular technology torchbearer for the country.

Then there’s Intel, which has ruled the PC industry for many decades. Perhaps because of its apathy toward the cellular industry in its early days (Intel sold the division that built processors for early smartphones to Marvell), the company has never become a force to be reckoned with in cellular. Intel’s heavy bet on WiMAX didn’t pan out; instead, it put the company years behind in LTE. Even after buying Infineon’s wireless unit, a strong modem player of yesteryear, the company still seems to be struggling in wireless. Intel did score a major victory last year by claiming 100% of iPhone modem share, albeit only offering the performance of Qualcomm’s previous generation of modems. To date, Intel’s 5G wireless story is not promising either. It seems to be almost one year and two generations behind its peers. Apple’s recent aggressive stance in growing its own modem competence doesn’t bode well for Intel either. Also, I have lots of doubts about Intel’s end-to-end system capabilities. As a result, I believe Intel is in no position to compete with Huawei.

The bottom line is, Qualcomm is the only safe bet for the U.S. to maintain its edge in 5G and beyond.

What happens if Qualcomm is weakened by an adverse FTC trial ruling?

Qualcomm’s (and the U.S.’s) fate is hanging in the balance, pending the outcome of the FTC trial. One might wonder what would happen if Qualcomm were to lose this case. Qualcomm’s licensing business, which generates the bulk (about two-thirds) of its profits, might be seriously impacted. Without going into hypothetical scenarios, one thing would be certain: the company’s ability to invest in fundamental cellular technology development would be severely curtailed. Its virtuous cycle of technology development and plowing profits back into future technology R&D would come to a screeching halt. U.S. dominance of cellular technology would likely decline rapidly, and eventually end. With a strong market presence and the Chinese Government’s backing, Huawei would be virtually unstoppable and would exert significant influence on the definition of future cellular technologies… and it’s doubtful that it would have the U.S.’s interests and needs at heart.

Most affected would be smaller OEMs. Without substantial resources, or access to cutting-edge technology IP and advanced, high-performance platforms from Qualcomm, they would not be able to compete in the premium tier against vertical players like Apple, Huawei, and Samsung. The premium smartphone market would become an even greater duopoly (Apple and Samsung) in the U.S. and an oligopoly outside it (those two plus Huawei). It’s no wonder that both Apple and Huawei are strong supporters of (and collaborators in) the FTC’s case.

In the end, the real losers will be consumers, who will have no choice but to bend to the whims of these increasingly powerful vertical players… vendors that have already shown a strong affinity for increasing smartphone prices.

So, for the U.S. government, the time to act is now. I hope that saner instincts will prevail, resulting in actions that will protect, preserve, and propel U.S. technology, innovation, and the country’s vital communication infrastructure.

While the final decision in the FTC vs. Qualcomm case has been pending for the last two months, new developments have called the very premise of the FTC’s case into question. The details revealed during the Apple vs. Qualcomm trial and the ensuing settlement are making the pillars of the FTC case crumble. Everybody is eagerly waiting for the FTC’s next move, and wondering how all of this will affect Judge Koh’s final decision, if she eventually has to give one.

One might ask, “What is the relevance of the Apple vs. Qualcomm litigation to the FTC case?” Well, Apple was one of the key witnesses and a major force behind the FTC case. The underlying principles, claims, and counterclaims are the same in both, so much so that Apple’s main arguments presented during its case with Qualcomm were almost verbatim what was put forward in the FTC trial. So, the two cases are undeniably intertwined, and the result of one will affect the other.

FTC’s claims are in serious jeopardy

At a very high level, the majority of the FTC’s allegations can be combined into three claims:

Qualcomm’s licensing practices are not compliant with FRAND (Fair, Reasonable, and Non-Discriminatory) terms, and that has harmed the cellular industry, including Apple
Licensing at the device level is not justified
Qualcomm’s alleged market power, combined with its licensing policies, has harmed competitors such as Intel

Let’s evaluate the merits of each of these claims, especially in the wake of the settlement and the new information it has brought to light.

Apple was one of the strongest forces behind the FTC’s case against Qualcomm. The documents revealed during the Apple vs. Qualcomm case show that the ultimate reason behind Apple’s litigation (including the FTC case) was to reduce its royalty cost. There was no alleged harm. Even during the trial, the FTC failed to produce any concrete evidence of harm to the industry caused by Qualcomm’s licensing practices. Now, Apple signing a long-term licensing contract as part of the settlement clearly shows that Qualcomm’s licensing practices are indeed fair and market driven. Furthermore, the more than one hundred other licensing contracts Qualcomm has signed with OEMs, including majors such as Samsung and LG, prove this point as well. All of this debunks the FTC’s first claim.

As became very apparent during the trial, licensing at the device level is a decades-old industry norm. All Intellectual Property (IP) holders practice it because it is the most efficient and practical way to capture the value of IP. Stipulating a cap on the maximum device price used for license fee calculations makes the practice even more meaningful and fair. As disclosed during the trial, Qualcomm’s licensing fees are up to 5 percent of the wholesale price of the phone, with a device price cap of $400. This license includes a portfolio of more than 130,000 Standard Essential Patents (SEPs) and non-Standard Essential Patents (non-SEPs). For reference, in another related case between Apple and Qualcomm in San Diego, the jury awarded Qualcomm $1.41 per device for just three non-SEPs. That puts into perspective the $7.50 per iPhone that Apple was paying for the entire portfolio before the dispute started. So again, the FTC’s second claim has no merit. On a side note, if you would like to know more about patents and licensing, check out my explainer articles here: Part-1 and Part-2.
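
To make the cap mechanics concrete, here is a minimal sketch in Python of how such a device-level royalty with a price cap would be computed. The 5 percent rate and the $400 cap are the figures disclosed at trial; the example phone prices are invented purely for illustration.

```python
# Hypothetical illustration of a device-level royalty with a price cap.
# The 5% rate and $400 cap are the figures disclosed at trial; the phone
# prices below are invented examples, not actual contract terms.

ROYALTY_RATE = 0.05   # up to 5% of the wholesale device price
PRICE_CAP = 400.00    # wholesale price is capped at $400 for the calculation

def device_royalty(wholesale_price: float) -> float:
    """Royalty owed per device: the rate applied to the capped wholesale price."""
    return ROYALTY_RATE * min(wholesale_price, PRICE_CAP)

for price in (150, 400, 750, 1000):  # example wholesale prices in USD
    print(f"${price:>4} phone -> royalty ${device_royalty(price):.2f}")
```

Note that the calculation tops out at $20 per device no matter how expensive the phone is; actual negotiated figures, such as the roughly $7.50 per iPhone cited above, can be lower still.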

There was no dearth of drama on the day Apple and Qualcomm settled the dispute. The settlement news broke while the opening statements were still being presented in court. Qualcomm’s stock shot up by record levels immediately after the settlement. Mere hours after the settlement news, Intel announced its decision to exit the 5G smartphone modem business. Some might think that Intel’s decision to quit proves the FTC’s claim of harm to competitors. However, closer scrutiny reveals a different story.

By Intel’s own admission, the reason for its decision was Apple signing a multiyear modem supply deal with Qualcomm as part of the settlement. As publicly discussed in many forums, the most likely reason for Apple to ditch Intel in favor of Qualcomm was the realization that Intel wouldn’t be able to meet Apple’s hefty 5G modem needs. This is indeed a major miss by Intel, considering that it is currently the sole modem supplier for Apple’s latest iPhone. Its inability to deliver the right modem solution for such a large and almost guaranteed opportunity clearly shows a profound and fundamental flaw in Intel’s operations and execution strategy. By all counts, 5G was a level playing field for Intel as well as everybody else in the race, including Qualcomm, and Intel failed to deliver. It is reasonable to argue that the same might well have been true with 4G LTE. That means whatever harm the FTC claimed Intel suffered in 4G LTE was because of Intel’s inability to deliver, and not because of Qualcomm’s alleged market power or licensing policies. This proves that the FTC’s third claim is completely flawed as well.

Who stands to benefit from the FTC trial now?

With Apple and Qualcomm settling, and Intel exiting the 5G smartphone modem market and mulling strategic options for its modem business, the question arises, “Who stands to benefit now from the continuation of the FTC case?” The surprising answer is China’s Huawei, which was the FTC’s third collaborator along with Apple and Intel. It is an unfortunate and disgraceful situation when an arm of the US government is directly helping a foreign entity against a US company that is heralded as the country’s 5G leader. This is even more ironic and embarrassing considering that the US government has virtually banned Huawei for national security reasons!

What could be the possible outcome?

With all the major claims of the FTC discredited, its case is in serious jeopardy. As Judge Koh noted during the closing stages of the trial, this is a very complex case with a huge amount of evidence to examine. The hurried summary judgment that Judge Koh gave in the early part of the trial, the radical remedy that the FTC is seeking, and the recent developments complicate the case even further.

The FTC didn’t make a strong case to begin with, and it looks even weaker now. That means it is almost impossible for Judge Koh to give a judgment that would permanently alter a cellular IP licensing regimen that has been practiced for decades. In my view, the only viable option for the FTC now is to settle with Qualcomm and save face, especially considering that anything other than that will help Huawei. I am sure Judge Koh would be happy with that outcome as well. Any other decision will surely be challenged in the appellate court and most likely be overturned.

The telecom industry is still digesting the surprising and far-reaching decision by Judge Koh of the U.S. Northern California District Court. The expansive court order is as hard to digest as it is to comprehend. If you read it thoroughly (yes, I have, all 233 pages!), it seems that Judge Koh had already made up her mind long before the trial, and hand-picked specific points from testimonies, evidence, and circumstances to suit her narrative. However, the battle rages on: Qualcomm is appealing the decision at the U.S. Ninth Circuit Court of Appeals. Meanwhile, the company is requesting a stay from Judge Koh until the appeal is heard. I think this is a mere formality, as I expect Judge Koh to reject the stay request. If and when that happens, Qualcomm will request a stay from the Ninth Circuit Court. While all of these court proceedings play out over the next few months, if not years, it is important to consider the havoc this decision, and the possible denial of a stay, might cause in the market. It is even more crucial because we are at a critical juncture in the global 5G race, and this decision will affect how different companies, and perhaps more importantly, different countries progress.

In my previous article, I had briefly touched upon the question “who might benefit from an adverse decision against Qualcomm.” Since that fear has become a reality now, a more detailed discussion and evaluation of some what-if scenarios is in order.

At the very outset, there is no question that Huawei and China are the biggest beneficiaries. With this legal quagmire, the attention of Qualcomm’s executives and many of its engineers may be divided between trying to prevail in the legal fight and making great technology. This distraction gives Huawei (and, in turn, China) a leg up, allowing it to strengthen its position in 5G. When you dig a little deeper, you realize that, if Qualcomm’s request for a stay is not granted, the situation gets even more dire.

What happens if the stay request is denied?

As I discussed in my previous article, licensing revenues are the lifeblood of Qualcomm’s virtuous cycle of technology development, commercialization, and monetization. Judge Koh’s order threw a monkey wrench into that cycle, exposing almost all of Qualcomm’s licensing contracts to renegotiation risk. Based on news articles, it seems that the recent deals with Apple and Samsung could be safe for some time, but I can’t imagine those two behemoths not trying to use the court’s decision to eke out more concessions from Qualcomm. If you remember, during a separate trial, Qualcomm produced documentary evidence showing how Apple intentionally tried to harm Qualcomm’s licensing business. The bottom line is that every one of Qualcomm’s licensing contracts could be up for grabs. The company’s much-publicized, recent licensing spat with LG offers a glimpse of how convoluted and long these renegotiations could get.

Let’s look at the biggest block of the licensing lot: the Chinese OEMs, which bring in a large portion of Qualcomm’s licensing revenue. Just like LG, all of these OEMs buy chipsets from Qualcomm. That means that, just as LG is trying to do, they might also ask for chipset-based licensing. But most of them, if not all, license Qualcomm’s full portfolio, including cellular SEPs (Standard Essential Patents), non-cellular SEPs (e.g., Wi-Fi and Bluetooth), and non-SEPs. However, the court order only applies to cellular SEPs. Given Judge Koh’s ruling, how would you negotiate a licensing deal that spans all these different kinds of patents? It would seem that the only option would be for Qualcomm and its licensees to examine more than 130,000 patents, one by one, and license them on an a la carte basis. As one can imagine, that would be a herculean task. Taking this insanity further, many of these are system-level patents, which means they may cover more than just the modem or any single chip, and span different parts of the system and software. For example, consider MIMO, an important feature of 4G and 5G: the technology covers not just the modem but also RFICs, antennas, phones, and network equipment. Would patents related to MIMO be licensed based on the price of the modem, the RFIC, the antennas, or the base station? Also, different vendors produce these components. So, would all those vendors have to get licenses for cellular SEPs? So many complex questions with few clear answers!
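
To get a feel for the scale of that patent-by-patent exercise, here is a rough back-of-envelope sketch in Python. The 130,000-patent portfolio size comes from the discussion above; the two-hours-per-patent review time and the 2,000-hour work year are my own assumptions, used purely for illustration.

```python
# Back-of-envelope estimate of the a-la-carte licensing scenario described above.
# Portfolio size is from the article; the per-patent review time is an assumption.

portfolio_size = 130_000        # patents in the portfolio
hours_per_patent = 2            # assumed time to review and value one patent
working_hours_per_year = 2_000  # roughly one full-time person-year

person_years = portfolio_size * hours_per_patent / working_hours_per_year
print(f"~{person_years:.0f} person-years of review per licensee")  # ~130
```

Even with these generous assumptions, a single licensee would face on the order of a hundred person-years of review before any rate negotiation even begins.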

If your head is not yet spinning from the complexity, consider this absurdity: Qualcomm would still be free to license all patents other than cellular SEPs at the device level. This means there could be cases wherein the price of non-SEPs would be higher than that of SEPs, which at some level defies logic! The point is, licensing could get so complex that it might take years to agree on how to structure meaningful contracts. As a side note, look for my next article on the range of absurdities this court order is causing. Also, if you would like to know more about cellular licensing, please read my articles here, here and here.

The real threat of 5G investments getting strangled

Amid the uncertainty of lengthy negotiations and the complexity of restructuring contracts, it is highly likely that many OEMs would be tempted to stop paying royalties. This would be similar to what Huawei is doing during its negotiations with Qualcomm now, and to what Apple did until it settled with Qualcomm in April 2019. Such a large-scale disruption could mean that the revenue stream that feeds Qualcomm’s R&D engine would go dry. The direct casualty of such an outcome would be the development of 5G, and America’s leadership in 5G. As you might know, we are only in the early stages of 5G. A lot of what 5G promises is still under development. All of that requires billions of dollars of investment and multiple years of sustained development, with a long lead time for revenue generation. Any interruption to Qualcomm’s licensing revenue could directly impact its ability to create those inventions, and hence the development of 5G. The world would be at the mercy of China for the future of 5G and for the technologies it is expected to deliver for Industry 4.0 and beyond.

Handing a powerful lever to China in the trade war

The fact that a large portion of Qualcomm’s licensing revenue comes from Chinese OEMs has huge significance when the United States is in a bitter trade war with China. As evident from developments, both countries will use whatever leverage they have to get the upper hand. In such a case, the considerable revenue stream for a strategic American company will surely be weaponized and used as a bargaining chip by China in the broader trade negotiations. It is no secret that the Chinese government wields considerable influence over these OEMs. If you think about it, this is such a potent tool, not only for trade negotiations but also to severely hurt America’s prospects for 5G leadership.

Whose interest is FTC fighting for?

It is abundantly clear that the real and biggest beneficiaries of the FTC’s and Judge Koh’s actions are neither the American People, nor American companies, but ironically, China and Chinese companies. And this too, to the detriment of American 5G leadership and at the expense of an American technology company that has been hailed as a 5G leader by the U.S. Government itself. This is exactly the reason the U.S. Department of Justice voluntarily tried to impress upon Judge Koh that she be cognizant of the implications of her decision for America’s national interests.

On a closing note, I would like to point those who value free markets and fair competition to the recently finalized 5G infrastructure contracts in China. Huawei won the lion’s share of those contracts, clearly showing how the Chinese government protects its companies. Who is there to protect American companies? Far from protecting its own national interests, a U.S. government agency is effectively fighting tooth and nail to hurt a legitimate American company and help the Chinese. What an irony!

Last week’s remarkable decision by a three-judge panel of the United States Court of Appeals for the Ninth Circuit (appellate court) finally brings some common sense into the FTC’s bizarre antitrust case against Qualcomm. The appellate court granted Qualcomm’s request to stay the ruling of the United States District Court for the Northern District of California (lower court), which had far-reaching implications for the entire U.S. patent regimen.

Side note: If you are new to the subject and would like to understand the background, please read my previous articles here, here, here, here and here.

What did the appellate court say?

The court order must have sounded like music to Qualcomm’s ears. Even Qualcomm could not have written it better! Don’t be confused by the title of the court order, which says “partial stay”; Qualcomm actually got all of what it requested, and then some. The tone, the language, the arguments, the choice of phrases and words, the precedents cited, the direct denunciation of the lower court’s decision: everything screams a thumping Qualcomm victory.

First, it says that the application of the Sherman Act (antitrust law) to the case is not accurate, as private businesses have discretion over whom they deal with. That means Qualcomm is free to license its Standard Essential Patents (SEPs) to whomever it chooses — effectively negating the lower court’s order mandating the licensing of SEPs to rival chipmakers on an exhaustive basis.

Second, it acknowledges that there is a stark difference of opinion between the two governmental agencies tasked with enforcing antitrust laws—the FTC and the Department of Justice (DOJ). This is in complete contrast to the lower court’s abject disregard for the DOJ’s request to conduct additional briefings before imposing remedies, and to be mindful of the effects of broad and far-reaching remedies that alter market dynamics and jeopardize national security.

Third, it clearly states that the appellate court is satisfied with Qualcomm’s argument that its practice of licensing only to device OEMs and charging royalties at the device level doesn’t violate any antitrust laws. This is again the opposite of one of the key rulings of the lower court. The appellate court even mentions the extraordinary step taken by sitting FTC commissioner Maureen K. Ohlhausen of publicly expressing her dissent from the theory urged in the complaint and adopted by the lower court.

Fourth, it agrees with Qualcomm’s strong argument that implementing the lower court’s ruling before the appeal is decided would do irreparable harm to its business. This was one of the easiest things to understand for anybody with even a hint of knowledge of the licensing and wireless business. The lower court’s complete disregard for such logical reasoning was appalling to keen observers of this case like me.

Finally, the appellate court concludes that the difference of opinion between FTC and all the other relevant government agencies, including DOJ, Department of Defense, and Department of Energy, warrants the stay be granted. It further points out that these government agencies have opined that the lower court’s adverse action against Qualcomm threatens national security and “has the effect of harming rather than benefiting consumers.”

What’s next?

The biggest kicker in the appellate court’s order is its ridicule of the lower court’s order, questioning whether it is “a trailblazing application of the antitrust laws” or instead “an improper excursion beyond the outer limits of the Sherman Act.”

To be sure, the lower courts are supposed to implement the law based on precedent, and not be trailblazers!

Further, the appeal hearing is scheduled for Jan 2020, much quicker than the usual timeline. The tone of the appellate court’s order, and the decisive and unambiguous way in which the panel struck down all the major aspects of the lower court’s assertions, strongly suggest that the overturn of the lower court’s ruling is imminent. The urgency in scheduling the appeal hearing also indicates the importance the appellate court attaches to this case. Qualcomm filed its long opening brief with the court on Aug 24th, 2019.

Final thoughts

This appellate court decision was a long time coming. Actually, the whole trial was a series of bizarre turns of events: from the judge arbitrarily limiting the evidence period to March 2018 and excluding pertinent evidence thereafter, to the strange explanation for summarily discounting the defendant’s in-court live testimony (because the judge felt that the witnesses looked “prepared”), to using an extremely narrowly defined potential violation to justify an extremely broad and industry-altering remedy, and so on. But fortunately, saner senses have finally prevailed, and justice is being served the right way, albeit delayed. Now all eyes are on the Jan 2020 hearings.

Qualcomm got a reprieve when the United States Court of Appeals for the Ninth Circuit stayed the decision of the United States District Court for the Northern District of California (DC) in its antitrust case. Immediately after the stay, Qualcomm filed its opening brief (175 pages long), which was followed by a flurry of supporting Amicus Briefs (each more than 40 pages) from various companies, the U.S. government, a retired circuit court judge, and groups of experts. While all of them criticize DC’s ruling, two of them chose to remain neutral; all the others were strongly in favor of Qualcomm.

Principal arguments

The briefs supporting Qualcomm strongly condemn DC’s ruling. Their arguments can be summed up into three major themes:

DC either misunderstood or misapplied US antitrust law, as well as the precedents. The proponents claim that Qualcomm’s licensing approach, its “No license, no chips” policy, and its alleged “higher licensing prices” don’t violate the Sherman Act. Also, Qualcomm’s decision to license only to device OEMs is not against the Fair, Reasonable, and Non-Discriminatory (FRAND) principles of Standards Development Organizations (SDOs). Additionally, they claim that neither the FTC nor the court showed any apparent consumer harm.

The remedies imposed by DC are very broad and far-reaching. The ruling applies to every aspect of Qualcomm’s licensing business, including all of its global contracts; in many cases, those are even outside the purview of the FTC or the DC. For example, contracts with Chinese OEMs for devices to be sold only in China are beyond the FTC’s authority.

The ruling creates widespread disruption to a decades-old licensing regimen that has proven to encourage innovation and to be efficient and easy to implement. If licensing based on the Smallest Salable Patent-Practicing Unit (SSPPU) becomes mandatory, that will put almost every existing licensing deal that doesn’t use SSPPU up for renegotiation. The proponents claim that because many patents span multiple functional units, DC’s ruling will create an unfathomable mess of who licenses whom, at what rate, and how.

The focus of each Amicus Brief

All the briefs came with a heavy dose of related precedents. Since the supporters come from different fields, each of them stressed different parts of the argument, as highlighted in the sections below:

U.S. Department of Justice (DoJ):

One of the DoJ’s main points is that an alleged “unreasonably high royalty” is not anti-competitive; on the contrary, it quotes precedent to the effect that high royalties enable “risk-taking that produces innovation and economic growth.”

The DoJ also emphasizes that a Sherman Act violation requires “harm to competition” and not just “harm to competitors,” as alleged by DC. The DoJ ridicules DC’s “misunderstanding” of antitrust law, and also reminds it of CFIUS’s action to block the attempted takeover of Qualcomm for national security reasons.

Judge Paul R. Michel (Ret.) – Served on Circuit Court for more than 20 years

Judge Michel states that SSPPU is a mere tool to avoid jury confusion. He argues that since this was a bench trial, and because of the sheer number of complex patents (~140,000) that cover multiple functional units, the use of SSPPU does not make any sense.

The judge also points to the disastrous outcomes when the SSPPU was mandatorily applied to IEEE standards 802.11ah and ai, which were ultimately rejected by ANSI (American National Standards Institute).

A group of 20 antitrust and patent law professors and experts

These experts, including a retired chief judge of the Federal Circuit Court of Appeals (Randall R. Rader), who came up with the SSPPU concept, point out that antitrust law needs actual proof of harm (e.g., economic analysis), not just “per se” or “theory-driven” arguments. They condemn DC for relying on the discredited theory of Mr. Shapiro (without using his name) and on simplistic documentary evidence, such as emails, instead of concrete economic evidence to establish anti-competitive conduct.

They draw an interesting parallel between the decade-long antitrust crusade against IBM, launched in the closing days of the Johnson administration, and the case against Qualcomm, filed during the last days of the Obama administration. They point out that the DoJ learned its lesson about the ill effects of antitrust overreach after pushing IBM, an American technology jewel, to the brink of bankruptcy, and they warn against repeating it.

International Center for Law & Economics (ICLE)

ICLE, a group which has many antitrust and economics experts, opines that this “case is a prime—and potentially disastrous—example of how the unwarranted reliance on inadequate inferences of anticompetitive effect lead to judicial outcomes utterly at odds with Supreme Court precedent.”

Further, ICLE quotes one of the relevant prior judgments, which seems to uproot the crux of DC’s argument—“The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system.”

Cause of Action Institute (CoA)

CoA, a non-partisan government oversight group, comes down rather heavily on both DC and FTC. It reiterates the words of a sitting FTC commissioner who called this trial “a product of judicial alchemy, which is both bad law and bad public policy.”

Further, CoA asserts that the FTC exceeded its statutory authority in at least four ways, including that DC’s “injunction violates due process and is unenforceable for vagueness.”

Alliance of U.S. Startups & Inventors for Jobs (USIJ)

USIJ states that the cellular industry is one of the most competitive, dynamic, and thriving markets, and that there is no need for regulatory or judicial interference. Instead, it suggests that FRAND complaints and other such concerns can be better resolved using contract and patent law rather than antitrust law. It says that the latter would be akin to using a hammer instead of a scalpel.

It warns that DC’s ruling will discourage companies from participating in standardization, which would itself be anticompetitive and would harm consumers.

InterDigital

InterDigital emphasizes that antitrust law shouldn’t trump innovation, and it points out how the law is being misused to make inventors “accept sub-FRAND royalties.” It also cautions that antitrust overreach will weaken innovative US companies and allow their leadership to be taken over by foreign companies supported by their governments, which may not have the US’s best interests at heart.

InterDigital doesn’t specifically mention whether it supports Qualcomm or not.

Dolby

Dolby comes out strongly in favor of preserving the flexibility of patent holders to decide where in the value chain they license. It insists that this allows innovators to maximize returns on their huge investments and fairly compensates them for the risks they take.

Dolby faults DC for misinterpreting the FRAND commitments to SDOs and suggests that there are no mandatory requirements to license at any specific level or to any specific providers. It also highlights the confusion and havoc that would ensue if the well-established end-product-based licensing, practiced across many industries, were altered in any way.

Dolby only asks for the reversal of DC’s summary judgment instructing Qualcomm to license to rival chipmakers.

Nokia

Nokia points out the difficulties of licensing at the component level, how patents often cover more than a single functional unit, and why SSPPU is not applicable at all. While highlighting these inconsistencies in DC’s decision, it remains neutral.

In closing

There is a striking commonality in what Qualcomm has claimed in its briefing and all the Amicus Briefs coming from this diverse set of experts and in some cases competitors such as InterDigital. That suggests that there indeed is a strong case to be made against DC’s ruling. As I have pointed out in my earlier article, the appellate court seems to agree with many of these assertions as can be gleaned from the stay ruling. I would be highly surprised if the appellate court doesn’t overturn many of the draconian rulings of the DC.

Also, in response to Qualcomm’s brief, the FTC is expected to file its own brief sometime in October or November, and any Amicus Briefs supporting it will follow soon after. Come back to my column here for the latest developments and what they mean.

The stage is set for the Feb 13th, 2020, hearing of the FTC vs. Qualcomm antitrust case at the United States Court of Appeals for the Ninth Circuit (Ninth Circuit). In preparation, the FTC, Qualcomm, and many interested parties have filed briefs for and against the decision by the United States District Court for the Northern District of California (lower court).

In the briefs, the FTC’s subtle change in tactics caught my eye. It seems to have changed its “hero” argument. The FTC is now trying to make Qualcomm’s alleged breach of FRAND (Fair, Reasonable, and Non-Discriminatory) commitments to Standard Setting Organizations (SSOs) its main argument, while treading lightly on its earlier key, albeit discredited, “surcharge on competitors” theory. Is this a sign of the FTC losing confidence in its case? Also, the FRAND breach argument seems to be on shaky ground.

I spent many hours meticulously reading through all the briefs (~1500 pages). They are complex, with lots of legal jargon, illustrations, and citations. Here is a high-level summary of the arguments and my opinions on their effectiveness.

The hypothetical “surcharge on competitors” argument

The FTC and its supporters are still relying on the theory put forward by Prof. Carl Shapiro. They have also provided torturous examples and illustrations. However, this theory was rejected by the US Court of Appeals for the District of Columbia Circuit in a separate case—United States vs. AT&T. The court’s rejection, as stated, was based on the evidence of actual market performance. Interestingly, the two cases have lots of similarities. Just as in the AT&T case, the FTC’s arguments are based only on theory, without any empirical study of actual market conditions. Moreover, the developments in the market completely debunk Prof. Shapiro’s theory. Unfortunately, those developments could not be included in the trial as evidence, because they happened outside the discovery period.

According to the theory, Qualcomm allegedly abused its monopoly power to create an imaginary surcharge on competitors, making their chipsets more expensive. In reality, around 2016, Apple, which had been exclusively using Qualcomm’s chipsets, also started using Intel’s chipsets. This fact virtually nullifies the monopoly power allegation. To a large extent, it also disproves the claim that the alleged imaginary surcharge was disincentivizing competitors. Alas! None of this mattered in the trial because of the stringent discovery timeline.
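
For readers unfamiliar with the theory, here is a toy numerical sketch in Python of how the alleged “surcharge” is supposed to work, as I understand the argument. All the dollar figures are invented purely for illustration and are not from the case record.

```python
# Toy model of the "royalty surcharge" theory, with invented numbers.
# The premise: the OEM pays the per-device royalty regardless of whose modem
# it buys, so any royalty above a hypothetical FRAND level is claimed to act
# like a tax that squeezes what rival chipmakers can charge.

oem_budget_per_device = 35.0   # what the OEM will spend on modem + royalty combined
frand_royalty = 10.0           # hypothetical "fair" royalty level
alleged_royalty = 13.0         # hypothetical elevated royalty

def max_rival_chip_price(royalty: float) -> float:
    """The most a rival chipmaker could charge within the OEM's fixed budget."""
    return oem_budget_per_device - royalty

print(max_rival_chip_price(frand_royalty))    # 25.0
print(max_rival_chip_price(alleged_royalty))  # 22.0 -> the $3 gap is the alleged
                                              # "surcharge" borne by rivals
```

Whether that mechanism actually played out in the market is, of course, exactly what is in dispute.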

The FTC claims that this imaginary surcharge reduced competitors’ profits and hampered their investment in R&D. That seems like a ridiculous argument when you consider that those competitors are behemoths like Intel, and the OEMs are giants like Apple. Looking at all these contradictions, it is clear why the FTC is not pushing this argument as hard as it did in the lower court.

Is “harm to competitors” the same as “harm to the competitive process?”

For claiming antitrust law violations, prosecutors must prove harm to the competitive process. FTC is arguing that Intel being late with CDMA and LTE chipsets, and players such as Broadcom and ST Ericsson exiting the market prove harm to competition. Many experts, including the US Department of Justice (DoJ), argue that such instances as well as companies making less profit show harm to competitors, but not necessarily to the competitive process.

During the trial in the lower court, there was ample evidence presented to explain the reasons behind the problems competitors faced — none instigated by Qualcomm. For example, documents presented by Intel’s strategy consultant Bain and Company attributed Intel’s delay to faulty execution; an executive from ST Ericsson opined that the company couldn’t execute fast enough to keep up with Qualcomm and rapidly lost market share, which resulted in its exit.

The reasons for competitors not faring well in CDMA and being late to LTE were pretty clear to keen industry observers like me. Regarding CDMA, not many chipset vendors were interested in that market, as they thought the opportunity was small and fast diminishing. There were only a couple of large CDMA operators (circa 2006), and with LTE on the horizon, they thought CDMA would quickly disappear. Hence, they never invested in it. Much to their chagrin, CDMA thrived for many years, allowing Qualcomm to enjoy a monopoly. Ultimately, Intel acquired a small vendor—Via Telecom—in 2015 to get CDMA expertise. On the LTE front, nobody foresaw the exponential growth of LTE smartphones. Qualcomm, because of its early investment and cellular standards leadership in LTE, surged ahead, leaving others in perpetual catch-up mode. For example, even after the LTE market had stabilized, Qualcomm chipsets had superior performance.

Alleged practice of “license for chips” policy

The FTC claims that it has factually proven Qualcomm’s alleged “license for chips” policy, under which Qualcomm would only sell its highly coveted chips if the OEM signed a license agreement. Qualcomm disagrees. In my view, the FTC’s evidence is pretty scant and unconvincing. It includes a few emails with some text that alludes to such an intention (license for chips); in many of these emails, the main topic of discussion seems to be something unrelated. There were a couple of testimonies from Qualcomm’s OEMs mentioning how they “felt” the overhang of this policy during negotiations, but they didn’t have any tangible evidence. There was only one concrete instance—an email with a veiled threat. But the evidence presented in response showed that Qualcomm’s top management swiftly dealt with it and condemned any such practice by its lower cadres.

Another of the FTC’s claims concerns an agreement between Qualcomm and Apple, through which Qualcomm paid Apple for a commitment to use its chipsets in a majority of Apple’s devices. The FTC alleges that this amounts to Qualcomm indirectly subsidizing licensing fees, which violates antitrust law. This is also part of the imaginary surcharge-on-competitors argument. Qualcomm claims that, as stated in the contract, the payment was to compensate Apple for the expenses it would incur in modifying its designs to incorporate Qualcomm chipsets, and was a traditional volume discount. When the contract was signed, Apple was already the market leader with multiple successful iPhone models and was using a different vendor’s chipset. That would indicate Qualcomm didn’t possess any monopoly power over Apple. The contract and the payment were revocable, and Apple ultimately did revoke them. So, it is questionable whether the payment can be treated as a subsidy.

Is a FRAND commitment a “duty to deal”?

Now to the new “hero” argument. The FTC claims that Qualcomm’s FRAND commitments to the US-based SSOs bind it to license its Standard Essential Patents (SEPs) to rival chip vendors (aka the “duty to deal”). The SSOs in question are ATIS (Alliance for Telecommunications Industry Solutions) and TIA (Telecommunications Industry Association). The argument is that Qualcomm’s decision not to license rival chipmakers is a violation of antitrust law. Many of the third parties on the FTC’s side overwhelmingly support this argument as well, for obvious reasons. On the surface, this seems like a simple and compelling argument, but it has multiple facets.

First, do these commitments mean holders have to license the patents, or is it enough to provide access to them? Second, does a FRAND violation, if true, amount to an antitrust violation, which is usually a much higher bar? Third, and more interesting: are the patents practiced by the chipsets or by the end devices (e.g., smartphones)? If the latter, then licensing (and any violation) occurs only at the device level, so there is no real need to license chipset vendors. Fourth, consider the policies and practices of the biggest SSO, ETSI (European Telecommunications Standards Institute). ETSI’s policies are considered the gold standard for SSOs. Interestingly, in its decades of history, ETSI has never compelled its members to license rival chipset vendors or to license at the chip/component level. Many current SEP holders, such as Nokia, Ericsson, and others, strongly supported this approach during the trial. Well, I have merely scratched the surface of this argument. Since this is now the FTC’s main argument, it indeed needs close scrutiny, which I will provide in my next article.

If you have been following this case and feel that you have heard these arguments before, you are right! Both sides made these arguments in the lower court and are still sticking to them, except for the FTC’s subtle change. It will be interesting to see how the Ninth Circuit weighs these arguments.

What does FRAND commitment to SSOs mean?

The SSOs in question here are TIA (Telecommunications Industry Association), which developed the CDMA standards, and ATIS (the Alliance for Telecommunications Industry Solutions), which developed the LTE standards. Both organizations require their members to sign the IPR policy document, which includes the FRAND requirements.

TIA has a 24-page IPR Policy document. The portions most relevant to this case are on pages 8 and 9:

(2) (b) A license under any Essential Patent(s), the license rights which are held by the undersigned Patent Holder, will be made available to all applicants under terms and conditions that are reasonable and non-discriminatory, which may include monetary compensation, and only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard

The first part of this section is pretty straightforward. But the last part, “only to the extent necessary for the practice of any or all of the Normative portions for the field of use of practice of the Standard,” is what is at issue here. In layman’s terms, this means the patent holder agrees to give a license for the practice of the standard; in other words, licenses to the applicants whose products practice the standard. Qualcomm argues that devices—and not chipsets—practice the standards. It points to the actual language of the standards as evidence. It is customary for the standards to state, “UE (User Equipment, aka device) shall do this,” or “Base station shall do that,” etc. The standards never state, “Chipset shall do this or that.” Considering that, Qualcomm argues, it is not required to license SEPs to chipset vendors, but only to device vendors. To that effect, it also points out that it has never sued any chipset vendor for patent infringement.

Now, let’s look at the ATIS IPR policy, which is governed by the Patent Policy adopted by ATIS and set forth in the “Operating Procedures for ATIS Forums and Committees,” a 26-page document. The most relevant portions are on pages 10 and 11:

“…Statement from patent holder

Prior to approval of such a proposed ANS, ATIS shall receive from the identified party or a party authorized to make assurances on its behalf, in written or electronic form (b) assurance that a license to such essential patent claim(s) will be made available to applicants desiring to utilize the license for the purpose of implementing the standard. (i) under reasonable terms and conditions that are demonstrably free of any unfair discrimination…”

Again, pointing to the phrase “implementing the standard,” Qualcomm argues that, as written, chipsets don’t implement the standard; the devices do. So, there is no need for it to license to chipset vendors!

Is a violation of an SSO commitment a violation of US antitrust law?

Even if you consider the SSO IPR policies to have been violated, the question becomes: does that amount to a violation of US antitrust law? One argument is that the alleged FRAND violation is a commercial matter and can easily be dealt with through contract and patent law, instead of a policy tool such as antitrust law. In his Amicus Brief in support of Qualcomm, Hon. Judge Paul R. Michel (Ret.) of the US circuit court gave a compelling analogy: “as a general proposition, the hammer of antitrust law is not needed to resolve FRAND disputes when more precise scalpels of contract and patent law are effective.”

Even the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) panel, while granting Qualcomm’s request for a stay, ridiculed the lower court’s ruling as “… a trailblazing application of the antitrust laws or …an improper excursion beyond the outer limits of the Sherman Act..”

Precedent and other considerations

3GPP (3rd Generation Partnership Project), the cellular specifications group, prefers that all SSOs across the world have consistent IPR policies. ETSI (European Telecommunications Standards Institute) is one of the major players among the seven SSOs that are the organizational partners of 3GPP. There has been much discussion at ETSI regarding the issue of component-level licensing, such as licensing to chipset vendors. But ETSI has never stated that it supports or requires its members to offer component-level licensing. So, the lower court decision creates an inconsistency between ATIS, ETSI, and other SSOs, with impacts that go far beyond this case.

More than two decades of cellular patent licensing history prove that device-level licensing works smoothly and efficiently. Although the discussions related to this case are mostly about modem chipsets, a typical device has hundreds of different components. If licensing were brought down to the component level, it would be a logistical and legal nightmare for OEMs to understand and negotiate separate licenses with all those vendors, as I explained in this article. Also, probably every existing cellular IPR contract would have to be rewritten.

Final thoughts

So far, there have been only a few minor cases in the telecom industry regarding the violation of FRAND commitments. The FTC’s case against Qualcomm is the first major case in which the relevance of FRAND commitments to antitrust law is being tested. The decision in this trial will be a defining moment in the “component vs. device-level” licensing debate. Qualcomm seems to have strong arguments, and the earlier Ninth Circuit panel agreed with most of them. But the appeals hearing now has a new panel of judges, which brings a new set of uncertainties to the case. As promised before, I will be there in person to witness the appeals hearing of this historic case.

The title best describes the current situation after the recent hearing in the more-than-year-long saga between the FTC and Qualcomm. On Feb 13th, 2020, a three-judge panel of the US Court of Appeals for the Ninth Circuit (Ninth Circuit) heard Qualcomm’s appeal to reverse the ruling of the US District Court of Northern California (lower court). During the hearing, the panel asked the FTC a lot of skeptical questions regarding its position, arguments, and precedents, probed Qualcomm’s stance, and almost snubbed the US Department of Justice (DoJ). Although the judges appeared confused in the beginning, they seemed to have grasped the main points toward the end. Based on the verbal and non-verbal communications of the judges, Qualcomm definitely had a more positive day than the FTC.

I was fortunate enough to be in the courtroom to witness the hearing. The appeals panel consisted of three judges: Judge Callahan, Judge Rawlinson, and Judge Murphy III. Being in front of them, I was able to observe lots of their non-verbal cues, such as subtle changes in mood and facial expressions, inaudible grunts, and how keenly they were listening to whose arguments, which many people watching online might have missed.

With only about 50 minutes allocated to the hearing, both parties focused only on the main points. What caught my eye was that during Qualcomm’s arguments, the judges were mostly in listening mode, only prodding Qualcomm for clarifications. But during the FTC’s time, they were more skeptical, often questioning and challenging the FTC counsel’s assertions, and mostly in “so what” mode. This is unlike other appeals cases, where the appellant (Qualcomm in this case) usually faces more scrutiny.

Duty to Deal

The FTC massively hurt its case by conceding that Judge Koh had erred in citing the Aspen Skiing case as the precedent for the “duty to deal,” i.e., the ruling that Qualcomm has a duty to license its patents to competitors. Judge Callahan even went to the extent of saying that the house of cards, i.e., the FTC’s case, starts to fall if the Aspen card is pulled out. Qualcomm obviously had a field day with it, quoting the lower court’s argument that the “duty to deal” was one leg of a three-legged stool, and that with it gone, the case couldn’t stand (literally and figuratively). The FTC’s alternative precedents, the Caldera and United Shoe Company cases, and its argument about Qualcomm breaching FRAND commitments to Standards Setting Organizations (SSOs) didn’t seem to impress the panel. So, I am positive that this ruling will be reversed.

“No license no chips” policy

This argument confused the heck out of the judges. Multiple times, Judge Callahan asked for and received confirmation that Qualcomm was not accused of a “No chips, no license” policy, which obviously would be anticompetitive conduct. She even suggested that Judge Koh of the lower court was probably confused about that as well! In other words, she didn’t think “No license, no chips” was anti-competitive. There was a clear difference of opinion between the FTC’s and Qualcomm’s counsels on how OEMs expressed their views on the policy. The FTC said that many witnesses from smartphone OEMs had testified about paying higher royalties because of the risk of not getting chips. On the other hand, Qualcomm said that there was only one such witness, from one OEM, in a non-monopoly market. To my recollection, OEMs mostly expressed that they felt such a policy existed, but never showed any evidence of Qualcomm practicing it. So, obviously, the panel will have to look at the actual testimonies to make its determination. Also, the discussion was not about whether this policy itself was illegal, but about the use of this policy to create the alleged surcharge on competitors.

Surcharge on competitors

If the “No license, no chips” discussion was confusing, the torturous surcharge hypothesis knocked the wind out of the judges! Judge Murphy even said that he was having a hard time keeping up with all these things! I don’t blame them. Most of the FTC’s time was spent making the judges understand what the FTC calls a surcharge, how it affects competition in its view, etc. As expected, the panel challenged this claim from multiple angles—precedent, market evidence, harm to competition vs. harm to competitors, etc.—and tried to poke holes in the FTC’s position.

Here are some of the notable questions and challenges. Judge Rawlinson asked, “… what would be wrong with that (higher royalty fees), doesn’t the Supreme Court say that patent holders have the right to price their patents, what would be anticompetitive about that?” and “… What case says that it is anti-competitive to move (cost) from chip to patent?” Judge Callahan asked, “Why did the OEMs say it’s unfair because they have to buy a license anyway?”; “… who is a Goliath here, Apple is more of a Goliath than Qualcomm”; “… your argument that Qualcomm’s licensing fees increase rivals’ cost doesn’t make sense to me…”; “There seems to be… a conflation of profitable and anti-competitive (one means the other).”; and “… weren’t there multiple competitors entering the … market successfully beginning around 2015, leading to a precipitous decline in Qualcomm’s market (share)?” Judge Murphy III asked, “… why don’t we let OEMs exercise their right in patent law to file (cases for) predatory pricing, abuse of monopoly, etc. (instead of antitrust law)?” These were mere samples.

The panel was unconvinced and will most likely remain so even after looking at the documents.

Chip volume incentives or royalty discount

This issue was not discussed as much as the others but was used as a basis for other arguments. The FTC claims that Qualcomm’s volume discount to Apple is exclusionary and anti-competitive. Qualcomm, during its rebuttal, argued that the license and the chipset supply are two separate contracts and that it doesn’t make sense to combine them. Again, this is another issue where the judges will have to look at the documentation and decide.

Is the “Threat to national security” argument justified?

This is the first time that the DoJ and the FTC have been on opposite sides of a case. Qualcomm ceded five minutes of its time to the DoJ. The DoJ’s major claim is that the lower court’s global and expansive remedy harms national security. Judge Murphy seemed hostile toward the DoJ and asked whether it had any market analysis or financial evidence to prove the claim. The DoJ counsel, although startled by the question, came back with a reasonable explanation: the basis for the case was 3G and 4G, but applying the remedy to 5G would negatively affect the country’s standing in 5G. With 5G being such a crucial technology for so many aspects of the country, the DoJ and other government departments (the Department of Defense and the Department of Energy) are convinced that implementing the ruling would harm the country. The FTC counsel was quick to capitalize on Judge Murphy’s skepticism and discount the security concern as a simple abstraction without any supporting studies.

I am not sure whether the panel will consider the security question seriously.

What does all this mean?

You have to consider that the hearing is only one part, albeit an extremely important one, of resolving the case. The court will examine all the briefs and case documentation before making a final decision. One could argue that the cues from the hearing may be overblown; for example, all those questions and challenges could just be the judges probing both parties to fully understand their stances. However, specific things, such as the judges’ difficulty in fully grasping the FTC’s argument and understanding its point of view, clearly indicate that they don’t believe those arguments and are not taking them at face value. It also suggests that the FTC’s arguments are not as robust as the lower court thought they were.

From Qualcomm’s perspective, after a clear win with the stay, this hearing turned out to be very positive. The FTC had a major initial setback because of its reversal on Aspen Skiing, but it at least made the panel understand its arguments. Whether the panel agrees with them or not is a separate matter. In my view, Judge Callahan and Judge Rawlinson seem to be aligned with Qualcomm’s arguments, and Judge Murphy seems to be neutral or slightly aligned with the FTC’s arguments. Ultimately, as Judge Murphy III succinctly put it, “anticompetitive behavior is illegal… hyper-competitive behavior is not… this case asks us to draw the line between the two.” Meaning, the judges have to decide whether Qualcomm’s behavior was anticompetitive or merely hyper-competitive.

What’s next?

There is no fixed timing for the Ninth Circuit’s decision. The expectation is six to twelve months. The decision doesn’t have to be unanimous, meaning, only two of the three judges have to agree.

In terms of outcome possibilities, the panel could completely knock down all of the lower court’s rulings, fully uphold them, or do anything in between. It could agree with some parts of the ruling and reverse others, or make a determination on some and send others back to the lower court to reconsider. No matter what the panel decides, either party can request a full panel review, which involves all of the 20+ judges at the Ninth Circuit, and then further knock on the Supreme Court’s door. If Qualcomm loses, especially on the claims that affect its licensing policy, I am sure it will go to the Supreme Court. On the other hand, if the FTC loses, it might ask for a full panel review and let it go after that.

As it stands today, I think Qualcomm is in a pretty good situation and more likely to win than the FTC.

Please make sure to sign up for our monthly newsletter at TantraAnalyst.com/Newsletter to get updates on this trial as well as the telecom industry at large.

The Chronicles of 3GPP Rel. 17

Have you ever felt the joy and elation of being part of something that you have only been observing, reading about, writing about, and admiring for a long time? Well, I experienced that when I became a member of 3GPP (3rd Generation Partnership Project) and attended RAN (Radio Access Network) plenary meeting #84 last week in the beautiful city of Newport Beach, California. The RAN group is primarily responsible for coming up with the wireless, or radio interface, related specifications.

The timing couldn’t have been more perfect. This specific meeting was, in fact, the kick-off of the 3GPP Rel. 17 discussions. I have written extensively about 3GPP and its processes on RCR Wireless News; you can read all of those articles here. Attending the first-ever meeting on a new release was indeed very exciting. I will chronicle the journey of Rel. 17 through a series of articles here on RCR Wireless News, and this is the first one. I will report the developments and discuss what they mean for the wireless industry as well as the many other industries 5G is set to touch and transform. If you are a standards and wireless junkie, get on board and enjoy the ride.

3GPP Rel. 17 is coming at an interesting time. It follows the much publicized and accelerated Rel. 15, which introduced 5G, and Rel. 16, which put down a solid foundation for taking 5G beyond mobile broadband. Naturally, the interest now is in what more 5G can do. The Rel. 17 kick-off meeting, as expected, was a symposium of great ideas and a long wish list from prominent 3GPP members. Although many members submitted proposals, only a few, selected through a lottery system, got the opportunity to present in the meeting. Nokia, KPN, Qualcomm, the Indian SSO (Standard Setting Organization), and a few others were among those who presented. I saw two clear themes in most of the proposals: first, keeping enough of 3GPP’s time and resources free to address urgent needs stemming from the nascent 5G deployments; second, addressing the needs of the new verticals/industries that 5G enables.

Rel. 17 work areas

There were a lot of common subjects in the proposals. All of those were consolidated into four main work areas during the meeting:

Topics for which the discussion can start in June 2019: The main topics in this group include mid-tier devices such as wearables without extreme speeds or latency, small data exchange during the inactive state, D2D enhancements going beyond V2X for relay-kind of deployments, support for mmWave above 52.6 GHz, Multi-SIM, multicast/broadcast enhancements, and coverage improvements.
Topics for which the discussion can start in September 2019: These include Integrated Access Backhaul (IAB), unlicensed spectrum support and power-saving enhancements, eMTC/NB-IoT in NR improvements, data collection for SON and AI considerations, high-accuracy and 3D positioning, etc.
Topics with broad agreement that can be directly proposed as Work Items or Study Items in future meetings: 1024 QAM and others.
Topics that don’t have wider interest or that were proposed by only one or a few members.

As the chair emphasized many times, the objective of forming these work areas was only to facilitate discussions among the members to come to a common understanding of what is needed. The division into the June and September timeframes was purely logistical and doesn’t imply any priority between the two groups. Many of the September work areas would be enhancements to items still being worked on in Rel. 16. Also, spacing them out better spreads the workload. Based on how the discussions pan out, the work areas could be candidates for Work Items or Study Items in the December 2019 plenary meeting.

Two specific topics caught my attention: first, making 5G even more suitable for XR (AR, VR, etc.), and second, AI. The first one makes perfect sense, as XR evolution will have even more stringent latency requirements and will need distributed processing capability between the device and the edge cloud. However, I am not so sure about AI. I don’t know how much scope there is to standardize AI, as it doesn’t necessarily require interoperability between devices of different vendors. Also, I doubt companies would be interested in standardizing AI algorithms, which would minimize their competitive edge.

Apart from the technical discussions, there were questions and concerns regarding how to follow the US Government order banning Huawei. This was the first major RAN plenary meeting after the executive order imposing the ban was issued. From the discussions, it seemed like “business as usual.” We will know the real effects when the detailed discussions start in the coming weeks.

On a closing note, many compare the standardization process to watching a glacier move. On the contrary, I found it to be very interesting and amusing, especially how the consensus-building process among competitors and collaborators works. The meeting was always lively, with a lot of arguments and counter-arguments. We will see whether my view changes in the future! So, tune in to updates from future Rel. 17 meetings to hear about the progress.

I just returned from a whirlwind session of 3GPP RAN Plenary #86, held in the beautiful beach town of Sitges, Spain. The meeting finalized a comprehensive package with more than 30 Study and Work Items (SIs and WIs) for Rel. 17. With a mix of new capabilities and significant improvements to existing features, Rel. 17 is set to define the future of 5G. It is expected to be completed by mid-to-late 2021.

Although the package looks like a laundry list of features, it gives a window into the strategy and capabilities of the different member companies. Some are keen on investing in new, path-breaking technologies, while others are looking to optimize existing features or are working on fringe or very specific areas.

The Rel. 17 SI and WIs can be divided into three main categories.

Blazing a new trail

These are the most important new concepts being introduced in Rel. 17 that promise to expand 5G’s horizon.

XR (SI) – The objective is to evaluate and adopt improvements that make 5G even better suited for AR, VR, and MR. It includes evaluating a distributed architecture that harnesses the power of edge-cloud and device capabilities to optimize latency, processing, and power. Lead (aka Rapporteur) – Qualcomm

NR up to 71 GHz (SI and WI) – This is in the new category because of a twist. The WI is to extend the current NR waveform up to 71 GHz, and the SI is to explore new and more efficient waveforms for the 52.6 – 71 GHz band. Lead – Qualcomm and Intel

NR-Light (SI) – The objective is to develop cost-effective devices with capabilities that lie between full-featured NR and Low Power Wireless Access (e.g., NB-IoT/eMTC), for example, devices that support tens or hundreds of Mbps vs. multi-gigabit speeds. The typical use cases are wearables, Industrial IoT (IIoT), and others. Lead – Ericsson

Non-Terrestrial Network (NTN) support for NR & NB-IoT/eMTC (WI) – A typical NTN is a satellite network. The objective is to address verticals such as mining and agriculture, which mostly operate in remote areas, as well as to enable global asset management transcending continents and oceans. Lead – MediaTek and Eutelsat

Perfecting the concepts introduced in Rel. 16

Rel. 16 was a short release with an aggressive schedule. It improved upon Rel. 15 and brought in some new concepts. Rel. 17 aims to make those new concepts well-rounded.

Integrated Access & Backhaul – IAB (WI) – Enable cost-effective and efficient deployment of 5G by using wireless for both access and backhaul, for example, using relatively low-cost and readily available millimeter wave (mmWave) spectrum in IAB mode for rapid 5G deployment. Such an approach is especially useful in regions where fiber is not feasible (hilly areas, emerging markets). Lead – Qualcomm

Positioning (SI) – Achieve centimeter-level accuracy, based only on cellular connectivity, especially indoors. This is a key feature for wearables, IIoT, and Industry 4.0 applications (a toy sketch of the underlying idea follows at the end of this group of items). Lead – CATT (NYU)

Sidelink (WI) – Expand the use cases from V2X-only to public safety, emergency services, and other handset-based applications by reducing power consumption and latency and improving reliability. Lead – LG

Small data transmission in “Inactive” mode (WI) – Enable such transmissions without going through the full connection set-up, to minimize power consumption. This is extremely important for IIoT use cases such as sensor updates, as well as for smartphone chat apps such as WhatsApp, QQ, and others. Lead – ZTE

IIoT and URLLC (WI) – Evaluate and adopt any changes that might be needed to use the unlicensed spectrum for these applications and use cases. Lead – Nokia
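
To make the positioning item above a bit more concrete, here is the basic idea behind network-based positioning: the device’s distance differences to several base stations at known locations are derived from time-difference-of-arrival measurements, and the position is solved with least squares. The Python sketch below is a deliberately simplified 2D toy with invented coordinates; the actual 3GPP positioning methods and measurements are far more sophisticated.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2D positioning sketch based on range differences (TDOA times the speed
# of light). Base-station coordinates and the "true" device position are
# invented values used only for this illustration.
bs = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_pos = np.array([123.0, 321.0])

# Simulated measurements: distance differences relative to the first station
dist = np.linalg.norm(bs - true_pos, axis=1)
rdoa = dist - dist[0]

def residuals(p):
    # Mismatch between predicted and measured range differences for position p
    d = np.linalg.norm(bs - p, axis=1)
    return (d - d[0]) - rdoa

# Solve for the device position starting from a rough initial guess
est = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print(f"Estimated position: {est.round(2)}")   # should land near (123, 321)
```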

Fine-tuning the performance of basic features introduced in Rel. 15

Rel. 15 introduced 5G, and its primary focus was enabling enhanced Mobile Broadband (eMBB). Rel. 16 enhanced many of the eMBB features, and Rel. 17 is now trying to optimize them even further, especially based on the learnings from the early 5G deployments.

Further enhanced MIMO – FeMIMO (WI) – This improves the management of beamforming and beamsteering and reduces the associated overheads (see the beamsteering sketch after this group of items). Lead – Samsung

Multi-Radio Dual Connectivity – MRDC (WI) – Mechanism to quickly deactivate unneeded radio when user traffic goes down, to save power. Lead – Huawei

Dynamic Spectrum Sharing – DSS (WI) – DSS had a major upgrade in Rel 16. Rel 17 is looking to facilitate better cross-carrier scheduling of 5G devices to provide enough capacity when their penetration increases. Lead – LG

Coverage Extension (SI) – Since many of the spectrum bands used for 5G will be higher than those used for 4G (even in sub-6 GHz), this will look into extending 5G coverage to balance the difference between the two. Lead – China Telecom and Samsung
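
As a quick illustration of what beamsteering means in the FeMIMO item above: each antenna element applies a phase weight so that its signal adds up constructively in the intended direction. The NumPy sketch below is a toy example for a uniform linear array with invented parameters (8 elements, half-wavelength spacing, a 30-degree target); it is not drawn from the 3GPP specifications.

```python
import numpy as np

# Toy beamsteering example for a uniform linear array (ULA). The parameters
# (8 antennas, half-wavelength spacing, 30-degree target angle) are arbitrary
# illustrative assumptions, not values from the 3GPP specifications.
num_antennas = 8
spacing = 0.5                      # element spacing in wavelengths
target_deg = 30.0                  # direction we want the beam to point

n = np.arange(num_antennas)

def steering_vector(angle_deg):
    # Relative phase seen by each element for a signal arriving from angle_deg
    return np.exp(-1j * 2 * np.pi * spacing * n * np.sin(np.deg2rad(angle_deg)))

# Choosing the weights equal to the steering vector of the target direction
# makes the per-antenna signals add up coherently toward that direction.
weights = steering_vector(target_deg)

# Sweep candidate directions and check where the array gain peaks
angles = np.linspace(-90, 90, 181)
gain = np.array([np.abs(np.vdot(weights, steering_vector(a))) for a in angles])
print(f"Beam peaks at {angles[np.argmax(gain)]:.0f} degrees")   # expect ~30
```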

Along with these, many other SIs and WIs, including Multi-SIM, RAN Slicing, Self-Organizing Networks, QoE Enhancements, NR Multicast/Broadcast, UE power saving, etc., were adopted into Rel. 17.

Other highlights of the plenary

Unlike in previous meetings, there were more delegates from non-cellular companies this time, and they participated very actively in the discussions as well. For example, a representative from Bosch was a passionate proponent of automotive needs in the Sidelink enhancements. I also spoke with people who facilitate the discussion between 3GPP and the industry body 5G Automotive Association (5GAA). This is an extremely welcome development, considering that 5G will transform these industries. Incorporating their needs at the grassroots level, during the standards definition phase, allows the ecosystem to build solutions that are market-ready for rapid deployment.

There was a rare, very contentious debate in a joint session between the RAN and SA groups. The debate was whether to set the RAN SI and WI completion timeline to 15 months, as currently planned, or extend it to 18 months. The reason for the latter is that TSG-SA is late with Rel. 16 completion and consequently lagging on Rel. 17. Setting an 18-month completion target for RAN would allow SA to catch up and align both groups to finish Rel. 17 simultaneously. However, RAN, which runs a tight ship, is not happy with the delay. Even after a lengthy discussion, the issue remains unresolved.

It would be remiss of me not to mention the excellent project management skills exhibited by the RAN chair, Mr. Balazs Bertenyi of Nokia Bell Labs. Without his firm, yet logical and unbiased, decision making, it would have been impossible to finalize all of this in a short span of four days.

In closing

Rel. 17 is a major release in the evolution of 5G that will expand its reach and scope. It will 1) enable new capabilities for applications such as XR; 2) create new categories of devices with NR-Light; 3) bring 5G to new realms such as satellites; and 4) make possible the Massive IoT and Mission Critical Services vision set out at the beginning of 5G, while also improving on the excellent start 5G has gotten with Rel. 15 and eMBB. I, for one, feel fortunate to witness its transformation from concept to completion.

With the COVID-19 novel coronavirus wreaking havoc and upsetting everybody’s plans, the question on the minds of many people who follow standards development is, “How will it affect the 5G evolution timeline?” The question is even more relevant for Rel. 16, which is expected to be finalized by June 2020. I talked at length about this with two key leaders of the industry body 3GPP—Mr. Balazs Bertenyi, Chair of the RAN TSG, and Mr. Wanshi Chen, Chair of the RAN1 Working Group (WG). The message from both was that Rel. 16 will be delivered on time. The Rel. 17 timeline, however, is a different story.

3GPP meetings are spread throughout the year. Many of them are large, conference-style gatherings involving hundreds of delegates from across the world. WG meetings happen almost monthly, whereas TSG meetings are held quarterly. The meetings are usually distributed among the major member regions, including the US, Europe, Japan, and China. In the first half of this year, there were WG meetings scheduled in Greece in February, and in Korea, Japan, and Canada in April, as well as TSG meetings in Jeju, South Korea, in March. But because of the virus outbreak, all those face-to-face meetings were canceled and replaced with online meetings and conference calls. As it stands now, the next face-to-face meetings will take place in May, subject to how the virus situation develops.

Since 3GPP runs on consensus, the lack of face-to-face meetings certainly raises concerns about the progress that can be made, as well as the possible effect on the timelines. However, the duo of Mr. Bertenyi and Mr. Wanshi are working diligently to keep the well-oiled standardization machine going. Mr. Bertenyi told me that although face-to-face meetings are the best and most efficient option, 3GPP is making elaborate arrangements to replace them with virtual means. They have adopted a two-step approach: 1) further expand the ongoing email-based discussions; 2) hold multiple simultaneous conference calls mimicking the actual meetings. “We have worked with the delegates from all participant countries to come up with a few convenient four-hour time slots, and will run simultaneous online meetings/conference calls and collaborative sessions to facilitate meaningful interaction,” said Bertenyi. “We have stress-tested our systems to ensure their robustness to support a large number of participants.”

Mr. Wanshi, who leads the largest working group, RAN1, says that they have already completed a substantial part of the Rel. 16 work and have achieved functional freeze. So, the focus is now on the RAN2 and RAN3 groups, whose work is in full swing. The current schedule is to achieve what is called the ASN.1 freeze in June 2020. This milestone establishes a stable specification baseline from which vendors can start building commercial products.
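
To give a feel for why the ASN.1 freeze matters: the protocol messages exchanged between the device and the network are defined in ASN.1 notation, and once that definition is frozen, every vendor can build encoders and decoders that interoperate. The Python sketch below uses the pyasn1 library with a made-up toy message purely for illustration; real 3GPP messages are far richer and use PER encoding rather than the DER shown here.

```python
# Toy illustration of why a frozen message definition matters: both ends must
# agree on this exact structure for their encoders/decoders to interoperate.
# This is a made-up message, NOT an actual 3GPP/RRC definition.
from pyasn1.type import univ, namedtype
from pyasn1.codec.der import encoder, decoder

class ToyCapabilityInfo(univ.Sequence):
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('transactionId', univ.Integer()),
        namedtype.NamedType('supportsNrLight', univ.Boolean()),
    )

msg = ToyCapabilityInfo()
msg['transactionId'] = 7
msg['supportsNrLight'] = True

wire_bytes = encoder.encode(msg)              # what one side transmits
decoded, _ = decoder.decode(wire_bytes, asn1Spec=ToyCapabilityInfo())
print(decoded.prettyPrint())                  # what the other side recovers
```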

It’s reasonable to say that, notwithstanding any further disturbances, Rel. 16 will be finalized on time. However, things are not as certain for Rel. 17. Mr. Bertenyi stated that, based on the meeting cancellations, it seems inevitable that the Rel. 17 completion timeline will shift by three months, to September 2021.

It goes without saying that these plans are based on the current state of affairs in the outbreak. If the situation changes substantially, all the plans will go up in the air. I will keep monitoring the developments and report back. Please make sure to sign up for our monthly newsletter at TantraAnalyst.com/Newsletter to get the latest on standardization and the telecom industry at large.