Tuesday, October 31, 2017

FCC to Release More Millimeter Wave Spectrum

It is hard to envision how the U.S. access business might change as huge amounts of new millimeter wave spectrum are made available for use. But consider that all U.S. mobile spectrum amounts to less than 800 MHz, while all Wi-Fi likewise amounts to less than 800 MHz of capacity.

Now consider that the U.S. Federal Communications Commission already is moving to add 11 GHz of new spectrum for wireless use (largely 5G). Even in raw physical terms, and ignoring the actual information-carrying capacity of the millimeter systems, the new spectrum already envisioned for deployment is more than an order of magnitude (10 times) greater than all available mobile capacity.

When combined with advanced radio techniques, small cell architectures and the inherently greater symbol-carrying capacity of millimeter-range signals, perhaps two orders of magnitude of usable capacity (100 times) will be possible, compared to today's mobile business.
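A back-of-envelope check of the order-of-magnitude comparison (figures are the approximate totals cited above, not official tallies):

```python
# Rough arithmetic behind the order-of-magnitude claim; the inputs are
# the approximate totals cited in the text, not official FCC tallies.
mobile_mhz = 800          # all licensed U.S. mobile spectrum, roughly
new_mmwave_mhz = 11_000   # new spectrum the FCC is moving to add (11 GHz)

ratio = new_mmwave_mhz / mobile_mhz
print(f"New mmWave spectrum is about {ratio:.0f} times all mobile spectrum")
```

Even before counting the gains from small cells and advanced radios, the raw ratio exceeds ten to one.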

It therefore is easy enough to predict that large swaths of existing markets could be disrupted. It is a simple matter of supply and demand, compounded by the huge amounts of unlicensed spectrum to be made available, which will change the economics of networks.

Availability of prodigious amounts of unlicensed spectrum, plus new radios, will allow at least some enterprises to build their own private networks.

In principle, that could remove some amount of the addressable public communications market, much as content providers such as Google and Facebook now build and operate their own undersea networks, removing huge amounts of demand from the public undersea capacity market.

At the same time, fixed wireless networks will emerge as direct substitutes for mass market cabled networks, for the first time, in urban markets. That should change the business case for gigabit access networks.

At the same time, for many use cases, mobile networks will emerge as full substitutes for fixed network services, for the first time.

Also, the new networks will provide the density, latency performance and bandwidth to support any number of new use cases related broadly to the internet of things, artificial intelligence and edge computing.

You can credit Moore’s Law for allowing commercial use of millimeter wave communications in the mass markets, bringing an order of magnitude (10 times) to two orders of magnitude (100 times) more usable mobile and fixed wireless spectrum to the U.S. market.

The huge trove of new capacity, dwarfing all present mobile and Wi-Fi spectrum, will reshape the economics of the access business, allowing new competitors and business models to arise, revaluing spectrum licenses, enabling fixed wireless to compete with fixed networks and positioning mobile networks as full product substitutes for  fixed networks for the first time.

At its November 2017 meeting, the Federal Communications Commission will vote on an order that would make 1,700 MHz of additional terrestrial wireless spectrum available for use, adding to the 11 GHz of spectrum the FCC earlier had made available for flexible terrestrial wireless purposes, largely expected to support 5G use cases.

The additional 1,700 MHz of high-band spectrum will be made available in the 24 GHz and 47 GHz bands.

As part of its Spectrum Frontiers initiative, the FCC already had started work to release new spectrum in the 28 GHz, 37 GHz, 39 GHz and 64 GHz to 71 GHz bands.

Though 3.85 GHz of that 11 GHz would be made available on a licensed basis, 7 GHz would be available to use on an unlicensed basis.

Spectrum in the 28 GHz, 37 GHz and 39 GHz bands (3.85 GHz total) represents more than four times the amount of flexible-use spectrum the FCC has licensed to date. In the 37 GHz and 39 GHz bands, licenses would be issued in 200-MHz blocks, with a total of 2.4 GHz available.

In the 28 GHz band, two 425 MHz spectrum blocks will be available, on a nationwide basis.

The 7 GHz of new unlicensed spectrum, combined with the existing high-band unlicensed spectrum (57-64 GHz), doubles the amount of high-band unlicensed spectrum to 14 GHz of contiguous unlicensed spectrum (57-71 GHz).

That 14 GHz band will be 15 times as much as all unlicensed Wi-Fi spectrum in the lower bands.
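The band arithmetic behind that doubling, using the band edges cited above:

```python
# Checking the contiguous unlicensed high-band totals described in the text.
existing_ghz = 64 - 57   # 57-64 GHz band, already unlicensed: 7 GHz
new_ghz = 71 - 64        # new unlicensed spectrum extends the band to 71 GHz: 7 GHz
total_ghz = 71 - 57      # contiguous 57-71 GHz block

print(existing_ghz, new_ghz, total_ghz)  # 7 7 14
```

The new 7 GHz exactly matches the existing 7 GHz, which is why the order doubles, rather than merely extends, the high-band unlicensed allocation.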

Also planned: shared access in the 37 GHz to 37.6 GHz band makes available 600 MHz of spectrum for dynamic shared access between different commercial users, and commercial and federal users.

Saturday, October 28, 2017

New Technology Wrecks Forecasts

This comparison of actual versus predicted fixed network telephone lines by the International Telecommunications Union illustrates the perils of forecasts. In a market not disrupted by new technology, and especially not a new technology of the “general purpose technology” variety, adoption of any popular technology might well take the shape of a normal “Bell curve.”


Which is to say, in the early going, there is a longish period of gestation and adoption, followed by an inflection point or “knee” where adoption rapidly increases. For many popular consumer electronics products and services, the inflection point tends to be at about 10 percent adoption.


That is the point at which the rate of adoption increases.
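A minimal logistic sketch of that S-shaped cumulative adoption curve. The parameters here are purely illustrative; note that a simple logistic puts its inflection at 50 percent adoption, while the consumer-electronics rule of thumb above places the knee earlier.

```python
import math

def adoption(t, k=0.5, t_mid=10):
    """Cumulative adoption share at time t, modeled as a logistic S-curve.
    k is the growth rate; t_mid is the inflection point (in years).
    Parameters are illustrative, not fitted to any real product."""
    return 1 / (1 + math.exp(-k * (t - t_mid)))

# Adoption rises slowly, inflects around t_mid, then saturates.
for year in (0, 5, 10, 15, 20):
    print(year, round(adoption(year), 2))
```

The long early gestation, the rapid middle, and the saturating tail are all visible in the printed values.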


For fixed network voice lines, the predictions broke down around 2000. And you can guess why: that was when the internet and mobility started to emerge as viable product substitutes for fixed network voice lines.




Since about 2000, fixed network line growth has stalled--lines peaked globally about 2006 and have declined since--and account growth has been driven by mobile subscriptions.





The point is that, a decade from now, we might not be tracking “mobile subscriptions,” or at least not “human user” subscriptions, to show revenue growth. That would parallel an earlier evolution in the fixed network business, where counting “access lines” has become less useful as a predictor of total revenue. The more-relevant metrics are based on “units sold,” where units can be voice, internet access, video entertainment or mobile products.

Businesses Prefer Bill Predictability Even More than Consumers

A new survey by J.D. Power suggests that mobile bill predictability is as important--if not more important--for business customers as it has been for consumers.  One caveat is that the study appears to measure user satisfaction, not necessarily the authorized payer’s satisfaction.

For business accounts, that can be a significant difference, when the employer pays for all or part of an employee’s mobile service bill.

“Just as consumer wireless customers dread getting those data overage notifications, business wireless customers prefer the predictable budgeting and unrestricted employee access that comes with unlimited data plans, even if those plans cost more,” said Peter Cunningham, Technology, Media, and Telecommunications Practice Lead at J.D. Power.

Among business customers surveyed by J.D. Power, the adoption rate of unlimited data plans is 69 percent,  compared with just 27 percent adoption in the consumer marketplace.

Among business customers who have unlimited data plans, 80 percent say they either “definitely will not” or “probably will not” switch to another carrier. That number falls to 73 percent among business users who have data allowances.

T-Mobile US is certain to tout the fact that it was the top-ranked supplier in all categories: large enterprise, small and mid-size business, as well as very-small business.



Thursday, October 26, 2017

What is Needed for Mobile Access to Become a Full Substitute for Fixed Access?

AT&T’s new Nighthawk LTE hotspot is not the only element needed to create a full mobile substitute for fixed network internet access. Gigabit speeds and pricing that is close to fixed network pricing also are required. Advanced 4G will help with the former. New 5G networks will help with the latter.

In that regard, the good news is that each new mobile network generation has helped, in terms of cost per bit.


The bad news is that a sizable gap has persisted between mobile and fixed network cost per bit, with mobile costs roughly an order of magnitude higher than fixed network costs, in terms of retail prices. Actual “cost per bit” will vary based on any end user’s actual usage per billing period.

Based on actual usage, the price discrepancy between mobile internet access and fixed network access is narrower.
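A hypothetical sketch of that effective cost-per-bit arithmetic; the plan prices and usage figures below are invented purely for illustration:

```python
def price_per_gb(monthly_price_usd, gb_used):
    """Effective retail price per gigabyte over one billing period."""
    return monthly_price_usd / gb_used

# Hypothetical plans, purely for illustration: heavy use on a fixed line
# versus light use on a mobile plan at a similar monthly price.
fixed = price_per_gb(60, 200)   # $60 fixed connection, 200 GB used
mobile = price_per_gb(50, 5)    # $50 mobile plan, 5 GB used

print(f"fixed ${fixed:.2f}/GB vs mobile ${mobile:.2f}/GB")
```

Because usage per line, not just the posted price, drives the effective rate, the retail gap narrows or widens depending on how much data a given user actually consumes.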


AI, Virtualization, Automation Essential for Telcos "Moving Up the Stack?"

With the caveat that it is always in the interest of a firm or industry segment to tout the business value, revenue growth and impact of the products those companies sell, a new white paper by Empirix essentially makes the argument that mobile business models will shift in the direction of revenue models based on artificial intelligence assisted insights, automated networks and virtualized infrastructure.

The near-term problems are revenue and cost pressures. The strategic problem is that every legacy telecom product category already has become mature, or is reaching maturity.

Revenue and cash flows of telecom companies have dropped by an average of six percent a year since 2010, according to McKinsey consultants.


At the same time, capital investment requirements are rising as higher data consumption requires investment in additional capacity. On average, the ratio of capital spending to revenues is about 15 percent for the major operators, a higher percentage than typically has been required in the past.

In a now saturated market, that puts pressure on profit margins.

To be sure, operating cost reductions will help. But, longer term, nothing firms do in terms of operating cost reduction can overcome a loss of half of current revenue every 10 years. When that happens, a firm cannot “cut its way to profitability,” as the old adage suggests.

Applying virtualization, automated network operations and AI to network and other operations will help reduce costs, but will not automatically produce additional new revenue streams. That is the harder task, as it almost certainly involves moving up the stack.

The simple fact is that strategies based on “access” or pipe services will prove insufficient to sustain service providers, as the scale of new access revenues from internet of things, for example, will fail to replace the lost revenues from legacy services.

Though many criticize AT&T for its big moves into video entertainment, both on the “pipe” and “content” segments of that business, Comcast pioneered the strategy.

As a practical matter, the moves by Comcast and AT&T into the content and associated parts of the video entertainment ecosystem illustrate the broader moves tier-one access providers will have to make in other areas such as internet of things to sustain themselves.

Wednesday, October 25, 2017

Is AT&T Quarter Suggestive of Broader Change?

One might make a couple of observations about AT&T financial results. AT&T third quarter 2017 earnings were a little lighter than analysts had expected and lower than the same quarter a year ago.

But the results are suggestive of broader trends, even though it never is a good idea to extrapolate too much from a single quarter. If one believes mobility is reaching--or already has reached--a product life cycle peak, then the pressure in the mobility segment makes sense.

If one believes that moving up the stack into applications is a necessary antidote to the coming decline of the mobility business, then even the slight dip in entertainment group operating revenue shows relative strength, compared to the other key revenue segments.

Mobile revenues were down about $800 million, year over year.

Entertainment segment revenues were off just slightly, about $100 million, year over year. Business Solutions revenue was off about $700 million, year over year.  

Operating revenue for Business Solutions of $17.1 billion was down four percent. Entertainment Group revenue of $12.65 billion was down 0.6 percent, while Consumer Mobility revenue of $7.75 billion was down 6.3 percent.

International operating revenue of  $2.1 billion was up 11.7 percent.

Some will seize on the impact of video cord cutting on the quarter's results, and that is part of the backdrop. Some of us would argue the coming Time Warner acquisition is precisely why that addition is necessary, as it moves AT&T beyond the distribution business and into the content ownership part of the ecosystem.

That will be seen as increasingly important, if the mobile business actually is about to cease driving revenue growth in the telecom business.

MIMO Will Be Deployed to Support Advanced 4G, Not 5G

Advanced 4G is the precursor and foundation for 5G, which is why “pre-5G” advances are important. Consider the use of multiple-input multiple-output antennas. Most will be put into service to support advanced 4G service, not 5G, through the early 2020s, according to ABI Research.

CBRS Rules Illustrate Old Story

Communications policy inherently is political, and every decision “in the public interest” necessarily has corresponding winners and losers. Communications policy also inevitably involves tradeoffs: between stimulating investment and promoting competition; between moving the deployment needle and connecting the least-connected, at least in open and competitive markets.

Those tradeoffs are unavoidable in oligopolistic markets, such as telecom. No matter how one looks at the data, most of the market share is held by just a few providers, whether fixed or mobile, internet access, voice or entertainment video.




So it is with the Citizens Broadband Radio Service, the new 150-MHz block of wireless capacity that will use spectrum sharing and a tripartite licensing scheme (incumbents with highest priority, then licensed secondary use, then unlicensed best-effort access).

To reach the largest number of potential users, the licensing rules would favor larger blocks of spectrum and longer license terms. To favor smaller users, the rules would instead use smaller license areas and shorter licenses. Basically, the larger geographic licenses and longer terms favor larger firms; smaller license areas and shorter terms favor smaller firms with less capital.

So the fundamental policy challenge is whether to create incentives for widescale adoption by the service providers serving most potential users, or, conversely, to provide more incentives for smaller suppliers, even if that means less or slower adoption of the sort that actually makes a difference in national statistics.

The reason is simply that smaller suppliers, collectively, serve less than 10 percent of the total customer base, and very-small providers serve only a percent or two.

To get rapid and widespread adoption, policy should create incentives for the few larger providers who serve most people. Alternate rules favoring small independents, by definition, will not “move the needle.”

Such tradeoffs are common in all telecom policy efforts. Either investment or competition; mass adoption or more adoption in rural areas; support for small firms or advantage for the few large suppliers who serve most customers.

Tuesday, October 24, 2017

Communication Standards for Industrial IoT Released

A series of recommended practices for industrial internet of things systems has been published by the Industrial Internet Consortium. The framework covers communications options for IIoT systems, and one important observation is that there are a number of choices for local connectivity, ranging from cabled Ethernet to Wi-Fi; 802.15 wireless networks to mobile connections as well as 802.16 wide area networks.

That is worth keeping in mind when evaluating predictions of the number of IoT sensors in operation in future years. It is not yet clear what the mix of connectivity use cases might be, and therefore what the implications are for suppliers of connectivity solutions.
source: www.iiconsortium.org

Monday, October 23, 2017

AI: 20 Years to Produce 10% Share of Most Industries

If artificial intelligence does become a general purpose technology (GPT) that is foundational for whole economies, it is going to take perhaps two decades for AI to affect as much as 10 percent of the economy, based on the history of GPTs. 

General purpose technologies tend to be important for economic growth because they transform how consumers and businesses do things. The issue is whether artificial intelligence will be a GPT.

The steam engine, electricity, the internal combustion engine and computers are each examples of important general purpose technologies. Each increased productivity directly, but also led to important complementary innovations.

The steam engine initially was developed to pump water from coal mines. But steam power also revolutionized sailing ship propulsion, enabled railroads and increased the power of factory machinery.

Those applications then led to innovations in supply chains and mass marketing, and to the creation of standard time, which was needed to manage railroad schedules.

Some argue AI is a GPT, which means there will be significant and multiple layers of impact.

Machine learning and applied artificial intelligence already can show operational improvements in all sorts of ways. Error rates in labeling the content of photos on ImageNet, a collection of more than 10 million images, have fallen from over 30 percent in 2010 to less than five percent in 2016 and most recently as low as 2.2 percent, according to Erik Brynjolfsson, MIT Sloan School of Management professor.


Likewise, error rates in voice recognition on the Switchboard speech recording corpus, often used to measure progress in speech recognition, have improved from 8.5 percent to 5.5 percent over the past year. The five-percent threshold is important because that is roughly the performance of humans at each of these tasks, Brynjolfsson says.

A system using deep neural networks was tested against 21 board certified dermatologists and matched their performance in diagnosing skin cancer, a development with direct implications for medical diagnosis using AI systems.

On the other hand, even if AI becomes a GPT, will we be able to measure its impact? That is less clear, as it has generally proven difficult to quantify the economic impact of other GPTs, at least in year-over-year terms.

It took 25 years after the invention of the integrated circuit for U.S. computer capital stock to reach ubiquity, for example.

Likewise, at least half of U.S. manufacturing establishments remained unelectrified until 1919, about 30 years after the shift to alternating current began.

The point is that really-fundamental technologies often take decades to reach mass adoption levels.

In some cases, specific industries could see meaningful changes in as little as a decade. In 2015, there were about 2.2 million people working in over 6,800 call centers in the United States, and hundreds of thousands more worked as home-based call center agents or in smaller sites.

Improved voice-recognition systems coupled with intelligent question-answering tools like IBM’s Watson might plausibly be able to handle 60 percent to 70 percent or more of the calls. If AI reduced the number of workers by 60 percent, it would increase U.S. labor productivity by one percent over a decade.
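The back-of-envelope arithmetic runs roughly as follows. The call-center workforce figure comes from the text; the overall U.S. employment base is an assumption added for the calculation.

```python
# Back-of-envelope version of the call-center productivity arithmetic.
# The 2.2 million figure is from the text; total U.S. employment is an
# assumed round number, used only to scale the result.
call_center_workers = 2_200_000
automation_share = 0.6                    # share of calls AI might handle
displaced = call_center_workers * automation_share

us_workforce = 150_000_000                # assumption: rough U.S. employment
productivity_gain = displaced / us_workforce

print(f"{displaced:,.0f} roles potentially automated")
print(f"~{productivity_gain:.1%} one-time labor productivity gain")
```

Spread over a decade, a gain of roughly one percent is consistent with the claim in the text.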

But it also is quite possible that massive investment in AI could fail to find correlation with higher productivity, over a decade or so.

It might well be far too early to draw conclusions, but labor productivity growth rates in a broad swath of developed economies fell in the mid-2000s and have stayed low since then, according to Brynjolfsson.

Aggregate labor productivity growth in the United States averaged only 1.3 percent per year from 2005 to 2016, less than half of the 2.8 percent annual growth rate sustained over 1995 to 2004.

Fully 28 of 29 other countries for which the OECD has compiled productivity growth data saw similar decelerations.

So some will reach pessimistic conclusions about the economic impact of AI, generally. To be sure, there are four principal candidate explanations for the discontinuity between advanced technology deployment and productivity increases: false hopes; mismeasurement; concentrated distribution and rent dissipation; and implementation and restructuring lags.

The first explanation is that new technology simply will not be as transformative as expected. The second explanation is that productivity has increased, but we are not able to measure it. One obvious example: as computing devices have gotten more powerful, their cost has decreased. We cannot quantify the qualitative gains people and organizations enjoy; we can only measure the retail prices, which are lower.

The actual use cases and benefits might come from “time saved” or “higher quality insight,” which cannot be directly quantified.

The third explanation combines concentrated distribution (benefits are reaped by a small number of firms) and rent dissipation (where everyone investing to reap gains is inefficient, as massive amounts of investment chase incrementally-smaller returns).

The final explanation is that there is a necessary lag time between disruptive technology introduction and all the other changes in business processes that allow the new technology to effectively cut costs, improve agility and create new products and business models.

Consider e-commerce, which was recognized as a major trend before 2000. In 1999, though, the actual share of retail commerce was trivial: 0.2 percent of all retail sales. Only now, after 18 years, have significant shares of retailing shifted to online channels.

In 2017, retail e-commerce might represent eight percent of total retail sales (excluding travel and event tickets).


Two decades; eight percent market share. Even e-commerce, as powerful a trend as any, has taken two decades to claim eight percent share of retail commerce.  
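The implied growth rate of that share shift, using the 0.2 percent and eight percent figures above:

```python
# Implied compound annual growth of the e-commerce share of retail,
# from the figures cited in the text (0.2% in 1999, ~8% in 2017).
share_1999 = 0.002
share_2017 = 0.08
years = 2017 - 1999

cagr = (share_2017 / share_1999) ** (1 / years) - 1
print(f"e-commerce share grew roughly {cagr:.0%} per year")
```

Even sustained growth above 20 percent a year, compounded for nearly two decades, only gets a general purpose technology to single-digit market share, which is the point.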

Something like that is likely to happen with artificial intelligence, as well. If AI really is a general purpose technology with huge ramifications, it will take decades for the full benefits to be seen.

Nokia Makes New Moves into IoT

It is possible to overplay the implications of any single partnership between firms, including a new deal between Nokia and Bosch for internet of things. That partnership joins Bosch sensors with Nokia communications solutions to create industrial IoT solutions including asset tracking, predictive maintenance and environmental monitoring.

Separately, Nokia is partnering with Amazon Web Services to commercialize IoT, with several solutions combining the features of AWS Greengrass, Amazon Machine Learning, Nokia Multi-access Edge Computing (MEC) and the Nokia IMPACT platform.

That, in turn, is part of a new emphasis by Nokia on edge computing solutions that will be foundational for new IoT use cases, spanning both low-latency and latency-tolerant requirements, and both high and minimal bandwidth needs.

Nor is it illogical that a supplier in the mobile infrastructure business would want a higher profile in IoT. Many would argue that the business value of 5G, in terms of incremental new revenue, will come precisely from IoT.

Still, it is not improper to note a shift in emphasis from the physical layer connectivity platform to the applications and solutions realm. It is not often that a major telecom infrastructure supplier has moved to supply industrial market solutions that go beyond connectivity, and embrace higher-order functions such as asset tracking.

Some might argue that the new move is a way of moving up the stack into vertical market solutions that build on Nokia’s legacy network infrastructure heritage. That never is easy, but arguably is necessary.



CBRS Will Boost Communications Service Provider Capacity: 25% of Cellular, 26% of Wi-Fi Assets

One way of quantifying the potential impact of coming capacity (150 MHz of spectrum) in the Citizens Broadband Radio Service (CBRS) is to compare that block of capacity with what currently exists in the Wi-Fi bands (2.4 GHz; 5.2 to 5.8 GHz).

As a practical matter, in the U.S. market about 75 MHz of capacity is available to use in the 2.4-GHz band, while some 500 MHz is potentially usable in the 5-GHz band.

So one way of looking at the coming CBRS capacity is that it adds spectrum equal to about 26 percent of the total Wi-Fi assets now available.

A better way of describing potential CBRS impact is to compare it with all presently licensed mobile capacity, which is in the range of 600 MHz for all providers in the market. So new CBRS capacity would boost capacity by about 25 percent of all presently-available U.S. mobile operator licensed spectrum.
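Using the approximate figures above, the percentages work out as follows:

```python
# CBRS capacity relative to existing Wi-Fi and licensed mobile spectrum,
# using the approximate totals cited in the text.
cbrs_mhz = 150
wifi_mhz = 75 + 500     # 2.4-GHz band plus usable 5-GHz spectrum
mobile_mhz = 600        # all licensed U.S. mobile spectrum, roughly

print(f"CBRS vs Wi-Fi assets: {cbrs_mhz / wifi_mhz:.0%}")    # 26%
print(f"CBRS vs licensed mobile: {cbrs_mhz / mobile_mhz:.0%}") # 25%
```

Either way the comparison is framed, a single 150-MHz allocation adds roughly a quarter of today's installed spectrum base.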


