To some extent, the mobile connectivity industry already is moving toward a “reuse rather than replace” mode for developing next-generation networks: 6G will build on 5G as 5G was built on 4G, and 5G networks were designed to be compatible with 4G core networks, for example.
What remains unclear is the extent to which standards can be created that are more analogous to Ethernet or optical transmission, where the physical media does not have to be largely replaced to upgrade network performance.
So far, much of the talk about a future 6G platform centers on the applications and use cases it could support: integration of many other access networks or the role of the 6G network in supporting edge computing, for example.
Optical transceiver capabilities have continued to develop over time, supporting capacities that have grown from 2.5 Gbps through 10 Gbps, 100 Gbps, 200 Gbps, 400 Gbps, and 600 Gbps to 800 Gbps.
As much as connectivity executives hate the phrase “dumb pipe,” as it implies low-margin, commodity products, transparent media is what makes it possible to upgrade Ethernet and optical networks more gracefully.
Physical media changes still happen: waveguides get better, which can eventually require installing new transport media. Even so, the transport media is less of an issue for Ethernet and optical transport systems than it has been for mobile network generations.
Still, one has to ask: is the mobile network problem fundamentally the need to replace platforms every decade, or is it the revenue operators can generate after installing the new networks?
In principle, a 50-percent increase in infrastructure costs is not an issue if revenue grows 100 percent, for example. The issue, instead, is that higher infrastructure costs seem to be correlated with only slight increases in revenue.
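A quick back-of-the-envelope sketch of that arithmetic (purely hypothetical figures, chosen only to show that the ratio of revenue growth to cost growth is what matters):

```python
# Purely hypothetical figures, not operator data: compares operating margin
# when infrastructure cost rises 50 percent against two revenue outcomes.

def margin(revenue, cost):
    """Operating margin as a fraction of revenue."""
    return (revenue - cost) / revenue

base_revenue, base_cost = 100.0, 60.0   # arbitrary baseline units
new_cost = base_cost * 1.5              # 50 percent higher infrastructure cost

print(f"baseline margin:          {margin(base_revenue, base_cost):.0%}")
print(f"revenue doubles (+100%):  {margin(base_revenue * 2.0, new_cost):.0%}")
print(f"revenue grows only 5%:    {margin(base_revenue * 1.05, new_cost):.0%}")
```

With these made-up numbers, margins actually improve when revenue doubles, but collapse when revenue barely moves against the higher cost base.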
It therefore is not a surprise that mobile executives continue to look for ways to improve the business model by cutting costs or boosting revenue or both. That propels the push for fees to be paid to internet service providers by a few hyperscale app providers, for example.
Compared to fixed networks, though, mobile networks have been replaced about every decade in the digital era, precisely because computing technology improves at a Moore’s Law rate (roughly a doubling of performance every 18 months to two years).
As a practical matter, that means cheap processing allows us to do things that were not economically viable 10 years ago. Sophisticated signal processing, for example, multiplies how intensively any given amount of spectrum can be used. Commercial use of millimeter wave frequencies is now possible because the cost of processing signals is low enough to recover useful signals at distances that were not achievable in the analog and earlier digital eras.
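One way to see why millimeter wave only became practical once processing got cheap: free-space path loss rises with frequency, so a 28 GHz link loses far more signal than a 2 GHz link over the same distance and needs heavy processing (beamforming, advanced receivers) to recover it. A minimal sketch using the standard Friis free-space formula, with the distance and frequencies chosen purely for illustration:

```python
# Free-space path loss (Friis formula), assuming isotropic antennas.
# Distance and frequencies are illustrative; real links add antenna gain,
# blockage, and atmospheric absorption on top of this.
import math

def fspl_db(distance_m, frequency_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / c)

d = 200.0  # meters
for f_ghz in (2, 28, 39):
    print(f"{f_ghz:>3} GHz over {d:.0f} m: {fspl_db(d, f_ghz * 1e9):.1f} dB path loss")
```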
Additional spectrum also helps. Early spectrum allocations were limited enough that radio channels had to be “narrow,” featuring relatively little bandwidth. That matters because “wider” channels with more bandwidth are inherently more efficient.
So wider channels enhance capacity, all other things being equal. But those wider channels are only possible because additional big blocks of spectrum have been released for mobile use. In other words, unlike a terrestrial optical network, the “pipe” itself gets bigger only when governments allocate more spectrum.
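The reason wider channels carry more, all else equal, is the Shannon capacity bound, C = B·log2(1 + SNR): at a given signal-to-noise ratio, capacity scales linearly with channel bandwidth B. A minimal sketch, with channel widths chosen to roughly mirror legacy and 5G-era allocations:

```python
# Shannon capacity bound C = B * log2(1 + SNR), holding SNR fixed.
# Channel widths are illustrative, roughly spanning legacy and 5G-era sizes.
import math

def capacity_mbps(bandwidth_hz, snr_db):
    """Shannon capacity bound in Mbit/s for a given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

for bw_mhz in (5, 20, 100, 400):
    print(f"{bw_mhz:>4} MHz channel at 20 dB SNR: "
          f"{capacity_mbps(bw_mhz * 1e6, 20):8.0f} Mbps")
```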
So it has not been easy to create a “transparent” (or “dumb”) transport medium that can be capacity-upgraded simply by swapping devices at the ends of the network, as is possible on an optical or hybrid fiber coax network.
But we are approaching an era where that ability to create a transparent transport medium is possible. As that happens, standards should be crafted to allow upgrade processes that are analogous to the ways optical networks or Ethernet are upgraded: by swapping out transceivers at the network edges.
Granted, international standards bodies seem intent on creating 6G on the old model. What happens after 6G might be quite different.