At least as Samsung sees it, if 4G was intertwined with cloud computing, then 5G will be the era of edge computing. Perhaps 6G will usher in split computing.
There are several reasons why that might happen. Mobile devices used by humans, as well as machine sensors, have limited computation capability and limited battery life. A growing number of use cases also will require ultra-low or very-low latency, end to end.
So 4G was compatible with cloud computing, as latency was not generally an issue for most apps used by people on smartphones. In the 5G era, more apps used by humans, and many apps used by machines, will require much-lower latency, end to end. In the 6G era, it is possible devices, apps and use cases will benefit from the ability to compute both onboard and at some other location, simultaneously.
As in the past, computing and communications are functional substitutes: Communications can be used to provide access to computing, or computing can be used to avoid use of communications.
So as computational intensity continues to grow, one solution is to offload computation tasks from end user devices and sensors to more powerful devices or servers.
For real-time, computation-intensive tasks, that offloading requires hyper-fast data rates and extremely low latency. In the 5G era, that means edge computing.
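To make the offloading tradeoff concrete, here is a minimal sketch of how a device might weigh running a task onboard against shipping it to an edge server. The thresholds, the throughput and cycle figures, and the Task fields are illustrative assumptions, not figures from Samsung or from any standard.

```python
from dataclasses import dataclass

@dataclass
class Task:
    compute_cycles: float      # estimated CPU cycles the task needs
    payload_bytes: int         # data to ship if the task is offloaded
    latency_budget_ms: float   # end-to-end deadline

def estimate_local_ms(task: Task, device_cycles_per_ms: float) -> float:
    """Time to finish the task on the device itself."""
    return task.compute_cycles / device_cycles_per_ms

def estimate_offload_ms(task: Task, uplink_bytes_per_ms: float,
                        server_cycles_per_ms: float, rtt_ms: float) -> float:
    """Time to ship the task to an edge server, run it there, and get the result back."""
    transfer = task.payload_bytes / uplink_bytes_per_ms
    compute = task.compute_cycles / server_cycles_per_ms
    return rtt_ms + transfer + compute

def should_offload(task: Task, battery_pct: float) -> bool:
    """Offload when the edge path meets the deadline and the device is slower or low on battery."""
    local = estimate_local_ms(task, device_cycles_per_ms=2e6)
    remote = estimate_offload_ms(task, uplink_bytes_per_ms=12_500,   # ~100 Mbps uplink
                                 server_cycles_per_ms=5e7, rtt_ms=10)
    if remote > task.latency_budget_ms:
        return False                      # network path too slow; stay local
    return remote < local or battery_pct < 20.0
```

With those illustrative numbers, a task with a 50 ms budget, a 200 KB payload and half a billion cycles of work would be offloaded, since the estimated edge path (roughly 36 ms) beats both the deadline and local execution.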
As with all prior digital generations, better latency performance and higher bandwidth, by at least an order of magnitude, can be expected from 6G. As air latency approaches zero, we might have to start thinking about “negative latency,” the ability of the network and computing infrastructure to anticipate problems and prevent them from occurring.
That obviously will be a virtual concept: the latency advantage will not come from the physical network itself, but from latency issues that are avoided altogether. Samsung notes that the telecom industry's interest in multi-access edge computing rests precisely on this ability to support real-time and mission-critical functions with computing at the edge of the network.
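One way to read “negative latency” is as speculation: the infrastructure guesses what a client will need next and stages it before the request arrives, so the perceived wait can be shorter than the physical round trip. The toy cache below illustrates that idea; the predictor, the threading model and the cache policy are assumptions for illustration, not anything Samsung has specified.

```python
import threading

class SpeculativeCache:
    """Toy illustration of 'negative latency': prefetch what a predictor
    expects the client to ask for next, so the real request, when it
    arrives, is answered from local memory instead of over the network."""

    def __init__(self, fetch, predict_next):
        self.fetch = fetch                # slow call to a remote resource
        self.predict_next = predict_next  # guess the next key from the current one
        self.cache = {}
        self.lock = threading.Lock()

    def _prefetch(self, key):
        value = self.fetch(key)
        with self.lock:
            self.cache[key] = value

    def get(self, key):
        with self.lock:
            value = self.cache.pop(key, None)
        if value is None:
            value = self.fetch(key)       # misprediction: pay the full round trip
        nxt = self.predict_next(key)
        if nxt is not None:               # speculate on the next request in the background
            threading.Thread(target=self._prefetch, args=(nxt,), daemon=True).start()
        return value
```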
These days, all networks are becoming computing networks. Also, computing and communications historically have been partial substitutes for each other. Architects can substitute local computing for remote, in other words. Mainframes, onboard, client-server, cloud and edge computing use different mixes of communications and computation resources.
Edge computing, most agree, is among the hottest of computing ideas at the moment, and reduces use of communications capital by putting computing resources closer to edge devices.
But technologists at Samsung believe further distribution of computing chores is possible. They use the new term “split computing” to describe a future state where computing chores are handled partly on a device and partly at some off-device site.
In some cases a sensor might compute partially using a phone. In other cases a device might augment its own internal computing with use of a cloud resource. And in other cases a device or sensor might invoke resources from an edge computing resource.
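What “partly on a device and partly at some off-device site” might look like in code is a pipeline whose cheap, latency-sensitive stages run locally while the heavy tail runs on whatever remote resource is reachable. The split point, the run_remote stub and the resource name below are hypothetical, chosen only to illustrate the shape of the idea.

```python
# Hypothetical split of an inference pipeline across a device and the network.
# `run_remote` stands in for whatever RPC a real split computing platform exposes.

def preprocess(frame):
    """Cheap, latency-sensitive work kept on the device (e.g. crop/normalize)."""
    return [x / 255.0 for x in frame]

def run_remote(resource: str, stage: str, data):
    """Placeholder for offloading a stage to a phone, edge node, or cloud region."""
    print(f"offloading {stage} to {resource}")
    return sum(data)          # stand-in for the heavy computation's result

def classify(frame, resource="edge-node-1"):
    features = preprocess(frame)          # split point: local up to here
    return run_remote(resource, "dnn-backbone", features)

print(classify([10, 20, 30]))
```

In a real system the split point would move with connectivity and battery conditions, which is presumably the flexibility the term is meant to capture.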
Conventional distributed computing is based on a client-server model, in which the implementation of each client and server is specific to a given developer, Samsung notes.
To support devices and apps using split computing, an open source split computing platform or standard would be helpful, Samsung says.
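One way to picture such a platform is as a common interface that every compute target implements, so an app can dispatch a task the same way whether it lands on the device itself, a nearby phone, an edge node or a cloud region. The interface sketched below is purely hypothetical; it is not an existing standard or a Samsung API.

```python
from abc import ABC, abstractmethod

class ComputeTarget(ABC):
    """Hypothetical common interface a split computing platform might standardize."""

    @abstractmethod
    def capabilities(self) -> dict:
        """Advertise what this target offers (cycles/s, expected latency, and so on)."""

    @abstractmethod
    def submit(self, task_name: str, payload: bytes) -> bytes:
        """Run a named task and return its result."""

class OnDevice(ComputeTarget):
    def capabilities(self):
        return {"cycles_per_s": 2e9, "latency_ms": 0}
    def submit(self, task_name, payload):
        return payload[::-1]              # stand-in for local execution

class EdgeNode(ComputeTarget):
    def __init__(self, url: str):
        self.url = url                    # e.g. an MEC host; transport not shown
    def capabilities(self):
        return {"cycles_per_s": 5e10, "latency_ms": 10}
    def submit(self, task_name, payload):
        raise NotImplementedError("would send the task over the network")

def pick_target(targets, deadline_ms):
    """Choose the fastest target that can still meet the deadline."""
    ok = [t for t in targets if t.capabilities()["latency_ms"] <= deadline_ms]
    return max(ok, key=lambda t: t.capabilities()["cycles_per_s"]) if ok else None
```

The point of the abstraction is that the scheduling logic (pick_target) can stay the same while new classes of targets are added underneath it.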
With split computing, mobile devices can effectively achieve higher performance while extending their battery life, because heavy computation tasks are offloaded to computation resources available in the network.
You might agree that the split computing concept is in line with emerging computing and communications fabrics that increasingly operate by using any available resource. Up to this point, that has been seen most vividly in device or app use of Wi-Fi.
In the future we may see more instances of devices using any authorized and available frequency, network, tower or computing resource.