- Ubiquitous connectivity has been an unattainable goal in the USA, despite hundreds of billions in investment.
- One reason is the unrealistic requirements that applications place on the network.
- High-bandwidth, high-latency networks, combined with data placement strategies such as caching, content delivery networks, and local processing, can provide an economical solution for most critical connectivity cases.
Laws of logic and nature often limit our ability to achieve what we want. This has particularly been the case for ubiquitous connectivity, a longstanding goal of the applied network research community in the U.S., for which the Government has provided USD 100 billion in investment programs, with significantly more coming from the private sector.
The Ubiquity Tradeoff is a logical law that tells us something about the relationship between the logical strength of the definition of “connected within a network” and the potential for ubiquitous deployment of that network. A formal result known as the Hourglass Theorem (Figure 1) tells us that the logically weaker the definition of connectivity is, the more ways there are to implement it. More supporting implementations mean that it will be easier to achieve some desirable characteristics, such as low cost or high reliability. The bottom line is that a logically stronger service will be less likely to become ubiquitous.
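The containment logic behind this tradeoff can be sketched in a few lines. The notation below is my own shorthand, not the formal statement of the Hourglass Theorem: write Impl(X) for the set of implementations that satisfy a service specification X.

```latex
% Shorthand sketch, not the formal Hourglass Theorem: Impl(X) is the set of
% implementations that satisfy specification X.
\[
  S \Rightarrow W
  \quad\Longrightarrow\quad
  \mathrm{Impl}(S) \subseteq \mathrm{Impl}(W)
\]
% If S is logically stronger than W (every guarantee of W follows from S),
% then anything that implements S also implements W, so weakening the
% specification can only enlarge the pool of admissible implementations.
```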

Logical strength is not always an intuitive concept. A service is logically strong if its definition, expressed in some logical framework, makes many guarantees. A service that makes fewer guarantees is logically weak. Sometimes this seems intuitive; for example, a network service that guarantees reliable delivery is stronger than a best-effort one. Similarly, a network that guarantees universal reachability between any two nodes is stronger than one that allows only partial reachability. That means that an Internet that allows Network Address Translation (NAT), which breaks direct reachability between arbitrary endpoints, is logically weaker than one that does not.
The definition of connectivity in a network specifies the minimum service that is required to support standard connected applications. Using only this level of service ensures that an application can be deployed throughout the network. The ubiquity tradeoff tells us that a strategy for enabling a network to be ubiquitous is to adopt a sufficiently weak definition of connectivity.
A typical example of strong requirements making an application less widely deployable is reliance on high-definition interactive video conferencing. An application with this requirement cannot be deployed in environments where network performance is poor or highly variable. A network that must provide the guarantees this application demands will be more challenging and expensive to implement, thereby reducing penetration.
There are other strategies for weakening the definition of connectivity. One obvious approach is to allow higher latency. Technically, even broadband Internet connectivity puts no specific bound on latency. However, service level agreements and a general requirement that “interactive protocols” must be supported have the effect of ruling out reliance on highly asynchronous data delivery mechanisms.
A recent paper I co-authored with Terry Moore, “Is Universal Broadband Service Impossible?”, examines the extent to which functionality can be achieved from a high-bandwidth, high-latency network using various data placement strategies, including caching, content delivery networks, and local processing.
High latency is incompatible with strategies focused on ultra-low latency service to meet the needs of the most demanding (and most lucrative) applications, but ultra-low latency is not an actual requirement for most applications. These include telehealth, remote work and education, entrepreneurship through platforms such as Etsy, and the exchange of agricultural information. All of these can be implemented in highly useful ways by combining high-latency communication with local storage and processing.
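To make that pattern concrete, here is a minimal sketch, entirely my own illustration rather than anything from the paper, of a cache-first, store-and-forward design: the application reads and writes against local storage immediately, and a separate sync step exchanges updates over a high-latency link whenever one happens to be available. The names (`LocalStore`, `sync`, the transport callbacks) are hypothetical.

```python
import json
import os
import time


class LocalStore:
    """Hypothetical cache-first store: the application reads and writes
    locally with no network dependency; a separate sync step handles the
    high-latency link whenever it is available."""

    def __init__(self, path="cache.json"):
        self.path = path
        self.data = {}        # last known values, served locally
        self.pending = []     # local updates not yet delivered upstream
        if os.path.exists(path):
            with open(path) as f:
                saved = json.load(f)
            self.data, self.pending = saved["data"], saved["pending"]

    def read(self, key):
        # Served from local storage: response time is disk speed, not network RTT.
        return self.data.get(key)

    def write(self, key, value):
        # Recorded locally and queued; upstream delivery may take hours.
        self.data[key] = value
        self.pending.append({"key": key, "value": value, "ts": time.time()})
        self._save()

    def sync(self, send_upstream, fetch_updates):
        # Called opportunistically when a (possibly very slow) link exists.
        # send_upstream and fetch_updates stand in for whatever transport is
        # at hand: a satellite pass, a nightly bulk transfer, a data mule.
        for update in self.pending:
            send_upstream(update)
        self.pending = []
        self.data.update(fetch_updates())
        self._save()

    def _save(self):
        with open(self.path, "w") as f:
            json.dump({"data": self.data, "pending": self.pending}, f)
```

An application such as a telehealth record viewer would call read() and write() interactively against the local copy, while sync() runs whenever the slow link comes up; the user experience runs at local speed, and only the freshness of remote data depends on the network.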
Yes, network service is better when it makes strong guarantees to end users. But sometimes we get what we need, which is not necessarily what we want.
The Big Tech companies have had a good ride on the claim that providing excellent broadband was the only acceptable solution to the world’s connectivity needs. Now they are moving on to a new panacea – Artificial Intelligence in the form of Large Language Models. Perhaps it is time to ask how we can actually serve the world’s true need for basic connectivity: less synchronous datagram delivery worldwide at truly minimal cost!
Micah D. Beck is an Associate Professor at the Department of Electrical Engineering and Computer Science, University of Tennessee.
This post has been adapted from the original, which first appeared on the Communications of the ACM Blog.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of the Internet Society.


