Shifting the Balance Between the Edge and the Cloud

Park Place Hardware Maintenance

Paul Mercina May 02, 2019

The stage is being set for different levels of “edginess” to complement cloud capabilities and handle tomorrow’s workloads.

A power struggle is brewing, one with the potential to take down the incumbent in the compute space: cloud technologies. Currently ascendant, the cloud faces challenges from our mobile lifestyles and the Internet of Things, and it can’t keep up.

The shortcomings of centralized cloud resources are leading some experts[1] to predict that distributed edge computing will eliminate the cloud as we know it, but such wholesale replacement of cloud technologies remains unlikely. Just as mainframes exist today, decades after being declared dead, and enterprise data centers still operate despite the exaggerated rumors of their demise, the cloud will survive.

The stage is being set, however, for different levels of “edginess” to complement cloud capabilities and handle tomorrow’s workloads. The balance of compute power is indeed shifting, but both cloud and edge will emerge stronger.

The Problem with the Cloud

The cloud has taken over the enterprise, as evidenced by the 96 percent adoption rate reported in the RightScale 2018 State of the Cloud Report.[2] So enticing are cloud solutions that one cloud is no longer sufficient for most organizations: 81 percent have a multi-cloud strategy, usually incorporating more than four different public and private clouds.

As an efficient, consolidated computing powerhouse, complete with diverse, turnkey solutions for enterprises, the cloud could absorb tomorrow’s data processing demands. The problem is, the data can’t get to the cloud.

When Light Speed Isn’t Fast Enough

Fiber optic cable carries signals at about 122,000 miles per second.[3] That’s fast, but unfortunately not fast enough for the use cases on the horizon. Augmented reality, for example, requires latency below what the human brain can detect, optimally less than 7 milliseconds.[4]

Do the math, and a fiber optic signal takes about 0.82 milliseconds to travel 100 miles.[5] Communicating from Northumberland to London and back would take nearly 5 milliseconds, and that assumes the path travelled is a straight shot and excludes all other sources of latency, from processing to network switching. Even such a short distance is too long.
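The propagation arithmetic above can be sketched in a few lines. This is a back-of-the-envelope calculation, assuming the article’s speed figure of roughly 122,000 miles per second and a straight-line Northumberland-to-London distance of about 300 miles; it ignores processing and switching delays, which only make matters worse.

```python
# Propagation delay alone, using the article's figure for light in fiber.
FIBER_SPEED_MPS = 122_000  # miles per second

def round_trip_ms(one_way_miles: float) -> float:
    """Round-trip propagation delay in milliseconds, ignoring all other latency."""
    return (2 * one_way_miles / FIBER_SPEED_MPS) * 1000

print(round(round_trip_ms(100), 2))  # 1.64 (about 0.82 ms each way)
print(round(round_trip_ms(300), 2))  # 4.92 (roughly Northumberland to London)
```

Against a 7-millisecond perceptual budget for augmented reality, propagation alone over even modest distances consumes most of the allowance.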

The fact is, achieving the coveted sub-10 millisecond latency—demanded by driverless cars, smart cities, and many other futuristic solutions—will require local compute. This is the first imperative of the edge.

Realities of Network Advancement

According to Moore’s Law, compute power doubles every two years.[6] This exponential increase outstrips the pace of network advancement. For example, 10GbE began shipping in 2007,[7] and only in 2018 were there hints of widespread 100GbE adoption.[8] If networks obeyed Moore’s Law, the tech sector would be in 640GbE territory.
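The 640GbE figure follows from compounding the doubling rate. As a rough sketch, assuming 10GbE in 2007 as the baseline and a 2019 endpoint (six two-year doubling periods):

```python
def moores_law_bandwidth(base_gbe: float, start_year: int, target_year: int) -> float:
    """Link speed if bandwidth doubled every two years, per Moore's Law."""
    doublings = (target_year - start_year) / 2
    return base_gbe * 2 ** doublings

# Six doublings from 10GbE in 2007 lands at 640GbE by 2019.
print(moores_law_bandwidth(10, 2007, 2019))  # 640.0
```

Actual shipping networks, at 100GbE, sit well below that curve, which is the gap the article is pointing at.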

Networks are also expensive, while compute is getting cheaper. Consider the much anticipated 5G roll-out. Estimates for delivering this wireless technology in South Korea alone—to a population of 51 million and with an excellent fiber backbone to build on—exceed $8 billion for just one service provider.[9]

Both physically and financially, networks cannot keep up as the world speeds toward 75 billion connected devices generating 175 zettabytes of data by 2025.[10][11] Making strategic use of available network resources, and deploying compute further afield in its place, will be the only solution. This is the second imperative of the edge.

Peeling the Onion at the Edge

Edge computing promises to both move data processing closer to the end user, thus reducing latency, and to limit the volume of data transferred over networks, preventing bottlenecks and overload. But “the edge” isn’t one solution. It’s multiple layers, from regionalized cloud services to on-site fog computing and onboard IoT device capabilities. The world’s compute needs will be spread across all of these layers.

All Roads Can’t Lead to London

A first step into the edge is the expansion by cloud providers into smaller markets. Already, the data center industry is investing in second- and third-tier cities, with regional players being acquired by or merging with larger providers.[12] Telcos may also make use of their extensive real estate portfolios in smaller markets to offer edge solutions. In the near term, moving compute from London to Manchester for an application accessed from northern England could offer edge advantages commensurate with most current technology needs.

Following close behind are likely to be telco offerings coinciding with the 5G introduction. Industry watchers anticipate the installation of micro-modular pods at the foot of cell towers and elsewhere along the wireless network. These would be mostly lights-out mini-facilities with minimal maintenance requirements.

Although the business model for edge products based out of these types of facilities remains fuzzy, enterprises can rest assured that edge capabilities will eventually be packaged for ready consumption, much like cloud services today. This means providers, including today’s CSPs, will be standing in line to help make edge computing as simple as possible, and significant enterprise uptake and an edge-directed shift in compute will ensue.

Triaging Before Transmission

IoT turns the internet on its head, with end nodes producing more data than they consume. Whereas users mostly spend time on smartphones downloading news, videos, and other media, a single self-driving car, as an IoT example, will generate terabytes of data in a single trip. The question is what to do with all the information that is produced.

For the reasons outlined above—latency and network traffic management—most of this data will be processed onboard or in a very local, wirelessly accessible facility. Only working at the edge will enable the car to “think” fast enough to stop suddenly when an obstacle appears in its path.

The vast majority of the data collected by cars will serve only ephemeral purposes, making possible the near-immediate decisions about acceleration, turning, and so on. It’s difficult to imagine a scenario in which Vauxhall will need to know one-by-one the color of every traffic light encountered on every route by every one of its driverless cars. There will, however, be certain data, generally aggregated across the trip or multiple trips, worth sending to a centralized repository for further analysis and storage. Data will be triaged at the edge—selected, consolidated, and aggregated—before transmission up the line to the cloud.
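The triage pattern described above—act on raw readings locally, send only a compact aggregate up the line—can be sketched as follows. This is a hypothetical illustration, not any vendor’s API: the `triage` function and its fields are invented for the example.

```python
from statistics import mean

def triage(readings: list, alert_threshold: float) -> dict:
    """Summarize a batch of raw sensor readings for transmission to the cloud.

    Immediate decisions (braking, line shutdown) would act on each reading
    locally at the edge; only this aggregate travels upstream.
    """
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > alert_threshold),
    }

# Four raw readings collapse into one small record for the cloud.
summary = triage([0.2, 0.9, 0.4, 1.3], alert_threshold=1.0)
print(summary)  # {'count': 4, 'mean': 0.7, 'max': 1.3, 'alerts': 1}
```

The design point is the ratio: terabytes of ephemeral readings stay at the edge, while the cloud receives only the selected, consolidated aggregates worth storing and analyzing.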

From a sheer processing perspective, the edge can be expected to handle the bulk of raw data generated by IoT sensors and devices. The industry will shift from a thin client reliant on the cloud to a fat client, or at least a fat edge, doing much of the compute work locally to steer our cars, shut down a manufacturing line when a sensor detects danger, or interpret our gameplay to deliver an appropriate augmented reality experience. But this doesn’t mean the cloud is in decline—quite the contrary.

An Edge Continuum, but the Cloud Remains Critical

What will emerge is a continuum of solutions ranging from the centralized cloud to regional data centers and telecommunications towers to facility-based fog computing and increasingly powerful edge devices themselves. Distinct workloads will be allocated to the various edge layers.

These technologies are, however, co-evolving with analytics, machine learning, and artificial intelligence. The value of data will only increase alongside its volume. Data will be triaged at the edge, but aggregate information will be passed back to the cloud for storage and higher-level processing. More sophisticated systems will be required to derive the insights organizations will seek from the billions of data points soon to be collected, and these centralized resources will represent immense compute power in their own right.

Ultimately, compute may soon be more equally balanced between the cloud and the edge, but both will grow for some time to come.

About the Author

Paul Mercina, Director, Product Management Marketing
Paul Mercina brings over 20 years of experience in IT center project management to Park Place Technologies, where he oversees the product roadmap, the growth of the services portfolio, and the end-to-end development and release of new services to the market. His work is informed by 10+ years at Diebold Nixdorf, where he worked closely with software development teams to introduce new service designs, supporting the implementation of direct operations in a number of countries across the Americas, Asia, and Europe.