Shifting the Balance Between the Edge and the Cloud, Part 1

Park Place Hardware Maintenance


Paul Mercina March 12, 2019

A power struggle is brewing in the compute space, one with the potential to take down the incumbent: cloud technologies. Currently ascendant, the cloud faces mounting challenges from our mobile lifestyles and the Internet of Things, and it cannot keep up.

The shortcomings of centralized cloud resources are leading some experts[1] to predict that distributed edge computing will eliminate the cloud as we know it, but such a wholesale replacement of cloud technologies remains unlikely. Just as mainframes exist today, decades after being declared dead, and enterprise data centers still operate despite exaggerated rumors of their demise, the cloud will survive.

The stage is being set, however, for different levels of “edginess” to complement cloud capabilities and handle tomorrow’s workloads. The balance of compute power is indeed shifting, but both cloud and edge will emerge stronger.

The Problem with the Cloud

The cloud has taken over the enterprise, as evidenced by the 96 percent adoption rate reported in the RightScale 2018 State of the Cloud Report.[2] So enticing are cloud solutions that one cloud is no longer sufficient for most organizations: 81 percent have a multi-cloud strategy, usually incorporating more than four different public and private clouds.

As an efficient, consolidated computing powerhouse, complete with diverse, turnkey solutions for enterprises, the cloud could absorb tomorrow’s data processing demands. The problem is, the data can’t get to the cloud.

When Light Speed Isn’t Fast Enough

Fiber optic cable carries signals at about 122,000 miles per second.[3] That is fast, but unfortunately not fast enough for the use cases on the horizon. Augmented reality, for example, requires latency below the threshold the human brain can detect, optimally less than 7 milliseconds.[4]

Do the math, and a fiber optic signal takes about 0.82 milliseconds to travel 100 miles.[5] Communicating from Northumberland to London and back would take nearly 5 milliseconds, assuming the path travelled is a straight shot and excluding all other sources of latency, from processing to network switching. Even such a short distance is too long.
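To make the arithmetic concrete, here is a minimal Python sketch of the propagation-delay calculation, using the 122,000-miles-per-second figure cited above and assuming roughly 300 miles between Northumberland and London; the function name and distances are illustrative, not from the original sources:

```python
# Idealized fiber propagation delay, ignoring processing and switching latency.
FIBER_SPEED_MPS = 122_000  # miles per second in optical fiber, per the figure above

def fiber_delay_ms(distance_miles: float, round_trip: bool = True) -> float:
    """Return the idealized fiber propagation delay in milliseconds."""
    one_way_ms = distance_miles / FIBER_SPEED_MPS * 1_000
    return one_way_ms * 2 if round_trip else one_way_ms

print(f"{fiber_delay_ms(100, round_trip=False):.2f} ms one way over 100 miles")  # ~0.82 ms
print(f"{fiber_delay_ms(300):.2f} ms round trip over ~300 miles")  # ~4.92 ms
```

Even this best-case figure consumes most of a 7-millisecond budget before any processing happens, which is why distance, not bandwidth, is the binding constraint here.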

The fact is, achieving the coveted sub-10 millisecond latency—demanded by driverless cars, smart cities, and many other futuristic solutions—will require local compute. This is the first imperative of the edge.

Realities of Network Advancement

According to Moore's Law, compute power doubles every two years.[6] This exponential increase outstrips the pace of network advancement. For example, 10GbE began shipping in 2007,[7] and only in 2018 are there hints of widespread 100GbE adoption.[8] If networks obeyed Moore's Law, the tech sector would be in 640GbE territory.
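A back-of-the-envelope sketch of that comparison, assuming Ethernet line rates had followed the same two-year doubling cadence from the 10GbE baseline; the helper function is hypothetical, for illustration only:

```python
# Project Ethernet line rate under a Moore's-Law-style doubling every two years,
# starting from 10GbE shipping in 2007.
def projected_gbe(start_year: int, start_gbe: float, year: int) -> float:
    doublings = (year - start_year) / 2
    return start_gbe * 2 ** doublings

print(f"{projected_gbe(2007, 10, 2019):.0f} GbE")  # 640 GbE after six doublings, versus ~100GbE in reality
```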

Networks are also expensive, while compute is getting cheaper. Consider the much-anticipated 5G roll-out. Estimates for delivering this wireless technology in South Korea alone, a country of 51 million people with an excellent fiber backbone to build on, exceed $8 billion for just one service provider.[9]

Both physically and financially, networks cannot keep up as the world speeds toward 75 billion connected devices generating 175 zettabytes of data by 2025.[10][11] Making strategic use of available network resources, and deploying compute further afield in its place, will be the only solution. This is the second imperative of the edge.
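For a rough sense of scale, a minimal sketch dividing the projected data volume evenly across the projected device count; this assumes the 175-zettabyte figure is an annual volume and ignores the huge variation in real per-device traffic:

```python
# Rough per-device averages from the 2025 projections cited above,
# treating 175 ZB as an annual volume spread evenly across 75 billion devices.
DEVICES = 75e9         # projected connected devices by 2025
DATA_BYTES = 175e21    # 175 zettabytes (1 ZB = 1e21 bytes)

per_device_per_year = DATA_BYTES / DEVICES
print(f"{per_device_per_year / 1e12:.1f} TB per device per year")      # ~2.3 TB
print(f"{per_device_per_year / 1e9 / 365:.1f} GB per device per day")  # ~6.4 GB
```

Moving multiple gigabytes per device per day across wide-area links is exactly the kind of load that argues for processing data closer to where it is generated.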

Please return for the conclusion of this blog on March 14.

About the Author

Paul Mercina, Director, Product Management Marketing
Paul Mercina brings over 20 years of experience in IT center project management to Park Place Technologies, where he oversees the product roadmap, growing the services portfolio through end-to-end development and release of new services to the market. His work is informed by more than 10 years at Diebold Nixdorf, where he worked closely with software development teams to introduce new service designs, supporting the implementation of direct operations in a number of countries across the Americas, Asia, and Europe.