
The Shape of the Edge: Small Data Centers or Big Data Waypoints?

Edge computing seemed to be just over the horizon. Yet over the past two years, it appears to have stayed there. What is an edge data center, really?

Content Summary

Just About the Size of Things
Migrate First, Ask Questions Later
Shoehorn
The Three Faces of Edge
The Zero-Edge Shape

If an edge data center were truly a place or a thing, a veteran facilities operator told me in 2018, the people making it could tell us what its capacity would be. They could say how much the damn thing costs to run. Does it need three-phase power? Does it require its own network-attached storage? Are we supposed to stick a solar panel or a windmill next to it? How do we manage it remotely? Who manages it, anyway? Does it allow us to scale back on our core and on-premises investments?

These should be the simplest questions to answer, this fellow exclaimed with exasperation. If they can’t give you specifications, then they’re just feeding you stuff, he added. Only it wasn’t “stuff.”

The whole point of edge computing, at least initially, was to leverage the portability and lower power consumption of modern servers, so that workloads could be distributed to more optimum locations. By “more optimum” we mean utilizing less fiber and fewer connection points in the network, thus (theoretically) reducing latency. Typically, the implication has been that a more optimum location would be closer than a “core” data center to something or someone important, whether it’s the customer who uses the processed data, the source where that data is collected, or the instruments in the factory or facility where job functions are taking place.
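
To see why fewer fiber miles and fewer connection points matter, here is a back-of-envelope sketch in Python. The figures are assumptions for illustration (light in fiber covers roughly 200 km per millisecond, and each hop is assigned a guessed forwarding delay), not measurements:

```python
# Back-of-envelope latency comparison: a distant "core" region vs. a nearby
# edge site. All numbers are illustrative assumptions, not measurements.

FIBER_KM_PER_MS = 200      # light in glass travels at roughly 2/3 of c
PER_HOP_DELAY_MS = 0.5     # assumed forwarding/queuing delay per hop

def round_trip_ms(distance_km: float, hops: int) -> float:
    """Propagation plus per-hop delay, counted in both directions."""
    return 2 * (distance_km / FIBER_KM_PER_MS + hops * PER_HOP_DELAY_MS)

print(round_trip_ms(1500, 12))  # distant core region: ~27 ms round trip
print(round_trip_ms(30, 3))     # metro edge site:     ~3.3 ms round trip
```

Under those assumptions, the edge site wins by nearly an order of magnitude, which is the entire premise; whether real deployments hit those numbers is the open question this article chases.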

This article is the product of our having asked several dozen sources over the past year to define “the edge” using quantities people who manage data centers understand. Specifically, we asked: If data and workloads are, as vendors claim, “moving to the edge,” then:

  • How much capacity is being relocated?
  • Can we repurpose or disconnect core data center capacity where data and workloads moved from?
  • If it ends up costing more to manage workloads at the edge than the value they generate by being hosted there, can we move them back?

Just About the Size of Things

There is perhaps no further-out location for an edge server than at the end of a long pole. What you’re looking at, beneath the trio of transmitter antennas at the very top, are three Supermicro IP65 server boxes. They’re designed to hold servers that run on a single (1P) Intel Xeon D processor, a chip family that is astonishingly dense and, at the same time, low-power. The Xeon D-2183IT, for example, which launched in the first quarter of 2018, has 16 cores, is clocked at 2.2 GHz with a 3.0 GHz boost speed, supports up to 512 GB of DDR4 memory, and yet holds to just 100W of TDP (Intel’s rating for how much heat its cooling system must be able to dissipate).

It’s a high-performing CPU that runs very cool. Intel’s third-generation “Ice Lake” CPU architecture, which at the time of writing was expected to launch in the spring of 2021, would include a third-generation edge-class processor that would handily beat the three-year-old Xeon D-series chips.
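
Some quick arithmetic on what those ratings imply for the pole-top cluster, sketched in Python. TDP is a thermal ceiling rather than measured draw, and this ignores memory, storage, and radios, so treat the result as a rough estimate of the CPU heat those enclosures must reject:

```python
# Rough thermal/compute arithmetic for three 1P Xeon D-2183IT boxes on a pole.
CORES_PER_CPU = 16
TDP_W_PER_CPU = 100   # Intel's thermal design power rating
BOXES = 3             # one CPU per IP65 enclosure

total_cores = BOXES * CORES_PER_CPU        # 48 cores suspended mid-air
total_cpu_heat_w = BOXES * TDP_W_PER_CPU   # ~300 W of CPU heat, worst case

print(f"{total_cores} cores, ~{total_cpu_heat_w} W of CPU thermal load")
print(f"~{TDP_W_PER_CPU / CORES_PER_CPU:.2f} W per core at TDP")
```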

SYS-E403-9D-16C-IPD2

Three of these IP65 boxes make weatherproof housings for a formidable server cluster. Their mounting and housing keep the servers operable at ambient temperatures up to 122°F (50°C). There’s a small heat exchanger unit inside each chassis, along with a 300W heater to keep the server from freezing up. You read that right: it can get cold up there.

If you’re a telecommunications service provider, this is as “edge” as it gets. It’s a system that holds out the promise of providing customers with some semblance of service, should everything else in the world fail.

The IP65 boxes are built in accordance with an emerging standard for computing systems installed in the field for telecommunications, called 5G MEC (Multi-access Edge Computing for 5G Wireless). I say “emerging” because it isn’t entirely defined; for now it’s an umbrella term for a work in progress rather than a finished standard. Its governing body is the European standards group ETSI.

One reference architecture for 5G MEC was produced in June 2019 by South Korea’s SK Telecom, working in cooperation with engineers from Intel. Their objective, without any hint of evasiveness, is to plant Intel processors at the foundation of 5G MEC platforms. As SKT’s white paper explains [PDF]:

MEC provides telecom service providers the ability to deliver new, real-time services with lower latency. This lower latency is a key requirement for new revenue-generating services to enterprises or consumers. Since edge computing minimizes the amount of data to be sent to the centralized cloud, it uses network bandwidth and resources more efficiently, reducing the cost for both enterprises and operators.

The key phrase here is “to enterprises or consumers.” It’s been said before that the data center industry is as much about real estate as it is about networking and communications. No question, the precepts of edge computing are heavily rooted in location, location, location. But MEC promises to extend public cloud real estate outside of hyperscale facilities.
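
To put rough numbers on the white paper’s bandwidth claim, consider a site full of cameras whose footage is either backhauled raw to a central cloud or filtered at the edge first. Every figure in this sketch is an assumption for illustration:

```python
# Illustrative backhaul savings from edge filtering. Assumed figures only.
CAMERAS = 200
RAW_MBPS_PER_CAMERA = 4.0   # assumed bitrate of one 1080p stream
EVENT_FRACTION = 0.02       # assume ~2% of footage is worth forwarding

raw_mbps = CAMERAS * RAW_MBPS_PER_CAMERA
filtered_mbps = raw_mbps * EVENT_FRACTION

print(f"raw backhaul: {raw_mbps:.0f} Mbps")        # 800 Mbps
print(f"edge-filtered: {filtered_mbps:.0f} Mbps")  # 16 Mbps, a 50x reduction
```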

Granting that, it’s fair to ask how anybody expects to shoehorn a relocated chunk of the cloud into a server cluster perched atop the modern equivalent of a telephone pole. What’s the value proposition for enterprises and consumers bunking up with telcos inside weatherproof plastic boxes, mounted with very expensive servers and suspended 49 feet above the ground?

Migrate First, Ask Questions Later

“It’s not just about enabling a compute resource at the edge,” remarked Thierry Sender, who directs edge network product strategy for Verizon. “It’s what our customers are looking to do with that compute resource, and the solutions that are going to be very relevant for them.”

Verizon is banking on distributed computing being too difficult for most general enterprises to want to tackle. The whole point of the cloud migration, many telcos believe (or at least believed when it all started), was that managing on-premises infrastructure had proven hard enough. They’re betting on enterprises being able to explicitly identify which applications would benefit most from being stationed on compute resources adjacent to the points in the network where data tends to be gathered, so that the data can be processed before it has to be transferred over the network in bulk.

Then, rather than imagining capacity being lifted from the shoulders of their existing resources, be they on-premises or hosted in colocation facilities, these enterprises would migrate their resources on a per-application basis. The great cloud migration, such as it was, was presumed to have taken place one app at a time, like creatures scaling the ramp of a great ark. Evidently, if scriptures are to be trusted, while the dimensions of the ark came into question, volume never did. Such has been the mindset for those planning the great edge migration.

“The way we’ve architected these 5G Edge locations, and the size of the locations, is [so] the conversation on capacity never comes into the discussion with anybody,” Sender told us. “At no point do we ever tell anyone they have to make trade-offs.”

In late 2019, AWS began deploying smaller chunks of its cloud infrastructure in strategic areas for the beginning of what it called its Wavelength service. Verizon has been its principal partner in that effort. AWS markets Wavelength to customers who are already hosting their applications in its Virtual Private Cloud but could benefit from their raw, unprocessed data not having to be trucked across the globe before it can be processed in a hyperscale AWS data center. When a Verizon central office happens to be one of the data’s landing zones, Wavelength enables that data to be serviced right there, before it has to be moved any further.
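
For a sense of the mechanics, here is a minimal sketch of how a developer might discover Wavelength Zones with boto3. The zone-type filter and opt-in status are real EC2 API features; the region choice is an assumption for illustration:

```python
# Minimal sketch: list AWS Wavelength Zones visible from a parent region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)

for zone in resp["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
# Wavelength Zones are opt-in; a workload lands there by placing a VPC
# subnet in one of these zones and routing through a carrier gateway.
```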

There’s a certain sensibility in the notion of moving cloud-based functionality closer to the customer, meeting somewhere halfway, in a kind of mutual territory that both the provider and the enterprise may jointly consider an “edge.” But that sensibility, importantly, presumes that there’s not a more direct route. Given that cloud data centers, with their dedicated fiber, have become the most well-connected facilities of any kind on the planet, it bears asking: Why bypass them? What’s to be gained by making the network take a less direct route?

Shoehorn

“For different industry verticals, the edge looks a little bit different,” remarked VMware director of product management for Tanzu Michael Michael, who is known to his friends and colleagues simply as M2. “When you’re a retail edge customer, when you’re a networking provider, when you’re a manufacturing plant or a company that does distribution, you look at the edge from the lens of, what are really the workloads from the cloud-native ecosystem that you want to run at the edge?”

Speaking at the “cloud-native ecosystem’s” most important conference, KubeCon, M2 told attendees that VMware has begun dividing its map of the edge into three stripes:

  • At the thin edge, there may be Internet-of-Things devices brought together by low-power hubs, serviced perhaps by a virtual slice of a single server (i.e., a fraction of a CPU and a portion of memory).
  • Along the medium edge, you’ll find clusters of servers usually occupying no greater than two racks, where there may be just enough capacity to manage machine learning workloads on critical needs data — for example, scanning live surveillance camera video for signs of nefarious activity.
  • At the thick edge, compute power and capacity are essentially the same as for a regional data center facility. In fact, existing data centers that are smaller than hyper-scale but still bigger than a breadbox may already be edge facilities.
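
One way to make those three stripes concrete is as a small taxonomy in code. This is a sketch based on the descriptions above, not an official VMware schema:

```python
# A sketch of VMware's three edge "stripes" as a simple taxonomy.
from dataclasses import dataclass

@dataclass
class EdgeTier:
    name: str
    footprint: str        # what a deployment roughly looks like
    example_workload: str

TIERS = [
    EdgeTier("thin",   "a virtual slice of a single server", "IoT hub aggregation"),
    EdgeTier("medium", "a cluster of up to two racks",       "ML on live camera feeds"),
    EdgeTier("thick",  "a regional-class data center",       "general colo workloads"),
]

for tier in TIERS:
    print(f"{tier.name:>6} edge: {tier.footprint} (e.g., {tier.example_workload})")
```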

“Anything outside the core data center for us is the edge,” M2 said in a telling statement. From there, as VMware perceives them, the three categories take on service profiles contingent upon the availability of human effort to service them. “What kinds of personnel would you have at that location? Would there be someone who could actually service it? Or would you be looking at remote servicing, administration, monitoring, management? When you look at that, you start thinking about, okay, now we’ve figured out the types of apps. How much of the compute are you going to have at the edge? And what are some of the restrictions that exist with edge — latency, bandwidth, connectivity?”

“The edge is a geo-caching architecture,” said Vijoy Pandey, Cisco’s CTO for cloud and distributed systems. “You have these centralized clouds. And I know that’s a wrong term because a cloud is really a distributed system, but logically, they’re all centralized within these providers. Then you have edges that are reaching further and further out.

“A data center could be construed as an edge. A manufacturing floor could be construed as an edge. A mobile device in the hand is an edge. A [Cisco] Meraki camera is an edge. All of these are edges. But the way Cisco is thinking about this is, the key attribute here is the cost of handling data — the cost of pushing data all the way back and forth to your centralized cloud, because that gets very expensive, very fast.”

If you’ll notice, M2’s definition of “edge” is not differentiated from “cloud.” Rather, it’s an extension of the software-based, software-oriented workload deployment platform put forth by cloud service providers. Google Cloud, Microsoft Azure, and AWS have all come to embrace a method of workload deployment, based around the Kubernetes orchestrator, that cements these providers’ place as the platform provider. The very need for an edge somewhere, in some form, is evidence that the public cloud is not a complete substitute for enterprises’ own servers and their facilities. At a certain point, the virtual infrastructure can’t take the place of physical infrastructure.
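
A minimal sketch of what that platform-centric view looks like in practice: to Kubernetes, an edge site is just another set of nodes, distinguished only by labels. The label key below is a hypothetical convention, not a Kubernetes standard:

```python
# List the nodes a cluster operator has labeled as "edge" capacity.
from kubernetes import client, config

config.load_kube_config()   # assumes a reachable cluster and a kubeconfig
v1 = client.CoreV1Api()

edge_nodes = v1.list_node(label_selector="topology.example.com/tier=edge")
for node in edge_nodes.items:
    alloc = node.status.allocatable  # what the scheduler may actually place
    print(node.metadata.name, alloc["cpu"], alloc["memory"])
```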

Which makes the whole “cloud-native” metaphor suddenly unseemly, amid workloads being moved back from the public cloud and into enterprise-owned and possibly enterprise-operated facilities. If edge architecture has its way, workloads would be distributed from the cloud to the points of processing in customer facilities. The public cloud and its network would replace the software supply chain.

All that may be convenient for vendors and service providers. But it’s a predicament for data center managers and operators, for whom software has never been the currency of their supply chains.

“I would like the industry to come to, like, some consensus about the terminology,” declared Cindy Xing, principal software engineering manager at Microsoft, also speaking at the recent KubeCon. “In my mind, the edge is differentiated from the cloud. It’s where the intelligence happens, close to where the data is originally generated.”

We’ve climbed down from atop the pole pretty fast already. VMware’s and Cisco’s views of edge territory may both be predicated on the phrase “everything but.” What they have in common is the supposition that the total length of data communication in a critical computing process may be minimized if processors are moved someplace other than what originally may have been the optimum locations for their operators. Keep that theory in mind.

The Three Faces of Edge

The presumption, at least for now, is that any computing device stationed along what anyone could justifiably call an edge operates under more constraints than a server in a standard data center with surpluses of power and air conditioning. The latter offers redundant power and sufficient backups; an edge location is more like a campsite, with none of those luxuries. At issue is whether those constraints came with the location or were for some reason or matter of principle designed into the edge system.

“There is already a constrained environment in place, especially in retail, banking, and all of these branch office environments,” remarked Cisco’s Pandey. “Now, if you are building a new car manufacturing facility, then you have a whole lot of variables you can play with. In which case, you’d better start with what is it that you’re looking for?”

The typical data center is purpose-built. It’s a facility designed to accommodate rows of standardized racks with explicitly engineered cooling. By contrast, Pandey paints a new picture of what we would currently consider an edge computing environment, though that may change before long. In this picture, the edge is an environment of any size (not just small) whose architecture, operating conditions, and configuration are what you inherit once you’ve committed to its location.

In that situation, the CTO suggests, you start with your own constraints. The shape of this edge may not exceed these boundaries. From there, you determine the requirements of the workloads you intend for these facilities to host. Then you build out a cluster of servers that suits those minimum requirements.
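
Here is a sketch of that constraints-first exercise in Python: start from the power and rack space the site hands you, then see how many servers the tighter constraint allows. All figures are assumptions for illustration:

```python
# Constraints-first sizing: the site dictates the envelope, not the workload.
SITE_POWER_KW = 3.6      # assumed available IT power at the inherited site
SITE_RACK_UNITS = 34     # assumed usable rack space
SERVER_DRAW_KW = 0.35    # assumed per-server draw under load
SERVER_HEIGHT_U = 1

def max_servers(power_kw: float, rack_units: int) -> int:
    by_power = int(power_kw / SERVER_DRAW_KW)
    by_space = rack_units // SERVER_HEIGHT_U
    return min(by_power, by_space)   # the tighter constraint wins

print(max_servers(SITE_POWER_KW, SITE_RACK_UNITS))
# 10 servers: this site is power-bound long before it runs out of space.
```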

From the perspective of a core data center operator, however, this is backward. No large data center facility was ever constructed around workloads. Maybe one can pre-configure servers that are already installed in racks, but no formula equates workloads with kilowatts. Sure, there are compartments in some hyper-scale facilities for general-capacity and high-performance workloads, but these are what some hyperscalers have called “buckets” rather than architectural distinctions. Hyperscaling made it convenient to add kilowatts once workload capacity expanded. But that’s like excavating more capacity for an existing reservoir: People tend not to ask what all that new water will be used for.

In the VMware model M2 shared, computing capacity isn’t measured using objective metrics such as kilowatts but in terms of the business value provided by the more nebulous measure of workloads. But once these workloads are envisioned in their new facilities, they may then be re-evaluated in terms of their capacity and resource requirements in their edge locales, and how much those requirements translate directly or indirectly into expenses.

So, what are we talking about, really? Do VMware’s three edge thicknesses truly exist in real-world deployments? You’ve seen the thinnest edge already: three well-optimized, weather-guarded boxes at the top of a pole, shielding dense, high-performance processors. Back when 5G Wireless was originally proposed, its stated purpose was to take radio network equipment off of towers, replace it with virtual machines hosted in hybrid cloud data centers, and in the process reduce operating expenses by more than half. The cloud (in this case, a cloud services platform hosted inside telcos’ existing central offices) was supposed to be the solution. That flip-flopped once it was determined that AI workloads required large data tables to be staged in memory — which, if we’re truly being practical, should be located no more than a handspan from the processor.

Certainly, if we’re not in the telecommunications business, then we may have options other than hosting our workloads 49 feet in the air. We’ve seen micro data centers small enough to prop your feet on, hosted in locations their designers also call the edge. Schneider Electric’s “coffee table” micro data center enclosure (its C-Series Mini Soundproof 8U cabinet), pictured below, is rated for 0.6kW of IT capacity and 0.8kW of cooling capacity. Its Large Soundproof model has space for 34U of equipment and is rated for 3.6kW of IT capacity and another 3.6kW of cooling capacity. These cabinets accommodate the power and cooling (including airflow) that critical-needs computing equipment requires in locations that were built for purposes other than computing — supermarkets, for example.

Schneider Electric C-series μPC cabinets. [Courtesy Schneider Electric]

At the opposite end of the spectrum, an edge data center may look more like any other data center, except perhaps not huge. EdgeConneX’s DEN01 facility just outside of Denver offers 29,800 square feet of space and support for about 20kW of power per colocated cabinet. It’s a colo, and it doesn’t require special-purpose servers. Its location is called “the edge,” but, as EdgeConneX CEO Randy Brouckman told us, there may be at least two classes of these three types of edges.

“What the edge doesn’t do is displace the core,” said Brouckman. “And what the far edge doesn’t do is displace the edge. It just keeps adding the right processing, storage, compute, and decision making at the right place.” When EdgeConneX began doing business in 2013, “the right place” was small facilities adjacent to telcos’ transmitter towers. But then the company expanded rapidly by building multiple content delivery network (CDN) facilities in metropolitan areas, guided mainly by the need to put distribution capacity for high volumes of data (mostly video) closer to end customers. From his perspective, EdgeConneX has built out a kind of “edge before the edge,” one that pushes the more extreme deployment situations further out into what he semi-jokingly called the “far edge.”

In between the perches of cell towers and repurposed commercial warehouses in the suburbs are prefabricated modular (PFM) data centers like Vertiv’s SmartMod (pictured below). It’s a physical chassis not unlike a shipping container, capable of supporting multi-rack configurations from 20kW to 85kW of IT power load. And it contains its own battery backup, smoke detection, fire suppression, and cooling, each matched to the chassis’ IT capacity.

EdgeConneX DEN01, a 29,800-square-foot facility outside of Denver. [Courtesy EdgeConneX]

There’s an argument that anyone who believes all these form factors constitute equivalent options for deployment at the edge may be overplaying their hand. But let’s stop calling it by its marketing name for a moment and look at it from an engineer’s perspective. This is the ultimate dream of distributed computing. At last, we have form factors ranging from as small as a reporter’s notepad to as large as a shipping container, for placing all manner of workloads exactly where they need to be. One branch office may need one of those Schneider cabinets that would befit a C. S. Lewis novel; another may require a Vertiv PFM to be lowered from a crane and dropped in place outside the parking lot. It’s like deploying defensive force at the platoon level instead of always in regiments.

But here comes that theory I asked you to remember earlier. If a regiment were as portable an asset as a platoon, all this discussion may be for naught. This may be why Digital Realty CTO Chris Sharp is smiling.

“It’s all about the horizons we’re speaking in,” remarked Sharp, speaking with Data Center Knowledge. “Quite frankly, when we see the edge, it’s going to be shaped by spectrum and workload requirements.

“I firmly believe in the edge. The edge is absolutely going to create a shift in traffic and a shift in requirements.” The objective of 5G engineering, he argued, has already shifted from marshaling voice traffic to providing the data backhaul necessary for delivering data from IoT devices. “I’m still firmly a believer that that is required to support a workload which, quite frankly, is not here today. It is not a material piece. There are a lot of companies that are absolutely going to strand capital because they’re not following both elements of how that edge is going to mature over time.”

The fact that fiber and wireless spectrum are both very much in demand is evidence that both workloads and data are in motion. It therefore makes little sense to base one’s distributed data center campus deployment plan on the expectation of data somehow coming to rest someplace, either atop a tower or in some suburb, for a workload to come to rest beside it and go to work. Sharp has called data the new “center of gravity.” While that may be true, it’s not a black hole.

“I very rarely run into a use case where the data is an island,” he said. “Even these smart factories, as much as they create petabytes of data, there’s still a deluge of data that goes back to Corporate or other manufacturing partners. Same with autonomous vehicles: the terabits of traffic based upon your driving experience are all transacted on in the car; only about a couple of percentage points of that data come back to a more centralized location.”

The Zero-Edge Shape

We draw diagrams of networks with cores and edges and we assume that connectivity is a matter of moving IT assets through pipelines from place to place. We take snapshots along the way and we assume that IT has “moved from the core to the edge.” And we talk about moving toward a workload-focused model of networking and empowering software developers to bring that about.

What we fail to recognize is that developers are working from a completely different map. On the dev map, all that network movement is automated. There is one staging ground for workloads, all of it being virtual and abstracted from infrastructure. From their vantage point, it’s the workloads that stay in place and the network that is moving. We like to say lofty phrases like “software enables the future of networking.” What we overlook is the fact that the application doesn’t give a damn about the network. To the extent that an application has to take network topology, availability, or latency into account, the network is not doing its job.

Vertiv’s SmartMod pre-fabricated modular data center — a truck-sized Tier-III facility. [Courtesy Vertiv]

Our original questions included asking how much capacity we have to move from the core to the edge to enable the edge — wherever that is — to do its job. The only way we can resolve that question from an engineering standpoint is to first conceive the entire network the way applications perceive it: as all one place, without layers, without a core or an edge. It should be the set of all assets worldwide that collectively provide workloads with reliable infrastructure, consumable resources, minimal latency, and optimal security, given that the consumers of those assets are all in motion relative to the workload.

The next step in edge evolution and the first genuine look we’ll get at its true shape will come when we stop perceiving data at rest, stop imagining workloads at rest, and stop provisioning business transactions in kilowatts. Everything is an edge and nothing is an edge. It’s all relative.

Today’s IT departments must respond to a wide range of demands from many groups within an enterprise. New technologies are being developed and deployed while entire companies move to completely digital infrastructure. As IoT technologies and the need for computing near the edge grow in importance, we need to rethink what a data center really consists of.

A wide range of servers and software products can enable organizations of all sizes to meet their business goals. But picking a system that suits both your needs and the environment where it will reside can be tricky. A quick rundown of the options:

Edge and IoT

The infrastructure needed at the edge requires quite different components from those in a data center. Servers that collect and filter sensor data, and those that reside as part of telco infrastructure, are very different from the types of systems installed in air-conditioned data centers.

Systems that live at the edge need to withstand a range of harsh outdoor environments, keep performing after earthquakes, and even potentially withstand acts of vandalism. In most cases, the systems must be passively cooled (no fans) and may have to run on very little power. Choosing the right type of equipment for these physical environments is critical; NEBS Level 3 certification is an excellent way to identify equipment that can handle them.

Distributed Data Centers

Not all data centers consume megawatts of power and house thousands of servers and storage systems. Intermediate systems may need to operate in a controlled environment, but without the resiliency of a large-scale data center.

Full-Scale Data Center

Massive data centers can handle a wide range of servers, storage, and networking systems. Various form factors may be used depending on the workloads. Examples include:

  • High-density computing systems such as blades enable workloads like HPC and data analytics.
  • Multiple GPU systems are excellent choices for Artificial Intelligence and Machine Learning.
  • Multi-CPU systems housed in the same enclosure may benefit some workloads, while traditional enterprise workloads may use low-cost single socket systems.

Storage Choices

Traditional hard disk drives (HDDs) are being replaced by solid-state drives (SSDs), which can access data significantly faster. Hot and cold data may be sent to, and stored on, different types of devices. Newer technologies, such as persistent memory, allow for large increases in memory capacity at lower price points than DRAM.
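
A toy sketch of how such hot/cold placement might be automated. The access-rate thresholds are arbitrary assumptions, not vendor guidance:

```python
# Toy data-tiering policy across the device types described above.
def place(accesses_per_day: int) -> str:
    if accesses_per_day > 1000:
        return "persistent memory / DRAM"  # hottest working set
    if accesses_per_day > 10:
        return "SSD"                       # hot data, fast random access
    return "HDD"                           # cold data, capacity-optimized

for rate in (5000, 50, 2):
    print(f"{rate:>5} accesses/day -> {place(rate)}")
```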

Software Ecosystems

Whether it’s a single server at the edge or a massive cloud data center, software plays a significant role in implementing server and storage infrastructure. Beyond the expected vendor-supplied software that monitors and controls servers at the rack level, solutions that combine right-sized hardware with the underlying software stack are critically important.

Because they are based on industry-standard CPUs, operating systems, and defined interfaces, open systems allow a significant catalog of software to be pre-installed or loaded onto them. Many end customers require the testing and installation of a specific software stack, which simplifies the bring-up of servers and storage systems and results in a faster time to production.

Server and storage systems that respond quickly and are right-sized for varied IT requirements are the wave of the future. CPU performance, storage capacity, and power demands will continue to be critical elements that can make or break an efficient, optimized solution. While a comprehensive product range from an OEM supplier may seem daunting at first, defining end-user requirements and determining the right solution will benefit everyone in the end.

Alex Lim is a certified IT Technical Support Architect with over 15 years of experience in designing, implementing, and troubleshooting complex IT systems and networks. He has worked for leading IT companies, such as Microsoft, IBM, and Cisco, providing technical support and solutions to clients across various industries and sectors. Alex has a bachelor’s degree in computer science from the National University of Singapore and a master’s degree in information security from the Massachusetts Institute of Technology. He is also the author of several best-selling books on IT technical support, such as The IT Technical Support Handbook and Troubleshooting IT Systems and Networks. Alex lives in Bandar, Johore, Malaysia with his wife and two children. You can reach him at [email protected].
