Today’s data centers come with a new set of challenges. A recent article by Steve Gillaspy of Intel discusses many of these challenges and outlines the top 10 concerns faced by those responsible for designing, operating, and sustaining the IT and physical support infrastructure found in today’s data centers. Read on as we explore four of the five macro trends discussed by Gillaspy, how they influence the decision-making of data center managers, and the role that power infrastructure plays in mitigating the effects of these trends:
- Hyper Growth and Hyperscale
- Hyper Density
- New Workloads
- New Hardware
Hyper Growth and Hyperscale
Edge and cloud computing are seeing rampant growth, causing the build-out of new data centers both large and small, with growth coming so fast that traditional approaches to construction and management are unable to keep up.
Hyper Density
The need to minimize OPEX and CAPEX while increasing data center efficiency is driving consolidation, virtualization, and containerization on the compute side, while the growing number of both data sources and data consumers is driving demand for more storage capacity.
New Workloads
Big data, AI/ML/DL, Internet of Things (IoT) and other new types of workloads are placing increasing demands on most data centers and networks.
New Hardware
Specialized silicon is now being deployed in many data centers with the intent of improving throughput and latency in support of the new workloads: FPGAs and ASICs for AI, GPUs for AI and cryptocurrency mining. Along with new chips, there are different types of storage, processors, and interconnects to choose from and support, making it difficult to standardize on “one-size-fits-all” hardware.
Macro Trend Impacts on the Data Center
Since the year 2000, the world has seen the rise of numerous companies that have achieved phenomenal growth thanks to the expansion of the internet and the applications that run on it. Google, Facebook, Apple, Microsoft, Amazon, Uber, and countless others have seen incredible success around the world through business models that rely on ubiquitous internet service combined with rapid access to applications running in company-owned data centers they designed and built themselves. And with each additional data center they construct, these companies make successive leaps in productivity, energy efficiency, and profitability, all while managing the stresses that come from rapid growth, changing workloads, growing storage requirements, and new hardware technologies.
Choosing to build a new data center or retrofit an existing one is a risky decision for many enterprises today. Committing money to design, build, and operate a facility devoted solely to the wellbeing of IT equipment and the few people overseeing it is a capital-intensive undertaking that may not pay off. Knowing why the data center needs to be built is crucial to achieving a successful outcome. Does the company already have a successful product or service that requires the support of a data center? What is the growth rate of that product or service? Are the users or customers latency sensitive? If customers want more of the product or service, is the infrastructure that delivers it scalable? Would the company be better served building the application in the cloud and migrating it later to its own data center?
Once the decision has been made to start construction on a data center, most companies want to bring the new asset online as rapidly as possible, sometimes without knowing exactly what they are going to deploy in it until after construction is already under way. Will the next generation of processors be available in volume when the data center is ready to be outfitted? If not, can the CPUs be changed out in the servers once the newest processor generation becomes available? The drive to maximize return on investment (ROI) demands that the data center “go live” as soon as the hardware arrives, yet the goal of energy efficiency dictates that the latest (fastest) generation always be installed as soon as possible. Short lead times and flexible configurations are a must for satisfying many data center applications.
Most data center designers and managers resort to setting standards early in the conception phase of a data center for power distribution architecture (480V/415V/208V to the rack), power density (5/10/20/50kW per rack), cooling (air, liquid, immersion, containment), rack size (42U/48U/52U/etc.), lighting, and so forth. These parameters become the boundary conditions for decisions that follow, such as what equipment goes into each rack, and how much air or water cooling capacity must be delivered to the rack to enable the equipment to operate reliably.
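To illustrate how these early decisions become boundary conditions for later ones, the short Python sketch below checks a proposed rack loadout against a per-rack power-density and U-space budget. The figures and device names are hypothetical examples, not values from the white paper.

```python
# Illustrative only: checks a proposed equipment list against the rack-level
# "boundary conditions" (power density and U-space) set during design.
# All figures and device names below are hypothetical.

RACK_BUDGET = {"usable_u": 42, "power_kw": 10.0}  # e.g., a 42U rack budgeted for 10 kW

proposed_equipment = [
    # (name, height in U, nameplate power in kW)
    ("2U GPU server", 2, 1.8),
    ("2U GPU server", 2, 1.8),
    ("1U storage node", 1, 0.45),
    ("1U top-of-rack switch", 1, 0.15),
]

def check_rack(budget, equipment):
    """Return whether the loadout fits the rack, plus the totals used."""
    used_u = sum(u for _, u, _ in equipment)
    used_kw = sum(kw for _, _, kw in equipment)
    fits = used_u <= budget["usable_u"] and used_kw <= budget["power_kw"]
    return fits, used_u, used_kw

if __name__ == "__main__":
    fits, used_u, used_kw = check_rack(RACK_BUDGET, proposed_equipment)
    print(f"Rack space used: {used_u}U of {RACK_BUDGET['usable_u']}U")
    print(f"Power used: {used_kw:.2f} kW of {RACK_BUDGET['power_kw']} kW")
    print("Within boundary conditions" if fits else "Exceeds boundary conditions")
```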
Meanwhile, software applications that run in the data center are being written or tweaked daily. New versions or entirely new applications can create increased user uptake, along with new data types and new volumes of data that will have to be accommodated in the data center that is already under construction. Data from Internet of Things (IoT) devices or from driverless vehicles may suddenly be added to the mix of applications requiring support within the data center, and may in turn require new workloads such as big data analytics or artificial intelligence (AI) to be implemented.
“Today’s data centers are increasingly called upon to run much larger, more complex workloads that are often very different from one another — so the hardware requirements to run them may vary widely from workload-to-workload, and may also change over the course of a day or even an hour. For example, some workloads might need more, and some less, processing or memory capacity. Still others might require NVMe storage or special purpose processors. Furthermore, to lower TCO it might also be desirable to leverage higher-end devices across multiple workloads at different times.”
Top 10 Concerns of Data Center Managers in 2019
Design
1. “Future proofing” the design of a data center to accommodate changing hardware and application requirements that scale over time without forcing frequent “rip and replace”.
2. Figuring out how to facilitate the automation of the data center to minimize both headcount and downtime.
3. Creating a data center that can be run as hot as possible in the hot aisle while still being accessible for people to enter and work on the equipment while it is in operation.
Installation and Configuration
4. Provisioning hardware for new applications can take days or weeks, and may require numerous specialists.
Operation
5. Virtualized and containerized environments rarely exceed 50 percent average utilization, and non-virtualized data centers run at 20-30 percent.
6. Interoperability across equipment and management software from different vendors is often problematic, limiting functionality and programmability.
7. Maintaining security from external bad actors seeking to disrupt or disable the data center operation.
Upgrade and Retrofit
8. CPU upgrades often require replacement of an entire server chassis and all the resources in the server, retiring storage, power supplies, fans, and network adapters sooner than necessary.
9. Identifying where there is available power capacity within a circuit or rack to accommodate new hardware (see the sketch following this list).
10. Technicians in the data center can be slowed by the current requisition, deployment, validation, and provisioning processes when hardware fails.
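To make concern #9 concrete, here is a minimal Python sketch of the arithmetic behind finding power headroom on a branch circuit: it compares the measured load against the breaker’s continuous-duty capacity (commonly derated to 80 percent of the breaker rating in North American installations). The circuit names, ratings, and readings are hypothetical.

```python
# Hypothetical example of concern #9: finding available power capacity on a
# rack circuit. Assumes single-phase circuits and an 80% continuous-load
# derating of the breaker rating (typical for North American installations).

CONTINUOUS_DERATE = 0.8

circuits = [
    # (circuit id, voltage, breaker amps, measured load amps)
    ("Rack A1 / PDU 1 / branch 1", 208, 30, 19.4),
    ("Rack A1 / PDU 1 / branch 2", 208, 30, 23.7),
    ("Rack A2 / PDU 2 / branch 1", 120, 20, 6.2),
]

def headroom_watts(voltage, breaker_amps, load_amps):
    """Usable capacity remaining after derating, in watts (never negative)."""
    usable_amps = breaker_amps * CONTINUOUS_DERATE
    return max(0.0, (usable_amps - load_amps) * voltage)

for cid, volts, breaker, load in circuits:
    print(f"{cid}: {headroom_watts(volts, breaker, load):.0f} W of headroom")
```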
Mitigating the Pain Points
The concept of “Software-Defined Everything” (SDE) is one of the most talked-about trends in the evolution of data center design philosophy. “Software-defined networking (SDN), software-defined storage (SDS) and software-defined data center are part of a general movement towards infrastructure that decouples the bare metal that executes point data transactions from the software layer that orchestrates them.” Rather than treating compute, storage, and networking as individual elements, SDE treats infrastructure as a set of resources that are joined together through software and tailored to a specific workload. “Composable infrastructure” and “rack scale design” (disaggregated server architecture) are two approaches to achieving SDE in the data center, and they go a long way towards addressing many of the previously stated pain points.
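To illustrate the composable-infrastructure idea in the abstract, the sketch below models disaggregated pools of CPUs, memory, and storage and “composes” a logical node for a workload out of whatever is free, then returns the resources when the workload finishes. It is a toy model of the concept, not any vendor’s actual rack-scale management API.

```python
# Toy illustration of "composable infrastructure": resources live in shared
# pools and are bound to a logical node in software, per workload.
# This is a conceptual sketch, not a real rack-scale management API.

from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int
    memory_gb: int
    nvme_tb: int

@dataclass
class LogicalNode:
    name: str
    cpus: int
    memory_gb: int
    nvme_tb: int

def compose(pool: ResourcePool, name: str, cpus: int, memory_gb: int, nvme_tb: int) -> LogicalNode:
    """Carve a logical node out of the pool if enough resources are free."""
    if cpus > pool.cpus or memory_gb > pool.memory_gb or nvme_tb > pool.nvme_tb:
        raise RuntimeError(f"Insufficient pool resources for {name}")
    pool.cpus -= cpus
    pool.memory_gb -= memory_gb
    pool.nvme_tb -= nvme_tb
    return LogicalNode(name, cpus, memory_gb, nvme_tb)

def decompose(pool: ResourcePool, node: LogicalNode) -> None:
    """Return a node's resources to the pool when the workload is done."""
    pool.cpus += node.cpus
    pool.memory_gb += node.memory_gb
    pool.nvme_tb += node.nvme_tb

if __name__ == "__main__":
    rack = ResourcePool(cpus=256, memory_gb=4096, nvme_tb=100)
    ai_node = compose(rack, "ai-training", cpus=64, memory_gb=1024, nvme_tb=20)
    print(f"Composed {ai_node.name}; pool now has {rack.cpus} CPUs free")
    decompose(rack, ai_node)
    print(f"Released {ai_node.name}; pool back to {rack.cpus} CPUs")
```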
SDE assumes that all hardware is on and always available. For data centers that want to go a step further on the SDE journey by taking hardware offline (powered off) when not in use, implementing a high-density, remotely managed power distribution unit (PDU) within the IT rack provides another level of composability and control. By deploying intelligent PDUs with high outlet density and a mix of C13 and C19 outlets, physical hardware changes in the rack can easily be accommodated.
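As an example of the kind of control a remotely managed PDU enables, the following sketch switches off an idle outlet through a hypothetical REST endpoint. The URL, credentials, payload, and outlet identifier are placeholders; a real deployment would use the PDU vendor’s actual management interface (for example SNMP, Redfish, or a vendor-specific REST API).

```python
# Hedged sketch: powering off an idle outlet on a remotely managed rack PDU.
# The endpoint, payload, and outlet identifier are hypothetical placeholders;
# consult the PDU vendor's documentation for the real API (SNMP, Redfish, etc.).

import base64
import json
import urllib.request

PDU_HOST = "https://pdu-a1.example.internal"  # hypothetical PDU address
USERNAME = "admin"                            # placeholder credentials
PASSWORD = "change-me"

def set_outlet_state(outlet_id: str, state: str) -> int:
    """Ask the PDU to set one outlet to 'on' or 'off'. Returns the HTTP status."""
    url = f"{PDU_HOST}/api/outlets/{outlet_id}"   # hypothetical endpoint
    body = json.dumps({"state": state}).encode("utf-8")
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    request = urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    # Power down an outlet feeding a server the orchestrator has drained.
    status = set_outlet_state("AA13", "off")
    print(f"PDU responded with HTTP {status}")
```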
Choosing suppliers that can bring together racks, PDUs, cable management, cooling, cabling, and containment, all with short lead times, enables the data center designer and manager to make last-minute decisions about how to finish equipping the data center, helping to avoid the “rip and replace” problem that comes from buying the wrong gear too early in the provisioning process.
Source: Server Technology, “Top 10 Concerns of Data Center Managers in 2019” white paper, written by Marc Cram, CDCD