
Best Practices for Green Computing in Data Centers to Maximize Energy-efficiency and Sustainability

In light of the increasing urgency of climate change, incorporating sustainability best practices in your data center can significantly enhance its eco-friendliness. This not only benefits the environment but also serves as a valuable advantage for your business: organizations around the world are realizing that reducing their environmental footprint is good for business as well as for the planet. This article explores best practices that IT administrators and data center operators can adopt to reduce the energy used to operate a modern data center.

From the location of a data center down to the servers chosen to run business-critical applications, service level agreements can be kept while reducing environmental impact, and the Total Cost to the Environment (TCE) of a data center can be lowered.


To enhance the environmental sustainability of a data center, the initial step is to conduct a thorough analysis of power and other resource usage within the facility. Subsequently, it is essential to identify and implement modifications that can lead to a reduction in the consumption levels. This can involve a range of measures, such as the replacement of obsolete and energy-inefficient technology assets with newer ones or collaborating with ecologically-conscious IT vendors to establish a sustainable data center.

Data centers consume a significant amount of energy; the U.S. Department of Energy reports consumption levels of 10 to 50 times that of a typical office building with the same floor space, and these facilities account for 2% of the nation’s total electricity use. According to a September 2022 report by the International Energy Agency (IEA), data centers accounted for up to 1.3% of the world’s electricity consumption in 2021. Furthermore, data centers and data transmission networks together contributed 0.9% of all energy-related greenhouse gas emissions in 2020, as reported by the IEA.

Data center owners face growing pressure to proactively address their carbon emissions, both because of escalating global concern about climate change and because corporate ESG initiatives are increasingly a critical evaluation criterion for customers, employees, and investors. Employing green computing practices to decrease the environmental impact of data centers is not only environmentally responsible but also financially beneficial: it minimizes energy and IT expenses while contributing to broader corporate sustainability objectives.

In today’s business landscape, constructing environmentally sustainable data centers has become a crucial priority for organizations across diverse industries and regions. Incorporating green computing practices is essential for enhancing energy efficiency and promoting long-term sustainability. The following best practices and strategies for developing and managing an energy-efficient data center will minimize operational expenditures, benefiting not only the environment but also the business.


Globally, data centers use at least 200 terawatt-hours (TWh) of electricity per year (about 2% of global electricity use), and growth models project that share rising to between 4% and 8% by 2030, depending on the source. Reducing electricity usage in the data center results in less environmental impact and lower costs through OPEX reductions. With the expectation that climate change will continue to worsen, now is the time to take action on reducing the Total Cost to the Environment of data centers. Whether a hyperscale data center, a co-location data center, or a data center physically located within an enterprise, there are a number of opportunities to reduce the effect on the environment and increase sustainability. According to one study, there were over 700 hyperscale data centers worldwide at the end of 2021, with an estimated 1,200 by 2026. While demand for digital services has been increasing at a much higher rate than energy consumption, thanks to increased server efficiencies, data center operators and designers need to implement practices that reduce energy demand to help governments and businesses reach environmental goals. From the location of a data center to the choice of microprocessors, there are numerous places where data center designers, operators, and end users can help reduce the data center’s environmental impact.

Projected electricity generation worldwide from 2020 to 2050, by energy source (in 1,000 terawatt-hours)

In a recent worldwide survey of data center operators and decision-makers, over 77% of respondents stated that the environmental impacts of their data centers were very important to their organization. This confirms discussions from past years: the enormous amount of electricity needed to operate data centers remains a primary concern among operators and needs to be reduced.

Are the environmental impacts of your data center(s) important to your organization?

In some regions, about 80% of electricity is currently produced by burning fossil fuels (coal, gas, oil), although this varies by geography. Data centers still draw substantial amounts of grid power generated from fossil fuels, even though some data centers, and the corporations that use them, are transitioning to renewable sources or have targets to reduce fossil fuel consumption. While the world is transitioning to renewable energy, a significant amount of electricity will continue to be generated from fossil fuels.

To “green” a data center, a number of actions can be taken, even once a data center is up and running. These actions range from purchasing green (renewable) energy to planning ahead and purchasing the most efficient servers available for the SLAs of the required workloads. This paper explores best practices that can be implemented and that have already been shown to be effective in reducing power usage and improving the Power Usage Effectiveness (PUE) of a data center.

Several considerations will reduce environmental impacts when creating or refreshing a data center. Worldwide, 93% of the survey respondents stated that their data center’s environmental impacts were either very important or of secondary importance.

From the survey, no region was below 76% for “Yes, very important” when looking at the importance of environmental impacts of data centers from different geographies.

Best practices to reduce the environmental impact of your data center

We present a list of recommended best practices that can significantly reduce the environmental impact of data centers. Adopting these suggestions can lead to lower operating expenses and a measurable decrease in carbon footprint. To optimize energy efficiency in your data center, consider implementing the following measures:


Best Practice 1: Begin by tracking your base energy usage to identify areas of improvement.

To effectively manage your data center’s power usage, it is essential to obtain a comprehensive understanding of current energy consumption levels. Begin by monitoring overall electricity usage and subsequently analyze the data to forecast future energy requirements. A recommended approach is to break down energy usage into categories, such as HVAC, server, infrastructure, network, and storage consumption. With this knowledge, you can identify opportunities to enhance energy efficiency through improved power management and data center modifications.
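The category breakdown described above can be sketched in a few lines; all meter readings and category names here are invented for illustration, and real figures would come from submetering or a DCIM tool:

```python
# Hypothetical sketch: break metered monthly energy (kWh) into the
# categories suggested above and report each category's share of the
# total. The readings are illustrative assumptions, not real data.

monthly_kwh = {
    "HVAC": 42_000,
    "servers": 61_500,
    "network": 4_800,
    "storage": 7_200,
    "other_infrastructure": 6_500,
}

total = sum(monthly_kwh.values())
for category, kwh in sorted(monthly_kwh.items(), key=lambda kv: -kv[1]):
    print(f"{category:>22}: {kwh:>8,} kWh ({kwh / total:6.1%})")
```

A breakdown like this makes it immediately visible which category dominates and where an efficiency project would pay off first.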


Best Practice 2: Right-size your servers to ensure they are operating at optimal capacity.

Maintaining a constant operation of all servers in your data center can result in underutilization, leading to higher energy consumption than necessary. Certain servers may only process requests during specific times of the day, while some may not serve any purpose at all. By using server monitoring tools such as Zabbix, Netreo, and Paessler PRTG Network Monitor, system administrators can track utilization and identify functions that can be consolidated onto fewer machines. Virtualizing some servers can further reduce their physical footprint, and decommissioning others can lead to significant energy savings.
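The consolidation triage can be sketched as follows; the hostnames, utilization figures, and threshold are assumptions, and real numbers would be exported from a monitoring tool such as Zabbix or PRTG:

```python
# Illustrative sketch (not tied to any specific monitoring tool): flag
# servers whose long-term average CPU utilization falls below a
# threshold as candidates for consolidation or decommissioning.

avg_cpu_utilization = {      # percent, 30-day average (assumed data)
    "web-01": 61.0,
    "web-02": 8.5,
    "batch-01": 3.2,
    "db-01": 47.0,
}

CONSOLIDATION_THRESHOLD = 10.0   # percent; a site-specific assumption

candidates = [host for host, util in avg_cpu_utilization.items()
              if util < CONSOLIDATION_THRESHOLD]
print("Consolidation candidates:", sorted(candidates))
```

Hosts flagged this way would then be reviewed for virtualization onto shared hardware or outright decommissioning.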

Best Practice 3: Modify the temperature and humidity settings to reduce energy consumption.

Modern data centers are equipped with HVAC systems that are typically designed to provide more cooling capacity than necessary. However, with the advancement of newer data center assets, it is possible to operate them safely at higher temperatures. This can result in a reduction of HVAC load, which can in turn lead to cost savings. While it is important to maintain the right temperature and humidity levels to prevent damage to IT equipment, it is advisable to assess and calculate your data center’s cooling requirements before making any adjustments to the thermostat.

Best Practice 4: Rearrange your data center to minimize hotspots and improve airflow.

Optimizing the efficiency of your data center can be achieved by strategically rearranging it based on energy consumption and temperature requirements. Incorporating smart configurations, such as the hot and cold aisle layout, can effectively group warmer assets together and capitalize on HVAC vent placement.

To implement such configurations, it is crucial to have a comprehensive understanding of the intake and outtake vent locations within the facility, allowing for the placement of assets in appropriate zones. Additionally, by placing supplementary cooling units in hotter zones, overall electricity costs and the strain on the HVAC system can be significantly reduced.

Best Practice 5: Replace older assets with more efficient ones to reduce energy waste.

Legacy data center assets can consume more power, generate higher heat levels and have lower physical tolerances compared to newer ones. The latest servers, switches, racks, and HVAC technologies come equipped with energy-efficient components and processors. It is advisable to consider installing these new assets when it is appropriate for your data center, such as during the equipment end-of-life and sunset processes or parts replacement and maintenance. Additionally, replacing physical servers with virtual ones or transitioning resources to the cloud can help reduce the number of physical technologies in use.


Best Practice 6: Invest in smart facilities management tools to monitor and optimize energy usage.

Effective management of IT services necessitates comprehensive data collection and storage regarding data centers, encompassing elements such as power consumption and data loads. By analyzing this data, valuable insights can be gained and applied to optimize asset usage in environmental control systems, leading to a reduction in power consumption and HVAC loads.

To improve energy efficiency in data centers, AI-powered monitoring tools are a viable solution. These tools leverage machine learning to analyze energy data and create a power usage effectiveness forecasting model. Additionally, some organizations have adopted AI tools to manage HVAC functions autonomously, using IoT sensors to provide continuous temperature data to the system. The software automatically analyzes the data and adjusts the HVAC system to maintain optimal temperature levels consistently. For instance, Google leveraged such technology to reduce energy consumption in its data center cooling systems by 40%.
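A minimal sketch of the forecasting idea: fit a least-squares line to historical (outside temperature, PUE) pairs and predict PUE at a warmer temperature. The data points here are invented, and production AI tools use far richer models fed by live IoT sensor data:

```python
# Fit a simple least-squares line to invented historical data and
# forecast PUE at 35 degrees C. This is only a sketch of the idea
# behind PUE forecasting models, not a production approach.

temps = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]   # outside temperature, deg C
pues  = [1.18, 1.21, 1.25, 1.30, 1.36, 1.43]  # measured facility PUE

n = len(temps)
mean_t = sum(temps) / n
mean_p = sum(pues) / n
slope = (sum((t - mean_t) * (p - mean_p) for t, p in zip(temps, pues))
         / sum((t - mean_t) ** 2 for t in temps))
intercept = mean_p - slope * mean_t

forecast_35 = slope * 35.0 + intercept         # predicted PUE at 35 deg C
print(f"Predicted PUE at 35 deg C: {forecast_35:.2f}")
```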

Best Practice 7: Investigate green energy technologies, such as renewable energy sources, to power your data center.

Organizations seeking to reduce the carbon emissions from their data centers can also consider green energy alternatives, such as geothermal cooling, wind power and hydroelectric power. For example, data center services provider Verne Global uses a combination of geothermal, hydroelectric, solar and wind technologies to power and cool its facilities in Iceland and Finland. Similarly, one of services provider TierPoint’s data centers in Spokane, Wash., was built with a geothermal cooling system driven by water from an underground aquifer below the facility. Iron Mountain operates underground data centers in Missouri and Pennsylvania that also take advantage of natural cooling.

It is recommended that you explore the green energy solutions accessible to your organization. Given the advances in renewable power and cooling techniques, there is a good chance you can discover feasible ways to diminish the carbon footprint of your data center.

Best Practice 8: Partner with green IT vendors and organizations to stay up-to-date on the latest energy-efficient solutions.

Establish strategic alliances with information technology vendors that provide eco-friendly solutions, and collaborate with organizations that specialize in recognizing and recommending sustainable IT alternatives.

IT teams have the opportunity to take advantage of the U.S. government’s Energy Star certification program to identify energy-efficient computer systems, monitors, and other technology products. Additionally, the Global Electronics Council manages a registry of products that meet the Electronic Product Environmental Assessment Tool (EPEAT) standard criteria, which lists servers, networking equipment, and end-user computers, among other environmentally preferable technologies. The U.S. Environmental Protection Agency supported the development of both the registry and the EPEAT standard to ensure that environmental sustainability remains a priority.

Furthermore, it is possible to review the sustainability and energy efficiency standards of IT vendors and service providers by consulting reputable organizations, such as CDP (previously known as the Carbon Disclosure Project) and the RE100 renewable energy initiative. Also, ESG ratings agencies like MSCI, Refinitiv, and Sustainalytics can offer valuable insights.


Best Practice 9: Right Size System Designs to Match Your Workload Requirements

There are many component choices and configuration options on a server. Traditional general-purpose servers are designed to work for any typical workload, which leads to over-provisioning resources to ensure the system works for the widest range of applications. A workload-optimized system, instead, optimizes component choices and configuration options to exactly match the requirements for a target set of workloads. These optimizations reduce unnecessary functionality, which reduces cost, but also reduces power consumption and heat generation. When the optimized solution is scaled to 100s, 1000s, or 100,000s of systems, the savings are significant.

Different product lines are optimized for different workloads, for example, servers with more CPUs, cooling capacity, memory capacity, I/O capacity, or networking performance. HPC applications require fast CPUs, while content delivery networks need massive I/O capabilities. Using the server type designed for the workload reduces excess and unused capacity and, thus, cost.


Best Practice 10: Share Common Scalable Infrastructure (Multi-node, Blade Efficiency)

Systems can be designed to share resources, which can lead to better overall efficiency. For example, sharing power supplies or fans among several nodes reduces the need to duplicate these components for each node; the Twin family of servers (TwinPro, BigTwin, FatTwin, and GrandTwin) all share power supplies and fans. As a result, larger fans and more efficient power supplies can be used, reducing electricity use when all of the nodes are running applications.
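A rough way to see why shared supplies help, with assumed numbers: small per-node supplies at 90% efficiency versus one larger shared supply at 96%, each node drawing 250 W of DC load:

```python
# Back-of-envelope comparison (all efficiencies and loads assumed):
# conversion losses for four nodes with dedicated supplies versus a
# single shared, more efficient supply.

node_dc_watts = 250.0
nodes = 4

dedicated_wall = nodes * node_dc_watts / 0.90   # four smaller supplies
shared_wall    = nodes * node_dc_watts / 0.96   # one larger shared supply

saved = dedicated_wall - shared_wall
print(f"Power saved by sharing supplies: {saved:.1f} W per 4-node system")
```

Multiplied across thousands of nodes, conversion savings of this size become a meaningful fraction of a facility's power budget.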

Another method that decreases energy usage when air cooling a server is to be aware of and reduce cabling issues. Power and network cables that block airflow require the fans to operate at a higher RPM, using more electricity. Careful placement of these cables within the chassis and external to the chassis reduces this possible issue. In addition, a server that consists of blades with integrated switching will typically have fewer cables connecting systems, as this is done through a backplane.

Selection Of Servers with Shared Components

Best Practice 11: Operate at Higher Ambient Temperature (Thermostat, Free Air)

When using traditional air cooling, the air entering the server (inlet temperature) is maintained by Computer Room Air Conditioning (CRAC). The amount of air conditioning used in a data center contributes the most to the PUE calculation, so reducing it significantly lowers the PUE and, thus, OPEX costs. Around the world, many data centers keep inlet temperatures too low. Data center operators can reduce power usage by raising inlet temperatures toward the manufacturer’s recommended maximum value. The survey results show a wide range of inlet temperatures, and also that most IT administrators limit the inlet temperature to well below the manufacturer’s high limit.

Average Server Inlet Temperatures

Cooling with “free air,” which can be defined as using fans to move filtered, humidity-adjusted outside air through the facility, can be a significant contributor to a green data center by lowering the need for a computer room air conditioner (CRAC). The use of outside air is only possible in certain climates and geographies and may be part of the decision-making process for where a new data center is located. Careful consideration should be given to understanding yearly climate norms and extremes.

Free Air Cooling Use

In the recent survey, over 90% of the respondents stated that they use some amount of free air cooling.

Knowledge of PUE In Respondent’s Data Center

Best Practice 12: Capture Heat at the Source (Hot or Cold Aisle Containment, Liquid Cooling)

Computer room air conditioning is the most significant variable to optimize in order to lower overall PUE. The PUE of a data center is defined as the total amount of power delivered to the data center divided by the amount of power used by the IT components. The lower the value, the more energy efficient the data center is. In the recent survey, about 80% of the respondents knew the PUE of their data centers, and the most frequent PUEs were in the 1.11 to 1.40 range.
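The definition above reduces to a one-line calculation; the wattages here are illustrative assumptions:

```python
# PUE as defined above: total facility power divided by the power used
# by the IT equipment alone. The figures below are invented examples.

total_facility_kw = 1_250.0   # IT load + cooling + lighting + conversion losses
it_equipment_kw   = 1_000.0   # servers, storage, networking

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")
```

A result of 1.25 would fall inside the 1.11 to 1.40 band that the survey found most common.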

Average PUE in Respondents’ Data Centers

Liquid Cooling

Liquid cooling of the CPUs and GPUs can significantly reduce the need for having CRAC units in data centers and the need to push air around. There are several different methods to use liquid cooling to reduce the need for forced air cooling and the potential to reduce the time to completion of a workload. Using liquid cooling can significantly lower the PUE of the data center.

Direct To Chip (DTC or D2C) Cooling

In this method, a cold liquid is passed over the hot CPU or GPU. Since liquid is much more efficient than air at removing and transporting heat, the CPU or GPU can be kept within its thermal design power (TDP) envelope. As CPUs and GPUs draw more power, each creates more heat that must be removed; with more heat in an air-cooled data center, the CRAC units must work harder to cool the hot air before it is returned to the inlet face of the server. When D2C liquid cooling is implemented, significantly less CRAC capacity and lower fan speeds are needed to keep the system cool. The hot liquid is then sent to a cooling apparatus, which may be external to the data center.

Direct To Chip (DTC or D2C) Cooling

Rear Door Heat Exchanger (RDHx)

The rear door of the rack contains liquid and fans that cool the hot server exhaust air before it enters the data center. The heated liquid must then be cooled before being recirculated. Keeping the air in the data center at a lower temperature in this way reduces the cooling demand on the CRAC, which lessens the electricity the data center needs.

Rear Door Heat Exchanger (RDHx)

Immersion Cooling

The entire server, or a group of servers, is immersed in a dielectric liquid. The close contact of the liquid with the hot CPUs, GPUs, and other components is an efficient way to cool the servers, and the fans can be removed from them. Some minor modifications must be made to a server before immersion. This type of cooling is a closed-loop system: cooler liquid is pumped into the immersion tank, and as the hot electronics heat the liquid, the liquid rises, is pumped out of the tank, and is cooled elsewhere. An entire rack of servers can be cooled in this manner.

Immersion Cooling

Hot and Cold Aisles

A significant amount of electricity can be saved on Computer Room Air Conditioning (CRAC) if hot and cold aisles are separated in the data center. With hot and cold aisles, the inlet and exhaust air do not mix, allowing the cooling system to operate more efficiently. To achieve this, rows of racks are installed so that the rears of the racks face each other, creating a hot aisle that keeps the warm exhaust air away from the cooler intake air. Separating hot and cold aisles is therefore one of the more important best practices when designing an energy-efficient data center.

Traditional methods to remove heat from a server move cooler air over the electronic components and expel the heated air from the chassis. Typically, the airflow is front to back (as mounted in a rack), with fans at the server’s rear pulling the air through. With multiple servers in a rack, the hot air from all servers creates a very warm zone behind the rack. As multiple racks are lined up side by side, the total amount of hot air expelled from the back of the servers increases, resulting in an aisle full of hot air that should be contained.

Figure 9 – Hot Aisle and Cold Aisle Containment Options

Containment: Creating a cold aisle and hot aisle will keep the cold aisles cold and allow just the hot air to be returned to the CRAC.

Systems to remove the hot air to be cooled: Below are examples of an isolated hot aisle where the hot air is returned to the CRAC unit and a cold aisle containment where the cooler air is delivered beneath the floor as intake to the servers. The hot air is then chilled with computer room air conditioners or other methods.

Whether to design a data center with hot aisle or cold aisle containment depends in part on the initial investment. Cold aisle containment is generally less expensive, since it may require only doors and a roof; it is also a more straightforward setup, and expansion costs are lower if additional growth is needed.

Many hot aisle setups use raised-floor configurations, which are more efficient and reduce costly use of CRAC units. The disadvantage is the initial cost of building the data center from the ground up with raised floors and ductwork. Still, the long-term cost-benefit is greater than that of a cold aisle configuration if optimized using best practices.

Use of Hot and Cold Aisles in Respondents’ Data Centers

Best Practice 13: Select & Optimize Key Components for Workload Perf/Watt (CPU, GPU, SSD, …)


As CPU technology constantly improves, one of the most critical gains is that more work per watt is accomplished with each generation of CPUs and GPUs. The most recent offerings by Intel and AMD are up to three times more performant in terms of the work produced per watt consumed. This technological marvel has enormous benefits for data centers that wish to offer more services with constant or reduced power requirements.

Performance per Watt Over Time (Normalized)

CPU Differences

CPUs for servers and workstations are available in many different configurations. CPUs are generally categorized by the number of cores per CPU, the processor clock rate, the power draw, the boost clock speed, and the amount of cache. The number of cores and clock rate are generally related to the amount of electricity used. Higher numbers of cores and clock rates will usually require more electricity delivered and will run hotter. Conversely, the lower number of cores and clock rates will use less power and run cooler.

For example, suppose workload A does not have to complete within a defined amount of time. In that case, a server with lower-powered CPUs (power generally correlates with performance) can be used instead of the higher-powered system a more stringent SLA would require. An email server is a good example: the response time to view or download emails must feel interactive, but a slower, less power-hungry CPU suffices, since the bottlenecks are storage and networking. On the other hand, a higher-performing, higher-energy CPU is appropriate for a database environment where data may need to be analyzed quickly. Putting an email server onto a high-performing system would cause no harm, but the system would be wasted on that purpose.
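The trade-off above can be framed as energy to completion: total energy is average power draw multiplied by run time, so a lower-powered CPU can finish a loose-SLA job with less total energy even though it runs longer. All wattages and run times below are invented for illustration:

```python
# Illustrative comparison (assumed numbers): energy to complete a batch
# job on a fast, power-hungry CPU versus a slower, lower-powered one.

def energy_kwh(avg_watts: float, hours: float) -> float:
    """Total energy consumed: average power (W) times run time (h)."""
    return avg_watts * hours / 1000.0

high_power = energy_kwh(avg_watts=350.0, hours=4.0)   # fast CPU
low_power  = energy_kwh(avg_watts=180.0, hours=6.5)   # slower CPU

print(f"high-power CPU: {high_power:.2f} kWh, "
      f"low-power CPU: {low_power:.2f} kWh")
```

With these assumed figures the slower CPU uses less total energy, which is exactly the calculation to run before matching a workload's SLA to a server SKU.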


Today’s computing environments are becoming more heterogeneous. Accelerators are available to increase the performance of specific tasks, even while CPU performance has increased exponentially over the past few years.


The most popular and visible accelerators are GPUs, which can be used for massively parallel tasks. New GPUs contain thousands of “cores,” compared to tens or low hundreds in CPUs. For HPC and AI applications, GPUs deliver tremendous performance increases but come with an increased electricity requirement: CPUs (as of late 2022) top out at around 350W, while GPUs reach up to 700W. However, when an application has been designed to take advantage of the massive parallelism of GPUs, its run time decreases significantly, and with it the electricity consumed.


Thus, a combined CPU + GPU system can use 40% less energy for a given task than a CPU-only system. In the recent survey, almost 80% of the respondents use GPUs.
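As a worked illustration of how such a savings figure arises, using the wattages quoted above (350 W CPU, up to 700 W GPU) and assumed run times that are purely hypothetical:

```python
# Hypothetical check: a CPU-only job at 350 W running 10 hours versus
# the same job on a CPU + GPU system (350 W + 700 W) finishing in
# 2 hours thanks to GPU parallelism. Run times are assumptions.

cpu_only_kwh = 350 * 10 / 1000          # 3.5 kWh
cpu_gpu_kwh  = (350 + 700) * 2 / 1000   # 2.1 kWh

savings = 1 - cpu_gpu_kwh / cpu_only_kwh
print(f"Energy saved by accelerating the job: {savings:.0%}")
```

The accelerated system draws three times the power but for one fifth of the time, so total energy drops even though instantaneous draw rises.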


Hard Disk Drives (HDDs) have been the primary storage method for over 50 years. While HDD capacity has increased dramatically in recent years, access time has remained relatively constant; throughput has increased over time, as have capacities. Solid State Drives (SSDs), however, are faster for data retrieval and use less power than HDDs, although HDDs remain suitable for longer-term storage within an enterprise. M.2 NVMe SSDs currently transfer about 3 GB/s, which can significantly reduce the time required to complete an I/O-heavy application. Overall, this performance results in lower energy consumption per completed task compared to other I/O technologies.


Data Center GPU Use

Best Practice 14: Optimize Refresh Cycles at Component Level for Perf/Watt (Disaggregated, Universal GPU, Long Life Chassis/PS)

The major components of servers are continually improving in terms of price and performance. As applications continue to use more data, whether for AI training, increased resolution, or more I/O (as with content delivery networks), the latest servers containing the most advanced CPUs, memory, accelerators, and networking may be required. However, each of the sub-systems evolves at a different rate. With refresh cycles decreasing from five years to three, according to some estimates, discarding entire servers contributes unnecessarily to e-waste. With a disaggregated approach, the components or sub-systems of a server can be replaced as newer technology is deployed. A well-designed chassis can accommodate a number of electronic component technology cycles, allowing component replacement; by designing a chassis for future increases in the power required for CPUs or GPUs, the chassis does not have to be discarded as new CPUs become available.

For example, in a recent white paper, Intel discusses how disaggregated servers allow Intel to reduce capital expenditures while upgrading certain technology as needed.

Best Practice 15: Optimize Power Delivery

Power conversion from AC to DC generates some amount of heat. With AC delivered to the data center, the power must be converted to DC for the system, and with each conversion some power is lost, contributing to the inefficiency of the data center. More efficient conversion results in less wasted power, with less heat as a by-product to be removed from the system.

Titanium power supplies are the most efficient option, offering 96% power efficiency. Platinum power supplies are slightly less efficient at 94%. Gold power supplies offer a lower efficiency of 92%. The efficiency of a power supply isn’t linear or flat when it comes to the supply’s output range. Most power supplies operate at their maximum efficiency when they’re running in the upper ranges of their rate capacity. This means that an 800-watt power supply providing 400 watts of power (50% capacity) will be less efficient than a 500-watt power supply providing that same 400 watts of output power (80% capacity).
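The difference between the efficiency tiers quoted above is easiest to see as wasted watts at the wall. The sketch below uses a single efficiency figure per tier for a fixed 400 W DC load, which is a simplification, since real efficiency varies with the load point as described:

```python
# Sketch of conversion losses per efficiency tier for a server drawing
# 400 W of DC load. One flat efficiency per tier is an assumption;
# real supplies are most efficient near the top of their rated range.

dc_load_watts = 400.0
efficiency = {"Titanium": 0.96, "Platinum": 0.94, "Gold": 0.92}

wasted = {}
for tier, eff in efficiency.items():
    wall_watts = dc_load_watts / eff          # power drawn from the grid
    wasted[tier] = wall_watts - dc_load_watts  # lost as heat in conversion
    print(f"{tier:>9}: draws {wall_watts:6.1f} W from the wall, "
          f"wastes {wasted[tier]:5.1f} W as heat")
```

Every wasted watt is paid for twice: once at the meter and again as heat the CRAC must remove.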

Additionally, power distribution for a rack of servers is best accomplished with multi-node and blade systems, which share AC power supplies among a number of independent servers, resulting in a more efficient AC-to-DC conversion process. The higher the AC input voltage, the more efficient the entire power conversion process.

Optimizing Power Conversion Steps

One method to reduce the PUE of a data center is to optimize the different power conversion steps. In our survey, 66% of the respondents said they optimized the power conversion steps.

Best Practice 16: Utilize System Consolidation, Virtualization and Power Management

System consolidation, virtualization, and power management tools are all great examples of ways to improve power utilization and increase flexibility.

 Comparison of Non-Virtualized and Virtualized Environments

IT administrators who can control power utilization can significantly affect the overall power consumed in a data center. Empowering administrators to monitor and then regulate this critical aspect of the data center leads to more efficient operation and less expense. For example, by analyzing logs of power usage over time, IT administrators can schedule specific applications (jobs) to run when more renewable power is available or when power costs are lower (time-of-day pricing). Capping the power a server or a cluster of servers can draw, for example through the Supermicro SuperCloud Composer, reduces costs by limiting how much energy a server can use; less power delivered to the server results in lower performance, but it can still meet the SLAs for specific workloads. Virtualization and container management systems are also critical for reducing power usage in the data center: virtual servers allow higher utilization of CPU and memory resources, which can decrease the number of servers in the data center.
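The time-of-day scheduling idea can be sketched as choosing the cheapest contiguous window for a deferrable batch job. The hourly prices and job length below are invented; a real implementation would pull tariffs from the utility or a power-management tool:

```python
# Hypothetical sketch: given per-hour electricity prices for one day,
# find the cheapest contiguous window for a deferrable batch job.

hourly_price = [0.18, 0.17, 0.12, 0.09, 0.08, 0.08, 0.10, 0.14,  # 00-07
                0.19, 0.22, 0.24, 0.25, 0.25, 0.24, 0.23, 0.22,  # 08-15
                0.24, 0.27, 0.29, 0.28, 0.25, 0.22, 0.20, 0.19]  # 16-23

job_hours = 4
costs = [sum(hourly_price[h:h + job_hours])
         for h in range(len(hourly_price) - job_hours + 1)]
start = costs.index(min(costs))
print(f"Cheapest {job_hours}-hour window starts at {start:02d}:00")
```

The same windowing logic applies when the optimization target is renewable availability rather than price.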

Use of software management systems
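As a sketch of the time-of-day scheduling idea, the following picks the cheapest contiguous window for a deferrable batch job. The hourly prices are hypothetical, and this is not the SuperCloud Composer API, just an illustration of the scheduling logic:

```python
# Sketch: shift a deferrable job into the cheapest hours of the day.
# Hypothetical $/kWh prices by hour (lower overnight).

hourly_price = [0.08] * 7 + [0.15] * 12 + [0.10] * 5  # 24 hourly entries

def cheapest_window(prices, hours_needed):
    """Return the start hour of the contiguous window with the lowest total cost."""
    costs = [sum(prices[h:h + hours_needed])
             for h in range(len(prices) - hours_needed + 1)]
    return costs.index(min(costs))

start = cheapest_window(hourly_price, 4)
print(f"Schedule the 4-hour batch job starting at hour {start}")
```

The same window-selection idea applies when the input is a forecast of grid carbon intensity rather than price.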

Best Practice 17: Source Green (Renewable Energy, Green Manufacturing)

A data center’s energy source has the most significant impact on its carbon footprint and poses the most substantial opportunity to benefit the environment. Typically, a data center operator decides the energy source for all facility users. In addition, many data centers, including many colocation data centers, publicize the movement toward energy generation from 100% renewable sources.

Renewable energy programs for commercial customers include generation through utility, third-party power purchase agreements (PPA), or renewable energy credits (REC). Distributed renewable energy production owned or controlled by data centers is optimal. But on-site renewable energy sources do not always satisfy data center energy demands. Fortunately, clean grid energy can augment this. There are also increasingly effective energy storage solutions for deployment on-site, coming down in cost as battery technology improves and scales.
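As an illustration of how these sources can stack to cover a facility's demand, the following uses entirely hypothetical annual figures:

```python
# Sketch: combining on-site generation, a PPA, and RECs to cover annual demand.
# All MWh figures are hypothetical.

annual_demand_mwh = 50_000
onsite_solar_mwh = 12_000   # on-site or adjacent generation
ppa_wind_mwh = 25_000       # third-party power purchase agreement
recs_mwh = 13_000           # renewable energy credits covering the remainder

covered_mwh = onsite_solar_mwh + ppa_wind_mwh + recs_mwh
share = covered_mwh / annual_demand_mwh
print(f"Renewable coverage: {share:.0%}")
```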

When our survey asked how renewable energy or RECs were used, the results showed that about 86% of data centers use renewable energy to some degree.

Renewable Energy Use in Data Centers

In addition, about 72% of the respondents stated that they use Renewable Energy Credits or RECs.

Renewable Energy Credit Use

Additionally, it is important to understand and document the sourcing of components used to manufacture servers (Scope 3 emissions). By monitoring supply-chain partners and favoring those with low emissions, the overall environmental impact of IT equipment can be lowered.
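One way to act on Scope 3 data is to compare suppliers component by component and total the best case. The component names and embodied-emission figures below are hypothetical:

```python
# Sketch: pick the lower-emission supplier per component (Scope 3).
# Emission factors (kg CO2e per part) are hypothetical.

supplier_emissions = {
    "memory":  {"supplier_a": 120.0, "supplier_b": 95.0},
    "storage": {"supplier_a": 60.0,  "supplier_b": 72.0},
    "chassis": {"supplier_a": 40.0,  "supplier_b": 38.0},
}

# For each part, choose the supplier with the smallest emission factor.
choices = {part: min(opts, key=opts.get) for part, opts in supplier_emissions.items()}
total_kgco2e = sum(supplier_emissions[p][s] for p, s in choices.items())

print(choices)
print(f"Best-case embodied emissions: {total_kgco2e:.0f} kg CO2e per server")
```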

Best Practice 18: Rethink Site Selection Criteria with Climate – Location, Location, Location

Large-scale data centers are expensive to operate. A single hyperscale data center can demand 100 MW of power to keep servers, storage, and networking infrastructure performing as expected (enough to power 80,000 US households). In addition, while the electronics themselves use most of the energy consumed in a data center, cooling those electronics to maintain operating temperatures can consume 40% of facility energy. Location involves trade-offs: a data center might be sited where free-air cooling is available all year, or where 100% renewable energy is available, but latencies to users around the world may suffer.
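A back-of-the-envelope calculation makes these figures concrete. The ~11 MWh/year household figure below is an assumption consistent with the 80,000-household comparison above:

```python
# Sketch: annual energy of a 100 MW facility and the cooling share,
# using the 40% cooling figure. Household usage (~11 MWh/yr) is an assumption.

facility_mw = 100.0
hours_per_year = 8760
annual_mwh = facility_mw * hours_per_year        # 876,000 MWh per year
cooling_mwh = annual_mwh * 0.40                  # ~350,400 MWh on cooling alone

households = cooling_mwh / 11.0                  # ~32,000 households' worth
print(f"Annual cooling energy: {cooling_mwh:,.0f} MWh (~{households:,.0f} households)")
```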

Building costs consist of the land value as well as the cost of construction. Construction prices vary depending on the region of the country and the world. Unlike building a home or an office building, a data center’s location has some unique requirements to be considered “green” and deliver agreed-upon Service Level Agreements (SLAs). In addition, factors such as the climate, energy pricing, risk of natural disasters, water costs, and the cost of network bandwidth all contribute to the choice of data center locations.

Distributed digital infrastructure means an organization’s IT infrastructure may span several locations, some needing greater proximity to business operations than others. Many data centers remain in hot or temperate climates; although real estate prices may be lower there, overall costs can be higher because free-air cooling may not be an option and CRAC utilization may need to be very high. Data centers in cooler climates can use outside air to cool systems, reducing the need for Computer Room Air Conditioners (CRACs). Outside air can be brought into the data center directly (after passing through a filtering mechanism) to supply the cold aisles, although poor air quality can limit this approach. Remote locations where real estate is less expensive may also lie relatively far from internet exchanges, resulting in higher latencies for customers, so connecting the data center to multiple trunk providers may be necessary. Finally, before locating a data center in a colder climate, be aware of where its energy comes from.

Share of internet users above the world average.

Some locations offer natural cooling benefits. For example, if a data center is located near a moderate-to-cold body of water, its cooling system can be built to tap into that nearby cool water: naturally cooler river water run through a heat exchanger can significantly reduce the power otherwise needed to chill the water in the facility's standard cooling loops.
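The physics here is the sensible-heat relation Q = flow rate x specific heat x temperature rise. The sketch below uses an assumed 1 MW heat load and an assumed 10 K allowed water temperature rise:

```python
# Sketch: river-water flow needed to carry away a given heat load,
# from Q = m_dot * c_p * delta_T. Load and temperature rise are assumptions.

heat_load_w = 1_000_000   # 1 MW of heat to reject (assumed)
c_p = 4186.0              # specific heat of water, J/(kg*K)
delta_t = 10.0            # allowed water temperature rise, K (assumed)

flow_kg_s = heat_load_w / (c_p * delta_t)   # ~23.9 kg/s, roughly 24 L/s
print(f"Required river-water flow: {flow_kg_s:.1f} kg/s")
```

Allowing a larger temperature rise cuts the required flow proportionally, which is why discharge-temperature limits in environmental permits directly shape the cooling design.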

In our recent survey, 84% of the respondents stated that they do have the ability to locate their data centers where electricity prices are lower.

Ability to Locate Data Center to Where Lower Electricity Prices Are Available

Summary of Data Centers’ Impact on the Environment

The environmental impact of a large data center is significant in terms of both energy usage and e-waste. Data center owners can take a number of straightforward steps to reduce the electricity required to operate a data center, actions that both lower operating costs and benefit the environment. Several choices can be made when designing, retrofitting, or upgrading a data center. From the type and source of electricity generated to the refresh rate of new technologies, there are many ways to make data centers more efficient and use less power. For example, choosing the data center's location, its design, and suitable servers can significantly reduce the power needed for operation, which reduces both environmental impact and costs.

  • Up to 400 TWh of Electricity Used to Power Data Centers Worldwide
  • Estimated 2+ Million Tons of E-Waste Generated By Data Centers Per Year
  • Performance/Watt Increasing Faster than CPU and GPU Power Requirements
  • In Some Regions, 80% of Electricity Generated by Fossil Fuels
  • Up to 8 Billion Trees Will Not Have to be Planted for Carbon Offset if Data Centers Worldwide Used More Efficient Servers


In conclusion, sustainability is no longer an option but a necessity, and businesses are now responsible for taking action towards a greener future. By implementing the above measures, data centers can enhance their eco-friendliness and reduce their carbon footprint while also reaping the benefits of cost savings, increased efficiency, and improved reputation. Investing in sustainability not only benefits the environment but also creates a competitive advantage for businesses in an increasingly eco-conscious world. It’s time to lead by example and make a positive impact on the planet.


