Space, Power and Racks: What’s Really Limiting Data Centre Capacity


The limitations facing data centres may not be what they first appear, as growing demand prompts fresh examination.

Authored by Markus Gerber, Senior Business Development Manager, nVent Schroff 

Demand for digital services has grown for decades and has only accelerated since the pandemic. As the world moved beyond the pandemic, work practices changed for good; combined with evolving business models and digital transformation, this has driven demand still higher and created new requirements that have seen the likes of edge computing proliferate. The result has been strong growth in data centres, with pressures such as space, density, and power all coming under the spotlight as potential limitations.

Scale of growth 
To get an idea of the scale of growth, data volume is a key indicator. According to Statista, the volume of data created, captured, and consumed has grown from 2 zettabytes in 2010 to 97 zettabytes in 2022, and the figure for 2025 is expected to reach 181 zettabytes. Despite this near-exponential growth, the International Energy Agency reports that energy demand over the same period has risen only from 194 terawatt-hours (TWh) to just over 200 TWh in 2022.
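To put the contrast in concrete terms, the figures above imply wildly different growth rates. The following is a minimal sketch using only the Statista and IEA numbers cited in this article; the compound-growth calculation itself is standard:

```python
# A minimal sketch comparing the two growth rates using the Statista and
# IEA figures cited above. The CAGR formula is standard; only the input
# numbers come from the article.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Data created, captured, and consumed (zettabytes), per Statista
data_growth = cagr(2, 97, 2022 - 2010)       # roughly 38% per year

# Energy demand (TWh), per the International Energy Agency
energy_growth = cagr(194, 200, 2022 - 2010)  # roughly 0.25% per year

print(f"Data volume: ~{data_growth:.0%}/year; "
      f"energy demand: ~{energy_growth:.2%}/year")
```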



These contrasting figures show the extraordinary strides that have been made in computing energy efficiency, a clear demonstration of Moore's Law at work. Now, though, no less a figure than Nvidia CEO Jensen Huang has raised concerns that the Moore's Law effect may be coming to an end. While this is disputed, there can be little doubt that processors will become ever more powerful while producing more heat. As data centres scale to meet that demand, limitations are likely to be encountered, with space chief among them.

Space and power 

Space has often been seen as one of the chief limitations for data centres, typically described in terms of the ability to power equipment in a given area, such as watts per square foot or square metre. This rule of thumb was useful for specifications and facility design, and architects planned power and cooling accordingly. The approach has seen the data centre become progressively hotter and more power-hungry. In an air-cooled data centre, this required ever more air to be pumped through, meaning that for every watt drawn, less and less went to compute.
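As a simple illustration of that rule of thumb, the sketch below works through the arithmetic with entirely hypothetical figures (1 MW of IT load over 500 m² of white space, and roughly 2.5 m² of floor area per rack):

```python
# A minimal sketch of the watts-per-unit-area rule of thumb described
# above. Every figure here is hypothetical, chosen only to show the
# arithmetic: 1 MW of IT load, 500 m² of white space, ~2.5 m² of floor
# area per rack including aisle allowance.

def design_power_density(total_it_load_w: float, white_space_m2: float) -> float:
    """Average design power density in watts per square metre."""
    return total_it_load_w / white_space_m2

density_w_m2 = design_power_density(1_000_000, 500)
print(f"Design density: {density_w_m2:.0f} W/m^2")        # 2000 W/m^2

# The same budget expressed per rack, assuming 2.5 m² per rack
kw_per_rack = density_w_m2 * 2.5 / 1000
print(f"Implied budget: ~{kw_per_rack:.1f} kW per rack")  # ~5.0 kW
```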

As a result, data centres in the nineties and early 2000s became less and less efficient overall. Many hit a threshold where simply pumping more air into the room was no longer viable, making the approach increasingly unfeasible for much of what was already deployed.

Equipment management 

Management, too, became an issue. As data centres evolved, equipment was upgraded, altered, moved around, or replaced after failures. Gaps, spaces, and expansions often meant that even carefully implemented methodologies such as hot aisle/cold aisle containment were left working poorly, as airflow management guidelines were ignored in the rush to meet demand. This compounded the impression of space limitations, when in fact a properly managed facility could take more before reaching the inevitable limit of pumped-air cooling.

Optimisation 

However, not everyone has the luxury of moving to entirely new cooling methodologies. By focusing on optimising the use of existing space, with better rack, row, and enclosure management, much better density and performance can be achieved before the likes of liquid cooling or wholesale change become necessary.

Site surveys are a very useful tool for evaluating the current state of a facility and checking whether best practice is being maintained. As mentioned, expediency sometimes trumps best practice, leading to inefficient operation. A survey can identify what needs to be brought back to specification, ensuring that what you already have operates at peak efficiency.

Test and improve 

Secondly, management systems have become much more sophisticated, allowing for digital modelling. Configurations can therefore be tested to their limits without endangering operating equipment: racks, cabinets, and rows can be modelled to see whether improvements such as in-rack or in-row power, combined with similarly deployed fans or coolers, could relieve pressure points. Great strides have been made in everything from cable management systems and blanking plates to ducting and delivery, ensuring that existing capabilities can be optimised.
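As an illustration of the kind of check such modelling enables, the sketch below compares each row's modelled heat load against its installed in-row cooling capacity to flag pressure points before anyone touches live equipment; the row names and figures are hypothetical:

```python
# A minimal sketch of a digital-model capacity check: compare each row's
# modelled IT heat load with its installed in-row cooling capacity to
# flag pressure points. Row names and figures are hypothetical.

rows = [
    # (row id, IT load in kW, in-row cooling capacity in kW)
    ("row-1", 42.0, 50.0),
    ("row-2", 58.0, 50.0),
    ("row-3", 35.0, 40.0),
]

for row_id, load_kw, cooling_kw in rows:
    headroom_kw = cooling_kw - load_kw
    status = "OK" if headroom_kw >= 0 else "pressure point"
    print(f"{row_id}: load {load_kw} kW, cooling {cooling_kw} kW, "
          f"headroom {headroom_kw:+.1f} kW -> {status}")
```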

Modelling is also a good way to identify potential problems such as hot spots, allowing for mitigation before they become real issues. With increased computational power, combined with artificial intelligence (AI), airflows can be modelled alongside temperature increases to understand the effects of running a few degrees hotter when workloads allow. Configurations can then be adjusted accordingly, making it easier to cope with higher demand or events such as an ambient heat wave.
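A hot-spot check of this kind can be as simple as scanning per-rack inlet temperatures, measured or modelled, against a chosen threshold. The sketch below is illustrative only; the rack names, readings, and threshold are assumptions:

```python
# An illustrative hot-spot scan: flag racks whose inlet temperature
# (measured or modelled) exceeds a chosen threshold so they can be
# mitigated early. Rack names, readings, and the 27 °C threshold are
# all assumptions for the sake of the example.

from statistics import mean

inlet_temps_c = {
    "rack-a1": 24.1, "rack-a2": 25.0, "rack-a3": 29.6,
    "rack-b1": 23.8, "rack-b2": 30.2, "rack-b3": 24.5,
}

def find_hot_spots(temps: dict[str, float], threshold_c: float) -> list[str]:
    """Names of racks whose inlet temperature exceeds the threshold."""
    return sorted(name for name, temp in temps.items() if temp > threshold_c)

print(f"Average inlet: {mean(inlet_temps_c.values()):.1f} C")
print(f"Hot spots: {find_hot_spots(inlet_temps_c, threshold_c=27.0)}")
```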

Heat margins 

Many recent studies, publications, and articles detail how data centre temperatures can often run hotter than previously thought safe, thanks to improvements in manufacturing, management, and controls. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) issued guidance as much as a decade ago saying that, with proper management, data centre operating temperatures could be kept around 27°C, with excursions to 32°C when needed.
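As an illustration, that guidance maps naturally onto a simple classification of rack inlet temperatures. The sketch below uses the 27°C and 32°C figures cited above; the band labels are illustrative rather than ASHRAE terminology:

```python
# A sketch classifying a rack inlet temperature against the figures in
# the guidance above: roughly 27 C as the recommended operating point
# and roughly 32 C as a short-excursion ceiling. The band labels are
# illustrative, not ASHRAE terminology.

RECOMMENDED_MAX_C = 27.0
EXCURSION_MAX_C = 32.0

def classify_inlet(temp_c: float) -> str:
    if temp_c <= RECOMMENDED_MAX_C:
        return "within recommended range"
    if temp_c <= EXCURSION_MAX_C:
        return "acceptable short excursion"
    return "over limit, mitigate"

for temp in (24.0, 29.5, 33.0):
    print(f"{temp:.1f} C -> {classify_inlet(temp)}")
```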

Google, Facebook, and other hyperscalers have also reported experimenting in the 26°C-plus range, with Intel experiments reportedly peaking at 33.3°C.

Combined approach 

What all of this demonstrates is that a coherent, holistic approach, combining infrastructure improvements with better monitoring and management, better design, and best-practice implementation, can help data centres overcome perceived space and cooling limitations. Gains in efficiency and performance can be achieved before resorting to more costly and complex solutions.

Constrained, not curtailed 

Many data centres now find themselves in areas where restrictions on water, power, and back-up energy mean they must operate under more stringent conditions.

Today's range of racks, enclosures, and attendant accessories, combined with new techniques for modelling, design, and operation, can allow operators and service providers to get the most from what they have before reaching what might previously have been perceived as hard limits.
