Building the High-Density Data Centers of Tomorrow

There are few technologies as disruptive, or with as much potential, as artificial intelligence (AI). And, with the ability to better leverage historical and operational data to identify efficiencies, streamline operations and improve decision-making, there are advanced use cases for AI technologies and solutions across practically every vertical market and industry.

AI Transformation of the High-Density Data Center

In fact, in an article published in 2021, the Forbes Technology Council identified 14 different industries and markets where AI could have a revolutionary impact. The 15th entry on the list effectively acknowledges that the use cases of AI are limitless, admitting that, “There is no single industry that will benefit from it; all will,” and that, “Any industry with automatable tasks,” is ripe for transformation by AI.

But these advanced AI, data analytics, and machine learning (ML) applications and solutions come with a much larger compute requirement than traditional workloads and applications. While they may be the key to untold benefits and operational improvements for any number of organizations, they’re forcing a change in the density of server racks, the way those racks are organized, and even the data centers where the racks are deployed.

Let’s take a deep dive into how AI and other advanced applications are transforming the data center, from the servers and racks to the rooms and buildings that hold them. Let’s also explore how this new generation of high-density data centers impacts how rooms and buildings are designed, powered, cooled, and constructed.

From Lithe Pickups to Juiced-up Monster Trucks

The immense computational requirements of these advanced analytics solutions and AI applications are forcing hyperscalers and other enterprises to embrace advanced technologies – and new use cases for existing technologies – to get the most computing power from each rack.

For example, something that we’re seeing emerge across the Vantage fleet is system platforms that rely heavily on computational offload. It is common in these environments for GPUs to take on certain computational workloads that would traditionally be handled by the computer’s CPU. By offloading these computations, a single computer dramatically increases its overall computational capacity. GPU offloading is nothing new, but it’s a technology that we’re seeing with increased frequency across our data centers.

Example of a data module at Vantage Data Centers’ VA12 facility.
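
To make the offload idea above a bit more concrete, here is a minimal sketch of moving a single heavy computation from the CPU to a GPU. PyTorch and a CUDA-capable card are assumptions made purely for illustration; the article doesn’t name a specific framework or platform.

```python
# Minimal GPU-offload sketch: run the same matrix multiplication on the CPU,
# then offload it to a CUDA GPU if one is present. (PyTorch is an assumed
# framework choice for illustration only.)
import time
import torch

def timed_matmul(device: str, size: int = 4096) -> float:
    """Multiply two random square matrices on `device` and return elapsed seconds."""
    a = torch.rand(size, size, device=device)
    b = torch.rand(size, size, device=device)
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to finish before stopping the clock
    return time.perf_counter() - start

print(f"CPU matmul: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU matmul: {timed_matmul('cuda'):.3f} s (work offloaded from the CPU)")
```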

Another traditional or existing technology that we’re seeing embraced in new ways is the use of server clusters or pods – multiple servers or computers working together to run a single application or calculation. Today’s clusters differ dramatically from their distant cousins in that the software linking the computers together allows the power of multiple computers to be aggregated into one High-Performance Computing (HPC) environment.

HPC environments not only require advanced computing techniques, they also must have extremely fast networks for information sharing. In days past, technologies like InfiniBand were used to link storage devices to the network, and these high-speed connections made the storage device feel like it was directly attached to the server. That technology is now being used to connect server clusters, effectively creating neural networks. The power of the platform, coupled with blazing-fast networks, allows computing systems to behave very much like the human brain.
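
As a rough illustration of how software stitches separate servers into one HPC environment over a fast fabric, here is a minimal sketch using MPI collectives. mpi4py is an assumed tool choice, and the file name in the run command is hypothetical; the underlying interconnect (InfiniBand or otherwise) is handled by the MPI library.

```python
# Minimal cluster-aggregation sketch: each rank (server/process) computes a
# partial result over its own shard, then a collective all-reduce over the
# network combines the partial results into one answer.
# Run with something like: mpirun -np 4 python cluster_sum.py  (hypothetical file name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each node works on its own slice of the overall data set.
local_chunk = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)
local_sum = local_chunk.sum()

# The all-reduce is where interconnect latency matters: every iteration of a
# tightly coupled workload pays this communication cost.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks contributed, global sum = {global_sum:.0f}")
```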

All these things add up to create high-density data centers. By embracing these new and existing technologies and supercharging the computing power of each server, rack, or cluster of racks, the user is effectively creating a power-hungry monster. While a traditional application may drive server rack densities upwards of 15 kW, HPC configurations are creating monster racks that can require upwards of 50 kW and climbing. It’s the equivalent of taking a base-model pickup and turning it into a big, monster truck with eight-foot tires.
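
To put rough numbers on that jump, here is a quick back-of-the-envelope sketch. The row size is an assumed figure for illustration, not a Vantage design number; only the 15 kW and 50 kW densities come from the paragraph above.

```python
# Back-of-the-envelope comparison of a traditional row versus an HPC row.
RACKS_PER_ROW = 20      # assumed row size, for illustration only
TRADITIONAL_KW = 15     # traditional rack density cited above
HPC_KW = 50             # high-density HPC rack cited above

traditional_row_kw = RACKS_PER_ROW * TRADITIONAL_KW
hpc_row_kw = RACKS_PER_ROW * HPC_KW

print(f"Traditional row: {traditional_row_kw} kW")   # 300 kW
print(f"HPC row:         {hpc_row_kw} kW")           # 1,000 kW, i.e. about 1 MW per row
print(f"Increase:        {hpc_row_kw / traditional_row_kw:.1f}x the power (and heat) to handle")
```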

If we’re going to make these monster energy-hungry server racks, then we need to understand how to provide for their basic needs. So, what must be done within the data center to deliver everything necessary to keep these monsters running?

Ensuring Cool Running

This next generation of high-density data centers requires new cooling considerations that can have a waterfall effect on the way data centers are designed and constructed. Data centers are often cooled by air, and the nature of air cooling has led data modules to be laid out as long, rectangular rooms that allow maximum airflow to every rack row. Today’s power-hungry, high-density racks make that layout less workable: they simply can’t be arranged side by side and still be effectively cooled by air without major alterations to both the length and configuration of the row.

If a facility is going to be a high-density data center, cooling becomes one of the first and most important considerations. Am I going to use air to cool the site? If the answer is “yes,” then I must strategically change the way I arrange the physical data center, increasing the amount of aisle space and the distance between rows to allow appropriate airflow. We need to re-envision how we approach containment and get comfortable with rows that have more blanking panels than they do server racks.
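
A quick sketch of the airflow math helps explain why the layout has to change. It uses the standard sensible-heat relationship Q = P / (ρ · cp · ΔT); the air properties and the 12°C supply-to-return temperature rise are assumed, illustrative values rather than design figures from this article.

```python
# Rough airflow-requirement sketch: how much air must move through a rack to
# carry away its heat, for a traditional 15 kW rack versus a 50 kW HPC rack.
RHO_AIR = 1.2        # kg/m^3, air density (assumed, near sea level)
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
DELTA_T = 12.0       # K, assumed temperature rise across the rack
M3S_TO_CFM = 2118.88 # cubic meters per second -> cubic feet per minute

def required_airflow_cfm(rack_kw: float) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at the assumed delta-T."""
    m3_per_s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * DELTA_T)
    return m3_per_s * M3S_TO_CFM

for kw in (15, 50):
    print(f"{kw:>2} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM of cooling air")
# A 50 kW rack needs more than three times the airflow of a 15 kW rack, which
# is where the wider aisles and extra blanking panels come from.
```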

The other alternative is to embrace liquid cooling. However, that creates other considerations. How do I deliver water to the room? How are leaks handled and their damage minimized? How is plumbing run to the room in a way that doesn’t impact other systems and utilities? Where do ownership demarcs land in this new plumbing system, and what part of the system is the data center provider’s responsibility? These are all questions that need to be answered early on since they impact the actual design and construction of the data center.

As applications drive even greater rack density, companies may need to take the plunge and look at full immersion cooling, which involves racks (now tanks) being placed horizontally and filled with an inert liquid coolant.

These technologies have been around for a while, but only recently have the systems moved from messy oils to chemically engineered fluids that are inert and do not refract light. As server manufacturers embrace the nuances of a horizontal support plan, these solutions will become more and more prevalent.

There are a number of questions and considerations that come along with immersion cooling as well. The data center needs to be designed and constructed with tank placement in mind. Because the racks are flipped on their side, the weight distribution changes in ways that are essential to consider, especially as data centers increasingly go vertical. Also, since immersion cooling enables more density in each room, cooling is no longer the bottleneck; instead, delivering adequate power becomes the main challenge.
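
The weight question lends itself to a similar back-of-the-envelope sketch. Every number below (tank footprint, fill depth, fluid density, hardware and tank mass) is an assumed, illustrative value rather than a specification from this article or any particular product.

```python
# Rough floor-loading sketch for a horizontal immersion tank.
TANK_LENGTH_M = 2.4      # assumed tank footprint
TANK_WIDTH_M = 0.8
FLUID_DEPTH_M = 1.0      # assumed coolant fill depth
FLUID_DENSITY = 850.0    # kg/m^3, assumed (single-phase dielectric fluids are roughly in this range)
IT_HARDWARE_KG = 1000.0  # assumed mass of the submerged servers
TANK_SHELL_KG = 300.0    # assumed mass of the empty tank

fluid_kg = TANK_LENGTH_M * TANK_WIDTH_M * FLUID_DEPTH_M * FLUID_DENSITY
total_kg = fluid_kg + IT_HARDWARE_KG + TANK_SHELL_KG
footprint_m2 = TANK_LENGTH_M * TANK_WIDTH_M

load_kpa = (total_kg * 9.81) / footprint_m2 / 1000.0
print(f"Total tank mass: ~{total_kg:,.0f} kg over {footprint_m2:.2f} m^2")
print(f"Floor load:      ~{load_kpa:.1f} kPa (~{load_kpa * 20.885:,.0f} psf)")
# Concentrated loads in this range are why tank placement has to be planned
# into the structure, especially on the upper floors of a multi-story building.
```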

But cooling is just one consideration. There’s also a distance issue to address.

Overcoming the Tyranny of Distance

As we discussed previously, multiple servers are now being asked to work in unison, running as one to analyze a data set. For this to be possible, the latency on the network that connects these servers needs to be incredibly low. For high-speed, super-low-latency networks to be possible, the distance that separates servers and server racks in a pod or cluster needs to be dramatically reduced.

For some of these incredibly high-speed networks, including the InfiniBand networks that we discussed, the distance between server racks and their network hubs needs to be kept to no more than 30 meters. That means we only have roughly 98 feet to play with – and that can be a challenge when a network needs to span multiple rooms in a single building.
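
The arithmetic behind that distance budget is straightforward: signals in copper or fiber travel at a sizable fraction of the speed of light, so every meter of cable adds latency that a tightly coupled cluster has to absorb on every exchange. The velocity factor below is an assumed typical value, not an InfiniBand specification.

```python
# Quick propagation-delay sketch for different cable runs between a rack and
# its network hub.
SPEED_OF_LIGHT = 299_792_458.0  # m/s
VELOCITY_FACTOR = 0.66          # assumed fraction of c for the cable medium

def one_way_delay_ns(distance_m: float) -> float:
    """One-way propagation delay in nanoseconds over the given cable distance."""
    return distance_m / (SPEED_OF_LIGHT * VELOCITY_FACTOR) * 1e9

for meters in (5, 30, 100):
    print(f"{meters:>3} m of cable -> ~{one_way_delay_ns(meters):.0f} ns one-way")
# At the 30-meter limit that is roughly 150 ns each way before any switch or
# NIC latency, which is why these cable runs are kept so short.
```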

To meet this restrictive cabling and networking requirement, the data center builder needs to think outside of the box – or outside of the traditional approach of utilizing conduit and cable trays – to connect disparate rooms. And one of the simplest solutions is to run direct connections through the floor between high-density data centers that are in the same, multi-level building.

In the past, this would have been difficult because single-story data centers were the norm. However, as real estate restrictions and customer demand make multi-story data centers more desirable to data center providers, their increasing popularity introduces a unique side effect that is ideal for HPC environments: verticality.

While more multi-story data centers are being constructed, that doesn’t necessarily mean they’re being designed for direct connections between floors, or more specifically, between vertically adjacent data modules.

There are considerations that need to be taken into account during the design and construction phase to ensure that multi-story data centers can accommodate direct access between modules, eliminating the need to traverse long, winding conduit systems.

First, there is a safety consideration. For there to be a direct connection between floors, there need to be chases between floors with openings of 12 square feet or more simply to handle cable volume. Putting a 12-square-foot hole in the floor is a fall risk, and the designer needs to plan accordingly to keep employees safe.

Next, there’s a structural integrity consideration. If every room is going to have one or more holes in the ceiling/floor to connect it to the room above/below, that can have a significant impact on the amount of weight that the floor can support. This becomes increasingly essential to consider when immersion cooling is being weighed as an option since tanks of liquid can be heavy.

The intense computing requirements of tomorrow’s advanced AI applications and solutions are only growing, driving the need for a new generation of high-density data centers. These high-density data centers require a significant amount of power, but they also require data center providers to plan, design and construct facilities that can meet their immense cooling needs and restrictive networking requirements.

As hyperscalers and enterprises increasingly embrace these advanced solutions, it’s essential that they work with data center providers who understand these requirements and can partner with them to achieve the desired outcome.

If you would like to learn more about Vantage’s fleet of hyperscale data centers, visit www.vantage-dc.com.
