Developers, IT and Facilities – Why It’s Best to Bring Them Together
Data centers used to be the purview of two different departments within an organization–IT and facilities. Traditionally, these two groups operated in silos, each doing its own thing, yet they were intrinsically linked by the nature of their work.
The IT department would rely on the facilities team to build and maintain data centers and the equipment that would house their servers and other networking infrastructure. And the facilities team’s ability to keep that data center powered and cooled would be made easier or harder based on decisions made by the IT department. But, despite this inherent connection, in many enterprises, they infrequently communicated or collaborated.
Now, that lack of communication and collaboration is compounded by a third group added to the mix–the application developers. These are the people responsible for creating the services, applications and tools that make companies more efficient, provide new offerings to customers and give employees more capabilities. But the applications they develop, and the data those applications generate, will ultimately be housed in the data centers built out and managed by the IT and facilities teams.
The ongoing disconnect between these three groups is a problem, since what each does impacts the others. Let’s look at the cascade of problems that can occur when developers, IT and facilities teams work in their respective vacuums.
Aligning developers with IT.
When developers create a new application, they’re often working in a laboratory environment–a place where all the files, data and information live together. In this environment, everything works, and as a result developers often overlook how their application will behave on the company’s production infrastructure.
Unfortunately for the developer, how an application operates in a research lab may not be how it functions in a production environment. A production environment raises concerns that a lab does not–latency, storage tiering and data security all need to be addressed before applications are promoted.
For example, if an application frequently accesses customer data, but that data is housed in another data center–or even in another part of the same data center–the latency from each of those transactions could negatively impact the user experience and bog down the application’s performance. Or, if the application creates a significant amount of data that is required, by law, to be stored for a long period of time, it could very quickly exhaust the company’s storage capacity. Or, an application may utilize data encryption that is legal in one country, but not others–so when that application is rolled out across a global enterprise, the built-in encryption and security fail to meet regional requirements.
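The retention scenario above lends itself to a quick back-of-envelope check. The sketch below is a hypothetical calculation with illustrative figures (not a tool or method from this article): it estimates how quickly legally mandated retention exhausts a fixed storage capacity.

```python
def days_until_full(daily_gb: float, retention_days: int, capacity_gb: float):
    """Estimate when mandated data retention exhausts storage capacity.

    Assumption: retained data grows by daily_gb each day until the
    retention window is reached, after which the oldest data can be
    deleted and the total plateaus at daily_gb * retention_days.
    """
    steady_state_gb = daily_gb * retention_days
    if steady_state_gb <= capacity_gb:
        return None  # the plateau fits; capacity is never exhausted
    # Before the plateau, storage fills linearly at daily_gb per day.
    return capacity_gb / daily_gb

# Illustrative numbers: 500 GB/day under a 7-year (2,555-day) mandate
# against 100 TB of capacity fills that capacity in 200 days.
print(days_until_full(500, 2555, 100_000))
```

Even a rough estimate like this, shared early with the IT department, tells them whether existing capacity suffices or new storage must be provisioned before launch.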
These are just a few things that could go wrong with an application if it isn’t designed with the enterprise computer infrastructure in mind. Developers need to know what that infrastructure and architecture looks like. Conversely, the IT department needs to know what the application will require so they can outfit data centers with the equipment and systems that meet the application’s requirements along with legal or compliance standards.
If these two parties fail to collaborate, the result could be applications that were built at significant cost and time to the company, but that don’t work as designed in a production environment. The IT department then needs to scramble to accommodate the requirements of that application to ensure that it can function properly and deliver on its promises to the company and its customers. And that’s where they can run into problems with the facilities team.
Aligning IT with facilities.
They may not get the same amount of credit or receive the same amount of attention and adoration as the application developers and the IT department, but the facilities team is just as important for making this whole thing work. These are the people who make sure that the physical data center can support the servers, network infrastructure and devices that make the computer infrastructure function.
For racks upon racks of servers to function properly, they need to be powered. They need to be cooled. There needs to be appropriate cabling between them. And that doesn’t just happen as if by magic. There is also no single industry standard or template for how a data center is laid out. When the IT department determines what the layout of its data center will look like, that decision can have major impacts on how the data center needs to be constructed, powered and cooled.
Each data center can only house a finite number of servers and devices. If the IT department tries to put too many machines in a room, or puts those machines and racks too close together, it can spell disaster.
Too many machines may mean there isn’t enough power to go around. Too little space between racks may keep cool air from reaching everywhere it’s needed. Too much density, and the cooling systems may not keep up with the heat the machines generate. This may force the facilities team to adopt an alternative cooling method–such as water cooling–that the facility was never designed to support, making it either physically impossible or too expensive to implement.
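A simple capacity check makes the density trade-off concrete. This is a hypothetical sketch (the rack counts, per-rack loads and budgets are assumptions for illustration, not figures from the article) comparing a planned layout against a room’s power and cooling budgets:

```python
def density_check(racks: int, kw_per_rack: float,
                  power_budget_kw: float, cooling_budget_kw: float) -> dict:
    """Compare total IT load against a room's power and cooling budgets."""
    load_kw = racks * kw_per_rack
    return {
        "load_kw": load_kw,
        "power_ok": load_kw <= power_budget_kw,
        "cooling_ok": load_kw <= cooling_budget_kw,
    }

# 40 racks at 12 kW each is 480 kW of IT load: within a 500 kW power
# budget, but beyond a 400 kW cooling budget -- so either the layout
# must thin out, or the facility needs a different cooling method.
result = density_check(40, 12, 500, 400)
```

Run jointly by IT and facilities before equipment is ordered, a check like this surfaces the cooling shortfall while it is still a planning problem rather than a construction problem.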
And those are just a few issues that can arise when the IT department and facilities aren’t coordinated. There are more, ranging from the weight of the equipment being too much for a raised floor to handle, to hanging wire racks not being large or strong enough to handle all of the cables necessary to connect the equipment.
And this is how the problems can expand from one department to another. The application developers didn’t talk to the IT department, forcing them to scramble to accommodate the unique requirements of a new, highly anticipated application. The IT department then tries to build out an IT and computer infrastructure to power the application, but what they design can’t be supported by the data center that the facilities team has constructed for them.
So, how can an organization avoid having these problems rolling downhill across the enterprise?
Bringing it all together. Why smashed silos aren’t enough.
The natural response that companies may have to this problem is to attempt to increase the collaboration, communication and coordination across the enterprise–specifically between developers, IT departments and facilities. And that’s certainly an important step. But I would take it even further.
Some of the most successful technology companies that I’ve seen bring these three disparate groups together early on. When the first meeting occurs about an application or system that the company is looking to develop and implement, all three of these groups should have a seat at the table. With all parties in one room, the players can ask the hard questions:
- What are we looking for this application to do? What capabilities should it have?
- What is the design of the system, storage and network architecture needed to support the application?
- Can this work on our existing computer infrastructure? Will changes need to be made or will new data centers need to be provisioned?
- What will the layout of that data center look like? How dense will it be? What are our power and cooling requirements?
- Where does the application need to be geographically to provide the lowest latency?
Just having the occasional meeting between these groups–doing the bare minimum to break down the silos between the organizations–isn’t enough. Every decision needs to have input from each department. From the start of a project, each department needs to weigh in on what they’ll need to be successful. There are too many ways in which these departments are inherently linked and impact each other to not get them together early and often.
Steve Conner serves as vice president, solutions engineering at Vantage Data Centers. He is responsible for leading the company’s sales team on technical requirements in pursuit of new business.
Conner has more than 25 years of experience in building and leading highly motivated sales and engineering teams. Prior to Vantage, Conner led the sales and engineering teams at Cloudistics, taking the start-up’s revenue from $0 to over $5 million in its first year of selling. He held multiple senior level positions at Nutanix where he built a multi-million-dollar business unit focused on managed service providers.
Conner holds a Bachelor of Science degree in computer science from University of Richmond, a Master of Science degree in computer science from George Mason University, and an MBA from Florida Institute of Technology. As part of his focus on technology and enterprise architecture, Conner has earned multiple certifications including CCNP/DP, CISSP and ISSAP.