A Look At Server Racks Of The Future, From Major Tech Firms

Server racks have looked pretty much the same for years.
There have certainly been changes in the way data centers use racks. Racks with integrated switching and cabling have become more common, modular units make it simple to swap out components and support multiple Gigabit connections in any rack position, and some racks can now hold as many as 96 servers. Top-of-rack switching is also on its way to becoming dominant in large installations, placing a switch in each rack for easier server aggregation and shorter cable runs.
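How do 96 servers fit in one rack? Through multi-node chassis that pack several server nodes into each rack unit. A quick sanity check of the arithmetic (the rack height, chassis size, and node count below are illustrative assumptions, not any vendor's published design):

```python
# How 96 servers can fit in a single rack: multi-node chassis put
# several server nodes into each rack unit. All figures here are
# illustrative assumptions, not a specific vendor's specification.

RACK_UNITS = 48            # a tall 48U rack (assumed)
CHASSIS_U = 2              # each chassis occupies 2U (assumed)
NODES_PER_CHASSIS = 4      # four server nodes per chassis (assumed)

chassis_count = RACK_UNITS // CHASSIS_U          # 24 chassis per rack
servers = chassis_count * NODES_PER_CHASSIS      # total server nodes

print(servers)  # 24 chassis * 4 nodes = 96 servers
```

Other combinations (half-width 1U nodes in a taller rack, for example) reach the same density by the same logic.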
Even so, server racks still basically look the same: tall cabinets with lots of slots for servers, easy access for cabling work and often, integrated fans.
The landscape is shifting drastically, though, thanks to the Open Compute Project.
Open-source software has been a boon to developers and designers for nearly twenty years. It has produced some of the most important software available, not just for end users who rely on free programs instead of expensive proprietary software, but for those on the data center side who depend heavily on open-source technologies like Apache and Linux. Open source also means more control, more stability and, at least in many people's view, more security.
What Is The Open Compute Project?
The same kinds of benefits are now arriving on the hardware side of the equation, thanks to the Open Compute Project. Founded in 2011, the project now counts most industry giants, including Intel, Apple and Google, among its participants. Members openly share information about many of the components they use in their data centers and the way those components are configured. Most recently, Microsoft and LinkedIn have contributed eye-opening specs and details about the way they run their back ends.
A Peek at the Future

Microsoft's latest additions to the project are the previously secret next-generation details and specs for "Project Olympus," used in its new Azure cloud data centers. Some of those plans are only half-finished, but Microsoft believes that sharing the information now will help the hardware community, let its members contribute suggested modifications, and help vendors plan for the future.
The Project Olympus racks feature a versatile 1U/2U chassis, a high-availability power supply for each rack, a universal rack PDU (power distribution unit), a standards-compliant rack management card, and capacity for high-density storage expansion.
Meanwhile, LinkedIn has shared details of its enormous data centers and their architecture; like Microsoft, LinkedIn is thinking hyperscale.
The company uses standalone racks that house servers at high density, drawing up to 14 kilowatts per rack. That much power generates a lot of heat, so LinkedIn is one of the first to pump water through all of its racks. The company pre-cools the water outside, brings it into the data center and runs it through heat exchangers mounted in the racks' rear doors. The exchangers absorb the servers' heat at the rack itself, so the hot exhaust air is neutralized before it can mix back into the room and reach the machines' intakes, keeping the electronics safely cooled. The entire system is continuously monitored for leaks, which could obviously create problems.
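To get a sense of the scale involved, a back-of-the-envelope calculation (the 10 °C water temperature rise and the assumption that the full 14 kW reaches the rear-door exchanger are illustrative, not LinkedIn's published figures) shows roughly how much water such a system has to move:

```python
# Back-of-the-envelope: water flow needed to absorb one rack's heat load.
# Physics: Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
# Assumptions (illustrative, not LinkedIn's figures): the full 14 kW
# reaches the rear-door heat exchanger, and the water warms by 10 C.

HEAT_LOAD_W = 14_000      # 14 kW per rack, as stated in the article
CP_WATER = 4186.0         # specific heat of water, J/(kg*K)
DELTA_T = 10.0            # assumed water temperature rise, K

mass_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T)   # kg/s
litres_per_minute = mass_flow * 60               # ~1 kg of water per litre

print(f"{mass_flow:.2f} kg/s, about {litres_per_minute:.0f} L/min per rack")
```

Roughly 20 litres per minute per rack under these assumptions; a smaller temperature rise means proportionally more flow, which is why the loop's plumbing and leak monitoring matter so much.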
LinkedIn also uses top-of-rack "spine" switches running a mix of OEM and proprietary code. They work together in a multi-level architecture that creates a "fabric," reducing the latency that often results when data travels through a large number of chipsets.
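The latency win of a fabric comes from keeping every server-to-server path short and uniform. A toy model of the switch-hop count in a generic two-tier leaf-spine fabric (generic fabric math, not LinkedIn's actual topology) makes the point:

```python
# Toy model of a two-tier leaf-spine fabric (generic math, not
# LinkedIn's actual topology). Any server reaches any other through
# at most leaf -> spine -> leaf, i.e. three switches, no matter how
# large the fabric grows.

def switch_hops(src_leaf: int, dst_leaf: int) -> int:
    """Switches traversed between two servers, given the index of each
    server's leaf (top-of-rack) switch. Same leaf: one switch.
    Different leaves: up to a spine and back down, three switches."""
    return 1 if src_leaf == dst_leaf else 3

print(switch_hops(0, 0))    # same rack: 1 switch
print(switch_hops(0, 41))   # across the fabric: still only 3 switches
```

The hop count stays flat as racks are added; in a traditional multi-tier tree, traffic between distant racks would climb through aggregation and core layers, crossing more chipsets and accumulating more latency.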
Coming soon from LinkedIn: an open-source standard that combines servers, storage and networking in a 19-inch rack, allowing much faster rack integration while halving the number of common components required.
Much of this is architectural server design that the average data center can only dream of. But it's coming – although perhaps on a smaller scale – sooner rather than later.