It feels like summer! With the heat of summer, we usually begin worrying about keeping our data center cool and fully operational as the demand for power strains the electrical grid in downtown Richmond. We can now worry a lot less: I am pleased to announce that the upgrade of all major data center infrastructure components at the University Computer Center (UCC) will be fully complete in mid-June 2016. The data center has enjoyed fully redundant electrical power and cooling since late February 2016. Installation of the ceiling grid, ceiling tiles, and modern lighting will be completed by next week. That leaves only a complete data center cleaning, including inside the racks and under the floor, and a new paint job.
As a refresher, VCU’s primary data center is located in the state-owned Pocahontas Building, 900 E. Main St., in the downtown Capitol Square complex. The data center occupies 5,200 square feet of raised-floor machine space, housing more than 600 pieces of computing, networking, and data storage equipment and more than 1 petabyte of storage. The Pocahontas data center also serves as a major junction for VCU’s private fiber network, carrying server and file-serving traffic as well. Previously, the data center relied on the city water system for cooling and had no redundant electrical power source, and the building’s Uninterruptible Power Supply (UPS) was past end-of-life and did not support the machine room’s Computer Room Air Conditioning (CRAC) units.
On February 20, 2015, work began on a multi-million-dollar upgrade of all major cooling and electrical systems. The project included removing and replacing all existing electrical wiring and the existing UPS system; installing automatic and static transfer switches; adding a 750 kW diesel-powered motor generator with fuel tanks providing 36 hours of full-load run time; and replacing the CRAC units in an N+1 configuration using closed-loop, glycol-based cooling. Prior to these improvements, electrical power to the racks was limited to 120 volts and approximately 2,500 watts per rack. Power is now delivered at 240 volts and 5,000 watts per rack from fully redundant dual sources connected via overhead busways, allowing much denser equipment placement in the racks. All under-floor wiring has been removed and rerouted overhead, further improving under-floor cold-air delivery.
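For readers who like to see the numbers, here is a quick back-of-the-envelope check of the figures above. This is only an illustrative sketch using the nominal values cited in this article (2,500 W and 5,000 W per rack, 750 kW generator, 36 hours of runtime); real loads vary, and real generator sizing must also cover cooling, lighting, and conversion losses.

```python
# Nominal figures from this article (illustrative only; actual loads vary).
OLD_RACK_WATTS = 2500   # pre-upgrade: ~2,500 W per rack at 120 V
NEW_RACK_WATTS = 5000   # post-upgrade: 5,000 W per rack at 240 V
GENERATOR_KW = 750      # diesel motor generator capacity
RUNTIME_HOURS = 36      # full-load run time on the installed fuel tanks

# Per-rack power capacity doubled with the upgrade.
print(NEW_RACK_WATTS / OLD_RACK_WATTS)        # 2.0

# Racks the generator could carry at the full rated rack load
# (ignores cooling and other loads, so a deliberate overestimate).
print(GENERATOR_KW * 1000 // NEW_RACK_WATTS)  # 150

# Energy the generator can deliver over a full-runtime outage, in kWh.
print(GENERATOR_KW * RUNTIME_HOURS)           # 27000
```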
By coupling the UPS and motor generator seamlessly with building power through Automatic Transfer Switches (ATS) and Static Transfer Switches (STS), electrical power for both computing and cooling equipment will always be available. By using redundant glycol-based cooling instead of water-based cooling, computer room air conditioning will likewise always be available.
Several innovative techniques were used to improve all facets of data center cooling: ‘free cooling’ in all new CRAC units; the design and fabrication of a creative technique to lift populated racks with live equipment, remove the half floor tile under each rack, replace it with a full tile (which improves cold-air flow and forces all cold air to enter at the front of the racks in the ‘cold aisles’), and then ever so carefully lower the rack back down; and the installation of improved louvered, adjustable floor tiles in the cold aisles to allow precise delivery of cold air.
Having our critical services running in this kind of environment mitigates substantial risk and ensures the best possible uptime. The considerable expense and work to provide a fully redundant, reliable, and resilient data center for all University administrative and departmental computing virtually eliminates the possibility of losing the computing, data storage, and network infrastructure services provided by Technology Services. These infrastructure improvements bring electrical power and machine room cooling to fully redundant Tier 4 status, with the end result being a strong Tier 2 overall data center rating. Most commercial data centers rate a strong Tier 3. The improvements have also allowed the data center to house several racks of Research Informatics compute/storage clusters, thereby expanding our customer base and reducing risk for VCU.
Your UCC team will be pleased to schedule tours of the improved facility for Technology Services colleagues and all customers, including potential customers, once all work has been completed.
Thanks to all who attended our Spring Divisional meeting! It’s always a pleasure to see the entire group together and to welcome new people to Technology Services. As always, keep your feedback coming.