InfoWorld has an interesting article about an emerging trend in datacenters: using DC power instead of AC. The idea is that as much as 50% of the energy consumed in datacenters is wasted in AC-to-DC conversion. The typical path of electricity is AC from the power company to the racks, converted to DC and then back to AC by a UPS, and finally converted back to DC in the server power supply for the internal electronics. Each conversion loses a little energy, but multiply that by the number of servers and the loss starts to add up.
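The compounding effect of those chained conversions is easy to see with a little arithmetic. This is just an illustrative sketch; the per-stage efficiency figures below are assumptions, not measured values from the article:

```python
# Cumulative loss from chained power conversions.
# Per-stage efficiencies are illustrative assumptions only.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Hypothetical AC path: UPS rectifier (AC->DC), UPS inverter (DC->AC),
# then the server's own power supply (AC->DC).
ac_path = chain_efficiency([0.90, 0.90, 0.85])  # ~0.69

# Hypothetical DC path: one facility-level rectification, then DC straight to the servers.
dc_path = chain_efficiency([0.92])  # 0.92

print(f"AC path delivers {ac_path:.0%} of input power")
print(f"DC path delivers {dc_path:.0%} of input power")
```

Even with generous per-stage numbers, three conversions in series leave only about 70% of the input power reaching the electronics; the rest becomes heat.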
The solution is to forget AC altogether and run DC directly to the servers. HP, IBM, and Sun are all starting to offer their servers with optional DC power supplies and are trying to get a standard for wiring, voltage, and connectors ratified. This move will help companies with limited space expand their datacenters: the energy lost in the conversion from AC to DC and back is released as heat, so the fewer the conversions, the less cooling is needed and the more servers you can pack in.

By Gary LaPointe January 30, 2009 - 11:57 pm
Makes sense to me. It’d probably give longer runtime for the UPSes (or you could buy cheaper ones).