More and more enclosures in more and more data centers are filling with blade servers, and one of the primary reasons is economics. Quite simply, blades deliver the same performance as their rack-optimized counterparts at a lower TCO. The math behind blades’ lower TCO falls into three categories: acquisition costs, operational costs (primarily power and cooling) and personnel costs.
The first number CFOs typically look at is the cost of acquisition, which is a capital expenditure that goes on the books (unless the units are leased). Here, blades have a clear advantage. Because multiple blades fit into a single backplane, they can share components such as adapters, cables and switches, whereas every independent server needs its own set. For enterprise data centers this can mean the elimination of thousands of components, but even smaller companies with half a dozen servers can achieve a cost benefit.
IDC recently analyzed the cost savings of organizations that moved from a traditional infrastructure to a blade environment, and the numbers are striking. Server hardware costs fell from $2,919 to $1,618 per server, a reduction of roughly 45 percent. Network hardware costs fell even further, by 60 percent, from $2,812 to $1,124 per server over a 12-month period. Obviously, these figures will differ from organization to organization, but the scale of the savings indicates a very attractive pricing ballpark.
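For readers who want to apply the same math to their own environment, the percent-reduction calculation behind these figures is straightforward. The sketch below uses the IDC network hardware numbers quoted above; the function name is ours, not IDC's.

```python
def pct_reduction(before, after):
    """Percent reduction when a per-server cost drops from `before` to `after`."""
    return (before - after) / before * 100

# IDC's network hardware figures: $2,812 -> $1,124 per server
savings = pct_reduction(2812, 1124)
print(f"Network hardware savings: {savings:.0f}%")  # about 60%
```

Plugging in your own pre- and post-migration per-server costs gives a quick first-order estimate of the acquisition-cost side of the TCO comparison.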
There is another capital-spending issue that is very important for some enterprises: the increased density of blades.
With ever-increasing requirements for storage and compute capacity, many data centers are running out of space. Building a new data center is an enormous capital expenditure that no organization can take lightly. Fortunately, blade servers can reduce the demand for raised-floor “white space” by anywhere from 30 to 50 percent, eliminating – or at least delaying – the need for an expensive construction project.