Does SSD Power Savings Pay for Itself?
I wanted to determine how much power savings solid-state drives (SSDs) would offer over ordinary SATA hard drives. At first I considered running a hardware experiment of my own, but changed my mind when a colleague showed me that Tom's Hardware already publishes benchmarks for SSD drives and for SATA drives that measure power consumption under different workloads. Each benchmark has data for between 50 and 85 drives in a similar class, so I took averages for each class and produced a comparison chart:
The workloads shown are:
- Idle: power consumption after 10 minutes of system idle
- Read: power consumption while streaming an HD movie file
- Database: power consumption while running a database benchmark (mixed read/write)
- Max Write: power consumption at the drive's maximum write throughput
I already knew that SSD drives use less power than rotational drives when idle or reading, but what surprised me was how much power they save under mixed and heavy write workloads. In a typical production workload, I think a server spends about 50% of its time idle and about 50% under mixed use. If that assumption holds, the power consumption comparison looks something like this:
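For readers who prefer numbers to charts, here is a minimal Python sketch of that 50/50 blend. The per-workload watt figures are placeholders I picked so the blend lands near the averages discussed below; they are not the actual Tom's Hardware benchmark numbers.

```python
# Sketch of the 50% idle / 50% mixed-use power blend described above.
# The watt figures are illustrative placeholders, NOT the actual
# Tom's Hardware benchmark averages.

def blended_watts(idle_w, active_w, idle_fraction=0.5):
    """Weighted average power draw given idle and mixed-use consumption."""
    return idle_fraction * idle_w + (1 - idle_fraction) * active_w

sata = blended_watts(idle_w=5.00, active_w=7.86)  # placeholder SATA averages
ssd = blended_watts(idle_w=0.58, active_w=1.80)   # placeholder SSD averages

print(f"SATA: {sata:.2f} W  SSD: {ssd:.2f} W")
print(f"Difference: {sata - ssd:.2f} W  Ratio: {sata / ssd:.1f}x")
```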
This tells us that SSD drives are about 5.4 times more energy efficient than ordinary SATA drives, a difference of 5.24 Watts per drive. The more idle your servers are, the more dramatic this difference becomes. Most enterprise workloads are surprisingly idle, so this should actually be a conservative picture of the real gap. That convinced me that today's SSD drives do in fact use a lot less power than SATA drives. They also cost a lot more. There have been plenty of studies on the relative performance of SSD versus SATA for various workloads, and there are clear performance advantages to using them. Rather than revisit that, I decided to pursue the answer to a question your CFO would like to know:
Can the additional cost of SSD drives be offset by energy savings alone?
To judge the cost savings, imagine you have a room full of servers. Let's say you have 500 servers running in a single data center, and you want to replace every SATA drive with an SSD to conserve power, and perhaps pack in some more servers before you have to build a new data center. Let's see how this would work. If you assume that each server consumes 250 Watts of power and has 4 hard drives installed on average, then you could save 5.24 x 4 = 20.96 Watts per server, or 10,480 Watts for your 500-server data center.
Electricity costs vary depending on where you buy it and how much you buy. In Los Angeles, prices are about $0.20 per Kilowatt Hour. If your data center pays a lot less, the savings will be less dramatic for you. Your servers run 24 hours a day, 365 days a year, so the total annual electricity purchase for our example data center is:
250 Watts x 24 hours x 365 days x 500 Servers = 1,095,000,000 Watt Hours = 1,095,000 Kilowatt Hours
1,095,000 Kilowatt Hours @ $0.20 each = $219,000 annually
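The same arithmetic as a short Python sketch, using the assumptions from this example (250 W per server, 500 servers, $0.20/kWh):

```python
# Annual electricity purchase for the example 500-server data center.
SERVER_WATTS = 250           # average draw per server (assumption above)
SERVERS = 500
HOURS_PER_YEAR = 24 * 365    # 8,760 hours
PRICE_PER_KWH = 0.20         # roughly Los Angeles pricing

annual_kwh = SERVER_WATTS * SERVERS * HOURS_PER_YEAR / 1000
annual_cost = annual_kwh * PRICE_PER_KWH
print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year")
# -> 1,095,000 kWh/year -> $219,000/year
```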
It may actually cost more than this to both power and cool the equipment, but the savings ratio is the same regardless: the costs are directly proportional to the power consumption.
Your SSD hard drives in this scenario would save you about:
20.96 Watts x 24 hours x 365 days x 500 Servers = 91,804,800 Watt Hours ≈ 91,805 Kilowatt Hours
91,805 Kilowatt Hours @ $0.20 = $18,361 annually
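And the savings side of the ledger, again as a sketch built on the assumptions above (5.24 W saved per drive, 4 drives per server):

```python
# Annual savings from swapping every SATA drive for an SSD.
WATTS_SAVED_PER_DRIVE = 5.24   # blended difference per drive, from above
DRIVES_PER_SERVER = 4
SERVERS = 500
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.20

fleet_watts_saved = WATTS_SAVED_PER_DRIVE * DRIVES_PER_SERVER * SERVERS  # 10,480 W
kwh_saved = fleet_watts_saved * HOURS_PER_YEAR / 1000                    # ~91,805 kWh
annual_savings = kwh_saved * PRICE_PER_KWH                               # ~$18,361
print(f"{kwh_saved:,.0f} kWh saved -> ${annual_savings:,.0f}/year")
```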
If your hard drives last you five years, that's about $91,805 in electricity cost savings over the life of the equipment. If you picked SSD drives that cost about $250 each, then you would have spent:
500 Servers x 4 SSD Drives x $250 = $500,000
Guess what… the savings over 5 years from using SSD drives exclusively instead of SATA do not come close to covering the cost of the drives in the first place. You would need to source SSD drives at roughly $46 each before power savings alone made a compelling case for switching. And when you buy power in large volumes it can cost less, which widens the gap even further. I suppose this is why we don't see SATA drives piling up in dumpsters across the country. You can perform your own analysis for your particular situation.
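If you want to run that break-even analysis with your own numbers, here's a small sketch; the $250 drive price and five-year lifetime are the assumptions from this example:

```python
# Break-even check: does five years of electricity savings cover the drives?
ANNUAL_SAVINGS = 18_361        # from the savings calculation above
LIFETIME_YEARS = 5
SERVERS = 500
DRIVES_PER_SERVER = 4
SSD_PRICE = 250                # assumed price per SSD drive

drives = SERVERS * DRIVES_PER_SERVER                 # 2,000 drives
lifetime_savings = ANNUAL_SAVINGS * LIFETIME_YEARS   # ~$91,805
drive_capex = drives * SSD_PRICE                     # $500,000
breakeven_price = lifetime_savings / drives          # ~$46 per drive
print(f"5-year savings: ${lifetime_savings:,}  Drive cost: ${drive_capex:,}")
print(f"Break-even SSD price: ${breakeven_price:.0f} per drive")
```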
Data centers are expensive to build, and the most expensive part is the power and cooling infrastructure.
If your data center is "full" because your power and cooling capacity is maxed out, swapping your SATA hard drives for SSD drives can reclaim about 8% of your power capacity, allowing you to add more servers. Less power consumption also means less heat produced, so it's like getting a bigger data center. In the example above, there would be enough power capacity freed up to install about 40 more servers at the original per-server power draw. That additional capacity may be worth a small fortune to you.
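Here's the same capacity math as a sketch, using the per-server figures from this example:

```python
# Power capacity reclaimed by the SSD swap, expressed as extra servers.
SERVER_WATTS = 250
SERVERS = 500
WATTS_SAVED_PER_SERVER = 20.96   # 5.24 W x 4 drives

fraction_reclaimed = WATTS_SAVED_PER_SERVER / SERVER_WATTS   # ~8.4% of the budget
extra_servers = SERVERS * fraction_reclaimed                 # ~42, "about 40"
print(f"{fraction_reclaimed:.1%} of power capacity freed, "
      f"room for about {extra_servers:.0f} more servers")
```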
That makes at least three good reasons to use SSD drives in servers: performance, power savings, and reclaimed data center capacity. I suggest giving them some serious consideration.
Reasons why NOT to switch to SSD drives
- If you need a lot of storage. The cost per GB of SSD storage is considerably higher than the cost per GB of SATA storage, even after accounting for the performance and power savings.
- If you are constantly writing to the drives over and over. SSD drives do have a limited number of write cycles and, in general, may be less durable than regular hard drives. Eventually they do wear out, just for different reasons than drives with moving parts. However, most of the drives on the market today are rated for MTBF durability comparable to what traditional hard drives offer.
- You run your data center on solar power (yeah, sure you do). Seriously, if your cost for power is dirt cheap, and you need a lot of storage, then regular hard drives may be a better value for you.

