PCs today, especially laptops but even some desktops, are increasingly using SSDs for data storage, and the drives are also making their way into servers. The relatively young technology (compared to HDDs) is often praised for its speed and reliability, mostly thanks to its use of NAND flash memory instead of physically moving parts and magnetic platters. A new study, however, shows that while that much is true, SSDs fail in a different set of ways that might be even more problematic. The paper chose as its subject a company that eats through data storage devices like there's no tomorrow: Google.
The paper, presented at the 2016 USENIX Conference on File and Storage Technologies (FAST), focused on the reliability of flash memory, specifically SSDs, in a production setting: Google's own servers. The authors studied field data from a wide variety of Google's SSDs, covering 10 different drive models and several flash types over six years of use. The findings might surprise proponents of SSDs as well as those just starting to believe in their promise.
But first, a bit of good news. The study found that multi-level cell (MLC) SSDs are just as reliable as single-level cell (SLC) ones. Since SLC drives are considerably more expensive, that means you don't have to overspend on an SSD to get essentially the same data reliability. Part of what makes SLC drives costly is their greater level of over-provisioning, a technique that permanently sets aside a portion of the flash as spare capacity in order to improve reliability and endurance when writing to the chips. As the paper points out later, however, that extra margin may actually be excessive.
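To make the over-provisioning trade-off concrete, here is a minimal sketch of how spare area eats into the capacity you can actually use. The percentages and drive size below are illustrative assumptions, not figures from the Google study:

```python
# Over-provisioning reserves a slice of the raw flash as spare area
# for wear leveling and bad-block replacement; the user never sees it.

def usable_capacity_gb(raw_gb: float, op_fraction: float) -> float:
    """Capacity left for the user after reserving spare flash."""
    return raw_gb * (1 - op_fraction)

# A consumer-class drive might reserve ~7% of 512 GB of raw flash...
print(round(usable_capacity_gb(512, 0.07), 2))
# ...while a heavily over-provisioned enterprise drive might reserve 28%.
print(round(usable_capacity_gb(512, 0.28), 2))
```

The more flash a vendor sets aside, the more raw chips they must ship per usable gigabyte, which is one reason heavily over-provisioned drives cost more.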
Another bit of good news: SSDs usually don't even reach their maximum number of writes before they fail. Even the 3,000 program/erase (P/E) cycles typical of an MLC drive proved more than enough. So those almost ominous write-endurance specs aren't what you should be worried about.
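A back-of-the-envelope calculation shows why a 3,000-cycle limit is so hard to exhaust. The drive size, daily write volume, and write amplification factor below are made-up but plausible numbers, used only to illustrate the arithmetic:

```python
# Rough estimate of how long it takes to wear out a drive's P/E budget,
# assuming ideal wear leveling. All workload numbers are illustrative.

def years_to_wear_out(capacity_gb: float, pe_cycles: int,
                      gb_written_per_day: float,
                      write_amplification: float = 2.0) -> float:
    """Years until every cell has been erased pe_cycles times."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / gb_written_per_day / 365

# A 480 GB MLC drive, 3,000 P/E cycles, 50 GB of host writes per day:
print(round(years_to_wear_out(480, 3000, 50), 1))  # roughly 39.5 years
```

Even with pessimistic write amplification, the budget outlasts any realistic service life, which matches the paper's observation that drives tend to die of other causes first.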
Instead, you should be worried about failures in terms of data integrity and errors. It is the age of an SSD, not how heavily it has been used, that ultimately determines its reliability. That makes over-provisioning a less effective safeguard than assumed, and one that raises costs unnecessarily. SSDs were observed to suffer higher uncorrectable bit error rates (UBER) as they age. In a nutshell: while SSDs are unlikely to ever reach their maximum write lifespans, they are more likely to lose data over that same period. As such, they require even more diligent backups than other data storage solutions.
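For readers unfamiliar with the metric, UBER is simply uncorrectable bit errors divided by total bits read. The sketch below shows the arithmetic; the error count and read volume are invented for illustration and do not come from the study:

```python
# UBER = uncorrectable bit errors / total bits read.
# An "uncorrectable" error is one the drive's ECC could not repair,
# i.e. data actually lost or returned corrupted.

def uber(uncorrectable_errors: int, bits_read: int) -> float:
    return uncorrectable_errors / bits_read

# One uncorrectable error over 10 TB of reads (10 * 10^12 bytes * 8 bits):
bits = 10 * 10**12 * 8
print(f"{uber(1, bits):.2e}")  # on the order of 1e-14
```

A drive whose UBER climbs with age will return corrupted data more and more often regardless of how lightly it is written, which is why the paper treats age, not wear, as the key risk factor.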