Facebook serves an enormous amount of data every day to the many millions of users of its social network. When Facebook introduced its Graph Search feature, the amount of data the social network served increased significantly, and so did the speed at which that data had to be delivered.
Facebook engineer Ashoat Tevosyan recently outlined some of the challenges Facebook faced in building a behind-the-scenes infrastructure capable of handling that massive volume of data. Tevosyan noted that storing over 700 TB of data in RAM carried a large amount of overhead: the task involved maintaining a massive index spread across many racks of machines, according to the engineer.
Tevosyan also noted that the performance cost of having those machines coordinate with one another led the infrastructure team to investigate new solutions. The team ultimately chose large numbers of SSDs, which can serve data quickly while costing far less than storing the same data in RAM. The result was a new all-flash server called Dragonstone.
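To get a feel for why flash was attractive, a back-of-envelope comparison helps. The short Python sketch below weighs the raw media cost of holding the dataset in RAM versus on SSDs. Only the 700 TB figure comes from the article; the per-gigabyte prices are illustrative assumptions, not Facebook's numbers.

```python
# Back-of-envelope comparison: holding 700 TB in RAM vs. on SSDs.
# The per-gigabyte prices are hypothetical placeholders for illustration;
# only the 700 TB total comes from the article.

DATA_TB = 700
GB_PER_TB = 1000

# Assumed hardware prices in USD per GB (illustrative only).
PRICE_RAM_PER_GB = 8.0
PRICE_SSD_PER_GB = 1.0

def media_cost_usd(price_per_gb: float) -> float:
    """Raw media cost of holding the full dataset at a given $/GB."""
    return DATA_TB * GB_PER_TB * price_per_gb

ram_cost = media_cost_usd(PRICE_RAM_PER_GB)
ssd_cost = media_cost_usd(PRICE_SSD_PER_GB)

print(f"RAM:  ${ram_cost:,.0f}")
print(f"SSD:  ${ssd_cost:,.0f}")
print(f"SSDs are {ram_cost / ssd_cost:.0f}x cheaper on raw media cost")
```

Even this crude sketch ignores real-world factors such as power draw, rack density, and the coordination overhead Tevosyan describes, all of which further tilt the balance away from an all-RAM design at this scale.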
Facebook is always looking for the right mix of performance and cost when it comes to serving data to the huge number of users of its website. Facebook's VP of engineering, Jay Parikh, used an interesting analogy to describe what the company hopes to achieve with its hardware infrastructure. Parikh said current hard disks are like minivans and current flash drives are like Ferraris, and that Facebook is looking for the Toyota Prius solution that delivers the right balance of speed, efficiency, and cost.