IN THIS ARTICLE
Provides an overview of Qumulo's FIFO-based SSD caching in version 2.8.5 and above of Qumulo Core
- Cluster running Qumulo Core 2.8.5 and above
NOTE: After upgrading to 2.8.5, customers may see an increase in HDD activity for up to 6 hours due to the SSD caching improvements.
In Qumulo Core, the SSDs are used to store data layout information, provide a fast persistence layer for data modifications coming into the cluster, and cache file system data and metadata for fast data retrieval. Prior to 2.8.5, a data block would only be stored on either the SSD or the HDD. Qumulo Core aimed to keep at least 20% of the SSD available for these incoming writes, but if the write load was too fast, those incoming writes would be blocked until file data blocks were moved to the HDDs. The period of time during which SSDs can accept incoming writes without being blocked by writes to the HDD is called the “burst write window”. The policy for choosing which data to evict from the SSD and move to the HDD was random; recent access to the data had no influence on whether it remained on the SSD or was moved to the HDD. The SSD Caching with Qumulo Core 2.8.4 and below article explains in more detail how data flowed through the Qumulo Core SSDs and HDDs prior to the 2.8.5 release.
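To make the pre-2.8.5 behavior concrete, here is a minimal sketch in Python of the flow described above: a block lives on either the SSD or the HDD (never both), the cluster aims to keep about 20% of the SSD free for incoming writes, and eviction picks blocks at random. All names, sizes, and data structures here are illustrative assumptions, not Qumulo internals.

```python
import random

SSD_CAPACITY_BLOCKS = 10   # illustrative SSD size, in blocks
FREE_TARGET = 0.20         # aim to keep at least 20% of the SSD free

ssd = set()   # blocks currently on the SSD
hdd = set()   # blocks that have been moved to the HDD

def write_block(block_id):
    """Accept a write; if the SSD is too full, first move random blocks to the HDD."""
    while len(ssd) >= SSD_CAPACITY_BLOCKS * (1 - FREE_TARGET):
        # Random eviction: write order and recent access have no influence,
        # so recently written data is just as likely to be pushed to the HDD.
        victim = random.choice(tuple(ssd))
        ssd.remove(victim)
        hdd.add(victim)   # the incoming write is blocked until this HDD write finishes
    ssd.add(block_id)

for i in range(20):
    write_block(i)

print("on SSD:", sorted(ssd))
print("on HDD:", sorted(hdd))
```

In this model, once the write load outpaces the moves to the HDD, every new write has to wait on an eviction, which is exactly the end of the burst write window described above.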
In Qumulo Core 2.8.5, the FIFO-based SSD caching feature allows data to be written to the HDD while still residing on the SSD. An approximate FIFO eviction policy evicts data from the SSDs based on the order in which it was written. This results in a larger burst write window, since a larger portion of the SSD can be written at burst speed without waiting for any writes to the HDD. Additionally, recently written data has an increased likelihood of still being on the SSD and available for faster retrieval, since data is evicted in approximately the order it was written.
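The sketch below contrasts this with the previous model: blocks are persisted to the HDD while a copy also remains on the SSD cache, and SSD copies are evicted in the order they were written, so the most recently written blocks stay cached. Again, the names, sizes, and structures are illustrative assumptions only.

```python
from collections import deque

SSD_CAPACITY_BLOCKS = 10   # illustrative SSD cache size, in blocks

ssd_fifo = deque()   # SSD-cached block ids, oldest write at the left
hdd = set()          # every block is also persisted on the HDD

def write_block(block_id):
    """Persist the block and keep a cached copy, evicting oldest writes first."""
    hdd.add(block_id)            # the block reaches the HDD regardless
    ssd_fifo.append(block_id)    # ...and a copy is also cached on the SSD
    while len(ssd_fifo) > SSD_CAPACITY_BLOCKS:
        ssd_fifo.popleft()       # FIFO eviction: only the cached copy is dropped

def read_block(block_id):
    """Reads of recently written blocks are served from the SSD copy."""
    return "ssd" if block_id in ssd_fifo else "hdd"

for i in range(20):
    write_block(i)

print("served from SSD:", [b for b in range(20) if read_block(b) == "ssd"])
print("served from HDD:", [b for b in range(20) if read_block(b) == "hdd"])
```

Because eviction only drops a cached copy rather than performing the HDD write itself, incoming writes in this model are never blocked waiting on an eviction, and the blocks written most recently are the ones still served from the SSD.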
You should now have an overall understanding of Qumulo's FIFO-based SSD caching available in version 2.8.5 and above of Qumulo Core.