IN THIS ARTICLE
Outlines the reported NFS numbers and the testing process used to collect the data for the Qumulo Performance Benchmarks included in each release of Qumulo Core.
REQUIREMENTS
- Cluster running Qumulo Core
With every release, Qumulo publishes performance benchmark results in Tableau from testing we have run on a newly formed cluster using Linux clients.
For the published throughput and IOPS numbers, we use popular, off-the-shelf tests and run them in a few different configurations. The reported numbers are benchmarks of the performance you can expect if you run the same tests in the same manner.
It's important to note that storage performance is affected by many factors, and results may vary between environments due to the following:
- Clients: configuration, OS, total number, connection to the storage system, etc.
- Block size
- File size
- File count
- File topology
- Application semantics
- Effect of multiple workloads occurring simultaneously
Keep in mind that you may see lower or higher throughput and/or IOPS with your actual workloads compared to the test results we report since the environments and workloads may differ. The best way (and arguably the only way) to determine how a cluster will perform with your workload is to run that exact workload on the cluster.
NOTE: The published numbers are NFS only and do not include SMB results.
We currently publish the following throughput test results:
- Single-stream Read - Cached
  - Data read from SSDs
- Single-stream Read - Uncached
  - Data read from HDDs
- Multi-stream Read - Cached
  - Data read from SSDs
- Multi-stream Read - Uncached
  - Data read from HDDs
- Single-stream Write - Sustained
- Multi-stream Write - Burst
  - Data written to available write buffer space on the SSDs
- Multi-stream Write - Sustained
  - Data is written to the cluster at a rate that exhausts the write buffer space on the SSDs, requiring the cluster to throttle performance while data is moved to the HDDs
For the throughput tests, we use IOzone, since it was the preferred testing method of several of our early customers. The one exception is the Multi-stream Write - Sustained test, where we use an internal benchmark because IOzone did not let us clearly differentiate the cluster's throughput before and after the write buffer was exhausted.
NOTE: The Multi-stream Read - Uncached test with IOzone uses an internal tool to force the data to be 'expired' to HDDs.
For the multi-stream test variants, 4 clients are connected to a 4-node cluster (1 client per node) and run 10 (read) or 40 (write) streams per client. Periodically we run with 8 clients against 8 nodes, or 12 clients against 12 nodes, to verify that cluster performance scales linearly so we can accurately report findings on a per-node basis.
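To make the per-node reporting concrete, here is a minimal sketch of the arithmetic, using hypothetical throughput numbers rather than actual published results:

```shell
# Hypothetical aggregate throughput from a multi-stream run; real results
# vary by cluster model and Qumulo Core release.
AGGREGATE_MBPS=4000   # combined throughput measured across all 4 clients
NODES=4               # nodes in the test cluster (1 client per node)

# If performance scales linearly, per-node throughput is simply the
# aggregate divided by the node count.
PER_NODE_MBPS=$((AGGREGATE_MBPS / NODES))
echo "${PER_NODE_MBPS} MB/s per node"   # → 1000 MB/s per node
```

Linear scaling is what makes this division meaningful; that is why the 8- and 12-node runs above matter.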
Here are the exact commands we run for each throughput test:
Single-stream Read - Cached / Uncached
iozone -r 1m -I -i 1 -t 1 -s 1g -w -R -F
Multi-stream Read - Cached
iozone -r 1m -c -i 1 -t 40 -s 1g -w -R -+m -+h
Multi-stream Read - Uncached
*Same test as above after forcing 'expiration' of data from SSD to HDD
Single-stream Write - Sustained
iozone -r 1m -c -i 0 -t 1 -s 1g -w -R -F
Multi-stream Write - Burst
iozone -r 1m -I -i 0 -t 160 -s 1g -w -R -+m -+h
Multi-stream Write - Sustained
*Uses an internal benchmark rather than IOzone, as noted above
Note that the multi-stream tests require a client configuration file (passed via -+m) to coordinate multiple clients.
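For illustration, an IOzone -+m client configuration file lists one client per line: the client hostname, the working directory on the NFS mount, and the path to the iozone binary. The hostnames and paths below are placeholders, not Qumulo's actual test setup:

```shell
# Create a sample IOzone -+m client configuration file. Each line:
#   <client hostname> <working directory on the mount> <path to iozone>
# All names and paths here are hypothetical examples.
cat > clients.cfg <<'EOF'
client1 /mnt/qumulo/iozone-test /usr/bin/iozone
client2 /mnt/qumulo/iozone-test /usr/bin/iozone
client3 /mnt/qumulo/iozone-test /usr/bin/iozone
client4 /mnt/qumulo/iozone-test /usr/bin/iozone
EOF
wc -l < clients.cfg   # → 4
```

A multi-stream run would then reference the file as, e.g., `iozone ... -+m clients.cfg`.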
Here is a list of what each argument means:
-r = record size
-c = include close() in the timing
-I = use direct I/O (O_DIRECT), bypassing the client's buffer cache
-i 0/1 = test type: 0 = write/rewrite, 1 = read/reread
-t = run in throughput mode and use this many threads
-s = size of file to test
-w = do not unlink temporary files when finished; leave them present
-R = generate Excel report
-F = specify each of the temporary file names to be used
-+m = filename of the client configuration file used to coordinate multiple clients
The most interesting thing to note is that we are writing 1GB files in 1MB records, so changing either of those numbers will almost certainly change the performance results.
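As a quick sanity check on those parameters, the -s and -r values determine how many records IOzone issues per file; this sketch just does the division:

```shell
# -s 1g file size and -r 1m record size, both expressed in KB.
FILE_SIZE_KB=$((1024 * 1024))
RECORD_SIZE_KB=1024
# Number of 1MB records needed to cover a 1GB file.
echo $((FILE_SIZE_KB / RECORD_SIZE_KB))   # → 1024
```

A smaller record size means more requests per file, which shifts the workload toward IOPS and away from raw streaming throughput.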
For the IOPS results, we use the industry-standard SPECsfs2008 test, which kicks off a series of attempts to reach an increasing level of I/O operations per second with a specific mix of NFS operations. Qumulo Performance Benchmarks report the highest level of IOPS that the test attempts and achieves during testing.
You should now have an understanding of how Qumulo tests and reports the performance benchmarks included in each release of Qumulo Core.