
Performance Results of MinIO with Qumulo

IN THIS ARTICLE

This article outlines the expected performance results when using MinIO to talk S3 with a Qumulo cluster.

REQUIREMENTS

  • Cluster running Qumulo Core

For additional details, reference the Use MinIO to talk S3 with Qumulo article.

TEST ENVIRONMENT

  • Qumulo nodes: 4 x QC24
  • Network configuration: 10 Gigabit Ethernet (10GbE)
  • MinIO server hardware: 4 x x86 servers running Ubuntu 16.04
  • MinIO data access to Qumulo: Each MinIO server connects to a Qumulo node with an NFS mount using default options
  • MinIO server software: 4 instances running in Docker containers on each server, started with the command below (see the sketch after this list for one way to run all four instances on separate host ports):
docker run -d -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" --name minio -v /mnt/minio-test:/nas minio/minio gateway nas /nas
  • MinIO client software (mc): 1 instance running natively on each MinIO server
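
For illustration, here is a minimal sketch of how one server in this environment could be prepared: an NFS mount of a Qumulo node, four gateway containers on host ports 9001-9004 (matching the benchmark commands later in this article), and mc aliases for each instance. The Qumulo hostname, container names, and alias names are assumptions, not the exact commands used.

# Mount a Qumulo node over NFS with default options ("qumulo-node1" is an assumed hostname)
sudo mount -t nfs qumulo-node1:/ /mnt/minio-test

# Start 4 MinIO NAS-gateway containers on this server, mapped to host ports 9001-9004
for i in 1 2 3 4; do
  docker run -d -p 900$i:9000 \
    -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" \
    --name minio$i -v /mnt/minio-test:/nas \
    minio/minio gateway nas /nas
done

# Register the local gateways with the MinIO client
# (older mc releases use "mc config host add"; newer ones use "mc alias set")
for i in 1 2 3 4; do
  ./mc config host add minio$i http://localhost:900$i minio minio123
done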

PERFORMANCE RESULTS

Performance varies with the number of MinIO gateway instances deployed. Generally speaking, the more parallelizable the workload and the more gateways in front of Qumulo, the better the performance. To help Qumulo customers gauge whether MinIO might help them, we’ve published both our performance test results and our test methodology.

Single-Stream Write: 84MB/s

Streamed zeros to Qumulo via the MinIO client's mc pipe command:

dd if=/dev/zero bs=1M count=10000 | ./mc pipe minio1/test/10Gzeros
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 124.871 s, 84.0 MB/s

Using Qumulo Analytics, we see a mixture of read and write IOPS during this time, with the reads coming from the .minio.sys/multipart directory:

[Image SSW.png: Qumulo Analytics IOPS activity during the single-stream write test]

This result is due to the way the S3 protocol handles large files: the file is uploaded in chunks and then reassembled into the final file from those parts. In NAS gateway mode, MinIO implements this behavior by writing each chunk to its own temporary file, then reading the temp files back and appending them in order to form the final file. The net effect is a write amplification factor of 2x plus an extra read of all of the data that was written.
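
You can observe this behavior directly by watching the multipart staging area on the NFS mount during an upload. This is a hedged sketch: the mount path comes from the test environment above, and the internal directory layout can differ between MinIO versions.

# List temporary part files once per second while an upload is in progress
watch -n 1 'find /mnt/minio-test/.minio.sys/multipart -type f | head -n 20'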

Single-Stream Read: 643MB/s

Streamed back a written file via the MinIO client's mc cat command, after first dropping the Linux filesystem cache and the Qumulo cache:

./mc cat minio1/test/10Gzeros | dd of=/dev/null bs=1M
524+274771 records in
524+274771 records out
10485760000 bytes (10 GB) copied, 16.3165 s, 643 MB/s
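
For reference, the Linux page cache on a client can be dropped before a read test like this with the standard procfs interface (clearing the Qumulo-side cache is cluster-specific and not shown here):

# Flush dirty pages, then drop the page cache, dentries, and inodes on the client
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches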

Multi-Stream Write: ~600MB/s-1GB/s

This test was run with 32 10GB write streams running in parallel using the same method described above (2 per MinIO instance):

[Image MSW.png: aggregate throughput during the multi-stream write test]
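
As an illustration, this kind of parallel write load can be generated from each client with a loop along the following lines. This is a sketch, not the exact script used; the minio1-minio4 aliases are assumed to point at the four local gateway instances.

# 2 x 10GB write streams against each of the 4 local MinIO instances (8 streams per client)
for alias in minio1 minio2 minio3 minio4; do
  for j in 1 2; do
    dd if=/dev/zero bs=1M count=10000 | ./mc pipe $alias/test/10Gzeros-$j &
  done
done
wait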

Multi-Stream Read: 1.1-1.7GB/s

This test was run with 32 10GB read streams running in parallel using the same method described above (2 per MinIO instance):

[Image MSR.png: aggregate throughput during the multi-stream read test]
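
The read variant is analogous (same assumptions as the write sketch above):

# 2 x 10GB read streams against each of the 4 local MinIO instances, data discarded
for alias in minio1 minio2 minio3 minio4; do
  for j in 1 2; do
    ./mc cat $alias/test/10Gzeros-$j | dd of=/dev/null bs=1M &
  done
done
wait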

S3 BENCHMARKS

Because MinIO in gateway mode does not support object versioning, which the original benchmark assumes, we used MinIO's modified version of Wasabi Tech's s3-benchmark. It produced the following results in our test environment.

Single Client

./s3-benchmark -a minio -s minio123 -u http://localhost:9001 -t 100
Wasabi benchmark program v2.0
Parameters: url=http://localhost:9001, bucket=wasabi-benchmark-bucket, duration=60, threads=100, loops=1, size=1M
Loop 1: PUT time 60.2 secs, objects = 7562, speed = 125.5MB/sec, 125.5 operations/sec.
Loop 1: GET time 60.2 secs, objects = 23535, speed = 390.8MB/sec, 390.8 operations/sec.
Loop 1: DELETE time 17.7 secs, 427.9 deletes/sec.
Benchmark completed.

Multi-Client

In this variant of the test, we ran one instance of s3-benchmark per MinIO instance, for a grand total of 16 concurrent instances. Each s3-benchmark run was assigned its own bucket. In aggregate, write speeds reached roughly 700MB/s, while read speeds peaked at 1.5GB/s and then tailed off:

[Image MCBM.png: aggregate throughput during the multi-client s3-benchmark run]

By increasing the file size to 16MiB, we were able to achieve about 1.5-1.8GB/s aggregate write throughput and 2.5GB/s aggregate read throughput at peak. Higher write throughput is possible by specifying more threads, but MinIO began returning 503 errors, most likely because four MinIO containers were running on each client machine.

To gather these results, the following bash script was run on each of the client machines:

for i in $(seq 1 4); do
  s3-benchmark/s3-benchmark -a minio -s minio123 -u http://localhost:900$i -b $HOSTNAME-$i -t 20 -z 16M &
done

RESOLUTION

You should now have an overall understanding of the expected performance results when using MinIO to talk S3 with a Qumulo cluster.

ADDITIONAL RESOURCES

Use MinIO to talk S3 with Qumulo

Create a Virtual Cluster running Qumulo Core

QQ CLI: Cluster Configuration

 

COMMENTS

  • In your multi-client test, do you have more information on the 503 error returned by minio - was it due to reaching peak load? Running minio with env. variable MINIO_HTTP_TRACE=/tmp/log set can dump the http trace to a file.
