
Use MinIO to talk S3 with Qumulo


Outlines how to use MinIO to talk S3 with a Qumulo cluster


  • Cluster running Qumulo Core
  • Qumulo nodes: all Qumulo products are compatible with MinIO
  • Network configuration: at least 10GbE Ethernet
  • MinIO server hardware: 4 x86 servers
  • MinIO data access to Qumulo: each MinIO server connects to a Qumulo node with an NFS mount using default options
  • MinIO server: 4 instances running in Docker containers on each server
  • MinIO client software (mc): 1 instance running natively on each MinIO server

S3 Compatibility with MinIO

The following S3 APIs are not supported by MinIO:

Bucket APIs

  • BucketACL (Use bucket policies instead)
  • BucketCORS (CORS enabled by default on all buckets for all HTTP verbs)
  • BucketLifecycle (Not required for MinIO erasure coded backend)
  • BucketReplication (Use mc mirror instead)
  • BucketVersions, BucketVersioning (Use s3git)
  • BucketWebsite (Use caddy or nginx)
  • BucketAnalytics, BucketMetrics, BucketLogging (Use bucket notification APIs)
  • BucketRequestPayment
  • BucketTagging

Object APIs

  • ObjectACL (Use bucket policies instead)
  • ObjectTorrent
  • ObjectVersions

NOTE: Object names that contain the characters `^*|"` are unsupported on Windows and other file systems that do not allow these characters in filenames.
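If your workflow may hand objects off to Windows clients, it can be worth screening object names before upload. The helper below is a minimal sketch of that idea (the function name and the character set, taken from the note above, are ours; this is not part of MinIO itself):

```python
# Characters the note above flags as unsupported on Windows-style file systems.
UNSUPPORTED_CHARS = set('^*|"')

def is_portable_object_name(name):
    """Return True if the object name avoids the unsupported characters."""
    return not (set(name) & UNSUPPORTED_CHARS)
```

A client could call this before every `put` and rename or reject offending keys up front rather than discovering the problem later on a Windows mount.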


At Qumulo, making sure customers can easily access and manage their data is hugely important as we work to fulfill our mission of becoming the company the world trusts to store its data forever. For a long time, users have been able to interact with their data via SMB, NFS, and RESTful APIs. For most customers, these protocols meet their needs. However, a growing subset of our customers want to talk to their Qumulo cluster through an S3-compatible API in order to combine the economics and performance of file storage with modern tools written for object storage.

Object storage is an increasingly popular option for customers looking to store their data in the cloud. Even for customers who aren’t looking to leverage object storage, many tools they’re starting to use assume an object backend, and communicate via Amazon’s S3 API, which has become the de facto standard in object storage APIs.

For customers who want to interact with Qumulo via the S3 SDK or API, we recommend using MinIO. MinIO is a high-performance object storage server that acts as an S3-compatible front end for a variety of cloud and local storage backends. This means you can have a MinIO server sit in front of your Qumulo storage and handle S3 requests.

NOTE: If you do not already have a Qumulo cluster configured, please reference the Virtual Cluster: Create a virtual cluster running Qumulo Core article.

Deployment Model

For optimal performance, we recommend MinIO’s distributed gateway model. Using a load balancer or round-robin DNS, multiple MinIO instances can be spun up and connected to the same NAS. The load balancer distributes application requests across the pool of MinIO servers, which talk to Qumulo via NFS. From your applications’ perspective, they’re talking to S3; Qumulo simply sees several NFS clients attached to it, so there’s no need to worry about locking.
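Round-robin DNS or a load balancer effectively hands each new connection the next MinIO endpoint in the pool. As a rough client-side illustration of that behavior, here is a Python sketch; the endpoint addresses are placeholders for your own four MinIO servers:

```python
import itertools

# Placeholder addresses for the four MinIO gateway instances described above.
ENDPOINTS = [
    'http://minio1:9000',
    'http://minio2:9000',
    'http://minio3:9000',
    'http://minio4:9000',
]

def endpoint_cycle(endpoints):
    """Yield endpoints in round-robin order, one per application request."""
    return itertools.cycle(endpoints)
```

In practice a load balancer or DNS entry does this distribution for you; the point is simply that successive requests land on successive MinIO servers, each of which reads and writes the same Qumulo-backed NFS mount.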

Download MinIO

Let’s get started by downloading MinIO. MinIO is available for all major operating systems, and can even be run as a Docker or Kubernetes container.


Docker:

docker pull minio/minio

Linux (after downloading the minio binary from the MinIO downloads page):

chmod +x minio

macOS (Homebrew):

brew install minio/stable/minio

Windows: download and install from the MinIO Installer page.

Run MinIO in Gateway Mode

Spin up a MinIO instance in gateway mode with the command appropriate to your deployment.

Docker (inside each container on your clients):

docker run -d -p 9000:9000 -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" --name minio -v /mnt/minio-test:/nas minio/minio gateway nas /nas

Linux:

./minio gateway nas ./Path-To-Mounted-Qumulo

macOS:

minio gateway nas ./Path-To-Mounted-Qumulo

Windows:

minio.exe gateway nas X:\Path-To-Mounted-Qumulo

Depending on how you deployed MinIO, once the server starts it prints its endpoint URL, access key, and secret key to the terminal.
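One way to reason about what the gateway will serve: in NAS gateway mode, each top-level directory under the mounted Qumulo path is exposed to S3 clients as a bucket. The sketch below (the mount path and helper name are illustrative) lists the directories that would surface as buckets:

```python
from pathlib import Path

def gateway_buckets(mount_point):
    """List directory names the NAS gateway would expose as S3 buckets.

    Each top-level directory under the mount (e.g. /mnt/minio-test)
    appears to S3 clients as a bucket; plain files at the top level do not.
    """
    return sorted(p.name for p in Path(mount_point).iterdir() if p.is_dir())
```

Creating a bucket through the S3 API likewise creates a matching top-level directory on the Qumulo file system, so the same data stays reachable over NFS and SMB.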


To test that your MinIO server is working, we’re going to install Boto3, the AWS SDK for Python, and write a simple script.

pip3 install boto3

Now we’re going to create a test script in Python. The code below uses Boto3 to read the file ‘minio-read-test.txt’ stored in the ‘minio-demo’ folder and print the file contents to the console.

import boto3
from botocore.client import Config

# Configure S3 Connection
s3 = boto3.resource('s3',
    aws_access_key_id = 'YOUR-ACCESS-KEY-HERE',
    aws_secret_access_key = 'YOUR-SECRET-KEY-HERE',
    endpoint_url = 'YOUR-SERVER-URL-HERE')

# Read File
object = s3.Object('minio-demo', 'minio-read-test.txt')
body = object.get()['Body']
print(body.read().decode('utf-8'))

A full code sample which shows how you can perform additional S3 operations can be found below.

# Import AWS Python SDK
import io

import boto3
from botocore.client import Config

bucket_name = 'minio-test-bucket' # Name of the mounted Qumulo folder
object_name = 'minio-read-test.txt' # Name of the file you want to read inside your Qumulo folder

# Configure S3 Connection
s3 = boto3.resource('s3',
    aws_access_key_id = 'YOUR-ACCESS-KEY-HERE',
    aws_secret_access_key = 'YOUR-SECRET-KEY-HERE',
    endpoint_url = 'YOUR-SERVER-URL-HERE')

# List all buckets
for bucket in s3.buckets.all():
    print(bucket.name)
input('Press Enter to continue...\n')

# Read File
object = s3.Object(bucket_name, object_name)
body = object.get()['Body']
print(body.read().decode('utf-8'))
print('File Read')
input('Press Enter to continue...\n')

# Stream File - Useful for Larger Files
object = s3.Object(bucket_name, object_name)
body = object.get()['Body']
with io.FileIO('/tmp/sample.txt', 'w') as tmp_file:
    while tmp_file.write(body.read(amt=512)):
        pass
print('File Streamed @ /tmp/sample.txt')
input('Press Enter to continue...\n')

# Write File
s3.Object(bucket_name, 'aws-write-test.txt').put(Body=open('./aws-write-test.txt', 'rb'))
print('File Written')
input('Press Enter to continue...\n')

# Delete File
s3.Object(bucket_name, 'aws-write-test.txt').delete()
print('File Deleted')
input('Press Enter to continue...\n')

# Stream Write File
with open('./aws-write-test.txt', 'rb') as data:
    s3.Object(bucket_name, 'aws-stream-write-test.txt').upload_fileobj(data)
print('File Stream Written')
input('Press Enter to continue...\n')

# Create Bucket
s3.create_bucket(Bucket='new-bucket')
print('Bucket Created')
input('Press Enter to continue...\n')

# Delete Bucket
bucket_to_delete = s3.Bucket('new-bucket')
for key in bucket_to_delete.objects.all():
    key.delete()
bucket_to_delete.delete()
print('Bucket Deleted')
input('Press Enter to continue...\n')

MinIO is a stable, hugely popular open source project with over 105 million downloads and an extremely active community, which makes us excited to see customers deploying it into their environments. We’re also excited because we take our customers’ feedback seriously, and deploying MinIO as a front end for Qumulo addresses a top request for an S3 compatibility layer.

To see how MinIO performs with Qumulo in our own test environment, check out the Performance Results of MinIO with Qumulo article.


You should now be able to successfully use MinIO to talk S3 with a Qumulo cluster.


Performance Results of MinIO with Qumulo

Create a Virtual Cluster running Qumulo Core

Configure DNS Round Robin on a Windows Server for Qumulo Core

QQ CLI: Cluster Configuration

