This section explains how to configure the S3 API and get started working with it. The S3 API lets clients and applications interact with the Qumulo file system natively by using the Amazon S3 API.

Prerequisites

To use the S3 API, you must install the AWS CLI and the qq CLI.
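
For example, you can confirm that both tools are on your path before you continue. (This is only a quick check that assumes the standard --version and --help flags; the installation steps themselves depend on your platform.)

$ aws --version
$ qq --help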

Step 1: Configure HTTPS

The Qumulo Core S3 API accepts only HTTPS requests by default. To enable HTTPS support for your Qumulo cluster, you must install a valid SSL certificate on it.

Every Qumulo cluster is preconfigured with a self-signed SSL certificate. However, because certain applications don’t accept the default certificate, we recommend installing your own.

For information about configuring HTTPS for your cluster, see Installing the Qumulo Core Web UI SSL Certificate on Qumulo Care.
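
As a rough sketch, installing a certificate and private key on the cluster typically looks similar to the following qq command. The subcommand options and file names shown here are illustrative; follow the Qumulo Care article above for the authoritative procedure.

$ qq ssl_modify_certificate \
  --certificate my-cluster-cert.pem \
  --private-key my-cluster-key.pem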

For information about enabling and disabling plaintext HTTP connections for the S3 API, see the Qumulo Core documentation.

Step 2: Enable the S3 API for Your Qumulo Cluster

To let your Qumulo cluster accept S3 traffic, you must enable the S3 API by using the qq s3_modify_settings --enable command.

After you run the command, all nodes in your cluster begin to accept S3 API traffic on TCP port 9000.
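
For example (the s3_get_settings check shown afterwards is optional and assumes that subcommand is available in your qq CLI version):

$ qq s3_modify_settings --enable

# Optionally confirm that the S3 API is now enabled
# (assumes the s3_get_settings subcommand is available).
$ qq s3_get_settings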

Step 3: Create an Access Key Pair

To create and manage S3 buckets you must have a valid S3 access key pair associated with a specific user in your Qumulo cluster or in a connected external identity provider (such as Active Directory). For more information, see Creating and Managing S3 Access Keys.

Use the qq s3_create_access_key command and specify the username. For example:

$ qq s3_create_access_key my-username
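
The command returns the new key pair as JSON. Record the access key ID and secret access key; you need them in the next step. The following output is only an illustrative sketch (field names depend on your Qumulo Core version) that reuses the example credentials shown later in this section:

{
  "access_key_id": "000000000001fEXAMPLE",
  "owner": "my-username",
  "secret_access_key": "TEIT4liMZ8A32iI7JXmqIiLWp5co/jmkjEXAMPLE"
}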

Step 4: Configure the AWS CLI for Use with Qumulo Core

To create and manage S3 buckets, you must configure the AWS CLI to work with your Qumulo cluster.

  1. Configure the AWS CLI to use path-style bucket addressing by using the aws configure command, specifying your profile. For example:

    $ aws configure \
      --profile my-qumulo-profile set s3.addressing_style path
    
  2. Use the aws configure command to add the access key pair that you created earlier to your profile:

    1. Specify your profile and access key ID. For example:

      $ aws configure \
        --profile my-qumulo-profile set aws_access_key_id \
          000000000001fEXAMPLE
      
    2. Specify your profile and secret access key. For example:

      $ aws configure \
        --profile my-qumulo-profile set aws_secret_access_key \
          TEIT4liMZ8A32iI7JXmqIiLWp5co/jmkjEXAMPLE
      
  3. Because the AWS CLI doesn’t let you store your cluster’s endpoint URI persistently, create a shell alias that supplies the URI, in the following format:

    $ alias aws="aws --endpoint-url https://<qumulo-cluster>:9000 --profile my-qumulo-profile"
    
  4. (Optional) If your machine isn’t configured to trust the SSL certificate installed on your Qumulo cluster, run the aws configure command to set the path to the trusted SSL certificate bundle that you created and installed earlier. For example:

    $ aws configure \
      --profile my-qumulo-profile set ca_bundle MySpecialCert.crt
    
  5. To test your configuration, send an S3 API request to your Qumulo cluster by using the aws s3api list-buckets command.

    A successful response includes an empty JSON array named Buckets.

    {
     "Buckets": []
    }
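
If a request fails, it can help to check which settings the AWS CLI resolves for your profile. The aws configure list command (a standard AWS CLI command, not specific to Qumulo) shows the profile's access key and other settings:

$ aws configure list --profile my-qumulo-profile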
    

Step 5: Create an S3 Bucket

Run the aws s3api create-bucket command and specify the bucket name. For example:

$ aws s3api create-bucket \
  --bucket my-bucket

The S3 API creates the new directory /my-bucket/. All of the bucket’s objects are located under this directory. For more information, see Creating and Working with S3 Buckets in Qumulo Core.
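
A successful response typically echoes the bucket's location. For example (the exact form of the Location value can vary):

{
  "Location": "/my-bucket"
}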

Step 6: Test Writing and Reading S3 Objects

  1. To test writing data to your Qumulo cluster, perform a PutObject S3 API action by using the aws s3api put-object command. For example:

    $ aws s3api put-object \
      --bucket my-bucket \
      --key archives/my-remote-file.zip \
      --body my-local-file.zip
    

    The S3 API uploads the contents of my-local-file.zip into an object with the key archives/my-remote-file.zip.

  2. To test reading data from an S3 bucket, perform a GetObject S3 API action by using the aws s3api get-object command. For example:

    $ aws s3api get-object \
      --bucket my-bucket \
      --key archives/my-remote-file.zip \
      local-file.zip
    

    The S3 API downloads the contents of the my-remote-file.zip object into local-file.zip and returns the object metadata. For example:

    {
      "AcceptRanges": "bytes",
      "LastModified": "Wed, 14 Dec 2022 20:42:46 GMT",
      "ETag": "\"-gUAAAAAAAAAAwAAAAAAAAA\"",
      "ContentType": "binary/octet-stream",
      "Metadata": {}
    }
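
To confirm that the uploaded object appears in the bucket, you can also list the bucket's contents. The aws s3api list-objects-v2 command isn't part of this walkthrough, but it is a standard S3 API call; if your Qumulo Core version supports it, the response includes the object created above. For example:

$ aws s3api list-objects-v2 \
  --bucket my-bucket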