IN THIS ARTICLE
Outlines how to configure IP failover with Qumulo in AWS
REQUIREMENTS
- AWS account
- Account limits large enough for 20TB of EBS ST1 and 2TB of EBS GP2
- IAM permission to launch EC2 instances
- Four or more nodes running Qumulo in AWS version 2.9.4 or higher
- Command line (qq CLI) tools installed via API & Tools in the Web UI
IAM PERMISSIONS
The table below lists the required IAM permissions for configuring IP failover with a Qumulo cloud cluster in AWS.
- ec2:AssignPrivateIpAddresses
- ec2:DescribeInstances
- ec2:UnassignPrivateIpAddresses
To learn more about IAM roles, see the IAM roles for Amazon EC2 and Assign an Existing IAM Role articles in the AWS documentation.
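The permissions above can be granted with an IAM policy along the lines of the following sketch. The broad "Resource": "*" scope is an assumption for illustration; ec2:DescribeInstances does not support resource-level restrictions, but the assign/unassign actions can be narrowed to your instances' network-interface ARNs where your security policy requires it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AssignPrivateIpAddresses",
        "ec2:DescribeInstances",
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Resource": "*"
    }
  ]
}
```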
DETAILS
With the release of version 2.9.4, Qumulo Core supports IP failover in AWS using the same floating IP functionality that currently exists with Qumulo's on-prem clusters.
Manage IP Addresses in AWS Console
- Log into the AWS Console
- Navigate to the EC2 Dashboard
- Right click on the instance
- Select Networking and click Manage IP Addresses
- Click Assign new IP once for each additional IP that you want on that node
- Leave the default Auto-assign option in place
- Click Yes, Update
- Close the window
- Repeat the above steps for each node in the cluster
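If you prefer to script this step rather than click through the console, the same secondary IPs can be requested with the AWS CLI's assign-private-ip-addresses command. The sketch below assumes a four-node cluster; the ENI IDs are placeholders, and the loop only prints the commands for review (drop the echo to execute them):

```shell
# Hypothetical ENI IDs, one per cluster node; replace with your own.
ENIS="eni-0aaa1111 eni-0bbb2222 eni-0ccc3333 eni-0ddd4444"

# Spread the floating IPs evenly across nodes: here, two
# auto-assigned secondary private IPs per node.
for eni in $ENIS; do
  echo aws ec2 assign-private-ip-addresses \
    --network-interface-id "$eni" \
    --secondary-private-ip-address-count 2
done
```

Requesting a count (rather than specific addresses) matches the Auto-assign behavior described above: AWS picks free addresses from the subnet for you.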
To avoid hitting AWS per-instance secondary IP address limits, provision the floating IPs evenly across all instances in the cluster. See the IP failover with Qumulo Core article for the recommended number of floating IPs for your cluster size. Only IPv4 addresses are currently supported.
Assign Secondary IPs as Floating IPs in Qumulo Core
- Open a terminal window
- Copy the provisioned secondary IPs from the Description tab on the instance listing
- Paste the IPs into the following command in your terminal window, separating each address with a space
qq network_mod_network --network-id 1 --floating-ip-ranges <address-or-range> [<address-or-range> ...]
- Repeat the process until all secondary IPs for all nodes are included in the same command above
- For example:
qq network_mod_network --network-id 1 --floating-ip-ranges 10.81.14.40 10.81.4.254
- Check that the floating IPs have been assigned to your nodes correctly by running the following command
qq network_poll
- Each node's DHCP network should have at least one floating IP assigned to it as outlined in the example below:
{
"interface_details": { ... },
"interface_status": { ... },
"network_statuses": [
{
"address": "10.81.5.64",
"assigned_by": "DHCP",
"dns_search_domains": [ ... ],
"dns_servers": [ "10.81.13.17" ],
"floating_addresses": [ "10.81.14.40" ],
"mtu": 9001,
"name": "Default",
"netmask": "255.255.240.0",
"vlan_id": 0
}
],
"node_id": 1,
"node_name": "...",
"update_status": "CHANGES_APPLIED"
}
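To pull just the floating IP assignments out of that output, you can filter the JSON that qq network_poll returns. The sketch below runs the filter against a saved, abridged sample shaped like the example above; in practice you would save or pipe the live command output instead. The file path and the exact output shape are assumptions for illustration.

```shell
# Abridged sample of one node's status from `qq network_poll`.
cat > /tmp/poll_sample.json <<'EOF'
{
  "network_statuses": [
    { "address": "10.81.5.64", "floating_addresses": ["10.81.14.40"] }
  ],
  "node_id": 1
}
EOF

# Print "node <id>: <floating ips>" for the node.
python3 -c '
import json, sys
node = json.load(open(sys.argv[1]))
for status in node["network_statuses"]:
    print("node %d: %s" % (node["node_id"], " ".join(status["floating_addresses"])))
' /tmp/poll_sample.json
```

A node whose DHCP network shows an empty floating_addresses list has no floating IP assigned and is worth rechecking against the command you ran above.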
You may notice that your original secondary IP assignments have changed. Qumulo Core is now using these IPs and will manage their assignment to each node for you. Note that if you unassign one of the floating IPs (a secondary private IP address) from an instance, Qumulo Core will attempt to reassign it to the instance. Make sure to remove the floating IP from the Qumulo Core networking configuration before unassigning the secondary private IP address from a Qumulo Core instance.
You should now be able to mount the cluster via the floating IPs. If the node currently assigned a floating IP fails, the floating IP will be transferred to another node in the cluster and client operations should be able to continue uninterrupted.
NOTE: Confirm that your clients are using the new floating IPs. If a client already has a mount point based on a previous static IP, you will need to unmount and remount the cluster, ensuring that the new mount picks up the floating IP set. The DNS record must be updated to list the floating IPs and not the static ones for the cluster before remounting.
RESOLUTION
You should now be able to successfully configure IP failover with Qumulo in AWS.
ADDITIONAL RESOURCES
Set Explicit DNS Hostnames and IP Mappings