
P-23T, P-92T, P-184T, and P-368T Networking

This article explains how you can network a Qumulo cluster that uses P-23T, P-92T, P-184T, or P-368T nodes.

REQUIREMENTS

  • P-23T, P-92T, P-184T, or P-368T hardware
  • A network switch that meets the following criteria:
    • 100 Gbps Ethernet
    • Fully non-blocking architecture
    • IPv6 capability
    • Jumbo frame support (minimum 9000 MTU) for the back end network
  • Compatible network cables
  • Enough ports to connect all nodes to the same switch fabric
  • One static IP per node per defined VLAN

Important: All switch ports connected to the back end NICs (see the diagram below) must be configured with an MTU of at least 9000. Before attempting cluster creation, you must configure these ports to allow jumbo frames in your switch settings; otherwise, cluster formation will fail.
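For illustration only, enabling jumbo frames on the back end switch ports might look like the following sketch. This assumes a Cisco NX-OS switch with the back end NICs cabled to ports Ethernet1/1 through 1/4 (hypothetical); the exact syntax varies by switch vendor.

```
interface Ethernet1/1-4
  mtu 9216
```

A switch-side MTU of 9216 comfortably accommodates the 9000 MTU required on the cluster's back end interfaces.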

RECOMMENDATIONS

  • One set of redundant switches for the front end network with minimum 9000 MTU configured
  • One set of redundant switches for the back end network with minimum 9000 MTU configured
  • One physical connection per node to each redundant switch
  • One LACP port-channel per network (front end and back end) on each node with the following configuration:
    • Active mode
    • Slow transmit rate
    • Trunk port with a native VLAN
  • DNS servers
  • Time server (NTP)
  • Firewall protocol/ports allowed for Proactive Monitoring
  • N-1 (N=number of nodes) floating IPs per node per client-facing VLAN
    Note: The appropriate number of floating IPs depends on your workflow and on the clients connecting to the cluster. Use at least 2, and no more than 10, floating IPs per node per client-facing VLAN.
  • You can achieve advertised performance only if you connect your nodes at their maximum Ethernet speed. To avoid network bottlenecks, Qumulo validates system performance with this configuration by using clients that are connected at the same link speed and to the same switch as the nodes.
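The N-1 floating-IP guideline above can be sketched numerically. A minimal shell example, assuming a hypothetical four-node cluster with a single client-facing VLAN:

```shell
# Floating-IP sizing per the N-1 guideline (assumed 4-node cluster).
NODES=4
PER_NODE=$((NODES - 1))       # N-1 floating IPs per node
TOTAL=$((NODES * PER_NODE))   # floating IPs to reserve for the VLAN
echo "${PER_NODE} floating IPs per node, ${TOTAL} total"
```

With four nodes this yields 3 floating IPs per node (12 total), which falls within the recommended range of 2 to 10 per node per client-facing VLAN.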

DETAILS

The all-flash platform uses a networking configuration where back end and front end traffic is handled by different NICs. The front end and back end NICs in a cluster may all be connected to the same switch, or the back end NICs may be connected to a different switch from the front end NICs.

[Diagram: QP_NIC_BOND_ETH.png — front end and back end NIC port connections]

For reliability, we recommend a fully cabled configuration in which all four ports on every node are connected: both ports on the front end NIC to the front end switch, and both ports on the back end NIC to the back end switch.

Connecting only a single port on the back end NIC is not recommended: if that single connection fails, the node becomes unavailable.

Tip: For IPMI configuration details, such as port location and configuration, see IPMI Quick Reference Guide.

Connect to Redundant Switches 

The details below outline how to connect a four-node cluster with P-23T, P-92T, P-184T, or P-368T nodes to dual switches for redundancy. This is the recommended configuration for P-23T, P-92T, P-184T, and P-368T hardware. If either switch goes down, the cluster will still be accessible from the remaining switch.

Front End

  • The two front end NIC ports (2x40Gb or 2x100Gb) on the nodes are connected to separate switches
  • Uplinks to the client network should equal the bandwidth from the cluster to the switch
  • The two ports form an LACP port channel via multi-chassis link aggregation group
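As a back-of-the-envelope check on the uplink guideline above, the cluster-to-switch bandwidth can be computed from the node count and port speed. A sketch assuming a hypothetical four-node cluster with 2x100Gb front end ports:

```shell
# Aggregate front end bandwidth the client-network uplinks should match.
NODES=4
PORTS_PER_NODE=2     # front end NIC ports per node
LINK_GBPS=100        # per-port link speed
CLUSTER_GBPS=$((NODES * PORTS_PER_NODE * LINK_GBPS))
echo "Provision at least ${CLUSTER_GBPS} Gbps of uplink bandwidth"
```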

Back End

  • The two back end NIC ports (2x40Gb or 2x100Gb) on the nodes are connected to separate switches with an appropriate inter-switch link or virtual port channel
  • For all connection speeds, the default behavior is LACP with a 9000 MTU. This Qumulo configuration, in conjunction with a multi-chassis link aggregation configuration and a 9216 MTU on the switch side, provides both link redundancy and increased bandwidth capacity.
  • As of Qumulo Core 3.0.5, you can optionally configure these ports in Active-Backup mode through the qq command-line interface. See Set the Back End MTU and Bonding Mode via QQ CLI below for instructions.
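The switch side of one node's back end bond might resemble the following sketch, assuming a Cisco NX-OS switch pair running vPC; the port, channel, and vPC numbers are hypothetical, and other vendors' MLAG syntax differs.

```
interface port-channel10
  switchport mode trunk
  mtu 9216
  vpc 10

interface Ethernet1/1
  switchport mode trunk
  mtu 9216
  channel-group 10 mode active
```

Here `channel-group 10 mode active` enables LACP active mode; the LACP slow transmit rate is the NX-OS default.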

Connect to a Single Switch

The details below outline how to connect a four-node cluster with P-23T, P-92T, P-184T, or P-368T nodes to a single switch. Note that if this switch goes down, the cluster will not be accessible.

Front End

  • Each node contains two front end ports (2x40Gb or 2x100Gb) that are connected to the switch
  • Uplinks to the client network should equal the bandwidth from the cluster to the switch
  • The two ports form an LACP port channel 

Back End 

  • Each node contains two back end ports (2x40Gb or 2x100Gb) that are connected to the switch
  • For all connection speeds, the default behavior is LACP with a 9000 MTU. This Qumulo configuration, in conjunction with a link aggregation configuration and a 9216 MTU on the switch side, provides both link redundancy and increased bandwidth capacity.
  • As of Qumulo Core 3.0.5, you can optionally configure these ports in Active-Backup mode through the qq command-line interface. See Set the Back End MTU and Bonding Mode via QQ CLI below for instructions.

Set the Back End MTU and Bonding Mode via QQ CLI (3.0.5 and Higher)

You can configure the bonding mode and MTU for the back end network by using the qq command-line interface in Qumulo Core 3.0.5 and above.

Use the command below to show the current configuration:

qq network_get_interface --interface-id 2 

To set the MTU, use the following command:

qq network_mod_interface --interface-id 2 --mtu 9000

Run the following command to set the bonding mode to Active/Backup:

qq network_mod_interface --interface-id 2 --bonding-mode ACTIVE_BACKUP

Important: If you set the bonding mode to ACTIVE_BACKUP, ensure that all LACP and other link-aggregation configurations are removed from the switch ports connected to the affected interfaces. Failure to do so may result in unpredictable behavior.

To list both interfaces on the back end network, run the following command:

qq network_list_interfaces
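Putting the commands above together, a full back end reconfiguration might run as the following sequence. This is only a sketch: the commands must run against a live cluster, and interface ID 2 is carried over from the examples above.

```
# Inspect, modify, then re-inspect the back end interface.
qq network_get_interface --interface-id 2
qq network_mod_interface --interface-id 2 --mtu 9000
qq network_mod_interface --interface-id 2 --bonding-mode ACTIVE_BACKUP
qq network_get_interface --interface-id 2
```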

ADDITIONAL RESOURCES

IPMI Quick Reference Guide

QQ CLI: Networks and IP Addresses

 
