IN THIS ARTICLE
This article outlines the best practices for configuring networking for a Qumulo cluster.
Requirements
- Cluster running Qumulo Core
- Network switch that meets the following criteria:
- 10 Gbps, 25 Gbps, 40 Gbps, or 100 Gbps Ethernet, depending on platform
- Fully non-blocking architecture
- IPv6 capable
Note: Neighbor Discovery Protocol (NDP) traffic over IPv6 flows over the untagged VLAN (VLAN 1 or your switch's default VLAN). If you disable this functionality on your switch, your nodes can't detect or communicate with each other.
- IGMP snooping, MLD snooping, and DDoS rate limiting on multicast traffic (or equivalent broadcast protection) disabled
- Note: Normal cluster operations require IPv6 multicast. Enabling any multicast optimization or DDoS protection can interfere with cluster communication.
- Compatible network cables
- Enough ports to connect all nodes to the same switch fabric
- One static IP per node per defined VLAN
- Two redundant switches
- One physical connection per node to each redundant switch
- One LACP port-channel per node with the following configuration:
- Active mode
- Slow transmit rate
- Trunk port with a native VLAN
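The port-channel requirements above can be sketched as a switch configuration. The fragment below is a hypothetical Cisco NX-OS-style example; the interface name, port-channel number, and VLAN ID are placeholders, and the exact syntax varies by switch vendor, so consult your switch documentation.

```
! Hypothetical NX-OS-style example; names and numbers are placeholders.
interface Ethernet1/1
  description qumulo-node-1
  channel-group 10 mode active      ! LACP active mode
  lacp rate normal                  ! slow (normal) LACP transmit rate
!
interface port-channel10
  switchport mode trunk             ! trunk port
  switchport trunk native vlan 1    ! with a native VLAN
```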
- DNS servers
- Time server (NTP)
- Firewall protocol/ports allowed for Proactive Monitoring
- N-1 (where N is the number of nodes) floating IPs per node per client-facing VLAN
Note: The number of floating IPs depends on your workflow and on the clients that connect to the cluster. Use at least 2, and no more than 10, floating IPs per node per client-facing VLAN.
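As a worked example of the sizing guidance above, the sketch below starts from the N-1 recommendation and clamps it to the stated range of 2 to 10. The function name is hypothetical, for illustration only, and isn't part of any Qumulo API.

```python
def floating_ips_per_node(node_count, minimum=2, maximum=10):
    """Suggested floating IPs per node per client-facing VLAN.

    Starts from the N-1 guideline and clamps the result to the
    documented minimum of 2 and maximum of 10. Hypothetical helper
    for illustration only.
    """
    return max(minimum, min(node_count - 1, maximum))

# A 4-node cluster: N-1 = 3 floating IPs per node per VLAN.
print(floating_ips_per_node(4))  # → 3
```

For very small clusters the minimum of 2 applies (a 2-node cluster still gets 2 per node), and for large clusters the cap of 10 applies.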
- You can achieve advertised performance only if you connect your nodes at their maximum Ethernet speed. To avoid network bottlenecks, Qumulo validates system performance with this configuration by using clients that are connected at the same link speed and to the same switch as the nodes.
Networking is required for front-end and intra-node communication on Qumulo clusters.
- Front-end networking supports IPv4 and IPv6 for client connectivity and can serve multiple networks.
- Intra-node communication requires no dedicated back-end infrastructure; it shares interfaces with front-end connections.
- Clusters use IPv6 link-local addresses and the Neighbor Discovery Protocol (NDP) for node discovery and intra-node communication.
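To illustrate the link-local mechanism, the sketch below derives an IPv6 link-local address from a MAC address using modified EUI-64 (RFC 4291), one standard way such addresses are formed. Qumulo Core handles this automatically on the nodes; the function is illustrative only, not a Qumulo API.

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC address via
    modified EUI-64 (RFC 4291). Illustrative sketch only."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    # fe80::/64 is the link-local prefix; ipaddress normalizes the result.
    return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

print(mac_to_link_local("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```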
Tip: For IPMI port location and configuration details, see the IPMI Reference Guide.
Layer 1 Connectivity for QC24/QC40
- Supports 10 GbE only
- SFP+ optics with LC/LC fiber
- SFP+ passive Twinax copper (maximum length: 5 m)
Layer 1 Connectivity for QC104, QC208, QC260, and QC360
- Supports 40 GbE
- QSFP+ transceivers
- Bidirectional (BiDi) transceivers are supported with Mellanox ConnectX-4 and ConnectX-5 NICs
- QSFP+ passive Twinax copper (maximum length: 5 m)
Note: Currently, only the left-most network card on the 4U platforms is used. The card on the right is reserved for future expansion and isn't available for use.
Compatible Network Cables for QC24 and QC40
10 GbE Copper (SFP+)
10 GbE Fiber (LC/LC)
Note: Most 850 nm and 1310 nm optical transceiver pairs are supported.
Compatible Network Cables for QC104, QC208, QC260, and QC360
Important: When looking at the back of the node, use the two left-most ports for cabling.
40 GbE Copper (QSFP+)
40 GbE Fiber (MTP/MPO)
40 GbE Fiber (LC/LC)
Important: This option requires a Mellanox ConnectX-4 or ConnectX-5 NIC.
Switch Connectivity for QC24 and QC40
10 G to 10 G Switch
10 G to 40 G Switch
Note: 10 G to 1 G switches aren't supported.
Switch Connectivity for QC104, QC208, QC260, and QC360
40 G to 40 G Switch
Important: The BiDi option and long-range optical transceivers require a Mellanox ConnectX-4 or ConnectX-5 NIC.
Unsupported Configurations for 40 G to 10 G Stepdown
- 40 G to 10 G stepdown to a 40 G switch
- 40 G to 10 G stepdown to a 10 G switch
Layer 2 Connectivity and Interface Bonding
Interface bonding combines multiple physical interfaces into a single logical interface. Bonding provides built-in redundancy: the logical interface survives the failure of a physical interface. With LACP, additional bond members also increase aggregate throughput. LACP is Qumulo's default and preferred bonding configuration.
Qumulo Core supports the following bonding types for active port communication:
Link Aggregation Control Protocol (LACP)
- Active-active functionality
- Requires switch-side configuration
- May span multiple switches when using multi-chassis link aggregation (MLAG)
Active-backup NIC bonding
- Automatic fail-back to the primary interface (the lower-numbered interface)
- Does not require switch-side configuration
- All primary ports must reside on the same switch
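Conceptually, these two modes correspond to the standard Linux bonding modes 802.3ad (LACP) and active-backup. The fragment below is a generic Linux illustration of the difference only, not a Qumulo configuration; Qumulo Core manages bonding on its nodes automatically, and the interface name is a placeholder.

```
# Generic Linux bonding driver options, for illustration only.

# LACP (active-active; requires a matching switch-side port-channel):
options bonding mode=802.3ad lacp_rate=slow miimon=100

# ...or active-backup (no switch-side configuration; fails back to
# the primary, lower-numbered interface):
options bonding mode=active-backup primary=eth0 miimon=100
```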