IN THIS ARTICLE
Outlines the best practices for network configuration with a Qumulo cluster
REQUIREMENTS
- Cluster running Qumulo Core
- Network Switch that meets the following criteria:
- 10Gbps, 25Gbps, 40Gbps, or 100Gbps Ethernet, depending on platform
- Fully non-blocking architecture
- IPv6 capable
- Compatible network cables
- Enough ports to connect all nodes to the same switch fabric
- One static IP per node per defined VLAN
RECOMMENDATIONS
- Two redundant switches
- One physical connection per node to each redundant switch
- One LACP port-channel per node with the following configuration:
- Active mode
- Slow transmit rate
- Trunk port with a native VLAN
- DNS servers
- Time server (NTP)
- Firewall protocol/ports allowed for Proactive Monitoring
- N-1 (N=number of nodes) floating IPs per node per client-facing VLAN
NOTE: The number of floating IPs depends on your workflow and the clients connecting to the cluster. Use a minimum of 2 floating IPs per node per client-facing VLAN, but no more than 10 floating IPs per node per client-facing VLAN or 70 floating IPs per namespace.
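The sizing arithmetic above can be sketched in code. The following Python snippet is illustrative only (the function name and the example node counts are ours, not part of Qumulo Core); it applies the N-1 guideline together with the 2-per-node minimum, 10-per-node maximum, and 70-per-namespace ceiling for a single client-facing VLAN.

```python
# Illustrative sketch only: suggests a floating IP count per node for one
# client-facing VLAN, using the N-1 guideline with the documented bounds
# (minimum 2 and maximum 10 per node, at most 70 per namespace).

def suggested_floating_ips_per_node(node_count: int) -> int:
    """Return a floating IP count per node for a single client-facing VLAN."""
    per_node = node_count - 1                 # N-1 guideline
    per_node = max(per_node, 2)               # at least 2 per node
    per_node = min(per_node, 10)              # no more than 10 per node
    # Respect the 70-floating-IPs-per-namespace ceiling.
    while per_node * node_count > 70 and per_node > 2:
        per_node -= 1
    return per_node

if __name__ == "__main__":
    for nodes in (4, 6, 10, 20):
        ips = suggested_floating_ips_per_node(nodes)
        print(f"{nodes} nodes -> {ips} floating IPs per node "
              f"({ips * nodes} total on this VLAN)")
```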
DETAILS
Networking is required for front-end and intra-node communication on Qumulo clusters.
- Front-end networking supports IPv4 and IPv6 for client connectivity, as well as multiple networks.
- Intra-node communication requires no dedicated backend infrastructure and shares interfaces with front-end connections.
- Clusters use IPv6 link-local and Neighbor Discovery protocols for node discovery and intra-node communication.
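Because node discovery and intra-node communication rely on IPv6 link-local addressing and Neighbor Discovery, it can be useful to confirm that the interfaces facing the cluster actually carry link-local addresses. The sketch below is a hypothetical Linux-only check, not a Qumulo tool; it assumes the standard /proc/net/if_inet6 layout of a Linux kernel.

```python
# Illustrative Linux-only sketch: list IPv6 link-local (fe80::/10) addresses
# per interface by parsing /proc/net/if_inet6. Not a Qumulo utility.

def link_local_addresses(path: str = "/proc/net/if_inet6") -> dict:
    """Map interface name -> list of link-local IPv6 addresses."""
    result = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue
            raw, ifname = fields[0], fields[5]
            # Re-insert colons into the 32-hex-digit address.
            addr = ":".join(raw[i:i + 4] for i in range(0, 32, 4))
            if addr.startswith("fe80"):
                result.setdefault(ifname, []).append(addr)
    return result

if __name__ == "__main__":
    for ifname, addrs in link_local_addresses().items():
        print(ifname, addrs)
```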
TIP! For IPMI port location and setup details, see the IPMI Quick Reference Guide.
Layer 1 Connectivity for QC24/QC40
- Supports 10GbE only
- SFP+ optics with LC/LC fiber
- SFP+ passive Twinax copper (max length 5 m)
Layer 1 Connectivity for QC104/QC208/QC260/QC360
- Supports 40GbE
- QSFP+ transceivers
- Bidirectional (BiDi) transceivers are supported with Mellanox ConnectX-4/5 NICs
- QSFP+ passive Twinax copper (max length 5 m)
NOTE: Currently only the left-most network card is utilized on the 4U platforms. The card on the right is reserved for future expansion and is not available for use.
Compatible Network Cables QC24/QC40
10 GbE Copper (SFP+)
10 GbE Fiber (LC/LC)
NOTE: Most 850nm and 1310nm optical transceiver pairs are supported.
Compatible Network Cables QC104/QC208/QC260/QC360
IMPORTANT: When looking at the back of the node, use the two left-most ports for cabling.
40 GbE Copper (QSFP+)
40 GbE Fiber (MTP/MPO)
40 GbE Fiber (LC/LC)
IMPORTANT: This option requires a Mellanox ConnectX-4/5 NIC.
Switch Connectivity for QC24/QC40
10G to 10G Switch
10G to 40G Switch
NOTE: 10G to 1G Switch is not supported.
Switch Connectivity for QC104/QC208/QC260/QC360
40G to 40G Switch
IMPORTANT: The BiDi option and long-range optical transceivers require a Mellanox ConnectX-4/5 NIC.
Unsupported Configurations for 40G to 10G Stepdown
Unsupported Configurations for 40G to 10G Stepdown to 40G Switch
Unsupported Configurations for 40G to 10G Stepdown to 10G Switch
Layer 2 Connectivity & Interface Bonding
Interface bonding combines multiple physical interfaces into a single logical interface. Bonding provides built-in redundancy, so the logical interface survives the failure of a physical interface. With LACP, additional bond members also increase aggregate throughput. LACP is Qumulo's default and preferred network bonding configuration.
Below are the supported bonding types for active port communication; a minimal bond status-check sketch follows the list:
- Link Aggregation Control Protocol (LACP)
- Active-active functionality
- Requires switch-side configuration
- May span multiple switches when utilizing multi-chassis link aggregation
- Active-backup NIC bonding
- Automatic fail-back to the primary interface (the lower-numbered interface)
- Does not require switch-side configuration
- All primary ports must reside on the same switch
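On a Linux host that uses the kernel bonding driver, the bond mode, LACP transmit rate, and active member are exposed under /sys/class/net/<bond>/bonding. The sketch below is a hypothetical verification helper, not part of Qumulo Core; it assumes a bond named bond0 and simply reports whether the bond is running 802.3ad (LACP) with the slow rate recommended above, or active-backup with its current primary.

```python
# Illustrative sketch: report bonding mode, LACP rate, and active slave for a
# Linux bond interface via sysfs. Hypothetical helper, not a Qumulo utility.
from pathlib import Path

def read_bond_setting(bond: str, setting: str) -> str:
    """Read one bonding attribute, e.g. 'mode' or 'lacp_rate', from sysfs."""
    path = Path(f"/sys/class/net/{bond}/bonding/{setting}")
    return path.read_text().strip() if path.exists() else "unavailable"

if __name__ == "__main__":
    bond = "bond0"  # assumed bond interface name
    print("mode:        ", read_bond_setting(bond, "mode"))          # e.g. "802.3ad 4"
    print("lacp_rate:   ", read_bond_setting(bond, "lacp_rate"))     # e.g. "slow 0"
    print("active slave:", read_bond_setting(bond, "active_slave"))  # active-backup only
```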
RESOLUTION
You should now have an overall understanding of networking best practices for a Qumulo cluster.
ADDITIONAL RESOURCES
Connect to Multiple Networks in Qumulo Core
QQ CLI: Networks and IP Addresses