
Best Practices for Networking Qumulo Clusters

⚠️ Important: The QC24, QC40, QC104, QC208, QC260, and QC360 platforms will reach their End of Platform Support (EoPS) on February 28, 2026.

IN THIS ARTICLE 

This article outlines the best practices for configuring networking for a Qumulo cluster.

REQUIREMENTS  

  • Cluster running Qumulo Core
  • Network switch that meets the following criteria:
    • 10 Gbps, 25 Gbps, 40 Gbps, or 100 Gbps Ethernet, depending on the platform
    • Fully non-blocking architecture
    • IPv6 capable
      Note: Neighbor Discovery Protocol (NDP) traffic over IPv6 flows over the untagged VLAN (VLAN 1 or the default VLAN for your switch). If you disable this functionality on your switch, your nodes can't detect or communicate with each other. A quick check for this is sketched after this list.
  • Compatible network cables
  • Enough ports to connect all nodes to the same switch fabric
  • One static IP per node per defined VLAN
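A quick way to confirm that untagged IPv6 NDP traffic is actually flowing on the fabric is to look at the IPv6 neighbor table of a host attached to it. The following is a minimal sketch, not a Qumulo tool: it assumes a Linux host with iproute2 installed, and the interface name eth0 is a placeholder for whichever port faces the cluster's untagged VLAN.

```python
"""Check for IPv6 link-local neighbors on one interface.

A minimal sketch, not a Qumulo tool: assumes a Linux host with
iproute2 installed. "eth0" is a placeholder for the port that
faces the cluster's untagged (native) VLAN.
"""
import subprocess

INTERFACE = "eth0"  # placeholder interface name


def link_local_neighbors(interface: str) -> list[str]:
    """Return fe80::/10 entries from the kernel's IPv6 neighbor table."""
    result = subprocess.run(
        ["ip", "-6", "neigh", "show", "dev", interface],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line.startswith("fe80:")]


if __name__ == "__main__":
    neighbors = link_local_neighbors(INTERFACE)
    if neighbors:
        print(f"{len(neighbors)} link-local neighbor(s) on {INTERFACE}:")
        for entry in neighbors:
            print("  " + entry)
    else:
        print("No link-local neighbors seen; confirm the switch passes "
              "untagged IPv6 NDP traffic (VLAN 1 or the default VLAN).")
```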

RECOMMENDATIONS

  • Two redundant switches
  • One physical connection per node to each redundant switch
  • One LACP port-channel per node with the following configuration:
    • Active mode
    • Slow transmit rate
    • Trunk port with a native VLAN
  • DNS servers
  • Time server (NTP)
  • Firewall protocol/ports allowed for Proactive Monitoring
  • N-1 floating IPs per node per client-facing VLAN (where N is the number of nodes)
    Note: The number of floating IPs depends on your workflow and the clients connecting to the cluster, with a minimum of 2 and a maximum of 10 floating IPs per node per client-facing VLAN. A sizing sketch follows this list.
  • You can achieve advertised performance only if you connect your nodes at their maximum Ethernet speed. To avoid network bottlenecks, Qumulo validates system performance with this configuration by using clients that are connected at the same link speed and to the same switch as the nodes.
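The floating IP guidance above is simple arithmetic: N-1 addresses per node per client-facing VLAN, clamped between 2 and 10 per node. The sketch below only encodes that rule as a starting point for sizing an address pool; the count you finally choose still depends on your workflow and clients.

```python
"""Size floating IP pools from the guidance above: N-1 floating IPs per
node per client-facing VLAN (N = node count), with a floor of 2 and a
ceiling of 10 per node. A minimal sketch of the arithmetic only.
"""


def floating_ips_per_node(node_count: int) -> int:
    """Clamp N-1 to the recommended range of 2..10 per node per VLAN."""
    return max(2, min(node_count - 1, 10))


def floating_ips_per_vlan(node_count: int) -> int:
    """Total floating IPs to set aside for one client-facing VLAN."""
    return floating_ips_per_node(node_count) * node_count


if __name__ == "__main__":
    for nodes in (4, 8, 16):
        per_node = floating_ips_per_node(nodes)
        total = floating_ips_per_vlan(nodes)
        print(f"{nodes} nodes: {per_node} per node, {total} per client-facing VLAN")
```

For example, a 4-node cluster works out to 3 floating IPs per node and 12 for that client-facing VLAN.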

DETAILS

Networking is required for front-end and intra-node communication on Qumulo clusters.

  • Front-end networking supports IPv4 and IPv6 for client connectivity and also supports multiple networks.
  • Intra-node communication requires no dedicated backend infrastructure and shares interfaces with front-end connections.
    • Clusters use IPv6 link-local and Neighbor Discovery protocols for node discovery and intra-node communication.
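Because every IPv6-enabled interface automatically gets an fe80::/10 link-local address, intra-node traffic needs no dedicated subnet or additional static addressing; the only subtlety is that a link-local destination must be qualified with an interface (zone) identifier. The short sketch below illustrates that scoping. It is not how Qumulo Core is implemented internally, and both eth0 and the fe80:: address are placeholders.

```python
"""Illustrate link-local scoping: an fe80::/10 destination is only
meaningful together with an interface (zone) identifier.

A minimal sketch; "eth0" and the fe80:: address are placeholders, and
this is not how Qumulo Core itself communicates internally.
"""
import socket

DEST = "fe80::1%eth0"  # neighbor's link-local address + zone identifier
PORT = 9               # UDP discard port, used here only as an example

# getaddrinfo turns the "%eth0" zone into a numeric scope_id; without it
# the kernel cannot tell which interface's fe80::/10 range is meant.
family, socktype, proto, _, sockaddr = socket.getaddrinfo(
    DEST, PORT, socket.AF_INET6, socket.SOCK_DGRAM
)[0]
print("Resolved sockaddr (address, port, flowinfo, scope_id):", sockaddr)

with socket.socket(family, socktype, proto) as sock:
    sock.sendto(b"hello", sockaddr)  # one datagram; NDP resolves the neighbor
    print("Sent one UDP datagram to", DEST)
```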

TIP! For information about IPMI port location and setup, see the IPMI Quick Reference Guide.

Layer 1 Connectivity for QC24/QC40

  • Supports 10 GbE only
  • SFP+ optics with LC/LC fiber
  • SFP+ passive Twinax copper (maximum length: 5 m)

Layer 1 Connectivity for QC104, QC208, QC260, and QC360  

  • Supports 40 GbE
  • QSFP+ transceivers
  • Bidirectional (BiDi) transceivers are supported with Mellanox Connect-X 4/5 NICs
  • QSFP+ passive Twinax copper (maximum length: 5 m)

Note: Currently, only the left-most network card is used on the 4U platforms. The card on the right is reserved for future expansion and isn't available for use.

Compatible Network Cables for QC24 and QC40

10 GbE Copper (SFP+) 

[Diagram: compatible 10 GbE copper (SFP+) cables]

10 GbE Fiber (LC/LC)

[Diagram: compatible 10 GbE fiber (LC/LC) cables]

Note: Most 850 nm and 1310 nm optical transceiver pairs are supported.

Compatible Network Cables for QC104, QC208, QC260, and QC360

Important: When looking at the back of the node, use the two left-most ports for cabling.

40 GbE Copper (QSFP+)

[Diagram: compatible 40 GbE copper (QSFP+) cables]

40 GbE Fiber (MTP/MPO)

[Diagram: compatible 40 GbE fiber (MTP/MPO) cables]

40 GbE Fiber (LC/LC) 

Important: This option requires a Mellanox Connect-X 4/5 NIC.

[Diagram: compatible 40 GbE fiber (LC/LC) cables]

Switch Connectivity for QC24 and QC40

10 G to 10 G Switch

[Diagram: 10 G node connectivity to a 10 G switch]

10 G to 40 G Switch

[Diagram: 10 G node connectivity to a 40 G switch]

Note: 10 G to 1 G switches aren't supported.

[Diagram: unsupported 10 G to 1 G switch configuration]

Switch Connectivity for QC104, QC208, QC260, and QC360

40 G to 40 G Switch

Important: The BiDi option and long-range optical transceivers require a Mellanox Connect-X 4/5 NIC.

Unsupported Configurations for 40 G to 10 G Stepdown

Unsupported Configurations for 40 G to 10 G Stepdown to 40 G Switch

Unsupported Configurations for 40 G to 10 G Stepdown to 10 G Switch

Layer 2 Connectivity and Interface Bonding

Interface bonding combines multiple physical interfaces into a single logical interface. Bonding provides built-in redundancy, so the logical interface can survive the failure of a physical interface. With LACP, additional bond members also increase aggregate throughput. LACP is Qumulo's default and preferred bonding configuration.

The following bonding types are supported for active port communication; a sketch of how each mode appears to a Linux host follows the list:

  • Link Aggregation Control Protocol (LACP)
    • Active-active functionality
    • Requires switch-side configuration
    • May span multiple switches when you use multi-chassis link aggregation
  • Active-backup NIC bonding
    • Automatic fail-back to the primary interface (the lower-numbered interface)
    • Does not require switch-side configuration
    • All primary ports must reside on the same switch
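For reference, the two modes look quite different to a Linux host: the kernel bonding driver reports the bonding mode, and for LACP the transmit rate, in /proc/net/bonding/<bond>. The sketch below reads that file. It assumes a generic Linux host with a bond named bond0 and is only an illustration; on a Qumulo node the bond is managed by Qumulo Core, not configured by hand.

```python
"""Report how a Linux bond is configured (LACP/802.3ad vs. active-backup).

A minimal sketch for reference only: assumes a generic Linux host using
the kernel bonding driver, with a bond interface named "bond0".
"""
from pathlib import Path

BOND = "bond0"  # placeholder bond interface name


def bond_summary(bond: str) -> dict:
    """Pull the mode, LACP rate, and active slave from /proc/net/bonding."""
    fields = {}
    for line in Path(f"/proc/net/bonding/{bond}").read_text().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return {
        "mode": fields.get("Bonding Mode", "unknown"),
        # "LACP rate" appears only when the bond runs 802.3ad (LACP)
        "lacp_rate": fields.get("LACP rate", "n/a"),
        # "Currently Active Slave" appears in active-backup mode
        "active_slave": fields.get("Currently Active Slave", "n/a"),
    }


if __name__ == "__main__":
    summary = bond_summary(BOND)
    print(f"{BOND}: mode={summary['mode']}, "
          f"LACP rate={summary['lacp_rate']}, "
          f"active slave={summary['active_slave']}")
```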

ADDITIONAL RESOURCES

IP failover with Qumulo Core

Configure IPv6 in Qumulo Core

Configure LACP in Qumulo Core

IPMI Quick Reference Guide

Connect to Multiple Networks in Qumulo Core

QQ CLI: Networks and IP Addresses 

 
