IN THIS ARTICLE
Outlines the various ways HPE Apollo 4200 Gen9 and Gen10 clusters can be connected to a network
REQUIREMENTS
- HPE Apollo 4200 Gen9 or Gen10 cluster
- Network switch that meets the following criteria:
- Appropriate Ethernet speed:
- 40 Gbps for HPE Apollo 4200 Gen9 (90T, 180T, 288T)
- 25 Gbps, 40 Gbps, or 100 Gbps for HPE Apollo 4200 Gen10 (36T, 90T)
- 100 Gbps for HPE Apollo 4200 Gen10 (192T)
- 25 Gbps for HPE Apollo 4200 Gen10 (336T)
- Fully non-blocking architecture
- IPv6 capable
- Compatible network cables
- Enough ports to connect all nodes to the same switch fabric
- One static IP per node per defined VLAN
NOTE: Before connecting any Qumulo-supported equipment to your network, it is always best to consult with your network engineering team.
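As a quick illustration of the counts above, here is a minimal back-of-the-envelope planning sketch in Python (example values, not a Qumulo tool). The node and VLAN counts are assumptions for illustration only:

# Planning sketch (example values, not a Qumulo tool).
# Requirements above: one static IP per node per defined VLAN, and enough
# switch ports to connect all nodes to the same switch fabric (each node
# uses two physical connections in the recommended dual-switch setup).
nodes = 4   # example: a 4-node cluster
vlans = 2   # example: one management VLAN plus one client-facing VLAN

static_ips = nodes * vlans
node_facing_ports = nodes * 2  # two physical connections per node

print(f"Static IPs needed: {static_ips}")
print(f"Node-facing switch ports needed: {node_facing_ports}")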
RECOMMENDATIONS
- Two redundant switches
- One physical connection per node to each redundant switch
- One LACP port-channel per node with the following configuration:
- Active mode
- Slow transmit rate
- Trunk port with a native VLAN
- IEEE 802.3x flow control enabled (full duplex)
- DNS servers
- Time server (NTP)
- Firewall protocol/ports allowed for Proactive Monitoring
- N-1 floating IPs per node per client-facing VLAN, where N is the number of nodes (maximum 10)
NOTE: The number of floating IPs depends on your workflow and on the clients connecting to the cluster. Use a minimum of 2 floating IPs per node per client-facing VLAN, but no more than 10 floating IPs per node per client-facing VLAN or 70 floating IPs per namespace.
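The following minimal Python sketch illustrates how these limits interact (it is an illustration of the guidance above, not a Qumulo tool):

# Sketch of the floating IP guidance: N-1 floating IPs per node, clamped
# to the documented bounds of at least 2 and at most 10 per node per
# client-facing VLAN, and at most 70 floating IPs per namespace.
def floating_ips_per_node(node_count: int) -> int:
    per_node = min(max(node_count - 1, 2), 10)  # N-1, clamped to [2, 10]
    # Stay within 70 floating IPs per namespace without dropping below
    # the per-node minimum of 2.
    while per_node > 2 and per_node * node_count > 70:
        per_node -= 1
    return per_node

for n in (4, 8, 16):
    print(f"{n} nodes -> {floating_ips_per_node(n)} floating IPs per node")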
HPE APOLLO 4200 GEN9 NIC LOCATIONS
HPE 90T SINGLE NIC
HPE 180T DUAL NICS
NOTE: NIC2 is currently not used on this model.
HPE 288T SINGLE NIC
HPE 288T DUAL NICS
NOTE: NIC2 is currently not used on this model.
HPE APOLLO 4200 GEN10 NIC LOCATION
CONNECT TO A NETWORK SWITCH
Connect to redundant switches
The details below outline how to connect a four-node HPE Apollo 4200 cluster to dual switches for redundancy. This is the recommended configuration for HPE Apollo 4200 hardware. If either switch goes down, the cluster will still be accessible from the remaining switch.
- The two ports (2x25 Gb, 2x40 Gb, or 2x100 Gb) on each node are connected to separate switches.
- A single port (minimum) on both switches should be uplinked to the client network. Use the appropriate combination of 10/25/40/100 GbE uplinks to maintain an acceptable level of physical network redundancy as well as the ability to meet client throughput requirements.
- A single peer link (minimum) should be established between both switches.
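After cabling, one way to confirm that a node's LACP bond sees both links is to read the Linux bonding driver's status file. The sketch below assumes the node exposes a standard Linux bond named bond0 (an assumption for illustration, not a documented Qumulo interface):

# Hypothetical post-cabling check, run on a Linux node: parse the bonding
# driver's status file to confirm the LACP (802.3ad) bond is up on both
# links. The bond name "bond0" is an assumption; adjust to your system.
from pathlib import Path

def check_bond(name: str = "bond0") -> None:
    text = Path(f"/proc/net/bonding/{name}").read_text()
    # The Linux bonding driver reports LACP as "IEEE 802.3ad".
    if "802.3ad" not in text:
        raise SystemExit(f"{name} is not running in 802.3ad (LACP) mode")
    # One "MII Status" line for the bond itself, then one per slave link.
    links_up = text.count("MII Status: up") - 1
    print(f"{name}: LACP mode, {links_up} link(s) up")
    if links_up < 2:
        print("WARNING: fewer than 2 links up; switch redundancy is degraded")

if __name__ == "__main__":
    check_bond()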
Connect to a single switch
The details below outline how to connect a four-node HPE Apollo 4200 cluster to a single switch. Note that if this switch goes down, the cluster will not be accessible.
- Each node's two ports (2x25 Gb, 2x40 Gb, or 2x100 Gb) are connected to the switch.
- Uplink ports should be connected to the client network.
RESOLUTION
You should now be able to successfully connect an HPE Apollo 4200 Gen9 or Gen10 cluster to your network.
ADDITIONAL RESOURCES
Quick Start Guide: HPE Apollo 4200 Gen9
Quick Start Guide: HPE Apollo 4200 Gen10
QQ CLI: Networks and IP Addresses