In a galaxy far, far away, networking with PowerFlex (back in the ScaleIO days) was simple: there were three switches (Management, Access #1, Access #2).
And each node had:
- 1x Out of Band iDRAC port (1GbE)
- 1x OS Management port (1GbE)
- 2x Backend Data Ports (10GbE)
- 2x Application Facing Ports (10GbE) – for HCI / Compute nodes.
Very easy: it didn't even have or need VLAN tagging on many of the ports.
But then things started to evolve – some folks demanded redundancy at every level, so now the management network also had to be fully redundant. (FYI, in the old days if the non-redundant management network failed, it would only prevent management access – it would not interrupt I/O between compute & storage, the most important part by far!).
Then new features were added, like replication (more bandwidth), NAS support, etc. All of a sudden there were VLAN tags and LACP everywhere! Not uncommon in most enterprise environments, of course.
The challenge for the engineering teams behind this is deciding how many combinations and permutations to support and document! In a way, the easier option was the most complex one, which supports the widest range of possible outcomes.
Fortunately, PowerFlex has always been about choice, and with version 4.6 a lot of that choice is properly back! Don't want LACP? No worries, PowerFlex Manager can now fully support that. PowerFlex has an incredible native IP multi-pathing feature which makes additional layer-2 redundancy unnecessary in most cases. (I won't go into that religious battle in this blog; both approaches have their pros and cons.)
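To make that concrete, here is a minimal sketch of what a non-LACP data layout can look like on a Linux node, using nmcli. The interface names, VLAN IDs and addresses are my own lab-style examples, not anything PowerFlex mandates: two independent NICs, each carrying its own data VLAN and subnet, and no bond anywhere.

```
# Hypothetical example: two independent data NICs, no LACP bond.
# PowerFlex native IP multi-pathing handles redundancy across both paths.
nmcli con add type vlan con-name data1 ifname ens1f0.151 dev ens1f0 id 151 \
  ipv4.method manual ipv4.addresses 192.168.151.51/24
nmcli con add type vlan con-name data2 ifname ens1f1.152 dev ens1f1 id 152 \
  ipv4.method manual ipv4.addresses 192.168.152.51/24
```

Both addresses are then configured against the SDS/SDC, and the software spreads I/O across the two paths itself rather than relying on a switch-side LAG.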
Another aspect to consider though is the management stack – on the latest 4.x versions with RKE (containers) underneath, the networking complexity went up by an order of magnitude. Much of it makes sense in large environments, but for simpler shops it can often be overwhelming.
As an example, in the latest Network Planning Guide, there can be up to 23 VLANs!
The good news is that many of these are optional, and it is even possible for PowerFlex Manager itself to operate within a single VLAN only, and just route to the other networks. (This currently does require an RPQ approval though to ensure you’ll be properly supported).
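As a rough sketch of what that single-VLAN approach implies on the MVM side (all addresses below are purely illustrative), the management VMs sit on the management VLAN only and reach the other networks through a gateway, whether that is the default route or explicit static routes like these:

```
# Hypothetical example: MVM lives only on the management VLAN (10.10.105.0/24 here)
# and reaches the other networks via the management gateway instead of local L2.
ip route add 10.10.101.0/24   via 10.10.105.1   # OOB management, routed
ip route add 192.168.151.0/24 via 10.10.105.1   # Data1, routed
ip route add 192.168.152.0/24 via 10.10.105.1   # Data2, routed
```

The catch, as the RPQ requirement hints, is that every port PowerFlex Manager needs must now be open across that router/firewall hop.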
In a typical setup, PowerFlex Manager itself requires the following VLANs presented to the management VMs (MVMs):
| VLAN ID (example) | Description |
| --- | --- |
| 101 | Out-of-Band Management Network |
| 130 | Management VLAN |
| 151 | Data1 VLAN |
| 152 | Data2 VLAN |
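On a deployed MVM that typically ends up looking something like this (interface names and addresses are hypothetical, annotated here just to show the one-interface-per-VLAN pattern):

```
delladmin@pfmvm01:~> ip -br addr show
lo      UNKNOWN  127.0.0.1/8
eth0    UP       10.10.105.15/24     # Management VLAN 130
eth1    UP       10.10.101.15/24     # OOB Management VLAN 101
eth2    UP       192.168.151.15/24   # Data1 VLAN 151
eth3    UP       192.168.152.15/24   # Data2 VLAN 152
```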
Like anything, there are pros and cons to this multi-VLAN approach:

| Pros | Cons |
| --- | --- |
| Fully tested and validated | More VLANs to manage |
| Handy to have direct L2 access to all networks | Security concerns that the MVMs could act as a jump box (multi-homed) |
| Removes dependencies on routers/firewalls for key functionality | Challenges stretching the L2 VLANs to PowerFlex Manager in some environments |
With regard to the Data1/Data2 VLANs, it appears that PowerFlex Manager only requires these in environments that use the LIA component to upgrade the SDC (e.g. Linux / Windows machines). In theory these could happily be updated over the Management VLAN, but it seems there are some legacy challenges behind this. Worst case, it may require manual SDC updates instead for the time being.
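A manual SDC update on a Linux host is not a huge burden anyway. A minimal sketch, assuming you have already downloaded the matching SDC package (the filename below is made up, and the drv_cfg path may differ on your distribution):

```
# Hypothetical example: update the SDC package on a Linux host by hand.
rpm -Uvh EMC-ScaleIO-sdc-4.6-0.XXX.el8.x86_64.rpm
# Confirm the SDC driver is loaded and still points at the right MDM IPs.
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
```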
Personally, I am a fan of simplicity – “make things as simple as possible but not simpler” – this is my default approach unless a particular situation calls for more complexity.
Frankly, there is no right or wrong way; I have been burned both ways! My preference is a simple single-VLAN PowerFlex Manager deployment, but if I have dependencies on others for firewalls/routing and they are either unable or too slow to open the necessary ports, then it's going to be a troubleshooting nightmare. Currently the documentation is also not quite detailed enough to drill down to the unique IP addresses required either.
From what I have observed though, it should be as follows in a PowerFlex cluster, with the 3x physical IPs and 5x virtual IPs required:
Inbound traffic to PFXM, on the virtual IPs:
| Virtual IP | Ports |
| --- | --- |
| RoutableIPPoolCIDR IP1 | HTTPS (443/TCP), SSH (22/TCP) |
| RoutableIPPoolCIDR IP2 (TCP) | NFS (2049, 20048, 111, 32767, 32765) |
| RoutableIPPoolCIDR IP2 (UDP) | NFS (111, 32767, 32765) |
| RoutableIPPoolCIDR IP3 | SNMP (162/UDP), SYSLOG (514/UDP, 514/TCP) |
| RoutableIPPoolCIDR IP4 | ? |
| RoutableIPPoolCIDR IP5 | ? |
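If you do need those flows opened through a firewall, a few quick checks from the far side help confirm the rules are in place. This is only a sketch; the <IPn> values are placeholders for the actual virtual IPs from your RoutableIPPoolCIDR range:

```
# Replace <IP1>..<IP3> with your RoutableIPPoolCIDR virtual IPs.
nc -zv <IP1> 443       # PowerFlex Manager UI (HTTPS)
nc -zv <IP1> 22        # SSH
showmount -e <IP2>     # NFS exports (uses 111/20048/2049 under the covers)
nc -zvu <IP3> 162      # SNMP trap listener (UDP, so treat the result with care)
```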
For outbound traffic from PFXM, I am still not 100% sure which of the physical IPs it will send from.
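One way to take some of the guesswork out is to ask the MVM's routing stack which source address it would pick for a given destination; `ip route get` shows that directly. The destination and addresses below are just examples, and traffic originating inside the containers will typically be SNAT'ed to that same node IP anyway:

```
# Which source IP would this MVM use to reach an external syslog server (example address)?
delladmin@pfmvm01:~> ip route get 10.10.200.50
10.10.200.50 via 10.10.105.1 dev eth0 src 10.10.105.15 uid 1000
    cache
```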
By the way, a few handy commands you can run from your MVMs:
```
kubectl get svc -A
```
For example, to look for the SNMP & Syslog ports:
```
delladmin@pfmvm01:~> kubectl get svc -A | grep snmp
NAMESPACE   NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                     AGE
powerflex   snmp-listener         LoadBalancer   10.42.0.71    10.10.105.21     162:30048/UDP,514:30080/UDP,514:30080/TCP   78d
powerflex   snmp-listener-data1   LoadBalancer   10.43.0.70    192.168.151.21   162:31687/UDP,514:32057/UDP,514:32057/TCP   78d
powerflex   snmp-listener-data2   LoadBalancer   10.43.0.140   192.168.152.21   162:31457/UDP,514:30922/UDP,514:30922/TCP   78d
powerflex   snmp-listener-oob     LoadBalancer   10.42.0.124   10.10.101.21     162:30448/UDP,514:31310/UDP,514:31310/TCP   78d
```
This command will also help show all the ports that are listening on the external IPs:
```
kubectl get svc -A | grep -v "<none>"
```
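The grep -v "<none>" simply hides services that have no external IP assigned. If you want something a bit more surgical, a jsonpath query (a sketch, tweak to taste) lists just the LoadBalancer services with their external IPs:

```
# List LoadBalancer services with namespace, name and external IP.
kubectl get svc -A -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.loadBalancer.ingress[0].ip}{"\n"}{end}'
```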
In summary though, I believe that choice is really key. In enterprise environments, no two are the same. Flexibility is paramount, and each day (or quarter) PowerFlex is living up to its name. Putting the Flex into PowerFlex!