All posts by Arne Heitmann

About Arne Heitmann

Arne Heitmann works as a Staff System Engineer at Mellanox Technologies, focusing his efforts on the Ethernet market in Central Europe. He joined Mellanox in 2015 and has more than 25 years of industry experience, primarily in networking, covering LAN- and WAN-related technologies. Previous roles included training and consultancy, focusing on data centers and Ethernet.

And some switches actually do support 4000 VLANs


Layer 2 switching is a well-established technology, and VLANs are part of it. Nevertheless, people in operations repeatedly face challenges and unpredictable behavior from their networking gear:

Limited ranges of configurable VLANs: Many switch vendors and network operating systems (NOSs) reserve ranges of various sizes for internal tasks such as logical port mapping, function mapping, or special address management. As a result, the number and range of user-configurable VLAN IDs is pre-defined and smaller than the standard allows.
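To make the effect of such reservations concrete, here is a minimal Python sketch that computes the VLAN IDs left for the user once a NOS carves out internal ranges. The reserved ranges below are hypothetical examples for illustration, not any specific vendor's actual reservations.

```python
# Hypothetical reserved ranges (inclusive) - not any real vendor's values.
RESERVED_RANGES = [(1, 1), (1002, 1005), (3968, 4047)]

def usable_vlans(reserved=RESERVED_RANGES):
    """Return the user-configurable VLAN IDs out of the standard 1-4094 space."""
    reserved_ids = {v for lo, hi in reserved for v in range(lo, hi + 1)}
    return [v for v in range(1, 4095) if v not in reserved_ids]

vlans = usable_vlans()
print(len(vlans))  # 85 reserved IDs leave 4009 of the 4094 standard VLANs
```

Even these modest example reservations already rule out configuring the full standard range end to end.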

Hitting system limits: Many switch systems simply start behaving unpredictably once a certain number of VLANs is exceeded: BUM (broadcast, unknown-unicast, multicast) traffic forwarding seems to change, convergence times rise, or the switch's memory starts failing.

However, what if you need more than a few hundred VLANs? What if you need almost the entire standard VLAN range?

At Mellanox, we believe in standards conformance and in the openness of our systems. Our own ASIC and our MLNX-OS are designed to put the least burden on operations and to give the freedom to scale. Configuring and activating 4k VLANs on a switch should simply work.

We recently tested this in a proof of concept (POC) with two different layouts: a snake configuration and a spine-leaf topology. The setup was built with the Mellanox Spectrum™-based SN2100, a 16x100GbE switch and our half-rack-width device for storage and small to medium data centers.

The setup was quite straightforward: 4000 VLANs, all ports configured as trunks with all VLANs allowed, and uplinks in LACP mode with the spines forming an MLAG pair. There was no interconnection between the leaf switches, forcing traffic to go up to the spine. Traffic was generated by an IXIA at 100 Gb/s bi-directional.
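As a rough sketch of what such a configuration looks like, the Python snippet below generates trunk-port configuration lines for a 4000-VLAN range. The CLI syntax emitted is generic, industry-style pseudocode for illustration only, not verbatim MLNX-OS commands, and the exact VLAN IDs used in the POC are an assumption.

```python
# Assumption: VLANs 2-4001 give 4000 user VLANs; the POC's actual IDs
# are not stated in the post.
VLAN_LO, VLAN_HI = 2, 4001

def trunk_config(ports):
    """Emit generic (vendor-neutral, illustrative) trunk config lines."""
    lines = [f"vlan {VLAN_LO}-{VLAN_HI}"]
    for port in ports:
        lines.append(f"interface {port} switchport mode trunk")
        lines.append(f"interface {port} switchport trunk allowed-vlan {VLAN_LO}-{VLAN_HI}")
    return lines

cfg = trunk_config(["ethernet 1/1", "ethernet 1/2"])
print(len(cfg))  # 1 VLAN-range line + 2 lines per port = 5
```

Generating the configuration programmatically like this is also how one would realistically push 4000 VLANs to many switches without typing ranges by hand.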

Figure 1: Spine-Leaf test topology

Starting with the snake test, we saw a maximum speed of 99.7 Gb/s without frame loss or any additional latency. Taking these results, we built the spine-leaf topology shown in Figure 1, ran baseline tests that confirmed the snake test results, and then failed uplinks to and downlinks from the spines, as well as whole devices.

The good old RFC 2544 tests showed sub-second convergence of the topology and proved that scaling VLANs to the maximum is doable and works reliably with the Mellanox SN2000 series.


See also:

What happened to good old RFC2544?