I currently have a new blade chassis connected to my Nexus 5020, which has a Multichassis EtherChannel to a pair of 6509s running VSS. The blade system is using HP Flex-10 cards. The first issue I ran into was that I had to enable jumbo MTU on the Nexus, because the HP Flex-10 cards were sending jumbo frames that were getting dropped. I also had to enable jumbo frames on the trunk ports, as well as on the Ethernet ports on the VSS that connect back to the Nexus, since there will be some inter-VLAN routing going on. Now that I think about it, I should probably enable jumbo MTU on the layer 3 interfaces as well.
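For what it's worth, on the 5020 I enabled jumbo frames globally with a network-qos policy along these lines (a sketch from memory, so verify the exact syntax against your NX-OS release; the policy name "jumbo" is just what I picked):

```
! Nexus 5000 series: jumbo frames are enabled globally via a
! network-qos policy rather than a per-interface MTU command.
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

After applying it, "show queuing interface" should reflect the 9216 MTU on the ports.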
Do I also need to enable system jumbomtu on the VSS? What about the VSL links? The reason I ask is that when I run “show interfaces counters errors” I see giant frame errors on the VSL links as well as FCS errors. However, I cannot adjust the MTU size on the VSL links or the port channels since they are not switchports. Can I make the VSL links switchports, or will VSS not allow this? I guess I’m a bit confused about the whole jumbomtu thing. Once you start to enable it, where do you stop?
Thanks for the help,
You have some good questions here. The first thing to point out is that you will run into restrictions when trying to use certain commands on interfaces/port channels that have become a VSL link. You won’t be able to change much of anything once it’s a VSL link. See below ::
virtual link interface commands (restricted):
Have a look at the link below regarding Jumbo Frame support.
Now since each switch controls its own data plane, I would hope that the traffic traversing the VSL link would be minimized. If you dual-home everything, you honestly shouldn’t have much going across the VSL. Keep in mind, though, that you may be punting traffic to the CPU, causing it to go across the VSL. One cause is traffic that needs to be fragmented due to an MTU of 1500 on VLAN/layer 3 interfaces. See the snippet below ::
Traffic on the VSL
The VSL carries data traffic and in-band control traffic between the two chassis. All frames forwarded over the VSL link are encapsulated with a special 32-byte header, which provides information for the VSS to forward the packet on the peer chassis.
The VSL transports control messages between the two chassis. Messages include protocol messages that are processed by the active supervisor engine, but received or transmitted by interfaces on the standby chassis. Control traffic also includes module programming between the active supervisor engine and switching modules on the standby chassis.
The VSS needs to transmit data traffic over the VSL under the following circumstances:
- Layer 2 traffic flooded over a VLAN (even for dual-homed links).
- Packets processed by software on the active supervisor engine where the ingress interface is on the standby chassis.
- The packet destination is on the peer chassis, such as the following examples:
  - Traffic within a VLAN where the known destination interface is on the peer chassis.
  - Traffic that is replicated for a multicast group and the multicast receivers are on the peer chassis.
  - The known unicast destination MAC address is on the peer chassis.
  - The packet is a MAC notification frame destined for a port on the peer chassis.
VSL also transports system data, such as NetFlow export data and SNMP data, from the standby chassis to the active supervisor engine.
To preserve the VSL bandwidth for critical functions, the VSS uses strategies to minimize user data traffic that must traverse the VSL. For example, if an access switch is dual-homed (attached with an MEC terminating on both VSS chassis), the VSS transmits packets to the access switch using a link on the same chassis as the ingress link.
Traffic on the VSL is load-balanced with the same global hashing algorithms available for EtherChannels (the default algorithm is source-destination IP).
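As a side note, that hash is the same global EtherChannel load-balancing setting you may already be tuning for your MECs; something like this (a sketch, and src-dst-ip is already the default per the snippet above, so you only need it if you want a different algorithm):

```
! Global EtherChannel hash on the 6500/VSS; also applies to the VSL.
port-channel load-balance src-dst-ip
```

You can confirm what is in effect with "show etherchannel load-balance".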
I think what you have done regarding jumbo frame support is fine. The question still remains regarding support for jumbo frames across the VSL link. I also have “no switchport” and “no ip address” on the VSL link, so it’s not really a layer 2 link anymore, and the global MTU command probably doesn’t apply to these ports. The only thing I can think of is setting the MTU on the VSL port channel before converting it to a VSL link. But honestly, you really shouldn’t need jumbo frame support on the VSL; if traffic is going across the VSL, you’re probably using software (CPU) switching for some reason.
Here’s an interesting snippet from the Jumbo Frame link above ::
Bridged and Routed Traffic Size Check at Egress Gigabit Ethernet and 10-Gigabit Ethernet Ports
Jumbo frame support compares egress traffic size with the global egress LAN port MTU size at egress Gigabit Ethernet and 10-Gigabit Ethernet LAN ports that have a nondefault MTU size configured. The port drops traffic that is oversized. You can configure the global LAN port MTU size.
Bridged and Routed Traffic Size Check at Ingress 10, 10/100, and 100 Mbps Ethernet and 10-Gigabit Ethernet Ports
Jumbo frame support compares ingress traffic size with the global LAN port MTU size at ingress 10, 10/100, and 100 Mbps Ethernet and 10-Gigabit Ethernet LAN ports that have a nondefault MTU size configured. The port drops traffic that is oversized. You can configure the global LAN port MTU size.
Routed Traffic Size Check on the PFC
For traffic that needs to be routed, Jumbo frame support on the PFC compares traffic sizes to the configured MTU sizes and provides Layer 3 switching for jumbo traffic between interfaces configured with MTU sizes large enough to accommodate the traffic. Between interfaces that are not configured with large enough MTU sizes, if the “do not fragment bit” is not set, the PFC sends the traffic to the RP to be fragmented and routed in software. If the “do not fragment bit” is set, the PFC drops the traffic.
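The practical takeaway from that last paragraph: every layer 3 interface in the jumbo path needs a large enough MTU, or the PFC punts fragmentation to the RP. For your inter-VLAN routing, that means something like this on the SVIs (a sketch; Vlan10/Vlan20 and the 9216 size are just assumptions standing in for your actual VLANs and jumbo size):

```
! Raise the layer 3 MTU on the SVIs routing the jumbo traffic so the
! PFC can keep forwarding it in hardware instead of punting to the RP.
interface Vlan10
  mtu 9216
interface Vlan20
  mtu 9216
```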
So for layer 2 ports (EtherChannels included), you have to configure jumbo frame support globally using the “system jumbomtu” command. You have probably already done this. From what I’ve read, enabling this globally should take care of jumbo frame support for bridged traffic.
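In other words, on the VSS side the bridged-traffic piece is just the one global command, plus a port-level MTU on the trunks back to the Nexus if you want to be explicit (a sketch; the interface name is a placeholder for your actual uplink, and 9216 is the usual maximum on the 6500, so check your linecards):

```
! Global jumbo support for bridged (layer 2) traffic on the 6500/VSS.
system jumbomtu 9216
! Optional: explicit port MTU on a trunk back to the Nexus.
interface TenGigabitEthernet1/1
  mtu 9216
```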
Now looking at my own VSL interfaces, I also see some number of “giants,” and I haven’t enabled jumbo MTU support. This could be the special 32-byte header added to all frames going across the VSL link. That’s not something I’m worried about, but I would worry about the FCS errors. If I were seeing FCS errors across all VSL interfaces, it might be related to jumbo frames, although I’m not sure a frame dropped due to MTU would throw an FCS error. Check your interfaces closely to make sure everything is negotiating correctly.
As far as where to stop with jumbo frame support goes, your switches will probably end up fragmenting the frames at some point (as long as the DF bit is not set). When the egress is via a layer 3 interface that doesn’t support the MTU of the frames, the switch should send them to the RP to be fragmented in software (punted to the CPU). This is not a good thing (especially for VSS), so try to minimize what goes across the VSL!
If all else fails Mark, open a TAC case to get Cisco’s official word on this. If you get any new information, please let me know!
Hope this helps,
CCIE No. 21785