Nexus 7000 Virtual Portchannel Part 3 (final)

Ok guys, sorry for the delay on this one… Between work and my personal life, I have been so busy!

At this point, we have a working vPC peer link between our Nexus boxes. Remember that the vPC peer keepalive link also needs to be up for this to work (I think of it as the heartbeat link).
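(Quick recap from the earlier parts: on each Nexus, the keepalive and peer link were configured along these lines. The domain number, addresses, and port channel number below are just placeholders for whatever you used.)

vpc domain 1
  peer-keepalive destination 10.10.10.2 source 10.10.10.1

interface port-channel1
  switchport
  switchport mode trunk
  vpc peer-link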

So now we need to connect the access switch to the Nexus 7000 boxes to take advantage of virtual port channels. Let’s take a look again at our diagram.

[Diagram: vPC topology, with the access switch uplinked to both Nexus1 and Nexus2]

In this example, I’m using one of the newer 4948 TenGigabit switches, which has two TenGigabit uplink ports. I’m connecting one to “Nexus1” and the other to “Nexus2”. This doesn’t have to be 10 Gigabit; it could be any number of 1-Gigabit ports if you like. For example, you could connect four Gig ports to Nexus1 and four Gig ports to Nexus2, as sketched below. Remember, you can have up to 8 active ports in a port channel.
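(If you went the 1-Gig route, the access-side bundle would look something like this. The port numbers here are made up, and the eight members would be cabled four and four to the two Nexus boxes.)

interface range gig1/1 - 8
 switchport mode trunk
 channel-group 59 mode active
 no shutdown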

Now, since we only create one port channel on the access switch, we actually make full use of the bandwidth uplinked to the Nexus boxes. For example, with two 10 Gig links you get a full 20 Gigabits of aggregate bandwidth. No more spanning tree blocked ports, no more wasted bandwidth!

Let’s start with the configuration on each Nexus 7000.
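One quick prerequisite: NX-OS won’t take any of the commands below until the relevant features are turned on. Assuming nothing has been enabled yet, that means the following on both boxes:

feature vpc
feature hsrp
feature interface-vlan
feature lacp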

Nexus 1 (HSRP active)

interface Vlan59
  no shutdown
  ip address 1.1.1.2/29
  hsrp 1
    priority 105
    ip 1.1.1.1

interface port-channel59
  switchport
  switchport mode trunk
  vpc 59

interface Ethernet3/1
  switchport
  switchport mode trunk
  channel-group 59 mode active
  no shutdown

Nexus 2 (HSRP standby)

interface Vlan59
  no shutdown
  ip address 1.1.1.3/29
  hsrp 1
    ip 1.1.1.1

interface port-channel59
  switchport
  switchport mode trunk
  vpc 59

interface Ethernet3/1
  switchport
  switchport mode trunk
  channel-group 59 mode active
  no shutdown

Ethernet 3/1 on each Nexus connects down to the 4948 switch, and I chose port channel 59 on both sides as well. The good news is that what you probably already do for HSRP configuration won’t change much, meaning you still have one active and one standby router.

Now you are probably wondering how we can make use of both links on the 4948 if only one switch is HSRP active. Remember, the access switch doesn’t really know anything about what exists above it. All it knows is that it has two ports in a port channel, and it’s going to load balance across them according to standard port channel hashing. That means some of the traffic is going to be received by Nexus 2, which is currently “standby” according to HSRP. This is the magic of vPC, guys! Traffic received by the standby HSRP device (Nexus 2) is still forwarded as if it were active. So you can think of it as an “active/active” HSRP configuration: the data plane acts in an active/active fashion, while the control plane shows active/standby. This is really the secret behind vPC.
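Don’t take my word for it; a few show commands on each Nexus will confirm what’s going on (I’ve noted what to look for rather than pasting output):

show vpc                   - peer status, and vPC 59 listed as up
show hsrp brief            - active on Nexus1, standby on Nexus2
show port-channel summary  - Ethernet3/1 bundled into port-channel59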

The key thing I want to point out about the configuration above is the “vpc 59” command on each box. This is absolutely vital to making vPC work. I chose to use the same vPC number as the port channel number (you don’t have to), but do make sure that both Nexus boxes use the same vPC number for a given port channel interface. If the vpc command is missing, the access switch will only get one of its two ports bundled into the port channel (spanning tree should block the other).
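vPC actually compares the configuration on the two peers, so if something doesn’t match you can see exactly which parameter is the problem:

show vpc consistency-parameters interface port-channel 59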

Now for the 4948 configuration at the access layer.

interface port-channel59
 switchport mode trunk

interface range ten1/48 - 49
 switchport mode trunk
 channel-group 59 mode active
 no shutdown

interface ten1/48
 description connection to Nexus1

interface ten1/49
 description connection to Nexus2

Very simple config at the access layer. One port channel per vPC, and it now utilizes both 10-Gigabit ports.
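On the 4948 you can verify the bundle with show etherchannel summary (Po59 should be flagged SU, with both TenGig ports flagged P). And if you want to change how traffic is hashed across the two uplinks, the load balancing method is a global knob; for example:

port-channel load-balance src-dst-ip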

Are you guys seeing the benefits yet?

-Possibly fewer cable runs in your network (especially if you use 10 Gigabit)
-No more wasted bandwidth (no more ports blocked by spanning tree)
-Simplified topology between access and distribution
-Improved scalability and performance
-Faster link failover (port channel convergence beats spanning tree)

Hope this helps,

Ted Romer
CCIE No. 21785

One Response to “Nexus 7000 Virtual Portchannel Part 3 (final)”

  1. chris stand says:

    I know one of the benefits of vPC is that there is a “virtual” port channel: if I were to hook up 3750E stacks to a 7K w/vPC or a 5010 w/vPC, I would see one virtual switch connected to one virtual switch with one virtual link – great. How would this effectively be different from using GLBP to do multiple outbound path routing on two 6513s? If I have two 6513s cross-connected to 4 x 3750E stacks with a dozen servers connected to each 3750E stack (the 3750Es are doing Layer 2, not L3 – it’s an IBM requirement), with the servers having a GLBP-provided default gateway on the 6513 VLANs, would I not get essentially the same thing? I ask because, having looked at port channel interface counters between port-channel-connected 3750E stacks before, I don’t always see 1:1 counters.

