Archive for the ‘Data Center’ Category

Integrating the Nexus 1000v VSM with vCenter

Monday, August 10th, 2009

This is a follow-up to my previous post regarding the Nexus 1000v. Now that I've set the stage for what the Nexus 1000v really is, we can start looking at what it takes to get one up and running.

The Cisco install guide for the Nexus 1000v is a great reference, and I'll do my best to clarify and fill in the gaps. I won't regurgitate too many of its steps here, so you will definitely want to keep the install guide handy.

First, let's talk about the prerequisites:

vCenter Server: In most cases we have a data center of some sort with multiple ESX hosts, and each host runs its own virtual machines. Within our data center, it's a good idea to define a cluster, or grouping of ESX hosts. Grouping our ESX servers in a cluster lets us take advantage of some of VMware's bells and whistles, such as vMotion and high availability. vMotion is cool because if we need to take a server down for maintenance, we can drag and drop a VM to another ESX host; it can also kick in dynamically if there is an unforeseen ESX server failure. So we define our data center, put our ESX servers in one or more clusters, and have our VMs running on each ESX server. This hierarchical structure is one of the benefits of vCenter, because we can now centrally manage all of our ESX hosts from a single view. Instead of using the vSphere client to connect to each ESX server separately, we connect it to the vCenter server, which lets us manage a whole data center of ESX hosts and their associated VMs. Most importantly, we must be running vCenter Server because that is what the Nexus VSM communicates with directly.
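To give a sense of where this is headed: once vCenter is up and the 1000v's extension key has been registered with it, the VSM gets pointed at vCenter with an "svs connection". Here's a rough sketch of what that looks like on the VSM (the connection name, IP address, and datacenter name below are made-up examples):

```
! On the VSM -- hypothetical names and addressing
svs connection vCenterServer
  protocol vmware-vim
  remote ip address 192.168.1.10
  vmware dvs datacenter-name MyDatacenter
  connect
```

Once this connection comes up, the 1000v shows up in vCenter as a distributed virtual switch. We'll walk through the details later.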


Understanding the Nexus 1000v

Tuesday, August 4th, 2009

I noticed that Cisco has been advertising free evaluations of the Nexus 1000v on their website.

I decided I would give it a go and share my experiences and thoughts. Now, I'm by no means an ESX expert, but I have recently started playing with it more and more. I've always been partial to plain VMware Workstation myself.


I’m not going to go into too much detail here, but I wanted to give some background to add some context to the Nexus 1000v.

VMware has come out with a new Distributed Virtual Switching (DVS) framework, and Cisco has latched onto it with the introduction of the 1000v (as can other third parties). Historically, we are used to the integrated VMware virtual switch (vSwitch) within the hypervisor on our ESX servers. You can basically think of the Nexus 1000v as a way of doing away with the VMware vSwitch and putting a Cisco virtual switch in its place.

So why replace a VMware switch with a Cisco switch? Let's think about where we typically draw the imaginary line of ownership between our server and network teams. If I have an ESX hypervisor hosting five different virtual machines, they get connected internally by the VMware soft switch (vSwitch). As a server admin, I have to create my port groups (or profiles) and assign them manually to each VM; I could use this to put each VM in its own VLAN, for example. So the server guy is definitely forced to know a little bit about networking, because he now has to manage the vSwitch. As a network guy, I handle the port connecting to the physical ESX host. That's going to be the "demarc" for me, as that's traditionally as far as my responsibility goes. If the server guy thinks there's a problem with connectivity, I'll verify everything up to the port that connects to the server, and at that point I've verified everything on my end. Things are starting to change with virtualization…
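With the 1000v, those port groups become "port profiles" that the network guy defines once on the VSM, and vCenter picks them up automatically as port groups for the server guy to assign to VMs. A minimal sketch (the profile name and VLAN are made-up examples):

```
! Defined on the VSM; "vmware port-group" publishes it
! to vCenter as a port group
port-profile WebServers
  switchport mode access
  switchport access vlan 10
  vmware port-group
  no shutdown
  state enabled
```

This is the real point of the product: the networking config moves back across the demarc to the network team, in familiar Cisco CLI.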


Nexus 7000 Field Notice

Wednesday, June 24th, 2009

I was looking around in my Nexus boxes today and noticed that my 48-port RJ45 line card was not showing up in the running config. "show mod" reported the card as "ok," but its interfaces were missing from the config. The model number is 'N7K-M148GT-11', and you need to have hardware version 1.3. This seems to be a side effect of enabling virtual port channels on the Nexus boxes.

Turns out there is a field notice and I need the latest hardware version of the cards.

Open a TAC SR and get an RMA going. I figured it would be an EPLD upgrade issue, but I guess not.

Notice the hardware version in the output from "show mod":

nexus7000# sho mod
Mod  Ports  Module-Type                       Model          Status
---  -----  --------------------------------  -------------  ----------
1    32     10 Gbps Ethernet Module           N7K-M132XP-12  ok
2    32     10 Gbps Ethernet Module           N7K-M132XP-12  ok
3    32     10 Gbps Ethernet Module           N7K-M132XP-12  ok
4    48     10/100/1000 Mbps Ethernet Module  N7K-M148GT-11  ok
5    0      Supervisor module-1X              N7K-SUP1       active *

Mod  Sw      Hw
---  ------  ------
1    4.1(5)  1.3
2    4.1(5)  1.3
3    4.1(5)  1.3
4    4.1(5)  1.0
5    4.1(5)  1.1

Nexus 7000 Virtual Portchannel Part 3 (final)

Thursday, April 30th, 2009

Ok guys, sorry for the delay on this one… Between work and my personal life, I have been so busy!

At this point, we have a working vPC peer link between our Nexus boxes. Remember that our vPC peer keep-alive link needs to be up as part of this (I think of it as the heartbeat link).

So now we need to connect the access switch to the Nexus 7000 boxes to take advantage of virtual port channels. Let’s take a look again at our diagram.


In this example, I'm using one of the newer 4948 TenGigabit switches. There are two TenGigabit ports on the 4948; I'm connecting one to "Nexus1" and the other to "Nexus2". This doesn't have to be 10 Gigabit: it could be any number of 1-Gig ports if you like. For example, you could connect four gig ports to Nexus1 and four gig ports to Nexus2. Remember, you can have up to eight active ports in a port channel.
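A sketch of what the downstream side of this might look like (the interface numbers and the channel-group/vPC numbers here are made-up examples). On the 4948, both TenGig uplinks go into one ordinary port channel:

```
! 4948 (IOS) -- one regular port channel toward the vPC pair
interface range TenGigabitEthernet1/49 - 50
 switchport mode trunk
 channel-group 10 mode active
```

On the Nexus side, each box gets the same port-channel config, and the "vpc" statement ties the two halves together into one logical channel:

```
! Repeat on both Nexus1 and Nexus2 (NX-OS)
interface Ethernet1/1
  switchport
  switchport mode trunk
  channel-group 10 mode active
!
interface port-channel 10
  vpc 10
```

The nice part: the 4948 has no idea anything special is happening. It just sees one LACP port channel to one upstream switch, with no blocked links.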


Nexus 7000 Virtual Portchannel Part 2

Monday, March 30th, 2009

Wow, last week I was swamped preparing for a domestic MPLS migration in India (more on that later). I wanted to post this sooner…

In Part 1 of configuring virtual port channels on the Nexus, I talked about what may be needed to enable the vpc feature. Here’s a continuation of the process.

Between your two Nexus boxes, you're going to have a vPC peer link and a vPC peer keep-alive link. Without the keep-alive link, your vPC peer link won't come up.

Cisco vPC config guide

See Diagram


! Again, the first thing you want to do is enable vpc

Nexus1(config)# feature vpc

The vPC peer link between the switches is going to be your traditional Layer 2 trunk. There are a few other minor configurations you will need for the vPC peer link as well.
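Putting the pieces together, the skeleton on Nexus1 might look something like this (the domain ID, addresses, and port-channel number are made-up examples; Nexus2 mirrors it with the keep-alive source and destination swapped). The keep-alive is commonly run over the mgmt0 interfaces, hence the management VRF here:

```
Nexus1(config)# vpc domain 1
Nexus1(config-vpc-domain)# peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
Nexus1(config-vpc-domain)# exit
Nexus1(config)# interface port-channel 1
Nexus1(config-if)# switchport
Nexus1(config-if)# switchport mode trunk
Nexus1(config-if)# vpc peer-link
```

We'll cover verifying the peer link and hanging access switches off the pair in Part 3.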