Integrating the Nexus 1000v VSM with vCenter

This is a follow-up to my previous post regarding the Nexus 1000v. Now that I've set the stage for what the Nexus 1000v really is, we can start looking at what's needed to get one up and running.

The Cisco install guide for the Nexus 1000v is a great reference, and I'll help clarify and fill in the gaps as best I can. I won't regurgitate too many of the steps from the install guide, so you will definitely want to keep it handy.

First let’s talk about the prerequisites ::

vCenter Server — In most cases, we have a data center of some sort with multiple ESX hosts running, each with its own virtual machines. So within our data center, it's a good idea to define a cluster, or grouping of ESX hosts. Grouping our ESX servers into a cluster lets us take advantage of some of the bells and whistles in VMware, such as vMotion and high availability. vMotion is cool because if we need to take a server down for maintenance, we can drag and drop a VM to another ESX host. vMotion can also be a dynamic process if there is an unforeseen ESX server failure.

So we define our data center, put our ESX servers in one or more clusters, and have our VMs running on each ESX server. This hierarchical structure is one of the benefits of vCenter, because we can now centrally manage all of our ESX hosts from a single view. Instead of using the vSphere client to connect to each ESX server separately, we connect our vSphere client to the vCenter server, which lets us manage a whole data center of ESX hosts and their associated VMs. Most importantly, we must be running vCenter server because that's what the Nexus VSM communicates with directly.

Here’s an example screenshot ::

tromer-xpro = Server name hosting vCenter server
Data Center name = “Lab”
Cluster name = “POC”
ESX host = (only a single ESX hypervisor/server running in this example)
VMs running on ESX host = Nexus VSM and Nostalgia


You can use the following to run vCenter Server :: XP Pro SP2, Win 2k3 Server, or Win 2k8 Server.

Go here to get vCenter Server as well as the ESXi 4 Hypervisor files. You will need to sign up with an email account not hosted by a free site like Yahoo, etc. You can try it free for 60 days. The file name for vCenter Server should look something like this, “VMware-VIMSetup-all-4.0.0-162902.iso”. When you run it, you will get this screen ::


On the box you designate as the vCenter server, install “vCenter Server” and the “vCenter Update Manager”. I’ll explain the vCenter Update Manager later, but it makes your life a lot easier. You can also install the “vSphere Client” on whatever box you choose. This client is what you run to connect to either a single ESX host or, better yet, the vCenter Server. For the most part, you will be using it to connect to the vCenter server, which is how I got the vCenter screenshot above.

ESXi 4 host — When you go to download vCenter, you should be able to download the ESX hypervisor as well. The file name is something like this, “VMware-VMvisor-Installer-4.0.0-171294.x86_64.iso”. The hypervisor is what gives you the ESX host on which you run your VMs. Ideally, you need at least two, but you can start with one hypervisor if you are limited on hardware or resources to run a second. You will want a second one to test moving a VM between ESX hosts (vMotion). You can connect to your ESX host's IP address using the vSphere client, but honestly, I would do everything from the vCenter server since it manages all the ESX hosts in your data center.

Nexus VSM — If you remember from my previous post, the VSM is essentially the supervisor for the virtual switch chassis. It can exist as a virtual machine running on an ESX host, or as a separate physical appliance. You can download it from Cisco here (you need a Cisco login to download this file). In the screenshot above, you can see that I installed it on the one ESX hypervisor I have running so far. For demo purposes, installing the VSM as a VM is perfectly fine (this is valid for production too). Now, it probably makes sense to install the VSM on its own ESX host outside the cluster, but you don't necessarily have to.

So, following along with the Cisco install guide, you should be able to get the VSM up and running. Let's stop and talk about installing and configuring the VSM. The install guide mentions having 3 virtual adapters on your VSM. Think of it as a PC with 3 network cards connecting it to the network.

So we define 3 separate port groups on the vSwitch that's connected to our VSM. Go to the network configuration settings for the ESX host, define each port group, and assign it to a VLAN.

See below ::


Once you have them defined, you can assign them to each of the 3 network adapters you created for the VSM. Think of assigning them as a method of applying a port profile or configuration to an interface. In this case, it's an access port connecting to our VSM (or the VM running the VSM software). The order in which you assign each port group (profile) to each adapter is important, so be sure to read the install guide carefully.
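As an aside, if you'd rather script the port-group creation than click through the GUI, the classic ESX service console can do the same thing with esxcfg-vswitch (on ESXi you'd run the equivalent vicfg-vswitch from the vSphere CLI). This is just a sketch: the port-group names and the 260/261 VLAN IDs are my own placeholders, and VLAN 1 for mgmt matches what I chose earlier.

```
# Create three port groups on vSwitch0 and tag each with its VLAN
esxcfg-vswitch -A "VSM-Management" vSwitch0
esxcfg-vswitch -v 1   -p "VSM-Management" vSwitch0
esxcfg-vswitch -A "VSM-Control" vSwitch0
esxcfg-vswitch -v 260 -p "VSM-Control" vSwitch0
esxcfg-vswitch -A "VSM-Packet" vSwitch0
esxcfg-vswitch -v 261 -p "VSM-Packet" vSwitch0
esxcfg-vswitch -l    # list vSwitches and port groups to verify
```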

This is where you assign the port group or profile to each adapter ::


So at a high level, you are essentially doing this ::


Again, to make your brain not hurt as much, just think of the VSM as any old server with 3 network cards connecting to a switch, each on its own VLAN. Why separate them out into VLANs? Well, each VLAN or port group/interface has a purpose.

The Mgmt interface, along with the VLAN it's associated with, is how you are going to telnet or SSH to the VSM. We telnet to it so we can configure the switch, just like we are used to doing. The Mgmt interface is also important because this is the interface the VSM will use to talk to the vCenter server. On the VSM, or supervisor module, we can define things such as a data VLAN for all my web servers. I define them using the CLI (more on that later), and once they are defined, they get pushed to the vCenter server. This is why vCenter is so important: the VSM is constantly connected to it. So we need to make sure the Mgmt interface has connectivity to the vCenter server. To make your life easier, you can put the vCenter server on the same VLAN you defined for the Mgmt interface on the VSM (I chose VLAN 1).
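For reference, that VSM-to-vCenter relationship is configured on the VSM as an "svs connection". Here's a rough sketch of what it looks like; the connection name and IP address are made up, and the data center name "Lab" matches my screenshot. (Note the VSM's extension plug-in also has to be registered in vCenter first; the install guide covers that.)

```
n1000v# configure terminal
n1000v(config)# svs connection vcenter          ! "vcenter" is just a label
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 192.168.1.10   ! your vCenter IP (example)
n1000v(config-svs-conn)# vmware dc name Lab               ! data center name as defined in vCenter
n1000v(config-svs-conn)# connect
```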

The “packet” and “control” VLANs are used between the VSM and the VEMs. Basically, the supervisor talks to each VEM over two channels; some things it does only over the control link, while other things are done only over the packet channel. The VSM uses the packet VLAN for Layer 2 protocols such as CDP, LACP, and IGMP. The control VLAN is basically what keeps the VEM and the VSM on the same page. If a new host is connected to the VEM, or line card, then it will update the VSM with link status (“Hey, I have a new virtual interface online”). NetFlow information from the VEM is also passed up to the VSM via the control link.
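On the VSM, those two VLANs get tied together under an "svs-domain". A sketch with example values (the domain ID and the 260/261 VLAN IDs are placeholders; use whatever you picked):

```
n1000v(config)# svs-domain
n1000v(config-svs-domain)# domain id 100        ! unique ID tying this VSM to its VEMs
n1000v(config-svs-domain)# control vlan 260
n1000v(config-svs-domain)# packet vlan 261
n1000v(config-svs-domain)# svs mode L2          ! Layer 2 control mode
```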

Make sure the switch port connected to your ESX host (the one running the VSM) is configured as a trunk port allowing whichever 3 VLANs you decide on. Technically, I don't think you have to put them in separate VLANs, but let's do it because it's recommended by Cisco and it really keeps things logically separate.
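On a Catalyst-style upstream switch, that trunk might look something like this. The interface and VLAN IDs are examples only; I'm assuming VLAN 1 for mgmt and 260/261 for control/packet here.

```
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# description Trunk to ESX host running the VSM
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 1,260,261
```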

So something like this ::

VSM with Switch

All of these VLANs you create get referenced again when you finally have the VSM up and running and you boot it for the first time. At this point, you can run through the setup script on the Nexus 1000v VSM. It's just like running “setup” on a Cisco IOS router: a bunch of prompts asking you for input. The install guide outlines this well, so you should be able to follow along with that.
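To give you a flavor of it, the setup dialog asks for things along these lines. This is paraphrased from memory rather than a verbatim capture, and the exact prompts, ranges, and values (domain ID, VLANs, mgmt IP) will vary with your software version and design.

```
Enter the password for "admin":
Enter HA role[standalone/primary/secondary]: standalone
Enter the domain id<1-4095>: 100
Enter control vlan <1-3967, 4048-4093>: 260
Enter packet vlan <1-3967, 4048-4093>: 261
Mgmt0 IPv4 address : 192.168.1.20
Mgmt0 IPv4 netmask : 255.255.255.0
```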

Similar to a 6500 chassis, you can configure multiple VSMs, or supervisor engines, in an active/standby role. In my opinion, a standalone VSM is all you need to start your testing, but know that the option is there.

At this point, let’s do a sanity check to see what we have achieved so far.

-We should have a vCenter server up and running.
-We should have at least one ESX4 server up and running.
-We may also have the VSM running on the same ESX4 server.

Now if you have plenty of machines, then you probably have two or three ESX hosts running, and one of them has the VSM running as a virtual machine. At this point, you should have pointed your vSphere client at your vCenter IP address, defined your data center and cluster, and added all your ESX hosts. You have installed the VSM and run through the basic setup. You may have gotten that far in the guide and connected the VSM to the vCenter server already. Just remember which VLANs you defined earlier, because you'll have to reference them.

So your setup may look like this ::

VSM without VEM

In the example above, we have the VSM running on one ESX box and another ESX box ready for a VM. In my earlier screenshot, you may have noticed I used a VM called “Nostalgia”; this is a free VM you can download from the VM marketplace. It's nice because it needs very few resources and can be a good candidate later for testing vMotion. It's just an easy way to get a VM up and running fast…

Notice that I have NO VEMs running in any of the ESX hosts yet. Both ESX hosts in the diagram still have the VMware vSwitches. You will eventually have to install the VEM software on each ESX host, and then basically add the Nexus VEM, or interface line card, to each ESX host of your choosing. If you installed the Update Manager on the vCenter server, it should automatically download the VEM from VMware and install it on the ESX host. Once it's installed, you will be able to enable it on the ESX box. So right now, when I connect to the VSM and do a “show mod”, I don't see any line cards or VEMs yet. You won't see any until we associate a VEM with an ESX host.
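For what it's worth, a VSM with no VEMs registered shows only the supervisor slot. Roughly like this; abbreviated, illustrative output rather than a verbatim capture:

```
n1000v# show module
Mod  Ports  Module-Type                      Model          Status
---  -----  -------------------------------  -------------  -----------
1    0      Virtual Supervisor Module        Nexus1000V     active *
```

Once a VEM is associated with an ESX host, it shows up as a Virtual Ethernet Module in its own slot, just like a line card in a physical chassis.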

Stick around because there’s a lot more to talk about. I will discuss more about the VEM side of things because this is really where I ran into problems and confusion.

Here’s something to leave you with. Were you wondering whether the Nexus VEM has to totally replace the VMware vSwitch? Can both switches be on the same ESX box? How about this: “Can I have a VSM running on the same ESX host where I have a VEM?” Just remember one thing: the VSM will always be connected to a VMware vSwitch (you can’t put the cart before the horse ;).

So if you're like me, you might decide to start with one ESX host and try to get the VSM running as well as another VM (Nostalgia). I did this all on one ESX box because I was short on RAM (more on the way!). Honestly, I had some trouble understanding this until I put it on paper. This can be a real face-melter until you actually work it out on paper…

Here’s a little tease for you of what a working setup can look like with a VSM and one VEM running on a single ESX host.

VSM with VEM

Yes, the VEM has two uplinks to the network. A little weird, but I will discuss it in more detail in the near future. It's part of that whole “keeping the channels separate” idea between the VSM and the VEM. Basically, the VMware data traffic is isolated from the control and packet traffic. Remember, the communication between the VSM and VEM is crucial and should not be impacted by other (data) traffic, so this is a way of separating the two. More on these uplinks later…

Ted Romer
CCIE No. 21785

5 Responses to “Integrating the Nexus 1000v VSM with vCenter”

  1. Richard Bell says:

    Nice post! Love the detail. Gave me a quick overview of how the Cisco Nexus 1000V really works as I couldn’t tell from Cisco’s docs where the VSM had to run.

  2. vmconsultant says:

    Nice article. Thanks for putting the time and effort in to publishing this. It really helps.

  3. Matt says:

    It seems you haven’t blogged in a while, this is one of the better blogs out there for this product.

    Please please keep going !!!


  4. Manish Chacko says:

    Dude, Long time. Working with Vmotion-that’s awesome-I was a big fan of VMware the 3.5 years at netQoS, but I never got to play with the enterprise stuff-vmotion, esx, vsphere, etc. Only free vmware server, player and workstation.
    Anyways, good blog-you should consider R&D and writing for a university or large firm like IBM-that wya you will never be bored :).
    Sent u a message on linkedin

  5. Jerry Gallagher says:

    Your Visio diagrams show Vnics. On the ESX host side of the Vswitch, the Physical nics should be referred to as vmnics. The Nics on the virtual host side of the Vswitch should be vnics. Does this differ when a Nexus 1000v switch is used?
