Understanding the Nexus 1000v

I noticed that Cisco has been advertising free evaluations of the Nexus 1000v on their website.

http://www.cisco.com/go/1000v

I decided I would give it a go and share my experiences and thoughts. Now I’m by no means an ESX expert, but I have recently started playing more and more with it. I have always been partial to regular VMware Workstation myself.

Background

I’m not going to go into too much detail here, but I wanted to give some background to add some context to the Nexus 1000v.

VMware has come out with this new Distributed Virtual Switch (DVS) concept, and Cisco has basically latched on with the introduction of the 1000v (as can other third parties). Historically, we are used to the integrated VMware virtual switch (vswitch) within the hypervisor on our ESX servers. You can basically think of the Nexus 1000v as a way of doing away with the VMware vswitch and putting a Cisco virtual switch in its place.

So why replace a VMware switch with a Cisco switch? Let’s think about where we typically draw the imaginary line in ownership between our Server and Network teams. If I have an ESX hypervisor hosting 5 different virtual machines, they get connected internally with the VMware soft switch (vswitch). As a server admin, I have to create my port groups (or profiles) and assign them manually to each VM. I could use this to put each VM in its own VLAN, for example. So the server guy is definitely forced to know a little bit about networking, because he now has to manage the vswitch. As a network guy, I handle the port connecting to the physical ESX host. That’s going to be the “demarc” for me, as that’s pretty standard for how far my responsibility traditionally goes. If the server guy thinks there’s a problem with connectivity, I’ll verify everything all the way up to the port that connects to the server. At that point, I’ve verified everything on my end. Things are starting to change with virtualization…

Server virtualization using VMware or blade enclosures has certainly blurred the lines a bit. We are now starting to see network guys working more closely with the server team; server virtualization has contributed a lot to getting both teams to start talking again ;). So now the server guys have to tell me that they have 5 different VLANs behind this one virtual host, so I need to make sure I configure the port connecting to the physical server as a trunk (and allow VLANs w-z). As you can imagine, there is lots of room for communication breakdowns on both sides. I personally have experienced this when working in a data center environment. When bringing a new blade enclosure or VMware server online, we network guys really need to know the details about what’s virtualized within that server.

What I’ve noticed is that the server guys don’t want to deal with all the VLAN and policy creation for a given virtual machine, because that’s more of a network-centric item. I’m sure most server admins would like to just keep everything on one VLAN to keep it simple. Unfortunately, we sometimes need certain customizations at the edge ports connecting to our servers. It can also be a business requirement to make sure certain servers are protected with security policies (ACLs, VLANs, etc.). I’ve had server guys come up to me and tell me they want me to own the connection all the way down to the host, even if it’s virtual. If that’s the case, then we would now be looking at the vswitch level as our new “demarc” or logical divide between ownerships. I don’t see it as a problem, but we need something we can work with, something that’s familiar to us.
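
As a rough example (the interface name and VLAN numbers here are hypothetical, not taken from any real setup), that upstream switchport ends up looking something like this ::

! Hypothetical upstream switchport facing a physical ESX host.
! Interface name and VLAN IDs are examples only.
interface GigabitEthernet1/0/10
 description Uplink to esx-host-01 (carries the VM VLANs)
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40,50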

The vswitch isn’t bad, but if you’re looking for your typical CLI as a network guy, you’re going to be disappointed. While it does have some decent bells and whistles, it doesn’t come close to what we can get from a Cisco switch, for example. As a network guy, I can’t exactly manage the vswitch unless I log into the GUI to do so. I wouldn’t be happy about having to manage each vswitch running on every ESX host I have in a cluster. That sounds like a management-overhead nightmare to me, and I can see why the server guys don’t like it. We need a solution that will suit both parties and ultimately make our lives easier.

If I have 2 ESX hosts running in my cluster, I can move my virtual machines around between physical boxes using VMotion. This is cool, but what about those port profiles I set up on ESX host A? I’d better make sure they exist in both places so that my virtual machine can retain its VLAN, for example. This is something that the server admins have to deal with. Keep in mind, there can be a lot more to a port profile than just a VLAN designation (consider other things like NetFlow, ACLs, etc.). In an ideal world, they follow the VM independent of which physical ESX hypervisor it’s on.
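
That’s exactly what the 1000v’s port profiles give you. Here’s a minimal sketch of one on the VSM (the profile name and VLAN number are made up, and exact syntax can vary by 1000v release) ::

! Defined once on the VSM, pushed to every VEM in the DVS.
! Profile name and VLAN below are hypothetical.
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 20
  no shutdown
  state enabled

The “vmware port-group” line is what publishes the profile to vCenter as a port group the server admin can pick from. Once a VM’s vNIC is assigned to it, the VLAN (and any ACL or NetFlow policy attached to the profile) follows the VM wherever it gets moved.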

The Nexus 1000v does some things to help us out ::

- Get rid of the management overhead of having to deal with separate virtual switches (one for each ESX host).

- Give the network guys something they are familiar with (assuming a Cisco background) and let them own the connection down to the host, virtualized or not.

- Help out the server guys when they need to move a VM to another physical server due to maintenance or problems with the existing server. And make it seamless!!! They don’t like having to reconfigure profiles on the new vswitch.

- Allow server guys to easily assign profiles to a server no matter what ESX host it resides on. Make it so they don’t mess with the network side so much.

- Give server and network guys insight into the virtualized environment. Give virtualized servers the same bells and whistles (ACLs, NetFlow, port spanning/SPAN, VLANs, etc.) we get on physical servers connected to real switches. Help us bridge the virtualization gap!

What does it look like?

Think of the Nexus 1000v architecture as a virtualized modular switch. Think of a Cisco 6500 platform…

[Figure: modular chassis]

Here is the VEM at the ESX host level ::

[Figure: VEM in each ESX host]

Remember, each ESX host basically gets its own Ethernet module, or VEM. The VEM handles all the forwarding-plane duties and doesn’t rely on the supervisor (VSM) to forward traffic. Each VM that is attached to the VEM gets a unique virtual Ethernet interface number (vEth). This vEth number stays with the VM no matter which ESX host it gets moved to (make sure the new ESX host has a VEM too if using VMotion). The VEMs have dedicated uplinks and VLANs that are used to maintain a connection with the VSM. When I need to manage the Nexus 1000v, I log into the VSM (via its management IP address) and have a centralized management scheme. I can do a “show module” on the VSM and see my supervisors (VSM active/standby) and every VEM (or module #) listed out, one for each ESX hypervisor. It’s just like logging into a 6500 chassis and seeing all your modules and supervisors, it’s just logically separated.
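
Here’s roughly what that looks like (illustrative output only; module numbers and port counts are made up, and I’ve trimmed some columns) ::

n1000v# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------  -----------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok

Modules 1 and 2 are the active/standby VSM pair, and module 3 and up map one-to-one to the ESX hosts running a VEM.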

Here’s another look at the logical connection from the VSM to each VEM ::

[Figure: logical connections from the VSM to each VEM]

Here is what things looked like with the traditional VMware vswitch ::

[Figure: the traditional VMware vswitch]

You can have two VSM instances, which are basically the active and standby supervisors you are used to seeing. A VEM, or Ethernet module, exists in each ESX host of your choosing. This allows you to build a switch with up to 64 Ethernet modules (across 64 ESX hosts). Now that’s pretty cool when you think about it. There is no backplane or switch fabric to connect everything together like you have in a typical modular switch. So what has to happen is you create a couple of dedicated VLANs in your network (control and packet) so that the supervisors (VSMs) can communicate with each of their modules. The Ethernet network is now the backplane that links the modules to the supervisors. The cool thing about this architecture (similar to the Nexus 7000) is that the forwarding is handled by the line cards (or VEMs in this case). The supervisor (VSM) just takes care of the control-plane responsibilities and lets you manage the entire DVS, or distributed virtual switch.
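
On the VSM, that glue is configured under the SVS domain. A minimal sketch (the domain ID and VLAN numbers are hypothetical, and syntax can vary by release) ::

! Ties the VSM and its VEMs together over the network "backplane".
! Domain ID and VLANs below are examples only.
svs-domain
  domain id 100
  control vlan 900
  packet vlan 901
  svs mode L2

The control VLAN carries the VSM-to-VEM heartbeat and programming traffic, and the packet VLAN carries things like CDP and IGMP that the VEMs punt up to the VSM. Both VLANs have to be allowed end-to-end between the ESX hosts and the VSM.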

Stick around, I’ll have more information on what you need to get things going, and some help on the configuration side of things. I ran into a lot of roadblocks personally. The documentation is out there, but there are some gaps here and there, especially for someone who’s not an ESX pro! Lots more to say about the 1000v integration with vCenter Server as well.

Pretty amazing stuff imho…

Ted Romer
CCIE No. 21785

7 Responses to “Understanding the Nexus 1000v”

  1. NetBoy says:

    IMHO… Network guys deal with protocols, transmission, media, connectivity, congestion, and the like. A server guy is a server guy, whether it’s software virtual or hardware virtual. Hardware server virtualization and software server virtualization address two different issues. Virtualization is turning into a term as broad and faceless as the Internet. Your information is good, but we need to quit blurring the line so much between what the two address.

    1. Typically, hardware server virtualization (the best example is a blade server) has more raw I/O and processing capability in a smaller physical space.

    The idea is meant for a data center where, for example, you need a lot of processing power, say for hosting game servers, and you want to throw a lot of new servers online to handle the increasing volume.

    Another example would be a web hosting facility or similar server hosting facility where rack space is at a premium.

    In a hosting facility you pay monthly for rack space so the initial costs of the blade server environment can make sense (although this is a delicate balance vs small 1U servers).

    2. Typical software server virtualization (best examples: VMware, Hyper-V, Xen) offers better utilization of hardware resources and lower power consumption.

    This idea is meant for underutilized servers; for example, a corporate environment where hardware is underutilized: a file server that requires a lot of disk space but not much processor or memory, a web server that’s light on hardware, processor, and memory, and maybe a login server that doesn’t do much processing. Make them all virtual machines and save yourself some hardware cost.

    Most corporate servers typically only utilize about 10 to 15% of the hardware’s capabilities. Also, and this is important, software virtualization gives a ton of flexibility.

  2. chris stand says:

    You added vCenter to actually get a blade?

    You are right about the demarc, but this should be an opportunity to work with the server folks to get them to understand VLANs and the like… It will be a win/win.

  3. ted says:

    Chris, I think most server admins can handle the typical network stuff. I think they have more important things to do instead of troubleshooting a network issue. The 1000v helps us help server admins. Now we have something we are familiar with.

  4. ted says:

    Netboy, I agree. Virtualization is a broad term that can be referencing software and hardware. Gotta love marketing buzz words… Good post!

  5. Steve Kaplan says:

    Ted,
    Great post, thanks. This helps clarify a lot for me.
