Archive for August, 2009

Integrating the Nexus 1000v VSM with vCenter

Monday, August 10th, 2009

This is a follow-up to my previous post regarding the Nexus 1000v. Now that I've set the stage for what the Nexus 1000v really is, we can start looking at what is needed to get one up and running.

The Cisco install guide for the Nexus 1000v is a great reference, and I'll help clarify and fill in the gaps the best I can. I won't regurgitate too many of the steps from the install guide here, so you will definitely want to keep it handy.

First, let's talk about the prerequisites:

vCenter Server — In most cases, we have a data center of some sort with multiple ESX hosts running, and each ESX host has its own virtual machines. So within our data center, it's a good idea to define a cluster, or grouping of ESX hosts. It's beneficial to group our ESX servers in a cluster because we can take advantage of some of the bells and whistles of VMware such as vMotion and high availability. vMotion is cool because if we need to take a server down for maintenance, we can drag and drop a VM to another ESX host. vMotion can also be a dynamic process if there is an unforeseen ESX server failure. So we define our data center, put our ESX servers in one or more clusters, and have our VMs running on each ESX server. This hierarchical structure is one of the benefits of vCenter, because we can now centrally manage all of our ESX hosts from a single view. Instead of using the vSphere client to connect to each ESX server separately, we connect our vSphere client to the vCenter server, which allows us to manage a whole data center of ESX hosts and their associated VMs. Most importantly, we must be running vCenter Server because that's what the Nexus VSM communicates with directly.
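
As a quick illustration of that hierarchy (my own sketch, not something from the Cisco install guide), here is roughly how you could walk the data center, cluster, host, and VM inventory programmatically. I'm using the pyVmomi Python bindings for the vSphere API here, and the vCenter hostname and credentials are made-up placeholders:

# Illustrative sketch: connect to vCenter and walk data center -> cluster -> host -> VM.
# Hostname and credentials are placeholders; assumes the pyVmomi library is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab-only: skip certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    for dc in content.rootFolder.childEntity:
        if not isinstance(dc, vim.Datacenter):
            continue
        print("Data center:", dc.name)
        for cluster in dc.hostFolder.childEntity:
            if not isinstance(cluster, vim.ClusterComputeResource):
                continue
            print("  Cluster:", cluster.name)
            for host in cluster.host:
                print("    ESX host:", host.name)
                for vm in host.vm:
                    print("      VM:", vm.name)
finally:
    Disconnect(si)

Connecting to the vCenter server rather than to each ESX host individually is the same idea as pointing the vSphere client at vCenter: one connection, and you see the whole inventory.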


Understanding the Nexus 1000v

Tuesday, August 4th, 2009

I noticed that Cisco has been advertising free evaluations of the Nexus 1000v on their website.

http://www.cisco.com/go/1000v

I decided I would give it a go and share my experiences and thoughts. Now I’m by no means an ESX expert, but I have recently started playing more and more with it. I have always been partial to regular VMware Workstation myself.

Background

I’m not going to go into too much detail here, but I wanted to give some background to add some context to the Nexus 1000v.

VMware has come out with this new Distributed Virtual Switching (DVS) term, and Cisco has basically latched onto it with the introduction of the 1000v (as can other third parties). Historically, we are used to using the integrated VMware virtual switch (vswitch) within the hypervisor on our ESX servers. You can basically think of the Nexus 1000v as a way of doing away with the VMware vswitch and putting a Cisco virtual switch in its place.

So why replace a VMware switch with a Cisco switch? Let's think about where we typically draw the imaginary line of ownership between our server and network teams. If I have an ESX hypervisor hosting 5 different virtual machines, they get connected internally through the VMware soft switch (vswitch). As a server admin, I have to create my port groups (or profiles) and assign them manually to each VM; I could use this to put each VM in its own VLAN, for example. So the server guy is definitely forced to know a little bit about networking, because he now has to manage the vswitch. As a network guy, I handle the port connecting to the physical ESX host. That's going to be the “demarc” for me, as that's traditionally about as far as my responsibility goes. If the server guy thinks there's a problem with connectivity, I'll verify everything up to the port that connects to the server. At that point, I've verified everything on my end. Things are starting to change with virtualization…
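
To make that division of labor a little more concrete, here's a rough sketch of what the server admin's side looks like when it's done through the vSphere API instead of the GUI. Again, this is just my own illustration using the pyVmomi Python bindings; the host name, credentials, port group name, and VLAN ID are all made up:

# Illustrative sketch: add a port group "VM-VLAN-10" with VLAN 10 to the standard
# vSwitch0 on an ESX host via pyVmomi. Names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab-only: skip certificate validation
si = SmartConnect(host="esx01.example.local", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                        # assumes at least one ESX host
    spec = vim.host.PortGroup.Specification(
        name="VM-VLAN-10",                     # port group the VM's NIC attaches to
        vlanId=10,                             # 802.1Q VLAN tag applied by the vswitch
        vswitchName="vSwitch0",                # the standard (non-distributed) vswitch
        policy=vim.host.NetworkPolicy())
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)
finally:
    Disconnect(si)

Notice that the VLAN assignment lives entirely inside the host's vswitch configuration; from the physical switch side, the network guy typically just sees a trunk port facing the ESX server.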
