
How The Nicira NVP ESXi vApp Works

Earlier this week, I posted a primer on the Nicira Network Virtualization Platform (NVP), and one thing William Lam pointed out, which I added as a note, is that the Open vSwitch for ESXi is not “integrated” into the hypervisor.  That is in fact true, but there is an Open vSwitch “vApp”, which is really a Virtual Machine you build using an ISO image from Nicira.  I wanted to explain briefly how this works and what you need to think about when deploying this current version of the Open vSwitch for ESXi.  Bear in mind this is how it is done today with NVP 2.2.1; I cannot speak to what changes will come in the future, so this is just to show how it works in the current release.

Open vSwitch vApp Considerations

  1. The Open vSwitch vApp needs to be created and placed 1:1 with each ESXi hypervisor.  Simply put, if you have 10 ESXi hypervisors, each one needs a copy of the OVS vApp running on it.  Also, this Virtual Machine needs to be EXCLUDED from DRS migrations and “pinned” to its respective ESXi hypervisor (see the sketch after this list).
  2. The Open vSwitch vApp needs at least three network adapters for the various connections:  Management, Data Tunnel, Trunk Access
  3. The Open vSwitch Trunk Port uses Promiscuous mode to see any Virtual Machines attached to the vSwitch on the Virtual Machine Integration Bridge port group (see the sketch after this list).
  4. All Virtual Machines connect to the same Port Group that acts as an Integration Bridge for NVP
  5. A Read Only account needs to be established on each ESXi host so the OVS vApp can be configured to connect to the ESXi SDK and gather information on the vSphere switches and Virtual Machines (see the sketch after this list).
  6. There are recommended configurations for this Virtual Machine in order to maintain performance, and I would even add some of my own, like Memory Reservations, to ensure that the OVS vApp gets the physical resources it needs.
  7. The current Nicira NVP OVS vApp is supported and works with ESXi 5.0 and 5.0U1 at this time.
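
To make item 1 above a little more concrete, here is a minimal sketch of excluding the OVS vApp Virtual Machine from DRS with pyVmomi.  This is not part of the Nicira documentation, just one way to script the per-VM override; the vCenter name, cluster name, VM name, and credentials are placeholders, and you can of course set the same DRS override in the vSphere Client instead.

```python
# Sketch: disable DRS automation for the OVS vApp VM so it stays "pinned"
# to its ESXi host. Names and credentials below are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

cluster = find_by_name(vim.ClusterComputeResource, "Lab-Cluster")
ovs_vapp = find_by_name(vim.VirtualMachine, "OVSvApp-esxi01")

# Per-VM DRS override: enabled=False tells DRS to leave this VM alone entirely.
override = vim.cluster.DrsVmConfigSpec(
    operation=vim.option.ArrayUpdateSpec.Operation.add,
    info=vim.cluster.DrsVmConfigInfo(key=ovs_vapp, enabled=False),
)
spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[override])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```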
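
For items 2 through 4, the Trunk port group on the standard vSwitch has to allow Promiscuous mode so the OVS vApp can see the traffic from the Virtual Machines on the Integration Bridge port group.  Below is a rough pyVmomi sketch of creating such a port group directly on a host; the host name, credentials, vSwitch name, and port group name are just examples (the real names come from your own NVP design), and VLAN 4095 is the standard-vSwitch convention for trunking all VLANs.

```python
# Sketch: create a "Trunk" port group on a standard vSwitch with promiscuous
# mode allowed, the way the OVS vApp expects in order to see Integration
# Bridge traffic. All names and credentials are illustrative only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]                        # connected straight to one ESXi host
host_view.DestroyView()

trunk_spec = vim.host.PortGroup.Specification(
    name="OVS-Trunk",
    vswitchName="vSwitch1",
    vlanId=4095,                                # 4095 = trunk all VLANs on a standard vSwitch
    policy=vim.host.NetworkPolicy(
        security=vim.host.NetworkPolicy.SecurityPolicy(allowPromiscuous=True)
    ),
)
host.configManager.networkSystem.AddPortGroup(portgrp=trunk_spec)

Disconnect(si)
```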
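
And for item 5, the Read Only account only needs enough access to query the host through the vSphere SDK.  Here is a quick pyVmomi sketch of what that looks like from the OVS vApp's point of view; again, the host name, user, and password are placeholders.

```python
# Sketch: connect to an ESXi host with a read-only account (the kind the OVS
# vApp is configured with) and list the standard vSwitch port groups plus the
# VMs and the networks their vNICs attach to. Placeholders throughout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="ovs-readonly", pwd="ReadOnly1!", sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]
host_view.DestroyView()

# Standard vSwitch port groups on this host (Management, Data Tunnel, Trunk, VM Integration Bridge, ...)
for pg in host.config.network.portgroup:
    print("Port group:", pg.spec.name, "on", pg.spec.vswitchName)

# Virtual Machines on this host and the networks they are connected to
for vm in host.vm:
    print("VM:", vm.name, "->", [net.name for net in vm.network])

Disconnect(si)
```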

I have illustrated the Open vSwitch vApp Connectivity below.

Key Things to Point Out:

The Data Path

As you can guess, with this current implementation the Virtual Machine data actually flows through the VM Port Group to the Trunk Port Group and ultimately out the Data Port Group through the established tunnel to another Transport Node, so it is in fact passing through the Open vSwitch.  As you can see, this is functional, and I do have it working in my home lab.  This also means that moving Virtual Machines from standard vSphere networking to Nicira NVP may not be that complicated once the fabric is in place.  In theory it could be as easy as changing the Virtual Machine's Port Group connection, and then the Virtual Machine will be communicating over the NVP Fabric.  This assumes the other parts of the Nicira Fabric and the other vSphere DVS requirements are configured first, but I can already see a way to migrate from one network to the other.  I may even record a video of this in my lab to show that it can be done at some point.
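
Just to illustrate the kind of change I mean, here is a rough pyVmomi sketch of repointing a VM's vNIC from its current port group to the Integration Bridge port group.  The VM name, port group name, and credentials are placeholders, and this glosses over all of the NVP-side configuration (transport zone, logical switch, port attachments) that has to exist first; it only shows the vSphere side of the move.

```python
# Sketch: move an existing VM's first vNIC onto the Integration Bridge port
# group so its traffic starts flowing through the OVS vApp and the NVP fabric.
# VM name, port group name, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vm_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "web01")
vm_view.DestroyView()

# Grab the VM's first Ethernet adapter and point it at the Integration Bridge port group.
nic = next(dev for dev in vm.config.hardware.device
           if isinstance(dev, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName="NVP-Integration-Bridge")

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=nic,
)
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)
```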
