
VMware vCloud “In A Box” for Your Home Lab

Introduction:

This post outlines the configuration I personally used for installing and setting up a lab-based “vCloud in a box” for testing and understanding all of the vCloud components. This setup can be done almost 100% virtually, with a single physical server hosting all the components.  Parts of the configuration are unsupported by VMware for production, as well as on certain platforms, however almost all of it works for training and testing purposes with limited hardware.  Although this can be done on other platforms, the storage and servers used here were Iomega and Dell.  The switch was a Netgear GS742TR routing switch, although many other switches at least support VLAN tagging, just not routing functions.  I assume the reader already has knowledge of and experience with setting up switches, VLANs, routing, and vSphere, so I will not detail how to create the VLANs, set up routing, or handle the other network items; if that needs to be detailed I can always update this post later.  If you do not require full routing functions, portions of this can be changed as needed.  The components and configurations listed here are what I used to build my implementation, but you can use some of the information to adapt to your needs.

Server Hardware Used:

Networking Hardware Used:

Storage Hardware Used:

Software Used:

Port Group VLANs Used:

Visio Diagram (Newly Added):

vCloud In A Box Logical Visio Diagram

Storage Configuration of ix-400D:

The ix-400D has dual network adapters, which makes it well suited to dedicating one of them to your “Production” home network and the other to the ESX Lab “NAS” network.  From the list above we can see that one is VLAN 100 and the other is VLAN 120, so we simply tag the physical ports in the switch and assign each port an IP address on the appropriate VLAN.  You also need to enable iSCSI on the NAS and configure root access for NFS connections to work from ESXi.  Once this is complete you will be able to add the VMkernel port for NAS/iSCSI access over the dedicated layer 2 link.  I chose not to enable CHAP authentication since this setup is all private and separated.  I created a 250GB iSCSI volume as well as a NAS file share; the NAS share will require the IP addresses of the VMkernel ports to be added to its security list.
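If you prefer to do this step from the command line instead of the vSphere Client, a minimal sketch of creating that VMkernel port on the NAS VLAN from ESXi Tech Support Mode looks roughly like the following.  The port group name and IP address are placeholders for my 120 VLAN layout, not values taken from the screenshots:

    esxcfg-vswitch -A "VMkernel-NAS" vSwitch0           # add a port group for storage traffic
    esxcfg-vswitch -v 120 -p "VMkernel-NAS" vSwitch0    # tag it with the NAS VLAN (120)
    esxcfg-vmknic -a -i 192.168.120.20 -n 255.255.255.0 "VMkernel-NAS"   # create the VMkernel port with an address on the 120 network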

Note:  Port #1 is VLAN 100 and Port #2 is VLAN 120 for ESX NAS Access

Physical Host Networking Setup:

The physical host was installed with ESXi 4.1 with basic vSwitch networking initially.  I elected to configure both physical NICs as a LAG to simplify the physical connectivity given all of the virtual networking.  Below you can see the port groups configured; highlighted is VLAN 4095.  This is a key VLAN because the “virtual” ESX VMs will use it to see all the same VLANs as the physical host, fully mimicking a true VLAN network configuration.  If you don’t do this you will have to add multiple NICs to each virtual ESX server to access the port groups.  This gets a little confusing, so after playing with it I decided to go the 4095 route, which works perfectly.  Also pictured are the vSwitch security settings, which must be changed (promiscuous mode set to Accept) for the follow-on nested ESX virtual machines to work, boot nested VMs of their own, and access the networks.  This is a key limitation that makes this setup not well suited for production networks.
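For reference, the same trunked port group can be sketched out from the command line; the port group name below is hypothetical, and the security change still has to be made on the vSwitch as shown in the screenshot:

    esxcfg-vswitch -A "Nested-Trunk" vSwitch0            # port group the virtual ESX VMs will attach to
    esxcfg-vswitch -v 4095 -p "Nested-Trunk" vSwitch0    # VLAN 4095 passes all VLAN tags through to the guest
    # Promiscuous mode (and typically forged transmits) must still be set to Accept
    # in the vSwitch security policy for the nested ESX hosts to pass traffic.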

Physical Host Storage Setup:

Now that we have the storage device configured, ESXi installed, and the networking all configured, we need to add the NAS storage to the physical host so we can use it for various items.  We will also mount this networked storage within the nested ESX virtual machines when we get there.  Configure a simple iSCSI software initiator to connect to the iSCSI storage.  However, if you use dynamic discovery, ESX will find the iSCSI target available on BOTH the 100 and 120 VLANs, so you will need to take one extra step to force iSCSI over the 120 VLAN.  The following screens will show these configurations.  Be sure to also add the NFS mount point; for the Iomega device the path for NFS is /nfs/.
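As a rough command line sketch of these storage steps, assuming the Iomega sits at 192.168.120.10 on the 120 VLAN, the software iSCSI adapter is vmhba33, the storage VMkernel port is vmk1, and the NFS share name below is just a placeholder:

    # Mount the Iomega NFS export as a datastore (the path on the device is /nfs/<share name>)
    esxcfg-nas -a -o 192.168.120.10 -s /nfs/labshare NFS-LAB

    # Add the array to dynamic discovery and bind the VLAN 120 VMkernel NIC so iSCSI stays on 120
    vmkiscsi-tool -D -a 192.168.120.10 vmhba33
    esxcli swiscsi nic add -n vmk1 -d vmhba33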



Setting Up The Virtual Machines to host vCloud Components:

Now that we have all the parts of the ESX host in place, we can get into some of the actual vCloud Director requirements.  First and foremost is the need to set up all the Windows machines that host the basic vSphere requirements.  We can do this by creating an initial Windows 2008 R2 x64 template.  From there we need the following virtual machines configured.  I elected to use completely separate virtual machines for all of these roles.

Once these machines are configured, we can move on to the required virtual machines for the VMware vCloud Director components.  I don’t see the need to fully detail the install and configuration of vCenter, AD, and SQL.  The virtual machines required for vCloud Director are the following.  All of these should have IP addresses on VLAN 110 to maintain isolation of the machines from your other networks.  If you choose to create VLAN routes in the core switch, that will provide you access to these devices; otherwise all of them will be isolated.  In my case I do have a VLAN route from the Production 100 VLAN to the 110 VLAN, only for terminal services, SSH, and remote console access.

I will not go over the installation of vCD or its components because those are available in the VMware vCloud Director documentation on the VMware website.  I will go into a few key configurations of the virtual ESXi servers, as there are a few quirks to be aware of.

Setting Up The Virtual ESXi Servers:

ESXi 4.1 by default does not support a guest operating system type of ESX, so in order to make that work we need to create a specially configured .VMX file to support the installation of an ESXi virtual machine.  Below are the steps to create the correctly configured virtual machines to run ESXi; a rough sketch of the resulting .vmx entries follows the steps.  These steps are generally outlined on other sites, but this is the minimum needed to not only install ESXi, but to actually allow a NESTED virtual machine to run on it.  These two ESX hosts will become the vCD “Compute” cluster that you will point vCD to for resources.

Step #1:

Step #2:

Step #3:
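As a minimal sketch to go along with the steps above (not an official configuration), the .vmx additions commonly used at the time to run ESXi 4.1 as a guest, and to let it power on its own nested VMs, look roughly like this:

    guestOS = "vmkernel"                          # identifies the guest as ESX/ESXi
    monitor_control.restrict_backdoor = "TRUE"    # allows the nested hypervisor to run its own VMs
    ethernet0.virtualDev = "e1000"                # a NIC type the ESXi 4.1 installer recognizes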

Summary:

Although this article has not detailed the individual configurations of ALL the components, the purpose here was to detail the single-server solution to run VMware vCloud Director in your home lab.  Once you have these components you can expand on your configuration as you see fit.  To date I have also added the following:


Below is a screen shot of all the virtual machines running on my single Dell server.  Although not all of them are currently powered up, they usually are.  I’d say this is a testament to ESXi 4.1 running on a single server with all this going on.

I hope you have found this useful for setting up your own home lab for VMware vCloud Director.  Please provide some suggestions for ways to improve this article; I’d be happy to edit and add to it as feedback comes in.  The nice part about doing all this virtually is that as you want to update and change things, you can make use of snapshots or templates to quickly deploy new components or update the ones you have.
