Now that we have installed and configured NSX, I think it’s time we connected it to vCAC. Version 6.1 brings some changes to the integration between NSX and vCAC, and when I say changes, I really mean some great new changes. The integration now utilizes a vCO plug-in that handles all of the interactions between NSX and vCAC.
Benefits of vCO plug-in for NSX to vCAC integration
The benefits of the vCO plug-in are huge. The workflows that now exist in vCO are there for you to use in your own customizations, giving you the ability to interact with NSX in a custom way without having to code against its API. Personally, I await the day when all integrations work this way.
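To give you an idea of what those workflows save you from, here is a minimal sketch of what a direct call against the NSX Manager REST API looks like from a script. The hostname and credentials are placeholders, and you should confirm the endpoint against your NSX API guide; this is purely for illustration.

```python
# A quick look at calling the NSX Manager REST API directly, the kind of
# plumbing the vCO plug-in workflows now handle for you. The hostname and
# credentials are placeholders; verify the endpoint against your NSX API guide.
import requests

NSX_MANAGER = "nsx-manager.moac.lab"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")           # placeholder credentials

# List the NSX Edges known to the manager.
resp = requests.get(
    f"https://{NSX_MANAGER}/api/4.0/edges",
    auth=AUTH,
    verify=False,   # lab only: NSX Manager usually presents a self-signed cert
)
resp.raise_for_status()
print(resp.text)    # XML payload describing the configured edges
```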
As most of you know, the vCAC appliance has vCO built in, and the built-in vCO server already has the NSX plug-in installed. If you want to use an external vCO you will have to deploy the plug-in to that appliance before trying to connect vCAC to NSX.
I have received a number of questions about the MoaC Lab, so I decided to put together an article covering what the MoaC is. The MoaC, or Mother of all Clouds, lab is a project Tom Bonanno and I (Sid Smith) started to help with sharing information. The goal is to build a lab that will allow us to build the use cases that everyone wants to learn about. It’s not about building a lab with a huge number of resources, but a lab with a huge number of integrations. Integrations that we can document and share with the world.
The MoaC Lab consists of two sites. Site 1 is located in my basement in Harrisburg, PA and Site 2 is located in Tom’s basement near Atlantic City, NJ. The two sites are currently connected using an IPsec VPN running over NSX.
In my previous NSX articles we covered installing and configuring NSX. We discussed deploying and configuring Transport Zones, Logical Switches, Logical Routers, and Edge Gateways, and connecting the Logical Routers to the Edge Gateways. With all of this completed we now have an environment that, with the appropriate routes in place, can transport traffic from our physical network to the logical networks we deployed. The missing piece is the routes. We could go and configure a bunch of static routes across all the NSX routers and our physical routers, but that wouldn’t be fun, and it wouldn’t be automated. In this post I am going to walk through configuring the NSX routers to use OSPF for route distribution.
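As a side note, everything we will do in the Web Client can also be driven through the NSX REST API. Below is a small sketch of reading an edge’s OSPF configuration; the manager address, credentials, and edge id are placeholders, and the endpoint path reflects my reading of the NSX for vSphere routing API, so verify it before relying on it.

```python
# Sketch: reading an NSX Edge's OSPF configuration over the REST API. The
# manager address, credentials, and edge id are placeholders, and the endpoint
# path is my reading of the NSX for vSphere routing API, so verify it first.
import requests

NSX_MANAGER = "nsx-manager.moac.lab"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")           # placeholder credentials
EDGE_ID = "edge-1"                     # placeholder id of the Edge or Logical Router

url = f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/ospf"
resp = requests.get(url, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)  # XML with the <ospf> settings: areas, interface mappings, redistribution

# To enable OSPF or adjust areas and route redistribution you would PUT a
# modified copy of that XML document back to the same URL.
```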
So far we have deployed (2) Logical Switches and (1) Distributed Logical Router and deployed a VM onto each logical switch. Our VMs can communicate with each other across the Distributed Logical Router, but they can’t communicate with anything else. What we now need to do is deploy an Edge Gateway that we will configure to communicate upstream to the physical network and downstream to the logical network. While we could technically just connect the Distributed Logical Router upstream to the physical network, it’s not really a best-practice approach and it’s not a supported approach when integrating with vCAC.
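For those who like to see the moving parts, here is a rough sketch of what deploying an Edge Services Gateway through the NSX REST API looks like. This is not the method used in this walk-through (we will use the Web Client), and every identifier in it, the datacenter, resource pool, datastore, and port group MoRefs, the addresses, and even the exact XML element names, should be treated as placeholders to check against the NSX API documentation.

```python
# Rough sketch of deploying an Edge Services Gateway via the NSX REST API.
# Every MoRef, address, and XML element here is a placeholder based on my
# reading of the NSX for vSphere API; check it against the official docs.
import requests

NSX_MANAGER = "nsx-manager.moac.lab"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")           # placeholder credentials

body = """
<edge>
  <datacenterMoid>datacenter-2</datacenterMoid>      <!-- placeholder datacenter MoRef -->
  <name>MoaC-Edge-Gateway</name>
  <type>gatewayServices</type>                       <!-- an ESG, not a distributedRouter -->
  <appliances>
    <applianceSize>compact</applianceSize>
    <appliance>
      <resourcePoolId>domain-c7</resourcePoolId>     <!-- Management Cluster placeholder -->
      <datastoreId>datastore-10</datastoreId>        <!-- placeholder datastore -->
    </appliance>
  </appliances>
  <vnics>
    <vnic>
      <index>0</index>
      <type>uplink</type>                            <!-- upstream to the physical network -->
      <portgroupId>dvportgroup-20</portgroupId>      <!-- placeholder uplink port group -->
      <addressGroups>
        <addressGroup>
          <primaryAddress>192.168.1.2</primaryAddress>
          <subnetMask>255.255.255.0</subnetMask>
        </addressGroup>
      </addressGroups>
      <isConnected>true</isConnected>
    </vnic>
    <!-- a second vnic of type "internal" would face the transit Logical Switch toward the DLR -->
  </vnics>
</edge>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/4.0/edges",
    auth=AUTH,
    data=body,
    headers={"Content-Type": "application/xml"},
    verify=False,
)
resp.raise_for_status()
print("Edge created:", resp.headers.get("Location"))  # Location header should carry the new edge id
```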
In this walk-through we will be deploying a logical router and configuring routing between the (2) logical networks that we created in an earlier post. Logical routers consist of two components: a virtual appliance that is deployed into your vSphere environment (in the MoaC lab all routers are deployed to our management cluster), and the vSphere kernel module. Remember the host preparation we performed as part of the NSX installation? That was installing the NSX kernel modules.
The NSX Logical Routers perform East-West (VM-to-VM) routing as well as North-South routing. The East-West routing performed by the Logical Routers affords you some extra efficiencies by allowing VM-to-VM communications across different subnets to happen in the vSphere kernel when those VMs reside on the same host. You also gain efficiencies when communicating between VMs on different hosts: traffic traverses host to host instead of needing to go out to a physical router on the network and then back to the other VM. In this post you will witness this as we place a virtual machine on each of the logical switches we created and the Logical Router performs routing between the two networks right in the host’s kernel. Although this specific post focuses on East-West routing within the Logical Router, we will be covering the North-South routing configuration in another post.
NSX Logical Switches can be looked at as the equivalent of a virtual VLAN. They identify the networks that you will be connecting your virtual machines to, and they ride over your VXLAN Transport Zones. Each Logical Switch is assigned a Segment ID that is similar to a VLAN ID; the difference is the packet encapsulation. Each of the exercises I will be writing builds on top of the previous one. If you are reading this and are looking for the preceding articles click here.
During this walk-through you are going to configure (2) Logical Switches that we will use in a later article where we are going to configure Logical Routing. For this article we will only be configuring the Logical Switches.
In my previous article I walked through configuring Transport Zones. I’m going to be using the Desktop Cluster Transport Zone that I created in that article, and I will be creating (2) Logical Switches in the MoaC Lab attached to the Desktop-Transport-Zone.
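If you prefer scripting over clicking, the same two Logical Switches could be created against the NSX Manager REST API along these lines. The switch names, scope id, manager address, and credentials below are placeholders, and the request shape reflects my understanding of the NSX for vSphere API, so double-check it in your environment.

```python
# Hedged sketch of creating two Logical Switches through the NSX Manager REST
# API instead of the Web Client. All names and ids below are placeholders.
import requests

NSX_MANAGER = "nsx-manager.moac.lab"     # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")             # placeholder credentials
SCOPE_ID = "vdnscope-1"                  # placeholder id of the Desktop-Transport-Zone

for name in ("MoaC-Logical-Switch-01", "MoaC-Logical-Switch-02"):   # illustrative names
    body = (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        "<description>Created via API sketch</description>"
        "<tenantId>MoaC</tenantId>"
        "</virtualWireCreateSpec>"
    )
    resp = requests.post(
        f"https://{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
        auth=AUTH,
        data=body,
        headers={"Content-Type": "application/xml"},
        verify=False,
    )
    resp.raise_for_status()
    print(name, "->", resp.text)   # NSX returns the new virtualwire id
```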
If you are familiar with “Network Scopes” from vCNS then “Transport Zones” should be familiar to you. If not, here is some useful information to know regarding these zones.
Transport Zones dictate which clusters can participate in the use of a particular network. Prior to creating your transport zones some thought should go into your network layout and what you want to be available to each cluster. Below are some different scenarios for transport zones.
In the “MoaC” environment I have three clusters. There is a Management Cluster in which all management servers are hosted, including all components of NSX; this will also include all of the Logical and Edge routers, which we have not yet configured, but the concept is important to know. I will not be placing any routers in any cluster other than my Management Cluster. I then have a Services Cluster which will host all of my provisioned machines that are not part of the core infrastructure, and finally a Desktop Cluster in which I will be hosting VDI desktop instances.
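For reference, a Transport Zone can also be created through the NSX REST API rather than the Web Client. The sketch below is an assumption-laden illustration: the cluster MoRef, manager address, and credentials are placeholders, and the endpoint and XML layout should be verified against the NSX API guide.

```python
# Rough sketch of creating a Transport Zone ("vdnScope") via the NSX Manager
# REST API. Endpoint, XML layout, and all ids are assumptions to verify.
import requests

NSX_MANAGER = "nsx-manager.moac.lab"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")           # placeholder credentials

# One member cluster shown; add an entry per cluster that should participate.
body = """
<vdnScope>
  <name>Desktop-Transport-Zone</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c123</objectId>   <!-- placeholder cluster MoRef -->
      </cluster>
    </cluster>
  </clusters>
</vdnScope>
"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes",
    auth=AUTH,
    data=body,
    headers={"Content-Type": "application/xml"},
    verify=False,
)
resp.raise_for_status()
print("New transport zone id:", resp.text)
```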
I know you’re excited to get right down to the meat of the installation, but there is some housekeeping that we need to get out of the way first. There are a number of prerequisites that we need to ensure exist in the environment (you will find a quick check sketched after the list).
A properly configured vCenter Server with at least one cluster. (Ideally (2) clusters: (1) Management Cluster & (1) cluster for everything else.)
Each cluster should have at least (2) hosts. (More would be better; memory will be important.)
You will need to be using Distributed Virtual Switches (DvSwitch) NOT Standard vSwitches.
If you are NOT running vSphere 5.5 you will need to have your physical switches configured for Multicast. (Unicast requires vSphere 5.5)
You will need a vLAN on your physical network that you can utilize for VXLAN.
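If you want a quick way to sanity-check a couple of these items from a script, here is a small pyVmomi sketch that lists your ESXi host versions and your Distributed Virtual Switches. The vCenter address and credentials are placeholders, and this is not part of the NSX installation itself.

```python
# A small pyVmomi sketch (not part of the NSX install) to sanity-check two of
# the prerequisites: ESXi host versions and the presence of Distributed
# Virtual Switches. The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only: skip certificate validation
si = SmartConnect(host="vcenter.moac.lab",    # placeholder vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

# ESXi hosts and their versions (vSphere 5.5 is needed for unicast VXLAN mode).
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name, host.config.product.fullName)

# Distributed Virtual Switches (NSX requires DvSwitches, not standard vSwitches).
switches = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in switches.view:
    print("DvSwitch:", dvs.name)

Disconnect(si)
```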
To give you an idea, below is the configuration for the “MoaC” lab that I will be working in.
vCenter 5.5 U2b
Management Cluster with (2) vSphere ESXi 5.5 U2 Hosts
Cluster-only DvSwitch using NIC Teaming.
Services Cluster with (4) vSphere ESXi 5.5 U2 Hosts
Cluster-only DvSwitch using LAG.
Desktop Cluster with (2) vSphere ESXi 5.5 U2 Hosts
Cluster-only DvSwitch using LAG.
Physical vLAN trunked to all vSphere hosts in all clusters.