If you are looking to try out vRA7 integration with NSX, make sure you upgrade your NSX deployment. This update includes support for the NSX 1.0.3 vRO plugin needed for vRA integration.
New in 6.2.1
The 6.2.1 release delivers a number of bug fixes that have been documented in the Resolved Issues section.
- 6.1.5 fixes: Release includes the same critical fixes delivered in NSX-vSphere 6.1.5.
- Introduced new ‘show control-cluster network ipsec status’ command that allows users to inspect the Internet Protocol Security (IPsec) state.
- Connectivity status: NSX Manager user interface now shows the connectivity status of the NSX Controller cluster.
- Support for vRealize Orchestrator Plug-in for NSX 1.0.3: With the NSX 6.2.1 release, NSX-vRO plugin version 1.0.3 is introduced for use with vRealize Automation 7.0.0. This plugin includes fixes that improve performance when vRealize Automation 7.0 uses NSX for vSphere 6.2.1 as a networking and security endpoint.
- Starting in 6.2.1, NSX Manager queries each Controller node in the cluster to get the connection information between that controller and the other controllers in the cluster.
This is provided in the output of the NSX REST API (“GET https://[NSX-MANAGER-IP-ADDRESS]/api/2.0/vdn/controller” command), which now shows the peer connection status among the controller nodes. If NSX Manager finds the connection between any two controller nodes is broken, a system event is generated to alert the user.
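As a sketch, the peer-status check above can be driven from the command line. The hostname and credentials below are placeholders, and the block only composes the request so nothing is actually sent:

```shell
# Sketch: query controller peer connection status via the NSX REST API.
# NSX_MGR is a placeholder; substitute your NSX Manager address.
NSX_MGR="nsx-manager.example.local"
URL="https://${NSX_MGR}/api/2.0/vdn/controller"
# Compose the request rather than sending it, so the sketch stays inspectable:
echo "GET ${URL}"
# To actually issue it in a lab (-k skips TLS verification; use real credentials):
#   curl -k -u 'admin:password' "${URL}"
```

Since NSX Manager already raises a system event when a peer connection breaks, polling this endpoint yourself is mainly useful for ad hoc troubleshooting.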
- Service Composer now exposes an API that enables users to configure auto creation of Firewall drafts for Service Composer workflows.
This setting can be turned on/off using REST API and the changes can be saved across reboot. When disabled, no draft is created in the Distributed Firewall (DFW) for policy workflows. This limits the number of drafts that are auto-created in the system and provides better performance.
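As a rough sketch of how that toggle might look over REST: the resource path and XML payload below are hypothetical placeholders, not the documented Service Composer API, so consult the NSX API guide for the real resource before use.

```shell
# Sketch only: AUTODRAFT_PATH and the payload are hypothetical placeholders,
# not the documented Service Composer resource; consult the NSX API guide.
NSX_MGR="nsx-manager.example.local"                 # placeholder hostname
AUTODRAFT_PATH="api/2.0/services/policy/autodraft"  # hypothetical path
BODY="<autoDraft>false</autoDraft>"                 # illustrative payload
# Compose the PUT that would disable auto-created DFW drafts:
echo "PUT https://${NSX_MGR}/${AUTODRAFT_PATH}"
# In a lab: curl -k -u 'admin:password' -X PUT -H 'Content-Type: application/xml' \
#   -d "${BODY}" "https://${NSX_MGR}/${AUTODRAFT_PATH}"
```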
NSX-vSphere 6.2.1 Release Notes:
NSX-vSphere 6.2.1 Landing Page:
NSX-vSphere Product Documentation:
Why is there a perception that you have to choose between Cisco and NSX? If you perform a simple Google search you will find many articles that aim to answer the question of Cisco vs. NSX. This is like asking HP or vSphere? It doesn’t make any sense. Cisco and NSX can co-exist in a datacenter; it’s not a one-or-the-other proposition. Let’s face it: Cisco owns the network layer in most datacenters, and they should, they make damn good networking hardware. But that’s just it. They make hardware, much like HP, Dell, and IBM make hardware. It has limitations.
Don’t get me wrong, hardware is a necessary evil for obvious reasons for all types of virtualization, whether it be compute, networking, or storage. I just don’t understand the big debate regarding Cisco vs. NSX. It’s really very simple: keep your existing Cisco hardware and get more out of it with NSX. I hear many making the argument that network virtualization is not needed because you cannot consolidate multiple switches or routers into one. This baffles me as well. If you support this argument or feel it is valid, you don’t understand the value of network virtualization.
Continue reading “VMware NSX – What is with the Cisco or NSX debate?”
If you are familiar with “Network Scopes” from vCNS, then “Transport Zones” should be familiar to you. If not, here is some useful information regarding these zones.
Transport Zones dictate which clusters can participate in the use of a particular network. Before creating your transport zones, some thought should go into your network layout and what you want to be available to each cluster. Below are some different scenarios for transport zones.
In the “MoaC” environment I have three clusters. There is a Management cluster in which all management servers are hosted, including all components of NSX. This will also include all Logical and Edge routers, which we have not yet configured, but the concept is important to know: I will not be placing routers in any cluster other than my Management cluster. I then have a Services cluster which will host all of my provisioned machines that are not part of the core infrastructure, and finally I have a Desktop cluster in which I will be hosting VDI desktop instances.
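Transport zone creation can also be scripted against the NSX REST API rather than clicked through the UI. The sketch below assumes the vdn/scopes endpoint and an illustrative payload shape; both should be verified against the NSX API guide for your version, and the hostname and cluster MoRef are placeholders.

```shell
# Sketch: create a transport zone spanning only the chosen clusters.
# Assumption: the vdn/scopes endpoint and payload shape are recalled from
# the NSX-v API and should be verified; identifiers below are placeholders.
NSX_MGR="nsx-manager.example.local"   # placeholder hostname
CLUSTER_MOID="domain-c7"              # placeholder vCenter cluster MoRef
BODY="<vdnScope><name>TZ-Services</name><clusters><cluster><cluster><objectId>${CLUSTER_MOID}</objectId></cluster></cluster></clusters></vdnScope>"
# Compose the request; in a lab it would be POSTed with curl:
echo "POST https://${NSX_MGR}/api/2.0/vdn/scopes"
#   curl -k -u 'admin:password' -X POST -H 'Content-Type: application/xml' \
#     -d "${BODY}" "https://${NSX_MGR}/api/2.0/vdn/scopes"
```

Listing only the clusters that should see a given network is exactly how the transport zone scoping described above is expressed.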
Continue reading “VMware NSX 6.1 for vSphere – Setting up Transport Zones”
I know you’re excited to get right down to the meat of the installation, but there is some housekeeping that we need to get out of the way first. There are a number of prerequisites that we need to ensure exist in the environment.
- A properly configured vCenter Server with at least one cluster. (Ideally (2) clusters – (1) Management Cluster & (1) Cluster for everything else.)
- Cluster should have at least (2) hosts. (More would be better. Memory will be important)
- You will need to be using Distributed Virtual Switches (DvSwitch) NOT Standard vSwitches.
- If you are NOT running vSphere 5.5 you will need to have your physical switches configured for Multicast. (Unicast requires vSphere 5.5)
- You will need a vLAN on your physical network that you can utilize for VXLAN.
To give you an idea below is the configuration for the “MoaC” lab that I will be working in.
- vCenter 5.5 U2b
- (3) Clusters
- Management Cluster with (2) vSphere ESXi 5.5 U2 Hosts
- 32GB Memory
- Cluster only DvSwitch using NIC Teaming
- Services Cluster with (4) vSphere ESXi 5.5 U2 Hosts
- 196GB Memory
- Cluster only DvSwitch using LAG.
- Desktop Cluster with (2) vSphere ESXi 5.5 U2 Hosts
- 112GB Memory
- Cluster only DvSwitch using LAG.
- Physical vLAN trunked to all vSphere hosts in all clusters.
Continue reading “VMware NSX 6.1 for vSphere Step-By-Step Installation”