It seems that there is a bit of confusion around using vCO workflows with multi-machine blueprints. Before I discuss how to build vCO workflows for multi-machine blueprints, I want to discuss the differences between single machine and multi-machine blueprints and how they relate to each other.
Single Machine Blueprints
Single machine blueprints are pretty straightforward. When a custom property is defined on a single machine blueprint it affects only that machine. Makes sense, right? When we trigger a vCO workflow to run during a state transition of a single machine, it interacts with only that machine. It is important to be mindful of the vCO workflows assigned to single machine blueprints that may be used as component machines of a multi-machine blueprint.
Multi-Machine Blueprints
Multi-Machine blueprints are extremely versatile, allowing single machine blueprints to be grouped together and requested as a single deployment. They are so versatile that you can add single machine blueprints of different types, potentially deployed to different types of endpoints and across geographies. This, however, also makes them somewhat complex, requiring you to be careful and thoughtful about how you structure custom properties and the vCO workflows that you may choose to run on them.
Custom properties defined at the Multi-Machine blueprint level are passed down to the component virtual machines that are part of it. This can be very useful, but it can also be a bit dangerous. Take the Hostname property. If we define a hostname using this property at the Multi-Machine level, it will cause chaos during the deployment and cause it to fail, because all machines will inherit the property and its value and ultimately end up with the same name.
The same is true of many other properties when used at the multi-machine level. You also need to be mindful of the effect of a property across different platforms, provisioning types, and geographies. This becomes even more complicated when executing state transition workflows that run vCO workflows. If you attach a workflow to the multi-machine blueprint, it will in turn become attached to every component machine as well. This can be very useful if you want to execute the workflow on every component machine; however, if that workflow relies on an entity that doesn't exist at the parent multi-machine level, it will again cause chaos for your deployment. The good news is that it doesn't have to, as long as the vCO workflows are built to support the intended result.
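The inheritance behavior can be sketched as a simple property merge. This is only an illustration of the concept, not the actual vCAC implementation; the function name and the assumption that component-level definitions override inherited ones are mine:

```python
def effective_properties(mm_props, component_props):
    """Illustrative merge: component-level values win, everything else is inherited."""
    merged = dict(mm_props)          # start with inherited Multi-Machine properties
    merged.update(component_props)   # component-level definitions override (assumed)
    return merged

mm_level = {"Hostname": "app01", "VMware.VirtualCenter.Folder": "Prod"}
web = effective_properties(mm_level, {})
db = effective_properties(mm_level, {})

# Both components inherit the same Hostname -- the name collision described above
print(web["Hostname"] == db["Hostname"])  # True
```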
In the following walk-through I will be using the Custom vCenter Folders Extension to demonstrate what you can do to account for the Multi-Machine and Single Machine aspects of vCO workflows.
Continue reading “vRealize Automation – vCAC 6.1 – Building vCO workflows for Multi-Machine Blueprints”
vCAC by default will place all provisioned machines into a vCenter folder named VRM. You can override this using the custom property VMware.VirtualCenter.Folder to tell vCAC where to place the provisioned machine. While it is great that you can tell vCAC where to place the provisioned machine, it isn't very flexible. I built the Custom vCenter Folder Extension to fix that and make folder placement as flexible as you need it to be. VM folder placement isn't just about organizing virtual machines; it also provides a way to control access to those machines through vCenter. Many organizations control permissions to these environments using these folders and need to be able to place any machine wherever they need for these purposes.
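Conceptually, the default placement rule reduces to a lookup with a fallback. A minimal sketch (the function name is hypothetical; the VRM default and the property name come from the behavior described above):

```python
def target_folder(custom_props, default="VRM"):
    # Machines land in the "VRM" folder unless VMware.VirtualCenter.Folder overrides it
    return custom_props.get("VMware.VirtualCenter.Folder", default)

print(target_folder({}))                                           # VRM
print(target_folder({"VMware.VirtualCenter.Folder": "Prod/Web"}))  # Prod/Web
```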
Multi-Machine blueprints are another area where this extension adds value. You can control placement of virtual machines by defining the VMware.VirtualCenter.Folder property on a Multi-Machine blueprint, but then the VMs for all Multi-Machine apps are placed in the same folder, creating confusion as to which VMs belong to which Multi-Machine application. Add NSX into the mix and you have Multi-Machine components spread all over the place with no easy way to determine which VMs, as well as NSX Edges, belong to which application.
When used with Multi-Machine blueprints, the Custom vCenter Folder Extension can place all component virtual machines, as well as deployed NSX Edge appliances, in a folder named after the Multi-Machine application if you desire, making it easy to identify the related components of an application. This also allows you to easily grant vCenter permissions on the components of the application if necessary.
- Dynamic Folder Names based on custom naming scheme
- Multi-Machine folder placement including NSX Edge appliances
- Automatic Multi-Machine folder removal when Multi-Machine app is destroyed
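To illustrate the dynamic-naming idea from the first feature above, a folder name can be built by substituting custom-property values into a naming scheme. This token-replacement sketch is purely illustrative; the extension's actual scheme syntax and property names may differ:

```python
def folder_name(scheme, props):
    # Replace each {Token} placeholder with the matching custom-property value
    for key, value in props.items():
        scheme = scheme.replace("{" + key + "}", value)
    return scheme

print(folder_name("{BusinessGroup}/{AppName}",
                  {"BusinessGroup": "Finance", "AppName": "Payroll"}))
# Finance/Payroll
```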
Continue reading “vRealize Automation – vCAC 6.1 – Custom vCenter Folder Extension”
One-to-One NAT environments allow you to perform both SNAT and DNAT for all machines provisioned behind an NSX Edge Gateway. For each machine provisioned onto the One-to-One NAT network, an external IP is added to the Edge Gateway for the NAT translation. The external IP is assigned from the External Network Profile that is assigned to the One-to-One NAT Network Profile. Although the One-to-One NAT network will use NAT translation to communicate with the upstream networks (North-South), it is routed to other networks connected to the same NSX Edge Gateway. When deploying a multi-tier application that has multiple network tiers attached to an NSX Edge Gateway, all the back-end networks are routable, so it's important not to re-use IP space across different Network Profiles.
In the below diagram there are three Multi-Machine apps. Each one has three web servers, two app servers, and two database servers. The database servers are on a private network with no NAT translation to the upstream networks. The app servers are using a One-to-Many NAT network, where they use SNAT to reach the upstream network, and the web servers are using DNAT for both inbound and outbound traffic. You will notice that each of the Multi-Machine apps uses the same IP addresses on the back end; however, the IPs assigned to the NSX Edge Gateways' external interfaces are different. Notice that for the One-to-One NAT there is an equal number of external IP addresses, one per machine. Also notice that in this scenario, where we are using both One-to-One and One-to-Many NAT, the external IPs are on the same subnet. They should all come from the same External Network Profile related to the network that the NSX Edge uplink interface is provisioned to.
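The one-to-one allocation described above can be sketched as pairing each internal IP with a unique external IP drawn from the External Network Profile. This is a conceptual sketch with made-up addresses, not how vCAC or NSX actually implement the allocation:

```python
def allocate_one_to_one(internal_ips, external_pool):
    # One external IP per machine; fail if the profile's range is exhausted
    if len(external_pool) < len(internal_ips):
        raise ValueError("External Network Profile range exhausted")
    return dict(zip(internal_ips, external_pool))

# The same back-end IPs can repeat across apps; the external IPs stay unique
internal = ["192.168.10.10", "192.168.10.11", "192.168.10.12"]
external = ["10.10.50.21", "10.10.50.22", "10.10.50.23"]
nat_table = allocate_one_to_one(internal, external)
print(nat_table["192.168.10.10"])  # 10.10.50.21
```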
Continue reading “vRealize Automation – vCAC 6.1 – Creating a One to One NAT Network Profile”
Private networks have no upstream (North-South) NAT or routing when they are deployed. They are networks attached to the deployed NSX Edge Gateway that have East-West routing to other networks attached to the same NSX Edge Gateway, and that is it. Because of this, unlike the other NSX-related Network Profiles we can create, the Private Network Profile does not need an External Network Profile attached to it. It's simply a range of IPs to be used for the machines provisioned onto the network.
In the below diagram the blue network will be my private network. Machines placed on the blue network will only be able to communicate with machines placed on the orange or green network, and not with anything upstream. I can also limit its communication further by using security policies, which we will discuss as a separate topic.
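Since a Private Network Profile is effectively just a machine IP range with no External Network Profile behind it, it can be modeled as nothing more than a subnet plus a slice of its hosts (the addresses here are illustrative):

```python
import ipaddress

# The "blue" private network: just a subnet and a range of machine IPs
blue = ipaddress.ip_network("172.16.30.0/24")
machine_range = list(blue.hosts())[9:19]  # .10 through .19

print(machine_range[0], machine_range[-1])  # 172.16.30.10 172.16.30.19
```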
Continue reading “vRealize Automation – vCAC 6.1 – Creating a Private Network Profile”
When configuring a NAT Network Profile you have two options: one-to-one or one-to-many. Here we are going to walk through creating a one-to-many NAT Network Profile. One-to-many NAT networks are networks that do source NAT only. This allows any machine provisioned onto the network to communicate out of the network under one IP address; however, no NAT translation is configured to come into the network for any services. When you use a one-to-many NAT network profile in a Multi-Machine blueprint, an NSX Edge Gateway will be deployed, but routing will not be enabled; instead, a Source NAT rule and the relevant firewall rules will be created. NAT networks get IP addresses from a pool of IPs that will be reused over and over again for each deployment. The nature of NAT lets us reuse the IPs because the different apps being deployed will all communicate using unique IP addresses on the outside of the provisioned Edge Gateway. Although I am using a class C network in my example, I really don't need to. If I will never have more than six machines on the NAT network, I could use a /29 network if I wanted to, but for simplicity I used a class C and assigned a fairly large range……just in case 😉
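The /29 sizing claim is easy to check with Python's ipaddress module: a /29 holds eight addresses, six of them usable hosts, versus 254 usable hosts in a class C (/24). The subnet values are illustrative:

```python
import ipaddress

small = ipaddress.ip_network("192.168.100.0/29")
print(small.num_addresses)         # 8 total addresses
print(len(list(small.hosts())))    # 6 usable hosts

class_c = ipaddress.ip_network("192.168.100.0/24")
print(len(list(class_c.hosts())))  # 254 usable hosts
```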
In the below diagram I’m going to represent the orange network as a one-to-many NAT network. All machines provisioned behind the router will get an IP address from the NAT Pool and all will SNAT to the upstream network as the external IP address of the router. The external IP is assigned from the External Network Profile that is assigned to the NAT Network Profile.
Continue reading “vRealize Automation – vCAC 6.1 – Creating a One to Many NAT Network Profile”