vCloud Automation Center – vCAC 5.2 – Virtual Machine LifeCycle Demystified

vCAC has what is referred to as the “Master Workflow,” which makes up the virtual machine lifecycle. The Master Workflow is the set of top-level workflow states that a virtual machine goes through over the course of its life. These workflow states tie pretty closely to the workflow stubs that ship with the Designer, but they are not a direct match. I often see confusion between the workflow states and the workflow stubs, so I’m hoping to clear that up and help everyone understand the difference between them.

Master Workflow States

The vCAC Master workflow states are as follows:

  1. Request State
  2. Approval State
  3. Provision State
  4. Manage State
  5. Expired State
  6. Decommissioned State
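
To make the list above a little more concrete, here is the same set of states expressed as a small Python enumeration. This is purely illustrative on my part; it is not how vCAC actually stores or exposes these states, and the ordering list simply mirrors the numbered sequence above.

```python
from enum import Enum, auto

class MasterWorkflowState(Enum):
    """The six top-level lifecycle states listed above (names only; illustrative)."""
    REQUEST = auto()
    APPROVAL = auto()
    PROVISION = auto()
    MANAGE = auto()
    EXPIRED = auto()
    DECOMMISSIONED = auto()

# The numbered sequence above, in the order a machine moves through it.
LIFECYCLE_ORDER = [
    MasterWorkflowState.REQUEST,
    MasterWorkflowState.APPROVAL,
    MasterWorkflowState.PROVISION,
    MasterWorkflowState.MANAGE,
    MasterWorkflowState.EXPIRED,
    MasterWorkflowState.DECOMMISSIONED,
]
```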

Continue reading “vCloud Automation Center – vCAC 5.2 – Virtual Machine LifeCycle Demystified”

vCloud Automation Center – vCAC 5.2 – Installing the 5.2 Guest Agent on Linux

The Linux guest agent has not changed much since 5.1. You will notice that almost everything except the agent version remains basically the same as in my article on executing scripts with the 5.1 Linux guest agent.

Linux Guest Agent

The Linux guest agent provides a number of features if you choose to utilize it. It is a small agent that acts very similarly to the vCAC proxy agents. When it is installed, you give it the name or IP address of the vCAC server. This allows it to check in with the server when it loads on a newly provisioned machine and determine whether there is anything it needs to do. If the vCAC server has work for the agent, it sends the instructions, and the agent executes them on the local guest operating system. The guest agent comes with a number of pre-built scripts and functions, but it also allows you to execute your own scripts; a rough conceptual sketch of this check-in pattern follows the feature list below. Some of the features available with the Linux guest agent are:

  • Disk Operations – Partition, format, and mount disks that are added to the machine.
  • Execute Scripts – Execute scripts after the machine is provisioned.
  • Network Operations – Configure settings for additional network interfaces added to the machine.
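
Here is that rough conceptual sketch in Python. It is not the actual guest agent or its protocol; the server name, polling interval, script path, and work-item format are all made up for illustration.

```python
#!/usr/bin/env python3
"""Conceptual sketch of a provisioning agent's check-in loop.
This is NOT the real vCAC guest agent -- the hostname, work-item
format, and script path below are hypothetical."""
import subprocess
import time

VCAC_SERVER = "vcac.example.com"  # hypothetical manager address

def fetch_work_items(server: str) -> list:
    """Ask the manager whether there is anything for this machine to do.
    Stubbed out here; the real agent talks to the server over SSL."""
    return []  # e.g. [{"script": "/opt/provision/format_disk.sh"}]

def execute(item: dict) -> None:
    """Run one work item on the local guest operating system."""
    subprocess.run(["/bin/sh", item["script"]], check=True)

if __name__ == "__main__":
    while True:
        for work_item in fetch_work_items(VCAC_SERVER):
            execute(work_item)
        time.sleep(30)  # polling interval chosen arbitrarily
```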

Continue reading “vCloud Automation Center – vCAC 5.2 – Installing the 5.2 Guest Agent on Linux”

vCloud Automation Center – vCAC 5.2 – Installing the 5.2 Guest Agent on Windows

So I have been getting a lot of questions regarding the vCAC 5.2 guest agents. In vCAC 5.2 the guest agents have changed, and there are a few bugs in the Windows installation. Good news for those of you who have upgraded from vCAC 5.1: you don’t need to scramble to move from the 5.1 guest agent to the vCAC 5.2 guest agent. The vCAC 5.1 guest agent will still work as usual as long as you had it configured for SSL. The big driver for the change to the Windows agent is Windows Server 2012. The vCAC 5.1 agent will not work with Windows Server 2012, so if you are planning on using 2012 you will need to use the 5.2 guest agent.

Installing the vCAC 5.2 Windows Agent

You have two options for using the vCAC guest agent. You can pre-install the agent in your templates, or, if you want to keep your templates clean, you can install the agent as part of the Sysprep customization by using customization specifications. For information on auto-deploying the guest agent, see the following post:
Continue reading “vCloud Automation Center – vCAC 5.2 – Installing the 5.2 Guest Agent on Windows”

DailyHypervisor Forums are online.

We have just launched our DailyHypervisor Forum, located at http://www.dailyhypervisor.com/forum. Stop by, contribute, and be a part of our community. The DH Forum is intended to be for all things cloud. Currently we have forums created for vCAC, vCD, vCO, Cloud General, and OpenStack. More forum categories will be coming based on demand. If you have a category you would like to see, shoot us a note and let us know.

Our goal is to create a common place where anyone can come to learn, get help, share ideas, or do just about anything else that will help foster knowledge regarding cloud computing. Considering this very post is the announcement of our forum, you can imagine there isn’t a whole lot happening there yet, so what are you waiting for? Be the first. Go ask a question, post an issue, share a thought, and let’s get things rolling.

vCloud Automation Center – vCAC 5.1 – HTTPS Installation

I have been receiving a lot of questions about HTTPS installations. In this article you will find instructions for performing an HTTPS installation of vCAC 5.1. I am only providing screenshots that differ from the HTTP installation. If you need them, you can refer to the HTTP installation documents.

The HTTP Installation instructions are the following:
vCloud Automation Center – vCAC 5.1 – vCAC Manager Installation
vCloud Automation Center – vCAC 5.1 – DEM Installation
vCloud Automation Center – vCAC 5.1 – Laying the foundation
vCloud Automation Center – vCAC 5.1 – Connecting to vCenter

Getting everything in order

Make sure you have completed all the items in vCloud Automation Center – vCAC 5.1 – What to know before you install.
 
Continue reading “vCloud Automation Center – vCAC 5.1 – HTTPS Installation”

Just some more vSphere information

Here is some information about vSphere that I thought would be good to share with the world. As with everything else, this is just a drop in the bucket. I’m currently working on putting together some upgrade videos and screenshots, so take a look back; hopefully I will have them done by the end of the week.

Here is some interesting information about vSphere and what it supports. Keep in mind these are just some notes I jotted down:

ESX 4 Hosts (vSphere Host)
256 VMs per host
64 cores per host
512 GB RAM per host

vSphere VMs (Hardware version 7)
8 vCPUs
256 GB RAM
VMDirectPath I/O
Hot Plug Support (supports CPUs and memory)
ESX 2.x and 3.x VM support
Paravirtual SCSI adapter
MSCS 2008
Persistent Reservations in vmkernel
LSI Logic SAS (Virtual SAS controller)

Networking Improvements
New iSCSI stack with 10-30% improved performance
TCPIP 2 Support (Based on FreeBSD 6.1 / IPv6 / locking and threading capabilities)
VMXNet3
MSI/MSI-X
Receive Side Scaling
VLAN offloading
VMware DirectPath I/O

Storage Improvements
SCSI-3 Compliant
VMFS still SCSI-2
Target Port Group Support (TPGS)
Asymmetric Logical Unit Access (ALUA)
Pluggable Storage Architecture (PSA)
Updated iSCSI stack
Native SATA

Service Console
64-bit, 2.6-based Linux kernel compatible with RHEL 5.2
Support for both 32-bit and 64-bit applications
Root file system stored in a VMDK
vmkernel runs and owns device drivers; 64-bit only
Address Space Layout Randomization (ASLR)
No Linux dev packages and libraries

CPU
Enhanced Intel SpeedStep
Enhanced AMD PowerNow!

Security
Trusted Platform Module (TPM)
Digitally signed and validated modules
Memory integrity techniques, combined with microprocessor capabilities, to protect against buffer overflows

Guided Consolidation
500 Simultaneous Physical Machines
Modular plug-in that can be installed on a different machine

Converter

Physical / Virtual / 3rd party
Server 2008 Support
Convert Hyper-V machines to VMs

Update Manager

ESX / ESXi and Virtual Appliance Upgrades
Upgrade Virtual Hardware
VMware Tools
Baseline Groups

Upgrade vCenter steps

No SQL 2000 Support
2.x & 3.x Upgrade Path
Upgrade vCenter
Upgrade Update Manager
Use Upgrade Manager to Upgrade Hosts
Upgrade VMware Tools, then the virtual hardware.

vSphere Host Update Utility
3.x to 4.x
Doesn’t upgrade VMFS datastores or VMs
Installs with the vSphere client
Supports rollback for ESX only
Can be used to install patch releases to standalone hosts
Copies script and ISO to the ESX host, reboots, and installs

VMware vSphere 4 (ESX 4.0, vCenter 4.0) Alarms and Host Profiles

Some are speculating that next Tuesday VMware is going to announce the release of VMware vSphere, which is essentially Virtual Infrastructure 4.0 and would include ESX 4.0. I can’t say what VMware is going to do, but over the next few weeks I will be publishing information on vSphere as well as some instructional videos. For now, I have some teasers for you.

Here is a screenshot of the alarms available in vSphere. As you can see, they have expanded the alarm feature from what was available in VI3.

[Screenshot: vSphere alarms]

I’m sure most of you have heard of the new host profiles. If you haven’t had the fortune of checking out this cool new feature, here are some screenshots to show you what options are available to you as part of a host profile. If you are not much for scripting and just can’t stand those pesky automated build scripts, then you will love this feature. It gives you the ability to configure just about every aspect of the ESX host without having to deal with any scripting.

[Screenshots: vSphere host profile configuration options]

As you can see in this screenshot, all of these settings are very easy to set via the GUI.

[Screenshot: vSphere host profile settings in the GUI]

So stay tuned, as there is much more to come. I’m currently working on videos covering installing and configuring vSphere from the ground up, and I plan on getting into all of the new features available in this release.

Network configuration for automated ESX deployment

I have been asked this question a few times, so I thought it would be wise to post an article on it. When deploying with an automated build script whose kickstart and/or installation files are located on HTTP, FTP, or NFS, there are network configuration dependencies that you need to be aware of.

The ESX installer is a modified version of anaconda, which is the same installer used for Red Hat and a few other Linux variants. Anaconda is what allows for the kickstart portion of the automated build script. Anaconda itself has some limitations as far as what it supports.

Anaconda does not support 802.1q VLAN tagging. If you plan on tagging the service console network traffic, this will affect your kickstart installation. The anaconda installer will not tag the traffic with the VLAN ID and therefore will not be able to reach the installation source to perform the installation. You have a few options for handling this.

  1. Don’t have the networking folks tag the VLAN until after the install has finished.  However, this can cause problems if your post-installation script needs to grab files from across the network, so be aware of what you are doing during your post-installation.
  2. Use a dedicated deployment network.  If you use this option, take a look at my ESX 3.x Deployment script #2 located on our download page.
  3. Don’t tag the service console traffic.  If you share vSwitch0 between the vmkernel (vMotion) interface and the service console, only tag the vmkernel traffic.  This still allows for isolation of the traffic.  Have your network folks set the service console VLAN as the native (untagged) VLAN.
  4. Create a custom installation CD with all the necessary files located on the CD.
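
Whichever option you choose, the common thread is that the installer has to be able to reach the kickstart and installation files without any VLAN tag being applied. A quick sanity check before kicking off a build is something like the sketch below, run from a machine sitting on the same untagged/native network the host will install from; the URL is a made-up placeholder for your own deployment server.

```python
#!/usr/bin/env python3
"""Pre-flight check: confirm the kickstart source answers over plain HTTP
from the untagged (native VLAN) network, since the anaconda-based ESX
installer cannot apply 802.1q tags itself. The URL below is hypothetical."""
import sys
import urllib.request

KICKSTART_URL = "http://deploy.example.com/esx/ks.cfg"  # placeholder

def reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError as err:
        print(f"Cannot reach {url}: {err}")
        return False

if __name__ == "__main__":
    sys.exit(0 if reachable(KICKSTART_URL) else 1)
```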

ESX local disk partitioning

I had a conversation with some colleagues of mine about ESX local disk partitioning and some interesting questions were raised.

How many are creating local vmfs storage on their ESX servers?
How many actually use that local vmfs storage?

Typically it is frowned upon to store VMs on local VMFS because you lose the advanced features of ESX such as vMotion, DRS, and HA. So if you don’t run VMs from the local VMFS, then why create it? Creating this local datastore promotes its use just by being there. If you’re short on SAN space, need to deploy a VM, and can’t wait for the SAN admins to present you more storage, what do you do? I’m sure more often than not you deploy to the local storage to fill the need for the VM. I’m also sure that at least 20% of the time those VMs continue to live there.

Is the answer not to use local VMFS storage at all? If you don’t, what do you do with the leftover space? Not all servers are created equal; servers sometimes have different-sized local drives, so you have a few options. Do you create a standard for your partitioning, set a partition such as / to grow, and accept varying configurations among your hosts? Or do you create a standard for all partition sizes and leave the rest of the space raw?

Typically this is the partition scheme I use for all the deployments I do (sizes are in MB).

Boot = 250 (Primary)
Swap = 1600 (Primary)
/ = Fill (Primary)
/var = 4096 (Extended)
/opt = 4096 (Extended)
/tmp = 4096 (Extended)
/home = 4096 (Extended)
vmkcore = 100 (Extended)

This configuration will create inconsistencies amongst hosts with varying drive sizes. To maintain consistency I could do something like the following and leave the rest of the space raw.

Boot = 250 (Primary)
Swap = 1600 (Primary)
/ = 8192 (Primary)
/var = 4096 (Extended)
/opt = 4096 (Extended)
/tmp = 4096 (Extended)
/home = 4096 (Extended)
vmkcore = 100 (Extended)
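
To see what the fixed-size scheme actually costs you, the quick sketch below just sums the sizes above (in MB) and reports how much of a given local drive would be left raw. The example drive size is arbitrary; plug in your own.

```python
#!/usr/bin/env python3
"""Sum the fixed partition scheme above (sizes in MB) and report how much
of a local drive would be left unpartitioned. The example drive size is
arbitrary -- substitute your own."""

FIXED_SCHEME_MB = {
    "/boot": 250,
    "swap": 1600,
    "/": 8192,
    "/var": 4096,
    "/opt": 4096,
    "/tmp": 4096,
    "/home": 4096,
    "vmkcore": 100,
}

def leftover_raw_mb(drive_mb: int) -> int:
    """Space that stays raw when every partition size is fixed."""
    return drive_mb - sum(FIXED_SCHEME_MB.values())

if __name__ == "__main__":
    total_mb = sum(FIXED_SCHEME_MB.values())
    print(f"Fixed allocation: {total_mb} MB (~{total_mb / 1024:.1f} GB)")
    example_drive_mb = 140_000  # roughly a nominal ~146 GB local drive
    print(f"Left raw on a {example_drive_mb} MB drive: "
          f"{leftover_raw_mb(example_drive_mb)} MB")
```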

I’m a fan of utilizing all the space you have available, but others like consistency. What is your preference? Weigh in and let us know.