ESX automated deployment email completion notification

How would you like to kick off your ESX installation, then go grab some coffee, go for a jog, or just hang out by the water cooler until it is finished, without worrying that the install is sitting there done and waiting for you? Well, you can with this ESX email script. Incorporating this script into your ESX automated deployment script lets you configure your server to email you once the post-installation configuration is finished.

So what do you need to do? It's simple: grab the mail_notify script, which I originally found on yellow-bricks.com, from our downloads page. Once you have the script you will need to get it onto your server along with the MIME::Lite Perl module (Lite.pm), which you can download here. Once you download and extract the package you will find the Lite.pm file under the /lib/MIME/ folder.

Then take the Lite.pm file and the mail_notify.pl file and tar them together for easy retrieval, and upload the resulting mail_notify.tar file to your web server.
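If you're working in the directory where you extracted everything, the bundling and upload steps might look something like this (the destination path and account are just examples; adjust them for your web server):

tar cvf mail_notify.tar mail_notify.pl Lite.pm
scp mail_notify.tar user@yourwebserver:/var/www/html/path/

Next, include the following in your automated deployment script: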

##### Setting up Mail Notification ########
echo Setting up mail notification
echo Setting up mail notification >> /var/log/post_install.log

cd /tmp
lwp-download http://[server ip]/path/mail_notify.tar
tar xvf mail_notify.tar
mkdir /usr/lib/perl5/5.8.0/MIME
mv Lite.pm /usr/lib/perl5/5.8.0/MIME/

##### Move the files to where they belong #######
mv mail_notify.pl /usr/local/bin/
chmod +x /usr/local/bin/mail_notify.pl

####### Let’s send an email that the install is finished #####
/usr/local/bin/mail_notify.pl -t youremail@yourdomain.com -s "Server installation complete" -a /var/log/post_install.log -m "Server installation complete, please review the attached log file to verify your server installed correctly" -r [your smtp server]

Optionally, you can set the SMTP server inside the mail_notify.pl script itself so you don't have to specify it every time you send a message.

If you include this at the end of the post-installation portion of your script, but before the EOF line, you will get a nice email notification informing you that your installation has finished, with the post_install.log file attached.

Network configuration for automated ESX deployment

I have been asked this question a few times, so I thought it would be wise to post an article on it. When deploying an automated build script with the kickstart and/or installation files located on an HTTP, FTP, or NFS server, there are network configuration dependencies that you need to be aware of.

The ESX installer is a modified version of Anaconda, the same installer used for Red Hat and a few other Linux variants. Anaconda is what allows for the kickstart portion of the automated build script, and it has some limitations as far as what it supports.

Anaconda does not support 802.1Q VLAN tagging. If you plan on tagging the Service Console network traffic this will affect your kickstart installation: the Anaconda installer will not tag the traffic with the VLAN ID and therefore will not be able to perform the installation. You have a few options for how to handle this.

  1. Don't have the networking folks tag the VLAN until after the install has finished.  However, this can cause problems if your post-installation script needs to grab files from across the network, so be aware of what you are doing during your post-installation.
  2. Use a dedicated deployment network.  If you use this option take a look at my ESX 3.x Deployment script #2 located on our download page.
  3. Don't tag the Service Console traffic.  If you share vSwitch0 between the VMkernel (vMotion) interface and the Service Console, tag only the VMkernel traffic; this still allows for isolation of the traffic.  Have your network folks set the Service Console VLAN as the native (untagged) VLAN (see the sketch after this list).
  4. Create a custom installation CD with all the necessary files located on the CD.
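If you go with option 3, the ESX side of it boils down to putting a VLAN ID on the vMotion port group and leaving the Service Console port group untagged. A minimal sketch for the post-installation section of your script, assuming the default port group names and an example VLAN ID of 105 (adjust both for your build):

esxcfg-vswitch -p "vMotion" -v 105 vSwitch0
esxcfg-vswitch -p "Service Console" -v 0 vSwitch0    # 0 = untagged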

Using the Ultimate Deployment Appliance to test ESX kickstart scripts – Part II

In Part 2 of this series we are going to deploy our virtual ESX host in a VMware Workstation 6.5 virtual machine. We will utilize the UDA setup that we created in the first part of this series, so if you haven't set up your UDA you will want to do so before proceeding. Make sure you check out the sample deployment scripts available on our download page. In this example I am deploying VMware ESX 3.5 Update 4 in VMware Workstation 6.5 build 126130.



Using the Ultimate Deployment Appliance to test ESX kickstart scripts – Part I

In this series I am going to walk you through setting up the Ultimate Deployment Appliance (UDA) and VMware Workstation 6.5 to test automated ESX deployment scripts (kickstart).  The same principles that you will learn in this video also apply to using the UDA in a physical environment. The UDA is a very powerful appliance and I have found many uses for it; using it as a medium to quickly and effectively test the deployment scripts I develop is just one of them.

Even in environments where the UDA is not allowed it can still be utilized. I regularly carry a 5-port gigabit switch that I can use to connect my laptop to up to four servers and quickly deploy up to four ESX hosts at a time.



Deploying Automated Kickstart Scripts Over HTTP

Originally I was going to cover all the various options for initiating your automated kickstart installation as “Automated Deployment of ESX Hosts Part IV”, but I have since decided to cover each method individually, as there is a lot of material and it makes more sense to break them out.

In this post I am going to cover deploying your servers over the network utilizing HTTP. You will need a few things in place for this to work:

  • A web server to host the kickstart files and, optionally, your ESX installation files
  • ESX installation media or ISOs for all versions of ESX you plan to deploy
  • Your kickstart script

The first thing we need to do is set up our web server so we can host our kickstart files and, optionally, our installation files.  You can use Apache, IIS, or whatever your favorite web server is.  You will need to create a folder under your web server root for the files to be stored.  Below is my recommended structure.

-webroot
—deployment
——ESX35U1
——ESX35U2
——ESX35U3
——ESX35U4
——kickstart
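On a Linux/Apache box, that structure could be created with something like the following (the web root path is an assumption; adjust it to match your server):

mkdir -p /var/www/html/deployment/{ESX35U1,ESX35U2,ESX35U3,ESX35U4,kickstart}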

Once the folder structure is created we need to copy the contents of the installation media into the respective folder. To do this you literally copy everything on the CD and place it in the folder. Next, copy your kickstart.cfg files into the kickstart folder.
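As a rough sketch, staging one release on a Linux web server might look like this (the ISO filename, mount point, and web root are assumptions):

mkdir -p /mnt/iso
mount -o loop /tmp/esx-3.5.0-U4.iso /mnt/iso
cp -a /mnt/iso/. /var/www/html/deployment/ESX35U4/
umount /mnt/iso
cp kickstart*.cfg /var/www/html/deployment/kickstart/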

Once you have all the files uploaded to the web server it is a good idea to use your web browser to test that you are able to access them.
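You can also do a quick check from the command line on any machine with curl; an HTTP 200 here means the installer should be able to reach the file as well (the URL is an example):

curl -I http://server_IP/deployment/kickstart/kickstart.cfg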

As part of our kickstart we define where we are going to install from with the following line, replacing server_IP with your server's IP address and ESX35U4 with the version you would like to install.

url --url http://server_IP/deployment/ESX35U4

If you want to pull just your kickstart.cfg files from the HTTP server but install from the local CD media, you would replace the above line with "cdrom" to tell the kickstart to look to the CD-ROM drive for the installation media.
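In other words, the media-source line in your kickstart is one of these two forms, depending on where the installation files live:

url --url http://server_IP/deployment/ESX35U4
cdrom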

Now that we have our web server up, our installation copied to our webserver, and our kickstart.cfg files on the server we can kick off our kickstart installation.

To do this we need to boot the server from the installation CD. You can boot from a CD in the CD-ROM drive or one remotely mounted over a lights-out interface like iLO, DRAC, or RSA. If you are going to remote mount the CD over a lights-out connection you can use a much smaller portion of the ESX CD.

On your ESX installation media there is an ISO file named boot.iso located under the "images" folder on the CD. You can extract that ISO image, which is roughly 4 MB, and remote mount it to your server for the boot process if you intend to install over HTTP.

OK, so now we boot off of our media, either the full ESX CD or the boot.iso image, and when the ESX installation screen appears we need to tell the installer where to find the kickstart file. There are a couple of options for this:

If you are using DHCP then your installation string will look similar to the following:

esx append ip=dhcp ksdevice=eth0 network ks=http://server_name/deployment/kickstart/kickstart.cfg

If you are not using DHCP it would look similar to the following:

esx append ip=192.168.1.2 netmask=255.255.255.0 gateway=192.168.1.1 ksdevice=eth0 network ks=http://Server_IP/deployment/kickstart/kickstart.cfg

The ksdevice=eth0 statement tells Anaconda (the installer) to use the eth0 interface for the install. I recommend always using eth0 for your installs. ESX will, by default, make the install interface the Service Console interface, so it will become the interface assigned to vSwitch0.

If you are using a separate kickstart file for each server, you can call each one by name (see the example below). If you are using a script like the one I discuss here, you will only need one kickstart file.
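For example, with one kickstart file per host, the boot strings would differ only in the filename (the hostnames here are just placeholders):

esx append ip=dhcp ksdevice=eth0 network ks=http://server_name/deployment/kickstart/esxhost1.cfg
esx append ip=dhcp ksdevice=eth0 network ks=http://server_name/deployment/kickstart/esxhost2.cfg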

ESX 3.x Deployment Script # 3

This script is very similar to ESX 3.x Deployment Script #1, but I made a handy change: I built this script to allow for easier modification for each ESX host you want to deploy. Once you change all the settings you need changed, there is one important area where you will add information about all of your ESX hosts.

Below is the area that you will need to be concerned with:

if [ "`hostname -s`" == "esxhost1" ] ; then
esxcfg-vswif -i [Service_Console_IP] -n [Service_Console_Netmask] vswif0
esxcfg-vmknic -a -i [VMKernel_IP] -n [VMKernel_Netmask] "vMotion"
fi

You will create one of these if statements for each of the ESX hosts you want to deploy. Once you set up each server's information in this area, all you need to do is change the hostname to match the server you are deploying and that is it. If you use DHCP to set the initial installation IP and it resolves to the appropriate hostname, you won't even have to change the script.

For example if you change this line:

network --device eth0 --bootproto static --ip [SC IP ADDRESS] --netmask [SC NETMASK] --gateway [SC GATEWAY] --nameserver [NAMESERVERS comma separated] --hostname [HOSTNAME] --addvmportgroup=0

to the following:

network --device eth0 --bootproto dhcp

and then add the following, setting the appropriate IP addresses and hostnames:

if [ "`hostname -s`" == "esxhost1" ] ; then
esxcfg-vswif -i [Service_Console_IP] -n [Service_Console_Netmask] vswif0
esxcfg-vmknic -a -i [VMKernel_IP] -n [VMKernel_Netmask] "vMotion"
fi

if [ "`hostname -s`" == "esxhost2" ] ; then
esxcfg-vswif -i [Service_Console_IP] -n [Service_Console_Netmask] vswif0
esxcfg-vmknic -a -i [VMKernel_IP] -n [VMKernel_Netmask] "vMotion"
fi

and you set up each ESX server in DHCP and DNS, you will never need to modify this script. You do need to ensure that the DNS servers and gateway the server initially gets from DHCP are correct. If you are doing this on a different subnet than the one your ESX server will ultimately run on, you will need to do this a little differently; that can be done with my ESX 3.x Deployment Script #2.

I have included a script with this code in our download section.

VI Toolkit PowerShell simple script #4 – VM Information

This is a good PowerShell script for tracking virtual machine information for change management. It will output the VM's name, the host it is on, the power state, memory, number of CPUs, IP address, and FQDN to a CSV file.


# Calculated properties that pull the IP address and FQDN from the guest info
$IPprop = @{ Name = "IP Address"; Expression = { $_.Guest.IpAddress } }
$HostNameProp = @{ Name = "Hostname"; Expression = { $_.Guest.Hostname } }
# Export name, host, power state, memory, CPU count, IP address, and FQDN to a CSV file
Get-VM | Select-Object Name, Host, PowerState, MemoryMB, NumCpu, $IPprop, $HostNameProp | Export-Csv c:\vm_info.csv

Fixed: VMware Tools status shows as not running after running VMware Consolidated Backup

A while back I mentioned that VMware Tools would appear to change to a "not running" status after a VCB snapshot was taken. VMware said a fix would be forthcoming in ESX 3.5 U4. VMegalodon posted on the communities this morning that he is running VC 2.5 U3 and ESX 3.5 U4 (which is probably a bad combination…) and the VMware Tools issue appears to be corrected.

So, what are you waiting for?? Get to upgrading!

Thanks VMegalodon!

VMware HA Cluster Sizing Considerations

To properly size an HA failover cluster there are a few things that need to be determined.  You need to know how many hosts are going to be in your cluster, how many host failures you want to be able to tolerate (N+?), and it helps to know resource utilization information about your VMs to gauge fluctuation.  Once we know this information we can use a simple formula to determine the maximum utilization for each host that still maintains the desired failover level.

Here is an example:

Let's say we have 5 hosts in an HA/DRS cluster and we want to be able to lose one host (N+1).  We also want to have 10% overhead on each server to account for resource fluctuation.  First we take 10% off the top of all 5 servers, which leaves us with 90% usable resources on each host.  Next we need to account for the loss of one host: in the event that a host is lost, we need to distribute its load across the remaining 4 hosts.  Dividing that host's 90% of possible resources by the 4 remaining hosts tells us we need to distribute 22.5% of its load to each of the remaining hosts.

Taking into account the original 10% overhead plus the 22.5% capacity needed for failover, we need to keep 32.5% of each host's resources available, which means we can only utilize 67.5% of each host in the cluster to maintain an N+1 failover cluster with 10% overhead for resource fluctuation.  The formula for this is:

((100 - %overhead) * #host_failures) / (#hosts - #host_failures) + %overhead = % reserved per ESX host

Example 1:

((100-10)*1)/(5-1)+10 = 32.5    
(5-server cluster with 10% overhead allowing 1 host failure) 67.5% of each host usable

Example 2:

Failover of 1 host

((100-20)*1)/(8 -1)+20 = 31.4
(8-server cluster with 20% overhead allowing for 1 host failure) 68.6% of each host usable

Failover of 2 hosts

((100-20)*2)/(8 -2)+20 = 46.6
(8-server cluster with 20% overhead allowing for 2 host failures) 53.4% of each host usable
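If you want to sanity-check these numbers, here is a quick throwaway calculation you can run at any bash prompt (just a sketch of the formula above; it assumes bc is installed and the variable names are my own):

overhead=10      # % headroom per host for resource fluctuation
failures=1       # number of host failures to tolerate
hosts=5          # total hosts in the cluster
reserve=$(echo "scale=1; (100 - $overhead) * $failures / ($hosts - $failures) + $overhead" | bc)
echo "Reserve ${reserve}% per host, leaving $(echo "100 - $reserve" | bc)% usable"    # 32.5% / 67.5%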

Determining the %overhead can be tricky without a good capacity assessment, so be careful: if you don't allocate enough overhead and you have host failures, performance can degrade and you could experience contention within the environment.  I know some of the numbers seem dramatic, but redundancy comes with a cost no matter what form it takes.

VMware ESX 3.5 Update 4 Released

What’s New

Notes:

1. Not all combinations of VirtualCenter and ESX Server versions are supported and not all of these highlighted features are available unless you are using VirtualCenter 2.5 Update 4 with ESX Server 3.5 Update 4. See the ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes for more information on compatibility.
2. This version of ESX Server requires a VMware Tools upgrade.

The following information provides highlights of some of the enhancements available in this release of VMware ESX Server:

Expanded Support for Enhanced vmxnet Adapter — This version of ESX Server includes an updated version of the VMXNET driver (VMXNET enhanced) for the following guest operating systems:

* Microsoft Windows Server 2003, Standard Edition (32-bit)
* Microsoft Windows Server 2003, Standard Edition (64-bit)
* Microsoft Windows Server 2003, Web Edition
* Microsoft Windows Small Business Server 2003
* Microsoft Windows XP Professional (32-bit)

The new VMXNET version improves virtual machine networking performance and requires a VMware Tools upgrade.

Enablement of Intel Xeon Processor 5500 Series — Support for the Xeon processor 5500 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

QLogic Fibre Channel Adapter Driver Update — The driver and firmware for the QLogic fibre channel adapters have been updated to version 7.08-vm66 and 4.04.06 respectively. This release provides interoperability fixes for QLogic Management Tools for FC Adapters and enhanced NPIV support.

Emulex Fibre Channel Adapter Driver Update — The driver for Emulex Fibre Channel Adapters has been upgraded to version 7.4.0.40. This release provides support for the HBAnyware 4.0 Emulex management suite.

LSI megaraid_sas and mptscsi Storage Controller Driver Update — The drivers for LSI megaraid_sas and mptscsi storage controllers have been updated to version 3.19vmw and 2.6.48.18 vmw respectively. The upgrade improves performance and enhances event handling capabilities for these two drivers.

Newly Supported Guest Operating Systems — Support for the following guest operating systems has been added specifically for this release:

* SUSE Linux Enterprise Server 11 (32-bit and 64-bit).
* SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit).
* Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit).
* Windows Preinstallation Environment 2.0 (32-bit and 64-bit).

For more complete information about supported guests included in this release, see the Guest Operating System Installation Guide: http://www.vmware.com/pdf/GuestOS_guide.pdf.

Furthermore, pre-built kernel modules (PBMs) were added in this release for the following guests:

* Ubuntu 8.10
* Ubuntu 8.04.2

Newly Supported Management Agents — Refer to VMware ESX Server Supported Hardware Lifecycle Management Agents for the most up-to-date information on supported management agents.

Newly Supported I/O Devices — This release adds in-box support for the following on-board processors, I/O devices, and storage subsystems:

SAS Controllers and SATA Controllers:

The following are newly supported SAS and SATA controllers.

* PMC 8011 (for SAS and SATA drives)
* Intel ICH9
* Intel ICH10
* CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
* HP Smart Array P700m Controller

Notes:
1. Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
2. Storing VMFS datastores on native SATA drives is not supported.

Network Cards: The following are newly supported network interface cards:

* HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter
* HP NC362i Integrated Dual port Gigabit Server Adapter
* Intel 82598EB 10 Gigabit AT Network Connection
* HP NC360m Dual 1 Gigabit/NC364m Quad 1 Gigabit
* Intel Gigabit CT Desktop Adapter
* Intel 82574L Gigabit Network Connection
* Intel 10 Gigabit XF SR Dual Port Server Adapter
* Intel 10 Gigabit XF SR Server Adapter
* Intel 10 Gigabit XF LR Server Adapter
* Intel 10 Gigabit CX4 Dual Port Server Adapter
* Intel 10 Gigabit AF DA Dual Port Server Adapter
* Intel 10 Gigabit AT Server Adapter
* Intel 82598EB 10 Gigabit AT CX4 Network Connection
* NetXtreme BCM5722 Gigabit Ethernet
* NetXtreme BCM5755 Gigabit Ethernet
* NetXtreme BCM5755M Gigabit Ethernet
* NetXtreme BCM5756 Gigabit Ethernet

Expanded Support: The E1000 Intel network interface card (NIC) is now available for NetWare 5 and NetWare 6 guest operating systems.

Onboard Management Processors:

* IBM system management processor (iBMC)

Storage Arrays:

* SUN StorageTek 2530 SAS Array
* Sun Storage 6580 Array
* Sun Storage 6780 Array