VMware HA Cluster Sizing Considerations

To properly size an HA failover cluster, there are a few things you need to determine: how many hosts will be in the cluster, how many host failures you want to tolerate (N+?), and, ideally, resource utilization data for your VMs so you can gauge fluctuation.  With this information we can use a simple formula to determine the maximum utilization for each host that still maintains the desired failover level.

Here is an example:

Let's say we have 5 hosts in a DRS cluster and we want to be able to tolerate (1) host failure (N+1).  We also want 10% overhead on each server to account for resource fluctuation.  First we take 10% off the top of all (5) servers, which leaves us with 90% usable resources on each host.  Next we account for the loss of (1) host: if a host is lost, we need to distribute its load across the remaining (4) hosts.  To do this we divide one host's 90% of possible resources by the (4) remaining hosts, which tells us we need to redistribute 22.5% of a host's load to each of the remaining hosts.

Taking into account the original 10% overhead plus the 22.5% capacity needed for failover, we need to keep 32.5% of each host's resources available.  That means we can only utilize 67.5% of each host in the cluster to maintain an N+1 failover cluster with 10% overhead for resource fluctuation.  The formula for this is:

((100 - %overhead) * #host_failures) / (#hosts - #host_failures) + %overhead = % reserved per ESX host

Example 1:

((100 - 10) * 1) / (5 - 1) + 10 = 32.5
(5-server cluster with 10% overhead allowing 1 host failure) 67.5% of each host usable

Example 2:

Failover of 1 host:

((100 - 20) * 1) / (8 - 1) + 20 = 31.4
(8-server cluster with 20% overhead allowing 1 host failure) 68.6% of each host usable

Failover of 2 hosts:

((100 - 20) * 2) / (8 - 2) + 20 = 46.7
(8-server cluster with 20% overhead allowing 2 host failures) 53.3% of each host usable
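The formula can be wrapped in a small shell function for quick what-if checks. This is just a sketch; `reserved_pct` is an illustrative name, and awk handles the fractional math:

```shell
#!/bin/sh
# Reserved capacity (%) per host for an N+f HA cluster, per the formula above.
# Args: overhead% host_failures total_hosts
reserved_pct() {
  awk -v o="$1" -v f="$2" -v n="$3" \
    'BEGIN { printf "%.1f\n", ((100 - o) * f) / (n - f) + o }'
}

reserved_pct 10 1 5   # 5 hosts, 10% overhead, N+1 -> 32.5
reserved_pct 20 2 8   # 8 hosts, 20% overhead, N+2 -> 46.7
```

Subtract the result from 100 to get the usable percentage per host.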

Determining the %overhead can be tricky without a good capacity assessment, so be careful: if you don't allocate enough overhead and you have host failures, performance can degrade and you could experience contention within the environment.  I know some of the numbers seem dramatic, but redundancy comes with a cost no matter what form it takes.

VMware ESX 3.5 Update 4 Released

What’s New

Notes:

1. Not all combinations of VirtualCenter and ESX Server versions are supported and not all of these highlighted features are available unless you are using VirtualCenter 2.5 Update 4 with ESX Server 3.5 Update 4. See the ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes for more information on compatibility.
2. This version of ESX Server requires a VMware Tools upgrade.

The following information provides highlights of some of the enhancements available in this release of VMware ESX Server:

Expanded Support for Enhanced vmxnet Adapter — This version of ESX Server includes an updated version of the VMXNET driver (VMXNET enhanced) for the following guest operating systems:

* Microsoft Windows Server 2003, Standard Edition (32-bit)
* Microsoft Windows Server 2003, Standard Edition (64-bit)
* Microsoft Windows Server 2003, Web Edition
* Microsoft Windows Small Business Server 2003
* Microsoft Windows XP Professional (32-bit)

The new VMXNET version improves virtual machine networking performance and requires a VMware Tools upgrade.

Enablement of Intel Xeon Processor 5500 Series — Support for the Xeon processor 5500 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

QLogic Fibre Channel Adapter Driver Update — The driver and firmware for the QLogic fibre channel adapters have been updated to version 7.08-vm66 and 4.04.06 respectively. This release provides interoperability fixes for QLogic Management Tools for FC Adapters and enhanced NPIV support.

Emulex Fibre Channel Adapter Driver Update — The driver for Emulex Fibre Channel Adapters has been upgraded to version 7.4.0.40. This release provides support for the HBAnyware 4.0 Emulex management suite.

LSI megaraid_sas and mptscsi Storage Controller Driver Update — The drivers for LSI megaraid_sas and mptscsi storage controllers have been updated to version 3.19vmw and 2.6.48.18 vmw respectively. The upgrade improves performance and enhances event handling capabilities for these two drivers.

Newly Supported Guest Operating Systems — Support for the following guest operating systems has been added specifically for this release:

* SUSE Linux Enterprise Server 11 (32-bit and 64-bit)
* SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit)
* Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit)
* Windows Preinstallation Environment 2.0 (32-bit and 64-bit)

For more complete information about supported guests included in this release, see the Guest Operating System Installation Guide: http://www.vmware.com/pdf/GuestOS_guide.pdf.

Furthermore, pre-built kernel modules (PBMs) were added in this release for the following guests:

* Ubuntu 8.10
* Ubuntu 8.04.2

Newly Supported Management Agents — Refer to VMware ESX Server Supported Hardware Lifecycle Management Agents for the most up-to-date information on supported management agents.

Newly Supported I/O Devices — This release adds in-box support for the following on-board processors, I/O devices, and storage subsystems:

SAS Controllers and SATA Controllers:

The following are newly supported SAS and SATA controllers:

* PMC 8011 (for SAS and SATA drives)
* Intel ICH9
* Intel ICH10
* CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
* HP Smart Array P700m Controller

Notes:
1. Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
2. Storing VMFS datastores on native SATA drives is not supported.

Network Cards: The following are newly supported network interface cards:

* HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter
* HP NC362i Integrated Dual port Gigabit Server Adapter
* Intel 82598EB 10 Gigabit AT Network Connection
* HP NC360m Dual 1 Gigabit/NC364m Quad 1 Gigabit
* Intel Gigabit CT Desktop Adapter
* Intel 82574L Gigabit Network Connection
* Intel 10 Gigabit XF SR Dual Port Server Adapter
* Intel 10 Gigabit XF SR Server Adapter
* Intel 10 Gigabit XF LR Server Adapter
* Intel 10 Gigabit CX4 Dual Port Server Adapter
* Intel 10 Gigabit AF DA Dual Port Server Adapter
* Intel 10 Gigabit AT Server Adapter
* Intel 82598EB 10 Gigabit AT CX4 Network Connection
* NetXtreme BCM5722 Gigabit Ethernet
* NetXtreme BCM5755 Gigabit Ethernet
* NetXtreme BCM5755M Gigabit Ethernet
* NetXtreme BCM5756 Gigabit Ethernet

Expanded Support: The E1000 Intel network interface card (NIC) is now available for NetWare 5 and NetWare 6 guest operating systems.

Onboard Management Processors:

* IBM system management processor (iBMC)

Storage Arrays:

* SUN StorageTek 2530 SAS Array
* Sun Storage 6580 Array
* Sun Storage 6780 Array

VI Toolkit powershell simple script #3 – Find Snapshots

This really isn't a script so much as a single command, but I think everyone gets the idea.

Find VM snapshots for all servers in your Virtual Infrastructure and display the VM name, snapshot name, creation date, and power state. You can limit the VMs this affects by using the location-specific commands covered earlier.

Get-Snapshot -VM (Get-VM) | Select VM, Name, Created, PowerState | Export-Csv [path_filename_csv]

There is nothing to modify except the location and name of the CSV file you would like to save the information to.  Go ahead and give it a try; you'll be amazed at what you find.  I frequently hear "We don't use snapshots," and then I run this little command and always find a few.  Needless to say, this little guy becomes a power tool in the VMware admin's arsenal of weapons.

Service Console Memory for ESX 3.x

I've been asked this question a lot lately: "How much memory should we assign to the service console?"  My default answer is always 800MB.  I have a number of reasons for this recommendation, but the short answer is "Why not?"  What do you really have to lose by assigning the service console 800MB?  Nothing, but you have a lot to gain.  Even if you are not running any third-party agents there are benefits.  One thing most people don't realize is that even without third-party agents installed at the service console, you are still running agents: there is the vpxa agent that allows you to communicate with vCenter, and the HA agent if you are running VMware HA.  And if you have more than 4GB of memory installed, VMware recommends increasing the service console RAM to 512MB.

Considering all this, and that most systems today have 16GB of memory or more, I just don't understand why anyone would leave the service console at the default 272MB.  When deploying a new server, always create a swap partition of 1600MB, which is double the maximum amount of service console memory.  This will at least allow you to increase the service console memory later without having to redeploy your host.   Having an easy option when you call tech support with a problem and they tell you to increase the memory to 800MB is always a great idea.  I've seen a large number of users having HA issues call tech support, and the first thing they are told is to increase the SC memory to 800MB.  So before you deploy your next ESX server, take the service console memory into consideration, and at least create the 1600MB swap partition so you can easily bump the memory up to the max of 800MB.
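As a quick sketch of the sizing rule above (swap partition = twice the 800MB service console maximum):

```shell
#!/bin/sh
# Swap partition sizing rule from above: twice the maximum
# service console memory of 800MB.
sc_mem_max_mb=800
swap_mb=$((sc_mem_max_mb * 2))
echo "Create a ${swap_mb}MB swap partition"   # prints: Create a 1600MB swap partition
```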

Virtualization and Security

Security is huge when it comes to virtualization; the extra moving parts require special care and feeding.  The Defense Information Systems Agency (DISA) is basically the IT department for the US Department of Defense. They have an arm called the Information Assurance Support Environment (IASE). The IASE site has some serious information about securing any system. They post Security Technical Implementation Guides (STIGs) and Security Checklists that are very comprehensive. They even have STIGs and Checklists for all the different versions of winders. Some of the information is specific to the DoD, but things like certificates, etc. still have a place in any IT shop. I subscribe to their newsletter, so they just came to mind again because they posted a draft XenApp STIG. I glanced at the docs, but they look pretty deep and I have reading narcolepsy…

So, why do I bring this up? They also posted a STIG for ESX Server a while ago and recently posted an updated Security Checklist for ESX. I know that Sid used these as a guide for his kickstart / post-installation script. When coupled with the Unix STIG and Checklist, you will get a very secure system. So go check them out. They're free, and that is my favorite price. So go get some.

:o)

ESX Datastore sizing and allocation

I have been seeing a lot of activity in the VMTN forums regarding datastore sizing and free space, so I decided to write a post about the topic.  There are endless possibilities when it comes to datastore sizing and configuration, but I'm going to focus on a few key points that should be considered when structuring your ESX datastores.

All VM files kept together

In this configuration all VM files are kept together on one datastore: the vmdk file for each drive allocated to the VM, the vmx file, log files, the nvram file, and the vswap file.  When storing virtual machines this way there are some key considerations.  You should always allow for 20% overhead on your datastores to leave enough space for snapshots and vmdk growth if necessary.   When allocating this overhead, remember that when a VM is powered on, a vswap file equal in size to the VM's memory is created; this has to be accounted for within your 20% overhead.

For Fibre Channel and iSCSI SANs you should also limit the number of VMs per datastore to no more than 16.  With these types of datastores, file locking and SCSI reservations create extra overhead, and limiting the number of VMs to 16 or fewer reduces the risk of contention on the datastore.  So how big should you make your datastores?  That's a good question, and it will vary from environment to environment.  I always recommend 500GB as a good starting point.  This number doesn't work for everyone, but I use it because it helps limit the number of VMs per datastore.

Consider the following: your standard VM template consists of two drives, an OS drive and a data drive.  Your OS drive is standardized at 25GB and your data drive starts at 20GB by default, with larger drives when needed.  Your standard template also allocates 2GB of memory to the VM.  Anticipating a maximum of 16 VMs per datastore, I would allocate as follows:

((os drive + data drive) * 16) + (memory * 16) + (16 * 100MB of log files), plus 20% overhead

(25GB + 20GB) * 16 = 720GB
720GB + (2GB * 16 = 32GB) = 752GB
752GB + (16 * 100MB = 1.6GB) = 753.6GB
753.6GB * 1.20 = 904.32GB, round up to 910GB needed

Depending on how you carve up your storage you may want to bump this to 960GB or 1024GB, so as you can see, the 500GB starting point doesn't hold for this scenario.  The point is that you should have standardized OS and data partitions so you can properly estimate a standardized datastore size.  It will never be perfect; there will always be VMs that are anomalies.
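The worked example can be reproduced with a quick shell calculation (awk handles the fractional math; the sizes are the ones from the example, in GB):

```shell
#!/bin/sh
# Datastore sizing for 16 VMs, following the worked example above.
vms=16; os_gb=25; data_gb=20; mem_gb=2; log_gb=0.1   # 100MB of logs per VM
awk -v n="$vms" -v os="$os_gb" -v d="$data_gb" -v m="$mem_gb" -v l="$log_gb" 'BEGIN {
  base = (os + d) * n + m * n + l * n      # vmdk + vswap + logs = 753.6
  printf "%.2f GB needed\n", base * 1.20   # plus 20% overhead -> 904.32
}'
```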

Keep in mind that if you fill your datastore and don't leave room for the .vswp file created when a VM powers on, you will not be able to power on the VM.  Also, if a snapshot grows to fill a datastore, the VM will crash, and your only option for committing the snapshot will be to add an extent to the datastore, because you need free space to commit the changes.  Extents are not recommended and should be avoided as much as possible.

Separate VM vswap files

There are a number of options in Virtual Infrastructure for handling the VM's vswap file.  You can set the location of this file at the VM, ESX Server, or cluster level, and you can choose to locate it on a local datastore or on one or more shared datastores. Below are some examples:

Assign a local datastore per ESX server for all VM’s running on that server.

This option allows you to utilize a local VMFS datastore to store the VMs' vswap files, saving valuable shared disk space.  When using a local datastore I recommend allocating enough storage for all the available memory in the host plus 25% for memory oversubscription.

Create one shared datastore per ESX cluster.

In this option you set one datastore at the cluster level for all vswap files.  This allows you to create one large datastore, set the configuration option once, and never worry about it again.  Again, I would allocate enough space for the total amount of memory in the whole cluster plus 25% for oversubscription.
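As a sketch of sizing a single shared vswap datastore per that rule (the host count and per-host memory below are made-up example figures):

```shell
#!/bin/sh
# Shared vswap datastore sizing: total cluster memory plus 25% for
# oversubscription. The host count and per-host memory are examples.
hosts=8
mem_per_host_gb=32
awk -v n="$hosts" -v m="$mem_per_host_gb" \
  'BEGIN { printf "%.0f GB vswap datastore\n", n * m * 1.25 }'   # prints: 320 GB vswap datastore
```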

Multiple shared datastores in a cluster.

In this option you have different scenarios: one shared datastore per ESX host in the cluster, one datastore for every two hosts in the cluster, and so on.  You would need to assign the vswap datastore at the ESX host level for this configuration.

Note: Moving the vswap file to a separate location can impact the performance of VMotion; it can extend the amount of time it takes for the VM to fully migrate from one host to another.

Hybrid Configuration.

Just as it’s possible to locate the vswap on another datastore it is also possible to split the vmdk disks on to separate datastores.  For instance you could have datastores for:

* OS drives
* Data drives
* Page files
* vSwap files

To achieve this you tell the VM where to create each drive and have different datastores allocated for these different purposes.  This is especially handy when planning for DR: it allows you to replicate only the data you want and skip the stuff you don't, like the vswap and page files.  With this configuration you can also have different replication strategies for the data drives and OS drives.

Hope you found this post useful.

Script to Restart VMware Tools Remotely

I was "forced" to learn how Powershell and the VI Toolkit work for an engagement a few months ago. Once you learn how Powershell works and how the VI Toolkit integrates with it, you will love it. This is coming from a linux guy who sees some of the VBScript stuff and just goes "HUH?!?" If you like VB scripts, check out this post on Jase's Place. Back in the day, I knew DOS scripting pretty well, and I have learned rudimentary bash and perl scripting. To be frank, Powershell was easy for a knucklehead like me to pick up. I use it frequently to automate tasks in VI3 and the winders VMs it manages.

In my last post, I mentioned that VCB snapshots can cause VMware Tools to appear to go offline, even though the tools are still running. The fixes were to restart the management services on the host or to log in and out of the guest. Restarting the management services on the host could cause issues if VMs are set to automatically start on boot, and logging in to the VMs is fine unless you have hundreds of them.

A third option is to restart the VMware Tools service. This is something that can easily be scripted as long as you have admin access to the guest via RPC services. There are a few methods to script the restart of services on a server remotely. The first is using the sc.exe utility. The syntax of the script looks like this:

sc.exe \\guestname stop VMTools
sc.exe \\guestname start VMTools

This can be easily scripted using the good old DOS for command. Create a text file (C:\scripts\serverlist.txt) with all of the servers that need to have the VMware Tools service restarted, one guest per line, so it looks like this:

guest1
guest2
guest3
guest4

Then run a command that looks like this:

for /f %%S in (C:\scripts\serverlist.txt) do sc.exe \\%%S stop VMTools
sleep 10
for /f %%S in (C:\scripts\serverlist.txt) do sc.exe \\%%S start VMTools

You can get the sleep utility in the Resource Kit Tools for Windows 2003. A 10 second pause seems to work pretty well to make sure the service actually stops.

Since I lost all of my DOS scripting chops, I only know how to automate this fully using the VI Toolkit and Powershell. The script below will use the VI Toolkit to automatically create a list of guests that report as not having VMware Tools running and pass that information to standard powershell commands to stop and start the services:

#Connect to the vCenter Server
Connect-VIServer <vCenter.FQDN>

#Get a list of powered-on guests where VMware Tools is not running
$servers = Get-VM | where { $_.PowerState -eq "PoweredOn" } | Get-VMGuest | where { $_.State -ne "Running" } | select VMName, State

#Restart the VMTools service on each guest
foreach ($srv in $servers)
{
    Write-Host "Stopping services on $($srv.VMName)"
    #Get the VMTools service via WMI
    $Service = get-wmiobject -ComputerName $srv.VMName -query "select * from win32_service where name='VMTools'"
    if ($Service -ne $null)
    {
        #Stop the VMTools service, give it time to stop, then start it again
        $Service.StopService()
        Sleep 10
        Write-Host "Starting services on $($srv.VMName)"
        $Service.StartService()
    }
}

Another thing I recently needed was a quick way to list the guests with snapshots as a quick method to make sure VCB exited properly. You can use this:

Get-VM | Get-Snapshot | Select VM, Name, Created, Description

Well, there you have it. Script your VMware Tools restart…
:o)

Automated Deployment of ESX Hosts Part III

This is Part III of a multi-part blog. If you haven’t read Part I or II I recommend that you do before continuing.

In Part II we developed a standard build for our hosts that we are going to use to create our automated build script.  Keep in mind the information I provided in my standard build is not all-inclusive and is limited to what we need to demonstrate building the script.

First we will start with the kickstart portion of the script.  The kickstart configures the basic parts of the installation that you would normally do manually from the CD.  The ESX 3.5 kickstart is a modified version of Anaconda.

At the beginning of our script we have the Regional Settings. Below are the regional settings for my installation.

# Regional Settings
keyboard us
lang en_US
langsupport --default en_US
timezone --utc America/New_York

Next we have some important installation settings. Part of the installation section is the location of our ESX installation media. In my script I have included some samples: cdrom, ftp, and http. I have chosen http for my installation, so I did not comment http out. I personally prefer http because it tends to be less problematic than ftp or nfs.


# Installation settings
skipx
mouse none
firewall --disabled

# Authentication
auth --enableshadow --enablemd5

# Unencrypted root password: password
rootpw --iscrypted $1$5a17$In5zYe6YsCty76AycpGaf/

# Reboot server after finishing
reboot

# Install ESX 3.5 U3, do not perform an upgrade
install

# Location of installation medium
#cdrom
#url --url ftp://192.168.12.200/esx/esx353/
url --url http://192.168.12.200/esx/esx353/

In this next section we are going to configure our hard disk partitions and boot loader options. You'll notice that my disk device is cciss/c0d0; this is because I will be installing on HP hardware with HP SCSI controllers. If I were scripting this for IBM, Dell, or other servers I would typically use sda for my drive.


# Bootloader options
bootloader --driveorder=cciss/c0d0 --location=mbr

# Partitioning - This area is where you define your partitioning scheme
clearpart --all --initlabel --drives=cciss/c0d0
part /boot --fstype ext3 --size 100 --ondisk=cciss/c0d0 --asprimary
part swap --size 1600 --ondisk=cciss/c0d0 --asprimary
part / --fstype ext3 --size 10240 --ondisk=cciss/c0d0 --asprimary
part /var --fstype ext3 --size 8192 --ondisk=cciss/c0d0
part /tmp --fstype ext3 --size 4096 --ondisk=cciss/c0d0
part /opt --fstype ext3 --size 5120 --ondisk=cciss/c0d0
part /home --fstype ext3 --size 5120 --ondisk=cciss/c0d0
part None --fstype vmkcore --size 100 --ondisk=cciss/c0d0
part None --fstype vmfs3 --size 1 --grow --ondisk=cciss/c0d0

In this next section we configure the networking settings for our Service Console. I recommend always using nic0 (eth0/vmnic0) for the Service Console and always performing your automated installation over this network interface. It is possible to do an automated installation over a different interface, but it requires additional scripting to properly allocate the interfaces.


# Network Configurations for service console. This will be applied to the Network interface that the kickstart is performed on. We are also choosing to not create a default portgroup.
network --device eth0 --bootproto static --ip 192.168.10.100 --netmask 255.255.255.0 --gateway 192.168.10.1 --nameserver 192.168.5.30 --hostname vmware1.sidtest.local --addvmportgroup=0

# VMware license options
# Accept the VMware License Agreement
vmaccepteula
# Configure the host to talk to the license server
vmlicense --mode=server --server=27000@vcenter.sidtest.local --edition=esxFull --features=backup,vsmp

%vmlicense_text

%packages
@base

%post

That completes the kickstart portion of our script.

Everything beyond the %post portion of the kickstart is our post-installation script, which runs on the first boot of the server.  A post-installation script can contain bash, perl, and other scripting, but for simplicity I will be using bash, executing esxcfg and a few other commands, for my post-installation portion of the script.

I will start my post-installation script off with the network configuration for the host.

#!/bin/sh
###### Configure Networking##########
###Setup vSwitch0######
echo Adding vmnic2 to vSwitch0 >> /var/log/post_install.log
esxcfg-vswitch -L vmnic2 vSwitch0

######### Add PortGroup for VMotion vmkernel adapter #########
echo Creating VMotion Portgroup
esxcfg-vswitch -A "VMotion" vSwitch0

## Tag VLAN to VMkernel ##
esxcfg-vswitch -p VMotion -v 301 vSwitch0

## Creating VMKnic and assigning VMkernel IP and gateway ##
echo Assigning VMKernel IP and Gateway - Please Wait
echo Assigning VMKernel IP and Gateway - Please Wait >> /var/log/post_install.log
esxcfg-vmknic -a -i 192.168.12.100 -n 255.255.255.0 "VMotion"
esxcfg-route 192.168.12.1

## Restart the management service so vimsh will notice changes to vSwitch0 ##
service network restart
sleep 300
service mgmt-vmware restart

## Enable vmkernel on vmk0 for VMotion ##
echo Enabling VMotion on VMkernel interface - Please Wait
echo Enabling VMotion on VMkernel interface >> /var/log/post_install.log
vmware-vim-cmd hostsvc/vmotion/vnic_set vmk0 >> /var/log/post_install.log

## Set both vmnic0 and vmnic2 to active for vSwitch0 ##
echo Configuring both vmnic0 and vmnic2 to be active for vSwitch0
echo Configuring both vmnic0 and vmnic2 to be active for vSwitch0 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic0,vmnic2 vSwitch0 >> /var/log/post_install.log

## Configure NIC priority order for VMkernel and Service Console ##
echo Configuring NIC Priority for SC and VMkernel - Please Wait
echo Configuring NIC Priority for SC and VMkernel >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --nicorderpolicy-active=vmnic0 --nicorderpolicy-standby=vmnic2 vSwitch0 "Service Console" >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --nicorderpolicy-active=vmnic2 --nicorderpolicy-standby=vmnic0 vSwitch0 "VMotion" >> /var/log/post_install.log

## Reject Forged Transmits and MAC Address Changes for vSwitch0 ##
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch0
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch0 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-forgedxmit=false vSwitch0 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-macchange=false vSwitch0 >> /var/log/post_install.log

## Reject Forged Transmits and MAC Address Changes for Service Console PortGroup ##
echo Rejecting Forged Transmits and MAC Address Changes for Service Console PortGroup
echo Rejecting Forged Transmits and MAC Address Changes for Service Console PortGroup >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --securepolicy-forgedxmit=false vSwitch0 "Service Console" >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --securepolicy-macchange=false vSwitch0 "Service Console" >> /var/log/post_install.log

## Reject Forged Transmits and MAC Address Changes for VMotion PortGroup ##
echo Rejecting Forged Transmits and MAC Address Changes for VMotion PortGroup
echo Rejecting Forged Transmits and MAC Address Changes for VMotion PortGroup >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --securepolicy-forgedxmit=false vSwitch0 "VMotion" >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/portgroup_set --securepolicy-macchange=false vSwitch0 "VMotion" >> /var/log/post_install.log

echo Settings for vSwitch0 complete
echo Settings for vSwitch0 complete >> /var/log/post_install.log

####### Setup vSwitch1 #######
echo Configuring settings for vSwitch1 - Please Wait
echo Configuring settings for vSwitch1 - Please Wait >> /var/log/post_install.log
## Creating vSwitch1, assigning vmnics, creating portgroups and assigning vlans ##
echo Creating vSwitch1
echo Creating vSwitch1 >> /var/log/post_install.log
esxcfg-vswitch -a vSwitch1
echo Adding vmnic5, and vmnic7 to vSwitch1
echo Adding vmnic5, and vmnic7 to vSwitch1 >> /var/log/post_install.log
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -L vmnic7 vSwitch1


echo Creating PortGroups on vSwitch1
echo Creating PortGroups on vSwitch1 >> /var/log/post_install.log
esxcfg-vswitch -A "VLAN2" vSwitch1
esxcfg-vswitch -A "VLAN15" vSwitch1
esxcfg-vswitch -A "VLAN150" vSwitch1
esxcfg-vswitch -A "VLAN151" vSwitch1
esxcfg-vswitch -A "VLAN152" vSwitch1


echo Adding vlan assignments to PortGroups on vSwitch1
echo Adding vlan assignments to PortGroups on vSwitch1 >> /var/log/post_install.log
esxcfg-vswitch -p VLAN2 -v 2 vSwitch1
esxcfg-vswitch -p VLAN15 -v 15 vSwitch1
esxcfg-vswitch -p VLAN150 -v 150 vSwitch1
esxcfg-vswitch -p VLAN151 -v 151 vSwitch1
esxcfg-vswitch -p VLAN152 -v 152 vSwitch1


## Restart the management service so vimsh will notice changes to vSwitch1 ##
service mgmt-vmware restart
## Wait 4 minutes for the hostd-vmdb service to fully start; it can take a while for it to fully load and for vimsh to work ##
echo Sleeping for 4 minutes - Please Wait
sleep 240
## Setting all vmnics to active for vSwitch1 ##
echo Configuring all vmnics to be active for vSwitch1
echo Configuring all vmnics to be active for vSwitch1 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic5,vmnic7 vSwitch1 >> /var/log/post_install.log
## Reject Forged Transmits and Mac Address Changes for vSwitch1 ##
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch1
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch1 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-forgedxmit=false vSwitch1 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-macchange=false vSwitch1 >> /var/log/post_install.log
echo Settings for vSwitch1 complete
echo Settings for vSwitch1 complete >> /var/log/post_install.log
echo Configuring settings for vSwitch2 - Please Wait
echo Configuring settings for vSwitch2 - Please Wait >> /var/log/post_install.log

## Creating vSwitch2, assigning vmnics, creating portgroups and assigning vlans ##
echo Creating vSwitch2
echo Creating vSwitch2 >> /var/log/post_install.log
esxcfg-vswitch -a vSwitch2

echo Adding vmnic4, vmnic6 to vSwitch2
echo Adding vmnic4, vmnic6 to vSwitch2 >> /var/log/post_install.log
esxcfg-vswitch -L vmnic6 vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2

echo Creating PortGroups on vSwitch2
echo Creating PortGroups on vSwitch2 >> /var/log/post_install.log
esxcfg-vswitch -A "VLAN200" vSwitch2
esxcfg-vswitch -A "VLAN201" vSwitch2
esxcfg-vswitch -A "VLAN203" vSwitch2

echo Adding vlan assignments to PortGroups on vSwitch2
echo Adding vlan assignments to PortGroups on vSwitch2 >> /var/log/post_install.log
esxcfg-vswitch -p VLAN200 -v 200 vSwitch2
esxcfg-vswitch -p VLAN201 -v 201 vSwitch2
esxcfg-vswitch -p VLAN203 -v 203 vSwitch2

## Restart the management service so vimsh will notice changes to vSwitch2 ##
service mgmt-vmware restart

## Wait 4 minutes for the hostd-vmdb service to fully start; it can take a while for it to fully load and for vimsh to work ##
echo Sleeping for 4 minutes - Please Wait
sleep 240

## Setting all vmnics to active for vSwitch2 ##
echo Configuring all vmnics to be active for vSwitch2
echo Configuring all vmnics to be active for vSwitch2 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic4,vmnic6 vSwitch2 >> /var/log/post_install.log

## Reject Forged Transmits and Mac Address Changes for vSwitch2 ##
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch2
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch2 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-forgedxmit=false vSwitch2 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-macchange=false vSwitch2 >> /var/log/post_install.log

echo Settings for vSwitch2 complete
echo Settings for vSwitch2 complete >> /var/log/post_install.log

echo Configuring settings for vSwitch3 - Please Wait
echo Configuring settings for vSwitch3 - Please Wait >> /var/log/post_install.log

## Creating vSwitch3, assigning vmnics, creating portgroups and assigning vlans ##
echo Creating vSwitch3
echo Creating vSwitch3 >> /var/log/post_install.log
esxcfg-vswitch -a vSwitch3

echo Adding vmnic1 and vmnic3 to vSwitch3
echo Adding vmnic1 and vmnic3 to vSwitch3 >> /var/log/post_install.log
esxcfg-vswitch -L vmnic1 vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3

echo Creating PortGroups on vSwitch3
echo Creating PortGroups on vSwitch3 >> /var/log/post_install.log
esxcfg-vswitch -A "VLAN400" vSwitch3
esxcfg-vswitch -A "VLAN401" vSwitch3

echo Adding vlan assignments to PortGroups on vSwitch3
echo Adding vlan assignments to PortGroups on vSwitch3 >> /var/log/post_install.log
esxcfg-vswitch -p VLAN400 -v 400 vSwitch3
esxcfg-vswitch -p VLAN401 -v 401 vSwitch3

## Restart the Management service so vimsh will notice changes to vSwitch3 ##
service mgmt-vmware restart

## Wait 4 minutes for the hostd-vmdb service to fully start; it can take a while for it to load and for vimsh to work ##
echo Sleeping for 4 minutes - Please Wait
sleep 240

## Setting all vmnics to active for vSwitch3 ##
echo Configuring all vmnics to be active for vSwitch3
echo Configuring all vmnics to be active for vSwitch3 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic1,vmnic3 vSwitch3 >> /var/log/post_install.log

## Reject Forged Transmits and Mac Address Changes for vSwitch3 ##
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch3
echo Rejecting Forged Transmits and MAC Address Changes for vSwitch3 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-forgedxmit=false vSwitch3 >> /var/log/post_install.log
vmware-vim-cmd hostsvc/net/vswitch_setpolicy --securepolicy-macchange=false vSwitch3 >> /var/log/post_install.log

echo Settings for vSwitch3 complete
echo Settings for vSwitch3 complete >> /var/log/post_install.log 

Next I will configure the firewall rules to allow all the services I need to communicate in and out of the ESX host.

echo Configuring Firewall Rules - Please Wait
echo Configuring Firewall Rules >> /var/log/post_install.log

esxcfg-firewall -o 2381,tcp,in,hpim
esxcfg-firewall -o 2381,tcp,out,hpim
esxcfg-firewall -o 88,tcp,out,KerberosClient
esxcfg-firewall -o 6389,tcp,in,Naviagent
esxcfg-firewall -o 88,udp,out,KerberosClient
esxcfg-firewall -o 464,tcp,out,KerberosPasswordChange
esxcfg-firewall -e smbClient
esxcfg-firewall -e sshClient
esxcfg-firewall -e ntpClient
esxcfg-firewall -e CIMHttpServer
esxcfg-firewall -e snmpd
echo Configuring Firewall rules complete
echo Configuring Firewall rules complete >> /var/log/post_install.log

Here we are going to configure our DNS servers.

echo Configuring DNS Servers - Please Wait
echo Configuring DNS Servers >> /var/log/post_install.log
echo Making backup of /etc/resolv.conf
echo Making backup of /etc/resolv.conf >> /var/log/post_install.log
cp /etc/resolv.conf /etc/resolv.conf.bak
echo nameserver 192.168.5.20 > /etc/resolv.conf
echo nameserver 192.168.5.21 >> /etc/resolv.conf

echo DNS Server configuration complete
echo DNS Server configuration complete >> /var/log/post_install.log

Now we will configure our NTP Settings.

echo configuring NTP Settings - Please Wait
echo configuring NTP Settings >> /var/log/post_install.log
echo Making backup of /etc/ntp.conf
echo Making backup of /etc/ntp.conf >> /var/log/post_install.log
cp /etc/ntp.conf /etc/ntp.conf.bak
echo restrict default kod nomodify notrap noquery nopeer > /etc/ntp.conf
echo restrict 127.0.0.1 >> /etc/ntp.conf
echo server 192.168.5.30 >> /etc/ntp.conf
echo server 192.168.5.31 >> /etc/ntp.conf
echo server 192.168.5.32 >> /etc/ntp.conf
echo driftfile /var/lib/ntp/drift >> /etc/ntp.conf
echo broadcastdelay 0.008 >> /etc/ntp.conf
echo Making backup of /etc/ntp/step-tickers
echo Making backup of /etc/ntp/step-tickers >> /var/log/post_install.log
cp /etc/ntp/step-tickers /etc/ntp/step-tickers.bak
echo 192.168.5.30 > /etc/ntp/step-tickers
echo 192.168.5.31 >> /etc/ntp/step-tickers
echo 192.168.5.32 >> /etc/ntp/step-tickers
chkconfig --level 345 ntpd on
hwclock --systohc
service ntpd start >> /var/log/post_install.log
echo NTP settings complete
echo NTP settings complete >> /var/log/post_install.log
echo Restarting VMware Management Service
echo Restarting VMware Management Service >> /var/log/post_install.log
echo Sleeping for 4 minutes - Please Wait
service mgmt-vmware restart >> /var/log/post_install.log
sleep 240
echo VMware Management Service restarted
echo VMware Management Service restarted >> /var/log/post_install.log

Now I will set the Service Console memory to 800MB.

echo Setting Service Console Memory to 800MB
echo Setting Service Console Memory to 800MB >> /var/log/post_install.log

sed -i 's/memSize = "272"/memSize = "800"/g' /etc/vmware/esx.conf
esxcfg-boot -g
esxcfg-boot -b
echo Service Console Memory assignment to 800MB complete
echo Service Console Memory assignment to 800MB complete >> /var/log/post_install.log

Now we will configure AD integration for service console logins

echo Configuring PAM AD Integration
echo Configuring PAM AD Integration >> /var/log/post_install.log
esxcfg-auth --enablead --addomain=hq.nt.newyorklife.com --addc=hq.nt.newyorklife.com >> /var/log/post_install.log
echo Backing up /etc/krb5.conf
echo Backing up /etc/krb5.conf >> /var/log/post_install.log
cp /etc/krb5.conf /etc/krb5.conf.bak
echo Creating /etc/krb5.conf
echo "[domain_realm]" > /etc/krb5.conf
echo "sidtest.local = sidtest.local" >> /etc/krb5.conf
echo ".sidtest.local = sidtest.local" >> /etc/krb5.conf
echo "[libdefaults]" >> /etc/krb5.conf
echo "default_realm = sidtest.local" >> /etc/krb5.conf
echo "[realms]" >> /etc/krb5.conf
echo "sidtest.local = {" >> /etc/krb5.conf
echo "admin_server = sidtest.local:464" >> /etc/krb5.conf
echo "default_domain = sidtest.local" >> /etc/krb5.conf
echo "kdc = dc1.sidtest.local:88" >> /etc/krb5.conf
echo "kdc = dc2.sidtest.local:88" >> /etc/krb5.conf
echo "}" >> /etc/krb5.conf
echo Finished Configuring PAM AD Integration
echo Finished Configuring PAM AD Integration >> /var/log/post_install.log
echo Adding authorized Active Directory users - Please Wait
echo Adding authorized Active Directory users >> /var/log/post_install.log
useradd -m ssmith -g wheel -G adm
useradd -m jbower -g wheel -G adm
useradd -m cobrian -g wheel -G adm
useradd -m talmeda -g wheel -G adm
useradd -m t01bo3L -g wheel -G adm
useradd -m t44bblk -g wheel -G adm
echo Authorized Active Directory Users added
echo Authorized Active Directory Users added >> /var/log/post_install.log
Now we will prevent root login from the physical console.


echo Preventing root from logging in on the console
echo Preventing root from logging in on the console >> /var/log/post_install.log
echo Backing up /etc/securetty
echo Backing up /etc/securetty >> /var/log/post_install.log
mv /etc/securetty /etc/securetty.bak
touch /etc/securetty
echo Preventing root from logging in on the console complete
echo Preventing root from logging in on the console complete >> /var/log/post_install.log

Here we will set our local password policy for all local accounts

echo Setting local password policy
echo Setting local password policy >> /var/log/post_install.log
echo Setting maximum number of days to keep a password
esxcfg-auth --passmaxdays=90
echo Setting minimum number of days between password changes
esxcfg-auth --passmindays=1
echo Setting password warning time before a change is required
esxcfg-auth --passwarnage=14
echo Setting local password policy complete
echo Setting local password policy complete >> /var/log/post_install.log

Now let's set the Message of the Day and the welcomeRes.js message.

echo Configuring MOTD login banner - Please Wait
echo Configuring MOTD login banner >> /var/log/post_install.log

echo "

Warning!!! This computer system is private and may be accessed only
by authorized users. Data and programs in this system are confidential
and proprietary to the system owner and may not be accessed without
authorization. Unauthorized users or users who exceed their authorized
level of access are subject to disciplinary action, up to and including
termination and are subject to prosecution under state or federal law.
Activity on this computer system is logged.

" > /etc/motd

echo MOTD Login Banner Configuration Complete
echo MOTD Login Banner Configuration Complete >> /var/log/post_install.log

echo Configuring /usr/lib/vmware/hostd/docroot/en/welcomeRes.js banner - Please Wait
echo Configuring /usr/lib/vmware/hostd/docroot/en/welcomeRes.js banner >> /var/log/post_install.log

echo Backing up existing welcomeRes.js
echo Backing up existing welcomeRes.js >> /var/log/post_install.log
mv /usr/lib/vmware/hostd/docroot/en/welcomeRes.js /usr/lib/vmware/hostd/docroot/en/welcomeRes.js.bak

esxcfg-firewall --allowOutGoing
sleep 20

cd /tmp

lwp-download http://192.168.12.200/welcomeRes.js

cp /tmp/welcomeRes.js /usr/lib/vmware/hostd/docroot/en/welcomeRes.js

esxcfg-firewall --blockOutGoing
sleep 20

echo Configuring /usr/lib/vmware/hostd/docroot/en/welcomeRes.js banner Complete
echo Configuring /usr/lib/vmware/hostd/docroot/en/welcomeRes.js banner Complete >> /var/log/post_install.log

Last but not least, we will install the EMC Naviagent and the HP SIM agent.

echo Performing the installation of the EMC NaviAgentCli
echo Performing the installation of the EMC NaviAgentCli >> /var/log/post_install.log

echo Downloading the EMC NaviAgentCli - Please Wait
echo Downloading the EMC NaviAgentCli - Please Wait >> /var/log/post_install.log
esxcfg-firewall --allowOutGoing
sleep 20

cd /tmp
lwp-download http://192.168.12.200/naviagentcli-6.19.4.7.0-1.noarch.rpm

esxcfg-firewall --blockOutGoing
sleep 20

echo Installing the EMC NaviAgentCli - Please wait
echo Installing the EMC NaviAgentCli - Please wait >> /var/log/post_install.log
rpm -ivh naviagentcli-6.19.4.7.0-1.noarch.rpm
sleep 20
/etc/init.d/naviagent start

echo Performing the installation of the EMC NaviAgentCli Complete
echo Performing the installation of the EMC NaviAgentCli Complete >> /var/log/post_install.log

echo Performing the installation of the HP SIM Agent ver 7.9.1
echo Performing the installation of the HP SIM Agent ver 7.9.1 >> /var/log/post_install.log

echo Downloading the HP SIM Agent - Please Wait
echo Downloading the HP SIM Agent - Please Wait >> /var/log/post_install.log
esxcfg-firewall --allowOutGoing
sleep 20

cd /tmp
lwp-download http://192.168.12.200/NYL/hpmgmt-7.9.1-vmware3x.tgz

esxcfg-firewall --blockOutGoing
sleep 20

echo Building HP SIM Answer File - Please Wait
echo Building HP SIM Answer File - Please Wait >> /var/log/post_install.log

echo 'export CMASILENT="YES"' > /tmp/sidtest_AF.conf
echo 'export CMANOSTARTINSTALL="hpasmd"' >> /tmp/sidtest_AF.conf
echo export ENABLEHPIMPORT=Y >> /tmp/sidtest_AF.conf
echo export ENABLESNMPSERVICE=Y >> /tmp/sidtest_AF.conf
echo export ENABLESIMCERTPORT=Y >> /tmp/sidtest_AF.conf

echo Installing the HP SIM Agent - Please wait
echo Installing the HP SIM Agent - Please wait >> /var/log/post_install.log
tar xvfz /tmp/hpmgmt-7.9.1-vmware3x.tgz
cd /tmp/hpmgmt/791/
./installvm791.sh --silent --inputfile /tmp/sidtest_AF.conf
sleep 60
echo Installing the HP SIM Agent - Complete
echo Installing the HP SIM Agent - Complete >> /var/log/post_install.log

echo Configuring snmpd.conf - Please wait
echo Configuring snmpd.conf - Please wait >> /var/log/post_install.log

echo Making a backup of /etc/snmp/snmpd.conf
echo Making a backup of /etc/snmp/snmpd.conf >> /var/log/post_install.log

mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.bak

echo dlmod cmaX /usr/lib/libcmaX.so > /etc/snmp/snmpd.conf
echo rocommunity hrasmidep 127.0.0.1 >> /etc/snmp/snmpd.conf
echo rocommunity hrasmidep 172.31.44.12 >> /etc/snmp/snmpd.conf
echo rocommunity hrasmidep 172.28.137.14 >> /etc/snmp/snmpd.conf
echo trapcommunity hrasmidep >> /etc/snmp/snmpd.conf
echo trapsink 172.31.44.12 hrasmidep >> /etc/snmp/snmpd.conf
echo trapsink 172.28.137.14 hrasmidep >> /etc/snmp/snmpd.conf
echo syscontact root@localhost >> /etc/snmp/snmpd.conf
echo syslocation CNJ >> /etc/snmp/snmpd.conf
echo dlmod SNMPESX /usr/lib/vmware/snmp/libSNMPESX.so >> /etc/snmp/snmpd.conf

echo Configuring snmpd.conf - Complete
echo Configuring snmpd.conf - Complete >> /var/log/post_install.log

echo Performing the installation of the HP SIM Agent ver 7.9.1 Complete
echo Performing the installation of the HP SIM Agent ver 7.9.1 Complete >> /var/log/post_install.log

echo Recording Build Script data to host > /etc/build_info
echo Built using Sid_Smith_kickstart_test_script_v1_0.cfg >> /etc/build_info
echo Script Version 1.0 >> /etc/build_info
echo ESX 3.5 Update 3 >> /etc/build_info


######## Installation and Configuration is finished ########
echo Your server is now installed and configured.  Please review the installation
echo log file at /var/log/post_install.log to verify that there are no errors.  Your
echo server will now enter maintenance mode and reboot.  Please follow the
echo remaining steps in your automated deployment documentation.
echo          
echo     Script developed by: Sid Smith
##########
sleep 30
##### Put Server in Maintenance Mode #####
echo Server will now enter maintenance mode
echo Server will now enter maintenance mode >> /var/log/post_install.log
vmware-vim-cmd /hostsvc/maintenance_mode_enter >> /var/log/post_install.log
EOF1
### Make esxcfg.sh executable
chmod +x /tmp/esxcfg.sh

###Backup original rc.local file
cp /etc/rc.d/rc.local /etc/rc.d/rc.local.bak

###Make esxcfg.sh run from rc.local and make rc.local reset itself
cat >> /etc/rc.d/rc.local <<EOF
cd /tmp
/tmp/esxcfg.sh
mv -f /etc/rc.d/rc.local.bak /etc/rc.d/rc.local
shutdown -r now
EOF

It's important to remember that you should always use a Linux-compatible editor to edit your script. If you have been using Notepad, remember to open, change, and save your script in a Linux-compatible editor before trying to run it; otherwise the Windows line endings will break it. In the next and final part of this blog we will go through deployment options and perform a test deployment of this script.
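If the script has already picked up Windows (CRLF) line endings, you don't necessarily need to re-edit it; sed can strip them from any Linux shell. This is just a sketch, and the filename below is a hypothetical example, not the script from this post:

```shell
# Create a sample script with Windows CRLF line endings to demonstrate
# (sample.sh is a hypothetical filename used only for this example).
printf 'echo hello\r\n' > /tmp/sample.sh

# Strip the trailing carriage returns so the shell can execute the script;
# sed is available in the ESX service console even where dos2unix is not.
sed -i 's/\r$//' /tmp/sample.sh

# The file now contains plain Unix line endings.
cat /tmp/sample.sh    # prints: echo hello
```

Running the cleaned file with `sh /tmp/sample.sh` should now print `hello` instead of failing on a stray `\r`.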

I have also included the script as an attachment to this post below:

ESX Automated Deployment Script

*Please excuse some of the formatting in this post.  I am still getting used to the interface and had some issues with the formatting.