vCloud Automation Center vCAC 6.0 – Using Linux Kickstart to Provision to a Physical HP Server over iLO

That big ole title pretty much says it all. In this article I’m going to walk through how to deploy RHEL (CentOS) Linux onto a physical HP server over the iLO interface using kickstart. When provisioning to physical servers such as an HP ProLiant DL360 there are two methods built into vCAC: one is PXE boot, and the other is via the iLO interface.

There are pros and cons to both PXE and remote mounting an ISO over the iLO interface. PXE has the obvious downsides of its network requirements and needing a PXE server available, and if you want true flexibility you will need to do a little custom work. Mounting an ISO over iLO tends to be a bit slower due to the overhead of remote mounting an ISO and the speed of the iLO interface. In this article I will be covering remote mounting an ISO over iLO; I will cover PXE in a later article.

What do we need?

To start we need the physical HP server to be racked and cabled up. Its iLO interface should be configured and licensed, the network interfaces should be cabled in, and the switches should be configured for the appropriate VLANs. The drives in the server should also be initialized; vCAC will not create any RAID groups for you, so you must do this manually. My example also requires a web server that can be used to store the needed files on the network.
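To make the web-server piece concrete, here is a minimal CentOS-style kickstart file of the sort you would stage there. Every value below is a placeholder for illustration, not something taken from a vCAC build:

# ks.cfg - minimal CentOS/RHEL kickstart served over HTTP (all values are examples)
install
url --url=http://192.168.1.10/centos/os/x86_64
lang en_US.UTF-8
keyboard us
network --device=eth0 --bootproto=dhcp
rootpw --iscrypted [your-crypted-hash]
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end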


DailyHypervisor Forums are online.

We have just launched our DailyHypervisor Forum, located at http://www.dailyhypervisor.com/forum. Stop by, contribute, and be a part of our community. The DH Forum is intended to be for all things cloud. Currently we have forums created for vCAC, vCD, vCO, Cloud General, and OpenStack. More forum categories will be coming based on demand. If you have a category you would like to see, shoot us a note and let us know.

Our goal is to create a common place where anyone can come to learn, get help, share ideas, or do just about anything that will help foster knowledge regarding cloud computing. Considering this very post is the announcement of our forum, you can imagine there isn’t a whole lot happening yet, so what are you waiting for? Be the first. Go ask a question, post an issue, share a thought, and let’s get things rolling.

vSphere Service Console and Disk Partitioning

Everyone at this point should be aware that the Service Console is now located in a vmdk on a VMFS partition.  The Service Console vmdk must be stored on a VMFS datastore, and the datastore must either be local storage or SAN storage that is presented only to the one host.  So I guess no shared VMFS datastores to house all the Service Consoles…  The next question I had about the new Service Console was the /boot partition: where is it, and how is the server bootstrapping?  I can’t say I have totally gotten to the bottom of this yet, but I have discovered a few things.  When digging into scripting installations of vSphere I first looked at the disk partitioning, which sheds a little light on the boot process.  Here is what the disk partitioning portion of the script looks like:

part /boot --fstype=ext3 --size= --onfirstdisk
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk
part None --fstype=vmkcore --size=100 --onfirstdisk
# Create the vmdk on the cos vmfs partition.
virtualdisk cos --size=8000 --onvmfs=storage1
# Partition the virtual disk.
part / --fstype=ext3 --size=0 --grow --onvirtualdisk=cos
part swap --fstype=swap --size=1600 --onvirtualdisk=cos

Notice the “onfirstdisk” switch at the end of the first three partitions: the /boot, a VMFS partition, and a vmkcore partition are all on the physical disk.  Notice the creation of the Service Console vmdk, “virtualdisk cos --size=8000 --onvmfs=storage1”, and that the swap for the Service Console is located inside that vmdk.  To account for this change VMware has added some new configuration options for scripting the installation of the COS.  Next you’ll notice the creation of the / and swap partitions for the COS utilizing the “onvirtualdisk=cos” switch.
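Put together, the layout the script above produces looks roughly like this (sizes taken from the script):

first physical disk
  /boot       ext3     (bootstraps the hypervisor)
  storage1    vmfs3    30 GB, grown to fill the disk
    cos.vmdk  8 GB virtual disk for the Service Console
      /       ext3, grows to fill the vmdk
      swap    1600 MB
  vmkcore     100 MB   (core dump partition)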

I’m still working on and discovering some of the new ways to do scripted installations with vSphere 4.0.  I thought this little tidbit would be helpful to those of you wondering how the COS ties into the whole mix.  A few interesting things about this new method:

There is no need to specify the actual device; however, I would be very cautious about the --onfirstdisk switch if SAN LUNs are present on the server, and I would take extra precautions to ensure that no SAN LUNs are connected.  There really are not any best practices around this configuration yet, so there are a few things that I think need to be determined.  If you were planning on running VMs from the local VMFS, should you create multiple VMFS partitions and have one solely for the COS?  I almost think it would be beneficial just for the logical separation.  I will be digging a bit deeper into this, but I would love to hear others’ views on the COS and how they plan to deploy it and why.  So please comment and let me know what you think.
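As one mitigation: the ESX 4 scripted-install syntax is documented as accepting a disk-type filter argument on --onfirstdisk. Assuming that syntax, restricting the match to local disks should keep a remote SAN LUN from ever being picked as the “first” disk; a one-line sketch:

# Restrict "first disk" matching to local disks so a remote SAN LUN
# cannot be selected (filter argument per the ESX 4 scripted-install docs).
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk=local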

ESX automated deployment email completion notification

How would you like to kick off your ESX installation, then go have some coffee, go for a jog, or just hang out by the water cooler until it is finished, without wondering whether it’s sitting there done and waiting for you? Well, you can with this ESX email script. Incorporating this script as part of your ESX automated deployment script allows you to configure your server to email you once the post-installation configuration is finished.

So what do you need to do? It’s simple: you can get the mail_notify script that I found on yellow-bricks.com from our downloads page. Once you have the script you will need to get it onto your server along with the MIME::Lite module (Lite.pm), which you can download here. Once you download and extract the package you can find the Lite.pm file under the /lib/MIME/ folder.

Then take the Lite.pm file and the mail_notify.pl file and tar them together for easy retrieval, and upload the resulting mail_notify.tar file to your web server.
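Creating the tarball is a one-liner; assuming both files sit in your current working directory:

# Bundle the Perl module and the notify script into one archive for the web server.
tar cvf mail_notify.tar Lite.pm mail_notify.pl

With the tarball staged on the web server, include the following in your automated deployment script.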

##### Setting up Mail Notification ########
echo Setting up mail notification
echo Setting up mail notification >> /var/log/post_install.log

cd /tmp
lwp-download http://[server ip]/path/mail_notify.tar
tar xvf mail_notify.tar
mkdir /usr/lib/perl5/5.8.0/MIME
mv Lite.pm /usr/lib/perl5/5.8.0/MIME/

##### Move the files to where they belong #######
mv mail_notify.pl /usr/local/bin/
chmod +x /usr/local/bin/mail_notify.pl

####### Let’s send an email that the install is finished #####
/usr/local/bin/mail_notify.pl -t youremail@yourdomain.com -s "Server installation complete" -a /var/log/post_install.log -m "Server installation complete. Please review the attached log file to verify your server installed correctly" -r [your smtp server]

Optionally, you could set the SMTP server in the mail_notify.pl script and not have to specify it when sending a mail message.

If you include this at the end of the post-installation portion of your script, but before the EOF line, you will get a nice email notification informing you that your installation has finished, with the post_install.log file attached.

Network configuration for automated ESX deployment

I have been asked this question a few times, so I thought it would be wise to post an article on it. When deploying an automated build script with the kickstart and/or installation files located on HTTP, FTP, or NFS, there are network configuration dependencies that you need to be aware of.

The ESX installer is a modified version of anaconda, which is the same installer used for Red Hat and a few other Linux variants. Anaconda is what allows for the kickstart portion of the automated build script. Anaconda itself has some limitations as far as what it supports.
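For reference, the install-time networking that anaconda uses comes straight from the kickstart network line; a typical static example looks something like this (all addresses and names here are placeholders):

# Install-time (anaconda) networking. Whatever network this interface lands on
# must be reachable untagged, for the reason described below. Values are examples.
network --device=eth0 --bootproto=static --ip=192.168.10.50 --netmask=255.255.255.0 --gateway=192.168.10.1 --nameserver=192.168.10.5 --hostname=esx01.example.local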

Anaconda does not support 802.1Q VLAN tagging. If you plan on tagging the Service Console network traffic, this will affect your kickstart installation: the anaconda installer will not tag the VLAN ID onto its traffic and therefore will not be able to perform the installation. You have a few options on how to handle this.

  1. Don’t have the networking folks tag the VLAN until after the install finishes.  However, this can cause problems if your post-installation script needs to grab some files from across the network, so be aware of what you are doing during your post installation.
  2. Use a dedicated deployment network.  If you use this option, take a look at my ESX 3.x Deployment script #2 located on our download page.
  3. Don’t tag the Service Console traffic.  If you share vSwitch0 between both the vmkernel (vMotion) interface and the Service Console, tag only the vmkernel traffic; this still allows for isolation of the traffic.  Have your network guys set the Service Console VLAN as the native (untagged) VLAN.  See the sketch after this list.
  4. Create a custom installation CD with all the necessary files located on the CD.
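For option 3, the post-installation script can create the tagged vmkernel port group while leaving the Service Console port group untagged. A minimal sketch, assuming vSwitch0 already exists and using a made-up VLAN ID and addresses:

##### Option 3: tag only the vmkernel traffic (example values) #####
# Add a VMkernel port group to vSwitch0 and tag it with VLAN 105.
esxcfg-vswitch -A VMkernel vSwitch0
esxcfg-vswitch -v 105 -p VMkernel vSwitch0
# Give the vmkernel interface an address on the tagged network.
esxcfg-vmknic -a -i 192.168.105.50 -n 255.255.255.0 VMkernel
# The Service Console port group gets no -v call, so its traffic leaves
# untagged and rides the native VLAN configured on the physical switch.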