Creating an Automated ESXi Installer

Back in the summer, I saw Stu’s post about automating the installation of ESXi, and I was reminded of it again by Duncan’s post. Then I found myself in a situation where a customer had bought 160 blades for VMware ESXi. With that many systems, building them by hand without mistakes would be nearly impossible. I took the ideas from Stu and Duncan and created an automated ESXi installer that works from a PXE deployment server, such as the Ultimate Deployment Appliance. I then took it a step further and added the ability to use a USB stick or a CD for those times when PXE is not allowed. The document below is the result.

This is a little different from the idea of a stateless ESXi server, where the hypervisor itself boots from PXE. Here, only the installer boots from PXE so that the hypervisor can be installed on a local disk, an internal USB stick, or an SD card. You could also use it for a boot-from-SAN situation, but take extreme care so you don’t accidentally format a VMFS disk.
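For reference, the PXE piece itself is just a boot menu entry pointing at the installer’s boot modules. The sketch below is only an illustration based on my reading of the ESXi 4.0 installable media; the module names vary between builds, so check your own media before borrowing it:

    # /tftpboot/pxelinux.cfg/default (illustrative entry; mboot.c32 and the
    # module list come from the ESXi installable media, not stock syslinux)
    DEFAULT esxi-install
    LABEL esxi-install
      KERNEL mboot.c32
      APPEND vmkboot.gz --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz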

As always, if anyone has comments, corrections, etc., please feel free to post a comment below.


ESX vs. ESXi: Which is Better? Revisited.

For over a year now, I have started off telling customers in Plan and Design engagements that they would be using ESXi unless we uncovered a compelling reason NOT to use it. The “which do I use” argument is still going strong. Our blog post “ESX vs. ESXi which is better?” was posted in April and is still our most popular. It seems to be a struggle for many people to let go of the service console, but VMware is clearly moving in the direction of the thinner ESXi hypervisor and is working to provide alternatives to the service console.

VMware has provided a comparison of ESX vs. ESXi for version 3.5 for a while. Last night, VMware posted the equivalent ESX vs. ESXi comparison for version 4. It’s a great reference.

Changes to the ESX Service Console and ESX vs. ESXi…again

A whitepaper was posted in the VMTN communities on Thursday outlining the differences between the ESX 3.x and ESX 4.x service consoles. It also offers resources for transitioning COS-based apps and scripts to ESXi via the vSphere Management Assistant and the vSphere CLI, and briefly mentions the vSphere PowerCLI. If you are a developer or write scripts for VMware environments, also check out the Communities Developer section.
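To give a taste of what that transition looks like, here is a rough PowerCLI sketch of a task you might otherwise have scripted with esxcfg-vswitch in the service console (the server and device names below are hypothetical):

    # Add a vSwitch and port group remotely - no service console required
    Connect-VIServer vcenter01.example.com
    $esx = Get-VMHost "esx01.example.com"
    $vs = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic "vmnic2"
    New-VirtualPortGroup -VirtualSwitch $vs -Name "VM Network 2" -VLanId 100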

I hear it time and time again: the full ESX console is going away, and ESXi is the way to go. I know there are valid arguments for keeping ESX around, but they are few. Failing USB keys may be one, but I have not heard of it actually happening; if it worries you, use boot from SAN, since you need a SAN anyway. As for hung VM processes, there are a few ways to address those in ESXi.

If the techie wonks at VMware are publishing articles about how to transition to ESXi, then resistance is futile…you WILL be assimilated…

VMware ESX Configuration Maximums Comparison Matrix

Have you ever needed an easy-to-reference way to see the configuration maximums for different versions of VMware ESX? I seem to need this all the time, and I find it a huge pain to keep referring to each of the individual VMware documents for the answers. Sometimes I also want to see what changed between versions, and I can’t seem to memorize this information in my tiny little brain. So I went ahead and created a “Configuration Maximums Comparison Matrix” based on the VMware Configuration Maximums document for each version.

You’ll notice some settings don’t have values for every version. That is because they were not published in the VMware documents; as I go through additional documents and extract those values, I will update the matrix to reflect them. For now, the document includes everything from the VMware Configuration Maximums published for each of these versions:

VMware ESX 3
VMware ESX 3.5 & ESX 3.5 Update 1
VMware ESX 3.5 Update 2, Update 3, & Update 4
VMware vSphere 4.0 (ESX 4)

You can find the document in our downloads section, or you can click here. Hope you find it useful; I know I will.

VMware vSphere 4 (ESX 4.0, vCenter 4.0) Alarms and Host Profiles

Some are speculating that next Tuesday VMware will announce the release of VMware vSphere, which is essentially Virtual Infrastructure 4.0 and would include ESX 4.0. I can’t say what VMware is going to do, but over the next few weeks I will be publishing information on vSphere as well as some instructional videos. For now, I have some teasers for you.

Here is a screenshot of the alarms available in vSphere. As you can see, they have expanded the alarm feature well beyond what was available in VI3.

[Screenshot: vSphere alarms]

I’m sure most of you have heard of the new host profiles feature. If you haven’t had the fortune of checking it out, here are some screenshots showing what options are available to you as part of a host profile. If you are not much for scripting and just can’t stand those pesky automated build scripts, you will love this feature: it gives you the ability to configure just about every aspect of an ESX host without having to deal with any scripting.

[Screenshots: vSphere host profile options]

As you can see in this screenshot, all of these settings are very easy to set via the GUI.

[Screenshot: vSphere host profile settings]
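For those who do like scripting, host profiles are exposed through the vSphere PowerCLI as well. As a rough sketch, assuming the host profile cmdlets work as I understand them (hostnames are hypothetical):

    # Create a profile from a reference host, then apply it to another host
    $refHost = Get-VMHost "esx01.example.com"
    $hp = New-VMHostProfile -Name "GoldProfile" -ReferenceHost $refHost
    # The target host should be in maintenance mode before the profile is applied
    Set-VMHost -VMHost (Get-VMHost "esx02.example.com") -State Maintenance
    Apply-VMHostProfile -Entity (Get-VMHost "esx02.example.com") -Profile $hp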

So stay tuned, as there is much more to come. I’m currently working on videos covering installing and configuring vSphere from the ground up, and I plan on getting into all of the new features available in this release.

VMware Virtual Center – Physical or Virtual?

Over the years there has been some controversy over this topic: should Virtual Center (vCenter) be physical or virtual? There is the argument that it should be physical to ensure consistent management of the virtual environment. Of course, there is also the fact that Virtual Center requires a good amount of resources to handle the logging and performance information.

I’m a big proponent of virtualizing Virtual Center. With the hardware available today there is no reason not to; even in large environments that really tax the Virtual Center server, you can just throw more resources at it.

Many companies are virtualizing mission-critical applications to leverage VMware HA to protect them, so how is Virtual Center any different? But what do you do if Virtual Center crashes? How do you find and restart Virtual Center when DRS is enabled on the cluster?

You have a few options here.

  1. Override the DRS setting for the Virtual Center VM and set it to manual. That way you will always know where your Virtual Center server is if you need to resolve issues with it.
  2. Utilize PowerShell to track the location of your virtual machines. I wrote an article that included a simple script to do this, which I will include on our downloads page for easy access; a minimal version is sketched just after this list.
  3. Run an isolated two-node ESX cluster for infrastructure machines.
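For option 2, a minimal sketch of the tracking script looks something like this (run it on a schedule; the output path is just an example):

    # Record which ESX host each VM is currently running on
    Get-VM | Select-Object Name, @{Name="Host"; Expression={(Get-VMHost -VM $_).Name}} |
        Export-Csv C:\reports\vm-locations.csv -NoTypeInformation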

So my last option warrants a little explaining. Why would you want to run a dedicated two-node cluster just for infrastructure VMs? The real question is, why wouldn’t you? Think about it: Virtual Center is only a small part of the equation. VC and your ESX hosts depend on DNS, NTP, AD, and other services. What happens if you lose DNS? You lose your ability to manage your ESX hosts through VC if you follow best practice and add them by FQDN. If AD goes down you have much larger issues, but if your AD domain controllers are virtual and you somehow lose them both, that’s a problem, and one that could affect your ability to access Virtual Center. So why not build an isolated two-node cluster that houses your infrastructure servers? You’ll always know where they are, you can use affinity rules to keep servers that back each other up on separate hosts, and you can always have a cold spare available for Virtual Center.
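The affinity piece can be scripted too. A quick sketch using the toolkit’s New-DrsRule cmdlet, with hypothetical names for the infrastructure cluster and a pair of domain controllers:

    # Anti-affinity rule - keep the two DCs on separate hosts in the infra cluster
    $infraCluster = Get-Cluster "Infra-Cluster"
    New-DrsRule -Cluster $infraCluster -Name "Separate-DCs" -KeepTogether $false -VM (Get-VM "dc01","dc02")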

Obviously this is not a good option for small environments, but if you have 10, 30, 40, 80, or 100 ESX hosts and upwards of a few hundred VMs, I believe this is not only a great design solution but a much-needed one. If you are managing that many ESX hosts, it’s important to know for sure where the essential infrastructure virtual machines reside, since they affect a large part of your environment, if not all of it.

ESX vs. ESXi which is better?

So the question is: which is better, VMware ESX or VMware ESXi? A lot of die-hard Linux fans will always say VMware ESX because of their attachment to the service console. The service console is a great tool, and once upon a time it served its purpose, but today there are many other options available to manage your VMware ESX servers without it: the Remote CLI, the VI Toolkit for Windows (PowerShell), and last but not least the VIMA.

With these tools you can effectively create scripts to help manage your VMware ESX environment without the service console. The service console opens up an additional security risk for each and every VMware ESX host you have deployed, and mitigating this risk increases the management overhead involved in maintaining and deploying these servers in your environment. VMware ESX also consumes more server resources than VMware ESXi: the service console uses CPU cycles and memory that could otherwise be running virtual machines.

As far as features and functionality go, VMware ESX and VMware ESXi are equals; they both support all of the Enterprise features available as part of VI3. There are some add-on products that require VMware ESX, such as Lab Manager and Stage Manager, but hopefully they too will be ported to VMware ESXi. You can find the VMware ESX and ESXi comparison here.

A growing number of servers from all major vendors support embedded VMware ESXi, and if you have one of these servers there are even greater benefits to running it. With embedded ESXi there is no need for internal or SAN storage for your boot partitions, and eliminating internal storage is a great way to go green. The average cost of a 73GB 15K SAS drive is $400.00, and you would typically have (2) for redundancy, adding an average of $800.00 to the cost of each server. The estimated annual cost to run a single SAS drive is roughly $23.00, or $46.00 per server per year, and that does not include the additional cooling capacity needed for the heat the drives produce.

If you have 40 VMware ESX servers running in your environment, that is $32,000 saved on hard disk acquisition and $1,840 per year in energy costs, not to mention the benefits to the environment from the reduction in your carbon footprint, as well as the reduced maintenance costs. I don’t have any figures on this, but there will most definitely be a savings in the overall administration effort required to support VMware ESXi vs. VMware ESX; the sheer need to lock down the service console and keep it secured is a pretty demanding task.
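Putting those per-server figures into a quick calculation (the numbers come straight from the paragraphs above):

    # Savings from eliminating (2) local SAS drives per host, across 40 hosts
    $hostCount   = 40
    $diskSavings = $hostCount * 800    # $32,000 in acquisition cost avoided
    $energyYear  = $hostCount * 46     # $1,840 per year in energy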

I regularly hear “We are waiting on VMware ESXi,” and when I ask why, I never hear a well-thought-out answer. I would like to hear your opinion on this topic, so please leave a comment with your views on VMware ESX vs. VMware ESXi; I would like to gain some greater insight into why more organizations are not making the switch.

VMware HA Cluster Sizing Considerations

To properly size an HA failover cluster, there are a few things you need to determine: how many hosts are going to be in your cluster, how many host failures you want to be able to tolerate (N+?), and, ideally, resource utilization information about your VMs to gauge fluctuation. Once we know this information, we can use a simple formula to determine the maximum utilization for each host that still maintains the desired HA failover level.

Here is an example:

Let’s say we have 5 hosts in a DRS cluster and we want to be able to tolerate the failure of (1) host (N+1). We also want to keep 10% overhead on each server to account for resource fluctuation. First we take 10% off the top of all (5) servers, which leaves us with 90% usable resources on each host. Next we account for the loss of (1) host: in the event a host is lost, we need to distribute its load across the remaining (4) hosts. To do this, we divide the failed host’s 90% of possible resources by the (4) remaining hosts, which tells us we need to set aside 22.5% of each remaining host for the failed server’s load.

Taking into account the original 10% overhead plus the 22.5% capacity needed for failover, we need to keep 32.5% of each host’s resources available, which means we can only utilize 67.5% of each host in the cluster to maintain an N+1 failover cluster with 10% overhead for resource fluctuation. The formula for this is:

((100 - %overhead) * #host_failures) / (#hosts - #host_failures) + %overhead = % reserved per ESX host
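If you would rather not do the math by hand, the formula drops straight into a few lines of PowerShell (the function name is mine):

    # Percent of each host to reserve for failover capacity plus fluctuation overhead
    function Get-HAReservePercent {
        param([int]$NumHosts, [int]$HostFailures, [double]$OverheadPct)
        ((100 - $OverheadPct) * $HostFailures) / ($NumHosts - $HostFailures) + $OverheadPct
    }

    Get-HAReservePercent -NumHosts 5 -HostFailures 1 -OverheadPct 10    # 32.5
    Get-HAReservePercent -NumHosts 8 -HostFailures 2 -OverheadPct 20    # 46.67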

Example 1:

((100 - 10) * 1) / (5 - 1) + 10 = 32.5
(5-server cluster with 10% overhead allowing 1 host failure) 67.5% of each host usable

Example 2:

Failover of 1 host

((100 - 20) * 1) / (8 - 1) + 20 = 31.4
(8-server cluster with 20% overhead allowing for 1 host failure) 68.6% of each host usable

Failover of 2 hosts

((100 - 20) * 2) / (8 - 2) + 20 = 46.7
(8-server cluster with 20% overhead allowing for 2 host failures) 53.3% of each host usable

Determining the %overhead can be tricky without a good capacity assessment, so be careful: if you don’t allocate enough overhead and you have host failures, performance can degrade and you could experience contention within the environment. I know some of the numbers seem dramatic, but redundancy comes with a cost no matter what form it takes.

VI Toolkit PowerShell simple script #3 – Find Snapshots

This really isn’t a script so much as a one-liner, but I think everyone gets the idea.

It finds VM snapshots for all servers in your Virtual Infrastructure and displays the VM name, snapshot name, creation date, and power state. You can limit the VMs this affects by using the location-specific commands covered earlier.

Get-Snapshot -VM (Get-VM) | Select-Object VM, Name, Created, PowerState | Export-Csv [path_filename_csv]

There is nothing to modify except the location and name of the CSV file you would like to save the information to. Go ahead and give it a try; you’ll be amazed at what you find. I frequently hear “We don’t use snapshots,” and then I run this little command and always find a few. Needless to say, this little guy becomes a power tool in the VMware admin’s arsenal.

VI Toolkit PowerShell simple script #2 – Create new NFS datastore

This simple but useful script adds a new NFS datastore to all the hosts in an ESX cluster. Alternatively, you can replace get-cluster CLUSTER_NAME with get-datacenter DATACENTER_NAME and add an ISO NFS share to your whole datacenter.

foreach ($ESXhost in Get-Cluster CLUSTER_NAME | Get-VMHost)
{
    New-Datastore -Nfs -VMHost $ESXhost -Name [datastore_name] -Path [/remote_path] -NfsHost [nfs_server]
}

Simply replace CLUSTER_NAME with the name of the cluster you would like to run the script against, datastore_name with the name you want the datastore to have, remote_path with the NFS path on the NFS server, and nfs_server with the hostname or IP address of the NFS server you are connecting to.

Alternate method:

New-Datastore -Nfs -VMHost (Get-Cluster "cluster_name" | Get-VMHost) -Name [datastore_name] -Path [/remote_path] -NfsHost [nfs_server]