vSphere Service Console and Disk Partitioning

Everyone at this point should be aware that the Service Console is now located in a vmdk on a VMFS partition.  The Service Console vmdk must be stored on a VMFS datastore, and that datastore must be either local storage or SAN storage presented only to the one host.  So I guess there will be no shared VMFS datastores to house all the Service Consoles.  The next question I had about the new Service Console was the /boot partition: where is it, and how does the server bootstrap?  I can't say I have totally gotten to the bottom of this yet, but I have discovered a few things.  While digging into scripted installations of vSphere I first looked at the disk partitioning, which sheds a little light on the boot process.  Here is what the disk partitioning portion of the script looks like:

part /boot --fstype=ext3 --size= --onfirstdisk
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk
part None --fstype=vmkcore --size=100 --onfirstdisk
# Create the vmdk on the cos vmfs partition.
virtualdisk cos --size=8000 --onvmfs=storage1
# Partition the virtual disk.
part / --fstype=ext3 --size=0 --grow --onvirtualdisk=cos
part swap --fstype=swap --size=1600 --onvirtualdisk=cos

Notice the "--onfirstdisk" switch at the end of the first three partitions: the /boot, a VMFS partition, and a vmkcore partition are all on the physical disk.  Notice the creation of the Service Console vmdk with "virtualdisk cos --size=8000 --onvmfs=storage1", and that the swap for the Service Console is located inside that vmdk.  To account for this change VMware has added some new configuration options for scripting the installation of the COS.  Finally, you'll notice the creation of the / and swap partitions for the COS utilizing the "--onvirtualdisk=cos" switch.

I'm still working on and discovering some of the new ways to do scripted installations with vSphere 4.0, but I thought this little tidbit would be helpful to those of you wondering how the COS ties into the whole mix.  There are a few interesting things about this new method.

There is no need to specify the actual device; however, I would be very cautious with the --onfirstdisk switch if SAN LUNs are presented to the server, and I would take extra precautions to ensure that no SAN LUNs are connected during the install.  There really aren't any best practices around this configuration yet, so there are a few things that I think still need to be determined.  For instance, if you were planning on running VMs from the local VMFS, should you create multiple VMFS partitions and have one solely for the COS?  I almost think it would be beneficial just for the logical separation; a rough sketch of that layout follows below.  I will be digging a bit deeper into this, but I would love to hear others' views on the COS, how they plan to deploy it, and why.  So please comment and let me know what you think.
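Here is what I mean by a dedicated COS VMFS.  This is just an untested sketch reusing only the switches from the script above; the "cosstore" label and the sizes are placeholders, not recommendations:

# Untested sketch: a small VMFS just for the COS vmdk, storage1 reserved for VMs
part /boot --fstype=ext3 --size= --onfirstdisk
part cosstore --fstype=vmfs3 --size=10000 --onfirstdisk
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk
part None --fstype=vmkcore --size=100 --onfirstdisk
# The COS vmdk lands on its own VMFS instead of the shared VM datastore
virtualdisk cos --size=8000 --onvmfs=cosstore
part / --fstype=ext3 --size=0 --grow --onvirtualdisk=cos
part swap --fstype=swap --size=1600 --onvirtualdisk=cos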

ESX Datastore Sizing and Allocation

I have been seeing a lot of activity in the VMTN forums regarding datastore sizing and free space, so I decided to write a post about this topic.  There are endless possibilities when it comes to datastore sizing and configurations, but I'm going to focus on a few key points that should be considered when structuring your ESX datastores.

All VM files kept together

In this configuration all VM files are kept together on one datastore.  This includes the vmdk file for each drive allocated to the VM, the vmx file, log files, the nvram file, and the vswap file.  When storing virtual machines this way there are some key considerations to take into account.  You should always allow for 20% overhead on your datastores to leave enough space for snapshots and vmdk growth if necessary.  When sizing for this overhead, remember that when a VM is powered on, a vswap file equal in size to the VM's memory is created for it, and that space has to be accounted for as well.
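For example, a VM with 45GB of allocated disk and 2GB of memory actually occupies about 47GB on the datastore once it is powered on, so the 20% overhead should be figured on 47GB, not 45GB.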

For Fibre Channel and iSCSI SANs you should also limit the number of VMs per datastore to no more than 16.  With these types of datastores, file locking and SCSI reservations create extra overhead, and limiting the number of VMs to 16 or fewer reduces the risk of contention on the datastore.  So how big should you make your datastores?  That's a good question, and it will vary from environment to environment.  I always recommend 500GB as a good starting point.  This is not a number that works for everyone, but I use it because it helps limit the number of VMs per datastore.

Consider the following: your standard VM template consists of two drives, an OS drive and a data drive.  The OS drive is standardized at 25GB, and the data drive starts at a default of 20GB with larger drives allocated when needed.  The standard template also allocates 2GB of memory to the VM.  Anticipating a maximum of 16 VMs per datastore, I would allocate as follows:

((OS drive + data drive) * 16) + (memory * 16) + (100MB log files * 16) = total VM space needed, plus 20% overhead

(25GB + 20GB) * 16 = 720GB of vmdk space
720GB + (2GB * 16 = 32GB of vswap) = 752GB
752GB + (16 * 100MB = 1.6GB of log files) = 753.6GB
753.6GB * 1.2 (20% overhead) = 904.32GB, round up to 910GB needed

Depending on how you carve up your storage you may want to bump this to 960GB or 1024GB, so as you can see, the 500GB starting point was proven wrong for this scenario.  The point is that you should have standardized OS and data drive sizes so you can properly estimate and settle on a standardized datastore size.  This will never be perfect, as there will always be VMs that are anomalies.

Keep in mind that if you fill your datastore and don't leave room for the vswap file that is created when a VM powers on, you will not be able to power on the VM.  Also, if a snapshot grows to fill a datastore, the VM will crash, and your only option for committing the snapshot will be to add an extent to the datastore, because you will need free space to commit the changes.  Extents are not recommended and should be avoided as much as possible.

Separate VM vswap files

There are a number of options available in Virtual Infrastructure for handling the VM's vswap file.  You can set the location of this file at the VM, ESX server, or cluster level, and you can choose to locate it on a local datastore or on one or more shared datastores.  Below are some examples:

Assign a local datastore per ESX server for all VMs running on that server.

This option allows you to utilize a local VMFS datastore to store the VMs' vswap files, saving valuable shared disk space.  When using a local datastore I recommend allocating enough storage for all the available memory in the host plus 25% for memory oversubscription.
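For example (hypothetical numbers), a host with 64GB of RAM would call for roughly an 80GB local vswap datastore (64GB + 25%).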

Create one shared datastore per ESX cluster.

In this option you set one datastore at the cluster level for all vswap files.  This allows you to create one large datastore, set the configuration option once, and never worry about it again.  Again, I would allocate enough space for the total amount of memory in the whole cluster plus 25% for oversubscription.
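Continuing the hypothetical example, a ten-host cluster with 64GB of RAM per host (640GB total) would call for roughly an 800GB vswap datastore (640GB + 25%).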

Multiple shared datastores in a cluster.

In this option you have different scenarios: you can have one shared datastore per ESX host in the cluster, one datastore for every two hosts in the cluster, and so on.  You would need to assign the vswap datastore at the ESX host level for this configuration.

Note: Moving the vswap to a separate location can impact the performance of VMotion; it can extend the amount of time it takes for the VM to fully migrate from one host to another.

Hybrid Configuration.

Just as it's possible to locate the vswap on another datastore, it is also possible to split a VM's vmdk disks onto separate datastores.  For instance, you could have datastores for:

OS Drives
Data Drives
Page Files
vSwap files

To achieve this you would tell the VM where to create each drive and have different datastores allocated for these different purposes.  This is especially handy when planning to implement DR, because it allows you to replicate only the data you want and skip the stuff you don't, like the vswap and page files.  With this configuration you can also have different replication strategies for the data drives and OS drives.

Hope you found this post useful.