ESX Datastore sizing and allocation

Caution: Articles are written for technical, not grammatical, accuracy. If poor grammar offends you, proceed with caution ;-)

I have been seeing a lot of activity in the VMTN forums regarding datastore sizing and free space, so I decided to write a post about this topic.  There are endless possibilities when it comes to datastore sizing and configurations, but I’m going to focus on a few key points that should be considered when structuring your ESX datastores.

All VM files kept together

In this configuration all VM files are kept together on one datastore.  This includes the vmdk file for each drive allocated to the VM, the vmx file, log files, the nvram file, and the vswap file.  When storing virtual machines this way, there are some key considerations to take into account.  You should always allow for 20% overhead on your datastores so there is enough space for snapshots and vmdk growth if necessary.  When allocating this overhead, keep in mind that when a VM is powered on, a vswap file is created for the virtual machine equal in size to the VM’s memory.  This has to be accounted for in your 20% overhead.

For Fibre Channel and iSCSI SANs you should also limit the number of VMs per datastore to no more than 16.  With these types of datastores, file locking and SCSI reservations create extra overhead, and limiting the number of VMs to 16 or fewer reduces the risk of contention on the datastore.  So how big should you make your datastores?  That’s a good question, and it will vary from environment to environment.  I always recommend 500 GB as a good starting point.  This is not a number that works for everyone, but I use it because it helps limit the number of VMs per datastore.

Consider the following: your standard VM template consists of two drives, an OS drive and a data drive.  Your OS drive is standardized at 25 GB, and your data drive defaults to 20 GB, with larger drives allocated when needed.  Your standard template also allocates 2 GB of memory to each VM.  Anticipating a maximum of 16 VMs per datastore, I would allocate as follows:

((OS drive + data drive) × 16) + (memory × 16) + (16 × 100 MB of log files) = total VM space needed, plus 20% overhead

(25 GB + 20 GB) × 16 = 720 GB
720 GB + (2 GB × 16) = 752 GB
752 GB + (16 × 100 MB) = 753.6 GB
753.6 GB × 1.20 = 904.32 GB, rounded up to 910 GB needed
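
If you want to script this estimate, here is a minimal Python sketch of the same calculation.  The function name and defaults are mine, not any VMware tool; plug in your own template values:

```python
def datastore_size_gb(os_drive_gb, data_drive_gb, memory_gb,
                      vms=16, log_mb_per_vm=100, overhead=0.20):
    """Estimate datastore size: VM disks + vswap + logs, plus overhead."""
    disk_gb = (os_drive_gb + data_drive_gb) * vms
    vswap_gb = memory_gb * vms               # vswap file equals VM memory at power-on
    log_gb = (log_mb_per_vm / 1000.0) * vms  # ~100 MB of log files per VM
    return (disk_gb + vswap_gb + log_gb) * (1 + overhead)

# The worked example above: 25 GB OS drive, 20 GB data drive, 2 GB RAM, 16 VMs
print(round(datastore_size_gb(25, 20, 2), 2))  # 904.32 -> round up to 910 GB
```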

Depending on how you carve up your storage, you may want to bump this to 960 GB or 1024 GB, so as you can see, the 500 GB rule was proven wrong for this scenario.  The point is that you should have standardized OS and data partitions so you can properly estimate a standardized datastore size.  This will never be perfect, as there will always be VMs that are anomalies.

Keep in mind that if you fill your datastore and don’t leave room for the vswap file that is created when a VM powers on, you will not be able to power on the VM.  Also, if a snapshot grows to fill a datastore, the VM will crash, and your only option for committing the snapshot will be to add an extent to the datastore, because you will need free space to commit the changes.  Extents are not recommended and should be avoided as much as possible.

Separate VM vswap files

There are a number of options available in Virtual Infrastructure for handling the VM’s vswap file.  You can set the location of this file at the VM, ESX server, or cluster level, and you can choose to locate it on a local datastore or on one or more shared datastores.  Below are some examples:

Assign a local datastore per ESX server for all VMs running on that server.

This option allows you to use a local VMFS datastore to store the VMs’ vswap files, saving valuable shared disk space.  When using a local datastore, I recommend allocating enough storage for all the available memory in the host plus 25% for memory oversubscription.
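For example (hypothetical numbers), a host with 64 GB of physical RAM would need a local vswap datastore of roughly 64 GB × 1.25 = 80 GB.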

Create one shared datastore per ESX cluster.

With this option you can set one datastore at the cluster level for all vswap files.  This allows you to create one large datastore, set the configuration option once, and never worry about it again.  Again, I would allocate enough space for the total amount of memory in the whole cluster plus 25% for oversubscription.
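For example (again hypothetical numbers), an eight-host cluster with 64 GB of RAM per host would need roughly 8 × 64 GB × 1.25 = 640 GB for the shared vswap datastore.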

Multiple shared datastores in a cluster.

With this option you have several scenarios: you can have one shared datastore per ESX host in the cluster, one datastore for every two hosts in the cluster, and so on.  You would need to assign the vswap datastore at the ESX host level for this configuration.

Note: Moving the vswap file to a separate location can impact the performance of VMotion, extending the amount of time it takes for the VM to fully migrate from one host to another.

Hybrid configuration

Just as it’s possible to locate the vswap file on another datastore, it is also possible to split a VM’s vmdk disks onto separate datastores.  For instance, you could have dedicated datastores for:

OS Drives
Data Drives
Page Files
vSwap files

To achieve this, you tell the VM where to create each drive and have different datastores allocated for these different purposes.  This is especially handy when planning for DR, since it allows you to replicate only the data you want and skip the stuff you don’t, like the vswap and page files.  With this configuration you can also have different replication strategies for the data drives and OS drives.
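
As a rough illustration of the policy, here is a small Python sketch.  The datastore names and the "replicate" flags are made up for this example, not VMware settings; the point is simply which datastores a DR plan would copy to the remote site:

```python
# Hypothetical mapping of VM file types to dedicated datastores.
placement = {
    "os_drive":   {"datastore": "ds-os",    "replicate": True},
    "data_drive": {"datastore": "ds-data",  "replicate": True},
    "page_file":  {"datastore": "ds-page",  "replicate": False},  # guest OS recreates it
    "vswap":      {"datastore": "ds-vswap", "replicate": False},  # recreated at power-on
}

# Only the datastores flagged for replication go to the DR site.
replicated = [v["datastore"] for v in placement.values() if v["replicate"]]
print(replicated)  # ['ds-os', 'ds-data']
```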

Hope you found this post useful.
