Just some more vSphere information

Here is some information about vSphere that I thought would be good to share with the world. As with everything else, this is just a drop in the bucket. I'm currently working on putting together some upgrade videos and screenshots, so check back; hopefully I will have them done by the end of the week.

Here is some interesting information about vSphere and what it supports. Keep in mind these are just some notes I jotted down:

ESX 4 Hosts (vSphere Host)
256 VMs per host
64 cores per host
512 GB RAM per host

vSphere VMs (Hardware version 7)
8 vCPUs
256 GB RAM
VMDirectPath I/O
Hot Plug Support (supports CPUs and memory)
ESX 2.x and 3.x VM support
Paravirtual SCSI adapter
MSCS 2008
Persistent Reservations in vmkernel
LSI Logic SAS (Virtual SAS controller)

Networking Improvements
New iSCSI stack with 10-30% improved performance
TCP/IP 2 Support (based on FreeBSD 6.1 / IPv6 / improved locking and threading capabilities)
VMXNet3
MSI/MSI-X
Receive Side Scaling
VLAN offloading
VMware DirectPath I/O

Storage Improvements
SCSI-3 Compliant
VMFS still SCSI-2
Target Port Group Support (TPGS)
Asymmetric Logical Unit Access (ALUA)
Pluggable Storage Architecture (PSA)
Updated iSCSI stack
Native SATA

Service Console
64-bit, 2.6 based Linux kernel compatible with RHEL 5.2
Support for both 32-bit and 64-bit applications
Root file system stored in a VMDK
VMkernel (64-bit only) runs and owns the device drivers
Address Space Layout Randomization (ASLR)
No Linux development packages or libraries

CPU
Enhanced Intel SpeedStep
Enhanced AMD PowerNow!

Security
Trusted Platform Module (TPM)
Digitally signed and validated modules
Memory integrity techniques combined with microprocessor capabilities to protect against buffer overflows

Guided Consolidation
500 Simultaneous Physical Machines
Modular Plug-in that can be installed on a different machine

Converter

Physical / Virtual / 3rd party
Server 2008 Support
Convert Hyper-V Machines to VMs

Update Manager

ESX / ESXi and Virtual Appliance Upgrades
Upgrade Virtual Hardware
VMware Tools
Baseline Groups

Upgrade vCenter steps

No SQL 2000 Support
2.x & 3.x Upgrade Path
Upgrade vCenter
Upgrade Update Manager
Use Update Manager to Upgrade Hosts
Upgrade VMware Tools, then the virtual hardware.

vSphere Host Update Utility
3.x to 4.x
Doesn't Upgrade VMFS Datastores or VMs
Installs with vSphere client
Supports Rollback for ESX only
Can be used to install patch releases to standalone hosts
Copies script and ISO to the ESX host, then reboots and installs

VMware vSphere 4 (ESX 4.0, vCenter 4.0) Alarms and Host Profiles

Some are speculating that next Tuesday VMware is going to announce the release of VMware vSphere, which is essentially Virtual Infrastructure 4.0 and includes ESX 4.0. I can't say what VMware is going to do, but over the next few weeks I will be publishing information on vSphere as well as some instructional videos. For now I have some teasers for you.

Here is a screenshot of the alarms available in vSphere. As you can see, they have expanded the alarm feature from what was available in VI3.

vsphere_alarms

I'm sure most of you have heard of the new host profiles. If you haven't had the fortune of checking out this cool new feature, here are some screenshots to show you what options are available to you as part of a host profile. If you are not much for scripting and just can't stand those pesky automated build scripts, then you will love this feature. It gives you the ability to configure just about every aspect of the ESX host without having to deal with any scripting.

vsphere_host_profiles_1

vsphere_host_profiles_2

vsphere_host_profiles_3

vsphere_host_profiles_4

vsphere_host_profiles_5

vsphere_host_profiles_6

vsphere_host_profiles_7

As you can see in this screenshot, all these settings are very easy to set via the GUI.

vsphere_host_profiles_8

So stay tuned as there is much more to come. I'm currently working on videos covering installing and configuring vSphere from the ground up, and I plan on getting into all of the new features available in this release.

Network configuration for automated ESX deployment

I have been asked this question a few times, so I thought it would be wise to post an article on it. When deploying an automated build script with the kickstart and/or installation files located on HTTP, FTP, or NFS, there are network configuration dependencies that you need to be aware of.

The ESX installer is a modified version of anaconda, which is the same installer used for Red Hat and a few other Linux variants. Anaconda is what allows for the kickstart portion of the automated build script. Anaconda itself has some limitations as far as what it supports.
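For reference, the network-related portion of a kickstart file for this kind of install looks roughly like the sketch below. Treat it as an outline only; the server URL, IP settings, hostname, and device name are placeholder values, not something pulled from a real build.

  # Pull the installation media over the network (FTP and NFS work the same way)
  url --url http://192.168.1.10/esx/

  # Static network settings anaconda uses during the install
  # (all addresses and the hostname here are examples)
  network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.2 --hostname=esx01.example.com --device=eth0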

Anaconda does not support 802.1q VLAN tagging. If you plan on tagging the service console network traffic, this will affect your kickstart installation. The anaconda installer will not tag the VLAN ID on the traffic and therefore will not be able to perform the installation. You have a few options on how to handle this.

  1. Don't have the networking folks tag the VLAN until after the install has finished.  However, this can cause problems if your post-installation script needs to grab some files from across the network, so be aware of what you are doing during your post installation.
  2. Use a dedicated deployment network.  If you use this option, take a look at my ESX 3.x Deployment script #2 located on our download page.
  3. Don't tag the service console traffic.  If you share vSwitch0 with both the VMkernel (vMotion) interface and the service console, only tag the VMkernel traffic; this still allows for isolation of the traffic.  Have your network guys set the service console VLAN as the native (untagged) VLAN (see the sketch after this list).
  4. Create a custom installation CD with all the necessary files located on the CD.
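If you go with option 3, the VLAN change can be made from the service console (or in the %post section of the kickstart) with esxcfg-vswitch. This is only a sketch; VLAN ID 105 and the portgroup names are examples, so substitute whatever your environment actually uses.

  # Tag only the VMkernel (vMotion) portgroup; the Service Console portgroup
  # stays untagged and rides the native VLAN on the physical switch port.
  esxcfg-vswitch -p "VMkernel" -v 105 vSwitch0
  esxcfg-vswitch -p "Service Console" -v 0 vSwitch0

  # Verify the portgroup and VLAN layout
  esxcfg-vswitch -l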

ESX local disk partitioning

I had a conversation with some colleagues of mine about ESX local disk partitioning and some interesting questions were raised.

How many are creating local vmfs storage on their ESX servers?
How many actually use that local vmfs storage?

Typically it is frowned upon to store VMs on local VMFS because you lose the advanced features of ESX such as vMotion, DRS, and HA. So if you don't run VMs from the local VMFS, then why create it? Creating this local datastore promotes its use just by being there. If you're short on SAN space, need to deploy a VM, and can't wait for the SAN admins to present you more storage, what do you do? I'm sure more frequently than not you deploy to the local storage to fill the need for the VM. I'm also sure that at least 20% of the time those VMs continue to live there.

Is the answer to not utilize local VMFS storage? If you don't, what do you do with the leftover space? Not all servers are created equal; servers often have different-sized local drives, so you have a few options. Do you create standards for your partitioning, set a partition such as / to grow, and accept varying configurations amongst your hosts? Or do you create a standard for all partition sizes and leave the rest of the space raw?

Typically, this is the partition scheme I use for the deployments I do:

/boot = 250 MB (Primary)
swap = 1600 MB (Primary)
/ = fill remaining space (Primary)
/var = 4096 MB (Extended)
/opt = 4096 MB (Extended)
/tmp = 4096 MB (Extended)
/home = 4096 MB (Extended)
vmkcore = 100 MB (Extended)

This configuration will create inconsistencies amongst hosts with varying drive sizes. To maintain consistency, I could do something like the following and leave the rest of the space raw.

/boot = 250 MB (Primary)
swap = 1600 MB (Primary)
/ = 8192 MB (Primary)
/var = 4096 MB (Extended)
/opt = 4096 MB (Extended)
/tmp = 4096 MB (Extended)
/home = 4096 MB (Extended)
vmkcore = 100 MB (Extended)
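For anyone scripting their builds, the fixed-size scheme above maps to kickstart partitioning directives along the lines of the sketch below. It assumes the first local disk shows up as sda and uses the ESX-specific vmkcore fstype; no local VMFS partition is created, so the remaining space stays raw. Adjust the disk name and sizes to taste, and put --grow on / instead of a fixed size if you prefer the fill-the-disk approach from the first scheme.

  # Kickstart partitioning sketch (sizes in MB, disk name is an example)
  part /boot --fstype ext3 --size 250 --asprimary --ondisk sda
  part swap --fstype swap --size 1600 --asprimary --ondisk sda
  part / --fstype ext3 --size 8192 --asprimary --ondisk sda
  part /var --fstype ext3 --size 4096 --ondisk sda
  part /opt --fstype ext3 --size 4096 --ondisk sda
  part /tmp --fstype ext3 --size 4096 --ondisk sda
  part /home --fstype ext3 --size 4096 --ondisk sda
  part None --fstype vmkcore --size 100 --ondisk sda
  # No vmfs3 partition here; the rest of the disk is intentionally left raw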

I'm a fan of utilizing all the space you have available, but others like consistency. What is your preference? Weigh in and let us know.