VMware vSphere 4 under the covers – First Look

On Tuesday, April 21st, VMware announced that it will be releasing vSphere 4 by the end of the second quarter. This is exciting news for anyone looking to take advantage of the new features available with this release. In this post I’m going to walk through a handful of them. There are over 100 new features in vSphere 4, and this post doesn’t come close to covering them all, but I will touch on some really exciting ones, with more to come in my next few posts.

Let’s start with the new home screen. It’s a handy way to navigate all the configuration areas of vSphere.

vsphere_home_screen1

Next, let’s take a look at the new “Hardware Status” screen. In this screenshot there is limited hardware information, which is due to my hardware; on actual server-grade hardware you can view just about anything you want about the physical machine. If you remember from my other vSphere post, you can also trigger alarms based on most of these sensors.

vsphere_hardware_status1

Next, let’s take a look at the changes made to the Host Summary screen. Notice the arrow pointing to the datastore with the alert. One of the alarms we can set is based on storage usage, so we no longer have to manually verify that free storage is within acceptable levels.
vsphere_host_info
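
For comparison, here’s roughly what that manual check looks like with the VI Toolkit (PowerCLI); the server name and the 25% threshold are placeholders I made up:

```powershell
# Manual free-space check that the datastore usage alarm now replaces.
Connect-VIServer -Server vcenter.example.com   # placeholder server name

$thresholdPct = 25   # hypothetical warning threshold

Get-Datastore | ForEach-Object {
    $freePct = [math]::Round(($_.FreeSpaceMB / $_.CapacityMB) * 100, 1)
    if ($freePct -lt $thresholdPct) {
        Write-Host "$($_.Name): only $freePct% free"
    }
}
```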

With ESX 3.5 we had to manually make sure we didn’t overuse resources in order to maintain an N+1 configuration for HA. I wrote an article on VMware HA and how to size your environment to maintain N+1. Now, when you determine that you need 37.5% overhead available on each server, you can specify that in your HA configuration rather than manually making sure you don’t exceed it.
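
As a quick refresher on the math behind that setting, here’s a back-of-the-napkin sketch (the four-host cluster size is hypothetical, and identical hosts are assumed):

```powershell
# N+1 sizing sketch: to survive one host failure in a cluster of
# identical hosts, keep one host's worth of capacity free, i.e.
# 100 / (host count) percent of the cluster's aggregate resources.
$hostCount   = 4   # hypothetical cluster size
$reservedPct = [math]::Round(100 / $hostCount, 1)
Write-Host "Reserve at least $reservedPct% of aggregate capacity for N+1"
```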

vsphere_cluster_overview
Take a look at the arrow in the above screenshot: it’s pointing to the reserved capacity on the hosts that ensures the proper failover capacity. The screenshot below shows the HA setting that configures this functionality.

vsphere_ha_screen

There are some new and interesting features surrounding networking. One new feature is the distributed switch. The distributed switch is a really exciting improvement, as it comes with support for private VLANs, Network VMotion, and of course third-party switches such as the Cisco Nexus 1000V.

vsphere_distributed_switch

There are many other networking enhancements, such as the new VMXNET Generation 3 driver and features like IP pools, similar to what’s available in Lab Manager.

vsphere_ip_pools

vsphere_ip_pools_2

vsphere_vm_options

In my last post on vSphere I showed the configuration items available as part of host profiles. Below is a screenshot showing host compliance, similar to that of Update Manager.

vsphere_profile_compliance1

One exciting change to virtual machines is the ability to hot-add memory and vCPUs. This gives you even more flexibility to make changes to servers without having to schedule downtime.

vsphere_mem_cpu_hotplug
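
If you’d rather script it than click through the GUI, a hot-add could look something like this in PowerCLI; the VM name and new memory size are hypothetical, and it assumes memory hot-add is enabled on the VM and supported by the guest OS:

```powershell
# Grow a running VM's memory without scheduling downtime.
# Assumes memory hot-add is enabled and the guest OS supports it.
$vm = Get-VM -Name "web01"          # hypothetical VM name
Set-VM -VM $vm -MemoryMB 4096 -Confirm:$false
```
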
The new resource view is very handy, giving you a snapshot of what is really going on with your VM. It shows how much memory is private to the VM and how much is shared with other VMs in the environment, which lets you see how much memory is being “de-duplicated.” It also shows the host overhead, memory being swapped, and whether ballooning is taking place.

vsphere_vm_resource_allocation
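
The counters behind this view are also exposed through the performance statistics, so you can pull them with Get-Stat if you want them outside the GUI. A minimal sketch, with a hypothetical VM name:

```powershell
# Pull the memory counters shown in the resource view: shared,
# ballooned (mem.vmmemctl = balloon driver), swapped, and host overhead.
$vm    = Get-VM -Name "web01"       # hypothetical VM name
$stats = "mem.shared.average", "mem.vmmemctl.average",
         "mem.swapped.average", "mem.overhead.average"

Get-Stat -Entity $vm -Stat $stats -Realtime -MaxSamples 1 |
    Select-Object MetricId, Value, Unit
```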

VMware has finally moved away from the annoying license server and integrated licensing into vCenter itself. This should make a lot of users very happy to no longer have to deal with managing those license files.

vsphere_license_server

Update Manager now has built-in support for a shared repository, so if you have a large deployment you can easily manage your update repositories across multiple Update Manager servers.

vsphere_update_manager

Have you had those annoying issues where some of the services vCenter depends on crash or stop running? Well, it’s easy to keep track of your vCenter service status now with the new vCenter Service Status information.
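
On the Windows side you can also do a quick sanity check of those services yourself. A minimal sketch in plain PowerShell (the display-name filter is a loose match; adjust it for your install):

```powershell
# List the VMware-related Windows services and their current state.
Get-Service | Where-Object { $_.DisplayName -like "VMware*" } |
    Select-Object DisplayName, Status
```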

vsphere_service_status

One thing I think is still lacking is scheduled tasks. They have added a few new options that can be scheduled, but I would have expected some additional improvements in this area.

vsphere_schedule_tasks

I hope you enjoyed this preview, and be sure to check back as I will be covering additional new features, including Fault Tolerance. Over the next few weeks and months I will be putting together more overview posts, best-practice articles, and video tutorials.

VMware vSphere 4 (ESX 4.0, vCenter 4.0) Alarms and Host Profiles

Some are speculating that next Tuesday VMware is going to announce the release of VMware vSphere, which is essentially Virtual Infrastructure 4.0 and includes ESX 4.0. I can’t say what VMware is going to do, but over the next few weeks I will be publishing information on vSphere as well as some instructional videos. For now I have some teasers for you.

Here is a screenshot of the alarms available in vSphere. As you can see, they have expanded the alarm feature from what was available in VI3.

vsphere_alarms

I’m sure most of you have heard of the new host profiles. If you haven’t had the fortune of checking out this cool new feature, here are some screenshots showing what options are available to you as part of a host profile. If you are not much for scripting and just can’t stand those pesky automated build scripts, then you will love this feature: it gives you the ability to configure just about every aspect of an ESX host without having to deal with any scripting.

vsphere_host_profiles_1

vsphere_host_profiles_2

vsphere_host_profiles_3

vsphere_host_profiles_4

vsphere_host_profiles_5

vsphere_host_profiles_6

vsphere_host_profiles_7

As you can see in this screenshot, all these settings are very easy to set via the GUI.

vsphere_host_profiles_8

So stay tuned, as there is much more to come. I’m currently working on videos covering installing and configuring vSphere from the ground up, and I plan on getting into all of the new features available in this release.

VMware Virtual Center – Physical or Virtual?

Over the years there has been some controversy over this topic: should Virtual Center (vCenter) be physical or virtual? There is the argument that it should be physical to ensure consistent management of the virtual environment. There is also the fact that Virtual Center requires a good amount of resources to handle the logging and performance information.

I’m a big proponent of virtualizing Virtual Center. With the hardware available today there is no reason not to. Even in large environments that really tax the Virtual Center server, you can just throw more resources at it.

Many companies are virtualizing mission-critical applications to leverage VMware HA to protect them. How is Virtual Center any different? So what do you do if Virtual Center crashes? How do you find and restart Virtual Center when DRS is enabled on the cluster?

You have a few options here.

  1. Override the DRS setting for the Virtual Center VM and set it to manual. That way you will always know where your Virtual Center server is if you need to resolve issues with it.
  2. Utilize PowerShell to track the location of your virtual machines. I wrote an article that included a simple script to do this, which I will include on our downloads page for easy access; a minimal sketch appears after this list.
  3. Run an isolated two-node ESX cluster for infrastructure machines.
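
Here’s a minimal version of the option-2 script, assuming the VI Toolkit (PowerCLI). Run it on a schedule while vCenter is up so the CSV is there when you need it; the server name and CSV path are placeholders, and older VI Toolkit builds expose the host property as .Host rather than .VMHost:

```powershell
# Record which host every VM is running on, so you can find the
# Virtual Center VM even when vCenter itself is down.
Connect-VIServer -Server vcenter.example.com   # placeholder server name

Get-VM |
    Select-Object Name, @{ Name = "Host"; Expression = { $_.VMHost.Name } } |
    Export-Csv -Path "C:\reports\vm-locations.csv" -NoTypeInformation
```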

So my last option warrants a little explaining. Why would you want to run a dedicated two-node cluster just for infrastructure VMs? The real question is: why wouldn’t you? Think about it. Virtual Center is a small part of the equation. VC and your ESX hosts depend on DNS, NTP, AD, and other services. What happens if you lose DNS? You lose your ability to manage your ESX hosts through VC if you follow best practice and add them by FQDN. If AD goes down you have much larger issues, but if your AD domain controllers are virtual and you somehow lose them both, that’s a problem that could affect your ability to access Virtual Center. So why not build an isolated two-node cluster that houses your infrastructure servers? You’ll always know where they are, you can use affinity rules to keep servers that back each other up on separate hosts, and you can always have a cold spare available for Virtual Center.
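
For the affinity piece, here’s a sketch of a DRS anti-affinity rule in PowerCLI; the cluster and VM names are hypothetical:

```powershell
# Keep the two domain controllers on separate hosts so a single
# host failure can't take out both.
$cluster = Get-Cluster -Name "Infra-Cluster"     # hypothetical names
$dcs     = Get-VM -Name "dc01", "dc02"
New-DrsRule -Cluster $cluster -Name "Separate-DCs" -KeepTogether:$false -VM $dcs
```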

Obviously this is not a good option for small environments, but if you have 10, 30, 40, 80, or 100 ESX hosts and upwards of a few hundred VMs, I believe this is not only a great design solution but a much-needed one. When you are managing that many ESX hosts, it’s important to know for sure where the essential infrastructure virtual machines reside, since they affect a large part of your environment, if not all of it.

VMware ESX 3.5 Update 4 Released

What’s New

Notes:

1. Not all combinations of VirtualCenter and ESX Server versions are supported and not all of these highlighted features are available unless you are using VirtualCenter 2.5 Update 4 with ESX Server 3.5 Update 4. See the ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes for more information on compatibility.
2. This version of ESX Server requires a VMware Tools upgrade.

The following information provides highlights of some of the enhancements available in this release of VMware ESX Server:

Expanded Support for Enhanced vmxnet Adapter — This version of ESX Server includes an updated version of the VMXNET driver (VMXNET enhanced) for the following guest operating systems:

* Microsoft Windows Server 2003, Standard Edition (32-bit)
* Microsoft Windows Server 2003, Standard Edition (64-bit)
* Microsoft Windows Server 2003, Web Edition
* Microsoft Windows Small Business Server 2003
* Microsoft Windows XP Professional (32-bit)

The new VMXNET version improves virtual machine networking performance and requires a VMware Tools upgrade.

Enablement of Intel Xeon Processor 5500 Series — Support for the Xeon processor 5500 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

QLogic Fibre Channel Adapter Driver Update — The driver and firmware for the QLogic Fibre Channel adapters have been updated to versions 7.08-vm66 and 4.04.06, respectively. This release provides interoperability fixes for QLogic Management Tools for FC Adapters and enhanced NPIV support.

Emulex Fibre Channel Adapter Driver Update — The driver for Emulex Fibre Channel Adapters has been upgraded to version 7.4.0.40. This release provides support for the HBAnyware 4.0 Emulex management suite.

LSI megaraid_sas and mptscsi Storage Controller Driver Update — The drivers for LSI megaraid_sas and mptscsi storage controllers have been updated to versions 3.19vmw and 2.6.48.18vmw respectively. The upgrade improves performance and enhances event-handling capabilities for these two drivers.

Newly Supported Guest Operating Systems — Support for the following guest operating systems has been added specifically for this release:

* SUSE Linux Enterprise Server 11 (32-bit and 64-bit)
* SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit)
* Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit)
* Windows Preinstallation Environment 2.0 (32-bit and 64-bit)

For more complete information about supported guests included in this release, see the Guest Operating System Installation Guide: http://www.vmware.com/pdf/GuestOS_guide.pdf.

Furthermore, pre-built kernel modules (PBMs) were added in this release for the following guests:

* Ubuntu 8.10
* Ubuntu 8.04.2

Newly Supported Management Agents — Refer to VMware ESX Server Supported Hardware Lifecycle Management Agents for the most up-to-date information on supported management agents.

Newly Supported I/O Devices — This release provides in-box support for the following on-board processors, I/O devices, and storage subsystems:

SAS Controllers and SATA Controllers:

The following SAS and SATA controllers are newly supported:

* PMC 8011 (for SAS and SATA drives)
* Intel ICH9
* Intel ICH10
* CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
* HP Smart Array P700m Controller

Notes:
1. Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
2. Storing VMFS datastores on native SATA drives is not supported.

Network Cards: The following are newly supported network interface cards:

* HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter
* HP NC362i Integrated Dual port Gigabit Server Adapter
* Intel 82598EB 10 Gigabit AT Network Connection
* HP NC360m Dual 1 Gigabit/NC364m Quad 1 Gigabit
* Intel Gigabit CT Desktop Adapter
* Intel 82574L Gigabit Network Connection
* Intel 10 Gigabit XF SR Dual Port Server Adapter
* Intel 10 Gigabit XF SR Server Adapter
* Intel 10 Gigabit XF LR Server Adapter
* Intel 10 Gigabit CX4 Dual Port Server Adapter
* Intel 10 Gigabit AF DA Dual Port Server Adapter
* Intel 10 Gigabit AT Server Adapter
* Intel 82598EB 10 Gigabit AT CX4 Network Connection
* NetXtreme BCM5722 Gigabit Ethernet
* NetXtreme BCM5755 Gigabit Ethernet
* NetXtreme BCM5755M Gigabit Ethernet
* NetXtreme BCM5756 Gigabit Ethernet

Expanded Support: The E1000 Intel network interface card (NIC) is now available for NetWare 5 and NetWare 6 guest operating systems.

Onboard Management Processors:

* IBM system management processor (iBMC)

Storage Arrays:

* Sun StorageTek 2530 SAS Array
* Sun Storage 6580 Array
* Sun Storage 6780 Array