Keep it simple, stupid – registering unregistered VMs

Last week my boss came to me and asked if I could write a script for a customer to register VMs after they had been replicated from one VI environment to another.  I agreed to take on the project.

Like everything I do these days, I decided to use PowerShell to write the script.  I have taken a liking to it, and the fact that I can run the scripts against both ESX and ESXi hosts saves me from having to re-create scripts all the time.  So I plugged away until 3 a.m., wrote the script, and tested it inside out and sideways in my lab.  Confident in the script's ability to register all VMs from all datastores, I went ahead and sent it off to the customer.

A few days later I was on a conference call with the customer.  They were having problems with the script; it wasn't registering all the VMs.  After a few hours of troubleshooting I realized that I needed to go back and recreate the problems in my lab to fix the script, but the customer didn't have that kind of time.

A short while after getting off the call with the customer, I received an email from them saying not to worry; they had gotten a shell script that worked.  Then I started to think…  I went into my lab and created a shell script that would do the job.  The shell script was five lines long, as opposed to the PowerShell script, which is about 40 lines.

The shell script, if anyone needs it, looks like this:

for v in `find /vmfs/volumes/ -name "*.vmx"`
do
echo "Registering $v" >> /log/registeredvms.log
vmware-cmd -s register "$v"
done
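
If you want to run it, something like this from the ESX service console should do the trick (the paths are examples only; also note that the backtick loop splits on whitespace, so VM paths containing spaces would need a while-read loop instead):

# save the loop above as /tmp/registervms.sh on the host, then:
mkdir -p /log                  # the script appends to /log/registeredvms.log
chmod +x /tmp/registervms.sh
/tmp/registervms.sh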

So the short of the story is that sometimes it is best to keep it simple, stupid.  Using PowerShell for this problem was overkill, and in the end there were issues that were overlooked that I still can't reproduce in my lab.  A simple shell script was all that was required, and it's what I should have originally decided on.

So in the end this is a lesson learned, and hopefully it will prevent someone else from making the same mistake.

VMware ESX Configuration Maximums Comparison Matrix

Have you ever needed an easy-to-reference way to see what the configuration maximums are for different versions of VMware ESX?  I know I seem to need this all the time, and I find it a huge pain to keep referring to each of the individual VMware documents to get the answers.  Sometimes I also want to see what the changes are between versions, and I can't seem to memorize this information in my tiny little brain.  So I went ahead and created a “Configuration Maximums Comparison Matrix” based on the VMware Configuration Maximums document for each version.

You'll notice some settings don't have values for every version.  This is because they were not published in the VMware documents.  As I go through some additional documents and extract these values, I will update the document to reflect them.  For now, the document does include everything from the VMware Configuration Maximums published for each of these versions:

VMware ESX 3
VMware ESX 3.5 & ESX 3.5 Update 1
VMware ESX 3.5 Update 2, Update 3, & Update 4
VMware vSphere 4.0 (ESX 4)

You can find the document in our downloads section, or you can click here. Hope you find this useful; I know I will.

VMware vSphere Upgrade Path Overview

Many of you are wondering how you will go about upgrading to VMware vSphere when it is released. Well, I'm here to say don't worry. The upgrade path from ESX 2.x & 3.x is very painless and fairly simple. A lot of you will remember the hoops you have had to jump through in the past performing upgrades and scripting installs; well, VMware is quickly trying to make all of that a thing of the past with new features available in vSphere.

Read More here

Just some more vSphere information

Here is some information about vSphere that I thought would be good to share with the world. As with everything else, this is just a drop in the bucket. I'm currently working on putting together some upgrade videos and screenshots, so check back; hopefully I will have them done by the end of the week.

Here is some interesting information about vSphere and what it supports. Keep in mind these are just some notes I jotted down:

ESX 4 Hosts (vSphere Host)
256 VMs per host
64 cores per host
512 GB RAM per host

vSphere VMs (Hardware Version 7)
8 vCPUs
256 GB RAM
VMDirectPath I/O
Hot plug support (CPUs and memory)
ESX 2.x and 3.x VM support
Paravirtual SCSI adapter (sketch below)
MSCS 2008
Persistent reservations in the VMkernel
LSI Logic SAS (virtual SAS controller)
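
As a quick illustration, hardware version 7 and the paravirtual SCSI adapter boil down to a couple of .vmx entries. This is only a sketch based on the keys I've seen, so your build may vary:

virtualHW.version = "7"
# attach the VM's disks to a paravirtual SCSI controller
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"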

Networking Improvements
New iSCSI stack with 10-30% improved performance
TCP/IP 2 Support (based on FreeBSD 6.1 / IPv6 / locking and threading capabilities)
VMXNET3 (sketch below)
MSI/MSI-X
Receive Side Scaling
VLAN offloading
VMware DirectPath I/O
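
For what it's worth, selecting the new VMXNET3 adapter for a NIC is just a different virtualDev value in the .vmx, mirroring the e1000 entries in my nested-ESX config later in this post (a sketch; the NIC stanza otherwise looks like any other):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"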

Storage Improvements
SCSI-3 Compliant
VMFS still SCSI-2
Target Port Group Support (TPGS)
Asymmetric Logical Unit Access (ALUA)
Pluggable Storage Architecture (PSA); command sketch below
Updated iSCSI stack
Native SATA
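
With PSA in the picture you can also see which multipathing plugin and path selection policy own each LUN from the service console. In the ESX 4 builds I've worked with, something like the following does it; treat it as a sketch, since the esxcli namespace changed in later releases:

# list each device claimed by the Native Multipathing Plugin (NMP),
# including its array type plugin and path selection policy
esxcli nmp device list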

Service Console
64-bit, 2.6 based Linux kernel compatible with RHEL 5.2
Support for both 32-bit and 64-bit applications
Root file system stored in a VMDK
VMkernel runs and owns device drivers (64-bit only)
Address Space Layout Randomization (ASLR)
No Linux dev packages and libraries

CPU
Enhanced Intel SpeedStep
Enhanced AMD PowerNow!

Security
Trusted Platform Module (TPM)
Digitally signed and validated modules
Memory integrity techniques combined with microprocessor capabilities to protect against buffer overflows

Guided Consolidation
500 Simultaneous Physical Machines
Modular plug-in can be installed on a different machine

Converter

Physical / Virtual / 3rd party
Server 2008 Support
Convert Hyper-V machines to VMs

Update Manager

ESX / ESXi and Virtual Appliance Upgrades
Upgrade Virtual Hardware
VMware Tools
Baseline Groups

vCenter Upgrade Steps

No SQL 2000 Support
2.x & 3.x Upgrade Path
Upgrade vCenter
Upgrade Update Manager
Use Upgrade Manager to Upgrade Hosts
Upgrade VMware Tools, then the virtual hardware

vSphere Host Update Utility
3.x to 4.x
Doesn't upgrade VMFS datastores or VMs
Installs with vSphere client
Supports rollback for ESX only
Can be used to install patch releases to standalone hosts
Copies script and ISO to the ESX host, reboots, and installs

Running VMware ESX 4 RC in a VMware Workstation 6.5.2 VM

I just set up another quick VI4 lab on my laptop for the purposes of capturing screen shots and testing some things out. I was worried because I was not able to start VMs in this lab using ESX 4 Beta 2, but everything is fine again! Here is a screen shot of a Windows 2003 VM running inside an ESX 4 RC VM, which is itself running inside Workstation 6.5.2 on an Ubuntu machine.

vm-in-vm

Click on the image for a full-size view.

My VMX settings came from a post on VMTN from when I was trying to get ESX 3.0.x to run on WS 6.0.  Actually, XTraVirt came up with the solution originally.

Well, my VMX has not changed MUCH since then. I only added some parameters for sharing SCSI disks so I don’t need an iSCSI server. I found THAT information on Duncan’s Blog.

##################################################
# Start DAC Customization

guestOS = "other-64"

# restrict_backdoor is the setting that lets VMs power on inside the nested ESX host
monitor_control.restrict_backdoor = "true"
# monitor_control.virtual_exec = "hardware"
# force hardware-assisted virtualization for the nested hypervisor
monitor.virtual_exec = "hardware"
monitor_control.vt32 = "true"

# REQUIRED FOR USING NTFS DRIVES WITH LINUX HOSTS
mainMem.useNamedFile=FALSE

# For SCSI disk sharing (disable locking and caching so disks can be shared safely)
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

bios.bootDelay = "5000"

ethernet0.present = "TRUE"
ethernet0.connectionType = "custom"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.vnet = "/dev/vmnet3"
ethernet0.virtualDev = "e1000"
ethernet1.present = "TRUE"
ethernet1.connectionType = "custom"
ethernet1.vnet = "/dev/vmnet3"
ethernet1.virtualDev = "e1000"
ethernet1.wakeOnPcktRcv = "FALSE"
ethernet2.present = "TRUE"
ethernet2.connectionType = "custom"
ethernet2.vnet = "/dev/vmnet3"
ethernet2.virtualDev = "e1000"
ethernet2.wakeOnPcktRcv = "FALSE"
ethernet3.present = "TRUE"
ethernet3.connectionType = "custom"
ethernet3.vnet = "/dev/vmnet3"
ethernet3.virtualDev = "e1000"
ethernet3.wakeOnPcktRcv = "FALSE"
ethernet4.present = "TRUE"
ethernet4.connectionType = "custom"
ethernet4.vnet = "/dev/vmnet3"
ethernet4.virtualDev = "e1000"
ethernet4.wakeOnPcktRcv = "FALSE"
ethernet5.present = "TRUE"
ethernet5.connectionType = "custom"
ethernet5.vnet = "/dev/vmnet3"
ethernet5.virtualDev = "e1000"
ethernet5.wakeOnPcktRcv = "FALSE"

ethernet0.addressType = "generated"
ethernet1.addressType = "generated"
ethernet2.addressType = "generated"
ethernet3.addressType = "generated"
ethernet4.addressType = "generated"
ethernet5.addressType = "generated"
# End DAC Customization
#####################################################

My next posts will involve installing ESX 4 in text mode and some very interesting findings during that install….

VMware vSphere 4 under the covers – First Look

On Tuesday, April 21st, VMware announced it will be releasing vSphere 4 by the end of the second quarter. This is exciting news for the many looking to take advantage of some of the new features available with this release. In this post I'm going to walk through a handful of these new features. There are over 100 new features in vSphere 4, and this post doesn't come close to covering them all, but I will be touching on some really exciting ones, with more to come in my next few posts.

Let’s start with the new home screen. It’s a handy way to navigate all the configuration areas of vSphere.

vsphere_home_screen1

Next, let's take a look at the new “Hardware Status” screen. In this screen shot there is limited hardware information, which is due to my lab hardware; on actual server-grade hardware you can view just about anything you want about your hardware. If you remember from my other vSphere post, you can also trigger alarms based on most of these sensors.

vsphere_hardware_status1

Next, let's take a look at the changes made to the Host Summary screen. Notice the arrow pointing to the datastore with the alert. One of the alarms we can set is based on storage usage, so we no longer have to manually verify that free storage is within acceptable levels.
vsphere_host_info

With ESX 3.5 we had to manually make sure we didn't overuse resources in order to maintain an N+1 configuration for HA. I wrote an article on VMware HA and how to size your environment to maintain N+1. Now, when you determine that you need to keep, say, 37.5% overhead available on each server, you can specify that in your HA configuration rather than manually making sure you don't exceed it.

vsphere_cluster_overview
Take a look at the arrow in the above screen shot; it's showing the reserved capacity of the hosts that ensures the proper failover capacity. The screen shot below shows the HA setting to configure this functionality.

vsphere_ha_screen

There are some new and interesting features surrounding networking. One new feature is the distributed switch. The distributed switch is a really exciting improvement, as it comes with support for private VLANs, Network vMotion, and of course support for 3rd-party switches such as the Cisco Nexus 1000V.

vsphere_distributed_switch

There are many other enhancements to networking, such as the new VMXNET Generation 3 driver and other features like IP pools, similar to what's available in Lab Manager.

vsphere_ip_pools

vsphere_ip_pools_2

vsphere_vm_options

In my last post on vSphere I showed the configuration items available as part of host profiles. Below is a screen shot showing host compliance, similar to that of Update Manager.

vsphere_profile_compliance1

One exciting change to virtual machines is the ability to hot add memory and vCPUs, as sketched after the screenshot below. This gives even more flexibility to make changes to servers without having to schedule downtime.

vsphere_mem_cpu_hotplug
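
Under the covers, enabling this in the GUI appears to drop two flags into the VM's .vmx, shown here as a sketch; keep in mind the guest OS must also support hot add for it to work:

# allow memory and virtual CPU hot add (guest OS support required)
mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"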
The new resource view is very handy, giving you a snapshot of what is really going on with your VM. It gives you insight into how much memory is private to the VM and how much is shared with other VMs in the environment, which lets you see how much memory is being “de-duplicated.” It also allows you to see the host overhead, memory being swapped, and whether ballooning is taking place.

vsphere_vm_resource_allocation

VMware finally moved away from the annoying license server and has now integrated licensing into vCenter itself. This should make a lot of users very happy to not have to deal with managing those license files any longer.

vsphere_license_server

Update Manager now has built-in support for a shared repository. So if you have a large deployment, you can easily manage your update repositories across multiple Update Manager servers.

vsphere_update_manager

Have you had those annoying issues where some of the services that vCenter depends on would crash or stop running? Well, it's easy to keep track of your vCenter service status now with the new vCenter Service Status information.

vsphere_service_status

One thing that I think is still lacking is scheduled tasks. They have added a few new options that can be scheduled, but I would have expected some additional improvements in this area.

vsphere_schedule_tasks

I hope you enjoyed this preview, and be sure to check back, as I will be covering some additional new features including “Fault Tolerance”. Over the next few weeks and months I will be putting together more overview posts and best practice articles, as well as video tutorials.

VMware vSphere 4 (ESX 4.0, vCenter 4.0) Alarms and Host Profiles

Some are speculating that next Tuesday VMware is going to announce the release of VMware vSphere, which is essentially Virtual Infrastructure 4.0 and includes ESX 4.0. I can't say what VMware is going to do, but over the next few weeks I will be publishing information on vSphere as well as some instructional videos. For now I have some teasers for you.

Here is a screen shot of the alarms available in vSphere. As you can see, they have expanded the alarm feature from what was available in VI3.

vsphere_alarms

I'm sure most of you have heard of the new host profiles. If you haven't had the fortune of checking out this cool new feature, here are some screenshots to show you what options are available to you as part of a host profile. If you are not much for scripting and just can't stand those pesky automated build scripts, then you will love this feature. It gives you the ability to configure just about every aspect of an ESX host without having to deal with any scripting.

vsphere_host_profiles_1

vsphere_host_profiles_2

vsphere_host_profiles_3

vsphere_host_profiles_4

vsphere_host_profiles_5

vsphere_host_profiles_6

vsphere_host_profiles_7

As you can see in this screenshot all these settings are very easy to set via the GUI.

vsphere_host_profiles_8

So stay tuned, as there is much more to come. I'm currently working on videos covering installing and configuring vSphere from the ground up, and I plan on getting into all of the new features available in this release.

VMware Virtual Center – Physical or Virtual?

Over the years there has been some controversy over this topic: should Virtual Center (vCenter) be physical or virtual? There is the argument that it should be physical to ensure consistent management of the virtual environment. Of course, there is also the fact that Virtual Center requires a good amount of resources to handle the logging and performance information.

I'm a big proponent of virtualizing Virtual Center. With the hardware available today there is no reason not to. Even in large environments that really tax the Virtual Center server, you can just throw more resources at it.

Many companies are virtualizing mission-critical applications to leverage VMware HA to protect them. How is Virtual Center any different? So what do you do if Virtual Center crashes? How do you find and restart Virtual Center when DRS is enabled on the cluster?

You have a few options here.

  1. Override the DRS setting for the Virtual Center VM and set it to manual. Now you will always know where your Virtual Center server is if you need to resolve issues with it.
  2. Utilize PowerShell to track the location of your virtual machines. I wrote an article that included a simple script to do this, which I will include on our downloads page for easy access (a quick shell alternative is sketched after this list).
  3. Run an isolated 2-node ESX cluster for infrastructure machines.
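
On that second option: if vCenter itself is down, you can always fall back to asking each host directly from any machine with SSH access to the service consoles. A minimal sketch (the hostnames and grep pattern are examples only, and it assumes root SSH is enabled, which it is not by default):

for host in esx01 esx02 esx03
do
echo "== $host =="
# vmware-cmd -l lists the .vmx paths of all VMs registered on that host
ssh root@$host "vmware-cmd -l" | grep -i vcenter
done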

So my last option warrants a little explaining. Why would you want to run a dedicated 2-node cluster just for infrastructure VMs? The real question is, why wouldn't you? Think about it: Virtual Center is a small part of the equation. VC and your ESX hosts depend on DNS, NTP, AD, and other services. What happens if you lose DNS? You lose your ability to manage your ESX hosts through VC if you follow best practice and add them by FQDN. Now, if AD goes down you have much larger issues, but if your AD domain controllers are virtual and you somehow lose them both, that's a problem, and one that could affect your ability to access Virtual Center. So why not build an isolated two-node cluster that houses your infrastructure servers? You'll always know where they will be, you can use affinity rules to keep servers that back each other up on separate hosts, and you can always have a cold spare available for Virtual Center.

Obviously this is not a good option for small environments, but if you have 10, 30, 40, 80, or 100 ESX hosts and upwards of a few hundred VMs, I believe this is not only a great design solution but a much-needed one. If you are managing that many ESX hosts, it's important to know for certain where the essential infrastructure virtual machines reside, since they affect a large part of your environment, if not all of it.

Business Continuity and Disaster Recovery with Virtualization

In previous years, Business Continuity and Disaster Recovery have been big buzzwords. Companies small and large vowed to launch initiatives to implement either or both in their IT strategies. My question is: what happened? Why is it that I rarely see organizations that have implemented, or even have a plan to implement, Disaster Recovery?

Is it a lack of understanding? Is it that most companies believe it is too expensive or complicated to implement? Well, it doesn't have to be either. Most companies that are undergoing virtualization initiatives already have half, if not more, of what they need to implement Disaster Recovery. The simple fact is, if you already have at least two data centers and are virtualizing, you are a prime candidate. Here are some common questions and my answers on this subject:

1.) Do I need to utilize SAN replication to implement Disaster Recovery in a virtualized environment?

No! There are other options for achieving Disaster Recovery without SAN replication. If you are running VMware, you can utilize some of what you already have: VMware VCB in conjunction with VMware Converter can be used to implement Disaster Recovery. Now, this wouldn't be as elegant as SAN replication, but you could implement scheduled V2Vs of your virtual machines from one site to another, and it's a very simple solution to implement.
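
As a rough illustration of the VCB half of that approach, a scheduled full-VM export might look something like this from the ESX service console (vcbMounter also ships with the Windows VCB proxy; the server name, credentials, VM name, and export path below are all placeholders, so treat this as a sketch rather than a tested recipe):

# export a full copy of the VM to a staging directory, which you would
# then ship to the DR site and import with VMware Converter
vcbMounter -h vcenter.example.com -u backupuser -p 'S3cret!' \
  -a name:MYVM01 -r /vcb-exports/MYVM01 -t fullvm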

What about the hardware, right? Where do we get the additional hardware? The answer is simple: reuse what you already have. Take those old servers you just freed up and put them to good use. Beef them up! Need more RAM? Tear RAM out of some servers and add it to others, and do the same with CPUs, to build a smaller number of more powerful servers that you can use for DR. Granted, you may need more of the reused servers to host all the VMs needed, but at the end of the day you would have a disaster recovery plan.

2.) What if I can’t do SAN replication but want synchronous and asynchronous replication?

This can still be achieved using software-based replication in your virtual machines. Software like NSI Doubletake and Replistor provides this functionality at a relatively low cost. With virtualization you can cut costs even more. With physical servers you traditionally needed a 1-to-1 mapping for replication, which required a license for each host. With virtualization you can take a many-to-one approach, cutting down on the licenses you need to replicate your data.

With this approach I would still use VCB or VMware Converter to make weekly copies of your virtual machine OS drives. You can then utilize one of the mentioned applications (Doubletake or Replistor) for synchronous replication of your data volumes. You can achieve this and save licenses by installing, say, Doubletake on each of the source systems. Then you would create a virtual machine at the DR site, add a drive to it for each of the source systems' data volumes, and replicate each source's data to a different data volume on the destination VM. If you ever need to fail over, just dismount the volumes from the destination VM and attach each one to its respective VM that was created through the use of VCB or VMware Converter.

3.) These methods are great but what would it take to bring an environment back up using them?

That's rather hard to say, because it depends on the size of your environment and how many VMs you are relocating to your DR site. If your environment is large and you have specific SLAs to adhere to regarding RTOs (Recovery Time Objectives) and RPOs (Recovery Point Objectives), then you should consider SAN-to-SAN replication, utilizing something like VMware SRM, which does an outstanding job of handling this. VMware SRM also allows you to run disaster recovery simulations to determine the effectiveness of your DR strategy and whether you are meeting your SLAs.

If you are doing DR on the cheap, the real answer to this question is that you will be able to recover your systems a heck of a lot quicker than if you had to restore from backups or rebuild your systems.

4.) This is great but where do we begin?

Don't know where to begin? The answer is easy: start small and grow into it. Find at least two servers that you can reuse, beef 'em up, determine a configuration for them, and deploy ESX to them. You need to have some infrastructure in place at your DR location to make DR work, so that is a good place to start. You need to add the following services at your DR location:

  • Active Directory Servers
  • DNS Servers
  • NTP Servers
  • Virtual Center Server

It may be necessary to deploy additional servers for your specific environment, but I think you get the idea.

Next, pick a few development or test machines that you can replicate to the DR site. Develop a plan, schedule downtime, and perform a test failover to the remote site. Once you have worked out the kinks and have a written DR plan, determine the first phase of servers to incorporate into your DR site. Generally at this point you would want to pick some of your most valuable servers to ensure they are protected.

You can then break all the servers that need to be replicated into phases, determine the host requirements at the DR site, and develop a plan for each phase of your DR implementation. It would be a good idea to have a remote replication VM for every 20 or so source VMs. This really depends on the data change rate of your servers, but 20 is a good starting point.

This article is obviously not all-inclusive and is very high level, but hopefully it inspires some of you to start developing a DR strategy and at least start testing some of these solutions in your environment, because data is a terrible thing to waste.

ESX local partitioning when booting from SAN

A few days ago I wrote a blog post about ESX local partitions. A good question was raised after I wrote the article concerning ESX hosts that boot from SAN. In my last article I asked the question, “Should the partition scheme be standardized, even across different drive sizes?” My question today is: should that standard also be used when booting from SAN? I've heard the argument that when booting from SAN you should make the partitions smaller to conserve space. Anyone have an opinion on this? I feel it should conform to the standard. We determine the partition sizes for a reason, based on need, and that same need still exists regardless of what medium you are booting from.

My recommendation would be to develop a standard partition scheme and utilize it across all drive sizes and mediums. You can find my recommended partition scheme in my previous post, mentioned above.