Keep it simple, stupid – registering unregistered VMs

Last week my boss came to me and asked if I could write a script for a customer to register VMs after being replicated from one VI environment to another.  I agreed to take on the project and went for it.

Like everything I do these days, I decided to use PowerShell to write the script.  I have taken a liking to it, and the fact that I can run the scripts against both ESX and ESXi hosts saves me from having to re-create scripts all the time.  So I plugged away until 3am, wrote the script, and tested it inside out and sideways in my lab.  Confident in the script’s ability to register all VMs from all datastores, I went ahead and sent it off to the customer.

A few days later I was on a conference call with the customer.  They were having problems with the script.  It wasn’t registering all the VMs.  After a few hours of troubleshooting I realized that I needed to go back and recreate the problems in my lab to fix the script, but the customer didn’t have that kind of time.

A short while after getting off the call with the customer I received an email from them saying not to worry, they had gotten a shell script that worked.  Then I started to think.  I went into my lab and created a shell script that would do the job.  The shell script was 5 lines long, as opposed to the PowerShell script, which is about 40 lines.

The shell script, if anyone needs it, looks like this:

for v in `find /vmfs/volumes/ -name "*.vmx"`
do
echo "Registering $v" >> /log/registeredvms.log
vmware-cmd -s register "$v"
done
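Before pointing the loop at /vmfs/volumes on a real host, the same pattern can be dry-run against scratch files; everything below is invented for the dry run, with the vmware-cmd call swapped for a plain echo:

```shell
# Dry run of the same find-and-loop pattern on scratch files.
# Paths and VM names here are made up; on a real host you would use
# /vmfs/volumes and call vmware-cmd -s register on each .vmx found.
vols=$(mktemp -d)
mkdir -p "$vols/ds1/web01" "$vols/ds2/db01"
touch "$vols/ds1/web01/web01.vmx" "$vols/ds2/db01/db01.vmx"
for v in `find "$vols" -name "*.vmx"`
do
echo "Registering $v"
done
```

Note the loop will split on spaces in pathnames; that was true of the original script as well, so keep VM directory names space-free.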

So the short of the story is that sometimes it is best to keep it simple, stupid.  Using PowerShell for this problem was overkill, and in the end there were overlooked issues that I still can’t reproduce in my lab.  A simple shell script was all that was required, and it is what I should have decided on originally.

So in the end this is a lesson learned and hopefully it will prevent someone else from making the same mistake.

Changes to the ESX Service Console and ESX vs. ESXi…again

A whitepaper was posted in the VMTN communities Thursday outlining the differences between the ESX 3.x and ESX 4.x service console. It further offers resources for transitioning COS-based apps and scripts to ESXi via the vSphere Management Assistant and the vSphere CLI. Also mentioned briefly was the vSphere PowerCLI. If you are a developer or write scripts for VMware environments, also check out the Communities Developer section.

I hear it time and time again: the full ESX console is going away, and ESXi is the way to go. I know there are valid arguments for keeping ESX around, but they are few. Failing USB keys may be a valid argument, but I have not heard of it happening. If that is a concern, use boot from SAN. You need SAN anyway. As for hung VM processes, there are a few ways to address this in ESXi.

If the techie wonks at VMware are publishing articles about how to transition to ESXi, then resistance is futile…you WILL be assimilated…

Stevie’s Unified Event Management, My Cloud Shangri-La

If you know Steve Chambers you know he just moved to Cisco. Before that, he was with VMware and has been a pillar of the VI:OPS boards. He is now working on a document about Unified Event Management and, in the spirit of community, he is looking for comments, suggestions, etc. He called my attention to the post via Twitter as we were discussing Splunk and its capabilities for “Centralized Event Aggregation” (Steve’s term). Take a look at his post when you get a chance and make some comments. You know that I have heralded the benefits of a centralized logging server. Steve just plain gets it.

And since I mentioned Cisco, I also discovered that Cisco put out a whitepaper on their take regarding the Virtualization Blueprint for the Datacenter – their view of how virtualization will benefit your business.  The chart shows how a business’ agility will increase as we climb the lifecycle from consolidation to virtualization and then on to automation.

It doesn’t matter what you are using underneath of it all – VMware, Xen, Hyper-V – UCS, Matrix. It just matters that you have methods to provide centralized monitoring and centralized automation. Although centralized event monitoring and centralized automation are two different things, they are both necessary if you wish to properly monitor and manage your piece of the cloud. I’ve already said my piece on the need for centralized event monitoring and Steve lays out a sample blueprint.

Automation is the new big thing when it comes to the cloud. VMware saw that way back when, and they bought Dunes almost two years ago. VMware Orchestrator (VMO) was a big buzz for a little while, but great big VMware couldn’t pull off what teeny little Dunes could when it came to customizing the Orchestrator. They left it in a fairly decent state for smaller businesses with VMware Lifecycle Manager, but it was a hobbled state and didn’t scale very well. You can customize VMO, but you need to be good at the Dunes interface and have a decent knowledge of JavaScript and that kind of stuff. Even being free, it’s not for me. The standard release of VMO lets you set up a facility to request, approve, provision and archive VMs. A great start, but not quite enough.

A quick search for data center orchestration reveals Cisco at the top of the list. But there are others from Novell PlateSpin, Egenera, and DynamicOps that appear to do more. What we REALLY need is a way to orchestrate/automate the entire data center, where physical servers, VMs, storage and networking can all be provisioned, monitored and managed. Can they all be managed from a common platform? Once you have a seamless process for provisioning, managing and monitoring every component of the data center, you will see cloud computing really take off. A user (consumer/customer) that needs an application should not care whether it is deployed on a physical or virtual machine, what storage devices hold the data, or what network connects it. The user should know the basic requirements for the application, and the ORCHESTRATOR should make the decisions about all of these things. The orchestrator will take a request, ask for approval and make sure the application gets deployed without making mistakes. The orchestrator will interface with the monitoring facility and change management to make sure the application is accounted for. The orchestrator will hand off to the backup facility. The orchestrator will notify you when the application has reached end of life. That’s when we will have “Cloud Shangri-La” (my term).

What’s New with Hyper-V Server 2008 R2

As many of you know, Hyper-V Server 2008 R2 is the (soon to be released) second release of Microsoft’s free hypervisor.  This is the version of Windows Server 2008 that is a free download – no license needed.  It’s locked into a Server Core installation, capable of running only the Hyper-V role.  There are some important new advancements coming in the R2 release that will have an impact on the market (an editorial statement for now, but one that will probably prove true).

  • Hyper-V R2 will increase the CPU support from 4 to 8 sockets, and from 24 to 64 logical CPUs
  • Memory support will increase from 32GB to 1TB
  • Increased VM capacity support per host from 192 VMs to 256 VMs

There are also some pretty interesting advancements on the feature front as well.   The initial release of Hyper-V Server 2008 was a pretty stripped down version – no clustering or Quick Migration support.  All of that changes in R2.  Hyper-V Server 2008 R2 will include:

  • Host Clustering Support (application clustering is not supported as you can’t add any additional roles to Hyper-V Server).
  • Live and Quick Migration Support
  • Clustered File System

Finally, the HVCONFIG utility is being updated to include support for clustering and remote management configuration, and is being renamed to SCONFIG (the same utility that will be in all versions of Server 2008 Core).  The SCONFIG utility is a menu-driven configuration tool, so you are no longer at the mercy of the Windows Server Core prompt.

Hyper-V Server 2008 R2 can be managed from the graphical tools loaded on another Windows Server 2008 R2 system (Failover Cluster Manager/Hyper-V Manager) or from VMM 2008 R2.  There is also a Failover Cluster Manager and Hyper-V Manager toolset available for Windows 7.

Guest OS support is increasing to include all versions of Server 2008 R2, Windows 7 and a few new Linux distributions. When Hyper-V Server 2008 R2 is RTM, SUSE Linux Enterprise Server 11 and Red Hat RHEL 5 will be supported, in addition to the already-supported SUSE Linux Enterprise Server 10.  No word on whether RHEL support will include the Integration Services.

Hyper-V Server 2008 R2 RC is available now:
http://www.microsoft.com/hyper-v-server/en/us/default.aspx

The RTM will probably be coming in the next week or two.  It should RTM at the same time as Windows 7, and Server 2008 R2 is scheduled to hit retail shelves on October 22, 2009 (the same date as Windows 7).

This is big news.  These features will make Hyper-V Server all that’s needed for enterprise deployments.  There isn’t much that the full-blown version of Server 2008 R2 brings with regard to Hyper-V.  Of course, you still need to license the guest OS instances, and when you buy Enterprise or Datacenter, you can run the host hypervisor for free anyway (1 physical + 4/unlimited guests for Enterprise/Datacenter, respectively).

Discounted Exams Available at VMworld

VMware just set up some discounted certification exams at VMworld. It just gives you another reason to go!

Hi all,

VMware will be providing onsite exam services at this year’s VMworld. The exams available are the VCP on VI3, the VCP on vSphere 4 and the VCDX Enterprise Administration and Design exams. Both VCP exams can be taken for only $85, but you MUST pre-register to get this great deal! You can pre-register by visiting the Pearson VUE website at http://pearsonvue.com/vmware/vmworld/ . If you do not have the opportunity to pre-register for the exam, you can take it onsite (assuming seats are still available) for only $105, which is still a significant savings.

For more information on the VCDX exams, see my post at: http://communities.vmware.com/thread/222194

So good luck to you all on your path to becoming a VCP and I look forward to seeing you at the VMworld event!

Regards,

Jon C. Hall
Technical Certification Developer
VMware, Inc.

What are you waiting for? I already registered!

vSphere Service Console and Disk Partitioning

Everyone at this point should be aware that the Service Console is now located in a VMDK on a VMFS partition.  The Service Console VMDK must be stored on a VMFS datastore, and the datastore must be either local storage or SAN storage that is presented only to the one host.  So I guess no shared VMFS datastores to house all the Service Consoles.  The next question I had about the new Service Console was the /boot partition: where is it, and how is the server bootstrapping?  Well, I can’t say I have totally gotten to the bottom of this yet, but I have discovered a few things.  When digging into scripted installations of vSphere I first looked at the disk partitioning, which sheds a little light on the boot process.  Here is what the disk partitioning portion of the script looks like:

part /boot --fstype=ext3 --size= --onfirstdisk
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk
part None --fstype=vmkcore --size=100 --onfirstdisk
# Create the vmdk on the cos vmfs partition.
virtualdisk cos --size=8000 --onvmfs=storage1
# Partition the virtual disk.
part / --fstype=ext3 --size=0 --grow --onvirtualdisk=cos
part swap --fstype=swap --size=1600 --onvirtualdisk=cos

Notice the “onfirstdisk” switch at the end of the first three partitions: the /boot, a VMFS partition, and a vmkcore partition are all on the physical disk.  Notice the creation of the Service Console VMDK (“virtualdisk cos --size=8000 --onvmfs=storage1”) and that the swap for the Service Console is located inside that VMDK.  To account for this change, VMware has added some new configuration options for scripting the installation of the COS.  Finally, you’ll notice the creation of the / and swap partitions for the COS utilizing the “onvirtualdisk=cos” switch.

I’m still working on and discovering some of the new ways to do scripted installations with vSphere 4.0.  I thought this little tidbit would be helpful to those of you wondering how the COS ties into the whole mix.  There are a few interesting things about this new method.

There is no need to specify the actual device; however, I would be very cautious about the --onfirstdisk switch if SAN LUNs are present on the server, and I would take extra precautions to ensure that no SAN LUNs are connected.  There really are not any best practices around this configuration yet, so there are a few things that I think need to be determined.  If you were planning on running VMs from the local VMFS, should you create multiple VMFS partitions and have one solely for the COS?  I almost think it would be beneficial just for the logical separation.  Well, I will be digging a bit deeper into this, but I would love to hear others’ views on the COS and how they plan to deploy and why.  So please comment and let me know what you think.
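For what it’s worth, one way to hedge against the SAN LUN risk is to pin the partitions to a named device instead of relying on --onfirstdisk. This is only a sketch: it assumes your build accepts an anaconda-style --ondisk= switch like the classic ESX kickstart did, and the device name is a placeholder you would replace with the real local device on your host.

```
# Sketch only -- same layout as above, but pinned to an explicit device so a
# visible SAN LUN can never be claimed by accident.  "vmhba0:0:0" is a
# placeholder device name; verify the real local device on your host, and
# confirm --ondisk= is accepted by your build before relying on this.
part /boot --fstype=ext3 --size= --ondisk=vmhba0:0:0
part storage1 --fstype=vmfs3 --size=30000 --grow --ondisk=vmhba0:0:0
part None --fstype=vmkcore --size=100 --ondisk=vmhba0:0:0
```

Even with the device pinned, I would still disconnect the SAN for the install if at all possible.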

Setting up a Splunk Server to Monitor a VMware Environment

In a previous article, I compared syslog servers and decided to use Splunk. Splunk is easy to set up as a generic syslog server, but it can be a pain in the ass getting the winders machines to send to it. There is a home-brewed Java-based app in the Splunk repository of user-submitted solutions, but I have heard complaints about its stability and decided to set out to find a different way to do it.

During my search, I discovered some decent (free!) agents on SourceForge. One will send event logs to a syslog server (SNARE) and one will send text-based files to a syslog server (Epilog). The SNARE agent appears to be more stable than the Java app and does a pretty good job. So I basically came up with a free way to set up a great syslog server using Ubuntu Server, Splunk, SNARE and Epilog.
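For the ESX hosts themselves, no agent is needed at all; classic syslogd forwarding is a one-line change. This is standard syslog.conf syntax rather than anything Splunk-specific, and the hostname below is a placeholder:

```
# /etc/syslog.conf on each ESX host: forward all facilities and priorities
# to the Splunk box.  "splunk01" is a placeholder; use your server's name or IP.
*.*     @splunk01
```

Restart syslogd afterwards (service syslog restart), and remember the ESX firewall must allow the outgoing syslog traffic before anything shows up in Splunk.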

I created a “Proven Practice Guide” for VI:OPS and posted it there, but it seems to be stuck in the approval process. I usually post the doc on VI:OPS and then link to it in my blog post, following up later with a copy in our downloads area. To hurry things along, I posted it in both places:

http://www.dailyhypervisor.com/?file_id=17

http://viops.vmware.com/home/docs/DOC-1563

How-To: Disable Debug Mode in Workstation 7.0 Beta

OK… I know the wonks at VMware will frown upon this one, but someone posted a similar hack for the WS 6.5 beta, so here it is for the WS 7.0 beta. I finally got around to installing the beta code this morning and immediately saw a performance issue: VMware Workstation Beta runs in debug mode by default, and it can seriously slow down your VMs. If you are playing with vSphere and ESX/ESXi 4.0 inside a VM, it is horribly slow once you get to the VM inside of the VM. This is actually part of the testing VMware would like you to perform while using the beta.

For Linux, you will find the files in /usr/lib/vmware/bin. For Winders, they are probably somewhere in %PROGRAMS%. I usually stick to Linux for my host.

Basically, perform the following to disable debug mode. Shut down VMware first!

sudo mv /usr/lib/vmware/bin/vmware-vmx-debug /usr/lib/vmware/bin/vmware-vmx-debug.old
sudo cp /usr/lib/vmware/bin/vmware-vmx /usr/lib/vmware/bin/vmware-vmx-debug
[Screenshot: the result after renaming the files]

Now you have tricked the apploader into using the standard build. I would assume you will have similar results with Winders; just add “.exe” onto the end of the referenced file names. Easy, huh?

DISCLAIMER:

This is neither supported nor recommended by VMware. If you have any issues with the beta version and wish to post to the forums or file an SR, you MUST revert to debug mode and reproduce the error, or VMware may not help you. This is a beta TEST. VMware will want debug info to check any suspected bugs before releasing it GA.
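The revert is just the file swap run backwards. Here is the whole round trip, dry-run on scratch files so it can be tried anywhere; on a real host the directory is /usr/lib/vmware/bin and VMware must be shut down first:

```shell
# Dry run of the swap (and the revert the disclaimer demands) on scratch
# files; substitute the real /usr/lib/vmware/bin paths on an actual host.
bindir=$(mktemp -d)
echo "debug build"   > "$bindir/vmware-vmx-debug"
echo "release build" > "$bindir/vmware-vmx"
# Disable debug mode: park the debug binary, put the release build in its place.
mv "$bindir/vmware-vmx-debug" "$bindir/vmware-vmx-debug.old"
cp "$bindir/vmware-vmx" "$bindir/vmware-vmx-debug"
# Revert (required before filing an SR): restore the original debug binary.
mv "$bindir/vmware-vmx-debug.old" "$bindir/vmware-vmx-debug"
```

After the revert, vmware-vmx-debug is the original debug binary again, and the .old parking file is gone.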