Questioning SaaS

I was torn on whether or not to post this rant, but then I read a post that made my head spin….

First, there was the “Great Gmail Outage of February 2009“. There are constant Twitter outages as it grows in popularity and the servers struggle to keep up. Just last week, Yahoo Mail and Hotmail users were suffering through outages. I read on one site “Although the timing of the incident means that UK customers are unlikely to have been affected, the news will add to those doubts some users have over the software-as-a-service model.” This is the post that nudged me into posting this rant. I have had a few Hotmail accounts since 1998 and have had occasional access issues through the years, before I even knew what SaaS meant. My question is this: So What?!?

How can you doubt SaaS because your free email is down? Free is free. You get what you pay for. I read that Google has offered credits to its paying Gmail customers, and that is the proper thing to do. But how can executives whine because their Gmail/Hotmail/Yahoo is offline when they don't pay for it? Why are they not paying for a business email service? I have worked for a few companies that have used "outsourced" paid email services – the REAL model for SaaS – and with those, scheduled outages happened during hours when I was asleep.

The fact is that SaaS is here to stay, and it is increasing in value and popularity. Yes, Google is leading the way with its free apps. SaaS is one piece of cloud computing. Check out this video explaining Cloud Computing in Plain English:

ESX local partitioning when booting from SAN

A few days ago I wrote a blog post about ESX local partitions. After I wrote the article, a good question was raised concerning ESX hosts that boot from SAN. In my last article I asked the question, "Should the partition scheme be standardized, even across different drive sizes?" My question today is: should that standard also be used when booting from SAN? I've heard the argument that when booting from SAN you should make the partitions smaller to conserve space. Anyone have an opinion on this? I feel it should conform to the standard. We determine the partition sizes for a reason, based on need, and that same need still exists regardless of what medium you are booting from.

My recommendation would be to develop a standard partition scheme and utilize it across all drive sizes and boot mediums. You can find my recommended partition scheme in the previous post mentioned above.

This week in virtualization news – Volume 1

Free ESXi 3.5 Update 4 available. Get it here.

VMware to make big announcement on April 21st

Many are speculating that VMware will be releasing the next version of Virtual Infrastructure, vSphere (VI4). I, however, have my doubts. I guess we'll just have to wait and see.

Read about it here

Sun VirtualBox 2.2 Adds Open Virtualization Format Support

Sun Microsystems has released VirtualBox 2.2, an update to the company’s free and open source desktop virtualization solution. The new release includes a number of performance and feature enhancements, as well as support for the Open Virtualization Format (OVF) specification.

Read More Here

Citrix has just opened the beta program for the next version of XenServer

Citrix has just opened the beta program for the next version of XenServer, which is and will remain free, as everybody knows by now. The new product is codenamed Project George (but the final name will be XenServer 5.1 according to our sources), and it features some interesting capabilities:

Read More Here

HyTrust to enter configuration management sector

HyTrust is the latest US startup to enter the virtualization market, specifically the access control and configuration management space where Catbird, Configuresoft, ManageIQ, Veeam, and Tripwire are already busy.

Read more here

Parallels Workstation 4.0 Extreme

I have been a big fan of VMware products for a very long time, since the release of VMware Workstation 1.0 actually. I run VMware Workstation on Windows and on Linux, and recently VMware Fusion on my MacBook. I was telling a friend of mine how much I like my new MacBook, as I have traditionally been a PC guy for many, many moons, and he asked if I was running "Parallels" on it. I had no idea what he was talking about, as I had never paid much attention to Parallels before. Well, if you have never heard of them or never seen their desktop virtualization products, I highly recommend that you take a look.

Here is a link to their Workstation 4.0 Extreme demonstration. Just click the demos button and watch the video.   After watching this video I think I need to buy a few more monitors and some extra video cards because I have got to try this out.

Here is their list of features, and I have to say, this might just become my new favorite desktop hypervisor.

Run graphics-intensive workloads with optimal performance using dedicated system resources on a single workstation.

  • Parallels FastLane Architecture — Utilize a turbo-charged hypervisor engine to support the latest hardware virtualization technologies.
  • Direct I/O Access to Graphic & Network Cards — Take advantage of Intel VT-d technology on the Intel Xeon Processor 5500 series (Nehalem) and Tylesburg platform for full visualization and networking acceleration in a virtual environment. Supported hardware includes NVIDIA Quadro FX professional graphics card and gigabit networking cards.
  • Parallels Tools with support for selected NVIDIA Quadro Graphics Cards — Extensive Windows and Linux integration support for fully-optimized VMs, including native device driver support for NVIDIA Quadro graphic cards.
  • Adaptive Hypervisor — Load-balance CPU resources as you move between host and guest OS to optimize performance.
  • Support for up to 16-way SMP — Assign up to 16 virtual CPUs in a VM for truly high-end computing.
  • Large Memory Support — Assign up to 64GB of RAM in a VM.
  • Supported Primary OSs — Growing list of supported primary OSs include Windows XP SP2 64-bit, Windows Vista SP1 64-bit and RHEL 5.3 64-bit.
  • Supported Guest OSs — Growing list of supported guest OSs include Windows Vista SP1 64-bit, Windows XP SP2 64-bit, RHEL 4.7 and 5.3 64-bit and Fedora 10 64-bit.
  • Supports Virtual Disk sizes up to 2TB.
  • Up to 16 Virtual Network Adapters per VM.

ESX automated deployment email completion notification

How would you like to kick off your ESX installation, then go have some coffee, go for a jog, or just hang out by the water cooler until it is finished, without worrying that you're wasting time while the server sits there, done and waiting for you? Well, you can with this ESX email script. Incorporating this script as part of your ESX automated deployment script allows you to configure your server to email you once the post-installation configuration is finished.

So what do you need to do? It's simple: you can get the mail_notify script that I found on yellow-bricks.com from our downloads page. Once you have the script, you will need to get it onto your server along with the MIME::Lite module's Lite.pm file, which you can download here. Once you download and extract the package, you will find the Lite.pm file under the /lib/MIME/ folder.

Then take the Lite.pm file and the mail_notify.pl file and tar them together for easy retrieval (a quick sketch of that step follows). Upload the resulting mail_notify.tar file to your web server, and then include the snippet further below in your automated deployment script.
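The tar step itself is trivial; here is a minimal sketch, run on whatever machine you staged the two files on (the web server path shown is only an example):

# Bundle the Perl module and the notification script into one archive
tar cvf mail_notify.tar Lite.pm mail_notify.pl
# Copy the archive somewhere your web server can serve it (path is an example)
cp mail_notify.tar /var/www/html/path/

With mail_notify.tar in place on the web server, the deployment-script snippet is: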

##### Setting up Mail Notification ########
echo Setting up mail notification
echo Setting up mail notification >> /var/log/post_install.log

cd /tmp
lwp-download http://[server ip]/path/mail_notify.tar
tar xvf mail_notify.tar
mkdir /usr/lib/perl5/5.8.0/MIME
mv Lite.pm /usr/lib/perl5/5.8.0/MIME/

##### Move the files to where they belong #######
mv mail_notify.pl /usr/local/bin/
chmod +x /usr/local/bin/mail_notify.pl

####### Let’s send an email that the install is finished #####
/usr/local/bin/mail_notify.pl -t youremail@yourdomain.com -s "Server installation complete" -a /var/log/post_install.log -m "Server installation complete. Please review the attached log file to verify your server installed correctly." -r [your smtp server]

Optionally, you could set the SMTP server inside the mail_notify.pl script and not have to specify it when sending a mail message.

If you include this at the end of the post-installation portion of your script, but before the EOF line, you will get a nice email notification informing you that your installation has finished, with the post_install.log file attached.

Network configuration for automated ESX deployment

I have been asked this question a few times, so I thought it would be wise to post an article on it. When deploying an automated build script with the kickstart and/or installation files located on HTTP, FTP, or NFS, there are network configuration dependencies that you need to be aware of.

The ESX installer is a modified version of Anaconda, which is the same installer used for Red Hat and a few other Linux variants. Anaconda is what allows for the kickstart portion of the automated build script. Anaconda itself has some limitations as far as what it supports.

Anaconda does not support 802.1Q VLAN tagging. If you plan on tagging the service console network traffic, this will affect your kickstart installation. The Anaconda installer will not tag the traffic with the VLAN ID and therefore will not be able to complete the installation. You have a few options for handling this:

  1. Don't have the networking folks tag the VLAN until after the install has finished. However, this can cause problems if your post-installation script needs to grab some files from across the network, so be aware of what you are doing during your post-installation.
  2. Use a dedicated deployment network. If you use this option, take a look at my ESX 3.x Deployment script #2 located on our download page.
  3. Don't tag the service console traffic. If you share vSwitch0 between the vmkernel (vMotion) interface and the service console, only tag the vmkernel traffic (see the sketch after this list). This still allows for isolation of the traffic. Have your network guys set the service console VLAN as the native (untagged) VLAN.
  4. Create a custom installation CD with all the necessary files located on the CD.
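To illustrate option 3, here is a rough sketch of what the relevant commands might look like on an ESX 3.x host, or in the post-installation portion of your kickstart script. The port group names and the VLAN ID below are examples only; adjust them to match your environment.

##### Tag only the vmkernel traffic, leave the service console untagged #####
# VLAN ID 105 is an example - use whatever your network team assigned
esxcfg-vswitch -p "VMkernel" -v 105 vSwitch0
# VLAN 0 means no tagging for the Service Console port group
esxcfg-vswitch -p "Service Console" -v 0 vSwitch0
# Verify the vSwitch, port group, and VLAN layout
esxcfg-vswitch -l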

ESX local disk partitioning

I had a conversation with some colleagues of mine about ESX local disk partitioning and some interesting questions were raised.

How many are creating local vmfs storage on their ESX servers?
How many actually use that local vmfs storage?

Typically it is frowned upon to store VMs on local VMFS because you lose the advanced features of ESX such as vMotion, DRS, and HA. So if you don't run VMs from the local VMFS, then why create it? Creating this local datastore promotes its use just by being there. If you're short on SAN space, need to deploy a VM, and can't wait for the SAN admins to present you more storage, what do you do? I'm sure more often than not you deploy to the local storage to fill the need for the VM. I'm also sure that at least 20% of the time those VMs continue to live there.

Is the answer to not utilize local VMFS storage at all? If you don't, what do you do with the leftover space? Not all servers are created equal; sometimes servers have different-sized local drives, so you have a few options. Do you create standards for your partitioning, set a partition such as / to grow, and accept varying configurations amongst your hosts? Or do you create a standard for all partition sizes and leave the rest of the space raw?

Typically this is the partition scheme I use for all deployments I do (sizes in MB):

Boot = 250 (Primary)
Swap = 1600 (Primary)
/ = Fill (Primary)
/var = 4096 (Extended)
/opt = 4096 (Extended)
/tmp = 4096 (Extended)
/home = 4096 (Extended)
vmkcore = 100 (Extended)

This configuration will create inconsistencies amongst hosts with varying drive sizes. To maintain consistency I could do something like the following and leave the rest of the space raw.

Boot = 250 (Primary)
Swap = 1600 (Primary)
/ = 8192 (Primary)
/var = 4096 (Extended)
/opt = 4096 (Extended)
/tmp = 4096 (Extended)
/home = 4096 (Extended)
vmkcore = 100 (Extended)
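
For reference, here is a rough sketch of how that second, fixed-size scheme might be expressed in the partitioning section of an ESX 3.x kickstart file. The disk device (sda) and the exact option syntax are assumptions, so verify them against your own build script before using this.

# Partitioning section of the kickstart file - sizes in MB, first local disk assumed to be sda
part /boot --fstype ext3 --size 250 --asprimary --ondisk sda
part swap --size 1600 --asprimary --ondisk sda
part / --fstype ext3 --size 8192 --asprimary --ondisk sda
part /var --fstype ext3 --size 4096 --ondisk sda
part /opt --fstype ext3 --size 4096 --ondisk sda
part /tmp --fstype ext3 --size 4096 --ondisk sda
part /home --fstype ext3 --size 4096 --ondisk sda
part None --fstype vmkcore --size 100 --ondisk sda
# Any remaining space is intentionally left raw to keep hosts consistent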

I'm a fan of utilizing all the space you have available, but others like consistency. What is your preference? Weigh in and let us know.

ESX vs. ESXi which is better?

So the question is: which is better, VMware ESX or VMware ESXi? A lot of die-hard Linux fans will always say VMware ESX because of their attachment to the service console. The service console is a great tool, and once upon a time it served its purpose. Today there are many other options available to manage your VMware ESX servers without the service console: the Remote CLI, the VI Toolkit for Windows (PowerShell), and last but not least, VIMA.
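
As a quick illustration (not an exhaustive example), here is roughly what managing a host from a workstation with the Remote CLI might look like instead of logging in to the service console. The hostname and username are placeholders, and depending on your Remote CLI version and platform the commands may or may not carry a .pl extension.

# Run from a management workstation with the VMware Remote CLI installed, not on the ESX host
# List the host's vSwitch and port group configuration (you will be prompted for the password)
vicfg-vswitch.pl --server esx01.example.com --username root -l
# List the host's physical NICs the same way
vicfg-nics.pl --server esx01.example.com --username root -l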

With these tools you can effectively create scripts to help manage your VMware ESX environment without the service console. The service console opens up additional security risks for each and every VMware ESX host you have deployed, and mitigating that risk increases the management overhead involved in maintaining and deploying these servers in your environment. VMware ESX also consumes more server resources than VMware ESXi: the service console uses CPU cycles and memory that, on VMware ESXi, you could be utilizing for virtual machines.

As far as features and functionality go, VMware ESX and VMware ESXi are equals. They both support all of the Enterprise features available as part of VI3. There are some add-on products that require the use of VMware ESX, such as Lab Manager and Stage Manager, but hopefully they will be ported to VMware ESXi as well. You can find the VMware ESX and ESXi comparison here.

A growing number of servers available from all major vendors have support for embedded VMware ESXi. If you have one of these servers, there are even greater benefits to running VMware ESXi. With these, there is no need for internal or SAN storage for your boot partitions. Eliminating internal storage is a great way to go green. The average cost of a 73GB 15K SAS drive is $400.00. Typically you would have two for redundancy, adding an average of $800.00 to the cost of each server. The estimated annual cost to run a single SAS drive is roughly $23.00, making it $46.00 per server per year. This does not include the additional cooling capacity needed for the heat produced by the drives.

If you have 40 VMware ESX servers running in your environment, you can save $32,000 in hard disk acquisition costs and $1,840 per year in energy costs, not to mention the benefits to the environment from the reduction in your carbon footprint, as well as the reduced maintenance costs. I don't have any figures on this, but there will most definitely be savings in the overall administration effort required to support VMware ESXi vs. VMware ESX. The sheer need to lock down the service console and keep it secured is a pretty demanding task.

I regularly hear "We are waiting on VMware ESXi," and when I ask why, I never hear a well-thought-out, valid answer. I would like to hear your opinion on this topic. Please leave comments with your views on VMware ESX vs. VMware ESXi; I would like to gain some greater insight into why more organizations are not making the switch.

Microsoft Enterprise Desktop Virtualization

The Infrastructure Planning and Design team has released a new guide: Microsoft Enterprise Desktop Virtualization.

This guide outlines the critical infrastructure design elements that are crucial to a successful implementation of Microsoft Enterprise Desktop Virtualization (MED-V). The reader is guided through the four-step process of designing components, layout, and connectivity in a logical, sequential order. Identification of the MED-V server instances required is presented in simple, easy-to-follow steps, helping the reader to deliver managed virtual machines to end users. Following the steps in this guide will result in a design that is sized, configured, and appropriately placed to deliver the stated business benefits, while also considering the performance, capacity, and fault tolerance of the system.

Download the guide by visiting http://www.microsoft.com/ipd and selecting “Microsoft Enterprise Desktop Virtualization” under the IPD One-click Downloads, listed on the bottom right of the page.

Infrastructure Planning and Design streamlines the planning process by:

  • Defining the technical decision flow through the planning process.
  • Listing the decisions to be made and the commonly available options and considerations.
  • Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.