Over the years there has been some controversy over this topic: should Virtual Center (vCenter) be physical or virtual? There is the argument that it should be physical to ensure consistent management of the virtual environment. There is also the fact that Virtual Center requires a good amount of resources to handle logging and performance data.
I’m a big proponent of virtualizing Virtual Center. With the hardware available today there is no reason not to. Even in large environments that really tax the Virtual Center server, you can simply throw more resources at it.
Many companies are virtualizing mission-critical applications to leverage VMware HA to protect them. How is Virtual Center any different? So what do you do if Virtual Center crashes? How do you find and restart Virtual Center when DRS is enabled on the cluster?
You have a few options here.
Override the DRS setting for the Virtual Center VM and set it to manual. That way you will always know where your Virtual Center server is if you need to resolve issues with it.
Utilize PowerShell to track the location of your virtual machines. I wrote an article that included a simple script to do this, which I will include on our downloads page for easy access.
Run an isolated two-node ESX cluster for infrastructure machines.
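The PowerShell tracking option above can be sketched with VMware’s PowerCLI snap-in. This is a minimal sketch, not the script from my earlier article: the vCenter name and output path are placeholders, while `Connect-VIServer`, `Get-VM`, and the `VMHost` property are standard PowerCLI.

```powershell
# Connect to vCenter (server name is a placeholder)
Connect-VIServer -Server vcenter.example.com

# Record which ESX host each VM is currently running on
Get-VM | Select-Object Name, VMHost |
    Export-Csv -Path C:\vm-locations.csv -NoTypeInformation
```

Schedule something like this to run regularly; if Virtual Center dies, the last CSV tells you which host was running the VC virtual machine, so you can connect to that host directly and power it back on.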
So my last option warrants a little explaining. Why would you want to run a dedicated two-node cluster just for infrastructure VMs? The real question is why wouldn’t you? Think about it. Virtual Center is a small part of the equation. VC and your ESX hosts depend on DNS, NTP, AD, and other services. What happens if you lose DNS? You lose your ability to manage your ESX hosts through VC if you follow best practice and add them by FQDN. If AD goes down you have much larger issues, but if your AD domain controllers are virtual and you somehow lose them both, that’s a problem, and one that could affect your ability to access Virtual Center. So why not build an isolated two-node cluster that houses your infrastructure servers? You’ll always know where they will be, you can use DRS anti-affinity rules to keep servers that back each other up on separate hosts, and you can always have a cold spare available for Virtual Center.
Obviously this is not a good option for small environments, but if you have 10, 30, 40, 80, or 100 ESX hosts and upwards of a few hundred VMs, I believe this is not only a great design solution but a much-needed one. If you are managing that many ESX hosts, it’s important to know for sure where the essential infrastructure virtual machines reside, since they affect a large part of your environment, if not all of it.
How can you doubt SaaS because your free email is down? Free is free. You get what you pay for. I read that Google has offered credits to paying Gmail customers, and that is the proper thing to do. But how can executives whine when their Gmail/Hotmail/Yahoo is offline if they don’t pay for it? Why are they not paying for a business email service? I have worked for a few companies that used “outsourced” paid email services, the REAL model for SaaS. There, scheduled outages happened during hours when I was sleeping.
The fact is that SaaS is here to stay, and it is increasing in value and popularity. Yes, Google is leading the way with their free apps. SaaS is a piece of cloud computing. Check out this video explaining Cloud Computing in Plain English:
A few days ago I wrote a blog about ESX local partitions. A good question was raised after I wrote the article concerning ESX hosts that boot from SAN. In my last article I asked the question, “Should the partition scheme be standardized, even across different drive sizes?” My question today is: should that standard also be used when booting from SAN? I’ve heard the argument that when booting from SAN you should make the partitions smaller to conserve space. Anyone have an opinion on this? I feel it should conform to the standard. We determine the partition sizes for a reason, based on need, and that same need still exists regardless of what medium you are booting from.
My recommendation would be to develop a standard partition scheme and utilize it across all drive sizes and mediums. You can find my recommended partition scheme in my previous post mentioned above.
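To make the idea concrete, here is a purely illustrative fixed layout for an ESX 3.x service console (these sizes are my example, not the recommended scheme from the earlier post):

```
/boot     250 MB    # boot files
swap     1600 MB    # service console swap
/           5 GB    # root
/var        4 GB    # logs; sized for need, not for disk size
/tmp        2 GB
/opt        2 GB
vmkcore   100 MB    # core dump partition
```

The point is that each size is driven by what the service console needs, so the same table applies whether the boot device is a local disk or a SAN LUN.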
Sun VirtualBox 2.2 Adds Open Virtualization Format Support
Sun Microsystems has released VirtualBox 2.2, an update to the company’s free and open source desktop virtualization solution. The new release includes a number of performance and feature enhancements, as well as support for the Open Virtualization Format (OVF) specification.
Citrix has just opened the beta program for the next version of XenServer
Citrix has just opened the beta program for the next version of XenServer, which is, and will remain, free, as everybody knows by now. The new product is codenamed Project George (but the final name will be XenServer 5.1 according to our sources), and it features some interesting capabilities:
HyTrust is the latest US startup to enter the virtualization market, specifically the access control and configuration management space, where Catbird, Configuresoft, ManageIQ, Veeam, and Tripwire are already busy.
I have been a big fan of VMware products for a very long time, since the release of VMware Workstation 1.0 actually. I run VMware Workstation on Windows and Linux, and as of recently VMware Fusion on my MacBook. I was telling a friend of mine how much I like my new MacBook, as I have traditionally been a PC guy for many, many moons, and he asked if I was running “Parallels” on it. I had no idea what he was talking about, as I had never paid much attention to Parallels before. Well, if you have never heard of them or seen their desktop virtualization products, I highly recommend that you take a look.
Here is a link to their Workstation 4.0 Extreme demonstration. Just click the demos button and watch the video. After watching this video I think I need to buy a few more monitors and some extra video cards because I have got to try this out.
Here is their list of features, and I have to say this might just become my new favorite desktop hypervisor.
Run graphics-intensive workloads with optimal performance using dedicated system resources on a single workstation.
Parallels FastLane Architecture — Utilize a turbo-charged hypervisor engine to support the latest hardware virtualization technologies.
Direct I/O Access to Graphics & Network Cards — Take advantage of Intel VT-d technology on the Intel Xeon Processor 5500 series (Nehalem) and Tylersburg platform for full graphics and networking acceleration in a virtual environment. Supported hardware includes NVIDIA Quadro FX professional graphics cards and gigabit networking cards.
Parallels Tools with support for selected NVIDIA Quadro Graphics Cards — Extensive Windows and Linux integration support for fully optimized VMs, including native device driver support for NVIDIA Quadro graphics cards.
Adaptive Hypervisor — Load-balance CPU resources as you move between host and guest OS to optimize performance.
Support for up to 16-way SMP — Assign up to 16 virtual CPUs in a VM for truly high-end computing.
Large Memory Support — Assign up to 64GB of RAM in a VM.
Supported Primary OSs — Growing list of supported primary OSs includes Windows XP SP2 64-bit, Windows Vista SP1 64-bit, and RHEL 5.3 64-bit.
Supported Guest OSs — Growing list of supported guest OSs includes Windows Vista SP1 64-bit, Windows XP SP2 64-bit, RHEL 4.7 and 5.3 64-bit, and Fedora 10 64-bit.