The Linux guest agent has not changed much since 5.1; aside from the agent version, almost everything remains the same as in my article on executing scripts with the 5.1 Linux guest agent.
Linux Guest Agent
The Linux guest agent provides a number of benefits if you utilize it. It is a small agent that acts very similarly to the vCAC proxy agents. When it is installed you give it the name or IP address of the vCAC server, which allows it to check in with the server when it loads on a newly provisioned machine and determine whether there is anything it needs to do. If the vCAC server has work for it, it sends the instructions and the agent executes them on the local guest operating system. The guest agent comes with a number of pre-built scripts and functions, but it also allows you to execute your own scripts. Some of the features available with the Linux guest agent are:
- Disk Operations – Partition, format, and mount disks that are added to the machine.
- Execute Scripts – Run custom scripts after the machine is provisioned (a sketch follows this list).
- Network Operations – Configure settings for additional network interfaces added to the machine.
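To illustrate the script-execution feature, here is a minimal sketch of the kind of custom post-provisioning script you could hand to the agent. Everything in it (the log path, the mount point, the config management hostname) is a placeholder of my own, not something the agent provides; see the agent documentation for where scripts must be placed and how they are invoked.

#!/bin/bash
# Hypothetical post-provisioning script for the vCAC Linux guest agent.
# All paths and names below are illustrative placeholders.

LOG=/var/log/vcac-postprovision.log

{
    echo "Post-provisioning started: $(date)"

    # Example task: create an application mount point on a disk the
    # agent partitioned and formatted earlier.
    mkdir -p /opt/appdata

    # Example task: check in with a configuration management server
    # (placeholder hostname).
    echo "Would register with puppet.example.com here"

    echo "Post-provisioning finished: $(date)"
} >> "$LOG" 2>&1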
We have just launched our DailyHypervisor Forum located at http://www.dailyhypervisor.com/forum. Stop by, contribute, and be a part of our community. The DH Forum is intended to be for all things cloud. Currently we have forums created for vCAC, vCD, vCO, Cloud General, and Openstack. More forum categories will be coming based on demand. If you have a category you would like to see, shoot us a note and let us know.
Our goal is to create a common place where anyone can come to learn, get help, share ideas, or do just about anything that will help foster knowledge regarding cloud computing. Considering this very post is the announcement of our forum, you can imagine there isn’t a whole lot happening yet, so what are you waiting for? Be the first: go ask a question, post an issue, share a thought, and let’s get things rolling.
A whitepaper was posted in the VMTN communities Thursday outlining the differences between the ESX 3.x and ESX 4.x service console. It further offers resources for transitioning COS-based apps and scripts to ESXi via the vSphere Management Assistant and the vSphere CLI. Also mentioned briefly was the vSphere PowerCLI. If you are a developer or write scripts for VMware environments, also check out the Communities Developer section.
I hear it time and time again: the full ESX console is going away, and ESXi is the way to go. I know there are valid arguments for keeping ESX around, but they are few. Failing USB keys may be a valid concern, but I have not heard of it actually happening, and if it worries you, boot from SAN; you need a SAN anyway. As for hung VM processes, there are a few ways to address them in ESXi; one is sketched below.
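For example, from the ESXi Tech Support Mode console (or remotely via the vMA) you can usually deal with a hung VM using vim-cmd; a minimal sketch, assuming you know the VM's display name:

# List registered VMs and note the Vmid of the hung guest.
vim-cmd vmsvc/getallvms

# Attempt to power off that VM by its Vmid (123 is a placeholder).
vim-cmd vmsvc/power.off 123

If the VM does not respond to that, ESXi 4.x also provides esxcli vms vm kill for forcibly ending the VM's world.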
If the techie wonks at VMware are publishing articles about how to transition to ESXi, then resistance is futile…you WILL be assimilated…
Everyone at this point should be aware that the Service Console now lives in a vmdk on a VMFS partition. The Service Console vmdk must be stored on a VMFS datastore, and that datastore must be either local storage or SAN storage presented only to the one host. So I guess there will be no shared VMFS datastores housing all the Service Consoles… The next question I had about the new service console was the /boot partition: where is it, and how is the server bootstrapping? I can’t say I have totally gotten to the bottom of this yet, but I have discovered a few things. When digging into scripting installations of vSphere I first looked at the disk partitioning, which sheds a little light on the boot process. Here is what the disk partitioning portion of the script looks like:
part /boot --fstype=ext3 --size= --onfirstdisk
part storage1 --fstype=vmfs3 --size=30000 --grow --onfirstdisk
part None --fstype=vmkcore --size=100 --onfirstdisk
# Create the vmdk on the cos vmfs partition.
virtualdisk cos --size=8000 --onvmfs=storage1
# Partition the virtual disk.
part / --fstype=ext3 --size=0 --grow --onvirtualdisk=cos
part swap --fstype=swap --size=1600 --onvirtualdisk=cos
Notice the "--onfirstdisk" switch at the end of the first three partitions: /boot, a VMFS partition, and a vmkcore partition are all on the physical disk. Notice also the creation of the service console vmdk itself, "virtualdisk cos --size=8000 --onvmfs=storage1", and that the swap for the service console is located inside the vmdk. To account for this change VMware has added some new configuration options for scripting the installation of the COS. Finally, you'll notice the creation of the / and swap partitions for the COS utilizing the "--onvirtualdisk=cos" switch.
I’m still working on and discovering some of the new ways to do scripted installations with vSphere 4.0, but I thought this little tidbit would be helpful to those of you wondering how the COS ties into the whole mix. A few things about this new method are worth noting.
There is no need to specify the actual device; however, I would be very cautious with the --onfirstdisk switch if SAN LUNs are present on the server, and I would take extra precautions to ensure that no SAN LUNs are connected during installation (one way to be explicit about the target device is sketched below). There really aren't any best practices around this configuration yet, so there are a few things that I think need to be determined. For instance, if you plan to run VMs from the local VMFS, should you create multiple VMFS partitions and keep one solely for the COS? I almost think that would be beneficial just for the logical separation. I will be digging a bit deeper into this, but I would love to hear others' views on the COS, how they plan to deploy it, and why, so please comment and let me know what you think.
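As one precaution, the scripted installer's part command can also target a named device rather than whatever happens to be the first disk; a sketch, where both the size and the device identifier are placeholders you would replace with your host's actual values:

# Pin the boot partition to a specific local device instead of relying
# on --onfirstdisk; the device name below is a placeholder.
part /boot --fstype=ext3 --size=1100 --ondisk=mpx.vmhba0:C0:T0:L0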
Over the years there has been some controversy over this topic: should Virtual Center (vCenter) be physical or virtual? There is the argument that it should be physical to ensure consistent management of the virtual environment. Of course, there is also the fact that Virtual Center requires a good amount of resources to handle the logging and performance information.
I’m a big proponent of virtualizing Virtual Center. With the hardware available today there is no reason not to. Even in large environments that really tax the Virtual Center server, you can just throw more resources at it.
Many companies are virtualizing mission-critical applications to leverage VMware HA to protect them. How is Virtual Center any different? So what do you do if Virtual Center crashes? How do you find and restart Virtual Center when DRS is enabled on the cluster?
You have a few options here.
- Override the DRS setting for the Virtual Center VM and set it to manual. That way you will always know where your Virtual Center server is if you need to resolve issues with it.
- Utilize PowerShell to track the location of your virtual machines. I wrote an article that included a simple script to do this, which I will include on our downloads page for easy access (a rough equivalent is sketched after this list).
- Run an isolated two-node ESX cluster for infrastructure machines.
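The script in that article was PowerShell; as a rough sketch of the same idea in shell, you can ask each ESX host directly using the vSphere CLI's vmware-cmd from a vMA or management workstation. The host names, credentials, and VM name below are all placeholders:

#!/bin/bash
# Sketch: find which ESX host is running a given VM (e.g. Virtual Center)
# by querying each host with the vSphere CLI. All names are placeholders.

VM_NAME="vcenter01"
HOSTS="esx01.example.com esx02.example.com"

for h in $HOSTS; do
    # vmware-cmd -l lists the .vmx paths of VMs registered on the host.
    if vmware-cmd -H "$h" -U root -P 'password' -l | grep -qi "$VM_NAME"; then
        echo "$VM_NAME is registered on $h"
    fi
done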
My last option warrants a little explaining. Why would you want to run a dedicated two-node cluster just for infrastructure VMs? The real question is, why wouldn’t you? Think about it: Virtual Center is a small part of the equation. VC and your ESX hosts depend on DNS, NTP, AD, and other services. What happens if you lose DNS? You lose your ability to manage your ESX hosts through VC if you follow best practice and add them by FQDN. If AD goes down you have much larger issues, but if your AD domain controllers are virtual and you somehow lose them both, that’s a problem, and one that could affect your ability to access Virtual Center. So why not build an isolated two-node cluster that houses your infrastructure servers? You’ll always know where they will be, you can use affinity rules to keep servers that back each other up on separate hosts, and you can always have a cold spare available for Virtual Center.
Obviously this is not a good option for small environments, but if you have 10, 30, 40, 80, or 100 ESX hosts and upwards of a few hundred VMs, I believe this is not only a great design solution but a much-needed one. If you are managing that many ESX hosts, it’s important to know for certain where the essential infrastructure virtual machines reside, since they affect a large part of your environment, if not all of it.
I’ve been asked this question a lot lately: “How much memory should we assign to the service console?” My default answer is always 800MB. I have a number of reasons why I would recommend this, but the short answer is “Why not?” What do you really have to lose by assigning the service console 800MB? Nothing, but you do have a lot to gain. Even if you are not running any third-party agents there are benefits to this. One thing most people don’t realize is that even without third-party agents installed, the service console is still running agents: the vpxa agent that allows the host to communicate with vCenter, and the HA agent if you are running VMware HA. And if the host has more than 4GB of memory installed, VMware recommends increasing the service console RAM to 512MB.
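You can see those agents for yourself from the COS; a quick check (process names vary a bit by version, aam being the Legato-based HA agent):

# Look for the vCenter (vpxa) and VMware HA (aam) agent processes.
ps auxww | grep -E 'vpxa|aam' | grep -v grep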
Considering all this, and that most systems today have 16GB of memory or a lot more, I just don’t understand why anyone would leave the service console at the 272MB default. When deploying a new server, always create a swap partition of 1600MB, which is double the maximum amount of service console memory. This will at least allow you to increase the service console memory later without having to redeploy your host (the relevant kickstart line is sketched below). Having an easy option when you call tech support with a problem and they tell you to increase the memory to 800MB is always a great idea; I’ve seen a large number of users with HA issues call tech support, and the first thing they are told is to increase the SC memory to 800MB. So before you deploy your next ESX server, take the service console memory into consideration, and at least create the 1600MB swap partition so you can easily bump the memory up to the maximum of 800MB.
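For scripted builds, that sizing is a one-line entry in the kickstart file, using the same part syntax shown in the vSphere 4 partitioning script earlier (device targeting omitted here; add it per your environment):

# Size the COS swap at 1600MB, double the 800MB service console maximum,
# so SC memory can later be raised without redeploying the host.
part swap --fstype=swap --size=1600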