If your vCenter Server 6 appliance has ever crashed, or perhaps after you performed an upgrade, you may be presented with a screen telling you that you need to run fsck on the file system. Logically, you run fsck against /dev/sda3, which is the root partition. If you find there are no errors on /dev/sda3, you might find yourself scratching your head and wondering what to do next. You could try checking /dev/sda1 and /dev/sda2. With no luck there, the head scratching becomes a bit more intense. I found myself in this situation recently, as did a few other folks, with no answer on how to get it resolved.
In vCenter 6 you will notice there are now a number of LVM partitions, and these are most likely the cause of your pain. Below are the steps you can take to resolve the issue and get your vCenter back up and running.
- First you need to get access to the filesystem. I like to do this by changing the GRUB boot parameters; I have better success this way.
- When the GRUB boot loader comes up, press the space bar to stop the automatic boot.
- Then press “p” and enter the GRUB password; if you didn’t set one, it is “vmware”.
- Next, on the second entry in the list, press “e” for edit.
- Then, on the second entry again, press “e” to edit the line, add “init=/bin/bash” to the end of it, and press Enter. The edited line should look something like the example below.
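The exact kernel line differs between appliance builds, so treat this purely as an illustration of where the parameter goes; everything before init=/bin/bash is just an example:
kernel /vmlinuz root=/dev/sda3 ro quiet init=/bin/bash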
- Now press “b” to boot into single-user mode.
- Once the console is up and running, you need to mount your / partition in read/write mode. You can do this by issuing the following command:
mount -n -o remount,rw /
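Because the remount was done with -n (which skips updating /etc/mtab), the plain mount command may not reflect the change. An optional check, not part of the original steps, that reads the kernel’s own view and should show rw for / is:
grep ' / ' /proc/mounts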
- Now issue the following two commands:
lvm lvscan
lvm vgchange -ay
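lvm lvscan lists the logical volumes the appliance knows about, and lvm vgchange -ay activates every volume group so the /dev/mapper device nodes become available for fsck. If you want a quick overview of the volumes before repairing anything, this optional command lists them along with their volume groups and sizes:
lvm lvs -o vg_name,lv_name,lv_size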
- Once you have run these two commands, you can run fsck against your LVM volumes. As an example, let’s say the volume with errors is log_vg-log. You would issue the command:
fsck -y /dev/mapper/log_vg-log
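If you are not sure which volume is corrupted, a rough sketch like the one below walks every activated LVM device and checks it in turn. It assumes the appliance’s volume groups all end in _vg (as log_vg does) and that each volume holds an ext-style filesystem, so review the lvscan output first and adjust the glob if needed:
for vol in /dev/mapper/*_vg-*; do
    echo "Checking $vol"
    fsck -y "$vol"
done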
- Once you have repaired all the affected partitions, reboot and you should be good to go.
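One caveat worth mentioning: because the appliance was started with init=/bin/bash, a plain reboot can hang, since there is no init process to hand off to. If that happens, forcing the reboot works:
reboot -f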
That’s it; you should now have a working vCenter Server again.