Good day, folks,
While troubleshooting this issue with my sales rep, I shared a screenshot of one of my VMs, and he noted that it was odd that the boot disk was attached as device xvdb instead of xvda. He asked me to check whether the other VMs that were having trouble booting looked similar. I went through them and confirmed that every VM that was failing to boot was missing an xvda device. I then went through the Storage menu and found a few disks with no name or description, which was odd to say the least. I attached each of those disks, one at a time, to one of the VMs and booted from it until I had identified and renamed each one. As it stands now, I've been able to get all the VMs back up and running again.
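For anyone else who runs into this, here is roughly the check I ended up doing by hand through the Storage menu, written as a script. It's only a sketch using the XenAPI Python bindings; the pool master address, credentials, and the example name-label are placeholders, not my actual setup.

```python
#!/usr/bin/env python3
"""List VDIs that have lost their name-label so they can be identified and renamed.

Host, credentials, and the example rename below are placeholders.
"""
import XenAPI

session = XenAPI.Session("https://xcp-ng-master.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    for ref, vdi in session.xenapi.VDI.get_all_records().items():
        # Skip snapshots and unmanaged/internal VDIs; only look at "real" disks.
        if vdi["is_a_snapshot"] or not vdi["managed"]:
            continue
        if not vdi["name_label"].strip():
            attached = "attached" if vdi["VBDs"] else "DETACHED"
            size_gib = int(vdi["virtual_size"]) / (1024 ** 3)
            print(f"{vdi['uuid']}  {size_gib:.0f} GiB  {attached}")
            # Once a disk is identified (e.g. by booting from it), give it its name back:
            # session.xenapi.VDI.set_name_label(ref, "MAIL01 - Disk1 (OS)")
finally:
    session.xenapi.session.logout()
```

The commented-out set_name_label call is how the rename itself would be done once a disk has been identified.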
However, that leaves some unanswered questions:
How does taking a snapshot of the MAIL01 VM (which was built with two disks: Disk1 for the OS and Disk2 for data) cause that VM's xvda device to be detached, and how does that snowball to other VMs?
How is it that VDIs I named myself during VM creation suddenly became detached from their VMs and lost their names and descriptions?
How does a VM template lose its disk? I was able to identify the disk that is supposed to be attached to the template, but it is now orphaned, and I don't know how to re-attach it to the template (my untested guess is sketched below).
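On that last question, my untested guess is that a template is just a VM object with is_a_template set, so a plain VBD.create against it should re-attach the orphaned VDI. This is only a sketch via the XenAPI Python bindings, with placeholder host, credentials, template name, and VDI UUID, and I'd appreciate confirmation before I try it:

```python
import XenAPI

session = XenAPI.Session("https://xcp-ng-master.example.lan")
session.xenapi.login_with_password("root", "password")
try:
    # Placeholder template name and VDI UUID; substitute the real ones.
    template_ref = session.xenapi.VM.get_by_name_label("MyTemplate")[0]
    vdi_ref = session.xenapi.VDI.get_by_uuid("orphaned-vdi-uuid-here")
    # Create a VBD linking the orphaned VDI back to the template as its boot disk.
    session.xenapi.VBD.create({
        "VM": template_ref,
        "VDI": vdi_ref,
        "userdevice": "0",        # should come back as xvda
        "bootable": True,
        "mode": "RW",
        "type": "Disk",
        "empty": False,
        "other_config": {},
        "qos_algorithm_type": "",
        "qos_algorithm_params": {},
    })
finally:
    session.xenapi.session.logout()
```

If that's not the supported way to re-associate a disk with a template, please let me know what the right procedure is.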
In any case, I'm glad this happened in a lab environment, and I'm happy to work with you all at Vates on a root cause analysis to prevent it in the future. All I did was take a snapshot, and this is certainly not the experience I should have had (nor what I'm used to seeing).
Some screenshots to illustrate what I saw:
Screenshot_NetXMS01_No_VDI.png
Screenshot_NetXMS01_No_VDI_2.png