Removed VM - Now have unhealthy VDI
-
Well, the day has finally come where I need help... so thanks in advance.
I'm not sure exactly what I did, although I did get interrupted 3 times, so it's entirely possible I did something stupid. I have 2 SRs on a single host (HDD & SSD). I was running low on space on the HDD SR, so I attempted to migrate a non-critical VM to the SSD SR. After that finished, it appeared as though it had cloned two of the VDIs, as they were duplicated on both SRs, but the base copy was still on the HDD SR. I then tried to start the VM and got an error about the VDI not being available. Since this was a non-critical VM and I just needed the space, I decided to remove the VM.
The VDIs on the SSD SR are gone (I think I removed them manually - this might be where it went from bad to worse).
The HDD SR still has a VDI and a base copy. Under HEALTH it shows that there is an unhealthy VDI to coalesce with a length of 1, and it also indicates it's an orphan VDI.
In the logs I have 3 entries for SR_BACKEND_FAILURE_46 (I'm not sure what I did to generate those entries).
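In case it helps with diagnosis, my understanding is that something like the following xe commands (with my actual HDD SR / VDI UUIDs substituted for the placeholders - these are not my real values) should show what's left on that SR and whether anything still references the leftover VDI. Happy to post the output if that's useful:

xe vdi-list sr-uuid=<HDD_SR_UUID> params=uuid,name-label,is-a-snapshot,managed
xe vbd-list vdi-uuid=<LEFTOVER_VDI_UUID>

If I'm reading it right, a truly orphaned VDI should have no VBDs attached to it.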
I don't think the GC process is cleaning this up:
I don't see any disk activity, and in SMlog I see "GC process exiting, no work left" (taken from the log below):

Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938/*.vhd']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938/a9a3bc54-0200-4171-a900-bff38f09d832.vhd']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938/3e3343bc-e1e3-4431-8812-8c99e36083dc.vhd']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938/82ff74e0-978b-4f39-a5aa-6712378f0469.vhd']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] ['ls', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938', '-1', '--color=never']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] Kicking GC
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] Kicking SMGC@ebc48212-0c25-09e1-d118-d8f338548938...
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] lock: released /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SM: [205458] lock: closed /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] === SR ebc48212-0c25-09e1-d118-d8f338548938: gc ===
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: opening lock file /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/running
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: opening lock file /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/gc_active
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: opening lock file /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: acquired /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: tried lock /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/gc_active, acquired: True (exists: True)
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: released /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] Found 0 cache files
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: tried lock /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr, acquired: True (exists: True)
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/ebc48212-0c25-09e1-d118-d8f338548938/*.vhd']
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] pread SUCCESS
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] SR ebc4 ('LocalSSD_1TB') (3 VDIs in 1 VHD trees):
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467]         *82ff74e0(200.000G/65.237G?)
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467]             a9a3bc54(200.000G/35.010G?)
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467]             3e3343bc(200.000G/416.500K?)
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467]
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: released /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] Got sm-config for *82ff74e0(200.000G/65.237G?): {'vhd-blocks': 'eJzt2rEKwjAQgOGTDh1d3PUpuuYBXHykPpKrghjoc4guzrpZMJDGYtqhKB2MJ/h/kHDhIG0CoVeI91cRWRgPqLm8yR1Du7XRfR+6k93sztX6sJVsOStWoir9zgBIzj0PtPHt9/Brcu2FAwAAJXX8lygfwUT7dX5NGasl8/lJX2Sly7quVgsDG+PpqGfUaYtHAAAAAAAAAAAAAADwB/rbpS6zYTwfXlFoAPPPOgw='}
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] No work, exiting
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] GC process exiting, no work left
Mar 22 17:44:54 host-xcpng-nst1 SM: [205467] lock: released /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/gc_active
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] In cleanup
Mar 22 17:44:54 host-xcpng-nst1 SMGC: [205467] SR ebc4 ('LocalSSD_1TB') (3 VDIs in 1 VHD trees): no changes
Mar 22 17:44:54 host-xcpng-nst1 SM: [205475] lock: opening lock file /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
Mar 22 17:44:54 host-xcpng-nst1 SM: [205475] sr_update {'host_ref': 'OpaqueRef:f125e08d-9484-fcd9-7d2d-58e7182e9b90', 'command': 'sr_update', 'args': [], 'device_config': {'SRmaster': 'true', 'device': '/dev/sdb'}, 'session_ref': '******', 'sr_ref': 'OpaqueRef:fdf63bfe-3d91-f7bd-cc6f-84598c29eacc', 'sr_uuid': 'ebc48212-0c25-09e1-d118-d8f338548938', 'subtask_of': 'DummyRef:|1cd1662c-8c35-6a3a-e05e-174e97212357|SR.stat', 'local_cache_sr': '8f1df4f9-3081-3223-233d-fbcf0ba03703'}
Mar 22 17:44:54 host-xcpng-nst1 SM: [205475] lock: closed /var/lock/sm/ebc48212-0c25-09e1-d118-d8f338548938/sr
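In case I should be kicking this off by hand: my understanding is that a manual SR rescan is what triggers the GC/coalesce, so this is roughly what I would run next (UUID is a placeholder - I'd substitute the HDD SR's UUID, and the vhd-util call is the same one the SMlog shows):

xe sr-scan uuid=<HDD_SR_UUID>
vhd-util scan -f -m '/var/run/sr-mount/<HDD_SR_UUID>/*.vhd'
tail -f /var/log/SMlog

Any pointers on how to safely clean up the leftover VDI and base copy would be much appreciated.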