Too many VDIs/VHDs per VM: how do I get rid of the unused ones?
-
Why? How is it related? Have you cleaned up as many VDIs as possible?
-
No... how do I do that, Olivier?
Do you mean delete the orphaned ones?
-
Yes, removing all possible orphaned VDIs
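If you want to kick that off manually rather than waiting, something along these lines should do it on the pool master (the SR UUID is just a placeholder, adjust for your SR):

# rescan the SR: this kicks off the storage garbage collector, which removes orphaned VDIs
xe sr-scan uuid=<sr-uuid>
# then follow the GC activity in the SM log
grep SMGC /var/log/SMlog | tail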
-
Hmm... not looking good, I've got a problem with my backend SR...
Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 orphaned vdis
Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 VDIs for deletion:
Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] *4c5de6b9[VHD](20.000G//192.000M|ao)
Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] *98b10e9d[VHD](50.000G//9.445G|n)
Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Deleting unlinked VDI *4c5de6b9[VHD](20.000G//192.000M|ao)
Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] lock: tried lock /var/lock/sm/0f956522-42d7-5328-a5ec-a7fd406ca0f3/sr, acquired: True (exists: True)
Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #0
Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:24 lon-p-xenserver01 SM: [9438] sr_scan {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|26c1154f-1c31-4ff7-8455-7899be8018d9|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:a5cdddcc-1b39-4fe7-8de0-457e40783795', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
Jan 21 08:57:25 lon-p-xenserver01 SM: [9464] sr_update {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|d6f3febc-c434-446f-b152-8b226d935e8c|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:6355efbb-370e-4192-b69b-d7e529058db9', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #1
Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #2
Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #3
Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #4
Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] '
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #5
Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
Jan 21 08:57:50 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: ' /run/lvm/lvmetad.socket: connect failed: No such file or directory
Jan 21 08:57:50 lon-p-xenserver01 SM: [6577] WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Jan 21 08:57:50 lon-p-xenserver01 SM: [6577] Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
-
Great, seems like a bug in XS 7.6, and I'm running it, LOL.
-
@robert-wild I'm not sure about that. See this comment https://github.com/xcp-ng/xcp/issues/104#issuecomment-449381556
-
So I know my continuous backup replicas aren't happening because it can't coalesce any of my VDIs:
WARNING: Failed to connect to lvmetad
So if I run this command on my XenServer 7.6 host:
vgchange -a y --config global{metadata_read_only=0}
will my VDIs coalesce properly again the next time XenServer runs its automatic coalesce?
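Either way, this is roughly what I'm planning to run afterwards so I don't have to wait for the next automatic GC (the SR UUID is a placeholder, and I haven't verified this yet):

# check whether LVM still reports the stuck logical volume as "in use"
lvs VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3
# trigger an SR scan so the garbage collector / coalesce runs again right away
xe sr-scan uuid=<uuid-of-the-LVM-SR>
# watch the GC log to see whether the coalesce now goes through
tail -f /var/log/SMlog | grep SMGC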
-
I think I have answered my own question: basically dom0 is protecting the source VDI until VBD operations complete, but there's nothing to complete.
Do you think this is worth a shot?
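For reference, this is roughly how I'm checking what dom0 still has plugged (straight xe CLI, untested on my side):

# uuid of the control domain (dom0); on a pool this returns one uuid per host
DOM0=$(xe vm-list is-control-domain=true params=uuid --minimal)
# list the VBDs dom0 still has attached; a VDI stuck here can't be coalesced or removed
xe vbd-list vm-uuid=$DOM0 currently-attached=true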
-
This is already visible in Xen Orchestra, in the Dashboard/Health view: we expose all VDIs attached to a control domain, and it's one click to solve it.
-
Haha, I just want to kill myself:
vbd.delete
{
  "id": "532ddd91-5bdb-691b-b3f2-e9382c74fde7"
}
{
  "code": "OPERATION_NOT_ALLOWED",
  "params": [
    "VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86'"
  ],
  "call": {
    "method": "VBD.destroy",
    "params": [
      "OpaqueRef:1aa11b30-a64a-463a-a83d-c5095c5e9139"
    ]
  },
  "message": "OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')",
  "name": "XapiError",
  "stack": "XapiError: OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')
    at Function.wrap (/xen-orchestra/packages/xen-api/src/_XapiError.js:16:11)
    at /xen-orchestra/packages/xen-api/src/index.js:630:55
    at Generator.throw (<anonymous>)
    at asyncGeneratorStep (/xen-orchestra/packages/xen-api/dist/index.js:58:103)
    at _throw (/xen-orchestra/packages/xen-api/dist/index.js:60:291)
    at tryCatcher (/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromise0 (/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/xen-orchestra/node_modules/bluebird/js/release/promise.js:725:18)
    at _drainQueueStep (/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14)
    at runCallback (timers.js:810:20)
    at tryOnImmediate (timers.js:768:5)
    at processImmediate [as _immediateCallback] (timers.js:745:5)"
}
So it's still attached to my dom0, i.e. the XenServer host itself.
-
You need to unplug it before removing it. XO does that automatically.
-
Thanks Olivier,
I thought that while I'm having this nightmare I might as well update my XOA at the same time.
So if XOA automatically unplugs the VBD for the VDI, why is it throwing errors?
-
This is a message for the XO team; it should do that normally.
-
It's still there, i.e. the host is still attached to that VDI.
I'm thinking this is the reason my backups won't run anymore.
If not, I will do it on the XenServer CLI, but it would be nice to know why this isn't working.
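If I do end up doing it by hand on the host, I'm assuming the steps are roughly this, with the VBD UUID from the error above:

# unplug the VBD from dom0 first, then destroy it
xe vbd-unplug uuid=532ddd91-5bdb-691b-b3f2-e9382c74fde7
xe vbd-destroy uuid=532ddd91-5bdb-691b-b3f2-e9382c74fde7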