XCP-ng

    too many VDI/VHD per VM, how to get rid of unused ones

    olivierlambert (Vates 🪐 Co-Founder CEO):

      1. Please edit your post and use Markdown syntax for code/error blocks.
      2. Rescan your SR and try again. At worst, remove everything you can, leave it a while to coalesce, and then try removing the rest again.
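
      If it helps, a rescan can also be triggered from the host CLI (a minimal sketch using the standard xe commands; the SR name is a placeholder):

          # find the SR UUID, then ask XAPI to rescan it so the GC/coalesce can run
          xe sr-list name-label="<your SR name>" --minimal
          xe sr-scan uuid=<sr-uuid>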
      robert wild:

        vdi.delete
        {
        "id": "1778c579-65a5-48b3-82df-9558a4f6ff7f"
        }
        {
        "code": "SR_BACKEND_FAILURE_1200",
        "params": [
        "",
        "",
        ""
        ],
        "task": {
        "uuid": "3c72bd9a-36ec-01da-b1b5-0b19468f4532",
        "name_label": "Async.VDI.destroy",
        "name_description": "",
        "allowed_operations": [],
        "current_operations": {},
        "created": "20200120T19:23:41Z",
        "finished": "20200120T19:23:50Z",
        "status": "failure",
        "resident_on": "OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8",
        "progress": 1,
        "type": "<none/>",
        "result": "",
        "error_info": [
        "SR_BACKEND_FAILURE_1200",
        "",
        "",
        ""
        ],
        "other_config": {},
        "subtask_of": "OpaqueRef:NULL",
        "subtasks": [],
        "backtrace": "(((process"xapi @ lon-p-xenserver01")(filename lib/backtrace.ml)(line 210))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/storage_access.ml)(line 31))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/xapi_vdi.ml)(line 683))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/message_forwarding.ml)(line 100))((process"xapi @ lon-p-xenserver01")(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/rbac.ml)(line 236))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/server_helpers.ml)(line 83)))"
        },
        "message": "SR_BACKEND_FAILURE_1200(, , )",
        "name": "XapiError",
        "stack": "XapiError: SR_BACKEND_FAILURE_1200(, , )
        at Function.wrap (/xen-orchestra/packages/xen-api/src/_XapiError.js:16:11)
        at _default (/xen-orchestra/packages/xen-api/src/_getTaskResult.js:11:28)
        at Xapi._addRecordToCache (/xen-orchestra/packages/xen-api/src/index.js:812:37)
        at events.forEach.event (/xen-orchestra/packages/xen-api/src/index.js:833:13)
        at Array.forEach (<anonymous>)
        at Xapi._processEvents (/xen-orchestra/packages/xen-api/src/index.js:823:11)
        at /xen-orchestra/packages/xen-api/src/index.js:984:13
        at Generator.next (<anonymous>)
        at asyncGeneratorStep (/xen-orchestra/packages/xen-api/dist/index.js:58:103)
        at _next (/xen-orchestra/packages/xen-api/dist/index.js:60:194)
        at tryCatcher (/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31)
        at Promise._settlePromise (/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18)
        at Promise._settlePromise0 (/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10)
        at Promise._settlePromises (/xen-orchestra/node_modules/bluebird/js/release/promise.js:729:18)
        at _drainQueueStep (/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12)
        at _drainQueue (/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9)
        at Async._drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5)
        at Immediate.Async.drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14)
        at runCallback (timers.js:810:20)
        at tryOnImmediate (timers.js:768:5)
        at processImmediate [as _immediateCallback] (timers.js:745:5)"
        }
        
        robert wild:

          I've got this command:

          xe vm-disk-list vm=(name or UUID)

          I'm going to cross-reference it with this:

          xe vdi-list sr-uuid=$sr_uuid params=uuid managed=true

          and any VDIs that don't match I'm going to delete with:

          xe vdi-destroy

          Is this correct?

          Hmm... when I run

          xe vdi-list

          I see I have a lot of base copies. What are they all about?
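
          One way to script that cross-check (a rough sketch only, assuming a single SR and the standard xe CLI; note that snapshots and base copies, the read-only parents in a VHD chain, also have no VBDs, so check the name-label and is-a-snapshot fields before destroying anything):

              SR_UUID=<your-sr-uuid>   # hypothetical placeholder
              # every managed VDI on the SR with no VBD at all is a candidate orphan
              for vdi in $(xe vdi-list sr-uuid="$SR_UUID" managed=true --minimal | tr ',' ' '); do
                  if [ -z "$(xe vbd-list vdi-uuid="$vdi" --minimal)" ]; then
                      echo "orphan candidate: $vdi"
                      xe vdi-list uuid="$vdi" params=name-label,is-a-snapshot
                  fi
              done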

          robert wild:

            OK Olivier,

            as XOA isn't working, can I do it from the CLI on the Xen host instead, i.e.:

            xe vdi-destroy uuid=(uuid of orphaned VDI)

            olivierlambert (Vates 🪐 Co-Founder CEO):

              It's not XOA that's not working. XO is the client, sending commands to the host. You had a pretty clear error coming from the host, so the issue is there.

              Also, please edit your post and use Markdown syntax, otherwise it's hard to read.

              robert wild:

                Hi Olivier, do I need to do this?

                https://support.citrix.com/article/CTX214523

                olivierlambert (Vates 🪐 Co-Founder CEO):

                  Why? How is it related? Have you removed as many VDIs as you can?

                  robert wild:

                    No... how do I do that, Olivier?

                    Do you mean delete the orphaned ones?

                    olivierlambert (Vates 🪐 Co-Founder CEO):

                      Yes, remove all the orphaned VDIs you can.

                      robert wild:

                        Hmm... not looking good. I've got a problem with my backend SR:

                        Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 orphaned vdis
                        Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 VDIs for deletion:
                        Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577]   *4c5de6b9[VHD](20.000G//192.000M|ao)
                        Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577]   *98b10e9d[VHD](50.000G//9.445G|n)
                        Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Deleting unlinked VDI *4c5de6b9[VHD](20.000G//192.000M|ao)
                        Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] lock: tried lock /var/lock/sm/0f956522-42d7-5328-a5ec-a7fd406ca0f3/sr, acquired: True (exists: True)
                        Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #0
                        Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:24 lon-p-xenserver01 SM: [9438] sr_scan {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|26c1154f-1c31-4ff7-8455-7899be8018d9|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:a5cdddcc-1b39-4fe7-8de0-457e40783795', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
                        Jan 21 08:57:25 lon-p-xenserver01 SM: [9464] sr_update {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|d6f3febc-c434-446f-b152-8b226d935e8c|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:6355efbb-370e-4192-b69b-d7e529058db9', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #1
                        Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #2
                        Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #3
                        Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #4
                        Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] '
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #5
                        Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
                        Jan 21 08:57:50 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
                        Jan 21 08:57:50 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
                        Jan 21 08:57:50 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
                        
                        robert wild:

                          Great, it seems to be a bug in XS 7.6, and I'm running it, LOL:

                          https://github.com/xcp-ng/xcp/issues/104 (closed: "XCP-ng 7.6 /run/lvm/lvmetad.socket: connect failed")

                          stormi (Vates 🪐 XCP-ng Team) @robert wild:

                            @robert-wild I'm not sure about that. See this comment https://github.com/xcp-ng/xcp/issues/104#issuecomment-449381556

                            robert wild:

                              So I know my continuous backup replicas aren't happening because it can't coalesce any of my VDIs:

                              WARNING: Failed to connect to lvmetad

                              So if I run this command on my XenServer 7.6:

                              vgchange -a y --config global{metadata_read_only=0}

                              the next time XenServer does an auto-coalesce of my VDIs, will they become good again?
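
                              For what it's worth, the lvmetad warning is often just noise on XenServer/XCP-ng, where lvmetad is deliberately disabled in the dom0 LVM configuration; that is an assumption worth checking before running vgchange:

                                  # if use_lvmetad is set to 0, the "Failed to connect to lvmetad" message is expected
                                  # and is not, by itself, the reason the coalesce is failing
                                  grep use_lvmetad /etc/lvm/lvm.conf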

                              robert wild:

                                I think I've answered my own question:

                                https://support.citrix.com/article/CTX207574?_ga=2.182110311.1347118359.1579810336-430666083.1579810336

                                Basically dom0 is protecting the source VDI until VBD operations complete, but there's nothing to complete.

                                Do you think this is worth a shot?
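
                                If the VDI really is being held by a leftover VBD on the control domain, one way to spot it from the host CLI is sketched below (nothing here is specific to this pool; in a pool there is one control domain per host):

                                    # find each dom0's UUID, then list any VBDs still plugged into it
                                    for dom0 in $(xe vm-list is-control-domain=true --minimal | tr ',' ' '); do
                                        xe vbd-list vm-uuid="$dom0" currently-attached=true params=uuid,vdi-uuid,device
                                    done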

                                olivierlambert (Vates 🪐 Co-Founder CEO):

                                  This is already visible in Xen Orchestra: in the Dashboard/Health view we expose all VDIs attached to a control domain, and it's one click to solve it.

                                  robert wild:

                                    Haha, I just want to kill myself:

                                    vbd.delete
                                    {
                                      "id": "532ddd91-5bdb-691b-b3f2-e9382c74fde7"
                                    }
                                    {
                                      "code": "OPERATION_NOT_ALLOWED",
                                      "params": [
                                        "VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86'"
                                      ],
                                      "call": {
                                        "method": "VBD.destroy",
                                        "params": [
                                          "OpaqueRef:1aa11b30-a64a-463a-a83d-c5095c5e9139"
                                        ]
                                      },
                                      "message": "OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')",
                                      "name": "XapiError",
                                      "stack": "XapiError: OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')
                                        at Function.wrap (/xen-orchestra/packages/xen-api/src/_XapiError.js:16:11)
                                        at /xen-orchestra/packages/xen-api/src/index.js:630:55
                                        at Generator.throw (<anonymous>)
                                        at asyncGeneratorStep (/xen-orchestra/packages/xen-api/dist/index.js:58:103)
                                        at _throw (/xen-orchestra/packages/xen-api/dist/index.js:60:291)
                                        at tryCatcher (/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23)
                                        at Promise._settlePromiseFromHandler (/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31)
                                        at Promise._settlePromise (/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18)
                                        at Promise._settlePromise0 (/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10)
                                        at Promise._settlePromises (/xen-orchestra/node_modules/bluebird/js/release/promise.js:725:18)
                                        at _drainQueueStep (/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12)
                                        at _drainQueue (/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9)
                                        at Async._drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5)
                                        at Immediate.Async.drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14)
                                        at runCallback (timers.js:810:20)
                                        at tryOnImmediate (timers.js:768:5)
                                        at processImmediate [as _immediateCallback] (timers.js:745:5)"
                                    }
                                    

                                    So it's still attached to my dom0, i.e. the XenServer host itself.

                                    olivierlambert (Vates 🪐 Co-Founder CEO):

                                      You need to unplug it before removing it. XO does that automatically.
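
                                      From the host CLI, the sequence would be roughly the following (the VBD UUID is copied from the error above, for illustration only; double-check it is the dom0-attached VBD before destroying it):

                                          # unplug the VBD from the control domain first, then destroy it
                                          xe vbd-unplug uuid=532ddd91-5bdb-691b-b3f2-e9382c74fde7
                                          xe vbd-destroy uuid=532ddd91-5bdb-691b-b3f2-e9382c74fde7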

                                      robert wild:

                                        Thanks Olivier,

                                        I thought while I'm having this nightmare I might as well update my XOA at the same time.

                                        So if XOA auto-unplugs the VBD from the VDI, why is it throwing errors?

                                        olivierlambert (Vates 🪐 Co-Founder CEO):

                                          That's a question for the XO team; it should normally do that.

                                          robert wild:

                                            It's still there, i.e. the host is still attached to that VDI.

                                            I'm thinking this is why my backups won't run anymore.

                                            If not, I'll do it on the XenServer CLI, but it would be nice to know why this isn't working.
