XCP-ng
    robert wild

    Posts

    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      It's still there, i.e. the host is still attached to that VDI.

      I'm thinking this is why my backups won't run anymore.

      If not, I will do it on the XenServer CLI, but it would be nice to know why this isn't working.
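
      One way to see which VBD (and which VM) still holds that VDI, as a rough check from the host CLI (a sketch; the UUID is a placeholder):

      # Show every VBD that references the VDI, which VM it belongs to,
      # and whether it is currently attached.
      xe vbd-list vdi-uuid=<vdi-uuid> \
          params=uuid,vm-uuid,vm-name-label,currently-attached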

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Thanks Olivier,

      I thought while I'm having this nightmare I might as well update my XOA at the same time.

      So if XOA auto-unplugs the VBD from the VDI, why is it throwing errors?
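
      For reference, the manual equivalent on the host would be something along these lines (a sketch; <vbd-uuid> is a placeholder, and the unplug step only matters while the VBD is currently attached):

      # Detach the VBD from the running domain first, then remove it.
      xe vbd-unplug uuid=<vbd-uuid>
      xe vbd-destroy uuid=<vbd-uuid>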

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Haha, I just want to kill myself:

      vbd.delete
      {
        "id": "532ddd91-5bdb-691b-b3f2-e9382c74fde7"
      }
      {
        "code": "OPERATION_NOT_ALLOWED",
        "params": [
          "VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86'"
        ],
        "call": {
          "method": "VBD.destroy",
          "params": [
            "OpaqueRef:1aa11b30-a64a-463a-a83d-c5095c5e9139"
          ]
        },
        "message": "OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')",
        "name": "XapiError",
        "stack": "XapiError: OPERATION_NOT_ALLOWED(VBD '532ddd91-5bdb-691b-b3f2-e9382c74fde7' still attached to '1f927d69-8257-4f23-9335-7d007ed9ab86')
          at Function.wrap (/xen-orchestra/packages/xen-api/src/_XapiError.js:16:11)
          at /xen-orchestra/packages/xen-api/src/index.js:630:55
          at Generator.throw (<anonymous>)
          at asyncGeneratorStep (/xen-orchestra/packages/xen-api/dist/index.js:58:103)
          at _throw (/xen-orchestra/packages/xen-api/dist/index.js:60:291)
          at tryCatcher (/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23)
          at Promise._settlePromiseFromHandler (/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31)
          at Promise._settlePromise (/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18)
          at Promise._settlePromise0 (/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10)
          at Promise._settlePromises (/xen-orchestra/node_modules/bluebird/js/release/promise.js:725:18)
          at _drainQueueStep (/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12)
          at _drainQueue (/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9)
          at Async._drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5)
          at Immediate.Async.drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14)
          at runCallback (timers.js:810:20)
          at tryOnImmediate (timers.js:768:5)
          at processImmediate [as _immediateCallback] (timers.js:745:5)"
      }
      

      So it's still attached to my dom0, i.e. the XenServer host itself.
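
      A quick way to confirm that the VM in the error really is the control domain (a sketch; is-control-domain is a standard VM field, and the UUID is the one from the error above):

      # Prints "true" if the VM holding the stuck VBD is dom0 itself.
      xe vm-param-get uuid=1f927d69-8257-4f23-9335-7d007ed9ab86 \
          param-name=is-control-domain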

      posted in Xen Orchestra
    • RE: XOA - backups skipped to protect VDI chain

      I think I have answered my own question:

      https://support.citrix.com/article/CTX207574?_ga=2.182110311.1347118359.1579810336-430666083.1579810336

      Basically dom0 is protecting the source VDI until VBD operations complete, but there's nothing to complete.

      Do you think this is worth a shot?
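
      If it is a stuck coalesce protecting the chain, the garbage collector's activity shows up in /var/log/SMlog on the SR master, and a rescan can be kicked off from the CLI (a sketch; the SR UUID is a placeholder):

      # Watch recent coalesce/GC activity on the pool master.
      grep -i coalesce /var/log/SMlog | tail -n 50

      # Trigger an SR scan, which also pokes the garbage collector.
      xe sr-scan uuid=<sr-uuid>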

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      I think I have answered my own question:

      https://support.citrix.com/article/CTX207574?_ga=2.182110311.1347118359.1579810336-430666083.1579810336

      Basically dom0 is protecting the source VDI until VBD operations complete, but there's nothing to complete.

      Do you think this is worth a shot?

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      So I know my continuous replication backups aren't happening because it can't coalesce any of my VDIs:

      WARNING: Failed to connect to lvmetad

      So if I run this command on my XenServer 7.6:

      vgchange -a y --config global{metadata_read_only=0}

      the next time my XenServer does an auto-coalesce of my VDIs, will it become good again?
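
      Before that, it may be worth checking whether lvmetad is even supposed to be running (a sketch, assuming the stock lvm2 units from the CentOS 7 base that XenServer/XCP-ng 7.6 is built on):

      # Is the lvmetad socket/service present and active?
      systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service

      # Does the LVM config expect lvmetad at all?
      grep use_lvmetad /etc/lvm/lvm.conf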

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Great, it seems to be a bug in XS 7.6 and I'm running it, LOL:

      https://github.com/xcp-ng/xcp/issues/104

      (Issue #104, opened by jaroslaw-freus in xcp-ng/xcp, now closed: "XCP-ng 7.6 /run/lvm/lvmetad.socket: connect failed")

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Hmm... not looking good, I've got a problem with my backend SR...

      Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 orphaned vdis
      Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Found 2 VDIs for deletion:
      Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577]   *4c5de6b9[VHD](20.000G//192.000M|ao)
      Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577]   *98b10e9d[VHD](50.000G//9.445G|n)
      Jan 21 08:57:16 lon-p-xenserver01 SMGC: [6577] Deleting unlinked VDI *4c5de6b9[VHD](20.000G//192.000M|ao)
      Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] lock: tried lock /var/lock/sm/0f956522-42d7-5328-a5ec-a7fd406ca0f3/sr, acquired: True (exists: True)
      Jan 21 08:57:16 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #0
      Jan 21 08:57:21 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:24 lon-p-xenserver01 SM: [9438] sr_scan {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|26c1154f-1c31-4ff7-8455-7899be8018d9|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:a5cdddcc-1b39-4fe7-8de0-457e40783795', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
      Jan 21 08:57:25 lon-p-xenserver01 SM: [9464] sr_update {'sr_uuid': 'e4497404-baa7-c26f-49b1-266fd1f89e5f', 'subtask_of': 'DummyRef:|d6f3febc-c434-446f-b152-8b226d935e8c|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8', 'session_ref': 'OpaqueRef:6355efbb-370e-4192-b69b-d7e529058db9', 'device_config': {'username': 'robert.wild.admin', 'vers': '3.0', 'cifspassword_secret': '8a5270fc-72ca-59aa-a71e-c8cd9dc91750', 'iso_path': '/engineering/xen/iso', 'location': '//10.110.130.101/mmfs1', 'type': 'cifs', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:88f40308-1653-4d3e-906b-c85de994844d'}
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #1
      Jan 21 08:57:26 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #2
      Jan 21 08:57:31 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #3
      Jan 21 08:57:36 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #4
      Jan 21 08:57:41 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] '
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] *** lvremove failed on attempt #5
      Jan 21 08:57:45 lon-p-xenserver01 SM: [6577] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653']
      Jan 21 08:57:50 lon-p-xenserver01 SM: [6577] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
      Jan 21 08:57:50 lon-p-xenserver01 SM: [6577]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
      Jan 21 08:57:50 lon-p-xenserver01 SM: [6577]   Logical volume VG_XenStorage-0f956522-42d7-5328-a5ec-a7fd406ca0f3/VHD-4c5de6b9-ae97-4a15-ad76-84a49c397653 in use.
      
      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      No... how do I do that, Olivier?

      Do you mean delete the orphaned ones?

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Hi Olivier, do I need to do this?

      https://support.citrix.com/article/CTX214523

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      OK Olivier,

      as XOA isn't working, can I do it from the XenServer CLI? So:

      xe vdi-destroy uuid=<uuid-of-orphaned-VDI>
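
      For safety, one could double-check that the VDI really has no VBDs left before destroying it (a sketch; the UUID is a placeholder):

      # vbd-uuids should come back empty for a truly orphaned VDI.
      xe vdi-list uuid=<vdi-uuid> params=uuid,name-label,vbd-uuids

      xe vdi-destroy uuid=<vdi-uuid>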

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Got the command:

      xe vm-disk-list vm=<name or UUID>

      I'm going to cross-reference it with this:

      xe vdi-list sr-uuid=$sr_uuid params=uuid managed=true

      and any VDIs that don't match I'm going to delete with:

      xe vdi-destroy

      Is this correct?

      Hmm... when I run

      xe vdi-list

      I see I have a lot of base copies.

      What are they all about?
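
      A dry-run sketch of that cross-reference (assumes bash on the pool master; it only prints candidate UUIDs instead of destroying anything, and the SR UUID is passed in as an argument):

      #!/bin/bash
      # Print every managed, non-snapshot VDI on the given SR that no VBD
      # references, i.e. candidates for xe vdi-destroy. Review before deleting.
      sr_uuid="$1"

      # All managed, non-snapshot VDIs on the SR.
      xe vdi-list sr-uuid="$sr_uuid" managed=true is-a-snapshot=false \
          params=uuid --minimal | tr ',' '\n' | sort > /tmp/all_vdis

      # Every VDI referenced by some VBD (attached to a VM or to dom0).
      xe vbd-list params=vdi-uuid --minimal | tr ',' '\n' | sort -u > /tmp/used_vdis

      # VDIs in the first list but not the second.
      comm -23 /tmp/all_vdis /tmp/used_vdis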

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones
      vdi.delete
      {
        "id": "1778c579-65a5-48b3-82df-9558a4f6ff7f"
      }
      {
        "code": "SR_BACKEND_FAILURE_1200",
        "params": [
          "",
          "",
          ""
        ],
        "task": {
          "uuid": "3c72bd9a-36ec-01da-b1b5-0b19468f4532",
          "name_label": "Async.VDI.destroy",
          "name_description": "",
          "allowed_operations": [],
          "current_operations": {},
          "created": "20200120T19:23:41Z",
          "finished": "20200120T19:23:50Z",
          "status": "failure",
          "resident_on": "OpaqueRef:bb0f833a-b400-4862-9d3f-07f15f34e0f8",
          "progress": 1,
          "type": "<none/>",
          "result": "",
          "error_info": [
            "SR_BACKEND_FAILURE_1200",
            "",
            "",
            ""
          ],
          "other_config": {},
          "subtask_of": "OpaqueRef:NULL",
          "subtasks": [],
          "backtrace": "(((process"xapi @ lon-p-xenserver01")(filename lib/backtrace.ml)(line 210))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/storage_access.ml)(line 31))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/xapi_vdi.ml)(line 683))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/message_forwarding.ml)(line 100))((process"xapi @ lon-p-xenserver01")(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/rbac.ml)(line 236))((process"xapi @ lon-p-xenserver01")(filename ocaml/xapi/server_helpers.ml)(line 83)))"
        },
        "message": "SR_BACKEND_FAILURE_1200(, , )",
        "name": "XapiError",
        "stack": "XapiError: SR_BACKEND_FAILURE_1200(, , )
          at Function.wrap (/xen-orchestra/packages/xen-api/src/_XapiError.js:16:11)
          at _default (/xen-orchestra/packages/xen-api/src/_getTaskResult.js:11:28)
          at Xapi._addRecordToCache (/xen-orchestra/packages/xen-api/src/index.js:812:37)
          at events.forEach.event (/xen-orchestra/packages/xen-api/src/index.js:833:13)
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (/xen-orchestra/packages/xen-api/src/index.js:823:11)
          at /xen-orchestra/packages/xen-api/src/index.js:984:13
          at Generator.next (<anonymous>)
          at asyncGeneratorStep (/xen-orchestra/packages/xen-api/dist/index.js:58:103)
          at _next (/xen-orchestra/packages/xen-api/dist/index.js:60:194)
          at tryCatcher (/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23)
          at Promise._settlePromiseFromHandler (/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31)
          at Promise._settlePromise (/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18)
          at Promise._settlePromise0 (/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10)
          at Promise._settlePromises (/xen-orchestra/node_modules/bluebird/js/release/promise.js:729:18)
          at _drainQueueStep (/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12)
          at _drainQueue (/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9)
          at Async._drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5)
          at Immediate.Async.drainQueues (/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14)
          at runCallback (timers.js:810:20)
          at tryOnImmediate (timers.js:768:5)
          at processImmediate [as _immediateCallback] (timers.js:745:5)"
      }
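
      SR_BACKEND_FAILURE_* errors come from the storage backend, and the blank params here mean XAPI did not pass the detail through, so the useful message usually ends up in /var/log/SMlog on the host that ran the task (a sketch):

      # Watch the storage manager log on that host while re-running the
      # failing vdi.delete; the real backend error is logged here.
      tail -f /var/log/SMlog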
      
      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      So can I delete the "type:VDI-snapshot" ones?

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Just out of interest, when I change the magnifying glass (search filter) to "type:VDI-unmanaged", it lists "base copy" entries.

      What are they?

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      Thanks Olivier. When I change the magnifying glass (search filter) to "type:!VDI-unmanaged" I see, under the VMs, the actual live VM name and the backup VM, so I think it's safe to delete all the orphaned ones.

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      I'm looking and I see no VMs attached to the orphaned disks.

      (screenshot: orphaned_disks.PNG)

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      @olivierlambert you're amazing, Olivier. So I imagine I can delete all the orphaned VDIs?

      posted in Xen Orchestra
    • RE: too many VDI/VHD per VM, how to get rid of unused ones

      @olivierlambert but I can't see any more orphaned VDIs in XOA?

      posted in Xen Orchestra