XCP-ng

    Topics

    • Tristis Oris

      API doc

      REST API · Solved
      0 Votes · 6 Posts · 132 Views
      olivierlambert
      This page is being improved as we speak. It shouldn't be in "Future improvements" because it's already here.
    • Tristis Oris

      No Xen tools detected on Windows server

      Management · Solved
      0 Votes · 13 Posts · 470 Views
      Tristis Oris
      @DustinB I advise you not to decide for others what they should do. I'm tired.
    • Tristis Oris

      Metadata backup management

      Backup
      0 Votes · 3 Posts · 102 Views
      lsouai-vates
      @olivierlambert As far as backup is concerned, I think you should ping @florent for XO-5...
    • Tristis Oris

      about Anti-affinity

      Management · Solved
      0 Votes · 3 Posts · 105 Views
      Tristis Oris
      @olivierlambert Just to be sure. Thanks.
    • Tristis Oris

      Xcp site is broken a bit

      Off topic · Solved
      0 Votes · 6 Posts · 220 Views
      olivierlambert
      No worries, always good to have feedback
    • Tristis Oris

      XO tasks - select all button

      Management
      0 Votes · 11 Posts · 127 Views
      Tristis Oris
      @olivierlambert As I said, it works now. So never mind.
    • Tristis Oris

      SR Garbage Collection running permanently

      Management
      0 Votes · 32 Posts · 1k Views
      R
      @ronan-a For example, for ID 3407716 in SMlog on the Dom0 master:

      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] Setting LVM_DEVICE to /dev/disk/by-scsid/3624a93704071cc78f82b4df4000113ee
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] Setting LVM_DEVICE to /dev/disk/by-scsid/3624a93704071cc78f82b4df4000113ee
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache created for VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89
      Jul 9 09:44:45 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893214.155789358'
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] ['/sbin/vgs', '--readonly', 'VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache: will initialize now
      Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache: refreshing
      Jul 9 09:44:46 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893215.105961535'
      Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr
      Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] Entering _checkMetadataVolume
      Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] vdi_list_changed_blocks {'host_ref': 'OpaqueRef:8bb2bb09-1d56-cb47-6014-91c3eca78529', 'command': 'vdi_list_changed_blocks', 'args': ['OpaqueRef:3a4d6847-eb39-5a96-f2c2-ffcad3da4477'], 'device_config': {'SRmaster': 'true', 'port': '3260', 'SCSIid': '3624a93704071cc78f82b4df4000113ee', 'targetIQN': 'iqn.2010-06.com.purestorage:flasharray.2498b71d53b104d9', 'target': '10.20.0.21', 'multihomelist': '10.20.0.21:3260,10.20.0.22:3260,10.20.1.21:3260,10.20.1.22:3260'}, 'session_ref': '******', 'sr_ref': 'OpaqueRef:78f4538b-4c0b-531c-c523-fd92f8738fc5', 'sr_uuid': '5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89', 'vdi_ref': 'OpaqueRef:05e1857b-f39d-47da-9c06-d0d35ba97f00', 'vdi_location': '9fae176c-2c5f-4fd1-91fd-9bdb32533795', 'vdi_uuid': '9fae176c-2c5f-4fd1-91fd-9bdb32533795', 'subtask_of': 'DummyRef:|dea7f1a8-9116-8be0-6ba8-c3ecac77b442|VDI.list_changed_blocks', 'vdi_on_boot': 'persist', 'vdi_allow_caching': 'false'}
      Jul 9 09:44:47 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893215.990307086'
      Jul 9 09:44:47 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893216.783648117'
      Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog
      Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog
      Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] LVMCache: refreshing
      Jul 9 09:44:48 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893217.178993739'
      Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:49 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893218.029420281'
      Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog']
      Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog', '-c']
      Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog => 1 / '' / ''
      Jul 9 09:44:50 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893219.268237038'
      Jul 9 09:44:50 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog']
      Jul 9 09:44:50 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:51 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893220.721225461'
      Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-9fae176c--2c5f--4fd1--91fd--9bdb32533795.cbtlog']
      Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog
      Jul 9 09:44:51 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893220.765534349'
      Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] LVMCache: refreshing
      Jul 9 09:44:53 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893222.038114535'
      Jul 9 09:44:53 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:44:53 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:54 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893222.921315355'
      Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-s']
      Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / ''
      Jul 9 09:44:55 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893224.138058885'
      Jul 9 09:44:55 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:44:55 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:56 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.053222085'
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog']
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] DEBUG: Processing VDI 54bc2594-5228-411a-a4b2-cc1a7502d9a4 of size 107374182400
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] LVMCache: refreshing
      Jul 9 09:44:56 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.076140954'
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:57 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.934427314'
      Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-b']
      Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / ''
      Jul 9 09:44:59 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893227.790063374'
      Jul 9 09:44:59 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:44:59 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:00 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893228.82935698'
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog']
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] Size of bitmap: 1638400
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] LVMCache: refreshing
      Jul 9 09:45:00 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893229.259729804'
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89']
      Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:01 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893230.644242194'
      Jul 9 09:45:01 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-c']
      Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / ''
      Jul 9 09:45:02 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893231.488289557'
      Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog']
      Jul 9 09:45:03 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:04 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893233.069490829'
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog']
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] pread SUCCESS
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:45:04 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893233.110222368'
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] Raising exception [460, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]]
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ***** generic exception: vdi_list_changed_blocks: EXCEPTION <class 'xs_errors.SROSError'>, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     return self._run_locked(sr)
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     rv = self._run(sr, target)
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 333, in _run
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     return target.list_changed_blocks()
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/VDI.py", line 761, in list_changed_blocks
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     "Source and target VDI are unrelated")
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ***** LVHD over iSCSI: EXCEPTION <class 'xs_errors.SROSError'>, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 392, in run
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     ret = cmd.run(sr)
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 113, in run
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     return self._run_locked(sr)
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     rv = self._run(sr, target)
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/SRCommand.py", line 333, in _run
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     return target.list_changed_blocks()
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]   File "/opt/xensource/sm/VDI.py", line 761, in list_changed_blocks
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]     "Source and target VDI are unrelated")
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716]
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog
      Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr
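      For digging through dumps like the one above, it can help to filter SMlog down to just the failure lines first. A minimal sketch (the scratch file is hypothetical, used only to make the example self-contained; on a real XCP-ng host you would grep /var/log/SMlog):

```shell
# Write two of the failure lines quoted above into a scratch file so the
# example is self-contained; on a real host, point grep at /var/log/SMlog.
cat > /tmp/smlog_sample.txt <<'EOF'
Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] Raising exception [460, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]]
Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ***** generic exception: vdi_list_changed_blocks: EXCEPTION <class 'xs_errors.SROSError'>, Failed to calculate changed blocks for given VDIs.
EOF

# Keep only the lines that mark a raised or logged exception.
grep -E 'Raising exception|EXCEPTION' /tmp/smlog_sample.txt
```

      With the noise gone, the task ID in brackets ([3407716] here) lets you pull the full trace for just that operation with a second grep.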
    • Tristis Oris

      Manual snapshots retention

      Management · Solved
      0 Votes · 46 Posts · 2k Views
      lsouai-vates
      @olivierlambert We are thinking about implementing a relational DB in the future. Maybe it could offer to store snapshots as "archived"... I don't know for now. I understand that it could be risky to delete old snapshots, and maybe the responsibility for setting the snapshot retention could be an ACL subject... There are plenty of avenues of work on this topic, and more generally on the retention of information about objects, tasks, and other actions. It's a real in-depth topic.
    • Tristis Oris

      Netbox version 4.2.1 not supported

      Advanced features · Solved
      0 Votes · 13 Posts · 759 Views
      W
      Thanks for fixing this!
    • Tristis Oris

      Backup-reports plugin logic

      Backup
      0 Votes · 1 Post · 72 Views
      No one has replied
    • Tristis Oris

      Host shows no stats when secondary link is down

      Management
      0 Votes · 10 Posts · 223 Views
      olivierlambert
      It's still a draft, so maybe it's not working correctly; @florent might give it a look when he can.
    • Tristis Oris

      The "paths[1]" argument must be of type string. Received undefined

      Backup
      0 Votes · 7 Posts · 388 Views
      Tristis Oris
      @stephane-m-dev that happens again for 1 vm. { "data": { "type": "VM", "id": "316e7303-c9c9-9bb6-04ef-83948ee1b19e", "name_label": "name" }, "id": "1732299284886", "message": "backup VM", "start": 1732299284886, "status": "failure", "tasks": [ { "id": "1732299284997", "message": "clean-vm", "start": 1732299284997, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299341663, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 1)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299285125", "message": "clean-vm", "start": 1732299285125, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299343111, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 3)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299343953", "message": "snapshot", "start": 1732299343953, "status": "success", "end": 1732299346495, "result": "ee646d05-83b2-31d8-e54b-0d3b0cf7df1d" }, { "data": { "id": "4b6d24a3-0b1e-48d5-aac2-a06e3a8ee485", "isFull": false, "type": "remote" }, "id": "1732299346495:0", "message": "export", "start": 1732299346495, "status": "success", "tasks": [ { "id": "1732299353253", "message": "transfer", "start": 1732299353253, "status": "success", "end": 1732299450434, "result": { "size": 9674571776 } }, { "id": "1732299501828:0", "message": "clean-vm", "start": 1732299501828, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", 
"child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup 
are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299518747, "result": { "merge": false } } ], "end": 1732299518760 }, { "data": { "id": "8da40b08-636f-450d-af15-3264b9692e1f", "isFull": false, "type": "remote" }, "id": "1732299346496", "message": "export", "start": 1732299346496, "status": "success", "tasks": [ { "id": "1732299353244", "message": "transfer", "start": 1732299353244, "status": "success", "end": 1732299450546, "result": { "size": 9674571776 } }, { "id": "1732299451765", "message": "clean-vm", "start": 1732299451765, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": 
"/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299501791, "result": { "merge": 
false } } ], "end": 1732299501828 } ], "infos": [ { "message": "Transfer data using NBD" } ], "end": 1732299518760 } ], "end": 1732299518761 }
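      A job log this size is easier to triage programmatically. The sketch below is hypothetical (the helper name and the trimmed-down sample are mine, not part of Xen Orchestra); it walks the nested "tasks" arrays of a backup-job log shaped like the one above and collects every failed subtask together with its warning messages:

```python
# Hypothetical helper: recursively walk an XO backup-job log (shape as in the
# excerpt above) and yield (message, warning messages) for each failed task.

def failed_tasks(log):
    """Yield (task message, list of warning messages) for every failed (sub)task."""
    for task in log.get("tasks", []):
        if task.get("status") == "failure":
            warnings = [w["message"] for w in task.get("warnings", [])]
            yield task.get("message"), warnings
        # Sub-tasks nest under their own "tasks" key, so recurse.
        yield from failed_tasks(task)

# Trimmed-down stand-in for the log pasted above.
log = {
    "status": "failure",
    "tasks": [
        {"message": "clean-vm", "status": "failure",
         "warnings": [{"message": "VHD check error"},
                      {"message": "missing target of alias"}]},
        {"message": "snapshot", "status": "success"},
    ],
}

print(list(failed_tasks(log)))
# → [('clean-vm', ['VHD check error', 'missing target of alias'])]
```

      Run against the full log, this would surface the two failing clean-vm tasks and their "VHD check error" / "missing target of alias" warnings without scrolling through the successful transfers.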
    • Tristis Oris

      NBD used even when disabled

      Backup · Solved
      0 Votes · 8 Posts · 222 Views
      Tristis Oris
      That makes sense now. So no issue here.
    • Tristis Oris

      SR usage on multiple pools

      Management · Solved
      0 Votes · 12 Posts · 326 Views
      olivierlambert
      Yes, it's a "definition" question/problem: an SR is the exact place where you store your virtual disk images. On NFS, it's a folder named after the UUID of the SR. As Dan said, if you connect to the same NFS folder, it will create a new folder inside for each new SR. So even if it's the same physical device/NFS share, there are many SRs on it. For a block-based SR, it's different: if you pass a LUN (in iSCSI, for example), it will attach it and warn you: either you re-attach the previously existing SR (UUID etc.) or you erase it completely to start from scratch. There are no "folders" on a block device like a LUN, so it's different.
    • Tristis Oris

      Pool metadata 8.2 to 8.3

      Backup · Solved
      0 Votes · 3 Posts · 156 Views
      Tristis Oris
      @olivierlambert thanks, just to be sure.
    • Tristis Oris

      Empty select at Backup - Sequence

      Management · Solved
      0 Votes · 4 Posts · 196 Views
      Tristis Oris
      @pdonias @julien-f yep fixed now, thanks.
    • Tristis Oris

      GPU passthrough not available at 8.2

      Management
      0 Votes · 5 Posts · 197 Views
      olivierlambert
      We can make the message clearer, but we advertised multiple times that adding the PCI passthrough UI required XAPI changes only available in 8.3.
    • Tristis Oris

      Rolling succeed but not clear log

      Management
      0 Votes · 2 Posts · 77 Views
      olivierlambert
      The task is green, meaning it's done.
    • Tristis Oris

      ISCSI mount - SR_BACKEND_FAILURE_432

      Management · Solved
      0 Votes · 14 Posts · 632 Views
      J
      @m-mirzayev said in ISCSI mount - SR_BACKEND_FAILURE_432: "@john-c Would you mind posting your custom.conf so I have a reference?"

      @m-mirzayev I don't use multipath personally, but I managed to get Microsoft Copilot to put what you gave above into a valid structure. I also remember people having trouble with multipath in the past, so Vates employees implemented this /etc/multipath/conf.d/custom.conf to fix those issues:

      # /etc/multipath/conf.d/custom.conf
      defaults {
          user_friendly_names yes
      }
      multipaths {
          multipath {
              wwid "your_device_wwid_here"
              alias "truenas_iscsi"
              path_selector "round-robin 0"
              path_grouping_policy multibus
              path_checker tur
              prio const
              failback immediate
          }
      }
      devices {
          device {
              vendor "TrueNAS"
              product "iSCSI"
              path_selector "round-robin 0"
              path_grouping_policy multibus
              hardware_handler "0"
              prio const
              failback immediate
          }
      }

      Please replace "your_device_wwid_here" with the WWID of the device on your network.
    • Tristis Oris

      XO not update network link info

      Management
      0 Votes · 2 Posts · 110 Views
      K
      @Tristis-Oris That's a "known issue": https://xcp-ng.org/forum/topic/7827/xoa-does-not-refresh-quickly-status-of-nics XCP-ng Center does show the update.