Subcategories

  • VMs, hosts, pools, networks and all other usual management tasks.

    457 Topics
    3k Posts
    @acebmxer Yes, 22 or 24 should work correctly. Some early versions of 24 had issues, but the current releases don't. I'm running 24 and it is working.
  • ACLs, Self-service, Cloud-init, Load balancing...

    101 Topics
    840 Posts
    olivierlambert
    That's a very good question, let me ask internally. Ping @pdonias, he might know the answer.
  • All XO backup features: full and incremental, replication, mirrors...

    479 Topics
    5k Posts
    @florent I updated XO to master 60ba5 and I'm still having the issue. I don't see any errors in the backup log; it just says "isFull": true for the ones stuck on full backups. The XO log only reports:

    Mar 24 16:26:47 xo1 xo-server[21199]: 2026-03-24T20:26:47.315Z xo:backups:worker INFO starting backup
    Mar 24 16:26:52 xo1 xo-server[21199]: 2026-03-24T20:26:52.359Z xo:xapi:xapi-disks INFO export through vhd
    Mar 24 16:26:54 xo1 xo-server[21199]: 2026-03-24T20:26:54.410Z xo:xapi:xapi-disks INFO export through vhd
    Mar 24 16:29:22 xo1 xo-server[21199]: 2026-03-24T20:29:22.250Z xo:backups:worker INFO backup has ended

    Note that it reports "export through vhd" twice, and it does two full backups and one delta.
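To compare runs, the relevant worker lines can be pulled out and counted. A minimal self-contained sketch using the log lines quoted above; the grep pattern and file path are my own (on installs where xo-server runs under systemd, `journalctl -u xo-server` would be the live source, but the unit name varies by install):

```shell
# Sample worker lines copied from the post above into a scratch file.
cat > /tmp/xo-sample.log <<'EOF'
Mar 24 16:26:47 xo1 xo-server[21199]: 2026-03-24T20:26:47.315Z xo:backups:worker INFO starting backup
Mar 24 16:26:52 xo1 xo-server[21199]: 2026-03-24T20:26:52.359Z xo:xapi:xapi-disks INFO export through vhd
Mar 24 16:26:54 xo1 xo-server[21199]: 2026-03-24T20:26:54.410Z xo:xapi:xapi-disks INFO export through vhd
Mar 24 16:29:22 xo1 xo-server[21199]: 2026-03-24T20:29:22.250Z xo:backups:worker INFO backup has ended
EOF

# Count the full VHD exports within the run; the post reports two of them.
grep -c 'export through vhd' /tmp/xo-sample.log
```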
  • Everything related to Xen Orchestra's REST API

    83 Topics
    624 Posts
    @gduperrey Worked swell, thanks
  • Terraform, Packer or any tool to do IaC

    49 Topics
    463 Posts
    dalem
    Version 1.4.0 is released: https://codeberg.org/NiXOA/system/releases/tag/v1.4.0 It includes significant changes and improvements: a dedicated getting-started section, migration to Valkey, only needing to clone System, and helper scripts. The xen-orchestra-ce nixpkg now references the libvhdi nixpkg, and the Core flake now references and pulls from the xen-orchestra-ce repo as an overlay. System (the user input flake) now uses the Core repo as an overlay, reducing the need to clone both locally and allowing System to pull new updates and releases from Core, XO, and libvhdi as needed. The next goals are: make an xsconsole-like TUI; automate package updates for libvhdi and xen-orchestra-ce using CI/CD pipelines; submit libvhdi and xen-orchestra-ce as official nixpkgs.
  • VMware migration to XCP-ng no longer works

    Solved
    0 Votes
    18 Posts
    3k Views
    @olivierlambert Sorry, I will say it clearly next time.
  • 2 Votes
    2 Posts
    494 Views
    olivierlambert
    Pinging @florent about this
  • NBD Connection

    0 Votes
    3 Posts
    641 Views
    @Andrew, Thank you for the reply.
  • Live Migration in XO Fails

    0 Votes
    6 Posts
    1k Views
    @omatsei I found the following error on the source host, if it helps. I rebooted it and restarted iscsid on both the source and destination hosts, just to make sure nothing was pending or hung.

    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] SR.scan D:9f4f3c05cc88 failed with exception Storage_error ([S(Redirect);[S(192.168.1.201)]])
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] Raised Storage_error ([S(Redirect);[S(192.168.1.201)]])
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] 1/1 xapi Raised at file (Thread 2507 has no backtrace table. Was with_backtraces called?, line 0
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace]
    May 28 10:15:32 xcp09 xapi: [error||2507 ||storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 436, characters 51-58)
    May 28 10:15:32 xcp09 xapi: [error||2506 HTTP 127.0.0.1->:::80|Querying services D:6b15aa4c5bcd|storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 431, characters 49-56)
    May 28 10:15:32 xcp09 xapi: [error||2506 HTTP 127.0.0.1->:::80|Querying services D:6b15aa4c5bcd|storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 436, characters 51-58)

    Note that 192.168.1.201 is the pool master. I ended up rebooting the pool master after manually migrating VMs off it, and that seems to have fixed the issue. No idea why, but whatever.
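The recovery steps described above, written out as commands. This is a hedged sketch, not the thread's verified fix: the UUIDs are placeholders, and it assumes a standard XCP-ng host where iscsid runs under systemd:

```shell
# Hedged sketch of the steps described in the post; all UUIDs are placeholders.

# Restart the iSCSI daemon on both the source and destination hosts
systemctl restart iscsid

# Re-scan the SR that raised Storage_error (the Redirect points at the pool master)
xe sr-scan uuid=<sr-uuid>

# Before rebooting the pool master, list the VMs resident on it so they can be
# migrated off first
xe vm-list resident-on=<master-host-uuid> is-control-domain=false params=name-label
```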
  • Template visibility on multiple pools

    0 Votes
    5 Posts
    855 Views
    olivierlambert
    https://github.com/vatesfr/xen-orchestra/issues/7690 (olivierlambert opened this issue in vatesfr/xen-orchestra: "Custom template replication between pools", #7690, open)
  • Confused re: pricing (XOA vs. Vates Essentials)

    0 Votes
    23 Posts
    12k Views
    @msimanyi said in Confused re: pricing (XOA vs. Vates Essentials), quoting @john-c's point that "It does NOT require three licensed servers (on Essential or Essential+); it allows up to (max) 3 hosts so you have room to grow":

    Yes, on those packages. I was specifically talking about the Enterprise version for 24/7 support, which requires licensing at least three hosts. (I'm sure I could skip implementing the third host, but I'd still have to pay the $1,800 / year basic host license fee for it.) Again, thank you for all your effort discussing this.

    @msimanyi The Enterprise plan isn't just per year but also per host, so on that plan you would be paying per host and per year. With the three-host minimum you would be paying $4,590 per year for 3 hosts (on the 5-year support term), or $5,400 per year for 3 hosts on a 1-year term. The per-host, per-year price applies to each individual host on the higher plans, but the 3-host minimum is just an eligibility threshold: once you have 3 (or more) hosts, you can choose to be on those plans. The Essential+ plan is around that $1,800 / year figure ($1,700 with the 5-year support term).

    Anyway, I left a feedback note for @olivierlambert to see if he can add options on the SMB plans to bring response times closer to the 24/7 ones from the Enterprise level. It may also be worth talking to Vates directly to see if an exception can be granted in your case so you can get 24/7 support.
  • This topic is deleted!

    0 Votes
    2 Posts
    44 Views
    No one has replied
  • Failed to migrate vdi

    0 Votes
    5 Posts
    1k Views
    @Danp I am using warm migration based on past experience; we have already migrated more than four hundred virtual machines without any incident. The function of migrating only the VDI within the same pool is rarely used in our environment. Regarding SMlog, Xen Orchestra reported the error at 22:58:08 on May 15th; below is the SMlog data around that time:

    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (1, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'set', '--debug', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0', '-f', 'hidden', '-v', '1']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Deleting vdi: 96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering deleteVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] entering updateVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering getMetadataToWrite
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (1, 0) + (-1, 0) => (0, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (0, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: closed /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: unlinking lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Setting virtual_allocation of SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183 to 7884496699392
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Kicking GC
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] === SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183: gc ===
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22432] Will finish as PID [22433]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: closed /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] New PID [22432]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] LVMCache created for VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr, acquired: False (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache created for VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/vgs', '--readonly', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: will initialize now
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: refreshing
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Entering _checkMetadataVolume
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] vdi_snapshot {'sr_uuid': '87edd82a-f612-d1f9-bfcd-bc2cae7cff98', 'subtask_of': 'DummyRef:|f06fb873-be2b-46ff-8910-e0489003b63c|VDI.snapshot', 'vdi_ref': 'OpaqueRef:f60beba2-8202-4a43-be62-5935e240be01', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e0d805fb-d325-4af7-af40-a4249b5ddec4', 'host_ref': 'OpaqueRef:f2af7892-f475-4c61-a289-3cb7bffc07c4', 'session_ref': 'OpaqueRef:5e183ddc-f65c-4dff-8322-0e909c489425', 'device_config': {'SCSIid': '360002ac00000000000000096000205b4', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:85b38e21-10be-4ea3-8578-65686851a0e1', 'driver_params': {'type': 'internal'}, 'vdi_uuid': 'e0d805fb-d325-4af7-af40-a4249b5ddec4'}
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Pause request for e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Calling tap-pause on host OpaqueRef:e352bec8-d91e-4cfe-af1b-8f53876c4e56
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVHDVDI._snapshot for e0d805fb-d325-4af7-af40-a4249b5ddec4 (type 3)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 set => (2, 0b)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4', '-l', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98', '-a']
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d set => (2, 0b)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 set => (1, 0b)
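For context, migrating only a VDI to another SR within the same pool corresponds to this xe call; a hedged sketch with placeholder UUIDs (whether XO issues exactly this call depends on the VDI's state):

```shell
# Hedged sketch: live intra-pool VDI migration via xe; UUIDs are placeholders.
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>

# Pull the SMlog window around the failure time on the host that ran the job
grep '22:58:0' /var/log/SMlog
```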
  • can't get XOA to install

    0 Votes
    6 Posts
    2k Views
    olivierlambert
    That's why I mentioned a peering issue. We have 10G here and no bandwidth issue either.
  • Help with Windows Cloudbase-init Setup

    0 Votes
    5 Posts
    1k Views
    @olivierlambert Sure, happy to do so.
  • Does XOA pause scheduled backups during XOA upgrades?

    0 Votes
    7 Posts
    1k Views
    julien-f
    XO does not pause backups during upgrades/restarts; all currently running backups will be interrupted, as @olivierlambert said. An interrupted backup is not a big problem by itself: nothing will be broken, and the next run will run properly. Backups only run at the time they are scheduled; if XO is offline at that time, it will not automatically run them when restarted, it will wait for the next scheduled run.
  • Roadmap XO6

    0 Votes
    11 Posts
    5k Views
    olivierlambert
    Our doc is up to date here: https://docs.xcp-ng.org/project/ecosystem/#-vm-backup
  • XO Sources on Host

    Moved
    0 Votes
    5 Posts
    1k Views
    CTG
    @olivierlambert Merci Oliver!
  • XCP-ng host status enabled but can't access it.

    Solved
    0 Votes
    15 Posts
    3k Views
    olivierlambert
    Ah indeed, it's written in bold in the documentation, I forgot to ask you about this ^^ Enjoy XO!
  • XOA Proxy and Console Access

    Solved
    0 Votes
    15 Posts
    3k Views
    olivierlambert
    Thanks everyone for the report!
  • Console Zoom Percentage Slightly Cutoff

    0 Votes
    1 Post
    239 Views
    No one has replied
  • Preventing "new network" detection on different XCP-NG hosts

    0 Votes
    14 Posts
    4k Views
    nikade
    @fohdeesha said in Preventing "new network" detection on different XCP-NG hosts:

    @Zevgeny As you suspected, this is caused by the VMs booting with new, fresh MAC addresses they have never seen before, which from their point of view means an entirely new NIC. The clean solution here would be an option or toggle inside XOA backup jobs that allows MAC addresses to be preserved, so the replicated DR VMs on the backup site keep the same MAC addresses. This could be something you file as a feature request on our GitHub. Note, however, that pretty much any VM copy/replicate action will trigger Xen to generate a new MAC address (for safety; duplicated MACs are usually bad). This means that even if the MAC is preserved on the DR VMs, when your admin copies the DR VMs and boots them, the new copies will have newly generated MACs. I suppose it would be possible to add a "preserve MAC" checkbox for copy operations too, but I'm not sure whether XAPI currently exposes such functionality.

    Yes, keeping the MAC address would really solve this. We actually edit the MAC address manually on the copied VMs in case we need to start the DR copy up for testing.
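The manual MAC edit mentioned above can be scripted with xe. A hedged sketch, not an XO feature: the UUIDs, device number, and MAC are placeholders, and the VIF should be recreated while the VM is halted:

```shell
# Hedged sketch: give a copied/DR VM its original MAC back. All values are
# placeholders; run with the VM halted.

# Inspect the copy's current VIF (note its network and device number)
xe vif-list vm-uuid=<copy-vm-uuid> params=uuid,network-uuid,device,MAC

# Recreate the VIF with the original VM's MAC
xe vif-destroy uuid=<old-vif-uuid>
xe vif-create vm-uuid=<copy-vm-uuid> network-uuid=<network-uuid> device=0 mac=<original-mac>
```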
  • VMware migration tool not bringing disks

    Solved
    0 Votes
    7 Posts
    1k Views
    ItMeCorban
    Yep the snapshots solved it. And I was even able to import the VMs while they're still running so that was a bonus. Thanks for your help @Danp
  • Import from VMWare fails Error: 404 Not Found

    0 Votes
    4 Posts
    1k Views
    @florent will do
  • Updated XOA with kernel >5.3 to support the nconnect NFS option

    1 Vote
    34 Posts
    9k Views
    @manilx Two more, and yes, it does seem confusing, so just to round it up: the same VM as above, delta (full) backup using NBD, without and with "nconnect=6" in the remote setting:

    [image: 1714397312022-screenshot-2024-04-29-at-14.27.19.png]

    with nconnect=6:

    [image: 1714397346566-screenshot-2024-04-29-at-14.28.57.png]

    nconnect=6 doesn't seem to do a lot.
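For anyone reproducing this, nconnect can also be checked outside XO by mounting the remote by hand. A hedged example: the server, export, and mountpoint are placeholders, and nconnect needs an NFS client kernel of 5.3 or newer:

```shell
# Hedged example; server/export/mountpoint are placeholders.
mount -t nfs -o vers=4.1,nconnect=6 nas.example.com:/export/backups /mnt/xo-remote

# Confirm the kernel accepted the option
grep nconnect /proc/mounts
```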