Subcategories

  • VMs, hosts, pools, networks and all other usual management tasks.

    380 Topics
    3k Posts
    olivierlambert
    It's indeed pretty long. I would maybe use a local SR as suspend SR to check if it's very different, that could help to pinpoint the issue.
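    A minimal sketch of that test, assuming the xe CLI on the host (the UUIDs below are placeholders, not values from this thread):

    # Find a local SR and the host UUID:
    xe sr-list name-label="Local storage" params=uuid
    xe host-list params=uuid,name-label
    # Point the host's suspend image SR at the local SR, then retry the suspend:
    xe host-param-set uuid=<host-uuid> suspend-image-sr-uuid=<local-sr-uuid>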
  • ACLs, Self-service, Cloud-init, Load balancing...

    93 Topics
    784 Posts
    P
    @tmk hi! Many thanks, your modified python file did the trick; my static IP address is now working as intended. I can confirm this is working on Windows Server 2025 as well.
  • All XO backup features: full and incremental, replication, mirrors...

    389 Topics
    4k Posts
    tjkreidl
    @nikade Yeah, that is a far from optimal setup. It will force the data to flow through the management interface before being routed to the storage NICs. Running iostat and xentop should show the load. A better configuration IMO would be putting the storage NICs on the switch and using a separate network or VLAN for the storage I/O traffic. Storage I/O optimization takes some time and effort. The type, number, and RAID configuration of your storage devices, as well as the speed of your host CPUs, the size and type of memory, and the configuration of your VMs (whether they are NUMA-aware, for example) all play a role.
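    As an illustrative sketch of that separation (the UUIDs and VLAN number are placeholders; xe and iostat are the stock XCP-ng and sysstat tools):

    # Create a dedicated storage network on its own VLAN:
    xe network-create name-label="storage-vlan"
    xe vlan-create pif-uuid=<physical-pif-uuid> network-uuid=<storage-network-uuid> vlan=50
    # Watch per-device load while testing (xentop shows per-domain CPU/IO):
    iostat -xm 5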
  • Everything related to Xen Orchestra's REST API

    68 Topics
    529 Posts
    olivierlambert
    @lsouai-vates we should try to reproduce it and, if we can, fix it ASAP. Thanks!
  • Terraform, Packer or any tool to do IaC

    35 Topics
    350 Posts
    J
    @manilx I have proposed an MCP server for Vates VMS to the IaC team at Vates. It could be used by GitHub Copilot or similar tools when doing IaC.
  • How to determine if a VM is a fast clone in XO/A?

    0 Votes
    11 Posts
    2k Views
    olivierlambert
    Hi, I answered in the GH issue. Please don't duplicate requests in multiple places.
  • Problem with importing large VMDK disks

    Solved
    0 Votes
    12 Posts
    2k Views
    A
    To conclude this: @florent and I discussed the issue in a private conversation. I provided the vmdk file that was causing the problem. It turned out that the problem was caused by the vmdk thin file from VMware Workstation Pro, which has a slightly different structure than the vmdk files on ESXi hypervisors. Currently XO imports these files fine and the problem can be considered solved. Thank you very much @florent for your cooperation and help.
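    For anyone still hitting this on an older XO build, one possible workaround (not the fix @florent shipped, just a sketch assuming qemu-img is available; file names are placeholders) is to normalize the Workstation-style thin VMDK before import:

    # Inspect the subformat of the problematic disk:
    qemu-img info disk.vmdk
    # Rewrite it as an ESXi-style streamOptimized VMDK:
    qemu-img convert -f vmdk -O vmdk -o subformat=streamOptimized disk.vmdk disk-stream.vmdk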
  • XO - enable PCI devices for pass-through - should it work already or not yet?

    Solved
    0 Votes
    15 Posts
    2k Views
    A
    That's exactly it, thank you!
  • Update release channels

    Solved
    0 Votes
    3 Posts
    363 Views
    F
    @olivierlambert Perfect, thank you
  • VMware migration to XCP-ng no longer works

    Solved
    0 Votes
    18 Posts
    2k Views
    K
    @olivierlambert Sorry, I will explain it more clearly next time.
  • 2 Votes
    2 Posts
    291 Views
    olivierlambert
    Pinging @florent about this
  • NBD Connection

    0 Votes
    3 Posts
    423 Views
    H
    @Andrew, Thank you for the reply.
  • Live Migration in XO Fails

    0 Votes
    6 Posts
    641 Views
    O
    @omatsei I found the following error on the source host, if it helps. I rebooted it and restarted iscsid on both the source and destination hosts, just to make sure nothing was pending or hung.

    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] SR.scan D:9f4f3c05cc88 failed with exception Storage_error ([S(Redirect);[S(192.168.1.201)]])
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] Raised Storage_error ([S(Redirect);[S(192.168.1.201)]])
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace] 1/1 xapi Raised at file (Thread 2507 has no backtrace table. Was with_backtraces called?, line 0
    May 28 10:15:32 xcp09 xapi: [error||2507 ||backtrace]
    May 28 10:15:32 xcp09 xapi: [error||2507 ||storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 436, characters 51-58)
    May 28 10:15:32 xcp09 xapi: [error||2506 HTTP 127.0.0.1->:::80|Querying services D:6b15aa4c5bcd|storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 431, characters 49-56)
    May 28 10:15:32 xcp09 xapi: [error||2506 HTTP 127.0.0.1->:::80|Querying services D:6b15aa4c5bcd|storage_interface] Storage_error ([S(Redirect);[S(192.168.1.201)]]) (File "storage/storage_interface.ml", line 436, characters 51-58)

    Note that 192.168.1.201 is the pool master. I ended up rebooting the pool master after manually migrating VMs off it, and that seems to have fixed the issue. No idea why, but whatever.
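    For anyone else debugging this, a minimal sketch of the checks that narrow it down (the SR UUID is a placeholder; the Redirect error just means the storage call must be handled by the pool master, here 192.168.1.201):

    # Run from the pool master; confirm the SR scans cleanly:
    xe sr-scan uuid=<sr-uuid>
    # Look for further Redirect/Storage_error entries in xapi's log:
    grep "Storage_error" /var/log/xensource.log | tail
    # Less drastic than a full host reboot: restart the toolstack on the master.
    xe-toolstack-restart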
  • Template visibility on multiple pools

    0 Votes
    5 Posts
    535 Views
    olivierlambert
    https://github.com/vatesfr/xen-orchestra/issues/7690 (olivierlambert created this issue in vatesfr/xen-orchestra: "Custom template replication between pools" #7690, open)
  • Confused re: pricing (XOA vs. Vates Essentials)

    0 Votes
    23 Posts
    8k Views
    J
    @msimanyi said in Confused re: pricing (XOA vs. Vates Essentials):

    @john-c said in Confused re: pricing (XOA vs. Vates Essentials): It does NOT require three licensed servers (on Essential or Essential+); it allows up to 3 hosts so you have room to grow.

    Yes, on those packages. I was specifically talking about the Enterprise version for 24/7 support, which requires licensing at least three hosts. (I'm sure I could skip deploying the third host, but I'd still have to pay the $1,800/year basic host license fee for it.) Again, thank you for all your effort discussing this.

    @msimanyi The Enterprise plan is priced per host and per year, so on that plan you pay per host, per year. With the three-host minimum, that works out to $4,590 per year for 3 hosts on the 5-year support term, or $5,400 per year for 3 hosts on a 1-year term. The per-host, per-year price applies to each individual host on the higher plans; the 3-host minimum is only an eligibility threshold, so once you have 3 or more hosts you can choose those plans. The Essential+ plan is around that $1,800/year mark (about $1,700 with the 5-year support term). Anyway, I left a feedback note for @olivierlambert to see if he can add options on the SMB plans to bring response times closer to the 24/7 Enterprise level. It may also be worth talking to Vates directly to see whether an exception could be granted in your case so you can get 24/7 support.
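    To make the quoted totals explicit, a back-of-the-envelope check (assuming the thread's figures of $1,800 per host per year on a 1-year term, and the $4,590 three-host total on the 5-year term):

    hosts=3
    echo $((hosts * 1800))   # 5400: the quoted 1-year total for 3 hosts
    echo $((4590 / hosts))   # 1530: the implied per-host yearly price on the 5-year term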
  • This topic is deleted!

    0 Votes
    2 Posts
    44 Views
    No one has replied
  • Failed to migrate vdi

    0 Votes
    5 Posts
    698 Views
    L
    @Danp I am using warm migration based on past experience: we have already migrated more than four hundred virtual machines this way without incident. The function of migrating only the VDI within the same pool is rarely used in our environment. Regarding SMlog, Xen Orchestra reported the error at 22:58:08 on May 15th; below is the SMlog data from around that time:

    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (1, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'set', '--debug', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0', '-f', 'hidden', '-v', '1']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Deleting vdi: 96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering deleteVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] entering updateVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering getMetadataToWrite
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (1, 0) + (-1, 0) => (0, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (0, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: closed /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: unlinking lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Setting virtual_allocation of SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183 to 7884496699392
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Kicking GC
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] === SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183: gc ===
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22432] Will finish as PID [22433]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: closed /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] New PID [22432]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] LVMCache created for VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr, acquired: False (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache created for VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/vgs', '--readonly', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: will initialize now
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: refreshing
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Entering _checkMetadataVolume
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] vdi_snapshot {'sr_uuid': '87edd82a-f612-d1f9-bfcd-bc2cae7cff98', 'subtask_of': 'DummyRef:|f06fb873-be2b-46ff-8910-e0489003b63c|VDI.snapshot', 'vdi_ref': 'OpaqueRef:f60beba2-8202-4a43-be62-5935e240be01', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e0d805fb-d325-4af7-af40-a4249b5ddec4', 'host_ref': 'OpaqueRef:f2af7892-f475-4c61-a289-3cb7bffc07c4', 'session_ref': 'OpaqueRef:5e183ddc-f65c-4dff-8322-0e909c489425', 'device_config': {'SCSIid': '360002ac00000000000000096000205b4', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:85b38e21-10be-4ea3-8578-65686851a0e1', 'driver_params': {'type': 'internal'}, 'vdi_uuid': 'e0d805fb-d325-4af7-af40-a4249b5ddec4'}
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Pause request for e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Calling tap-pause on host OpaqueRef:e352bec8-d91e-4cfe-af1b-8f53876c4e56
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVHDVDI._snapshot for e0d805fb-d325-4af7-af40-a4249b5ddec4 (type 3)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 set => (2, 0b)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4', '-l', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98', '-a']
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d set => (2, 0b)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 set => (1, 0b)
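    To pull a window like this from a host yourself, a small sketch (the log path is the standard XCP-ng SMlog; the timestamp and VDI UUID are taken from the post above):

    # All SM activity in the failing minute:
    grep "May 15 22:58" /var/log/SMlog
    # Context around the VDI being snapshotted:
    grep -C 3 "e0d805fb-d325-4af7-af40-a4249b5ddec4" /var/log/SMlog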
  • can't get XOA to install

    0 Votes
    6 Posts
    2k Views
    olivierlambert
    That's why I mentioned a peering issue. We have 10G here and no bandwidth issues either.
  • Help with Windows Cloudbase-init Setup

    0 Votes
    5 Posts
    917 Views
    M
    @olivierlambert Sure, happy to do so.
  • Does XOA pause scheduled backups during XOA upgrades?

    0 Votes
    7 Posts
    609 Views
    julien-f
    XO does not pause backups during upgrades/restarts; as @olivierlambert said, all currently running backups will be interrupted. An interrupted backup is not a big problem by itself: nothing will be broken, and the next run will complete properly. Backups are only run at their scheduled time; if XO is offline at that time, it will not automatically run them when restarted, but will wait for the next scheduled run.
  • Roadmap XO6

    0 Votes
    11 Posts
    4k Views
    olivierlambert
    Our doc is up to date here: https://docs.xcp-ng.org/project/ecosystem/#-vm-backup
  • XO Sources on Host

    Moved
    0 Votes
    5 Posts
    795 Views
    CTG
    @olivierlambert Merci Oliver!
  • XCP-ng host status enabled but can't access it.

    Solved
    0 Votes
    15 Posts
    1k Views
    olivierlambert
    Ah indeed, it's written in bold in the documentation; I forgot to ask you about this ^^ Enjoy XO!
  • XOA Proxy and Console Access

    Solved
    0 Votes
    15 Posts
    2k Views
    olivierlambert
    Thanks everyone for the report!
  • Console Zoom Percentage Slightly Cutoff

    0 Votes
    1 Post
    138 Views
    No one has replied