Subcategories

  • VMs, hosts, pools, networks and all other usual management tasks.

    363 Topics
    3k Posts
    dthenot
    @Razor_648 While I was writing my previous message, I was reminded that there are also issues with LVHDoISCSI SRs and CBT: you should disable CBT on your backup job and on all VDIs on the SR. It might help with the issue.
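A rough sketch of that cleanup from the host CLI, assuming a recent XCP-ng/XenServer release where `xe vdi-disable-cbt` is available (the SR and VDI UUIDs below are placeholders):

```sh
# List VDIs on the affected SR that still have CBT enabled (UUIDs are placeholders)
xe vdi-list sr-uuid=<sr-uuid> cbt-enabled=true params=uuid

# Disable CBT on each affected VDI
xe vdi-disable-cbt uuid=<vdi-uuid>
```

Remember to also disable the CBT-related option on the backup job itself in Xen Orchestra, as suggested above.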
  • ACLs, Self-service, Cloud-init, Load balancing...

    90 Topics
    766 Posts
    D
    @Danp said in Advanced VM settings Logs: "Hi Dustin, yes, these types of actions are recorded in the Audit log (Settings > Audit) if you have access to this feature and have enabled it. - Dan" Perfect, thanks!
  • All XO backup features: full and incremental, replication, mirrors...

    370 Topics
    4k Posts
    K
    @olivierlambert Understood; however, I'm failing to see what part of my posting was irrelevant to the topic at hand. I was merely sharing what I'd experienced as well and providing context. I thought this was the kind of feedback expected of community members who want to see this project flourish?
  • Everything related to Xen Orchestra's REST API

    66 Topics
    502 Posts
    lsouai-vates
    @StephenAOINS This endpoint is not currently present in our REST API swagger, but we do plan to add it to the list of endpoints. We are currently finalizing the migration of existing endpoints; the next step will be adding new ones. We will keep you informed when it is available. Feel free to come back to us if you want to learn more, and follow our blog posts. Have a good day!
  • Terraform, Packer or any tool to do IaC

    34 Topics
    319 Posts
    olivierlambert
    No, it's fine, as long as the issue contains all the relevant details.
  • Failed to migrate VDI

    0 Votes
    5 Posts
    587 Views
    L
    @Danp I am using warm migration based on past experience: we have already migrated more than four hundred virtual machines without any incident. The function of migrating only the VDI within the same pool is rarely used in our environment. Regarding SMlog, Xen Orchestra reported the error at 22:58:08 on May 15th; below is the SMlog data around that time:

    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (1, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'set', '--debug', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0', '-f', 'hidden', '-v', '1']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Deleting vdi: 96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering deleteVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] entering updateVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering getMetadataToWrite
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (1, 0) + (-1, 0) => (0, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (0, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: closed /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: unlinking lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Setting virtual_allocation of SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183 to 7884496699392
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Kicking GC
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] === SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183: gc ===
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22432] Will finish as PID [22433]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: closed /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] New PID [22432]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] LVMCache created for VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr, acquired: False (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache created for VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/vgs', '--readonly', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: will initialize now
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: refreshing
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Entering _checkMetadataVolume
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] vdi_snapshot {'sr_uuid': '87edd82a-f612-d1f9-bfcd-bc2cae7cff98', 'subtask_of': 'DummyRef:|f06fb873-be2b-46ff-8910-e0489003b63c|VDI.snapshot', 'vdi_ref': 'OpaqueRef:f60beba2-8202-4a43-be62-5935e240be01', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e0d805fb-d325-4af7-af40-a4249b5ddec4', 'host_ref': 'OpaqueRef:f2af7892-f475-4c61-a289-3cb7bffc07c4', 'session_ref': 'OpaqueRef:5e183ddc-f65c-4dff-8322-0e909c489425', 'device_config': {'SCSIid': '360002ac00000000000000096000205b4', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:85b38e21-10be-4ea3-8578-65686851a0e1', 'driver_params': {'type': 'internal'}, 'vdi_uuid': 'e0d805fb-d325-4af7-af40-a4249b5ddec4'}
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Pause request for e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Calling tap-pause on host OpaqueRef:e352bec8-d91e-4cfe-af1b-8f53876c4e56
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVHDVDI._snapshot for e0d805fb-d325-4af7-af40-a4249b5ddec4 (type 3)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 set => (2, 0b)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4', '-l', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98', '-a']
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d set => (2, 0b)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 set => (1, 0b)
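To extract a window like this from SMlog yourself, something along these lines on the host can help (the timestamp pattern matches this example; rotated log names vary by setup):

```sh
# Show lines around the failure time in the current SMlog
grep -n '22:58:0' /var/log/SMlog | head -50

# Rotated copies may be compressed; search those too
zgrep '22:58:0' /var/log/SMlog.*.gz
```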
  • can't get XOA to install

    0 Votes
    6 Posts
    2k Views
    olivierlambert
    That's why I mentioned a peering issue. We have 10G here and no bandwidth issues either.
  • Help with Windows Cloudbase-init Setup

    0 Votes
    5 Posts
    767 Views
    M
    @olivierlambert Sure, happy to do so.
  • Does XOA pause scheduled backups during XOA upgrades?

    0 Votes
    7 Posts
    419 Views
    julien-f
    XO does not pause backups during upgrades/restarts; all currently running backups will be interrupted, as @olivierlambert said. An interrupted backup is not a big problem by itself: nothing will be broken, and the next run will proceed properly. Backups are only run at the time they are scheduled; if XO is offline at that time, it will not automatically run them when restarted, it will wait for the next scheduled run.
  • Roadmap XO6

    0 Votes
    11 Posts
    3k Views
    olivierlambert
    Our doc is up to date here: https://docs.xcp-ng.org/project/ecosystem/#-vm-backup
  • XO Sources on Host

    Moved
    0 Votes
    5 Posts
    632 Views
    CTG
    @olivierlambert Thanks, Olivier!
  • XCP-ng host status enabled but can't access it.

    Solved
    0 Votes
    15 Posts
    903 Views
    olivierlambert
    Ah indeed, it's written in bold in the documentation; I forgot to ask you about this ^^ Enjoy XO!
  • XOA Proxy and Console Access

    Solved
    0 Votes
    15 Posts
    1k Views
    olivierlambert
    Thanks everyone for the report!
  • Console Zoom Percentage Slightly Cutoff

    0 Votes
    1 Post
    116 Views
    No one has replied
  • Preventing "new network" detection on different XCP-NG hosts

    0 Votes
    14 Posts
    2k Views
    nikade
    @fohdeesha said in Preventing "new network" detection on different XCP-NG hosts: @Zevgeny As you suspected, this is caused by the VMs booting with new, fresh MAC addresses they've never seen before, which from their point of view means an entirely new NIC. The "clean" solution here would be an option or toggle inside XOA backup jobs that allows MAC addresses to be preserved - this way the replicated DR VMs on the backup site still have the same MAC addresses. This could be something you could file as a feature request on our GitHub. However, note that pretty much any VM copy/replicate action will trigger Xen to generate a new MAC address (for safety, since duplicated MACs are usually bad). This means that even if the MAC is preserved on the DR VMs, when your admin copies the DR VMs and boots them, the new copies will have newly generated MACs. I suppose it would be possible to add a "preserve MAC" checkbox for copy operations too, but I'm not sure if XAPI currently exposes such functionality. Yeah, keeping the MAC address would really solve this. We actually edit the MAC address manually on the copied VMs in case we need to start the DR copy up for testing.
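The manual MAC edit described above can be done from the host CLI; a sketch with `xe`, where all UUIDs and the MAC value are placeholders (XO's VM network tab can do the same edit graphically):

```sh
# Note the MAC of the original VM's interface
xe vif-list vm-uuid=<source-vm-uuid> params=device,MAC

# On the copy: remove the auto-generated VIF, then recreate it with the original MAC
xe vif-destroy uuid=<copy-vif-uuid>
xe vif-create vm-uuid=<copy-vm-uuid> network-uuid=<network-uuid> device=0 mac=aa:bb:cc:dd:ee:ff
```

Only do this when the original VM is guaranteed not to run on the same network, since duplicate MACs cause real problems.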
  • FreePBX import from ESXi won't boot correctly

    0 Votes
    9 Posts
    653 Views
    J
    Thanks everyone for the replies, I really appreciate this community! So it's not a disaster: the server that runs the FreePBX VM is still online. I merged the VMs from the other servers just fine onto new XCP-ng servers; anyway, I actually still have the original VMs for all of them ready to spin up in ESXi. So far every single distro (except this custom Sangoma CentOS 7) has been good to go, nothing else needed. Since the ESXi servers are no longer connected to a vCenter appliance, I think what I'll do is make a copy of the hard disk, stop the original, boot up the copy, and see if I can't play around with it using some of the suggestions here. If I do manage to get it working, I'll post a "how I made it work in this instance" thread for anyone else who may encounter it. In the meantime all suggestions are welcome; I'm not very versed in boot configurations in Linux. I'm more of a Windows systems technician: I can operate *nix just fine and do whatever I need to if the installation is working, but troubleshooting boot issues has never been something I learned, so crash course engaged.
  • VMware migration tool not bringing disks

    Solved
    0 Votes
    7 Posts
    586 Views
    ItMeCorban
    Yep, the snapshots solved it. And I was even able to import the VMs while they're still running, so that was a bonus. Thanks for your help @Danp
  • Import from VMWare fails Error: 404 Not Found

    0 Votes
    4 Posts
    517 Views
    S
    @florent will do
  • Updated XOA with kernel >5.3 to support nconnect NFS option

    1 Vote
    34 Posts
    5k Views
    M
    @manilx Two more, and yes, it does seem confusing. Just to round it up: same VM as above, delta (full) backup using NBD, without and with "nconnect=6" in the remote settings: [image: 1714397312022-screenshot-2024-04-29-at-14.27.19.png] With nconnect=6: [image: 1714397346566-screenshot-2024-04-29-at-14.28.57.png] nconnect=6 doesn't seem to do a lot.
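For reference, this is roughly how `nconnect` is applied on an NFS client with a kernel >= 5.3 (the server name and export path below are hypothetical examples):

```sh
# Mount an NFS export with 6 TCP connections to the server
mount -t nfs -o vers=4.1,nconnect=6 nas.example.com:/export/backups /mnt/backups

# Verify the mount options actually in effect
nfsstat -m
```

Whether it helps depends heavily on where the bottleneck is; if a single stream already saturates the link or the storage, extra connections won't change much, which may explain the results above.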
  • Technique for shared repo of cloudinit templates

    cloudinit
    0 Votes
    4 Posts
    236 Views
    olivierlambert
    Your pre-recorded cloud-config setup is per-user, so it won't be universal for everyone.
  • Default console username and password on XOA Appliance

    Solved
    0 Votes
    10 Posts
    21k Views
    K
    @olivierlambert Hey thanks a lot for that tip.
  • Cannot delete failed XO VMware-related migration tasks

    0 Votes
    14 Posts
    2k Views
    D
    @gb-123 Yes, I have tried that and it appears to have deleted the tasks. First I had to register as follows: # xo-cli --register <URL of XOA> <admin username> <password> Then: # xo-cli rest get tasks, which showed a list of task IDs. I verified the IDs against the ID shown in the raw log for each of the tasks listed in XOA. Once I figured out which task(s) I wanted to delete, I did the following for each task: # xo-cli rest del <tasks/task ID> I have not seen any ill effects since I deleted some tasks last August. Below is a link to removing tasks using xo-cli: https://help.vates.tech/kb/en-us/31-troubleshooting-and-tips/123-how-to-remove-xo-tasks
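The steps above, collected into one sequence (placeholders kept as in the original post; the task ID goes in the REST path):

```sh
# Register the CLI against your XOA instance
xo-cli --register <URL of XOA> <admin username> <password>

# List the task IDs exposed by the REST API
xo-cli rest get tasks

# Delete one task by ID
xo-cli rest del tasks/<task ID>
```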
  • Warm migration stuck at 0%

    Solved
    1 Vote
    12 Posts
    819 Views
    J
    Currently on commit 2f962 and it's working. Thank you all!
  • Easy way to find a failed task?

    0 Votes
    2 Posts
    424 Views
    D
    @Pyroteq said in Easy way to find a failed task?: It seems as soon as you refresh the page everything disappears. The Tasks page only shows logs while you're viewing it. Afterwards, I think they get recorded to the Logs, but if your system fell asleep, XO wouldn't know what happened beyond the connection getting dropped, and I don't know if that would get recorded.
  • Debian 12 template - long load on BIOS screen at startup

    Solved
    0 Votes
    4 Posts
    292 Views
    olivierlambert
    Yes indeed, if you are not passing a static IP in the template, cloud-init will wait a few minutes to get an IP before continuing to boot.
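A minimal sketch of a static network configuration that avoids that DHCP wait, in cloud-init's network-config v2 format; the interface name, addresses, and DNS server are hypothetical examples, and the exact schema accepted depends on the cloud-init version in the guest:

```yaml
# Hypothetical static IPv4 config (cloud-init network-config v2)
version: 2
ethernets:
  enX0:                 # interface name inside the guest - adjust to yours
    dhcp4: false
    addresses:
      - 192.168.1.50/24
    gateway4: 192.168.1.1
    nameservers:
      addresses: [192.168.1.1]
```

In Xen Orchestra this kind of snippet goes in the network-config field of the VM creation form, alongside the user-data cloud config.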