Subcategories

  • VMs, hosts, pools, networks and all other usual management tasks.

    330 Topics
    2k Posts
    M
    @olivierlambert Thanks, I have tried using xe-reset-networking and the emergency network reset function in the XCP-ng UI, and neither seems to reset the management interface; after rebooting, the hosts still have no interfaces at all (which is what happened when the failed uplink bond was created). I will probably need to wipe and re-install to get traction again. I am also confused about the MTU. The servers we are using are configured with a server profile template that we also use with VMware, and the uplinks on those are all MTU 9000, both in the vNIC templates (i.e. the hardware side) and in the distributed vSwitches in vCenter (the equivalent of bonded interfaces in XO?). If there was a problem with the MTU, I would have thought those would not work either. I will keep at it, thanks for the help.
  • ACLs, Self-service, Cloud-init, Load balancing...

    84 Topics
    707 Posts
    CyrilleC
    @ferrao Yes it is. At the end the task disappears unless there is an error / failure.
  • All XO backup features: full and incremental, replication, mirrors...

    337 Topics
    3k Posts
    olivierlambertO
    Thanks for your feedback, we'll discuss internally if there's any other possible approach (and I'm not sure).
  • Everything related to Xen Orchestra's REST API

    61 Topics
    476 Posts
    S
    @Studmuffn1134 I changed my link from ws:// to https:// and it now gives me this error:

        File "Z:\Valera\School\Lakeland University\Finished\Programming 2\Python Programs\StudsPrograms.venv\Lib\site-packages\jsonrpc_base\jsonrpc.py", line 213, in parse_response
          raise ProtocolError(code, message, data)
        jsonrpc_base.jsonrpc.ProtocolError: (10, 'invalid parameters', {'error': {'message': 'invalid parameters', 'code': 10, 'data': {'errors': [{'instancePath': '/id', 'schemaPath': '#/properties/id/type', 'keyword': 'type', 'params': {'type': 'string'}, 'message': 'must be string'}]}}, 'id': '0a11ec72-9300-4030-a5d2-a5c0286f3811', 'jsonrpc': '2.0'})
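    The validation error above says the request's `id` member must be a string (`'/id' ... 'must be string'`). A minimal sketch of building a JSON-RPC 2.0 request body with a string id; the method name and params below are placeholders, not XO's actual API:

```python
import json

def make_jsonrpc_request(method, params, request_id):
    """Build a JSON-RPC 2.0 request body with a string id.

    The schema error quoted above rejects non-string ids
    ("/id ... must be string"), so the id is coerced to str.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(request_id),  # must be a string, not an int
        "method": method,
        "params": params,
    })

body = make_jsonrpc_request("session.signIn", {"username": "admin"}, 1)
print(body)  # the "id" field comes out as "1", a string
```

    Sending an integer id (or letting a client library auto-number requests with ints) is exactly what triggers the `ProtocolError` above.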
  • Terraform, Packer or any tool to do IaC

    30 Topics
    279 Posts
    nathanael-hN
    Hello, I suggest you also look at Packer to build ready-to-use VM templates, with cloud-init, guest tools, and the software you'd need. There's a blog post for this: https://xcp-ng.org/blog/2024/02/22/using-packer-with-xcp-ng/
  • Confused re: pricing (XOA vs. Vates Essentials)

    23
    0 Votes
    23 Posts
    6k Views
    J
    @msimanyi said in Confused re: pricing (XOA vs. Vates Essentials):

        @john-c said in Confused re: pricing (XOA vs. Vates Essentials): It does NOT require three licensed servers (on Essential or Essential+), it allows up to (max) 3 hosts so you have room to grow.

        Yes, on those packages. I was specifically talking about the Enterprise version for 24/7 support, which requires licensing at least three hosts. (I'm sure I could skip implementing the third host, but I'd still have to pay the $1,800 / year basic host license fee for it.) Again, thank you for all your effort discussing this.

    @msimanyi The Enterprise plan isn't just per year but also per host, so on that plan you would be paying per host and per year. With that minimum requirement you would be paying $4,590 per year for 3 hosts on the 5-year support term, or $5,400 per year for 3 hosts on a 1-year term. The price per host per year applies to each individual host on the higher plans, but the 3-host minimum is just an eligibility threshold: once you have 3 (or more) hosts, you can choose to be on those plans. So the Essential+ plan is around that $1,800 / year amount ($1,700 if going for the 5-year support term). Anyway, I left a feedback note for @olivierlambert to see if he can add options on the SMB plans to upgrade the response times closer to the 24/7 ones from the Enterprise level. Maybe worth talking to Vates directly to see if an exception can be granted in your case so you can get 24/7 support; either way it's worth contacting them, yes?
  • Failed to migrate vdi

    5
    0 Votes
    5 Posts
    507 Views
    L
    @Danp I am using warm migration based on past experience; we have already migrated more than four hundred virtual machines without any incident. This function of migrating only the VDI within the same pool is rarely used in our environment. Regarding SMlog, Xen Orchestra reported the error at 22:58:08 on May 15th; below is the SMlog data from around that time:

    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (1, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/usr/bin/vhd-util', 'set', '--debug', '-n', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0', '-f', 'hidden', '-v', '1']
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Deleting vdi: 96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering deleteVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] entering updateVdi
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering getMetadataToWrite
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Entering VDI info
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 (1, 0) + (-1, 0) => (0, 0)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] Refcount for lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183:96be9922-4bb1-4fea-8c09-c5e29a5475a0 set => (0, 0b)
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:07 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183/VHD-96be9922-4bb1-4fea-8c09-c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/dmsetup', 'status', 'VG_XenStorage--4534d3f4--59d6--f7ce--93b7--bafc382ed183-VHD--96be9922--4bb1--4fea--8c09--c5e29a5475a0']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: closed /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: unlinking lock file /var/lock/sm/lvm-4534d3f4-59d6-f7ce-93b7-bafc382ed183/96be9922-4bb1-4fea-8c09-c5e29a5475a0
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Setting virtual_allocation of SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183 to 7884496699392
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183']
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] pread SUCCESS
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] Kicking GC
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] === SR 4534d3f4-59d6-f7ce-93b7-bafc382ed183: gc ===
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22432] Will finish as PID [22433]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: closed /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/running
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SMGC: [22357] New PID [22432]
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: opening lock file /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] LVMCache created for VG_XenStorage-4534d3f4-59d6-f7ce-93b7-bafc382ed183
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/gc_active, acquired: True (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22433] lock: tried lock /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr, acquired: False (exists: True)
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:08 SDE-RK8-R6525-P01-H01 SM: [22357] lock: released /var/lock/sm/4534d3f4-59d6-f7ce-93b7-bafc382ed183/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Setting LVM_DEVICE to /dev/disk/by-scsid/360002ac00000000000000096000205b4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache created for VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/vgs', '--readonly', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: will initialize now
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVMCache: refreshing
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Entering _checkMetadataVolume
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/87edd82a-f612-d1f9-bfcd-bc2cae7cff98/sr
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] vdi_snapshot {'sr_uuid': '87edd82a-f612-d1f9-bfcd-bc2cae7cff98', 'subtask_of': 'DummyRef:|f06fb873-be2b-46ff-8910-e0489003b63c|VDI.snapshot', 'vdi_ref': 'OpaqueRef:f60beba2-8202-4a43-be62-5935e240be01', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e0d805fb-d325-4af7-af40-a4249b5ddec4', 'host_ref': 'OpaqueRef:f2af7892-f475-4c61-a289-3cb7bffc07c4', 'session_ref': 'OpaqueRef:5e183ddc-f65c-4dff-8322-0e909c489425', 'device_config': {'SCSIid': '360002ac00000000000000096000205b4', 'SRmaster': 'true'}, 'command': 'vdi_snapshot', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:85b38e21-10be-4ea3-8578-65686851a0e1', 'driver_params': {'type': 'internal'}, 'vdi_uuid': 'e0d805fb-d325-4af7-af40-a4249b5ddec4'}
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/.nil/lvm
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Pause request for e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Calling tap-pause on host OpaqueRef:e352bec8-d91e-4cfe-af1b-8f53876c4e56
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] LVHDVDI._snapshot for e0d805fb-d325-4af7-af40-a4249b5ddec4 (type 3)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:e0d805fb-d325-4af7-af40-a4249b5ddec4 set => (2, 0b)
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/e0d805fb-d325-4af7-af40-a4249b5ddec4
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'query', '--debug', '-vsf', '-n', '/dev/VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4']
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:09 SDE-RK8-R6525-P01-H01 SM: [22510] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-e0d805fb-d325-4af7-af40-a4249b5ddec4', '-l', 'VG_XenStorage-87edd82a-f612-d1f9-bfcd-bc2cae7cff98', '-a']
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] pread SUCCESS
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d (1, 0) + (1, 0) => (2, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:77aed767-3cd9-428b-b088-a16d1a04d91d set => (2, 0b)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: released /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/77aed767-3cd9-428b-b088-a16d1a04d91d
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: opening lock file /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] lock: acquired /var/lock/sm/lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98/00a5a7dd-5651-4ee2-aba1-dbae12c3dae9
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 (0, 0) + (1, 0) => (1, 0)
    May 15 22:58:10 SDE-RK8-R6525-P01-H01 SM: [22510] Refcount for lvm-87edd82a-f612-d1f9-bfcd-bc2cae7cff98:00a5a7dd-5651-4ee2-aba1-dbae12c3dae9 set => (1, 0b)
  • can't get XOA to install

    6
    0 Votes
    6 Posts
    1k Views
    olivierlambertO
    That's why I mentioned a peering issue. We have 10G here and no bandwidth issues either.
  • Help with Windows Cloudbase-init Setup

    5
    0 Votes
    5 Posts
    671 Views
    M
    @olivierlambert Sure, happy to do so.
  • Does XOA pause scheduled backups during XOA upgrades?

    7
    1
    0 Votes
    7 Posts
    341 Views
    julien-fJ
    XO does not pause backups during upgrades/restarts; all currently running backups will be interrupted, as @olivierlambert said. An interrupted backup is not a big problem by itself: nothing will be broken, and the next run will proceed properly. Backups only run at the time they are scheduled; if XO is offline at that time, it will not automatically run them when restarted, it will wait for the next scheduled run.
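    The scheduling behaviour described above (runs fire only at their scheduled time, and runs missed while offline are never backfilled) can be sketched like this; the function and schedule are illustrative only, not XO's actual scheduler code:

```python
from datetime import datetime

def runs_to_execute(scheduled_times, online_since, now):
    """Return only the scheduled runs whose time arrives while the
    scheduler is online; anything missed while offline is skipped,
    never backfilled after a restart."""
    return [t for t in scheduled_times if online_since <= t <= now]

schedule = [datetime(2024, 5, 1, h) for h in (1, 2, 3)]
# XO offline until 02:30: the 01:00 and 02:00 runs are simply lost;
# only the 03:00 run fires, at its normal time.
fired = runs_to_execute(schedule, datetime(2024, 5, 1, 2, 30), datetime(2024, 5, 1, 4))
print(fired)  # [datetime.datetime(2024, 5, 1, 3, 0)]
```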
  • Roadmap XO6

    11
    0 Votes
    11 Posts
    3k Views
    olivierlambertO
    Our doc is up to date here: https://docs.xcp-ng.org/project/ecosystem/#-vm-backup
  • XO Sources on Host

    Moved
    5
    0 Votes
    5 Posts
    485 Views
    CTGC
    @olivierlambert Thanks, Olivier!
  • XCP-ng host status enabled but can't access it.

    Solved
    15
    1
    0 Votes
    15 Posts
    631 Views
    olivierlambertO
    Ah indeed, it's written in bold in the documentation, I forgot to ask you about this ^^ Enjoy XO!
  • XOA Proxy and Console Access

    Solved
    15
    0 Votes
    15 Posts
    994 Views
    olivierlambertO
    Thanks everyone for the report!
  • Console Zoom Percentage Slightly Cutoff

    1
    1
    0 Votes
    1 Posts
    101 Views
    No one has replied
  • Preventing "new network" detection on different XCP-NG hosts

    14
    0 Votes
    14 Posts
    2k Views
    nikadeN
    @fohdeesha said in Preventing "new network" detection on different XCP-NG hosts:

        @Zevgeny As you suspected, this is caused by the VMs booting with new, fresh MAC addresses they've never seen before, which from their point of view means an entirely new NIC. The "clean" solution here would be an option or toggle inside XOA backup jobs that allows MAC addresses to be preserved; that way the replicated DR VMs on the backup site keep the same MAC addresses. You could file this as a feature request on our GitHub. However, note that pretty much any VM copy/replicate action will trigger Xen to generate a new MAC address (for safety, since duplicated MACs are usually bad). This means that even if the MAC is preserved on the DR VMs, when your admin copies the DR VMs and boots them, the new copies will have newly generated MACs. I suppose it would be possible to add a "preserve MAC" checkbox for copy operations too, but I'm not sure if XAPI currently exposes such functionality.

    Yeah, keeping the MAC address would really solve this. We actually edit the MAC address manually on the copied VMs in case we need to start the DR copy up for testing.
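    For context on why copies come up looking like "new" NICs: a freshly generated MAC is typically a random unicast, locally-administered address, so duplicates never collide with real hardware on the network. A rough illustration in Python; these helpers are hypothetical sketches, not XAPI's implementation of the proposed "preserve MAC" toggle:

```python
import random

def fresh_mac(rng=random):
    """Generate a random unicast, locally-administered MAC, similar in
    spirit to what Xen does when it re-generates a VIF's MAC on copy."""
    octets = [rng.randrange(256) for _ in range(6)]
    # Clear the multicast bit, set the locally-administered bit.
    octets[0] = (octets[0] & 0b11111100) | 0b00000010
    return ":".join(f"{o:02x}" for o in octets)

def clone_vif_mac(source_mac, preserve=False):
    """Sketch of the requested 'preserve MAC' option: keep the source
    MAC for DR replicas, otherwise generate a fresh one (today's
    default behaviour)."""
    return source_mac if preserve else fresh_mac()

print(clone_vif_mac("aa:bb:cc:dd:ee:ff", preserve=True))  # aa:bb:cc:dd:ee:ff
```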
  • freepbx import from esxi wont boot correctly

    9
    0 Votes
    9 Posts
    505 Views
    J
    Thanks everyone for the replies, I really appreciate this community! It's not a disaster; the server that runs the FreePBX VM is still online. I merged the VMs from the other servers just fine onto new XCP-ng servers, and I actually still have the original VMs for all of them ready to spin up in ESXi. So far every single distro except this custom Sangoma CentOS 7 has been good to go with nothing else needed. Since the ESXi servers are no longer connected to a vCenter appliance, I think I will make a copy of the hard disk, stop the original, boot up the copy, and play around with some of the suggestions here. If I do manage to get it working, I'll post a "how I made it work in this instance" thread for anyone else who may encounter it. In the meantime all suggestions are welcome; I'm not very versed in Linux boot configurations. I'm more of a Windows systems technician: I can operate *nix just fine and do whatever I need when the installation is working, but troubleshooting boot issues has never been something I learned, so crash course engaged.
  • VMware migration tool not bringing disks

    Solved
    7
    0 Votes
    7 Posts
    477 Views
    ItMeCorbanI
    Yep the snapshots solved it. And I was even able to import the VMs while they're still running so that was a bonus. Thanks for your help @Danp
  • Import from VMWare fails Error: 404 Not Found

    4
    0 Votes
    4 Posts
    441 Views
    S
    @florent will do
  • Updated XOA with kernel >5.3 to support nconnect nfs option

    34
    1 Votes
    34 Posts
    3k Views
    M
    @manilx Two more, and yes, it does seem confusing, so just to round it up: same VM as above, Delta (full) backup using NBD, without and with "nconnect=6" in the remote settings: [image: 1714397312022-screenshot-2024-04-29-at-14.27.19.png] with nconnect=6: [image: 1714397346566-screenshot-2024-04-29-at-14.28.57.png] nconnect=6 doesn't seem to do a lot.
  • Technique for shared repo of cloudinit templates

    cloudinit
    4
    0 Votes
    4 Posts
    212 Views
    olivierlambertO
    Your pre-recorded CloudConfig setup is user-wide (i.e. per user). It won't be universal for everyone.
  • Default console username and password on XOA Appliance

    Solved
    10
    0 Votes
    10 Posts
    18k Views
    K
    @olivierlambert Hey thanks a lot for that tip.
  • Cannot delete failed XO VMware-related migration tasks

    14
    0 Votes
    14 Posts
    1k Views
    D
    @gb-123 Yes, I have tried that and it appears to have deleted the tasks. First I had to register as follows:

        # xo-cli --register <URL of XOA> <admin username> <password>

    Then:

        # xo-cli rest get tasks

    which showed a list of task IDs. I verified the IDs against the ID shown in the raw log for each of the tasks listed in XOA. Once I figured out which task(s) I wanted to delete, I ran the following for each task:

        # xo-cli rest del <tasks/task ID>

    I have not seen any ill effects since I deleted some tasks last August. Below is a link on removing tasks using xo-cli: https://help.vates.tech/kb/en-us/31-troubleshooting-and-tips/123-how-to-remove-xo-tasks
  • Warm migration stuck at 0%

    Solved
    12
    1
    1 Votes
    12 Posts
    618 Views
    J
    Currently on commit 2f962 and it's working. Thank you all.