@StormMaster Thank you for sharing your findings!
@stephane-m-dev Thank you for the update! Looking forward to testing the fix; I'm just not sure how to replicate it other than what @StormMaster found.
@andrewperry I couldn't get live migration to work on large VMs. To move them, I ended up taking a snapshot of the VM and then creating a new VM from that snapshot. It's not ideal, but I had to do it that way because we needed to retire some old servers.
@olivierlambert Thank you! Is it possible to simply attach the shared storage from the source pool to the new pool without having to do a live/warm migration?
We are only moving this VM from one pool to a new pool but the VM disks are going to remain on the shared storage (NFS):
The reason I'm moving pools is we have newer servers.
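For reference, this is roughly the xe sequence I had in mind for re-attaching the NFS SR to the new pool (only a sketch; UUIDs, labels and paths are placeholders, and I'm assuming the SR keeps its UUID so a rescan can pick up the existing VDIs):
# On the old pool, once nothing is using the SR:
xe pbd-list sr-uuid=<sr-uuid> params=device-config    # note server and serverpath
xe pbd-unplug uuid=<pbd-uuid>
xe sr-forget uuid=<sr-uuid>                           # removes metadata only, the NFS data stays
# On the new pool:
xe sr-introduce uuid=<sr-uuid> type=nfs shared=true content-type=user name-label="NFS shared storage"
xe pbd-create sr-uuid=<sr-uuid> host-uuid=<host-uuid> device-config:server=<nfs-ip> device-config:serverpath=<serverpath>
xe pbd-plug uuid=<new-pbd-uuid>
xe sr-scan uuid=<sr-uuid>                             # rediscovers the existing VDIs
Does that sound reasonable, or is a migration still the safer route?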
Thank you,
SW
Hi,
I'm trying to live migrate a large VM from one host to another, and XO built from source (latest commit a7d7c) is reporting it will take over 32 hours for the migration to complete. However, after roughly 20+ hours, the live migration fails at about 48% complete.
XO shows the following error under Settings > Logs:
vm.migrate
{
"vm": "3797df5e-695a-00be-86bd-318a7c48860f",
"mapVifsNetworks": {
"de9a4140-0b67-314b-6fed-0bbd00d4cf74": "eefd492c-2bef-502e-79e7-f5123209d887",
"cbea6ad6-cb84-5536-e021-d2bdabda6348": "34f336c5-b05d-4258-20d0-984523113b85"
},
"migrationNetwork": "02bfbb0b-5f1a-e47e-d50b-28f0f7c50b11",
"sr": "8da3d03e-4d2c-bab2-cd94-0d15168a58f3",
"targetHost": "2d926060-41bf-4e17-ba76-bac9b1112257"
}
{
"code": "MIRROR_FAILED",
"params": [
"OpaqueRef:7069c225-0bf2-433c-a255-0fab035ea70b"
],
"task": {
"uuid": "f31eafed-faea-2eb4-b9e9-d45cc7967498",
"name_label": "Async.VM.migrate_send",
"name_description": "",
"allowed_operations": [],
"current_operations": {
"DummyRef:|e8163dcb-8c54-4907-a0de-97d36cf2127e|task.cancel": "cancel"
},
"created": "20241014T04:40:24Z",
"finished": "20241015T00:09:27Z",
"status": "failure",
"resident_on": "OpaqueRef:d3f118d9-3aef-4d16-94d5-6d6fa22f84b9",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"MIRROR_FAILED",
"OpaqueRef:7069c225-0bf2-433c-a255-0fab035ea70b"
],
"other_config": {
"mirror_failed": "b607eebc-49ad-4fc0-ad4a-b605fedfc51e"
},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [],
"backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1556))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
},
"message": "MIRROR_FAILED(OpaqueRef:7069c225-0bf2-433c-a255-0fab035ea70b)",
"name": "XapiError",
"stack": "XapiError: MIRROR_FAILED(OpaqueRef:7069c225-0bf2-433c-a255-0fab035ea70b)
at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
at default (file:///opt/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)
at Xapi._addRecordToCache (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1041:24)
at file:///opt/xen-orchestra/packages/xen-api/index.mjs:1075:14
at Array.forEach (<anonymous>)
at Xapi._processEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1065:12)
at Xapi._watchEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1238:14)"
}
On the source host, /var/log/xensource.log shows the following when the error occurs:
And on the target host, /var/log/xensource.log shows the following:
I appreciate any help in identifying why the live migration is failing.
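In case more detail is needed, this is what I can run on each host to pull the storage-side entries around the time of the failure (just standard grep over the logs, nothing custom):
grep -i mirror /var/log/SMlog | tail -n 100
grep -i MIRROR_FAILED /var/log/xensource.log | tail -n 100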
Thank you,
SW
@enes-selcuk Did you find a setting that works best w/ Dell servers?
I have Dell R640s which I'll be using for LAMP/LEMP VM servers. I was wondering what the best settings are in the Dell BIOS, and whether there are any changes I need to make on the xcp-ng host (the host-side check I'm planning is listed after the BIOS settings below).
Dual Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Level 2 Cache 16x1 MB
Level 3 Cache 22 MB
Number of Cores 16
Dell BIOS Settings:
System Profile: Performance Per Watt (OS)
CPU Power Management: OS DBPM
Memory Frequency: Maximum Performance
Turbo Boost: Enabled
C1E: Enabled
C States: Enabled
Write Data CRC: Disabled
Memory Patrol Scrub: Standard
Memory Refresh Rate: 1x
Uncore Frequency: Dynamic
Energy Efficient Policy: Balanced Performance
Number of Turbo Boost Enabled Cores for Processor 1: All
Number of Turbo Boost Enabled Cores for Processor 2: All
Monitor/Mwait: Enabled
Workload Profile: Not Available
CPU Interconnect Bus Link Power Management: Enabled
PCI ASPM L1 Link Power Management: Enabled
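On the xcp-ng host itself, the only check I'm planning so far is to look at what power management Xen is doing via xenpm (my assumption being that with OS DBPM the governor/C-state choices in Xen matter; happy to be corrected):
xenpm get-cpufreq-para | head -n 40     # current scaling governor and frequency range
xenpm get-cpuidle-states | head -n 20   # C-state residency
# and, if latency matters more than power savings:
xenpm set-scaling-governor performance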
Thank you!
SW
@TS79 It seems something is not stable with SR NFS creation. The strange thing is XOCE allowed the creation of the NFS Remote under Settings > Remotes but refused to create the SR NFS until I patched to the latest build (I was 3 commits behind) and then rebooted the entire server.
I was hoping to test this via just XO to rule out whether it's an issue with XOCE, but I couldn't get the XO deploy script to work. I'll have to tackle that issue another day.
I was able to install a VM using the SR NFS and will be running some disk I/O tests to see how stable the NFS connection is.
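For the I/O tests I'm planning to run something simple with fio inside the guest (assuming fio is installed there; the target file is just a placeholder path on the VM's disk):
fio --name=nfs-sr-randwrite --filename=/var/tmp/fio.test --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based --ioengine=libaio --direct=1 --group_reporting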
Thank you again for your help and testing this on your home lab! Greatly appreciate it!
Best Regards,
SW
@olivierlambert @TS79 I just rebooted the server, ran the XOCE update script, and tried adding the SR NFS to the new host, and this time I was able to create it!
I'm about to test it by installing a VM to confirm it's working properly!
@TS79 Thanks for the help! This host is NOT part of a pool. It's a standalone host.
I'm so glad you are also getting the same issue!!! Man, I've been trying to figure out this error/bug for the past several days. Like you, I had it working at some point; my old hosts have no problem with the NFS share from FreeNAS.
I used the Jarli01 script to build the XOCE and keep it updated regularly. I was 3 commits behind but just updated to the latest commit and still have the same issue.
@olivierlambert Any ideas why I can't deploy XO? Not sure if the logs I provided above are helpful.
Please let me know if you wish me to provide you with any additional logs, etc.
Thank you both!
SW
@TS79 Thanks again! The TrueNAS NFS path which I see on the existing XCP-NG Hosts has the serverpath: /mnt/Tank/XCP-NG (see screenshot below):
However, when I try to create this SR NFS on the new host, it only shows the serverpath as /mnt/Tank like so:
Using the search icon under the Subdirectory doesn't load anything:
And if I manually enter XCP-NG in the Subdirectory field, I'm presented with two options under the "Storage usage":
FreeNAS shows the following content in the /mnt/Tank/XCP-NG directory:
The "e027cbe3-c650-d923-fb6f-97626c68c514" and "e140ab5d-8833-fd4b-91fe-68d9ff8652e9" are SRs for existing hosts that connect with no issues to TrueNAS via NFS.
Regarding the networking error when trying to install XO using the deploy method, I ran the following command on this host, which returned the following:
I'm not using any VLANs, and I'm not sure where to check the network configuration files, but this is what I have via XOCE for this host:
[12:24 XCP55 ~]# brctl show
bridge name bridge id STP enabled interfaces
[12:24 XCP55 ~]#
[12:24 XCP55 ~]# ip route show
default via 10.10.10.1 dev xenbr7
10.10.1.0/24 dev xenbr0 proto kernel scope link src 10.10.1.10
10.10.2.0/24 dev xenbr1 proto kernel scope link src 10.10.2.10
10.10.10.0/24 dev xenbr7 proto kernel scope link src 10.10.10.5
[12:25 XCP55 ~]#
[12:32 XCP55 ~]# ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether a0:36:9f:8a:18:18 brd ff:ff:ff:ff:ff:ff
3: eth5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master ovs-system state DOWN group default qlen 1000
link/ether a0:36:9f:8a:18:19 brd ff:ff:ff:ff:ff:ff
4: eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether a0:36:9f:8a:18:1a brd ff:ff:ff:ff:ff:ff
5: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP group default qlen 1000
link/ether e4:43:4b:c8:51:84 brd ff:ff:ff:ff:ff:ff
6: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether a0:36:9f:8a:18:1b brd ff:ff:ff:ff:ff:ff
7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP group default qlen 1000
link/ether e4:43:4b:c8:51:85 brd ff:ff:ff:ff:ff:ff
8: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master ovs-system state DOWN group default qlen 1000
link/ether e4:43:4b:c8:51:86 brd ff:ff:ff:ff:ff:ff
9: eth3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master ovs-system state DOWN group default qlen 1000
link/ether e4:43:4b:c8:51:87 brd ff:ff:ff:ff:ff:ff
10: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 1e:13:9f:39:76:f9 brd ff:ff:ff:ff:ff:ff
11: xenbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a0:36:9f:8a:18:18 brd ff:ff:ff:ff:ff:ff
14: xenbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e4:43:4b:c8:51:86 brd ff:ff:ff:ff:ff:ff
15: xenbr6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a0:36:9f:8a:18:1a brd ff:ff:ff:ff:ff:ff
16: xenbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a0:36:9f:8a:18:19 brd ff:ff:ff:ff:ff:ff
17: xenbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e4:43:4b:c8:51:87 brd ff:ff:ff:ff:ff:ff
18: xenbr7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether a0:36:9f:8a:18:1b brd ff:ff:ff:ff:ff:ff
inet 10.10.10.5/24 brd 10.10.10.255 scope global xenbr7
valid_lft forever preferred_lft forever
19: xenbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e4:43:4b:c8:51:85 brd ff:ff:ff:ff:ff:ff
inet 10.10.2.10/24 brd 10.10.2.255 scope global xenbr1
valid_lft forever preferred_lft forever
20: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e4:43:4b:c8:51:84 brd ff:ff:ff:ff:ff:ff
inet 10.10.1.10/24 brd 10.10.1.255 scope global xenbr0
valid_lft forever preferred_lft forever
Thank you again!
SW
@olivierlambert I tailed /var/log/xensource.log while clicking the "Deploy" button via https://vates.tech/deploy, in case this is helpful:
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|audit] VM.import: url = '(url filtered)' sr='OpaqueRef:6aaef256-69c8-4928-8199-957dc4d01202' force='false'
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Failed to directly open the archive; trying gzip
Oct 3 11:34:23 XCP55 xapi: [debug||8801 ||import] Writing initial buffer
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Got XML
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] importing new style VM
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 0 host(s)
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 1 SR(s)
Oct 3 11:34:23 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 1 VDI(s)
Oct 3 11:34:23 XCP55 xapi: [debug||8802 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.create D:2ee7d07f3c8e created by task R:269b6e3fdd34
Oct 3 11:34:23 XCP55 xapi: [ info||8802 /var/lib/xcp/xapi||taskhelper] task VDI.create R:8e2c32b2c2a9 (uuid:6c7b2409-d437-931e-bd9c-43244e13b0fc) created (trackid=e5b9c7399e413bfe02ae51306d634912) by task R:269b6e3fdd34
Oct 3 11:34:23 XCP55 xapi: [debug||8802 /var/lib/xcp/xapi|VDI.create R:8e2c32b2c2a9|audit] VDI.create: SR = '70fc7ee7-8eb2-0cca-0807-83dba8917d5c (Local storage)'; name label = 'xoa root'
Oct 3 11:34:23 XCP55 xapi: [debug||8802 /var/lib/xcp/xapi|VDI.create R:8e2c32b2c2a9|message_forwarding] Marking SR for VDI.create (task=OpaqueRef:8e2c32b2-c2a9-4537-ba4e-bfba0a8da660)
Oct 3 11:34:23 XCP55 xapi: [ info||8802 /var/lib/xcp/xapi|VDI.create R:8e2c32b2c2a9|storage_impl] VDI.create dbg:OpaqueRef:8e2c32b2-c2a9-4537-ba4e-bfba0a8da660 sr:70fc7ee7-8eb2-0cca-0807-83dba8917d5c vdi_info:{"sm_config":{"import_task":"OpaqueRef:269b6e3f-dd34-4ff0-a13b-0ceaea1d15e0"},"sharable":false,"persistent":true,"physical_utilisation":0,"virtual_size":21474836480,"cbt_enabled":false,"read_only":false,"snapshot_of":"","snapshot_time":"19700101T00:00:00Z","is_a_snapshot":false,"metadata_of_pool":"","ty":"user","name_description":"","name_label":"xoa root","content_id":"","vdi":""}
Oct 3 11:34:23 XCP55 xapi: [debug||8803 ||dummytaskhelper] task VDI.create D:530849acd382 created by task R:8e2c32b2c2a9
Oct 3 11:34:23 XCP55 xapi: [debug||8803 |VDI.create D:530849acd382|sm] SM ext vdi_create sr=OpaqueRef:6aaef256-69c8-4928-8199-957dc4d01202 sm_config=[import_task=OpaqueRef:269b6e3f-dd34-4ff0-a13b-0ceaea1d15e0] type=[user] size=21474836480
Oct 3 11:34:23 XCP55 xapi: [ info||8803 |sm_exec D:cef862198a26|xapi_session] Session.create trackid=c434cfd79956a0dc270f86eb6e3c6560 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:23 XCP55 xapi: [debug||8804 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:deba20300eeb created by task D:cef862198a26
Oct 3 11:34:24 XCP55 xapi: [debug||8805 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:host.get_other_config D:977f9d72bdb3 created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8806 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_other_config D:6af2095e7c63 created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8807 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.db_introduce D:6ccc8d44b069 created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [ info||8807 /var/lib/xcp/xapi||taskhelper] task VDI.db_introduce R:8a81dc0481ad (uuid:7bff162e-8152-64bf-bf2d-48f27858f426) created (trackid=c434cfd79956a0dc270f86eb6e3c6560) by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8807 /var/lib/xcp/xapi|VDI.db_introduce R:8a81dc0481ad|xapi_vdi] {pool,db}_introduce uuid=5a57f11e-be24-481a-8e89-b37069585c08 name_label=xoa root
Oct 3 11:34:24 XCP55 xapi: [debug||8807 /var/lib/xcp/xapi|VDI.db_introduce R:8a81dc0481ad|xapi_vdi] VDI.introduce read_only = false
Oct 3 11:34:24 XCP55 xapi: [debug||8808 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_virtual_allocation D:dad5f8c21fac created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8809 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_by_uuid D:1aa3954c501b created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8810 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_virtual_allocation D:a8530df366e1 created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8811 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_physical_size D:8510cfe86103 created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [debug||8812 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_physical_utilisation D:2d097d044c2d created by task D:530849acd382
Oct 3 11:34:24 XCP55 xapi: [ info||8803 |sm_exec D:cef862198a26|xapi_session] Session.destroy trackid=c434cfd79956a0dc270f86eb6e3c6560
Oct 3 11:34:24 XCP55 xapi: [debug||8802 /var/lib/xcp/xapi|VDI.create R:8e2c32b2c2a9|xapi_sr] OpaqueRef:0b1d4430-214b-466a-b5df-e6246544f682 snapshot_of <- OpaqueRef:NULL
Oct 3 11:34:24 XCP55 xapi: [debug||8802 /var/lib/xcp/xapi|VDI.create R:8e2c32b2c2a9|message_forwarding] Unmarking SR after VDI.create (task=OpaqueRef:8e2c32b2-c2a9-4537-ba4e-bfba0a8da660)
Oct 3 11:34:24 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 1 VM_guest_metrics(s)
Oct 3 11:34:24 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 1 VM(s)
Oct 3 11:34:24 XCP55 xapi: [debug||8813 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.create D:60ff9638321b created by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [ info||8813 /var/lib/xcp/xapi||taskhelper] task VM.create R:7c496952935c (uuid:7d96d787-dcc6-a45e-8aa7-b646df253dc7) created (trackid=e5b9c7399e413bfe02ae51306d634912) by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [debug||8813 /var/lib/xcp/xapi|VM.create R:7c496952935c|audit] VM.create: name_label = 'XOA' name_description = 'Xen Orchestra virtual Appliance'
Oct 3 11:34:24 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Created VM: OpaqueRef:fda15bd0-5dcd-44ad-8a37-5e110ce39724 (was Ref:208)
Oct 3 11:34:24 XCP55 xapi: [debug||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Importing 1 network(s)
Oct 3 11:34:24 XCP55 xapi: [debug||8814 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:network.get_by_name_label D:54e3bcbf2532 created by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [debug||8815 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:network.get_all_records_where D:64c2c4875251 created by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [debug||8816 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:network.create D:36398824e4bc created by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [ info||8816 /var/lib/xcp/xapi||taskhelper] task network.create R:1662dcd1a2fb (uuid:225442be-036f-5a7b-7782-97f773e6af09) created (trackid=e5b9c7399e413bfe02ae51306d634912) by task R:269b6e3fdd34
Oct 3 11:34:24 XCP55 xapi: [debug||8816 /var/lib/xcp/xapi|network.create R:1662dcd1a2fb|audit] Network.create: name_label = 'Pool-wide network associated with eth0 on VLAN11'; bridge = 'xapi1'; managed = 'true'
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] network.create R:1662dcd1a2fb failed with exception Server_error(INVALID_VALUE, [ bridge; xapi1 ])
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] Raised Server_error(INVALID_VALUE, [ bridge; xapi1 ])
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 1/8 xapi Raised at file ocaml/xapi/xapi_network.ml, line 266
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 2/8 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 3/8 xapi Called from file ocaml/xapi/rbac.ml, line 205
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 4/8 xapi Called from file ocaml/xapi/server_helpers.ml, line 95
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 5/8 xapi Called from file ocaml/xapi/server_helpers.ml, line 113
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 6/8 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 7/8 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace] 8/8 xapi Called from file lib/backtrace.ml, line 177
Oct 3 11:34:24 XCP55 xapi: [error||8816 /var/lib/xcp/xapi||backtrace]
Oct 3 11:34:24 XCP55 xapi: [error||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Import failed: failed to create Network with name_label Pool-wide network associated with eth0 on VLAN11
Oct 3 11:34:24 XCP55 xapi: [error||8800 HTTPS 10.10.10.57->:::80|VM.import R:269b6e3fdd34|import] Caught exception in import: INVALID_VALUE: [ bridge; xapi1 ]
Oct 3 11:34:24 XCP55 xapi: [debug||8817 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:9cb4342e73e2 created by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [ info||8817 /var/lib/xcp/xapi|session.slave_login D:8784583e6172|xapi_session] Session.create trackid=2f17fc053e7e09f323c4b973c22fc02c pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [debug||8818 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:eb92071215a7 created by task D:8784583e6172
Oct 3 11:34:24 XCP55 xapi: [debug||8819 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.destroy D:c98e01df046f created by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [ info||8819 /var/lib/xcp/xapi||taskhelper] task VM.destroy R:ca9ded8caef3 (uuid:c93118e0-1b98-1c5d-086a-76aa5a189c2d) created (trackid=2f17fc053e7e09f323c4b973c22fc02c) by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [debug||8819 /var/lib/xcp/xapi|VM.destroy R:ca9ded8caef3|audit] VM.destroy: VM = 'e6a49727-110e-2e25-287d-b706951e9161 (XOA)'
Oct 3 11:34:24 XCP55 xapi: [debug||8819 /var/lib/xcp/xapi|VM.destroy R:ca9ded8caef3|xapi_vm_helpers] VM.destroy: deleting DB records
Oct 3 11:34:24 XCP55 xapi: [debug||8820 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.destroy D:5ef6f61c2f76 created by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [ info||8820 /var/lib/xcp/xapi||taskhelper] task VDI.destroy R:ed9f5fc50a0b (uuid:b4f54201-1cf9-f46c-9f4c-19694d00d14c) created (trackid=2f17fc053e7e09f323c4b973c22fc02c) by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [debug||8820 /var/lib/xcp/xapi|VDI.destroy R:ed9f5fc50a0b|audit] VDI.destroy: VDI = '5a57f11e-be24-481a-8e89-b37069585c08'
Oct 3 11:34:24 XCP55 xapi: [debug||8820 /var/lib/xcp/xapi|VDI.destroy R:ed9f5fc50a0b|message_forwarding] Marking SR for VDI.destroy (task=OpaqueRef:ed9f5fc5-0a0b-458f-8858-247e333dc1f9)
Oct 3 11:34:24 XCP55 xapi: [ info||8820 /var/lib/xcp/xapi|VDI.destroy R:ed9f5fc50a0b|storage_impl] VDI.destroy dbg:OpaqueRef:ed9f5fc5-0a0b-458f-8858-247e333dc1f9 sr:70fc7ee7-8eb2-0cca-0807-83dba8917d5c vdi:5a57f11e-be24-481a-8e89-b37069585c08
Oct 3 11:34:24 XCP55 xapi: [debug||8821 ||dummytaskhelper] task VDI.destroy D:a94b9e39269f created by task R:ed9f5fc50a0b
Oct 3 11:34:24 XCP55 xapi: [debug||8821 |VDI.destroy D:a94b9e39269f|sm] SM ext vdi_delete sr=OpaqueRef:6aaef256-69c8-4928-8199-957dc4d01202 vdi=OpaqueRef:0b1d4430-214b-466a-b5df-e6246544f682
Oct 3 11:34:24 XCP55 xapi: [ info||8821 |sm_exec D:73be5872978b|xapi_session] Session.create trackid=94d13e7939f10383fc10c4f3503f9c6f pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [debug||8822 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:613297a97154 created by task D:73be5872978b
Oct 3 11:34:24 XCP55 xapi: [debug||8823 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:host.get_other_config D:4e078575ee73 created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8824 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_other_config D:8c72ca70ce0b created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [ info||8825 /var/lib/xcp/xapi|session.login_with_password D:133bcd0ae16f|xapi_session] Session.create trackid=ae9ea8a24a7e9e3eaea57666905e955f pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [debug||8826 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:a3ee39a6ee41 created by task D:133bcd0ae16f
Oct 3 11:34:24 XCP55 xapi: [ info||8834 /var/lib/xcp/xapi|session.logout D:0ea100b8985d|xapi_session] Session.destroy trackid=ae9ea8a24a7e9e3eaea57666905e955f
Oct 3 11:34:24 XCP55 xapi: [debug||8835 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.get_by_uuid D:c427804d38ef created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8836 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VDI.db_forget D:b3211d50868f created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [ info||8836 /var/lib/xcp/xapi||taskhelper] task VDI.db_forget R:0444b480c4f2 (uuid:c1e60f49-fbf6-a978-02fc-89b7eb2eba83) created (trackid=94d13e7939f10383fc10c4f3503f9c6f) by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8836 /var/lib/xcp/xapi|VDI.db_forget R:0444b480c4f2|xapi_vdi] db_forget uuid=5a57f11e-be24-481a-8e89-b37069585c08
Oct 3 11:34:24 XCP55 xapi: [debug||8837 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_virtual_allocation D:c00facb0a49c created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8838 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.get_by_uuid D:7b3016885891 created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8839 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_virtual_allocation D:efa488742d4c created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8840 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_physical_size D:3d15163a9337 created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [debug||8841 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:SR.set_physical_utilisation D:13d788b1bcc6 created by task D:a94b9e39269f
Oct 3 11:34:24 XCP55 xapi: [ info||8842 /var/lib/xcp/xapi|session.login_with_password D:32d41049cdd7|xapi_session] Session.create trackid=c1a91205d46beaa8fd0f72c871cc7890 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [ info||8843 /var/lib/xcp/xapi|session.login_with_password D:91f9a8162270|xapi_session] Session.create trackid=4255185b7b399215999d881ccfd9bd69 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [debug||8844 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:bbc704b86901 created by task D:32d41049cdd7
Oct 3 11:34:24 XCP55 xapi: [debug||8845 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:2e27b0f38226 created by task D:91f9a8162270
Oct 3 11:34:24 XCP55 xapi: [ info||8859 /var/lib/xcp/xapi|session.logout D:77ab4a50a888|xapi_session] Session.destroy trackid=c1a91205d46beaa8fd0f72c871cc7890
Oct 3 11:34:24 XCP55 xapi: [ info||8860 /var/lib/xcp/xapi|session.logout D:3a423443cf9e|xapi_session] Session.destroy trackid=4255185b7b399215999d881ccfd9bd69
Oct 3 11:34:24 XCP55 xapi: [ info||8861 /var/lib/xcp/xapi|session.login_with_password D:ad7636f29052|xapi_session] Session.create trackid=22e9b2dee5433c30a4a7a7cfe803e354 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Oct 3 11:34:24 XCP55 xapi: [debug||8862 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:14b2b200097d created by task D:ad7636f29052
Oct 3 11:34:24 XCP55 xapi: [ info||8821 |sm_exec D:73be5872978b|xapi_session] Session.destroy trackid=94d13e7939f10383fc10c4f3503f9c6f
Oct 3 11:34:24 XCP55 xapi: [debug||8820 /var/lib/xcp/xapi|VDI.destroy R:ed9f5fc50a0b|message_forwarding] Unmarking SR after VDI.destroy (task=OpaqueRef:ed9f5fc5-0a0b-458f-8858-247e333dc1f9)
Oct 3 11:34:24 XCP55 xapi: [debug||8872 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:1554b9af87b0 created by task R:d59db384ea2e
Oct 3 11:34:24 XCP55 xapi: [ info||8872 /var/lib/xcp/xapi|session.logout D:3189c746e134|xapi_session] Session.destroy trackid=2f17fc053e7e09f323c4b973c22fc02c
Oct 3 11:34:24 XCP55 xapi: [ warn||8801 ||pervasiveext] finally: Error while running cleanup after failure of main function: (Failure "Decompression via zcat failed: exit code 1")
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] VM.import R:269b6e3fdd34 failed with exception Server_error(INVALID_VALUE, [ bridge; xapi1 ])
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] Raised Server_error(INVALID_VALUE, [ bridge; xapi1 ])
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 1/22 xapi Raised at file ocaml/xapi-client/client.ml, line 7
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 2/22 xapi Called from file ocaml/xapi-client/client.ml, line 19
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 3/22 xapi Called from file ocaml/xapi-client/client.ml, line 9676
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 4/22 xapi Called from file ocaml/xapi/import.ml, line 93
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 5/22 xapi Called from file ocaml/xapi/import.ml, line 97
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 6/22 xapi Called from file ocaml/xapi/import.ml, line 1136
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 7/22 xapi Called from file list.ml, line 110
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 8/22 xapi Called from file list.ml, line 110
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 9/22 xapi Called from file ocaml/xapi/import.ml, line 1861
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 10/22 xapi Called from file ocaml/xapi/import.ml, line 1881
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 11/22 xapi Called from file ocaml/xapi/import.ml, line 2189
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 12/22 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 13/22 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 14/22 xapi Called from file lib/open_uri.ml, line 20
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 15/22 xapi Called from file lib/open_uri.ml, line 20
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 16/22 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 17/22 xapi Called from file ocaml/xapi/rbac.ml, line 205
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 18/22 xapi Called from file ocaml/xapi/server_helpers.ml, line 95
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 19/22 xapi Called from file ocaml/xapi/server_helpers.ml, line 113
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 20/22 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 21/22 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace] 22/22 xapi Called from file lib/backtrace.ml, line 177
Oct 3 11:34:24 XCP55 xapi: [error||8800 :::80||backtrace]
Oct 3 11:34:24 XCP55 xapi: [ info||8873 /var/lib/xcp/xapi|session.logout D:18cedef1668a|xapi_session] Session.destroy trackid=22e9b2dee5433c30a4a7a7cfe803e354
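In case it helps, since the import dies creating a network with bridge xapi1, here is what I can run on this host to list the networks/bridges that already exist (standard commands, output omitted here):
xe network-list params=uuid,name-label,bridge
ovs-vsctl list-br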
Hi @TS79, thank you for your assistance! I did remove the restriction on the TrueNAS side; there is no listed IP, which means it will allow ALL:
Thanks again!
SW
@olivierlambert Just tried it and it's still not working:
@olivierlambert Thanks for your assistance! I just tried installing XO on the new host, but when I click on the "Deploy" button, I'm getting the following error:
Xen Orchestra, commit 6c258
Hi,
I'm trying to set up an SR NFS on a new host pointing to a TrueNAS server, but XO is returning the following error:
sr.createNfs
{
"host": "0ff18896-df39-4e6e-910f-c9cfb7b80414",
"nameLabel": "TrueNAS",
"nameDescription": "Data/Backup Migration",
"server": "10.10.2.3",
"serverPath": "/mnt/Tank/"
}
{
"code": "SR_BACKEND_FAILURE_88",
"params": [
"",
"NFS SR creation error [opterr=remote directory creation error is 13]",
""
],
"call": {
"method": "SR.create",
"params": [
"OpaqueRef:a5e4f76b-571c-4573-a8c9-c6dfd20cbc90",
{
"server": "10.10.2.3",
"serverpath": "/mnt/Tank/"
},
0,
"TrueNAS",
"Data/Backup Migration",
"nfs",
"user",
true,
{}
]
},
"message": "SR_BACKEND_FAILURE_88(, NFS SR creation error [opterr=remote directory creation error is 13], )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_88(, NFS SR creation error [opterr=remote directory creation error is 13], )
at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
at file:///opt/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21
at runNextTicks (node:internal/process/task_queues:60:5)
at processImmediate (node:internal/timers:454:9)
at process.callbackTrampoline (node:internal/async_hooks:130:17)"
}
I have other xcp-ng hosts which connect fine to this TrueNAS server. However, I'm not sure if I used a custom NFS option in XO to connect to this TrueNAS, as I set this up a while back on the other xcp-ng hosts. How do I find out what NFS options I might have used on the other xcp-ng hosts that are able to connect to the TrueNAS server, so I can use the same settings on this new host?
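One thing I can check on one of the working hosts is the device-config of its PBD for that SR, which should show whatever NFS options were used there (UUIDs and labels are placeholders):
xe sr-list name-label=<existing-sr-name> params=uuid
xe pbd-list sr-uuid=<sr-uuid> params=device-config,host-name-label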
On the TrueNAS server, I have the following settings:
And on the new xcp-ng host, I have the following:
I'm able to ping and run showmount from the new host to the TrueNAS server:
I'm not sure what else the issue could be. I'd greatly appreciate any assistance.
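In case it helps narrow things down, error 13 looks like it could simply be EACCES (permission denied) while the SR driver tries to create its UUID subdirectory, so I'm going to try a manual mount and write test from dom0 on the new host (same IP/path as in the failing call above):
mkdir -p /mnt/nfstest
mount -t nfs 10.10.2.3:/mnt/Tank /mnt/nfstest
touch /mnt/nfstest/xcp-write-test
umount /mnt/nfstest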
I also tried setting up NFS via XO Remotes and that worked fine with no errors:
Best Regards,
SW
Xen Orchestra, commit 32228
Master, commit 32228
Hi,
I have a two-host pool that uses shared storage for all VMs. When I try to perform a VM migration, using either the management network (1Gbps) or the 20Gbps NFS network, the live migration can take a long time (see example below):
I thought that when a VM is on the pool's shared storage, moving it should be very fast, since there is no disk data to move, only the VM's RAM.
Both hosts' Control Domains have 16 GB of memory assigned.
Any recommendations on what could be causing the slow live migration if the VM resides on the pool shared storage?
Also, our XOCE VM resides on a different pool than the one where I'm performing the live migration. I wasn't sure if that matters or not.
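For what it's worth, this is what I can run on the pool master to watch the migration progress while it runs (just the standard task list):
xe task-list params=uuid,name-label,progress,status
watch -n 10 'xe task-list params=name-label,progress'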
Thank You,
SW
Thank you all! I was on an older release of XOCE. I haven't wanted to update, as the last few times I did, it broke the nightly backups, so I stuck with an older version of XOCE that has been stable in our environment.
I updated XOCE to the latest build:
Xen Orchestra, commit 32228
Master, commit 32228
I was able to manually update the second host (by running yum update). Then I restarted the pool master, and now I'm trying to vacate VMs from the second host back to the master so I can reboot the second host.
However, the live migration seems to be taking much longer from the second host back to the master. All VMs use shared storage connected via NFS to this pool. I was thinking a live migrate should only take a few minutes, but so far it's been 12+ minutes and it's only at 12%:
The network I selected for the live migration is 20GbE (a bond of two 10GbE links).
UPDATE: The live migration sped up and completed in about 16 minutes instead of the 1 hour it was estimating.
Hi,
I just tried performing a Rolling Pool Update on a pool with 2 hosts, and in the XO tasks I'm seeing the following failed entries (see below).
I'm not sure how best to proceed. All running VMs have been evacuated from the pool master and are running on the second host. XO now shows the pool master is "Disabled":
Any recommendations on what I should do next?
Here are the XO tasks that failed during the rolling update:
Rolling pool update 2024-08-12 20:45
id "0lzrpb9pj"
properties
poolId "e52fca9f-4a2f-b24c-8c37-b7891946a82a"
poolName "Production"
name "Rolling pool update"
userId "6356451e-8fab-4588-a2f0-682b34f8f684"
start 1723509930727
status "failure"
updatedAt 1723510634068
tasks
0
id "hsb0p4ig9nc"
properties
name "Listing missing patches"
total 2
progress 100
done 2
start 1723509930730
status "success"
tasks
0
id "swldkb837u"
properties
name "Listing missing patches for host 8d6e2b03-54c1-4523-b45b-d572a268f2cc"
hostId "8d6e2b03-54c1-4523-b45b-d572a268f2cc"
hostName "XCP25"
start 1723509930731
status "success"
end 1723509930731
1
id "ui4qgc07lk"
properties
name "Listing missing patches for host a2c8f50f-0555-44e0-bcd4-1454b6e407a1"
hostId "a2c8f50f-0555-44e0-bcd4-1454b6e407a1"
hostName "XCP35"
start 1723509930731
status "success"
end 1723509930731
end 1723509930731
1
id "m2f4j80sayd"
properties
name "Updating and rebooting"
start 1723509930732
status "failure"
tasks
0
id "kp5is2v7nm"
properties
name "Restarting hosts"
total 2
progress 0
done 0
start 1723509930945
status "failure"
tasks
0
id "szi12dqrq"
properties
name "Restarting host 8d6e2b03…-4523-b45b-d572a268f2cc"
hostId "8d6e2b03-54c1-4523-b45b-d572a268f2cc"
hostName "XCP25"
start 1723509930946
status "failure"
tasks
0 {…}
1 {…}
end 1723510634067
result
message "Text data outside of roo…nColumn: 21393\nChar: }"
name "Error"
stack "Error: Text data outside…-mixins/api.mjs:366:20)"
end 1723510634067
result
message "Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }"
name "Error"
stack "Error: Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }\n at error (/opt/xen-orchestra/node_modules/sax/lib/sax.js:652:10)\n at strictFail (/opt/xen-orchestra/node_modules/sax/lib/sax.js:678:7)\n at SAXParser.write (/opt/xen-orchestra/node_modules/sax/lib/sax.js:1036:15)\n at parseXmlTree (/opt/xen-orchestra/@vates/xml/parse.js:20:10)\n at Xapi._xcpUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:314:58)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:545:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:113:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:99:13\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Xapi.rollingPoolReboot (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:90:5)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:530:7\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at XenServers.rollingPoolUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/xen-servers.mjs:689:5)\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Api.#callApiMethod (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:366:20)"
end 1723510634067
result
message "Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }"
name "Error"
stack "Error: Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }\n at error (/opt/xen-orchestra/node_modules/sax/lib/sax.js:652:10)\n at strictFail (/opt/xen-orchestra/node_modules/sax/lib/sax.js:678:7)\n at SAXParser.write (/opt/xen-orchestra/node_modules/sax/lib/sax.js:1036:15)\n at parseXmlTree (/opt/xen-orchestra/@vates/xml/parse.js:20:10)\n at Xapi._xcpUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:314:58)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:545:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:113:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:99:13\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Xapi.rollingPoolReboot (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:90:5)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:530:7\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at XenServers.rollingPoolUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/xen-servers.mjs:689:5)\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Api.#callApiMethod (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:366:20)"
end 1723510634068
result
message "Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }"
name "Error"
stack "Error: Text data outside of root node.\nLine: 0\nColumn: 21393\nChar: }\n at error (/opt/xen-orchestra/node_modules/sax/lib/sax.js:652:10)\n at strictFail (/opt/xen-orchestra/node_modules/sax/lib/sax.js:678:7)\n at SAXParser.write (/opt/xen-orchestra/node_modules/sax/lib/sax.js:1036:15)\n at parseXmlTree (/opt/xen-orchestra/@vates/xml/parse.js:20:10)\n at Xapi._xcpUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:314:58)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:545:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:113:17\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:99:13\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Xapi.rollingPoolReboot (file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/pool.mjs:90:5)\n at file:///opt/xen-orchestra/packages/xo-server/src/xapi/mixins/patching.mjs:530:7\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at XenServers.rollingPoolUpdate (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/xen-servers.mjs:689:5)\n at Task.runInside (/opt/xen-orchestra/@vates/task/index.js:175:22)\n at Task.run (/opt/xen-orchestra/@vates/task/index.js:158:20)\n at Api.#callApiMethod (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:366:20)"
I logged into the pool master and it seems XO installed all of the pending updates:
[21:14 XCP25 ~]# yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Excluding mirror: updates.xcp-ng.org
* xcp-ng-base: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
* xcp-ng-updates: mirrors.xcp-ng.org
No packages marked for update
And /var/log/yum.log shows the following packages were installed:
Aug 12 20:55:06 Updated: xen-libs-4.13.5-9.40.2.xcpng8.2.x86_64
Aug 12 20:55:06 Updated: xcp-ng-release-presets-8.2.1-11.x86_64
Aug 12 20:55:06 Updated: openssh-7.4p1-23.2.1.xcpng8.2.x86_64
Aug 12 20:55:06 Updated: message-switch-1.23.2-17.1.xcpng8.2.x86_64
Aug 12 20:55:07 Updated: xen-hypervisor-4.13.5-9.40.2.xcpng8.2.x86_64
Aug 12 20:55:07 Updated: xen-dom0-libs-4.13.5-9.40.2.xcpng8.2.x86_64
Aug 12 20:55:07 Updated: xen-tools-4.13.5-9.40.2.xcpng8.2.x86_64
Aug 12 20:55:07 Updated: xen-dom0-tools-4.13.5-9.40.2.xcpng8.2.x86_64
Aug 12 20:55:07 Updated: xenopsd-0.150.19-3.1.xcpng8.2.x86_64
Aug 12 20:55:11 Updated: 2:qemu-4.2.1-4.6.4.1.xcpng8.2.x86_64
Aug 12 20:55:11 Updated: forkexecd-1.18.3-10.1.xcpng8.2.x86_64
Aug 12 20:55:12 Updated: vhd-tool-0.43.0-18.1.xcpng8.2.x86_64
Aug 12 20:55:12 Updated: libcurl-8.6.0-2.1.xcpng8.2.x86_64
Aug 12 20:55:12 Updated: curl-8.6.0-2.1.xcpng8.2.x86_64
Aug 12 20:55:15 Updated: xapi-core-1.249.36-1.2.xcpng8.2.x86_64
Aug 12 20:55:16 Updated: xenopsd-xc-0.150.19-3.1.xcpng8.2.x86_64
Aug 12 20:55:16 Updated: xenopsd-cli-0.150.19-3.1.xcpng8.2.x86_64
Aug 12 20:55:16 Updated: squeezed-0.27.0-18.1.xcpng8.2.x86_64
Aug 12 20:55:17 Updated: xapi-tests-1.249.36-1.2.xcpng8.2.x86_64
Aug 12 20:55:17 Updated: gpumon-0.18.0-18.1.xcpng8.2.x86_64
Aug 12 20:55:17 Updated: xcp-rrdd-1.33.4-4.1.xcpng8.2.x86_64
Aug 12 20:55:18 Updated: rrdd-plugins-1.10.9-12.1.xcpng8.2.x86_64
Aug 12 20:55:18 Updated: openssh-server-7.4p1-23.2.1.xcpng8.2.x86_64
Aug 12 20:55:19 Updated: openssh-clients-7.4p1-23.2.1.xcpng8.2.x86_64
Aug 12 20:55:19 Updated: xcp-ng-release-8.2.1-11.x86_64
Aug 12 20:55:22 Updated: xcp-ng-release-config-8.2.1-11.x86_64
Aug 12 20:55:23 Updated: sm-2.30.8-12.1.xcpng8.2.x86_64
Aug 12 20:55:23 Updated: sm-cli-0.23.0-61.1.xcpng8.2.x86_64
Aug 12 20:55:23 Updated: varstored-guard-0.6.2-15.xcpng8.2.x86_64
Aug 12 20:55:23 Updated: xapi-storage-11.19.0_sxm2-17.xcpng8.2.x86_64
Aug 12 20:55:24 Updated: xcp-networkd-0.56.2-15.xcpng8.2.x86_64
Aug 12 20:55:29 Updated: linux-firmware-20190314-11.1.xcpng8.2.noarch
Aug 12 20:55:29 Updated: xapi-xe-1.249.36-1.2.xcpng8.2.x86_64
Aug 12 20:55:29 Updated: sudo-1.9.15-2.1.xcpng8.2.x86_64
Aug 12 20:55:29 Updated: xapi-nbd-1.11.0-17.1.xcpng8.2.x86_64
Aug 12 20:55:30 Updated: xapi-storage-script-0.34.1-16.1.xcpng8.2.x86_64
Aug 12 20:55:30 Updated: 2:microcode_ctl-2.1-26.xs29.2.xcpng8.2.x86_64
Aug 12 20:55:31 Updated: rrd2csv-1.2.6-15.1.xcpng8.2.x86_64
Aug 12 20:55:31 Updated: tzdata-2024a-1.el7.noarch
Aug 12 20:55:31 Updated: xsconsole-10.1.13-1.2.xcpng8.2.x86_64
Aug 12 20:55:31 Updated: sm-rawhba-2.30.8-12.1.xcpng8.2.x86_64
Aug 12 20:55:31 Updated: wsproxy-1.12.0-19.xcpng8.2.x86_64
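Based on that, here is roughly what I'm planning to do next, assuming the updates really are installed (UUIDs are placeholders; corrections welcome):
xe host-list params=uuid,name-label,enabled    # confirm which host is still disabled
# reboot the pool master so the new xen/xapi packages take effect, then:
xe host-enable uuid=<master-uuid>              # only if it doesn't re-enable itself after the reboot
# finally move the VMs back and reboot the already-updated second host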
Thank You,
SW
@olivierlambert Thank you!
One follow-up question: I don't see that the plugin pulls in the existing alerts that were set via XenCenter. Any ideas how to delete those, if using XenCenter is no longer recommended?
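In case it's relevant, my understanding (which may well be wrong) is that XenCenter stores these performance alerts in an other-config:perfmon key on each VM/host, so I'm guessing something like this would show and remove them (placeholder UUIDs):
xe vm-param-get uuid=<vm-uuid> param-name=other-config param-key=perfmon
xe vm-param-remove uuid=<vm-uuid> param-name=other-config param-key=perfmon
# (and host-param-get / host-param-remove for host-level alerts)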
Thank you,
SW
Hi,
Sorry if this is a simple question, but I'm trying to figure out how to set and manage alerts in XO. I searched the docs but couldn't find where in the XO UI I would set an alert.
It seems the alerts I set years ago using the XenCenter PC app are still working, but I need to tweak some and delete others, and I was hoping I could do so via the XO UI.
Thank you,
SW