@Danp Interesting. That will be it. Thanks for linking this.
In the meantime, I've put in a request with the Australian government to move us closer to Europe.
Posts
-
RE: XOA Failing on Check-in
I was able to fix it on mine by disabling IPv6 (which we don't run anyway).
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
To verify that IPv6 is disabled, run:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If the output is 1, IPv6 is disabled. This is a temporary fix until the next reboot. Read here for a permanent solution:
https://bobcares.com/blog/debian-12-disable-ipv6/
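For reference, the permanent version is typically just the same two settings persisted through sysctl's config files (a minimal sketch assuming the standard Debian /etc/sysctl.d mechanism, not necessarily the exact steps from the linked article):

# Persist the settings so they survive reboots (assumes a Debian-based XOA):
echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
echo 'net.ipv6.conf.default.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.d/99-disable-ipv6.conf
sudo sysctl --system   # reload settings from all sysctl configuration files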
After disabling IPv6, 'xoa check' immediately started working.
-
RE: XOA Failing on Check-in
@acomav Replying to myself again. After working for a few days, the issue has returned. I'll raise a ticket.
-
RE: XOA Failing on Check-in
Replying to myself here for an update.
I reinstalled the XOA appliance and imported my config (on a different host, in a different pool).
That took me back to XOA v5.98.1. Internet connectivity was fine.
I stayed on the Stable channel and went up to 5.99.1. Internet connectivity was fine. I have a pool issue where the XOA used to run, and I can't fix it until tonight (I need to upgrade and reboot the master).
-
RE: XOA Failing on Check-in
Hi,
I have also started having this issue. My error:
✖ 15/16 - Internet connectivity: AggregateError [ETIMEDOUT]:
    at internalConnectMultiple (node:net:1118:18)
    at internalConnectMultiple (node:net:1186:5)
    at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
    at listOnTimeout (node:internal/timers:583:11)
    at process.processTimers (node:internal/timers:519:7) {
  code: 'ETIMEDOUT',
  url: 'http://xen-orchestra.com/',
  [errors]: [
    Error: connect ETIMEDOUT 185.78.159.93:80
        at createConnectionError (node:net:1648:14)
        at Timeout.internalConnectMultipleTimeout (node:net:1707:38)
        at listOnTimeout (node:internal/timers:583:11)
        at process.processTimers (node:internal/timers:519:7) {
      errno: -110,
      code: 'ETIMEDOUT',
      syscall: 'connect',
      address: '185.78.159.93',
      port: 80
    },
    Error: connect ENETUNREACH 2a01:240:ab08::4:80 - Local (:::0)
        at internalConnectMultiple (node:net:1182:16)
        at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
        at listOnTimeout (node:internal/timers:583:11)
        at process.processTimers (node:internal/timers:519:7) {
      errno: -101,
      code: 'ENETUNREACH',
      syscall: 'connect',
      address: '2a01:240:ab08::4',
      port: 80
    }
  ]
}
I have two XOA appliances running in different locations. One works fine, but its XOA version is 5.95.1.
The one that has started failing is running the latest version, 5.100.2. Traceroutes from the working XOA reach the destination (I'm in Australia, hence the long response times):
...
16 prs-b1-link.ip.twelve99.net (62.115.125.167) 282.574 ms 282.700 ms freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 303.985 ms
17 freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 302.850 ms 302.835 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 309.182 ms
18 cpe-et008453.cust.jaguar-network.net (85.31.197.135) 310.999 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 308.157 ms 308.477 ms
19 * cpe-et008453.cust.jaguar-network.net (85.31.197.135) 318.785 ms 309.982 ms

From the non-working XOA:
...
10 * be803.lsr01.prth.wa.vocus.network (103.1.76.147) 106.498 ms be803.lsr01.stpk.wa.vocus.network (103.1.76.145) 109.750 ms
11 * * *
12 * * *
13 * * *
14 mei-b5-link.ip.twelve99.net (62.115.134.228) 244.552 ms mei-b5-link.ip.twelve99.net (62.115.113.2) 243.988 ms mei-b5-link.ip.twelve99.net (62.115.124.123) 258.259 ms
15 freeprosas-ic-373578.ip.twelve99-cust.net (62.115.35.93) 256.427 ms * *
16 be1.er02.lyo03.jaguar-network.net (85.31.194.151) 279.685 ms 276.070 ms *

On the new XOA, I can manually telnet to 185.78.159.93 on port 80 and get a response, so I am at a loss.
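Since the trace shows the IPv4 attempt timing out and the IPv6 attempt failing with ENETUNREACH, a quick follow-up is to test each address family separately against the same endpoint (a sketch; it assumes curl is available on the appliance):

# Force IPv4, then IPv6, against the check-in endpoint:
curl -4 -sv -o /dev/null --connect-timeout 10 http://xen-orchestra.com/
curl -6 -sv -o /dev/null --connect-timeout 10 http://xen-orchestra.com/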
It is not affecting day-to-day work.
I was going to download the latest version of the XOA appliance, import my config, and see if that does the trick... unless anyone here has any other tests to run?
-
RE: Updated XOA with kernel >5.3 to support nconnect nfs option
@olivierlambert Hi Olivier. I'll see what I can do. I've spent the weekend cleaning up my backups and catching up on mirror transfers. Once completed, I'll do a few custom backups at various nconnect values.
-
RE: Updated XOA with kernel >5.3 to support nconnect nfs option
Just replying to thank you for pointing this out. I have been having very poor backup speeds for over a month and this sorted it out.
I have only used nconnect=4 and 6 for my NFS shares.
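For anyone landing here later, the option just goes into the NFS mount options (a minimal sketch with a hypothetical server and export path; the client kernel must be 5.3 or newer):

# Open 4 parallel TCP connections to the NFS server:
mount -t nfs -o vers=4.1,nconnect=4 nas.example.com:/export/backups /mnt/backups

-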
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert
I can confirm it was my side. I had to do a few things to get the VMware Virtual disks to free up empty space and once I did, the VM Import to XCP-NG to an NFS SR successfully copied the virtual disk in a thin mode.
For anyone reading this who is preparing to jump ship off VMware: I am using vSphere 6.7. I have not tested against vSphere 7 yet, and I'm not bothering with vSphere 8 for obvious reasons. My VM was a CentOS 7 VM with LVM managing the 3 virtual disks.
- Make sure your Virtual Hardware is at least version 11. My test VM was a very old one, still on version 8.
- For the ESXi host the VM lives on (though you should probably do all hosts in the cluster), go into Advanced Settings and enable (change 0 to 1) VMFS3.EnableBlockDelete. I thought I had this enabled, but only 2 of the 5 hosts in the cluster did. You may need to check that this is not reset after updates.
- Perhaps due to using CentOS 7, I could not use 'fstrim' with the discard mount option; it was not supported. Instead, I filled the free space with zeroes, synced, and then removed the zeroes (for newer guests, see the fstrim sketch after this list):
# cd /mount/point; dd if=/dev/zero of=./zeroes bs=1M count=1024; sync; rm zeroes
Change count=1024 (which creates a 1 GB file of zeroes) to however big a file you need to nearly fill the partition/volume; e.g. count=10240 makes a 10 GB file.
Windows users can use 'sdelete'. I could have waited for vSphere to automatically clean up the datastore in the background at this stage, but I was impatient and used Storage vMotion to move the virtual disks to NFS storage in thin mode. I confirmed only the used space was copied across. I then migrated the disks back to my HPE Nimble SAN and retained thin provisioning.
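On a newer guest where the virtual disk and filesystem support discard end to end, 'fstrim' is the simpler alternative to the zero-fill trick above (a minimal sketch; the mount point is from my setup, adjust to yours):

# Trim free space on one mounted filesystem, or on all mounts that support it:
sudo fstrim -v /var/spool/imap
sudo fstrim -av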
-
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert Hi.
The disk sizes (and vmdk file sizes) are 150 GB and 170 GB. Both are in one volume group, with a single logical volume using 100% of the volume group, mounted using XFS. Disk space in use is 81%:
# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  centos     lvm2 a--   <15.51g    0
  /dev/sdb   VolGroup01 lvm2 a--  <150.00g    0
  /dev/sdc   VolGroup01 lvm2 a--  <170.00g    0
# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup01   2   1   0 wz--n- 319.99g    0
  centos       1   2   0 wz--n- <15.51g    0
# lvs
  LV        VG         Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  IMAPSpool VolGroup01 -wi-ao---- 319.99g
# df -h
  /dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
The vmdk files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the vmdk sizes smaller (as they are thin provisioned on VMFS).
I'll do that, retry, and report back if I still see the full disk being written out to XCP-ng.
-
RE: Import from VMware fails after upgrade to XOA 5.91
@florent The VM is on an NFS SR which is thin provisioned. LVM is inside the VM on the virtual disks.
-
RE: Import from VMware fails after upgrade to XOA 5.91
Hi, a question about these patches and thin provisioning.
My test import now works; however, it provisioned the full size of the disks on an NFS SR.
[root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
total 540G
4.0K drwxr-xr-x  2 root root 4.0K Feb  6 09:33 .
4.0K drwxr-xr-x 27 root root 4.0K Feb  1 21:22 ..
151G -rw-r--r--  1 root root 151G Feb  6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
 13G -rw-r--r--  1 root root  13G Feb  6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
171G -rw-r--r--  1 root root 171G Feb  6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
The two large disks are in an LVM VG on the source and, combined, use 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.
Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the xcp-ng side?
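One way to check whether the VHDs are at least sparse on the NFS side (a small sketch; 'du' reports blocks actually allocated, while '--apparent-size' reports the logical size):

# Compare apparent vs allocated size of the imported VHDs:
du -h --apparent-size /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/*.vhd
du -h /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/*.vhd

In the 'ls -salh' output above, the first column (allocated blocks) already matches the file sizes, so these files do look fully allocated rather than sparse.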
Not a big deal, I am just curious.
Thanks
-
RE: Import from VMware fails after upgrade to XOA 5.91
@florent
Thanks. I have kicked off an import, but it takes 2 hours. However, the first small virtual disk has now imported successfully where it was previously failing, so I am confident the rest will work. Will update then. Thanks.
-
RE: Import from VMware fails after upgrade to XOA 5.91
I patched my XO-from-sources VM with the latest from 5th Feb and still had the same error:
"stack": "Error: no opaque ref found in undefined
It may be that I am not patching correctly, so I have added an XOA trial, moved to the 'latest' channel, and pinged @florent with a support tunnel to test in the morning.
-
RE: Import from VMware fails after upgrade to XOA 5.91
@acomav
Replying to myself: I redid the job with a snapshot from a running VM to a local SR. The same issue occurred at the same point.
-
RE: Import from VMware fails after upgrade to XOA 5.91
I came across the same error today before seeing this thread, while importing a 3-disk VM (powered off).
The first, smaller disk failed first.
I saw the post about the patch and applied it to my XO-from-sources VM (Ronivay Debian image with the disk extended to 30 GB).
I then tried a live import (with snapshot) of a 10 GB single-disk VM to a local thick LVM SR, and it was successful.
I retried the big VM to an NFS SR and it failed in the same spot.

Feb 03 10:41:42 xo-ce xo-server[2902]: 2024-02-03T10:41:42.888Z xo:xo-server WARN possibly unhandled rejection {
Feb 03 10:41:42 xo-ce xo-server[2902]:   error: Error: already finalized or destroyed
Feb 03 10:41:42 xo-ce xo-server[2902]:       at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/tar-stream/pack.js:138:51)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/promise-toolbox/fromCallback.js:5:6)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/debuggability.js:384:9)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/promise.js:518:18)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at new Promise (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/promise.js:103:10)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/promise-toolbox/fromCallback.js:9:10)
Feb 03 10:41:42 xo-ce xo-server[2902]:       at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/_writeDisk.mjs:9:22)
Feb 03 10:41:42 xo-ce xo-server[2902]: }
Feb 03 10:41:45 xo-ce xo-server[2902]: root@10.1.4.10 Xapi#putResource /import/ XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ])
Feb 03 10:41:45 xo-ce xo-server[2902]:     at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/_XapiError.mjs:16:12)
Feb 03 10:41:45 xo-ce xo-server[2902]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/_getTaskResult.mjs:11:29)
Feb 03 10:41:45 xo-ce xo-server[2902]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1006:24)
Feb 03 10:41:45 xo-ce xo-server[2902]:     at file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1040:14
Feb 03 10:41:45 xo-ce xo-server[2902]:     at Array.forEach (<anonymous>)
Feb 03 10:41:45 xo-ce xo-server[2902]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1030:12)
Feb 03 10:41:45 xo-ce xo-server[2902]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1203:14) {
Feb 03 10:41:45 xo-ce xo-server[2902]:   code: 'IMPORT_ERROR',
Feb 03 10:41:45 xo-ce xo-server[2902]:   params: [ 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]' ],
Feb 03 10:41:45 xo-ce xo-server[2902]:   call: undefined,
Feb 03 10:41:45 xo-ce xo-server[2902]:   url: undefined,
Feb 03 10:41:45 xo-ce xo-server[2902]:   task: task {
Feb 03 10:41:45 xo-ce xo-server[2902]:     uuid: 'e1ed657e-165c-0a78-2b72-3096b0550fed',
Feb 03 10:41:45 xo-ce xo-server[2902]:     name_label: '[XO] VM import',
Feb 03 10:41:45 xo-ce xo-server[2902]:     name_description: '',
Feb 03 10:41:45 xo-ce xo-server[2902]:     allowed_operations: [],
Feb 03 10:41:45 xo-ce xo-server[2902]:     current_operations: {},
Feb 03 10:41:45 xo-ce xo-server[2902]:     created: '20240203T10:32:22Z',
Feb 03 10:41:45 xo-ce xo-server[2902]:     finished: '20240203T10:41:45Z',
Feb 03 10:41:45 xo-ce xo-server[2902]:     status: 'failure',
Feb 03 10:41:45 xo-ce xo-server[2902]:     resident_on: 'OpaqueRef:e44d0112-ac22-4037-91d3-6394943789fd',
Feb 03 10:41:45 xo-ce xo-server[2902]:     progress: 1,
Feb 03 10:41:45 xo-ce xo-server[2902]:     type: '<none/>',
Feb 03 10:41:45 xo-ce xo-server[2902]:     result: '',
Feb 03 10:41:45 xo-ce xo-server[2902]:     error_info: [
Feb 03 10:41:45 xo-ce xo-server[2902]:       'IMPORT_ERROR',
Feb 03 10:41:45 xo-ce xo-server[2902]:       'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]'
Feb 03 10:41:45 xo-ce xo-server[2902]:     ],
Feb 03 10:41:45 xo-ce xo-server[2902]:     other_config: { object_creation: 'complete' },
Feb 03 10:41:45 xo-ce xo-server[2902]:     subtask_of: 'OpaqueRef:NULL',
Feb 03 10:41:45 xo-ce xo-server[2902]:     subtasks: [],
Feb 03 10:41:45 xo-ce xo-server[2902]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2021))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 92)))'
Feb 03 10:41:45 xo-ce xo-server[2902]:   }
Feb 03 10:41:45 xo-ce xo-server[2902]: }
Feb 03 10:41:45 xo-ce xo-server[2902]: 2024-02-03T10:41:45.956Z xo:api WARN admin@admin.net | vm.importMultipleFromEsxi(...) [9m] =!> Error: no opaque ref found
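Given the ENOSPC in that XAPI backtrace, it may be worth confirming the destination SR's headroom before the next attempt (a sketch using standard xe commands on the pool master, with a hypothetical SR name):

# Show physical size vs utilisation for the destination SR:
xe sr-list name-label="NFS SR" params=uuid,physical-size,physical-utilisation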
I'm going to try the next test with the same 3-disk VM, but powered up with a snapshot, saving to a local LVM SR.
The XO error for the disk that failed:
{ "id": "38jiy3bsy5r", "properties": { "name": "Cold import of disks scsi0:0" }, "start": 1706956341748, "status": "failure", "end": 1706956905772, "result": { "message": "no opaque ref found", "name": "Error", "stack": "Error: no opaque ref found\n at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/importVm.mjs:28:19)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/importVdi.mjs:6:17)\n at file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402030246/@vates/task/index.js:158:22)\n at Task.run (/opt/xo/xo-builds/xen-orchestra-202402030246/@vates/task/index.js:141:20)" } },
-
RE: In XOA, change Virtual disk properties (device position)
Thanks @olivierlambert.
The problem is that one disk was already there running a PBX, so I could not shut it off. Also, for reasons related to its migration from Oracle Cloud to XCP-ng, that virtual disk ended up as /dev/xvdb for boot and OS. Adding a 2nd disk did not give me an option of where to place it, and it was added as xvda. When I did a fast snapshot clone of the running VM, the cloned VM wanted to boot off /dev/xvda.
Anyway, thanks for your response. I look forward to seeing XOA's evolution and this feature in the coming releases. Cheers
-
In XOA, change Virtual disk properties (device position)
Hi,
Apologies if this has been answered, but I could not find what I was after when searching. In XOA, is there a way in the interface to adjust the device position of a (2nd or 3rd) virtual disk of a VM? I had to resort to XCP-ng Center to make the change I was after.
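For anyone else who lands here, the same change can be made with the xe CLI (a sketch with hypothetical names and placeholder UUIDs; the VM should be halted, and note that destroying a VBD does not touch the underlying VDI):

# List the VM's disks and their current positions (userdevice):
xe vbd-list vm-name-label="My VM" params=uuid,userdevice,vdi-uuid,type
# Recreate the VBD at the desired position:
xe vbd-destroy uuid=<old-vbd-uuid>
xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 bootable=false mode=RW type=Disk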
Here is a XenServer description of what I was after:
https://docs.xenserver.com/en-us/xencenter/8-2/vms-storage-properties.html
Thanks