Best posts made by acomav
-
RE: Updated XOA with kernel >5.3 to support nconnect nfs option
@olivierlambert Hi Olivier. I'll see what I can do. I've spent the weekend cleaning up my backups and catching up on mirror transfers. Once completed, I'll do a few custom backups at various nconnect values.
-
RE: XOA Failing on Check-in
@acomav Replying to myself again. After working for a few days, the issue restarted. I'll raise a ticket.
-
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert
I can confirm it was on my side. I had to do a few things to get the VMware virtual disks to free up empty space, and once I did, the VM import to XCP-NG to an NFS SR successfully copied the virtual disk in thin mode.
For anyone reading this who is preparing to jump ship from VMware: I am using vSphere 6.7. I have not tested against vSphere 7 yet, and I'm not bothering with vSphere 8 for obvious reasons. My VM was a CentOS 7 VM with LVM managing the 3 virtual disks.
- Make sure your Virtual Hardware is at least version 11. My test VM was a very old one still on version 8.
- For the ESXi host the VM lives on (though you should probably do all hosts in the cluster), go into Advanced Settings and enable (change 0 to 1) VMFS3.EnableBlockDelete; there is an esxcli sketch at the end of this post. I thought I had this enabled, but only 2 of the 5 hosts in the cluster did. You may need to check that it does not get reset after updates.
- Due to using CentOS 7 (perhaps), I could not use 'fstrim' with the discard mount option; it was not supported. Instead, I filled the disk space with zeroes, synced, and then removed the zeroes:
# cd /path/to/mountpoint; dd if=/dev/zero of=./zeroes bs=1M count=1024; sync; rm zeroes
Change count=1024 (which creates a 1 GB file of zeroes) to however big a file you need to nearly fill the partition/volume, e.g. count=10240 will make a 10 GB file.
Windows users can use 'sdelete'.
I could have waited for vSphere to automatically clean up the datastore in the background at this stage, but I was impatient and used Storage vMotion to move the virtual disks to NFS storage in thin mode. I confirmed only the used space was copied across. I then migrated the disks back to my HPE Nimble SAN and retained thin provisioning.
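For reference, the VMFS3.EnableBlockDelete change above can also be checked and made over SSH with esxcli; a minimal sketch (double-check the option path against your ESXi build before relying on it):
# esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
# esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1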
-
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert Hi.
The disk sizes (and vmdk file sizes) are 150 GB and 170 GB. Both are in one volume group, with a single logical volume using 100% of the volume group, mounted with XFS. Disk space in use is 81%:
# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  centos     lvm2 a--   <15.51g     0
  /dev/sdb   VolGroup01 lvm2 a--  <150.00g     0
  /dev/sdc   VolGroup01 lvm2 a--  <170.00g     0
# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup01   2   1   0 wz--n- 319.99g    0
  centos       1   2   0 wz--n- <15.51g    0
# lvs
  LV        VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  IMAPSpool VolGroup01 -wi-ao---- 319.99g
# df -h
  /dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
The vmdk files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the vmdk sizes smaller (as they are thin provisioned on VMFS).
I'll do that and retry, and report back if I still see the full disk being written out to XCP-NG.
Latest posts made by acomav
-
RE: XOA Failing on Check-in
@Danp Interesting. That will be it. Thanks for linking this.
In the meantime, I've put in a request with the Australian government to move us closer to Europe.
-
RE: XOA Failing on Check-in
I was able to fix it on mine by disabling IPv6 (which we don't run).
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
To verify that IPv6 is disabled, run:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If the output is 1, IPv6 is disabled.
This is a temporary fix that only lasts until the next reboot. Read here for a permanent solution:
https://bobcares.com/blog/debian-12-disable-ipv6/
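From memory, the permanent fix boils down to a sysctl drop-in that survives reboots; a minimal sketch of what I'd expect on the Debian-based XOA (the file name is just an example, and I have not rebooted to verify it yet):
echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/90-disable-ipv6.conf
echo 'net.ipv6.conf.default.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.d/90-disable-ipv6.conf
sudo sysctl --system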
After disabling IPv6, 'xoa check' immediately started working.
-
RE: XOA Failing on Check-in
@acomav Replying to myself again. After working for a few days, the issue restarted. I'll raise a ticket.
-
RE: XOA Failing on Check-in
Replying to myself here for an update.
I reinstalled the XOA appliance and imported my config. (On a different host in a different pool)
That took me back to XOA v5.98.1. Internet connectivity was fine.
I stayed on the Stable Channel and went up to 5.99.1. Internet connectivity was fine.
I have a Pool issue where the XOA was previously running, and I can't fix it until tonight (I need to upgrade and reboot the master).
-
RE: XOA Failing on Check-in
Hi,
I have also started having this issue. My error:
✖ 15/16 - Internet connectivity: AggregateError [ETIMEDOUT]:
    at internalConnectMultiple (node:net:1118:18)
    at internalConnectMultiple (node:net:1186:5)
    at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
    at listOnTimeout (node:internal/timers:583:11)
    at process.processTimers (node:internal/timers:519:7) {
  code: 'ETIMEDOUT',
  url: 'http://xen-orchestra.com/',
  [errors]: [
    Error: connect ETIMEDOUT 185.78.159.93:80
        at createConnectionError (node:net:1648:14)
        at Timeout.internalConnectMultipleTimeout (node:net:1707:38)
        at listOnTimeout (node:internal/timers:583:11)
        at process.processTimers (node:internal/timers:519:7) {
      errno: -110,
      code: 'ETIMEDOUT',
      syscall: 'connect',
      address: '185.78.159.93',
      port: 80
    },
    Error: connect ENETUNREACH 2a01:240:ab08::4:80 - Local (:::0)
        at internalConnectMultiple (node:net:1182:16)
        at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
        at listOnTimeout (node:internal/timers:583:11)
        at process.processTimers (node:internal/timers:519:7) {
      errno: -101,
      code: 'ENETUNREACH',
      syscall: 'connect',
      address: '2a01:240:ab08::4',
      port: 80
    }
  ]
}
I have two XOA appliances running in different locations. One works fine, but it is on XOA version 5.95.1.
The one that has started failing is running the latest version: 5.100.2
Traceroutes from the working XOA get to the following (I'm in Australia, hence the long response times):
...
16 prs-b1-link.ip.twelve99.net (62.115.125.167) 282.574 ms 282.700 ms freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 303.985 ms
17 freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 302.850 ms 302.835 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 309.182 ms
18 cpe-et008453.cust.jaguar-network.net (85.31.197.135) 310.999 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 308.157 ms 308.477 ms
19 * cpe-et008453.cust.jaguar-network.net (85.31.197.135) 318.785 ms 309.982 ms
From the non-working XOA:
...
10 * be803.lsr01.prth.wa.vocus.network (103.1.76.147) 106.498 ms be803.lsr01.stpk.wa.vocus.network (103.1.76.145) 109.750 ms
11 * * *
12 * * *
13 * * *
14 mei-b5-link.ip.twelve99.net (62.115.134.228) 244.552 ms mei-b5-link.ip.twelve99.net (62.115.113.2) 243.988 ms mei-b5-link.ip.twelve99.net (62.115.124.123) 258.259 ms
15 freeprosas-ic-373578.ip.twelve99-cust.net (62.115.35.93) 256.427 ms * *
16 be1.er02.lyo03.jaguar-network.net (85.31.194.151) 279.685 ms 276.070 ms *
On the new XOA, I can manually telnet to 185.78.159.93 on port 80 and get a response, so I am at a loss.
It is not affecting day to day work.
I was going to download the latest version of the XOA appliance, import my config, and see if that does the trick... unless anyone here has any other tests to run?
-
RE: Updated XOA with kernel >5.3 to support nconnect nfs option
@olivierlambert Hi Olivier. I'll see what I can do. I've spent the weekend cleaning up my backups and catching up on mirror transfers. Once completed, I'll do a few custom backups at various nconnect values.
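For anyone wanting to test the same thing outside XO, the knob being varied is just the nconnect= NFS mount option (it needs a client kernel of 5.3 or newer; the server name and paths below are placeholders):
mount -t nfs -o vers=4.1,nconnect=4 nas.example.lan:/export/backups /mnt/backups
nconnect accepts values from 1 to 16 and opens that many TCP connections to the NFS server.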
-
RE: Updated XOA with kernel >5.3 to support nconnect nfs option
Just replying to thank you for pointing this out. I have been having very poor backup speeds for over a month and this sorted it out.
I have only used nconnect=4 and 6 for my NFS shares.
-
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert
I can confirm it was on my side. I had to do a few things to get the VMware virtual disks to free up empty space, and once I did, the VM import to XCP-NG to an NFS SR successfully copied the virtual disk in thin mode.
For anyone reading this who is preparing to jump ship from VMware: I am using vSphere 6.7. I have not tested against vSphere 7 yet, and I'm not bothering with vSphere 8 for obvious reasons. My VM was a CentOS 7 VM with LVM managing the 3 virtual disks.
- Make sure your Virtual Hardware is at least version 11. My test VM was a very old one still on version 8.
- For the ESXi host the VM lives on (though you should probably do all hosts in the cluster), go into Advanced Settings and enable (change 0 to 1) VMFS3.EnableBlockDelete. I thought I had this enabled, but only 2 of the 5 hosts in the cluster did. You may need to check that it does not get reset after updates.
- Due to using CentOS 7 (perhaps), I could not use 'fstrim' with the discard mount option; it was not supported. Instead, I filled the disk space with zeroes, synced, and then removed the zeroes:
# cd /path/to/mountpoint; dd if=/dev/zero of=./zeroes bs=1M count=1024; sync; rm zeroes
Change count=1024 (which creates a 1 GB file of zeroes) to however big a file you need to nearly fill the partition/volume, e.g. count=10240 will make a 10 GB file.
Windows users can use 'sdelete'.
I could have waited for vSphere to automatically clean up the datastore in the background at this stage, but I was impatient and used Storage vMotion to move the virtual disks to NFS storage in thin mode. I confirmed only the used space was copied across. I then migrated the disks back to my HPE Nimble SAN and retained thin provisioning.
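As a side note on sdelete: the switch that zero-fills free space (the Windows equivalent of the dd trick above) is -z, something like the following (the drive letter is just an example):
sdelete.exe -z C: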
-
RE: Import from VMware fails after upgrade to XOA 5.91
@olivierlambert Hi.
The disk sizes (and vmdk file sizes) are 150 GB and 170 GB. Both are in one volume group, with a single logical volume using 100% of the volume group, mounted with XFS. Disk space in use is 81%:
# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  centos     lvm2 a--   <15.51g     0
  /dev/sdb   VolGroup01 lvm2 a--  <150.00g     0
  /dev/sdc   VolGroup01 lvm2 a--  <170.00g     0
# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup01   2   1   0 wz--n- 319.99g    0
  centos       1   2   0 wz--n- <15.51g    0
# lvs
  LV        VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  IMAPSpool VolGroup01 -wi-ao---- 319.99g
# df -h
  /dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
The vmdk files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the vmdk sizes smaller (as they are thin provisioned on VMFS).
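The usual first attempt for that would be fstrim inside the guest, assuming discard support is passed all the way down through XFS, LVM and the virtual disk (a sketch only; as it turned out, this was not supported on this CentOS 7 stack, hence the zero-fill workaround in my other reply):
# fstrim -v /var/spool/imap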
I'll do that and retry, and report back if I still see the full disk being written out to XCP-NG.