@Forza Too funny. I came across this post and clicked on the URL you referenced......and that earlier question was from me!
Well, nothing has changed. I'm doing mirrored backups and I'm still blind as a bat.
@olivierlambert
Hi,
Yes, I have "Merge backups synchronously" enabled on the local backup jobs, but I still get the error because the mirror job holds a lock on the VM data.
Error/Message:
"the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution."
Start: 2025-07-26 21:17
End: 2025-07-26 21:17
Duration: a few seconds
Error: Lock file is already being held
Thanks.
I am using XOA 5.106.2 and have a large incremental mirror backup running between my primary backup NFS remote and an offsite NFS remote. Apart from the entry in the Backup tab saying the job has started, there is no progress information under Tasks, so I have no way to view its status.
It is very frustrating as I don't know how long the job has left. My current job has been running for two days. I can confirm from the XOA command line with 'journalctl -f -u xo-server.service' that merges occasionally happen, but what about the actual transfer of data? The remote NFS server shows IO load and the files in the mirrored data directories are being updated.
Do I need to look at a different log? Is there any way to see this information in XOA? I thought I read over a year ago that this was to be fixed in XOA 6, but I can no longer find any reference to that in Google searches or on this forum.
I can't run backups against the two VMs being mirrored as I get the 'lock error'.
Before I open a support ticket I thought I would ask here. Can I view the progress/current status of a running Mirror Incremental backup job in the GUI? Do I need to enable some debugging manually?
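In the meantime, the only rough progress signal I have found is watching the mirror's destination directory grow on the remote itself. A crude sketch (the mount point and path here are illustrative, not the actual XO layout):
watch -n 60 du -sh /mnt/remote-nfs/xo-vm-backups
Comparing that figure against the size of the source directory at least gives a ballpark of how far along the transfer is.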
@Danp Interesting. That will be it. Thanks for linking this.
In the meantime, I've put in a request with the Australian government to move us closer to Europe.

I was able to fix it on mine by disabling IPv6 (which we don't run).
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
To verify that IPv6 is disabled, run:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If the output is 1, IPv6 is disabled.
This is a temporary fix that lasts until the next reboot. Read here for a permanent solution:
https://bobcares.com/blog/debian-12-disable-ipv6/
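For reference, the permanent fix boils down to persisting the same two sysctl keys (a sketch assuming a Debian-based XOA, as in that link):
echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
echo 'net.ipv6.conf.default.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.d/99-disable-ipv6.conf
sudo sysctl --system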
After disabling IPv6, 'xoa check' immediately started working.
@acomav Replying to myself again. After working for a few days, the issue restarted. I'll raise a ticket.
Replying to myself here for an update.
I reinstalled the XOA appliance and imported my config. (On a different host in a different pool)
That took me back to XOA v5.98.1. Internet connectivity was fine.
I stayed on the Stable Channel and went up to 5.99.1. Internet connectivity was fine.
I have a pool issue where the XOA used to be, and I can't fix it until tonight (I need to upgrade and reboot the master).
Hi,
I have also started having this issue.
My error:
✖ 15/16 - Internet connectivity: AggregateError [ETIMEDOUT]:
at internalConnectMultiple (node:net:1118:18)
at internalConnectMultiple (node:net:1186:5)
at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
at listOnTimeout (node:internal/timers:583:11)
at process.processTimers (node:internal/timers:519:7) {
code: 'ETIMEDOUT',
url: 'http://xen-orchestra.com/',
[errors]: [
Error: connect ETIMEDOUT 185.78.159.93:80
at createConnectionError (node:net:1648:14)
at Timeout.internalConnectMultipleTimeout (node:net:1707:38)
at listOnTimeout (node:internal/timers:583:11)
at process.processTimers (node:internal/timers:519:7) {
errno: -110,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '185.78.159.93',
port: 80
},
Error: connect ENETUNREACH 2a01:240:ab08::4:80 - Local (:::0)
at internalConnectMultiple (node:net:1182:16)
at Timeout.internalConnectMultipleTimeout (node:net:1712:5)
at listOnTimeout (node:internal/timers:583:11)
at process.processTimers (node:internal/timers:519:7) {
errno: -101,
code: 'ENETUNREACH',
syscall: 'connect',
address: '2a01:240:ab08::4',
port: 80
}
]
}
I have two XOA appliances running in different locations. One works fine, but its XOA version is 5.95.1.
The one that has started failing is running the latest version: 5.100.2
Traceroutes from the working XOA reach the destination (I'm in Australia, hence the long response times):
...
16 prs-b1-link.ip.twelve99.net (62.115.125.167) 282.574 ms 282.700 ms freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 303.985 ms
17 freeprosas-ic-367227.ip.twelve99-cust.net (80.239.167.129) 302.850 ms 302.835 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 309.182 ms
18 cpe-et008453.cust.jaguar-network.net (85.31.197.135) 310.999 ms be1.er02.lyo03.jaguar-network.net (85.31.194.151) 308.157 ms 308.477 ms
19 * cpe-et008453.cust.jaguar-network.net (85.31.197.135) 318.785 ms 309.982 ms
From the non-working XOA:
...
10 * be803.lsr01.prth.wa.vocus.network (103.1.76.147) 106.498 ms be803.lsr01.stpk.wa.vocus.network (103.1.76.145) 109.750 ms
11 * * *
12 * * *
13 * * *
14 mei-b5-link.ip.twelve99.net (62.115.134.228) 244.552 ms mei-b5-link.ip.twelve99.net (62.115.113.2) 243.988 ms mei-b5-link.ip.twelve99.net (62.115.124.123) 258.259 ms
15 freeprosas-ic-373578.ip.twelve99-cust.net (62.115.35.93) 256.427 ms * *
16 be1.er02.lyo03.jaguar-network.net (85.31.194.151) 279.685 ms 276.070 ms *
On the new XOA, I can manually telnet to 185.78.159.93 on port 80 and get a response, so I am at a loss.
It is not affecting day-to-day work.
I was going to download the latest version of the XOA appliance and import my config to see if that does the trick... unless anyone here has any other tests to run?
@olivierlambert Hi Olivier. I'll see what I can do. I've spent the weekend cleaning up my backups and catching up on mirror transfers. Once that's complete, I'll run a few custom backups with various nconnect values.
Just replying to thank you for pointing this out. I have been having very poor backup speeds for over a month and this sorted it out.
I have only used nconnect=4 and 6 for my NFS shares.
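For anyone wanting to try it: nconnect is just a standard NFS mount option (it needs a Linux 5.3+ client), so a hand-mount test looks roughly like this, with a made-up server and export:
mount -t nfs -o vers=4.1,nconnect=4 backup-nas:/export/backups /mnt/backups
In XO itself, I believe you can simply add nconnect=4 to the remote's custom options field rather than mounting by hand.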
@olivierlambert
I can confirm it was on my side. I had to do a few things to get the VMware virtual disks to free up empty space, and once I did, the VM import to XCP-ng onto an NFS SR successfully copied the virtual disks thin-provisioned.
For anyone reading this who is preparing to jump ship from VMware:
I am using vSphere 6.7. I have not tested against vSphere 7 yet, and I'm not bothering with vSphere 8 for obvious reasons. My VM was a CentOS 7 VM with LVM managing the 3 virtual disks.
# cd /path/to/mountpoint && dd if=/dev/zero of=./zeroes bs=1M count=1024; sync; rm ./zeroes
Change count=1024 (which creates a 1 GB file of zeroes) to however large a file you need to nearly fill the partition/volume, e.g. count=10240 makes a 10 GB file.
Windows users can use 'sdelete'.
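If I remember the flag correctly, the Windows equivalent is along these lines (-z zeroes free space, which is what the thin-disk reclaim needs; the drive letter is an example):
sdelete.exe -z C: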
I could have waited for vSphere to automatically clean up the datastore in the background at this stage, but I was impatient and used Storage vMotion to move the virtual disks to NFS storage in thin mode. I confirmed only the used space was copied across. I then migrated the disks back to my HPE Nimble SAN and retained thin provisioning.
@olivierlambert Hi.
The disk sizes (and vmdk file sizes) are 150 GB and 170 GB. Both are in one volume group, with a single logical volume using 100% of the volume group, formatted with XFS.
Disk space in use is 81%:
# pvs
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda2  centos     lvm2 a--   <15.51g    0
  /dev/sdb   VolGroup01 lvm2 a--  <150.00g    0
  /dev/sdc   VolGroup01 lvm2 a--  <170.00g    0
# vgs
  VG         #PV #LV #SN Attr   VSize    VFree
  VolGroup01   2   1   0 wz--n- 319.99g     0
  centos       1   2   0 wz--n- <15.51g     0
# lvs
  LV        VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  IMAPSpool VolGroup01 -wi-ao---- 319.99g
# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup01-IMAPSpool  320G  257G   64G  81% /var/spool/imap
The vmdk files live on an HPE Nimble CS3000 (block iSCSI). I am now thinking I will need to get into the VM and free up discarded/deleted blocks, which would make the vmdk files smaller (as they are thin provisioned on VMFS).
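The zero-fill trick is one way; if discard were plumbed all the way through to the thin vmdk, something like this could reclaim the space in place (illustrative; it only works when the virtual disk actually advertises discard support):
fstrim -v /var/spool/imap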
I'll do that, retry, and report back on whether I still see the full disk being written out to XCP-ng.
@florent The VM is on an NFS SR which is thin provisioned. LVM is inside the VM on the virtual disks.
Hi, a question about these patches and thin provisioning.
My test import now works; however, it provisioned the full size of each disk on an NFS SR.
[root@XXXX ~]# ls -salh /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/
total 540G
4.0K drwxr-xr-x 2 root root 4.0K Feb 6 09:33 .
4.0K drwxr-xr-x 27 root root 4.0K Feb 1 21:22 ..
151G -rw-r--r-- 1 root root 151G Feb 6 10:45 1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
13G -rw-r--r-- 1 root root 13G Feb 6 09:43 1eae9130-e6eb-45be-ae25-a7dcb7ee8f4e.vhd
171G -rw-r--r-- 1 root root 171G Feb 6 10:51 751b7a5f-df32-4cb1-9479-e196671e7149.vhd
The two large disks are in an LVM VG on the source and, combined, use 253 GB of the 320 GB LV. They are thin provisioned on the VMware side.
Am I wrong to expect the vhd files on the NFS SR to be smaller than what I see? Does LVM on the source negate thin provisioning on the XCP-ng side?
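One check I can still do is compare actual allocation against apparent size per file, in case the SR is writing sparse files that ls -lh over-reports (same directory as above):
du -h /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/1c3b93da-de07-4a4f-8229-60635bc2f279.vhd
du -h --apparent-size /mnt/NFS/d8ad046d-c279-5bd6-8ed7-43888187f188/1c3b93da-de07-4a4f-8229-60635bc2f279.vhd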
Not a big deal, I am just curious.
Thanks
@florent
Thanks. I have kicked off an import, but it takes 2 hours. However, the first small virtual disk has now succeeded where it was previously failing, so I am confident the rest will work. I will update then.
Thanks
I patched my XO source VM with the latest from 5th Feb and still had the same error.
"stack": "Error: no opaque ref found in undefined
It may be that I am not patching correctly, so I have added an XOA trial, moved to the 'latest' channel, and pinged @florent with a support tunnel to test in the morning.
@acomav
Replying to myself.
I redid the job with a snapshot from a running VM to a local SR. Same issue occurred at the same time.
I came across the same error today, before seeing this thread, while importing a 3-disk VM (powered off).
The first (smaller) disk failed first.
I saw the post about the patch and applied it to my XO source VM (Ronivay Debian image with the disk extended to 30 GB).
I then tried a live import (with snapshot) of a 10 GB single-disk VM to a local thick LVM SR, and it was successful.
I retried the big VM to an NFS SR and it failed in the same spot.
Feb 03 10:41:42 xo-ce xo-server[2902]: 2024-02-03T10:41:42.888Z xo:xo-server WARN possibly unhandled rejection {
Feb 03 10:41:42 xo-ce xo-server[2902]: error: Error: already finalized or destroyed
Feb 03 10:41:42 xo-ce xo-server[2902]: at Pack.entry (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/tar-stream/pack.js:138:51)
Feb 03 10:41:42 xo-ce xo-server[2902]: at Pack.resolver (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/promise-toolbox/fromCallback.js:5:6)
Feb 03 10:41:42 xo-ce xo-server[2902]: at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/debuggability.js:384:9)
Feb 03 10:41:42 xo-ce xo-server[2902]: at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/promise.js:518:18)
Feb 03 10:41:42 xo-ce xo-server[2902]: at new Promise (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/bluebird/js/release/promise.js:103:10)
Feb 03 10:41:42 xo-ce xo-server[2902]: at Pack.fromCallback (/opt/xo/xo-builds/xen-orchestra-202402030246/node_modules/promise-toolbox/fromCallback.js:9:10)
Feb 03 10:41:42 xo-ce xo-server[2902]: at writeBlock (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/_writeDisk.mjs:9:22)
Feb 03 10:41:42 xo-ce xo-server[2902]: }
Feb 03 10:41:45 xo-ce xo-server[2902]: root@10.1.4.10 Xapi#putResource /import/ XapiError: IMPORT_ERROR(INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ])
Feb 03 10:41:45 xo-ce xo-server[2902]: at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/_XapiError.mjs:16:12)
Feb 03 10:41:45 xo-ce xo-server[2902]: at default (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/_getTaskResult.mjs:11:29)
Feb 03 10:41:45 xo-ce xo-server[2902]: at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1006:24)
Feb 03 10:41:45 xo-ce xo-server[2902]: at file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1040:14
Feb 03 10:41:45 xo-ce xo-server[2902]: at Array.forEach (<anonymous>)
Feb 03 10:41:45 xo-ce xo-server[2902]: at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1030:12)
Feb 03 10:41:45 xo-ce xo-server[2902]: at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xen-api/index.mjs:1203:14) {
Feb 03 10:41:45 xo-ce xo-server[2902]: code: 'IMPORT_ERROR',
Feb 03 10:41:45 xo-ce xo-server[2902]: params: [ 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]' ],
Feb 03 10:41:45 xo-ce xo-server[2902]: call: undefined,
Feb 03 10:41:45 xo-ce xo-server[2902]: url: undefined,
Feb 03 10:41:45 xo-ce xo-server[2902]: task: task {
Feb 03 10:41:45 xo-ce xo-server[2902]: uuid: 'e1ed657e-165c-0a78-2b72-3096b0550fed',
Feb 03 10:41:45 xo-ce xo-server[2902]: name_label: '[XO] VM import',
Feb 03 10:41:45 xo-ce xo-server[2902]: name_description: '',
Feb 03 10:41:45 xo-ce xo-server[2902]: allowed_operations: [],
Feb 03 10:41:45 xo-ce xo-server[2902]: current_operations: {},
Feb 03 10:41:45 xo-ce xo-server[2902]: created: '20240203T10:32:22Z',
Feb 03 10:41:45 xo-ce xo-server[2902]: finished: '20240203T10:41:45Z',
Feb 03 10:41:45 xo-ce xo-server[2902]: status: 'failure',
Feb 03 10:41:45 xo-ce xo-server[2902]: resident_on: 'OpaqueRef:e44d0112-ac22-4037-91d3-6394943789fd',
Feb 03 10:41:45 xo-ce xo-server[2902]: progress: 1,
Feb 03 10:41:45 xo-ce xo-server[2902]: type: '<none/>',
Feb 03 10:41:45 xo-ce xo-server[2902]: result: '',
Feb 03 10:41:45 xo-ce xo-server[2902]: error_info: [
Feb 03 10:41:45 xo-ce xo-server[2902]: 'IMPORT_ERROR',
Feb 03 10:41:45 xo-ce xo-server[2902]: 'INTERNAL_ERROR: [ Unix.Unix_error(Unix.ENOSPC, "write", "") ]'
Feb 03 10:41:45 xo-ce xo-server[2902]: ],
Feb 03 10:41:45 xo-ce xo-server[2902]: other_config: { object_creation: 'complete' },
Feb 03 10:41:45 xo-ce xo-server[2902]: subtask_of: 'OpaqueRef:NULL',
Feb 03 10:41:45 xo-ce xo-server[2902]: subtasks: [],
Feb 03 10:41:45 xo-ce xo-server[2902]: backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/import.ml)(line 2021))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 92)))'
Feb 03 10:41:45 xo-ce xo-server[2902]: }
Feb 03 10:41:45 xo-ce xo-server[2902]: }
Feb 03 10:41:45 xo-ce xo-server[2902]: 2024-02-03T10:41:45.956Z xo:api WARN admin@admin.net | vm.importMultipleFromEsxi(...) [9m] =!> Error: no opaque ref found
I'm going to run the next test with the same 3-disk VM, but powered up with a snapshot, saving to a local LVM SR.
The XO error for the disk that failed:
{
  "id": "38jiy3bsy5r",
  "properties": {
    "name": "Cold import of disks scsi0:0"
  },
  "start": 1706956341748,
  "status": "failure",
  "end": 1706956905772,
  "result": {
    "message": "no opaque ref found",
    "name": "Error",
    "stack": "Error: no opaque ref found\n at importVm (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/importVm.mjs:28:19)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at importVdi (file:///opt/xo/xo-builds/xen-orchestra-202402030246/@xen-orchestra/xva/importVdi.mjs:6:17)\n at file:///opt/xo/xo-builds/xen-orchestra-202402030246/packages/xo-server/src/xo-mixins/migrate-vm.mjs:260:21\n at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202402030246/@vates/task/index.js:158:22)\n at Task.run (/opt/xo/xo-builds/xen-orchestra-202402030246/@vates/task/index.js:141:20)"
  }
},