Happy to report that, from my limited testing, this appears to be fixed in master as of this morning.
Posts
-
RE: Backing up from Replica triggers full backup
@florent Since both jobs were test jobs that I have been running manually, each had an unnamed, disabled schedule that looked identical, so I unintentionally did have multiple jobs on the same schedule. I have since named the schedule within each test job so that each is unique.
Updating to "f445b" shows improvement:
I was able to replicate from Pool A to Pool B, then run a backup job, which was incremental.
I then ran the replication job again; it was incremental and did not create a new VM!
Unfortunately, after this I ran the backup job again, which resulted in a full backup from the replica rather than a delta; I'm not sure why. The snapshot from the first backup run was also not removed, leaving two snapshots behind, one from each backup run.

I then tried the process again: ran the CR job, which was a delta (this part seemed fixed!), then ran the backup job after. Same behavior: a full ran instead of a delta, and the previous backup snapshot was left behind, leaving the VM looking like:

So it seems one problem is solved but another remains.
-
RE: Backing up from Replica triggers full backup
Retrying the job after the above failure results in a full replication and a new VM being created just as before.
-
RE: Backing up from Replica triggers full backup
@florent Just gave this branch a shot. Running the replication job after running a backup job no longer results in a full replication, but now fails with:

-
RE: Double CR backup
My understanding is that this should only be happening during backups, though, not during a CR run.
-
RE: Application on VM causing BSOD
@tsukraw Another thing I remember from my time troubleshooting Blue Iris was capturing a crash dump using xentrace:
xentrace -D -e 0x0008f000 xentrace.dmp
From there I was able to determine the MSR-related issue. I'm not at all saying that's the issue you are having, but it may shed some light or be useful to those more knowledgeable with Xen than myself.
-
RE: Application on VM causing BSOD
After the VM crashes, check the output of "xl dmesg" on the hypervisor via SSH. It may provide some information on why the VM crashed.
I ran into an issue with Blue Iris crashing on recent Intel CPUs, and the fix was to relax MSR enforcement on the VM by running:
xe vm-param-add uuid=VM_UUID param-name=platform msr-relaxed=true
However, this was only after determining via xl dmesg that this was the issue.
-
Backing up from Replica triggers full backup
Testing the Symmetrical backup feature in the upcoming version of XOA using the latest commit from today (8e580), I am running into a potential issue: after backing up a replicated VM, a full CR run is triggered and a new replicated VM is created, instead of simply a new snapshot on the existing replica.
Here is an example of the issue:
Create a new CR job with a VM replicating from Pool A to Pool B and set the retention to whatever you choose. I am using a retention of 3 in this example. Run the job and the initial sync will take place.

After running the job three times, I have three snapshots on the replica side, as expected.

Now create a delta backup job that backs up this replica, to simulate replicating from a production pool to a DR pool and then backing those replicas up to long-term, immutable, or cheaper storage. Then run the job; as expected, the initial backup run will be a full.

Here is how the VM looks at this point:

Everything should now be in place. On earlier commits from this month, the next time the CR job ran, only a delta would be transferred again, allowing very efficient offsite replication and backup.
However, if I run the CR job again now, a full replication takes place instead and a new VM/UUID is generated.


I have attached the log in hopes it may also shed some light on why this happens.
With a commit from March 12th this behaviour did not occur: a delta replica would be transferred as expected and show as a snapshot on the existing VM/UUID.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Installed on my usual round of test hosts. No issues to report so far! With such a small change, I wasn't expecting anything to go wrong!
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey No issues to report on my test systems.
-
RE: XCP-ng 8.3 updates announcements and testing
I took the opportunity yesterday to swap out the SSH keys on my hosts from the default RSA keys over to ed25519 keys. I used Ansible to copy the new key over before running another playbook to remove the old key. Having the ability to use Ansible to manage your hosts at scale is very slick!
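For anyone wanting to do the same, here is a minimal sketch of that rotation, condensed into a single playbook. The inventory group name (xcp_hosts) and the key file paths are hypothetical placeholders, and it assumes the ansible.posix collection is installed:

```yaml
# Hypothetical sketch: rotate root's authorized_keys from RSA to ed25519.
# "xcp_hosts" and the files/ paths are placeholders, not from the original post.
- name: Rotate SSH keys to ed25519
  hosts: xcp_hosts
  tasks:
    - name: Add the new ed25519 public key
      ansible.posix.authorized_key:
        user: root
        key: "{{ lookup('file', 'files/id_ed25519.pub') }}"
        state: present

    - name: Remove the old RSA public key
      ansible.posix.authorized_key:
        user: root
        key: "{{ lookup('file', 'files/id_rsa.pub') }}"
        state: absent
```

In practice you would run the first task on all hosts and confirm the new key logs in before running the removal task, which is why doing it as two separate playbook runs is safer.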
-
RE: backup mail report says INTERRUPTED but it's not ?
Tonight I noticed our XOA memory usage was high, so I tried running xo-cli xo.clean as you suggested. It returned a status of "true" but unfortunately did not seem to make a difference in overall memory usage.

-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Tested this on the same hosts I already have running the testing updates from earlier. No issues. Mixture of AMD and Intel.
-
RE: backup mail report says INTERRUPTED but it's not ?
@florent I can try running this command next time memory usage is high and will report my findings!
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly. Also installed on an HP GL325 Gen 10 with no issues after reboot.
-
RE: backup mail report says INTERRUPTED but it's not ?
Here is the latest after installing 6.2.1 on Friday.

-
RE: XOA 6.2 parent VHD is missing followed by full Backup
@florent Awesome, I have patched a second location's XOA and am not experiencing the issue there anymore either.
-
RE: XOA 6.2 parent VHD is missing followed by full Backup
@florent Thanks for the quick work on this. I have installed the patch and am running a test job. So far I do not see the rename error in the logs. The only error I see is:
Feb 27 08:26:45 xoa xo-server[1792434]: 2026-02-27T14:26:45.711Z vhd-lib:VhdFile INFO double close {
  path: {
    fd: 18,
    path: '/xo-vm-backups/2e8d1b6c-5fe8-0f41-22b9-777eb61f582a/vdis/08dce402-b472-4fd5-ab38-549b4c263c31/6b1191c3-8052-4e16-9cfa-ede59bf790ea/20260131T040121Z.vhd'
  },
  firstClosedBy: 'Error
    at VhdFile.dispose (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:134:22)
    at RemoteVhdDisk.dispose (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:104:26)
    at RemoteVhdDisk.close (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDisk.mjs:101:24)
    at RemoteVhdDisk.unlink (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDisk.mjs:427:16)
    at RemoteVhdDiskChain.unlink (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDiskChain.mjs:269:18)
    at #step_cleanup (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/MergeRemoteDisk.mjs:279:23)
    at async MergeRemoteDisk.merge (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/MergeRemoteDisk.mjs:159:11)
    at async _mergeVhdChain (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cleanVm.mjs:56:20)',
  closedAgainBy: 'Error
    at VhdFile.dispose (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:129:24)
    at RemoteVhdDisk.dispose (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:104:26)
    at RemoteVhdDisk.close (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDisk.mjs:101:24)
    at file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDiskChain.mjs:85:52
    at Array.map (<anonymous>)
    at RemoteVhdDiskChain.close (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/RemoteVhdDiskChain.mjs:85:35)
    at #cleanup (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/disks/MergeRemoteDisk.mjs:309:21)
    at async _mergeVhdChain (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cleanVm.mjs:56:20)'
}

I'm guessing this is unrelated?