Posts
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on my usual test hosts without issues. However, I am not using the qcow2 disk format anywhere yet.
-
RE: Xen Orchestra 6.3.2 Random Replication Failure
@florent Yes, I am using NBD for all backups, but I am not purging snapshots/using CBT.
It's so rare, in fact, that I haven't had it happen since I made this post last week (when it happened twice in 2 days). This is our production XOA it has occurred on, so I won't be able to test a branch, and I have never seen it happen on my from-sources install at home.
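In case it's relevant: NBD on our side was enabled the usual way from the XO docs, by tagging the backup network with the nbd purpose. Roughly (the network UUID below is a placeholder):
xe network-param-add param-name=purpose purpose=nbd uuid=<NETWORK_UUID>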
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on a handful of test machines; not as many as usual, as I'm being very cautious with this one for now. Everything rebooted and VMs started OK afterwards. Using VHD for everything currently.
-
RE: Xen Orchestra 6.3.2 Random Replication Failure
@pierrebrunet Thanks for the update. Glad to know it's not something unique to our environment and you were able to track down the cause!
-
Xen Orchestra 6.3.2 Random Replication Failure
Since the XOA 6.3 release I have had a few random backup errors in an environment that has otherwise had fairly flawless backup performance for the last year. I cannot make out exactly what the error means, but retrying the job allows it to succeed without issue. It is also very intermittent.

Log attached. 2026-04-07T01_00_03.075Z - backup NG.txt
If the issue persists I will submit a ticket to dive into it further, but I have only had it happen 3 times since the release of the 6.3.x update, so it's hard to reproduce.
Replication target storage is a Pure C50R4 with NFS3 exports.
-
RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
@florent I can confirm that this fixes the issue!
-
RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
I created a brand-new Windows 11 VM with a vTPM and Secure Boot enabled and am able to reproduce this on that freshly created VM.
- The initial replication will work.
- Any follow-up replications will fail with Error: VTPM_MAX_AMOUNT_REACHED(1)
- Retrying the job after the failure will succeed.
-
RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
I tried removing the replica chain and letting it start from scratch. The initial full replica was a success, but unfortunately running a follow-up incremental replication job results in the same error.

The entire transfer succeeds; it only seems to fail at the very end.
I have other VMs (Server 2022) with VTPMs that do not have this issue. The VM that is failing is Windows 11, and is the only Windows 11 VM we have running.
-
RE: XOA - Memory Usage
Looks like the issue still persists in 6.3.1?
Here is the memory usage since installing the latest update yesterday:

-
XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"
Since updating to XOA 6.1.3 I have a VM with Secure Boot enabled that fails to replicate with the error:
"VTPM_MAX_AMOUNT_REACHED(1)"
Retrying the backup allows it to complete.
"data": { "id": "69d826f4-383c-f163-b59a-8f3ea5132fd1", "isFull": false, "name_label": "C50-DR-Win-NFS3-SR1", "type": "SR" }, "id": "1775101037617", "message": "export", "start": 1775101037617, "status": "failure", "tasks": [ { "id": "1775101039730", "message": "transfer", "start": 1775101039730, "status": "failure", "end": 1775101063330, "result": { "code": "VTPM_MAX_AMOUNT_REACHED", "params": [ "1" ], "call": { "duration": 3, "method": "VTPM.create", "params": [ "* session id *", "OpaqueRef:5263e3da-0772-f8c5-5344-32e81c08c37a", false ] }, "message": "VTPM_MAX_AMOUNT_REACHED(1)", "name": "XapiError", "stack": "XapiError: VTPM_MAX_AMOUNT_REACHED(1)\n at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/transports/json-rpc.mjs:38:21\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1775101063749 }It appears to be trying to create a new TPM on the replica when one already exists?
Not sure why it fails during the job run but completes on a retry, but it is consistent in its behaviour.
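For anyone wanting to check the replica directly, the vTPMs attached to a VM can be listed from the CLI with something like this (the UUID is a placeholder, and I believe the vtpm commands are only present on vTPM-capable hosts):
xe vtpm-list vm-uuid=<REPLICA_VM_UUID>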
-
RE: Backing up from Replica triggers full backup
Happy to report from my limited testing that as of this morning this appears to be fixed in master.
-
RE: Backing up from Replica triggers full backup
@florent Since both jobs were test jobs that I have been running manually, I did have an unnamed, disabled schedule on both that looked identical, so I unintentionally did have multiple jobs on the same schedule. I have since named the schedule within my test job so each is unique.
Updating to "f445b" shows improvement:
I was able to replicate from pool A to Pool B, then run a backup job which was incremental.
I then ran the replication job again which was incremental and did not create a new VM!
Unfortunately, after this I ran the backup job again, which resulted in a full backup from the replica rather than a delta; not sure why. The snapshot from the first backup job run was also not removed, leaving 2 snapshots behind, one from each backup run.

I then tried the process again: ran the CR job, which was a delta (this part seemed fixed!), then ran the backup job afterwards. Same behaviour: a full ran instead of a delta, and the previous backup snapshot was left behind, leaving the VM looking like:

So it seems one problem is solved but another remains.
-
RE: Backing up from Replica triggers full backup
Retrying the job after the above failure results in a full replication and a new VM being created, just as before.
-
RE: Backing up from Replica triggers full backup
@florent Just gave this branch a shot. Running the replication job after running a backup job no longer results in a full replication, but it now fails with:

-
RE: Double CR backup
My understanding is this should only be happening during backups, though, not during a CR run.
-
RE: Application on VM causing BSOD
@tsukraw Another thing I remember from my time troubleshooting Blue Iris was capturing a crash dump using xentrace:
xentrace -D -e 0x0008f000 xentrace.dmp
From there I was able to determine the MSR-related issue. Not at all saying that's the issue you are having, but it may shed some light or be useful for those more knowledgeable with Xen than myself.
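If it helps anyone else: the dump xentrace produces is binary, so it needs decoding before it is readable. My recollection is I used xenalyze from the Xen source tree (I don't believe it ships on the host by default), roughly like this:
xenalyze xentrace.dmp > xentrace.txt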
-
RE: Application on VM causing BSOD
After the VM crashes, check the output of "xl dmesg" on the hypervisor via SSH. It may provide some information on why the VM crashed.
I ran into an issue with Blue Iris crashing on recent Intel CPUs, and the fix was to relax MSR enforcement on the VM by running:
xe vm-param-add uuid=VM_UUID param-name=platform msr-relaxed=true
However, this was only after confirming via xl dmesg that this was the issue.
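For anyone following along, the UUID lookup and a check that the flag took are along these lines (the VM name is just an example):
xe vm-list name-label="Blue Iris" params=uuid
xe vm-param-get uuid=VM_UUID param-name=platform
As far as I know, the VM then needs a full shutdown and start, since platform flags are only read when the VM boots.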
-
Backing up from Replica triggers full backup
Testing the Symmetrical backup feature in the upcoming version of XOA using the latest commit from today (8e580), and I am running into a potential issue: after backing up a replicated VM, a full CR run is triggered and a new replicated VM is created instead of simply a snapshot on the existing replica.
Here is an example of the issue:
Create a new CR job with a VM replicating from Pool A to Pool B and set the retention to whatever you choose. I am using a retention of 3 in this example. Run the job and the initial sync will take place.

After running the job 3 times, I have 3 snapshots on the replica side, as expected.
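The snapshot count can also be verified from the CLI; something like this, with the replica VM's UUID as a placeholder:
xe snapshot-list snapshot-of=<REPLICA_VM_UUID> params=name-label,uuid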

Now create a delta backup job that backs up this replica, to simulate replicating from the production pool to the DR pool and then backing up those replicas to long-term, immutable, or cheaper storage. Then run the job; as expected, the initial backup run will be a full.

Here is how the VM looks at this point:

Everything should now be in place. What used to happen on earlier commits this month is that the next time the CR job ran, only a delta would be transferred again, allowing very efficient offsite replication and backup.
However, if I run the CR job again now, a full replication takes place instead and a new VM/UUID is generated.


I have attached the log in hopes it may also shed some light on why this happens.
With a commit from March 12th this behaviour did not occur: a delta replica would be transferred as expected and show up as a snapshot on the existing VM/UUID.