@stormi Installed on our 2 production pools, DR, and remote sites, 46 hosts in total across Dell, Lenovo, HP, and Supermicro servers. No issues to report!
Best posts made by flakpyro
-
RE: XCP-ng 8.3 updates announcements and testing
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly. Also installed on an HP DL325 Gen10 with no issues after reboot.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Updated my usual test hosts (Minisforum and Supermicro X11), as well as two 2-host AMD pools (one pool of HP DL320 Gen10s and another of Asus Epyc servers of some sort), and lastly a Dell R360, all without issue.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Installed on my usual round of test hosts. No issues to report so far! With such a small change I wasn't expecting anything to go wrong!
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly.
-
RE: log_fs_usage / /var/log directory on pool master filling up constantly
One of our pools (5 hosts, 6 NFS SRs) had this issue when we first deployed it. I engaged Vates support, and they changed a setting that reduced the frequency of the SR.scan job from every 30 seconds to every 2 minutes. That completely fixed the issue for us, going on a year and a half later.
I dug back through our documentation and found the command they gave us:
xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>
where the host UUID is your pool master's.
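If it helps anyone verify the change, the value can be read back with the matching param-get call. This read-back step is my addition, using standard xe map-parameter syntax, not something support sent us:

```shell
# Read the auto-scan-interval back out of the pool master's
# other-config map to confirm the change took (value is in seconds):
xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval
```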
-
RE: XCP-ng 8.3 updates announcements and testing
@stormi Installed on my usual test hosts (Intel Minisforum MS-01, and a Supermicro running a Xeon E-2336 CPU). Also installed onto a 2-host AMD Epyc pool. Updates went smoothly; backups continue to function as before.
3 Windows 11 VMs had Secure Boot enabled. In XOA I clicked "Copy pool's default UEFI certificates to the VM" after the update was complete. The VMs continued to boot without issue afterwards.
-
RE: XCP-ng 8.3 updates announcements and testing
Installed on 2 test machines.
Machine 1:
Intel Xeon E-2336
SuperMicro board
Machine 2:
Minisforum MS-01
i9-13900H
32 GB RAM
Intel X710 onboard NIC
Both machines installed fine and all VMs came up without issue afterwards. My one test backup job also ran without any issues.
-
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Installed on 2 test machines.
Machine 1:
Intel Xeon E-2336
SuperMicro board
Machine 2:
Minisforum MS-01
i9-13900H
32 GB RAM
Intel X710 onboard NIC
Both machines installed fine and all VMs came up without issue afterwards.
I ran a backup job afterwards to test snapshot coalescing; no issues there.
-
RE: XCP-ng 8.3 updates announcements and testing
@stormi Updated a test machine running only a couple of VMs. Everything installed fine and rebooted without issue.
Machine is:
Intel Xeon E-2336
SuperMicro board.
One VM happens to be Windows-based with an Nvidia GPU passed through to it, running Blue Iris using the MSR fix found elsewhere on these forums; the fix continues to work with this version of Xen.
Latest posts made by flakpyro
-
RE: Backing up from Replica triggers full backup
Happy to report from my limited testing that as of this morning this appears to be fixed in master.
-
RE: Backing up from Replica triggers full backup
@florent Since both jobs were test jobs that I have been running manually, I do have an unnamed, disabled schedule on both that looks identical, so I unintentionally did have multiple jobs on the same schedule. I have since named the schedule within my test job so each is unique.
Updating to "f445b" shows improvement:
I was able to replicate from pool A to Pool B, then run a backup job which was incremental.
I then ran the replication job again which was incremental and did not create a new VM!
Unfortunately, after this I ran the backup job again, which resulted in a full backup from the replica rather than a delta; I'm not sure why. The snapshot from the first backup run was also not removed, leaving 2 snapshots behind, one from each backup run.

I then tried the process again: ran the CR job, which was a delta (this part seemed fixed!), then ran the backup job afterwards. Same behavior: a full ran instead of a delta, and the previous backup snapshot was left behind, leaving the VM looking like:

So it seems one problem is solved but another remains.
-
RE: Backing up from Replica triggers full backup
Retrying the job after the above failure results in a full replication and a new VM being created, just as before.
-
RE: Backing up from Replica triggers full backup
@florent Just gave this branch a shot. Trying to run the replication job after running a backup job no longer results in a full replication, but instead fails with:

-
RE: Double CR backup
My understanding is that this should only happen during backups, though, not during a CR run.
-
RE: Application on VM causing BSOD
@tsukraw Another thing I remember from my time troubleshooting Blue Iris was capturing a crash dump using xentrace:
xentrace -D -e 0x0008f000 xentrace.dmp
From there I was able to determine the MSR-related issue. I'm not at all saying that's the issue you're having, but it may shed some light, or be useful for those more knowledgeable with Xen than myself.
-
RE: Application on VM causing BSOD
After the VM crashes, check the output of "xl dmesg" on the hypervisor via SSH. It may provide some information on why the VM crashed.
I ran into an issue with Blue Iris crashing on recent Intel CPUs, and the fix was to relax MSR enforcement on the VM by running:
xe vm-param-add uuid=VM_UUID param-name=platform msr-relaxed=true
However, this was only after determining that this was the issue via xl dmesg.
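As a rough sketch of the whole sequence described above (the grep filter and the param-get read-back are my additions, and <VM_UUID> is a placeholder):

```shell
# 1. After the VM crashes, look for MSR-related lines in Xen's log:
xl dmesg | grep -i msr

# 2. Relax MSR handling on the affected VM (the fix from the post):
xe vm-param-add uuid=<VM_UUID> param-name=platform msr-relaxed=true

# 3. Confirm the flag is set; the platform change applies on the
#    next full VM shutdown and start:
xe vm-param-get uuid=<VM_UUID> param-name=platform param-key=msr-relaxed
```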
-
Backing up from Replica triggers full backup
Testing the symmetrical backup feature in the upcoming version of XOA using the latest commit from today (8e580), and running into a potential issue: after backing up a replicated VM, a full CR run is triggered and a new replicated VM is created, instead of simply a snapshot on the existing replica.
Here is an example of the issue:
Create a new CR job with a VM replicating from Pool A to Pool B and set the retention to whatever you choose. I am using a retention of 3 in this example. Run the job and the initial sync will take place.

After running the job 3 times i will have 3 snapshots on the replica side as expected.

Now create a delta backup job that backs up this replica, to simulate backing up from a production pool to a DR pool and then backing up those replicas to long-term, immutable, or cheaper storage. Then run the job; as expected, the initial backup run will be a full.

Here is how the VM looks at this point:

Everything should now be in place. What used to happen on earlier commits this month is that the next time the CR job ran, only a delta would be transferred, allowing very efficient offsite replication and backup.
However, if I run the CR job again now, a full replication takes place instead and a new VM/UUID is generated.


I have attached the log in hopes it may shed some light on why this happens.
With a commit from March 12th this behaviour did not occur: a delta replica would be transferred as expected and show as a snapshot on the existing VM/UUID.