@Biggen At the moment, XCP-ng Center provides some better views and overviews not yet available in XO. Hoping the next major version fixes this.
Best posts made by Forza
-
RE: [WARNING] XCP-ng Center shows wrong CITRIX updates for XCP-ng Servers - DO NOT APPLY - Fix released
-
RE: Long backup times via NFS to Data Domain from Xen Orchestra
@florent said in Long backup times via NFS to Data Domain from Xen Orchestra:
@MajorP93 this setting exists (not in the UI)
you can create a configuration file named
`/etc/xo-server/config.diskConcurrency.toml` if you use a XOA, containing
`[backups]`
`diskPerVmConcurrency = 2`
That is great. Can we get it as a UI option too?
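Not in the UI yet, but for reference, a minimal sketch of how that file could be created on an XOA appliance (the value 2 is just the example from the quote; restarting via systemd assumes a standard XOA install):

```sh
# Sketch: write the config file mentioned above (value is an example)
cat > /etc/xo-server/config.diskConcurrency.toml <<'EOF'
[backups]
diskPerVmConcurrency = 2
EOF

# Restart xo-server so the new setting is picked up (assumes systemd-based XOA)
systemctl restart xo-server
```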

-
RE: XCP-ng Guest Agent - Reported Windows Version for Servers
@olivierlambert said in XCP-ng Guest Agent - Reported Windows Version for Servers:
It's funny to see Microsoft having a version 10 for an edition named 11. I suppose it's not a surprise for an organization that huge.
They did say that Windows 10 would be the last version of Windows...

-
RE: Citrix or XCP-ng drivers for Windows Server 2022
@dinhngtu Thank you. I think it is clear to me now.
The docs at https://xcp-ng.org/docs/guests.html#windows could be improved to cover all three options, and also be made a little more concise so they are easier to read.
-
RE: Epyc VM to VM networking slow
Tested the new updates on my prod EPYC 7402P pool with `iperf3`. Seems like quite a good uplift.
Ubuntu 24.04 VM (6 cores) -> bare metal server (6 cores) over a 2x25Gbit LACP link.
Pre-patch
- iperf3 -P1: 9.72 Gbit/s
- iperf3 -P6: 14.6 Gbit/s
Post-patch
- iperf3 -P1: 11.3 Gbit/s
- iperf3 -P6: 24.2 Gbit/s
Ubuntu 24.04 VM (6 cores) -> Ubuntu 24.04 VM (6 cores) on the same host
Pre-patch
Forgot to test this...
Post-patch
- iperf3 -P1: 13.7 Gbit/s
- iperf3 -P6: 30.8 Gbit/s
- iperf3 -P24: 40.4 Gbit/s
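For anyone wanting to reproduce these numbers, a rough sketch of the iperf3 invocations (the address 10.0.0.10 is a placeholder for the receiving end):

```sh
# On the receiving end (bare metal server or the second VM)
iperf3 -s

# On the sending VM: single stream, then six or 24 parallel streams
iperf3 -c 10.0.0.10 -P1
iperf3 -c 10.0.0.10 -P6
iperf3 -c 10.0.0.10 -P24
```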
Our servers have `Last-Level Cache (LLC) as NUMA Node` enabled, as most of our VMs do not have a huge number of vCPUs assigned. This means that for the EPYC 7402P (24c/48t) we have 8 NUMA nodes. We do not, however, use `xl cpupool-numa-split`.
-
RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
Sorry for spamming the thread.

I have two identical servers (srv01 and srv02) with AMD EPYC 7402P 24-core CPUs. On srv02 I enabled the `LLC as NUMA Node` option.
I've done some quick benchmarks with `sysbench` on Ubuntu 20.10 with 12 assigned cores. Command line: `sysbench cpu run --threads=12`
It would seem that in this test the NUMA option is much faster: 194187 events vs 103769 events. Perhaps I am misunderstanding how sysbench works?

With 7-zip the gain is much less, but still meaningful. A little slower in single-threaded performance but quite a bit faster in multi-threaded mode.
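For context, a minimal sketch of how these benchmarks could be run inside a test VM (assuming the sysbench and p7zip-full packages are installed on Ubuntu):

```sh
# CPU benchmark used above; reports events completed during the default run
sysbench cpu run --threads=12

# 7-Zip's built-in benchmark; reports single- and multi-threaded ratings
7z b
```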

-
RE: Host stuck in booting state.
Problem was a stale connection with the NFS server. A reboot of the NFS server fixed the issue.
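For anyone hitting the same symptom, a small sketch of how a stale NFS SR connection can show up in dom0 (the SR UUID is a placeholder):

```sh
# Kernel messages usually reveal an unresponsive NFS server
dmesg | grep -i "nfs.*not responding"

# Accessing the SR mount point hangs if the connection is stale
timeout 5 ls /run/sr-mount/<sr-uuid> || echo "SR mount appears stuck"
```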
-
RE: Restoring a downed host ISNT easy
@xcprocks said in Restoring a downed host ISNT easy:
So, we had a host go down (OS drive failure). No big deal right? According to instructions, just reinstall XCP on a new drive, jump over into XOA and do a metadata restore.
Well, not quite.
First during installation, you really really must not select any of the disks to create an SR as you could potentially wipe out an SR.
Second, you have to do the sr-probe and sr-introduce and pbd-create and pbd-plug to get the SRs back.
Third, you then have to use XOA to restore the metadata which according to the directions is pretty simple looking. According to: https://xen-orchestra.com/docs/metadata_backup.html#performing-a-restore
"To restore one, simply click the blue restore arrow, choose a backup date to restore, and click OK:"
But this isn't quite true. When we did it, the restore threw an error:
"message": "no such object d7b6f090-cd68-9dec-2e00-803fc90c3593",
"name": "XoError",Panic mode sets in... It can't find the metadata? We try an earlier backup. Same error. We check the backup NFS share--no its there alright.
After a couple of hours scouring the internet and not finding anything, it dawns on us... The object XOA is looking for is the OLD server not a backup directory. It is looking for the server that died and no longer exists. The problem is, when you install the new server, it gets a new ID. But the restore program is looking for the ID of the dead server.
But how do you tell XOA to copy the metadata over to the new server? It assumes that you want to restore it over an existing server. It does not provide a drop-down list to pick where to deploy it.
In an act of desperation, we copied the backup directory to a new location and named it with the ID number of the newly recreated server. Now XOA could restore the metadata and we were able to recover the VMs in the SRs without issue.
This long story is really just a way to highlight the need for better host backup in three ways:
A) The first idea would be to create better instructions. It ain't nowhere as easy as the documentation says it is and it's easy to mess up the first step so bad that you can wipe out the contents of an SR. The documentation should spell this out.
B) The second idea is to add to the metadata backup something that reads the states of SR to PBD mappings and provides/saves a script to restore them. This would ease a lot of the difficulty in the actual restoring of a failed OS after a new OS can be installed.
C) The third idea is to provide a dropdown during the restoration of the metadata that allows the user to target a particular machine for the restore operation, instead of blindly assuming you want to restore it over a machine that is dead and gone.
I hope this helps out the next person trying to bring a host back from the dead, and I hope it also helps make XOA a better product.
Thanks for a good description of the restore process.
I was wary of the metadata-backup option. It sounds simple and good to have, but as you said it is in no way a comprehensive restore of a pool.
I'd like to add my own opinion here. What I would want is a full pool restore, including networks, re-attaching SRs and everything else needed to quickly get back up and running. Also, a pool restore option should be available on the boot media. It could look for an NFS/CIFS mount or a USB disk with the backup files on it. This would avoid things like issues with bonded networks not working.
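As a reference for the SR re-attach step mentioned in the quoted post, a rough sketch of the xe commands involved (the NFS server, export path and UUIDs are placeholders; adapt to your SR type):

```sh
# Probe the storage to find the existing SR's UUID (NFS example)
xe sr-probe type=nfs device-config:server=192.168.0.10 device-config:serverpath=/export/sr

# Re-introduce the SR with the UUID reported by sr-probe
xe sr-introduce uuid=<sr-uuid> type=nfs name-label="Restored SR" shared=true content-type=user

# Create and plug a PBD so the host can use the SR again
xe pbd-create sr-uuid=<sr-uuid> host-uuid=<host-uuid> \
  device-config:server=192.168.0.10 device-config:serverpath=/export/sr
xe pbd-plug uuid=<pbd-uuid>
```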
-
RE: Remove VUSB as part of job
Might a different solution be to use a USB device server (USB over network) instead of direct-attached USB? Something like this: https://www.seh-technology.com/products/usb-deviceserver/utnserver-pro.html (there are different options available). We use the myUTN-50a with hardware USB keys and it has proven to be very reliable over the years.
-
RE: I/O errors on file restore
I re-checked again but the issue is unfortunately not resolved. It does not happen on all VMs and files, so maybe there is something wrong somehow in the VDI?
Latest posts made by Forza
-
RE: Mirror backup: No new data to upload for this vm?
The delta backup job saves to srv04-incremental and srv12-incremental.
The incremental mirror job has srv04-incremental as source and srv12-incremental as destination.
The VM that showed the "no data to copy" message was in the "Incremental backup every 4 hours - 8 days retention" backup job.
Originally I only had one remote, `srv04`. I created `srv04-incremental` and renamed `srv04` to `srv04-full`, then used the mirror backup feature to copy all delta backups to `srv04-incremental` (as I did not want to attempt to move the data on the NFS backend side). Then I set up `srv12-incremental` and `srv12-full` and created mirror jobs to copy from `srv04-full` and `srv04-incremental`. Once the mirror backups were completed, I switched the normal backup jobs to store backups on both backup servers.
I can set up a support connection if you want to remotely check this.
-
RE: Mirror backup: No new data to upload for this vm?
Here is the incremental backup config. Originally we only had the remote called srv04-incremental. I have now added srv12-incremental and wanted to copy over all existing backups to the new remote. I did the same with full backups too. Now I have each backup job using both remotes.


-
RE: Mirror backup: No new data to upload for this vm?
I noticed too that both `mirror backup` and `full backup` do not actually copy all backups.
This shows the source remote on the left and the destination remote on the right. I made sure the retention setting in the mirror backup is high enough not to exclude anything, yet we are not getting everything transferred:

See also the same for full mirror backups: https://xcp-ng.org/forum/topic/11624/mirror-of-full-backups-with-low-retention-copies-all-vms-and-then-deletes-them/
-
RE: Mirror of full backups with low retention - copies all vms and then deletes them
It looks like it transfers one backup, deletes it, starts the next backup, deletes it, starts the next one... and so on. This seems rather inefficient for `full` backups. I can understand it has to transfer the full chain when dealing with `incremental` backups, even if it has to prune and merge them afterwards.
I also notice that even though I set the retention to 1000 in the full mirror job, not all backups are copied:


{ "data": { "type": "VM", "id": "0ecd9bc3-b4e8-8f0e-e50d-6b94420ea742" }, "id": "1764584306124", "message": "backup VM", "start": 1764584306124, "status": "success", "infos": [ { "message": "No new data to upload for this VM" }, { "message": "No healthCheck needed because no data was transferred." } ], "tasks": [ { "id": "1764584306137:1", "message": "clean-vm", "start": 1764584306137, "status": "success", "end": 1764584306150, "result": { "merge": false } } ], "end": 1764584306151 }, -
Mirror of full backups with low retention - copies all vms and then deletes them
I noticed that for "Mirror full backup", the process copies every backup, and then immediately removes backups older than retention. This seems unnecesary for full backups as there is no relationship between each backup for a VM.

-
Mirror backup: No new data to upload for this vm?
I am testing the mirror backup feature. I see the message "No new data to upload for this VM" although it is clearly transferring data.

I am using XOA on stable channel.
-
RE: Long backup times via NFS to Data Domain from Xen Orchestra
@MajorP93 Aha, yeah. Per-disk concurrency is important too.
-
RE: Long backup times via NFS to Data Domain from Xen Orchestra
@MajorP93 You need to set the concurrency so it only exports one VM at a time.

-
Mirror backup: Progress status and ETA
Hi,
I would like to have more progress information for the "Mirror backup" feature. Currently there is no progress indication other than how many VMs have been copied so far.
What I would like is:
The total data size to be transferred, how many VMs/backups are left in the queue, and an approximate ETA. All this information should be available to XOA since it already knows what exists on the source remote.
Additionally I would like the option to cancel a running Mirror backup. The Cancel button is disabled when I try:

There is also no Task for the Mirror job listed in the XOA Tasks screen.
I found an earlier question like this, but there was no answer there.
https://xcp-ng.org/forum/topic/11108/incremental-mirror-backup-progress-information-in-xoa
-
Xen 4.21
I just saw that Xen 4.21 was released.
It has support for amd-cppc and amd-cppc-epp, as well as resizable BAR. Looks to be a good release.

They even have a quote from @olivierlambert too - here's hoping we get Xen 4.21 in the next XCP-ng release.
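Unrelated to the release notes themselves, but once a Xen 4.21-based XCP-ng lands, the active cpufreq driver and governor can be checked from dom0 (assuming the standard Xen tools are installed), e.g.:

```sh
# Confirm the running Xen version
xl info | grep xen_version

# Show the cpufreq driver/governor in use per CPU (amd-cppc should show up here once enabled)
xenpm get-cpufreq-para
```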
