@Biggen At the moment, XCP-ng Center provides some better views and overviews that are not yet available in XO. Hoping the next major version fixes this.
Best posts made by Forza
-
RE: [WARNING] XCP-ng Center shows wrong CITRIX updates for XCP-ng Servers - DO NOT APPLY - Fix released
-
RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
Sorry for spamming the thread.
I have two identical servers (srv01 and srv02) with AMD EPYC 7402P 24-core CPUs. On srv02 I enabled the "LLC as NUMA Node" option. I've done some quick benchmarks with sysbench on Ubuntu 20.10 with 12 assigned cores. Command line: sysbench cpu run --threads=12
It would seem that in this test the NUMA option is much faster, 194187 events vs 103769 events. Perhaps I am misunderstanding how sysbench works?
With 7-zip the gain is much less, but still meaningful. A little slower in single-threaded performance but quite a bit faster in multi-threaded mode.
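For anyone wanting to reproduce this, a rough sketch of the checks and the benchmark run (assuming numactl and sysbench are installed in the guest):

```shell
# Show the NUMA topology the guest sees. With "LLC as NUMA Node"
# enabled, each CCX/LLC appears as its own NUMA node.
numactl --hardware
lscpu | grep -i numa

# CPU benchmark with 12 threads, as in the post above
sysbench cpu run --threads=12
```

Comparing the "NUMA node(s)" count with the option on and off should confirm the BIOS setting actually reached the guest.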
-
RE: Host stuck in booting state.
Problem was a stale connection with the NFS server. A reboot of the NFS server fixed the issue.
-
RE: Restoring a downed host ISNT easy
@xcprocks said in Restoring a downed host ISNT easy:
So, we had a host go down (OS drive failure). No big deal right? According to instructions, just reinstall XCP on a new drive, jump over into XOA and do a metadata restore.
Well, not quite.
First during installation, you really really must not select any of the disks to create an SR as you could potentially wipe out an SR.
Second, you have to do the sr-probe and sr-introduce and pbd-create and pbd-plug to get the SRs back.
Third, you then have to use XOA to restore the metadata which according to the directions is pretty simple looking. According to: https://xen-orchestra.com/docs/metadata_backup.html#performing-a-restore
"To restore one, simply click the blue restore arrow, choose a backup date to restore, and click OK:"
But this isn't quite true. When we did it, the restore threw an error:
"message": "no such object d7b6f090-cd68-9dec-2e00-803fc90c3593",
"name": "XoError"
Panic mode sets in... It can't find the metadata? We try an earlier backup. Same error. We check the backup NFS share: no, it's there alright.
After a couple of hours scouring the internet and not finding anything, it dawns on us... The object XOA is looking for is the OLD server not a backup directory. It is looking for the server that died and no longer exists. The problem is, when you install the new server, it gets a new ID. But the restore program is looking for the ID of the dead server.
But how do you tell XOA, to copy the metadata over to the new server? It assumes that you want to restore it over an existing server. It does not provide a drop down list to pick where to deploy it.
In an act of desperation, we copied the backup directory to a new location and named it with the ID number of the newly recreated server. Now XOA could restore the metadata and we were able to recover the VMs in the SRs without issue.
This long story is really just a way to highlight the need for better host backup in three ways:
A) The first idea would be to create better instructions. It ain't nowhere as easy as the documentation says it is and it's easy to mess up the first step so bad that you can wipe out the contents of an SR. The documentation should spell this out.
B) The second idea is to add to the metadata backup something that reads the states of SR to PBD mappings and provides/saves a script to restore them. This would ease a lot of the difficulty in the actual restoring of a failed OS after a new OS can be installed.
C) The third idea is to provide a dropdown during the restoration of the metadata that allows the user to target a particular machine for the restore operation instead of blindly assuming you want to restore it over a machine that is dead and gone.
I hope this helps out the next person trying to bring a host back from the dead, and I hope it also helps make XOA a better product.
Thanks for a good description of the restore process.
I was wary of the metadata-backup option. It sounds simple and good to have, but as you said it is in no way a comprehensive restore of a pool.
I'd like to add my own opinion here. A full pool restore should cover networks, re-attaching SRs, and everything else needed to quickly get back up and running. Also, restoring a pool backup should be possible from the boot media: it could look for an NFS/CIFS mount or a USB disk with the backup files on it. This would avoid issues such as bonded networks not working.
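For reference, the sr-introduce/pbd-create steps from the quoted post look roughly like this. This is a sketch only: the UUIDs and the device path are placeholders, and the device-config arguments depend on your SR type (a local LVM SR is assumed here).

```shell
# Probe the disk; for an existing LVM SR the probe output
# includes the UUID of the SR already on the disk
xe sr-probe type=lvm device-config:device=/dev/sdb

# Reintroduce the SR using the UUID reported by the probe
xe sr-introduce uuid=<sr-uuid> type=lvm name-label="Local storage" content-type=user

# Recreate the PBD that connects this host to the SR, then plug it
xe pbd-create sr-uuid=<sr-uuid> host-uuid=$(xe host-list --minimal) device-config:device=/dev/sdb
xe pbd-plug uuid=<pbd-uuid>
```

After the pbd-plug the SR should show up again in XOA, and the metadata restore can then find the VDIs.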
-
RE: Remove VUSB as part of job
Might a different solution be to use a USB device server instead of directly attached USB? Something like this: https://www.seh-technology.com/products/usb-deviceserver/utnserver-pro.html (there are different options available). We use the my-utn-50a with hardware USB keys, and it has proven very reliable over the years.
-
RE: Netdata package is now available in XCP-ng
@andrewm4894 said in Netdata package is now available in XCP-ng:
Qq, what would be the best way for me to try spin up a sort of test or dev XCP-ng env for me to try things out on? Or is there sort of hardware involved such that this might not be so easy. In my mind I'm imagining spinning up a VM lol which probably shows my level of naivety
You can run XCP-ng inside a VM, as long as the hypervisor underneath exposes nested virtualisation. The actual installation of XCP-ng is very easy. Mostly click and run.
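If the outer hypervisor is KVM/QEMU on Linux, a quick check that nested virtualisation is actually exposed might look like this (kvm_intel is assumed; use kvm_amd on AMD hosts):

```shell
# On the outer host: "Y" or "1" means nested virtualisation is enabled
cat /sys/module/kvm_intel/parameters/nested

# Inside the guest that will run XCP-ng: the vmx (Intel) or svm (AMD)
# CPU flag must be visible, otherwise XCP-ng cannot run HVM guests
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```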
-
RE: SMAPIv3 - Feedback & Bug reports
@olivierlambert hi. I'm also eager to see how the new v3 is progressing. From my company point of view, being able to compact VDIs using guest trim/unmap is very valuable as it minimises storage space usage and improves backup/restore speeds.
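For context, reclaiming the space from inside a Linux guest is a one-liner, provided the storage stack passes discard through (which is exactly what SMAPIv3 would need to support):

```shell
# Trim all mounted filesystems that support discard;
# -v prints how much was trimmed on each mount point
fstrim -av
```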
-
RE: IP Address changed for a slave within a Pool, How do I reconfigure it?
For now, a warning in XOA when changing the management IP would be nice. It could include an instruction on how to update the pool members after the IP of the master was changed.
Is it possible to use hostnames instead of IPs? It could make things easier.
-
RE: IP Address changed for a slave within a Pool, How do I reconfigure it?
@olivierlambert said in IP Address changed for a slave within a Pool, How do I reconfigure it?:
On a slave host, the database is in read only. If the slave lost the connection with the master, there's no way to make any change into the "local" slave XAPI database.
It makes sense. It would have been nice if pool members got the updated IP through XAPI before the actual change happened on the master.
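For anyone who hits this, the usual manual fix is to point each disconnected slave at the master's new address. A sketch, run on the slave (the IP is a placeholder):

```shell
# Tell the slave's XAPI where the master now lives
xe pool-emergency-reset-master master-address=192.0.2.10

# Alternatively, fix the pointer file directly and restart the toolstack;
# on a slave, /etc/xensource/pool.conf contains "slave:<master-ip>"
sed -i 's/^slave:.*/slave:192.0.2.10/' /etc/xensource/pool.conf
xe-toolstack-restart
```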
Latest posts made by Forza
-
RE: Netdata package is now available in XCP-ng
@grapesmc At one point I had the idea to set up an XCP-ng build environment, build Netdata there, and then simply copy it over to the XCP-ng hosts. Unfortunately, I haven't been able to dedicate time to this so far.
-
RE: NVMe storage wrong detection
@laurentm XCP-ng doesn't support 4Kn sectors. Perhaps this is why only one of the disks could be used.
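A quick way to check whether an NVMe namespace is formatted 4Kn, assuming a Linux system with lsblk available (the device name is a placeholder):

```shell
# LOG-SEC = 4096 means the namespace is 4Kn; 512 means 512n/512e
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1
```

Many NVMe drives can be reformatted to 512-byte LBAs with nvme-cli (nvme format with the appropriate --lbaf index from nvme id-ns), but note that this destroys all data on the namespace.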
-
RE: GPU pass through - suggestion for suitable hardware
Thanks for the feedback.
I was thinking a low-end Quadro. The performance requirements are rather low, it's just that software rendering isn't acceptable.
It's interesting to hear that the RTX 2060 works. Didn't Nvidia block pass through with consumer cards?
Also, how is it working with RDP? Does 3D/HW acceleration work with Windows 10 VMs, or only with Windows Server?
https://linustechtips.com/topic/1140266-how-to-maximize-performance-on-remote-desktop/
-
GPU pass through - suggestion for suitable hardware
I'm looking to add a GPU to our setup for pass through to a specific VM. The user will connect to the VM via Remote Desktop/RDP.
What type of card should I use? I do not need extreme performance, but I need something that can handle CAD software with simple models.
Server is an EPYC 7402 24-core/48-thread system with an available PCIe 4.0 x16 slot.
-
RE: Epyc VM to VM networking slow
Perhaps try the Debian 12 guest with mitigations=off
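In case it helps, a sketch of setting that inside the Debian 12 guest. This disables the CPU vulnerability mitigations, so it is only appropriate for benchmarking, not production:

```shell
# Append mitigations=off to the kernel command line in /etc/default/grub
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 mitigations=off"/' /etc/default/grub
sudo update-grub   # regenerate grub.cfg, then reboot the guest

# After the reboot, confirm the flag took effect
grep -o 'mitigations=off' /proc/cmdline
```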
-
RE: Epyc VM to VM networking slow
It could be a CPU/Xeon-specific optimisation that is very unfortunate on EPYCs. It isn't unheard of.
-
RE: Epyc VM to VM networking slow
Those are really interesting results.
How can we as a community best help find the root cause/debug this issue?
For example, is it an ovswitch issue or perhaps something to do with excessive context switches?
-
RE: Epyc VM to VM networking slow
I've found that iperf isn't super great at scaling its performance, which might be a small factor here.
I too see similar VM-to-VM performance figures on an AMD EPYC 7402P 24-core server: about 6-8 Gbit/s.
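One thing worth ruling out: a single iperf stream is often bound by one CPU core. Running parallel streams gives a better picture of what the virtual network path can actually do (the server IP is a placeholder):

```shell
# On the receiving VM
iperf3 -s

# On the sending VM: 8 parallel streams for 30 seconds
iperf3 -c 192.0.2.20 -P 8 -t 30
```

If the aggregate of 8 streams is much higher than a single stream, the bottleneck is per-stream CPU rather than the overall datapath.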
-
Alternative to memory ballooning
Memory ballooning is a good thing, but it can lead to issues, especially during migration. Since Citrix seems to have abandoned the technology, I am happy XCP-ng keeps it.
In the QEMU/KVM/VirtIO world there is a new alternative to ballooning: memory hot plug. https://virtio-mem.gitlab.io/
The basic idea is to emulate memory modules and use OS support for memory hot plug, something that has been used on bare-metal servers before.
Could this be an avenue for XCP-ng to develop, to overcome the limitations of traditional ballooning?
One clear benefit would be the ability to add more memory than was originally allocated to a guest without restarting it.
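For comparison, today's ballooning in XCP-ng is driven by the VM's dynamic memory range, set via xe. A sketch (the UUID is a placeholder; xe takes these values in bytes):

```shell
# Let the balloon driver float the guest between 2 GiB and 8 GiB
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2147483648 max=8589934592
```

The virtio-mem approach would instead grow the guest by hot plugging emulated memory modules, avoiding the need to pre-reserve the maximum in the guest's page tables.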
-
RE: Enhancement: Virtual OpenGL support (Virgl)
It might be happening after all
https://www.phoronix.com/news/AMD-Xen-GPU-For-Cars
https://www.phoronix.com/news/AMD-GPU-Xen-Hypervisor-S3
Having a virtio GPU with GL/Vulkan/D3D support would be really interesting.