@Biggen At the moment, XCP-ng Center provides some better views and overviews that are not yet available in XO. Hoping the next major version fixes this.
Best posts made by Forza
-
RE: [WARNING] XCP-ng Center shows wrong CITRIX updates for XCP-ng Servers - DO NOT APPLY - Fix released
-
RE: Best CPU performance settings for HP DL325/AMD EPYC servers?
Sorry for spamming the thread.
I have two identical servers (srv01 and srv02) with AMD EPYC 7402P 24-core CPUs. On srv02 I enabled the LLC as NUMA Node option. I've done some quick benchmarks with Sysbench on Ubuntu 20.10 with 12 assigned cores. Command line: sysbench cpu run --threads=12
It would seem that in this test the NUMA option is much faster, 194187 events vs 103769 events. Perhaps I am misunderstanding how sysbench works?
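For clarity, the test on each server was simply the following, run inside the 12-vCPU Ubuntu 20.10 guest; the figures quoted are the "total number of events" that sysbench reports after its default 10-second run:

    # run in the guest on srv01, then again in the guest on srv02
    sysbench cpu run --threads=12

    # the relevant numbers appear under "CPU speed" (events per second)
    # and "General statistics" (total number of events)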
With 7-zip the gain is much less, but still meaningful. A little slower in single-threaded performance but quite a bit faster in multi-threaded mode.
-
RE: Host stuck in booting state.
Problem was a stale connection with the NFS server. A reboot of the NFS server fixed the issue.
-
RE: Restoring a downed host ISNT easy
@xcprocks said in Restoring a downed host ISNT easy:
So, we had a host go down (OS drive failure). No big deal right? According to instructions, just reinstall XCP on a new drive, jump over into XOA and do a metadata restore.
Well, not quite.
First during installation, you really really must not select any of the disks to create an SR as you could potentially wipe out an SR.
Second, you have to do the sr-probe and sr-introduce and pbd-create and pbd-plug to get the SRs back.
Third, you then have to use XOA to restore the metadata which according to the directions is pretty simple looking. According to: https://xen-orchestra.com/docs/metadata_backup.html#performing-a-restore
"To restore one, simply click the blue restore arrow, choose a backup date to restore, and click OK:"
But this isn't quite true. When we did it, the restore threw an error:
"message": "no such object d7b6f090-cd68-9dec-2e00-803fc90c3593",
"name": "XoError",Panic mode sets in... It can't find the metadata? We try an earlier backup. Same error. We check the backup NFS share--no its there alright.
After a couple of hours scouring the internet and not finding anything, it dawns on us... The object XOA is looking for is the OLD server not a backup directory. It is looking for the server that died and no longer exists. The problem is, when you install the new server, it gets a new ID. But the restore program is looking for the ID of the dead server.
But how do you tell XOA, to copy the metadata over to the new server? It assumes that you want to restore it over an existing server. It does not provide a drop down list to pick where to deploy it.
In an act of desperation, we copied the backup directory to a new location and named it with the ID number of the newly recreated server. Now XOA could restore the metadata and we were able to recover the VMs in the SRs without issue.
This long story is really just a way to highlight the need for better host backup in three ways:
A) The first idea would be to create better instructions. It ain't nowhere as easy as the documentation says it is and it's easy to mess up the first step so bad that you can wipe out the contents of an SR. The documentation should spell this out.
B) The second idea is to add to the metadata backup something that reads the states of SR to PBD mappings and provides/saves a script to restore them. This would ease a lot of the difficulty in the actual restoring of a failed OS after a new OS can be installed.
C) The third idea is to provide a dropdown during the restoration of the metadata that allows the user to target a particular machine for the restore operation, instead of blindly assuming you want to restore it over a machine that is dead and gone.
I hope this helps out the next person trying to bring a host back from the dead, and I hope it also helps make XOA a better product.
Thanks for a good description of the restore process.
I was wary of the metadata-backup option. It sounds simple and good to have, but as you said it is in no way a comprehensive restore of a pool.
I'd like to add my own opinion here. What I would like to see is a full pool restore, including networks, re-attached SRs and everything else that is needed to get back up and running quickly. A pool restore should also be available from the boot media: it could look for an NFS/CIFS mount or a USB disk with the backup files on it. This would avoid things like issues with bonded networks not working.
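For the next person in this situation, here is a rough sketch of the sr-probe / sr-introduce / pbd-create / pbd-plug sequence from the quote above, assuming a local LVM SR on /dev/sdb (the SR type, device path and all UUIDs are placeholders to adapt):

    # ask the host whether the disk already holds an SR; the output includes its UUID
    xe sr-probe type=lvm device-config:device=/dev/sdb

    # re-register the existing SR with the pool under its old UUID
    xe sr-introduce uuid=<sr-uuid> type=lvm name-label="Local storage" content-type=user

    # recreate the host<->SR link, then plug it
    xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/sdb
    xe pbd-plug uuid=<pbd-uuid>

xe host-list and xe pbd-list can be used to look up the UUIDs.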
-
RE: Remove VUSB as part of job
Might a different solution be to use a USB network bridge instead of direct-attached USB? Something like this: https://www.seh-technology.com/products/usb-deviceserver/utnserver-pro.html (there are different options available). We use the my-utn-50a with hardware USB keys and it has proven very reliable over the years.
-
RE: Need some advice on retention
@rtjdamen you could simply make two backup jobs, one for daily backups and one for monthly backups.
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
Since the nfs shares can be mounted on other hosts, I'd guess a fsid/clientid mismatch.
In the share, always specify the fsid export option. If you do not use it, the NFS server tries to determine a suitable id from the underlying mounts, which is not always reliable, for example after an upgrade or other changes. If you combine this with a client that uses the hard mount option and the fsid changes, it will not be possible to recover the mount because the client keeps asking for the old id.
NFSv3 uses rpcbind and NFSv4 doesn't, though this shouldn't matter if your NFS server supports both protocols. With NFSv4 you should not export the same directory twice; that is, do not export the root directory /mnt/datavol if /mnt/datavol/dir1 and /mnt/datavol/dir2 are exported.
So to fix this, you can adjust your exports (fsid, nesting) and the NFS mount option (to soft), reboot the NFS server and client, and see if it works.
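As a made-up illustration of what I mean, an /etc/exports along these lines (paths, network and fsid values are placeholders for your own setup):

    # export each directory separately, each with its own explicit fsid,
    # instead of exporting the parent /mnt/datavol
    /mnt/datavol/dir1  192.168.0.0/24(rw,sync,no_subtree_check,fsid=101)
    /mnt/datavol/dir2  192.168.0.0/24(rw,sync,no_subtree_check,fsid=102)

exportfs -ra reloads the export table after editing, without restarting the NFS server.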
-
RE: Netdata package is now available in XCP-ng
@andrewm4894 said in Netdata package is now available in XCP-ng:
Qq, what would be the best way for me to try spin up a sort of test or dev XCP-ng env for me to try things out on? Or is there sort of hardware involved such that this might not be so easy. In my mind I'm imagining spinning up a VM lol which probably shows my level of naivety
You can run XCP-ng inside a VM, as long as the hypervisor underneath exposes nested virtualisation. The actual installation of XCP-ng is very easy. Mostly click and run.
-
RE: [DEPRECATED] SMAPIv3 - Feedback & Bug reports
@olivierlambert Hi. I'm also eager to see how the new v3 is progressing. From my company's point of view, being able to compact VDIs using guest trim/unmap is very valuable as it minimises storage space usage and improves backup/restore speeds.
Latest posts made by Forza
-
RE: Change Resolution of the XCP-NG console itself
@thetechhipster Ah, the xsconsole resolution could probably be changed from the grub/kernel command line.
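I have not tried this on XCP-ng itself, so take it as a sketch only: on a BIOS-booted Xen host the console mode is normally chosen with Xen's vga= option on the line in grub.cfg that loads xen.gz (the path and the mode below are assumptions, and UEFI systems behave differently):

    # in /boot/grub/grub.cfg, append to the existing xen.gz line, keeping its other options
    ... /boot/xen.gz <existing options> vga=gfx-1024x768x32
    # vga=ask instead makes Xen list the available modes at the next boot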
-
RE: Change Resolution of the XCP-NG console itself
Perhaps it would be possible to rebuild the bundled Tianocore image?
-
RE: Change Resolution of the XCP-NG console itself
At VM start, during Tiano firmware logo display, press ESC.
I often find the boot delay too short, which is annoying. Would it be possible for XCP-ng to set a slightly longer default?
-
RE: I/O errors on file restore
@flakpyro said in I/O errors on file restore:
@Forza Did you ever get to the bottom of this? I am experiencing the same issue currently.
No. I still have an open support ticket on this.
-
RE: Need some advice on retention
@rtjdamen that works too, of course. I find it easier to manage multiple jobs and use tags to match VMs.
For example
Weekly full
Daily full
Every 4h incremental
Every 1h incremental
...
Each VM is then tagged with the type of backups that should be used. It also helps me see instantly what type of backups a specific VM has.
-
RE: Need some advice on retention
@rtjdamen you could simply make two backup jobs, one for daily backups and one for monthly backups.
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary That's OK. I just didn't want to confuse things and give you bad advice.
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@CodeMercenary said in All NFS remotes started to timeout during backup but worked fine a few days ago:
I used exportfs -v on UNRAID to look at the share options, without changing anything first, and they were set to:
(async,wdelay,hide,no_subtree_check,fsid=101,sec=sys,rw,secure,root_squash,no_all_squash)
So, fsid was already being used.
This is good.
I'm having trouble getting the VDIs detached from dom0.
I was thinking the issue was using the NFS share as a Remote (for storing backups), not as an SR (storage repository for VMs/VDIs). Those are very different things with different needs. Using soft on an SR can cause corruption in your VDIs. I would also pay attention to the async export option, as this can lead to corruption if your NFS server crashes or loses power (unless you have battery backup, etc.).
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
@tjkreidl said in All NFS remotes started to timeout during backup but worked fine a few days ago:
@Forza That all is good advice. Again, the showmount command is a good utility that can show you right away if you can see/mount the storage device from your host.
I do not think showmount lists NFSv4-only clients, at least not on my system. For NFSv4 I can see connected clients via /proc/fs/nfsd/clients/.
Debug logging on a Linux NFS server can be controlled with rpcdebug. To enable all debug output for nfsd you can use rpcdebug -m nfsd -s all, though on Unraid/Synology it might be different. In dmesg it would look like this:
[465664.478823] __find_in_sessionid_hashtbl: 1722337149:795964100:1:0
[465664.478829] nfsd4_sequence: slotid 0
[465664.478831] check_slot_seqid enter. seqid 4745 slot_seqid 4744
[465664.478928] found domain 10.5.1.1
[465664.478937] found fsidtype 7
[465664.478941] found fsid length 24
[465664.478944] Path seems to be </mnt/6TB/volume/haiku>
[465664.478946] Found the path /mnt/6TB/volume/haiku
[465664.478994] --> nfsd4_store_cache_entry slot 00000000e3c4c7e5
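For completeness, this is how I would toggle it on a stock Linux NFS server, going by the rpcdebug man page (clear the flags again afterwards, or the kernel log gets very noisy):

    # enable all debug flags for the nfsd module, and optionally the sunrpc layer
    rpcdebug -m nfsd -s all
    rpcdebug -m rpc -s all

    # reproduce the problem, then read the kernel log
    dmesg -T | tail

    # switch the debug flags off again
    rpcdebug -m nfsd -c all
    rpcdebug -m rpc -c all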
-
RE: All NFS remotes started to timeout during backup but worked fine a few days ago
Since the nfs shares can be mounted on other hosts, I'd guess a fsid/clientid mismatch.
In the share, always specify the fsid export option. If you do not use it, the NFS server tries to determine a suitable id from the underlying mounts, which is not always reliable, for example after an upgrade or other changes. If you combine this with a client that uses the hard mount option and the fsid changes, it will not be possible to recover the mount because the client keeps asking for the old id.
NFSv3 uses rpcbind and NFSv4 doesn't, though this shouldn't matter if your NFS server supports both protocols. With NFSv4 you should not export the same directory twice; that is, do not export the root directory /mnt/datavol if /mnt/datavol/dir1 and /mnt/datavol/dir2 are exported.
So to fix this, you can adjust your exports (fsid, nesting) and the NFS mount option (to soft), reboot the NFS server and client, and see if it works.