XCP-ng

    Forza

    @Forza

    Reputation: 115
    Profile views: 112
    Posts: 473
    Followers: 0
    Following: 1
    Website wiki.tnonline.net


    Best posts made by Forza

    • RE: [WARNING] XCP-ng Center shows wrong CITRIX updates for XCP-ng Servers - DO NOT APPLY - Fix released

      @Biggen At the moment, XCP-ng Center provides some better views and overviews not yet available in XO. Hoping the next major version fixes this 🙂

      posted in News
      Forza
    • RE: XCP-ng Guest Agent - Reported Windows Version for Servers

      @olivierlambert said in XCP-ng Guest Agent - Reported Windows Version for Servers:

      It's funny to see Microsoft having a version 10 for an edition named 11. I suppose it's not a surprise for an organization that huge.

      They did say that Windows 10 would be the last version of Windows... 😄

      posted in XCP-ng
    • RE: Citrix or XCP-ng drivers for Windows Server 2022

      @dinhngtu Thank you. I think it is clear for me now.

      The docs at https://xcp-ng.org/docs/guests.html#windows could be improved to cover all three options, and also made a little more concise so they are easier to read.

      posted in XCP-ng
    • RE: Epyc VM to VM networking slow

      Tested the new updates on my prod EPYC 7402P pool with iperf3. Seems like quite a good uplift 🙂

      Ubuntu 24.04 VM (6 cores) -> bare metal server (6 cores) over a 2x25Gbit LACP link.

      Pre-patch

      • iperf3 -P1: 9.72 Gbit/s
      • iperf3 -P6: 14.6 Gbit/s

      Post-patch

      • iperf3 -P1: 11.3 Gbit/s
      • iperf3 -P6: 24.2 Gbit/s
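      For context, a quick bit of awk puts numbers on that uplift (values copied from the lists above):

```shell
# Percentage change between pre- and post-patch iperf3 throughput.
uplift() {
    awk -v pre="$1" -v post="$2" 'BEGIN { printf "%.1f%%\n", (post - pre) / pre * 100 }'
}
uplift 9.72 11.3   # -P1: single stream
uplift 14.6 24.2   # -P6: six streams
```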

      Ubuntu 24.04 VM (6 cores) -> Ubuntu 24.04 VM (6 cores) on the same host

      Pre-patch

      Forgot to test this...

      Post-patch

      • iperf3 -P1: 13.7 Gbit/s
      • iperf3 -P6: 30.8 Gbit/s
      • iperf3 -P24: 40.4 Gbit/s

      Our servers have Last-Level Cache (LLC) as NUMA Node enabled, as most of our VMs do not have a huge number of vCPUs assigned. This means the EPYC 7402P (24c/48t) presents 8 NUMA nodes. However, we do not use xl cpupool-numa-split.

      posted in Compute
    • RE: Best CPU performance settings for HP DL325/AMD EPYC servers?

      Sorry for spamming the thread. 🙂

      I have two identical servers (srv01 and srv02) with AMD EPYC 7402P 24 Core CPUs. On srv02 I enabled the LLC as NUMA Node.

      I've done some quick benchmarks with Sysbench on Ubuntu 20.10 with 12 assigned cores. Command line: sysbench cpu run --threads=12

      It would seem that in this test the NUMA option is much faster, 194187 events vs 103769 events. Perhaps I am misunderstanding how sysbench works?
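      As a sanity check on those numbers, the ratio works out to roughly 1.9x (simple arithmetic, nothing sysbench-specific):

```shell
# Events ratio between the LLC-as-NUMA run and the default run,
# numbers copied from the sysbench results quoted in this post.
awk 'BEGIN { printf "%.2fx\n", 194187 / 103769 }'
```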

      b65ec3da-4b1d-430e-b90d-02542fe59552-image.png

      With 7-zip the gain is much smaller, but still meaningful: a little slower in single-threaded performance but quite a bit faster in multi-threaded mode.
      f9592ee9-d327-4ce1-9e34-0ee86280d9e9-image.png

      posted in Compute
    • RE: Host stuck in booting state.

      Problem was a stale connection with the NFS server. A reboot of the NFS server fixed the issue.

      posted in Compute
    • RE: Restoring a downed host ISNT easy

      @xcprocks said in Restoring a downed host ISNT easy:

      So, we had a host go down (OS drive failure). No big deal right? According to instructions, just reinstall XCP on a new drive, jump over into XOA and do a metadata restore.

      Well, not quite.

      First, during installation, you really must not select any of the disks to create an SR, as you could potentially wipe out an existing SR.

      Second, you have to do the sr-probe and sr-introduce and pbd-create and pbd-plug to get the SRs back.

      Third, you then have to use XOA to restore the metadata, which looks pretty simple according to the directions at https://xen-orchestra.com/docs/metadata_backup.html#performing-a-restore

      "To restore one, simply click the blue restore arrow, choose a backup date to restore, and click OK:"

      But this isn't quite true. When we did it, the restore threw an error:

      "message": "no such object d7b6f090-cd68-9dec-2e00-803fc90c3593",
      "name": "XoError",

      Panic mode sets in... It can't find the metadata? We try an earlier backup. Same error. We check the backup NFS share: no, it's there all right.

      After a couple of hours scouring the internet and not finding anything, it dawns on us... The object XOA is looking for is the OLD server not a backup directory. It is looking for the server that died and no longer exists. The problem is, when you install the new server, it gets a new ID. But the restore program is looking for the ID of the dead server.

      But how do you tell XOA to copy the metadata over to the new server? It assumes that you want to restore it over an existing server. It does not provide a drop-down list to pick where to deploy it.

      In an act of desperation, we copied the backup directory to a new location and named it with the ID number of the newly recreated server. Now XOA could restore the metadata and we were able to recover the VMs in the SRs without issue.

      This long story is really just a way to highlight the need for better host backup in three ways:

      A) The first idea would be to create better instructions. It is nowhere near as easy as the documentation says it is, and it is easy to mess up the first step so badly that you wipe out the contents of an SR. The documentation should spell this out.

      B) The second idea is to add to the metadata backup something that records the SR-to-PBD mappings and provides/saves a script to restore them. This would remove much of the difficulty of restoring a failed OS once a new OS has been installed.

      C) The third idea is to provide a drop-down during the metadata restore that lets the user target a particular machine for the restore operation, instead of blindly assuming you want to restore over a machine that is dead and gone.

      I hope this helps out the next person trying to bring a host back from the dead, and I hope it also helps make XOA a better product.

      Thanks for a good description of the restore process.

      I was wary of the metadata-backup option. It sounds simple and good to have, but as you said it is in no way a comprehensive restore of a pool.

      I'd like to add my own opinion here. A full pool restore should cover networking, re-attaching SRs and everything else needed to quickly get back up and running. Also, restoring a pool backup should be possible from the boot media: it could look for an NFS/CIFS mount or a USB disk holding the backup files. This would avoid issues such as bonded networks not working.
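      The sr-probe / sr-introduce / pbd-create / pbd-plug sequence described above can be sketched roughly like this. The SR type, device path and all UUIDs are placeholders, so treat it as an illustration rather than a recipe:

```shell
# Hedged sketch of re-attaching a local LVM SR after a host reinstall.
# <sr-uuid>, <host-uuid>, <pbd-uuid> and /dev/sdb are placeholders.
xe sr-probe type=lvm device-config:device=/dev/sdb       # reports the SR UUID found on disk
xe sr-introduce uuid=<sr-uuid> type=lvm \
   name-label="Local storage" content-type=user          # re-register the SR with the pool
xe pbd-create sr-uuid=<sr-uuid> host-uuid=<host-uuid> \
   device-config:device=/dev/sdb                         # recreate the host/SR connector
xe pbd-plug uuid=<pbd-uuid>                              # attach the SR
```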

      posted in Xen Orchestra
    • RE: Remove VUSB as part of job

      Might a different solution be to use a USB network bridge instead of directly attached USB? Something like this: https://www.seh-technology.com/products/usb-deviceserver/utnserver-pro.html (there are different options available). We use the my-utn-50a with hardware USB keys and it has proven very reliable over the years.

      posted in Xen Orchestra
    • RE: I/O errors on file restore

      I re-checked, but unfortunately the issue is not resolved. It does not happen on all VMs and files, so maybe something is wrong in the VDI itself?

      posted in Backup
    • RE: ZFS for a backup server

      @McHenry

      Looks like you want the Disaster Recovery option. It creates a ready-to-use VM on a separate XCP-ng server; if your main server fails, you can start the VM directly on the second server.

      In any case, backups can be restored with XO to any server and storage available in XCP-ng.

      posted in Backup

    Latest posts made by Forza

    • RE: Incorret time in bash-prompt and logs on XOA

      @Danp said in Incorret time in bash-prompt and logs on XOA:

      The bash prompt may be displaying the incorrect time due to these lines in /etc/bash.bashrc --

      # Default user timezone is EST
      if [ -z "$TZ" ]
      then
        export TZ=EST
      fi
      

      Are you getting the expected result if you comment out this section of code and then reboot?

      Had the same issue myself, and removing that helped.
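      The effect is easy to reproduce: TZ overrides the timezone per process, which is exactly what the quoted bashrc snippet forces shell-wide. POSIX-style zone specs work even without tzdata installed:

```shell
# %Z prints the abbreviation for whatever zone TZ selects.
TZ=EST5 date +%Z   # EST, fixed UTC-5 with no DST rule
TZ=UTC0 date +%Z   # UTC
```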

      posted in Management
    • RE: XCP-ng Guest Agent - Reported Windows Version for Servers

      @olivierlambert said in XCP-ng Guest Agent - Reported Windows Version for Servers:

      It's funny to see Microsoft having a version 10 for an edition named 11. I suppose it's not a surprise for an organization that huge.

      They did say that Windows 10 would be the last version of Windows... 😄

      posted in XCP-ng
    • RE: Performing automated shutdown during a power failure using a USB-UPS with NUT - XCP-ng 8.2

      @samuelolavo, nice work! I wonder if it could be improved by using an independent machine/unit that logs in remotely via SSH and executes the script? That way no modification of the XCP-ng dom0 itself is needed.

      posted in Compute
    • RE: Force Remove a NFS Storage Repository

      @kagbasi-ngc As far as I know, Linux cannot release it without rebooting.

      posted in Management
    • RE: Force Remove a NFS Storage Repository

      @kagbasi-ngc A fair warning that a lazy unmount doesn't release the mount or any active file descriptors; it just hides them from the mount output.

      posted in Management
    • RE: Epyc VM to VM networking slow

      @dinhngtu said in Epyc VM to VM networking slow:

      @Forza There's a new script here that will help you check the VM's status wrt. the Fix 1.

      Thank you. It does indeed look like the EPYC fix is active in XOA.

      [07:25 22] xoa:~$ python3 epyc-fix-check.py
      'xen-platform-pci' PCI IO mem address is 0xFB000000
      Grant table cacheability fix is ACTIVE.
      

      Has Vates checked if a newer kernel would help the network performance with XOA?

      Current kernel is: linux-image-amd64/oldstable,now 6.1.148-1 amd64 [installed]

      When trying to install any of the newer kernels (6.12.43-*), it immediately fails the dependency check:

      [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-
      linux-image-6.12.43+deb12-amd64                 linux-image-6.12.43+deb12-cloud-amd64-unsigned
      linux-image-6.12.43+deb12-amd64-dbg             linux-image-6.12.43+deb12-rt-amd64
      linux-image-6.12.43+deb12-amd64-unsigned        linux-image-6.12.43+deb12-rt-amd64-dbg
      linux-image-6.12.43+deb12-cloud-amd64           linux-image-6.12.43+deb12-rt-amd64-unsigned
      linux-image-6.12.43+deb12-cloud-amd64-dbg
      
      [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-amd64
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or if you are using the unstable
      distribution that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:
      
      The following packages have unmet dependencies:
       linux-image-6.12.43+deb12-amd64 : PreDepends: linux-base (>= 4.12~) but 4.9 is to be installed
      E: Unable to correct problems, you have held broken packages.
      
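      The unmet dependency above is just linux-base being too old for the backports kernel. A plausible, untested workaround, assuming the bookworm-backports repository is enabled in APT, would be to upgrade linux-base first:

```shell
# Assumption: bookworm-backports is configured in the APT sources.
# Upgrade linux-base past 4.12 first, then retry the kernel install.
apt install -t bookworm-backports linux-base
apt install -t bookworm-backports linux-image-6.12.43+deb12-amd64
```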
      posted in Compute
    • RE: Epyc VM to VM networking slow

      @olivierlambert said in Epyc VM to VM networking slow:

      Kernel version could have an impact and being unrelated to the fix we provided. Get an even more recent kernel on your XOA to test (eg one from testing)

      I was not able to update the kernel due to some issue in XOA installation. I commented on this at https://xcp-ng.org/forum/post/97522

      But to clarify: my goal is to reach the same performance in XOA as with our other VMs. I had assumed it lacked the kernel support, and that led to the confusion. Sorry for that.

      posted in Compute
    • RE: Epyc VM to VM networking slow

      @olivierlambert said in Epyc VM to VM networking slow:

      If you want to compare perf between kernels, do that with the same number of vCPUs ideally (and also the same env, ie different kernel versions with the same iperf version)

      Sure, but in this context it is not that relevant, since all the other VMs got a boost while XOA didn't as much.

      Does the Debian 6.1 kernel that XOA uses have the backported fixes mentioned in https://xcp-ng.org/blog/2025/09/01/september-2025-maintenance-update-for-xcp-ng-8-3/ ?

      Even if it does, it is clear that more recent kernels are much faster. Why not release an XOA with a more recent kernel?

      posted in Compute
    • RE: Epyc VM to VM networking slow

      @bleader said in Epyc VM to VM networking slow:

      @Forza By default the XOA VM has 2 vCPUs; how many vCPUs do your Ubuntu VMs have? Although iperf isn't running multithreaded in your test, there is one queue on the kernel side of the VM per vCPU to process packets.

      I have 8 vCPUs on XOA, 4 in the Alpine VM and 6 in the Ubuntu VM.

      -P4 runs iperf3 with 4 parallel streams.

      @bleader Are you saying you do not see any performance differences between XOA and VMs with more recent kernels?

      posted in Compute
    • RE: Epyc VM to VM networking slow

      @olivierlambert said in Epyc VM to VM networking slow:

      XOA's kernel should have the capability already, as it's a Debian 12 with stock kernel. Also, the bottleneck is ONLY between VMs on the same host.

      OK, but then I do not understand the huge difference.

      Tested some VM-VM traffic on the same host:

      Ubuntu (kernel 6.8) -> Alpine (kernel 6.12): 13.7 Gbit/s, P4 = 23.8 Gbit/s
      XOA (kernel 6.1) -> Ubuntu (kernel 6.8): 5.5 Gbit/s, P4 = 17.5 Gbit/s
      XOA (kernel 6.1) -> Alpine (kernel 6.12): 11.9 Gbit/s, P4 = 13.6 Gbit/s

      And for good measure, against the bare-metal NFS server:
      Alpine -> NFS SR (kernel 6.12): 13.7 Gbit/s, P4 = 23.4 Gbit/s

      posted in Compute