Subcategories

  • All Xen related stuff

    613 Topics
    6k Posts
    julienXOvates
    Hi @robbie-c, could you send a screenshot? Is that in XO?
  • The integrated web UI to manage XCP-ng

    27 Topics
    354 Posts
    Hi @olivierlambert and @pilow, thank you for your answers, it helps a lot. Regards, Olivier
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    124 Topics
    1k Posts
    @CyaVMware said: Have a simple question for someone. Looking at migrating a client to XCP. Currently they have a data disk that is exactly 2TB in VMware. My question is: is the limitation anything OVER 2TB, or is it 2TB AND above? I'd prefer not to use the VMware converter tool to shrink the disk to 1.99TB if I don't have to. The VHD format has a strict limit of roughly 2TB (around 2040 GB), so if your client's VM disks are larger than this, you're going to have trouble with it. However, QCOW2 support is almost production ready and about to reach the RC2 stage; once it reaches stable, you'll be able to safely go up to a maximum of around 16TB. That is if you're using an ext4-based SR; other filesystems have different limits. If you're using a networked file-sharing protocol, the limit of the host server's filesystem applies, though a QCOW2 VDI itself has a limit of 16 TiB.
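    A quick back-of-the-envelope check of the limits quoted in that answer (a minimal sketch of my own; the ~2040 GiB and 16 TiB figures come from the post above, not from querying XCP-ng):

    ```python
    # Sanity-check a disk size against the VHD / QCOW2 limits quoted above.
    # These constants are the figures from the post, not values read from XCP-ng.
    VHD_MAX_GIB = 2040            # ~2 TB hard limit of the VHD format
    QCOW2_MAX_GIB = 16 * 1024     # 16 TiB limit mentioned for QCOW2 VDIs

    def which_format(disk_gib: float) -> str:
        if disk_gib <= VHD_MAX_GIB:
            return "fits in a VHD"
        if disk_gib <= QCOW2_MAX_GIB:
            return "too big for VHD, needs QCOW2"
        return "too big even for QCOW2"

    print(which_format(2048))   # a full 2 TiB VMware data disk
    print(which_format(1990))   # disk shrunk to just under the VHD limit
    ```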
  • Hardware related section

    165 Topics
    2k Posts
    poddingue
    From what I understand, the OS-to-iDRAC pass-through is a direct USB link between the host and the BMC, designed for the hypervisor host itself to talk to the iDRAC, not something that would naturally bridge through Xen's networking layer into a VM. That might be why the PIF shows up in XCP-ng but the VM still can't reach 169.254.0.1; the bare metal case works because the OS sits directly on the hardware with no virtualisation layer in between. If you need the VM to have direct BMC access, USB passthrough might be worth exploring; XCP-ng does support passing USB devices through to VMs (https://docs.xcp-ng.org/compute#usb-passthrough), though I'm not certain the iDRAC virtual USB NIC would show up as a passthrough-able device. Might be worth a mention to @Team-Hypervisor-Kernel to find out; others here will know more than me.
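    For anyone wanting to try the USB passthrough route, here is a minimal check from dom0 (my own sketch; it only wraps the xe pusb-list command from the docs linked above, and whether the iDRAC virtual NIC actually appears in that list is exactly the open question):

    ```python
    # List the USB devices XCP-ng sees as passthrough candidates and print the
    # raw output; if the iDRAC virtual USB NIC is not listed here, it cannot be
    # passed through this way. Run in dom0 on the host, not inside a VM.
    import subprocess

    result = subprocess.run(["xe", "pusb-list"], capture_output=True, text=True, check=True)
    print(result.stdout.strip() or "no passthrough-capable USB devices found")
    ```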
  • The place to discuss new additions into XCP-ng

    252 Topics
    3k Posts
    nathanael-h
    @lknite Hello, sorry if replying in this old thread looks odd... but I wanted to share that we are starting to work on supporting Cluster API. We have an internal proof of concept: [screenshot]
  • Unbootable VHD backups

    0 Votes
    19 Posts
    2k Views
    @AtaxyaNetwork said in Unbootable VHD backups: @Schmidty86 Try to detach the disk and reattach it; it should be xvda in order to be bootable. That's what I was thinking as well, but obviously something is off with this VM. @Schmidty86, is the old host still online? If so, you might be able to perform a live migration or a replication job to copy it from the old host to the new one.
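    For reference, the detach/reattach idea can be done from dom0 with xe; a rough sketch follows (all UUIDs are placeholders, the VM should be halted, and you should double-check the current attachments with xe vbd-list first):

    ```python
    # Rough sketch of "detach the disk and reattach it at position 0 (xvda)".
    # UUIDs below are placeholders; this is an illustration, not a tested script.
    import subprocess

    def xe(*args: str) -> str:
        return subprocess.run(["xe", *args], capture_output=True, text=True, check=True).stdout.strip()

    OLD_VBD_UUID = "<vbd-uuid>"   # current, wrongly positioned attachment
    VM_UUID = "<vm-uuid>"
    VDI_UUID = "<vdi-uuid>"

    xe("vbd-destroy", f"uuid={OLD_VBD_UUID}")            # detach (VM must be halted)
    new_vbd = xe("vbd-create", f"vm-uuid={VM_UUID}",
                 f"vdi-uuid={VDI_UUID}", "device=0",     # position 0 -> xvda
                 "bootable=true", "type=Disk")
    print("new VBD:", new_vbd)
    ```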
  • CBT Error when powering on VM

    0 Votes
    28 Posts
    4k Views
    AlmaLinux 8.10
  • RHEL UEFI boot bug

    0 Votes
    5 Posts
    1k Views
    kiu
    Hello, thank you for your reply @bogikornel @TrapoSAMA . Here are my processor specifications: Intel Xeon E5-1620 v2 (8) @ 3.691GHz. Unfortunately @Andrew , I have to use RHEL 10 on my server ^^ but thank you for providing the link. I will change my processor/server.
  • DR error - (intermediate value) is not iterable

    0 Votes
    2 Posts
    541 Views
    I worked with ChatGPT on this for a bit. We have narrowed it down to an issue with the NFS Storage that I ship the backups to. "When you recreated storage and moved data back, OMV is technically exporting a different underlying filesystem object than before. NFS clients that had an old handle cached (your XCP-ng host) try to access it and get ESTALE. That explains the initial backup errors and why deleting/re-adding the SR is failing now." I had to remove the NFS storage from XCP-ng, then delete the NFS share from OMV, then add the NFS share back to OMV, and then add it back to XCP-ng. I probably could have resolved this with a reboot, but I didn't wanna. This issue is resolved now.
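    For what it's worth, the ESTALE symptom described above is easy to spot from dom0; a small illustration (the mount path is a placeholder for the SR's mount point):

    ```python
    # Detect a stale NFS handle on an SR mount point: once the export has been
    # recreated server-side, accessing the old mount raises errno.ESTALE until
    # the SR / mount is set up again.
    import errno, os

    SR_MOUNT = "/run/sr-mount/<sr-uuid>"  # placeholder path

    try:
        os.listdir(SR_MOUNT)
        print("mount looks healthy")
    except OSError as e:
        if e.errno == errno.ESTALE:
            print("stale NFS file handle: the export was recreated on the server")
        else:
            raise
    ```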
  • 0 Votes
    31 Posts
    6k Views
    As @Andrew said, your host itself is unhealthy. You might be able to remove the heatsink, clean it up and apply some new paste to address the CPU overheating issue (if the paste is shot). As for the memory issue, run a memtest on the host and see what is reported.
  • Connection failed "EHOSTUNREACH"

    0 Votes
    4 Posts
    653 Views
    @santos_luan Check if there is any firewall issue on the XO-ce side.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    1 Vote
    11 Posts
    3k Views
    olivierlambert
    Just quickly chiming in to confirm what @bleader said. We'll be happy to assist you further, especially to put you in contact with our head of security at Vates to discuss our future certification plans (he's a former ANSSI employee BTW).
  • 0 Votes
    7 Posts
    3k Views
    olivierlambert
    CPU speed is great for speeding up all Xen operations (grant handling, for example). But tapdisk has a lot of room for improvement beyond that, thanks to multiqueue and so on. However, it's not clear whether it's better to improve tapdisk or to build something different. This is an active topic of research.
  • Windows Server not listening to radius port after vmware migration

    0 Votes
    6 Posts
    1k Views
    nikade
    @acebmxer said in Windows Server not listening to radius port after vmware migration: After migrating our Windows server that hosts our Duo Proxy Manager, we're having an issue. [info] Testing section 'radius_client' with configuration: [info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'} [error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812 [debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update. Any suggestions? I am thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well... On your VM that runs the Duo Auth Proxy service, check whether the service is actually listening on the external IP or only on 127.0.0.1. If it is only listening on 127.0.0.1, you can try to repair the Duo Auth Proxy installation; take a snapshot before doing so. Also, if you're using encrypted passwords in your Duo Auth Proxy configuration, you probably need to re-encrypt them, just a heads up, since I had to do so after migrating one of ours. Edit: do you have the "interface" option specified in your Duo Auth Proxy configuration?
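    To check which address the proxy is actually bound to, something like this works on the Windows VM (a sketch assuming Python 3 and the psutil package are available there; netstat -ano gives the same information):

    ```python
    # Show which local address has UDP port 1812 (RADIUS) bound. If only
    # 127.0.0.1 shows up, the Duo Auth Proxy is unreachable from the network.
    import psutil

    RADIUS_PORT = 1812

    for conn in psutil.net_connections(kind="udp"):
        if conn.laddr and conn.laddr.port == RADIUS_PORT:
            print(f"UDP {conn.laddr.ip}:{conn.laddr.port} (pid {conn.pid})")
    ```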
  • Best practices for small/edge/IoT deployments? (remote management, power)

    0 Votes
    5 Posts
    975 Views
    We have some sites with a single-host XCP-ng pool backed by a small UPS. We install nut directly in dom0. I'm aware of the policy on adding anything to dom0, but we believe this use case fits the recommendations (simple enough, no vast dependencies, marginal resource usage, no interference, ...). With proper testing it works pretty well. nut inside a dedicated RPi definitely makes sense for a site with multiple hosts backed by the same UPS.
  • Unable to Access MGMT interface/ No NICS detected

    0 Votes
    24 Posts
    6k Views
    @AtaxyaNetwork I'll check it out! I'm currently on Chrome, so I'll see if they have something close to it. Thank you!
  • Migration compression is not available on this pool

    0 Votes
    9 Posts
    1k Views
    henri9813
    Hello, we tried the compression feature. You "can see" a benefit only if you have shared storage (and even then, migration between 2 nodes is already very fast; we don't see a major difference, though maybe a VM with a lot of RAM (>32GB) would show one). If you don't have shared storage (like XOSTOR, NFS or iSCSI), then you will not see any difference, because there is a limitation of 30-40MB/s (see here: https://xcp-ng.org/forum/topic/9389/backup-migration-performance). Best regards,
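    To put that 30-40MB/s ceiling in perspective, here is a quick estimate of how long moving a VM's data takes at different rates (my own arithmetic, not measured figures; the 300 MB/s value is just a hypothetical faster path):

    ```python
    # Rough transfer-time estimate: data to move divided by throughput.
    def transfer_minutes(size_gib: float, rate_mb_s: float) -> float:
        return size_gib * 1024 / rate_mb_s / 60

    for rate in (30, 40):               # the ceiling mentioned above, in MB/s
        print(f"100 GiB at {rate} MB/s: ~{transfer_minutes(100, rate):.0f} min")
    print(f"100 GiB at 300 MB/s: ~{transfer_minutes(100, 300):.0f} min")  # hypothetical faster link
    ```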
  • Multi gpu peer to peer not available in vm

    0 Votes
    4 Posts
    663 Views
    olivierlambert
    Hmm, I'm not sure it's even possible due to the nature of the isolation provided by Xen. Let me ask @Team-Hypervisor-Kernel.
  • Internal error: Not_found after Vinchin backup

    0 Votes
    56 Posts
    11k Views
    olivierlambert
    So you have to dig into the SMlog to check what's going on.
  • Migrating from XCP-ng Windows guest tools to Citrix

    0 Votes
    20 Posts
    4k Views
    I did it that way so as to get the old Citrix driver first, and then let it update and watch it reboot. That was my logic anyway. @dinhngtu said in Migrating from XCP-ng Windows guest tools to Citrix: @bberndt Okay, I managed to reproduce your situation. I think it's because the "driver via Windows Update" option was enabled after installing the XS drivers, which caused the drivers to lock onto the non-C000 device and prevent updates from coming in. Normally, XenClean should be able to fix the situation. But if you want to fix things manually, or if things still don't work (C000 is still not active), here's a procedure that should fix the problem: Take a snapshot/backup/etc. Keep a note of static IP addresses (if you have any; there's a chance those will be lost). You can also use our script here: https://github.com/xcp-ng/win-pv-drivers/blob/xcp-ng-9.1/XenDriverUtils/Copy-XenVifSettings.ps1 Reboot in safe mode and disable the non-C000 device. Reboot back to normal mode; it'll ask you to reboot a few more times. The C000 device should now be active and you should be able to get driver updates again. (Optional) You can now enable and manually update the non-C000 device (Browse my computer - Let me pick).
  • Pool Master

    0 Votes
    8 Posts
    867 Views
    @olivierlambert Dang, ok. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do really.
  • v8.2.1 rolling pool update getting stuck

    0 Votes
    4 Posts
    597 Views
    olivierlambert
    Do you have any SR relying on a VM (like an ISO SR on an NFS share served from inside a VM)? That freezes NFS and makes the host take half an hour to restart. The logs should tell you why the RPU failed, if it failed.
  • Other 2 hosts reboot when 1 host in HA enabled pool is powered off

    0 Votes
    10 Posts
    2k Views
    olivierlambert
    It's impossible to answer right off the bat without knowing in more detail what's going on. HA is a complex beast and, combined with HCI, it requires a lot of knowledge to find what's causing your issue between xha and XOSTOR. In other words, it is very demanding to analyze all the logs and try to make sense of them. However, I can give you some clues: the HA log is at /var/log/xha.log. When you shut down a host, you should be able to watch (on each host) what HA decides to do. My gut feeling: there's maybe a XOSTOR issue making the heartbeat SR unavailable, so all hosts auto-fence. Then you need to go through the XOSTOR logs to understand why the cluster wasn't doing what's expected. My best advice: remove HA first, and only then investigate XOSTOR. Kill one node (not the master) and check whether your VMs are still able to start/snapshot/write inside.
  • PXE Boot from new VM not working

    0 Votes
    2 Posts
    750 Views
    bleader
    @JBlessing As it looks like it does start, the networking side seems to be working, at least at first. Just for debugging purposes you could try to switch that VM to BIOS instead of UEFI if that is possible; maybe the issue is related to what the PXE is starting in the VM. You could also try switching the VM between the realtek and e1000 NICs: at this stage the PV drivers are not there, so the VM is using an emulated NIC, and maybe the image your PXE server boots doesn't like the one you're using and gets stuck somehow. As you're already using it with VMware, I assume you know how to size your VM, but if you went for a tight RAM value for this VM, you could try giving it more RAM to see if that is related, as everything has to fit in RAM at some point and we may be using more at startup than VMware does… Hope one of these helps.
  • Can't get slave out of maintenance mode after yum updates

    0 Votes
    3 Posts
    518 Views
    olivierlambert
    About xsconsole: sometimes it doesn't refresh. You can try to get access to the console and type "xsconsole"; that will start it and you should see that it works. You must have the master up to date if you want your slave to connect again. I never tried to elect a new master in the middle of an upgrade, and I would discourage it. Better to shut down some VMs on the master, upgrade it, and you are automatically back on track.