Subcategories

  • All Xen related stuff

    585 Topics
    6k Posts
    @Chemikant784 It's likely the fix is a combination of both the Microsoft and Xen tools; I doubt XCP-ng itself has anything to do with this issue. I never had time to check the XCP-ng guest tools for Windows to see if this happened there, so I'm guessing no, or at least it's untested. All my hosts are now on XCP-ng 8.3 and I don't see any point in testing 8.2 since it is EOL.

    That said, I'm no farther along in my Server 2025 testing; too many other things going on to think about it right now. If I have time I need to burn the vSphere portion of my lab down and install either Harvester HCI or Windows Server for Hyper-V. Broadcom is (seemingly) going out of their way to prevent people like me (or us) from learning their products and using them in our labs to further that goal. I've explained this several times to VMUG Advantage managers, but they seem so tied up in clawing out some continuing relationship with Broadcom that they will not "rock the boat". I've raised these things in Broadcom webcasts as well; always a run-around with no answers. Sorry for the rant.

    All that said, I'm eagerly awaiting XCP-ng 9, though I suspect the Alpha or Beta may wait until XO 6 is finished (just a guess). The updated kernel brings some storage changes that I really want to test, NFS nconnect=XX being one of them, to see if I can get a little better performance to/from the disks. The ESXi default was nconnect=4 and the VMs were slightly faster to/from their disks (all thin provisioned). The 4k block size and smaller is what I want to improve in all this.
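    For anyone wanting to experiment once a newer dom0 kernel is available, a minimal sketch of the NFS client options involved (the server IP, export path, and mount point below are placeholders, not a real setup; nconnect needs a client kernel of roughly 5.3 or newer):

```shell
# Sketch only: nconnect opens multiple TCP connections per NFS mount.
# 192.0.2.10, /export/sr and /mnt/sr are placeholders.
OPTS="vers=4.1,nconnect=4,hard"
echo "mount -t nfs -o $OPTS 192.0.2.10:/export/sr /mnt/sr"
```

    The command is echoed rather than executed here; run the real mount (as root) only against storage you control.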
  • The integrated web UI to manage XCP-ng

    23 Topics
    339 Posts
    @lsouai-vates Great! Thanks for addressing this
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    105 Topics
    1k Posts
    @DustinB When I tried to do that the first thing the migration process did was to power off the VM.
  • Hardware related section

    128 Topics
    1k Posts
    TeddyAstie
    I've seen cases where a hard reset is forced when some devices can't DMA. Maybe it's related. If that's the case, something should show up in the IPMI, and the crash is usually instantaneous; otherwise, there is some delay (~5 seconds) between the Xen/Dom0 crash and the actual reboot.
  • The place to discuss new additions into XCP-ng

    244 Topics
    3k Posts
    Hi all, not sure if anyone is still following this thread, but just in case... Over the last few months I’ve been working on a CSI driver for Kubernetes that integrates with Xen Orchestra. I’ve been running it in my own setup for a couple of weeks now and it seems to be working well. I originally built it for my own needs, but I’ve since cleaned it up and added documentation so others can try it out. If you do, I’d love to hear your feedback, issues, or feature requests. Here is the link: github.com/m4rCsi/csi-xen-orchestra-driver

    Features:
      • Dynamic provisioning (create disks on demand via PVCs)
      • Migration of disks between storage repositories (meant for local SRs)
      • Static provisioning (use an existing VDI by UUID)
      • Offline volume expansion
      • Topology aware (pool, and optionally host), with the help of xenorchestra-cloud-controller-manager
  • 0 Votes
    1 Posts
    14 Views
    No one has replied
  • XCP-ng DR on Azure

    -1 Votes
    4 Posts
    94 Views
    olivierlambert
    It's not a trivial scenario indeed. Dom0 is a PV guest (in other words: a VM) on top of a hypervisor (Xen), itself on top of another hypervisor (Hyper-V). As you can see, more layers mean more problems.
  • Snapshot Question

    0 Votes
    2 Posts
    56 Views
    Sorry, I'm asking whether I'd be safe deleting the snapshots.
  • Unbootable VHD backups

    0 Votes
    19 Posts
    279 Views
    @AtaxyaNetwork said in Unbootable VHD backups:

    @Schmidty86 Try to detach the disk and reattach it; it should be xvda in order to be bootable

    That's what I was thinking as well, but obviously something is off with this VM. @Schmidty86, is the old host still online? If so you might be able to perform a live migration or a replication job to copy it from the old host to the new one.
  • CBT Error when powering on VM

    0 Votes
    28 Posts
    496 Views
    AlmaLinux 8.10
  • Debian 9 virtual machine does not start in xcp-ng 8.3

    0 Votes
    7 Posts
    425 Views
    Forza
    @mdavico said in Debian 9 virtual machine does not start in xcp-ng 8.3:

    Update: If I change the vCPU configuration to 1 socket with 4 cores per socket, the VM starts correctly

    Interesting. First time I've heard that it had any effect at all on a VM.
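    For reference, a topology change like the one @mdavico describes can be applied from dom0 with the xe CLI; a hedged sketch below (the UUID is a placeholder, the VM should be halted first, and the commands are echoed rather than executed):

```shell
# Sketch: pin a halted VM to 1 socket x 4 cores. The UUID is a placeholder;
# drop the echo to actually run these in dom0.
VM_UUID="00000000-0000-0000-0000-000000000000"
echo "xe vm-param-set uuid=$VM_UUID VCPUs-max=4 VCPUs-at-startup=4"
echo "xe vm-param-set uuid=$VM_UUID platform:cores-per-socket=4"
```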
  • What to do about Realtek RTL8125 RTL8126 RTL8127 drivers

    0 Votes
    12 Posts
    2k Views
    @olivierlambert Great to know it's not something I was doing wrong! Hopefully these other cards I have ordered will afford me more luck! Cheers
  • RHEL UEFI boot bug

    0 Votes
    5 Posts
    310 Views
    kiu
    Hello, thank you for your reply @bogikornel @TrapoSAMA. Here are my processor specifications: Intel Xeon E5-1620 v2 (8) @ 3.691GHz. Unfortunately @Andrew, I have to use RHEL 10 on my server ^^ but thank you for providing the link. I will change my processor/server.
  • DR error - (intermediate value) is not iterable

    0 Votes
    2 Posts
    198 Views
    I worked with ChatGPT on this for a bit. We have narrowed it down to an issue with the NFS Storage that I ship the backups to. "When you recreated storage and moved data back, OMV is technically exporting a different underlying filesystem object than before. NFS clients that had an old handle cached (your XCP-ng host) try to access it and get ESTALE. That explains the initial backup errors and why deleting/re-adding the SR is failing now." I had to remove the NFS storage from XCP-ng, then delete the NFS share from OMV, then add the NFS share back to OMV, and then add it back to XCP-ng. I probably could have resolved this with a reboot, but I didn't wanna. This issue is resolved now.
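    The remove/re-add cycle described above can also be done from dom0 with the xe CLI; a hedged sketch (all UUIDs, the server IP, and the export path are placeholders, and the commands are echoed here rather than executed):

```shell
# Hedged sketch of the detach/forget/re-add cycle for an NFS SR in dom0.
# PBD and SR UUIDs are placeholders; find them with xe sr-list / xe pbd-list.
PBD_UUID="<pbd-uuid>"
SR_UUID="<sr-uuid>"
echo "xe pbd-unplug uuid=$PBD_UUID"
echo "xe sr-forget uuid=$SR_UUID"
echo "xe sr-create name-label=backup-nfs type=nfs content-type=user" \
     "device-config:server=192.0.2.20 device-config:serverpath=/export/backups"
```

    Note that sr-forget only detaches the SR from the pool's metadata; rebuilding the export on the OMV side (as in the post) is what actually clears the stale NFS file handle.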
  • 0 Votes
    31 Posts
    2k Views
    As @Andrew said, your host itself is unhealthy. You might be able to remove the CPU heatsink, clean it up, and apply some fresh paste to address the CPU overheating (if the paste is shot). As for the memory issue, run a memtest on the host and see what is reported.
  • Connection failed "EHOSTUNREACH"

    0 Votes
    4 Posts
    218 Views
    @santos_luan Check if there is any firewall issue on the XO-ce side.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    1 Votes
    11 Posts
    1k Views
    olivierlambert
    Just quickly chiming in to confirm what @bleader said. We'll be happy to assist you further, especially to put you in contact with our head of security at Vates to discuss our future certification plans (he's a former ANSSI employee BTW).
  • 0 Votes
    7 Posts
    1k Views
    olivierlambert
    CPU speed is great for speeding up all Xen operations (using grants, for example). But tapdisk has a lot of room to get better beyond that, thanks to multiqueue and so on. However, it's not clear whether it's better to improve tapdisk or to build something different. This is an active topic of research.
  • Windows Server not listening to radius port after vmware migration

    0 Votes
    6 Posts
    353 Views
    nikade
    @acebmxer said in Windows Server not listening to radius port after vmware migration:

    After migrating our Windows server that hosts our Duo Proxy manager, we're having an issue.

    [info] Testing section 'radius_client' with configuration:
    [info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'}
    [error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812
    [debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host

    After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update. Any suggestions? I'm thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well...

    On your VM that runs the Duo Auth Proxy service, check if the service is actually listening on the external IP or if it's just listening on 127.0.0.1. If it's just listening on 127.0.0.1, you can try to repair the Duo Auth Proxy service; take a snapshot before doing so. Also, if you're using encrypted passwords in your Duo Auth Proxy configuration you probably need to re-encrypt them; just a heads up, since I just had to do so after migrating one of ours.

    Edit: Do you have the "interface" option specified in your Duo Auth Proxy configuration?
  • Best practices for small/edge/IoT deployments? (remote management, power)

    0 Votes
    5 Posts
    392 Views
    We have some sites with a single-host XCP-ng pool backed by a small UPS. We install nut directly in dom0. I'm aware of the policy against adding anything to dom0, but we believe this use case fits the recommendations (simple enough, no vast dependencies, marginal resource usage, no interference...). With proper testing it works pretty well. nut inside a dedicated RPi definitely makes sense for a site with multiple hosts backed by the same UPS.
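    For context, a minimal sketch of what a single-USB-UPS NUT driver config might look like (section name and description are examples, not taken from the post above; this is the stock `/etc/ups/ups.conf` format):

```ini
# Minimal sketch of /etc/ups/ups.conf for one USB UPS (names are examples)
[myups]
  driver = usbhid-ups
  port = auto
  desc = "Single-host pool UPS"
```

    The rest of the setup (upsd, upsmon shutdown commands) follows the standard NUT documentation and is worth testing end-to-end before relying on it in dom0.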
  • Unable to Access MGMT interface/ No NICS detected

    0 Votes
    24 Posts
    2k Views
    @AtaxyaNetwork I'll check it out! I'm currently on Chrome, so I'll see if they have something close to it. Thank you!
  • Migration compression is not available on this pool

    0 Votes
    9 Posts
    477 Views
    henri9813
    Hello, we tried the compression feature. You "can see" a benefit only if you have shared storage, and even then, migration between 2 nodes is already very fast, so we don't see a major difference; maybe a VM with a lot of RAM (>32GB) would. If you don't have shared storage (like XOSTOR, NFS, iSCSI), then you will not see any difference, because there is a limitation of 30MB/s-40MB/s (see here: https://xcp-ng.org/forum/topic/9389/backup-migration-performance). Best regards,
  • Multi gpu peer to peer not available in vm

    0 Votes
    4 Posts
    279 Views
    olivierlambert
    Hmm, I'm not sure it's even possible due to the nature of the isolation provided by Xen. Let me ask @Team-Hypervisor-Kernel
  • Internal error: Not_found after Vinchin backup

    0 Votes
    56 Posts
    5k Views
    olivierlambert
    So you have to dig into the SMlog to check what's going on.
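    In dom0, the storage manager logs to /var/log/SMlog, and a grep like the one below is a reasonable first pass. It is demonstrated here against two sample lines (the log content shown is made up for illustration):

```shell
# Sketch: scan SMlog-style lines for common failure keywords.
# Against the real file in dom0: grep -iE 'error|exception|not_found' /var/log/SMlog
printf '%s\n' \
  'Jan 01 12:00:01 sm: [1234] vdi_attach succeeded' \
  'Jan 01 12:00:02 sm: [1235] raised Not_found' \
  | grep -iE 'error|exception|not_found'
```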
  • Migrating from XCP-ng Windows guest tools to Citrix

    0 Votes
    20 Posts
    2k Views
    I did it that way so as to get the old Citrix driver first, and then let it update and watch it reboot. That was my logic anyway.

    @dinhngtu said in Migrating from XCP-ng Windows guest tools to Citrix:

    @bberndt Okay, I managed to reproduce your situation. I think it's because the "driver via Windows Update" option was enabled after installing the XS drivers, which caused the drivers to lock onto the non-C000 device and prevent updates from coming in. Normally, XenClean should be able to fix the situation. But if you want to fix things manually, or if things still don't work (C000 is still not active), here's a procedure that should fix the problem:

    1. Take a snapshot/backup/etc.
    2. Keep a note of static IP addresses (if you have any; there's a chance those will be lost). You can also use our script here: https://github.com/xcp-ng/win-pv-drivers/blob/xcp-ng-9.1/XenDriverUtils/Copy-XenVifSettings.ps1
    3. Reboot in safe mode and disable the non-C000 device.
    4. Reboot back to normal mode; it'll ask you to reboot a few more times. The C000 device should now be active and you should be able to get driver updates again.
    5. (Optional) You can now enable and manually update the non-C000 device (Browse my computer - Let me pick).