Subcategories

  • All Xen related stuff

    583 Topics
    6k Posts
    olivierlambert
    Thanks @dinhngtu and @stormi
  • The integrated web UI to manage XCP-ng

    23 Topics
    339 Posts
    @lsouai-vates Great! Thanks for addressing this
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    104 Topics
    1k Posts
    cichy
    @AtaxyaNetwork I appreciate the sentiment. I think this one is all on me, as pointed out by @Danp. My VDIs would not register without this step. I'm unsure as to why, because the error logs were completely blank within XO. Your post, in conjunction with the docs, was extremely helpful though!
  • Hardware related section

    125 Topics
    1k Posts
    @DustinB Hmm - just got done running memtest86+ - 4 passes - all 14 tests. No RAM errors. I wonder what would cause this error? I'll probably just save the config and reinstall. So strange.
  • The place to discuss new additions into XCP-ng

    242 Topics
    3k Posts
    Hello everyone, I absolutely understand the point regarding datacenters. I have been using XCP-ng for years in smaller, non-datacenter environments and it's great. Therefore I added the nut packages (8.2 and 8.3) and it works like a charm. Its installation is well explained in: https://xcp-ng.org/forum/topic/4300/performing-automated-shutdown-during-a-power-failure-using-a-usb-ups-with-nut-xcp-ng-8-2/13?_=1742229073030 However, I agree with @Kajetan321 that it would be great if the nut package could be included in the standard "updated" packages repo, since it adds quite some benefits (imho) for "smaller" IT environments. However, I do not know how much effort it takes to maintain that.
  • What to do about Realtek RTL8125 RTL8126 RTL8127 drivers

    0 Votes
    12 Posts
    1k Views
    @olivierlambert Great to know it's not something I was doing wrong! Hopefully these other cards I have ordered will afford me more luck! Cheers
  • RHEL UEFI boot bug

    0 Votes
    5 Posts
    206 Views
    kiu
    Hello, thank you for your reply @bogikornel @TrapoSAMA. Here are my processor specifications: Intel Xeon E5-1620 v2 (8) @ 3.691GHz. Unfortunately @Andrew, I have to use RHEL 10 on my server ^^ but thank you for providing the link. I will change my processor/server.
  • DR error - (intermediate value) is not iterable

    0 Votes
    2 Posts
    135 Views
    I worked with ChatGPT on this for a bit. We have narrowed it down to an issue with the NFS Storage that I ship the backups to. "When you recreated storage and moved data back, OMV is technically exporting a different underlying filesystem object than before. NFS clients that had an old handle cached (your XCP-ng host) try to access it and get ESTALE. That explains the initial backup errors and why deleting/re-adding the SR is failing now." I had to remove the NFS storage from XCP-ng, then delete the NFS share from OMV, then add the NFS share back to OMV, and then add it back to XCP-ng. I probably could have resolved this with a reboot, but I didn't wanna. This issue is resolved now.
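    The ESTALE recovery described above (detach the stale SR, re-export the share, re-attach) can be sketched with the xe CLI. The SR name, server IP, and export path below are hypothetical examples; `sr-forget` detaches the SR from the pool without touching the data on the NFS server.

    ```shell
    # Find the UUIDs of the stale SR and its PBD (name-label is an example).
    SR_UUID=$(xe sr-list name-label="NFS backup" params=uuid --minimal)
    PBD_UUID=$(xe pbd-list sr-uuid="$SR_UUID" params=uuid --minimal)

    # Detach and forget the stale SR; metadata is dropped, data stays on the server.
    xe pbd-unplug uuid="$PBD_UUID"
    xe sr-forget uuid="$SR_UUID"

    # After deleting and re-creating the export on the NFS server (OMV here),
    # re-introduce it as a fresh SR so clients get a fresh filehandle.
    xe sr-create type=nfs shared=true name-label="NFS backup" \
      device-config:server=192.168.1.10 device-config:serverpath=/export/backups
    ```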
  • 0 Votes
    31 Posts
    2k Views
    As @Andrew said, your host itself is unhealthy. You might be able to remove the CPU cooler, clean it up, and apply some new paste to address the CPU overheating (if the paste is shot). As for the memory issue, run a memtest on the host and see what is reported.
  • Connection failed "EHOSTUNREACH"

    0 Votes
    4 Posts
    145 Views
    @santos_luan Check if there is any firewall issue on the XO-ce side.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    1 Vote
    11 Posts
    1k Views
    olivierlambert
    Just quickly chiming in to confirm what @bleader said. We'll be happy to assist you further, especially to put you in contact with our head of security at Vates to discuss our future certification plans (he's a former ANSSI employee BTW).
  • 0 Votes
    7 Posts
    1k Views
    olivierlambert
    CPU speed is great for speeding up all Xen operations (using grants, for example). But tapdisk has a lot of room to get better outside of that, thanks to multiqueue and so on. However, it's not clear whether it's better to improve tapdisk or to build something different. This is an active topic of research.
  • Windows Server not listening to radius port after vmware migration

    0 Votes
    6 Posts
    271 Views
    nikade
    @acebmxer said in Windows Server not listening to radius port after vmware migration: After migrating our Windows server that hosts our Duo Proxy manager, we're having an issue. [info] Testing section 'radius_client' with configuration: [info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'} [error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812 [debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update. Any suggestions? I am thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well... On your VM that runs the Duo Auth Proxy service, check if the service is actually listening on the external IP or if it's just listening on 127.0.0.1. If it's just listening on 127.0.0.1, you can try to repair the Duo Auth Proxy service; take a snapshot before doing so. Also, if you're using encrypted passwords in your Duo Auth Proxy configuration you probably need to re-encrypt them, just a heads up, since I just had to do so after migrating one of ours. Edit: Do you have the "interface" option specified in your Duo Auth Proxy configuration?
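    The "listening on which address" check suggested above can be done from a command prompt on the Windows VM. This is a sketch; the service name `DuoAuthProxy` is the usual one for the Duo Authentication Proxy but may differ on your install, and the config file path is the default install location.

    ```shell
    :: Show what (if anything) is bound to the RADIUS port and on which address.
    :: A line like "127.0.0.1:1812" means the proxy is loopback-only.
    netstat -ano | findstr :1812

    :: If it is bound to loopback, review the "interface" option in
    :: C:\Program Files\Duo Security Authentication Proxy\conf\authproxy.cfg
    :: then restart the service so the new binding takes effect.
    net stop DuoAuthProxy
    net start DuoAuthProxy
    ```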
  • Best practices for small/edge/IoT deployments? (remote management, power)

    0 Votes
    5 Posts
    320 Views
    We have some sites with a single-host XCP-ng pool backed by a small UPS. We install nut directly in dom0. I'm aware of the policy against adding anything to dom0, but we believe this use case fits the recommendations (simple enough, no vast dependencies, marginal resource usage, no interference ...). With proper testing it works pretty well. nut inside a dedicated RPi definitely makes sense for a site with multiple hosts backed by the same UPS.
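    For a single-host setup like the one above, a standalone NUT configuration in dom0 can be sketched roughly as follows. The UPS name `myups` and the driver choice are example assumptions (USB HID UPS); see the forum thread linked earlier for the package install steps on XCP-ng.

    ```shell
    # /etc/ups/ups.conf -- declare the UPS (usbhid-ups covers most USB units):
    #   [myups]
    #     driver = usbhid-ups
    #     port = auto
    #
    # /etc/ups/upsmon.conf -- shut the host down cleanly when the battery is low:
    #   MONITOR myups@localhost 1 upsmon <password> master
    #   SHUTDOWNCMD "/sbin/shutdown -h +0"

    # Enable the NUT services and verify the UPS is reporting status:
    systemctl enable --now nut-driver nut-server nut-monitor
    upsc myups
    ```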
  • Unable to Access MGMT interface/ No NICS detected

    0 Votes
    24 Posts
    1k Views
    @AtaxyaNetwork I'll check it out! I'm currently on Chrome, so I'll see if they have something close to it. Thank you!
  • Migration compression is not available on this pool

    0 Votes
    9 Posts
    358 Views
    henri9813
    Hello, We tried the compression feature. You "can see" a benefit only if you have shared storage (and even then, migration between two nodes is already very fast; we don't see a major difference, but maybe a VM with a lot of RAM (>32GB) can see one). If you don't have shared storage (like XOSTOR, NFS, iSCSI), then you will not see any difference, because there is a limitation of 30-40 MB/s (see here: https://xcp-ng.org/forum/topic/9389/backup-migration-performance ) Best regards,
  • Debian 9 virtual machine does not start in xcp-ng 8.3

    0 Votes
    5 Posts
    224 Views
    olivierlambert
    OK so potentially a vCPU topology issue
  • Multi gpu peer to peer not available in vm

    0 Votes
    4 Posts
    206 Views
    olivierlambert
    Hmm I'm not sure it's even possible due to the nature of isolation provided by Xen Let me ask @Team-Hypervisor-Kernel
  • Internal error: Not_found after Vinchin backup

    0 Votes
    56 Posts
    4k Views
    olivierlambert
    So you have to dig in the SMlog to check what's going on
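    Digging into the SMlog, as suggested above, usually starts with something like this on the affected host (`/var/log/SMlog` is the storage manager log on XCP-ng; the grep pattern is just an example starting point):

    ```shell
    # Show the most recent storage-manager errors, e.g. the Not_found above:
    grep -iE 'error|exception|not_found' /var/log/SMlog | tail -n 50

    # Or follow the log live while re-running the failing Vinchin backup:
    tail -f /var/log/SMlog
    ```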
  • Migrating from XCP-ng Windows guest tools to Citrix

    0 Votes
    20 Posts
    2k Views
    I did it that way so as to get the old Citrix driver first, and then let it update and watch it reboot. That was my logic anyway. @dinhngtu said in Migrating from XCP-ng Windows guest tools to Citrix: @bberndt Okay, I managed to reproduce your situation. I think it's because the "driver via Windows Update" option was enabled after installing the XS drivers, which caused the drivers to lock onto the non-C000 device and prevent updates from coming in. Normally, XenClean should be able to fix the situation. But if you want to fix things manually, or if things still don't work (C000 is still not active), here's a procedure that should fix the problem: Take a snapshot/backup/etc. Keep a note of static IP addresses (if you have any; there's a chance those will be lost). You can also use our script here: https://github.com/xcp-ng/win-pv-drivers/blob/xcp-ng-9.1/XenDriverUtils/Copy-XenVifSettings.ps1 Reboot in safe mode and disable the non-C000 device. Reboot back to normal mode; it'll ask you to reboot a few more times. The C000 device should now be active and you should be able to get driver updates again. (Optional) You can now enable and manually update the non-C000 device (Browse my computer - Let me pick).
  • Pool Master

    0 Votes
    8 Posts
    289 Views
    @olivierlambert Dang, ok. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do really.
  • v8.2.1 rolling pool update getting stuck

    0 Votes
    4 Posts
    224 Views
    olivierlambert
    Do you have any SR hosted on a VM? (like an ISO SR on an NFS share inside a VM). This freezes NFS and makes the host take half an hour to restart. The logs should tell you why the RPU failed, if it failed
  • Other 2 hosts reboot when 1 host in HA enabled pool is powered off

    0 Votes
    10 Posts
    761 Views
    olivierlambert
    It's impossible to answer right off the bat without knowing in more detail what's going on. HA is a complex beast and, combined with HCI, requires a lot of knowledge to find what's causing your issue, between both xha and XOSTOR. In other words, it is very demanding to analyze all the logs and try to make sense of them. However, I can give you some clues: The HA log is at /var/log/xha.log. When you shut down a host, you should be able to watch (on each host) what the HA is deciding to do. My gut feeling: there's maybe a XOSTOR issue making the heartbeat SR unavailable, so all hosts autofence. Then you need to understand the XOSTOR logs to see why the cluster wasn't doing what's expected. My best advice: remove HA first, and only then investigate XOSTOR. Kill one node (not the master) and check if your VMs are still able to start/snapshot/write inside.
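    The debugging steps suggested above can be sketched as a few commands. `xe pool-ha-disable` is the standard way to turn HA off pool-wide; the LINSTOR/DRBD commands are the usual way to inspect the storage layer behind XOSTOR, assuming those CLIs are present on the host.

    ```shell
    # Watch HA decisions on each host while you power off the test host:
    tail -f /var/log/xha.log

    # Remove HA from the equation first (run on the pool master):
    xe pool-ha-disable

    # Then check the replicated-storage state behind XOSTOR:
    linstor resource list
    drbdadm status
    ```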
  • PXE Boot from new VM not working

    0 Votes
    2 Posts
    200 Views
    bleader
    @JBlessing As it looks like it does start, the networking side seems to be working, at least at first. Just for debugging purposes, you could try to switch that VM to BIOS instead of UEFI if possible; maybe it is related to what the PXE is starting in the VM. You could also try switching the VM between the Realtek and e1000 NICs; at this stage, the PV drivers are not there, so it is using an emulated NIC, and maybe the image your PXE starts doesn't like the one you're using and gets stuck somehow. As you're already using it with VMware, I assume you know how to size your VM, but if you went for a tight RAM value for this VM, you could try to give it more RAM to see if that could be related, as everything has to fit in RAM at some point; we may be using more at startup than VMware… Hope one of these helps
  • Can't get slave out of maintenance mode after yum updates

    0 Votes
    3 Posts
    193 Views
    olivierlambert
    About xsconsole: sometimes it's not refreshing. You can try to get access to the console, then type "xsconsole"; it will start and you should see that it works. You must have the master up to date if you want your slave to connect again. I never tried to elect a new master in the middle of an upgrade, and I would discourage it. Better to shut down some VMs on the master, upgrade it, and you are automatically back on track.