Subcategories

  • All Xen related stuff

    606 Topics
    6k Posts
    F
    @redakula said in Coral TPU PCI Passthrough: Frigate NVR, one of the popular uses for the Coral, doesn't recommend it for new installs either. Frigate updated its recommendations because of Google's decision to sunset the device and because alternative options are available to Frigate for image inferencing. The Coral is still supported, though, and Frigate is not the only use case or platform that can benefit from an accelerator. At the end of the day, if you've already got the hardware and it's efficient enough to run, then not using it is a waste of resources that could be allocated to other VMs.
  • The integrated web UI to manage XCP-ng

    26 Topics
    348 Posts
    olivierlambert
    It's not meant to be used like that. If you are behind a NAT, the right approach is to have your XOA behind the NAT and inside the same network as the hosts. That's because hosts will always use and return their internal IPs to connect to some resources (stats, consoles, etc.). XOA deals with that easily, being the "main control point" for all hosts behind your NAT (or an XO proxy if you prefer).
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    116 Topics
    1k Posts
    florent
    @yeopil21 You can target vSphere; it should work. The issue here is that XO fails to link one of your datastores to the datacenter. Is this XO built from source, or a XOA? You should have something in your server logs like "can't find datacenter for datastore", with the datacenter and datastore names as detected by XO. Are you using an admin account on vSphere, or is it a limited one?
  • Hardware related section

    159 Topics
    2k Posts
    T
    @comdirect Use this command (replace sda in the command below with the relevant device): cat /sys/block/sda/queue/scheduler. The active scheduler will be enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives use: grep "" /sys/block/*/queue/scheduler
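The bracketed-value convention above can be parsed in a one-liner; a minimal sketch (the helper name is made up, not a standard tool) that extracts the active scheduler from a line such as noop deadline [cfq]:

```shell
# active_scheduler: print the scheduler currently enclosed in
# brackets in a /sys/block/<dev>/queue/scheduler line.
# (Function name is illustrative, not part of any distribution.)
active_scheduler() {
  printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

# e.g.: active_scheduler "$(cat /sys/block/sda/queue/scheduler)"
```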
  • The place to discuss new additions into XCP-ng

    246 Topics
    3k Posts
    yann
    @Tristis-Oris Yes, it is likely you're using a newer kernel; we likely need to rebuild the agent using a newer version of the netlink crate.
  • Netbox integration

    4
    0 Votes
    4 Posts
    482 Views
    olivierlambert
    Right now, it's XO -> Netbox only. As soon as you want something bidirectional, the complexity grows exponentially. I'm not closed to the idea, but we need to think carefully about the how, and about what's really expected functionally speaking from our users.
  • XCP-ng DR on Azure

    4
    -1 Votes
    4 Posts
    447 Views
    olivierlambert
    It's not a trivial scenario indeed. Dom0 is a PV guest (in other words: a VM) on top of a hypervisor (Xen), which is itself on top of another hypervisor (Hyper-V). As you can see, more layers means more problems.
  • Snapshot Question

    2
    0 Votes
    2 Posts
    344 Views
    R
    Sorry, I'm asking if I should be good deleting the snapshots
  • Unbootable VHD backups

    19
    0 Votes
    19 Posts
    2k Views
    D
    @AtaxyaNetwork said in Unbootable VHD backups: @Schmidty86 Try to detach the disk and reattach, it should be xvda in order to be bootable That's what I was thinking as well, but obviously something is off with this VM. @Schmidty86 is the old host still online? If so you might be able to perform a Live Migration or a replication job to copy it from the old host to the new.
  • CBT Error when powering on VM

    28
    0 Votes
    28 Posts
    3k Views
    R
    AlmaLinux 8.10
  • RHEL UEFI boot bug

    5
    0 Votes
    5 Posts
    871 Views
    kiu
    Hello, thank you for your reply @bogikornel @TrapoSAMA. Here are my processor specifications: Intel Xeon E5-1620 v2 (8) @ 3.691 GHz. Unfortunately, @Andrew, I have to use RHEL 10 on my server ^^ but thank you for providing the link. I will change my processor/server.
  • DR error - (intermediate value) is not iterable

    2
    0 Votes
    2 Posts
    436 Views
    N
    I worked with ChatGPT on this for a bit. We have narrowed it down to an issue with the NFS Storage that I ship the backups to. "When you recreated storage and moved data back, OMV is technically exporting a different underlying filesystem object than before. NFS clients that had an old handle cached (your XCP-ng host) try to access it and get ESTALE. That explains the initial backup errors and why deleting/re-adding the SR is failing now." I had to remove the NFS storage from XCP-ng, then delete the NFS share from OMV, then add the NFS share back to OMV, and then add it back to XCP-ng. I probably could have resolved this with a reboot, but I didn't wanna. This issue is resolved now.
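A quick way to spot this kind of stale-handle problem from the dom0 shell is to stat the mount point; a minimal sketch under assumptions (the helper name is hypothetical; on XCP-ng, SR mounts typically live under /run/sr-mount/<SR-UUID>):

```shell
# check_mount: print "stale-or-missing" if a path can't be stat'ed
# (a stale NFS handle, ESTALE, makes stat fail), otherwise "ok".
# The function name is illustrative, not an XCP-ng tool.
check_mount() {
  if stat "$1" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "stale-or-missing"
  fi
}

# Example (the SR UUID is a placeholder):
# check_mount /run/sr-mount/<sr-uuid>
```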
  • 0 Votes
    31 Posts
    5k Views
    D
    As @Andrew said, your host itself is unhealthy. You might be able to remove the CPU heatsink, clean it up and apply some new thermal paste to address the CPU overheating (if the paste is shot). As for the memory issue, run a memtest on the host and see what is reported.
  • Connection failed "EHOSTUNREACH"

    4
    0 Votes
    4 Posts
    528 Views
    A
    @santos_luan Check if there is any firewall issue on the XO-ce side.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    11
    1 Votes
    11 Posts
    2k Views
    olivierlambert
    Just quickly chiming in to confirm what @bleader said. We'll be happy to assist you further, especially to put you in contact with our head of security at Vates to discuss our future certification plans (he's a former ANSSI employee BTW).
  • 0 Votes
    7 Posts
    2k Views
    olivierlambert
    CPU speed is great for speeding up all Xen operations (using grants, for example). But tapdisk has a lot of room for improvement outside of that, thanks to multiqueue and so on. However, it's not clear whether it's better to improve tapdisk or to build something different. This is an active topic of research.
  • Windows Server not listening to radius port after vmware migration

    6
    0 Votes
    6 Posts
    828 Views
    nikade
    @acebmxer said in Windows Server not listening to radius port after vmware migration: After migrating our Windows server that hosts our Duo Proxy Manager, we're having an issue. [info] Testing section 'radius_client' with configuration: [info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'} [error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812 [debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host. After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update. Any suggestions? I'm thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well... On the VM that runs the Duo Auth Proxy service, check whether the service is actually listening on the external IP or just on 127.0.0.1. If it's only listening on 127.0.0.1, you can try to repair the Duo Auth Proxy service (take a snapshot before doing so). Also, if you're using encrypted passwords in your Duo Auth Proxy configuration, you'll probably need to re-encrypt them; just a heads-up, since I had to do so after migrating one of ours. Edit: do you have the "interface" option specified in your Duo Auth Proxy configuration?
  • Best practices for small/edge/IoT deployments? (remote management, power)

    5
    0 Votes
    5 Posts
    735 Views
    H
    We have some sites with a single-host XCP-ng pool backed by a small UPS. We install NUT directly in dom0. I'm aware of the policy on adding anything to dom0, but we believe this use case fits within the recommendations (simple enough, no vast dependencies, marginal resource usage, no interference...). With proper testing it works pretty well. NUT inside a dedicated RPi definitely makes sense for a site with multiple hosts backed by the same UPS.
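For reference, a minimal NUT declaration of a USB UPS in dom0 could look like the following sketch (the UPS name, driver and description are assumptions to adapt to your hardware; the file lives at /etc/ups/ups.conf on most installs):

```ini
# /etc/ups/ups.conf - hypothetical single-UPS example
[rack-ups]
    driver = usbhid-ups    # common driver for USB HID UPSes
    port = auto
    desc = "Single-host site UPS"
```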
  • Unable to Access MGMT interface/ No NICS detected

    24
    0 Votes
    24 Posts
    4k Views
    C
    @AtaxyaNetwork I'll check it out! I'm currently on Chrome, so I'll see if they have something close to it. Thank you!
  • Migration compression is not available on this pool

    9
    0 Votes
    9 Posts
    982 Views
    henri9813
    Hello, We tried the compression feature. You can see a benefit only if you have shared storage (and even then, migration between 2 nodes is already very fast; we don't see a major difference, though maybe a VM with a lot of RAM (>32 GB) would). If you don't have shared storage (like XOSTOR, NFS, or iSCSI), then you will not see any difference, because there is a limitation of 30-40 MB/s (see here: https://xcp-ng.org/forum/topic/9389/backup-migration-performance). Best regards,
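As a rough sanity check on that ceiling, the disk-copy time can be estimated with a bit of arithmetic; a sketch assuming the ~35 MB/s midpoint of the figure quoted above (the helper name is made up, and real throughput varies):

```shell
# migration_minutes: estimate minutes to copy <GiB> of disk data
# at <MB/s> (second argument, defaulting to 35 MB/s).
migration_minutes() {
  awk -v gb="$1" -v mbps="${2:-35}" 'BEGIN { printf "%.1f\n", gb * 1024 / mbps / 60 }'
}

migration_minutes 100   # a 100 GiB disk takes roughly 49 minutes
```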
  • Multi gpu peer to peer not available in vm

    4
    0 Votes
    4 Posts
    506 Views
    olivierlambert
    Hmm, I'm not sure it's even possible, due to the nature of the isolation provided by Xen. Let me ask @Team-Hypervisor-Kernel
  • Internal error: Not_found after Vinchin backup

    56
    0 Votes
    56 Posts
    9k Views
    olivierlambert
    So you have to dig in the SMlog to check what's going on
  • Migrating from XCP-ng Windows guest tools to Citrix

    20
    0 Votes
    20 Posts
    3k Views
    B
    I did it that way so as to get the old Citrix driver first, and then let it update and watch it reboot. That was my logic anyway. @dinhngtu said in Migrating from XCP-ng Windows guest tools to Citrix: @bberndt Okay, I managed to reproduce your situation. I think it's because the "driver via Windows Update" option was enabled after installing the XS drivers, which caused the drivers to lock onto the non-C000 device and prevent updates from coming in. Normally, XenClean should be able to fix the situation. But if you want to fix things manually, or if things still don't work (C000 is still not active), here's a procedure that should fix the problem:
    1. Take a snapshot/backup/etc.
    2. Keep a note of static IP addresses (if you have any; there's a chance those will be lost). You can also use our script here: https://github.com/xcp-ng/win-pv-drivers/blob/xcp-ng-9.1/XenDriverUtils/Copy-XenVifSettings.ps1
    3. Reboot in safe mode and disable the non-C000 device.
    4. Reboot back to normal mode; it'll ask you to reboot a few more times. The C000 device should now be active and you should be able to get driver updates again.
    5. (Optional) You can now enable and manually update the non-C000 device (Browse my computer - Let me pick).
  • Pool Master

    8
    0 Votes
    8 Posts
    645 Views
    R
    @olivierlambert Dang, ok. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do really.
  • v8.2.1 rolling pool update getting stuck

    4
    0 Votes
    4 Posts
    478 Views
    olivierlambert
    Do you have any SR backed by a VM (like an ISO SR on an NFS share inside a VM)? This freezes NFS and makes the host take half an hour to restart. Logs should tell you why the RPU failed, if it failed.