Subcategories

  • All Xen related stuff

    603 Topics
    6k Posts
    TeddyAstie
    @hitechhillbilly No, it doesn't; it just ensures the N-th vCPU of Dom0 only runs on the N-th pCPU of the machine. I'm not sure about its practical impact. In the past it was used to get meaningful CPU temperatures from coretemp (with the physical core matching the virtual one), but that no longer works since Xen filters MSR accesses (including from Dom0).
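    The one-to-one pinning described above is typically achieved with Xen's dom0_vcpus_pin boot option. A minimal sketch of what that might look like (the file path and the extra dom0_max_vcpus option are illustrative assumptions, not taken from the post):

    ```
    # /etc/default/grub -- example Xen command line (illustrative)
    GRUB_CMDLINE_XEN_DEFAULT="dom0_vcpus_pin dom0_max_vcpus=4"
    ```

    After regenerating the grub config and rebooting, `xl vcpu-list Domain-0` should show each Dom0 vCPU with a fixed CPU affinity.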
  • The integrated web UI to manage XCP-ng

    26 Topics
    348 Posts
    olivierlambert
    It's not meant to be used like that. If you are behind a NAT, the right approach is to have your XOA behind the NAT and on the same network as the hosts. That's because hosts will always use and return their internal IPs when connecting to some resources (stats, consoles, etc.). XOA deals with that easily, being the "main control point" for all hosts behind your NAT (or an XO proxy if you prefer).
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    110 Topics
    1k Posts
    L
    @olivierlambert Thanks for the tip — it’s a very interesting mechanism. I’m going to read the docs now
  • Hardware related section

    144 Topics
    1k Posts
    olivierlambert
    Yes!! Congrats to everyone (including you, @dcskinner, for the feedback!)
  • The place to discuss new additions into XCP-ng

    245 Topics
    3k Posts
    R
    @Greg_E said in Building XCP-ng from source code: "@olivierlambert Ok, thanks. Yes, I'm eagerly awaiting XCP-ng 9 for testing." Hi, check this thread: https://xcp-ng.org/forum/topic/11698/xcp-ng-9.0-demonstrator-early-preview More coming...
  • Best practices for small/edge/IoT deployments? (remote management, power)

    0 Votes
    5 Posts
    638 Views
    H
    We have some sites with a single-host XCP-ng pool backed by a small UPS. We install NUT directly in dom0. I'm aware of the policy against adding anything to dom0, but we believe this use case fits the recommendations (simple enough, no vast dependencies, marginal resource usage, no interference ...). With proper testing it works pretty well. NUT inside a dedicated RPi definitely makes sense for a site with multiple hosts backed by the same UPS.
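    As a sketch of the kind of minimal NUT setup described above (the UPS name, driver, monitor user, and shutdown policy are illustrative assumptions, not the poster's actual configuration):

    ```
    # /etc/ups/ups.conf -- declare the locally attached UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/ups/upsmon.conf -- shut the host down on low battery
    MONITOR myups@localhost 1 monuser <password> primary
    SHUTDOWNCMD "/sbin/shutdown -h +0"
    ```

    (On older NUT versions the last MONITOR field is `master` rather than `primary`.)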
  • Unable to Access MGMT interface/ No NICS detected

    0 Votes
    24 Posts
    4k Views
    C
    @AtaxyaNetwork I'll check it out! I'm currently on Chrome, so I'll see if they have something close to it. Thank you!
  • Migration compression is not available on this pool

    0 Votes
    9 Posts
    843 Views
    henri9813
    Hello, we tried the compression feature. You can see a benefit only if you have shared storage (and even then, migration between 2 nodes is already very fast, so we don't see a major difference; maybe a VM with a lot of RAM (>32 GB) would). If you don't have shared storage (like XOSTOR, NFS, iSCSI), you will not see any difference, because there is a limitation of 30-40 MB/s (see here: https://xcp-ng.org/forum/topic/9389/backup-migration-performance). Best regards,
  • Multi gpu peer to peer not available in vm

    0 Votes
    4 Posts
    425 Views
    olivierlambert
    Hmm, I'm not sure it's even possible, due to the nature of the isolation provided by Xen. Let me ask @Team-Hypervisor-Kernel
  • Internal error: Not_found after Vinchin backup

    0 Votes
    56 Posts
    7k Views
    olivierlambert
    So you have to dig into the SMlog to check what's going on.
  • Migrating from XCP-ng Windows guest tools to Citrix

    0 Votes
    20 Posts
    3k Views
    B
    I did it that way so as to get the old Citrix driver first, then let it update and watch it reboot. That was my logic, anyway. @dinhngtu said in Migrating from XCP-ng Windows guest tools to Citrix: @bberndt Okay, I managed to reproduce your situation. I think it's because the "driver via Windows Update" option was enabled after installing the XS drivers, which caused the drivers to lock onto the non-C000 device and prevent updates from coming in. Normally, XenClean should be able to fix the situation. But if you want to fix things manually, or if things still don't work (C000 is still not active), here's a procedure that should fix the problem:
    Take a snapshot/backup/etc.
    Keep a note of static IP addresses (if you have any; there's a chance those will be lost). You can also use our script here: https://github.com/xcp-ng/win-pv-drivers/blob/xcp-ng-9.1/XenDriverUtils/Copy-XenVifSettings.ps1
    Reboot into safe mode and disable the non-C000 device.
    Reboot back to normal mode; it'll ask you to reboot a few more times. The C000 device should now be active, and you should be able to get driver updates again.
    (Optional) You can now enable and manually update the non-C000 device (Browse my computer - Let me pick).
  • Pool Master

    0 Votes
    8 Posts
    559 Views
    R
    @olivierlambert Dang, ok. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do, really.
  • v8.2.1 rolling pool update getting stuck

    0 Votes
    4 Posts
    410 Views
    olivierlambert
    Do you have any SR using a VM (like an ISO SR on an NFS share inside a VM)? That freezes NFS and makes the host take half an hour to restart. The logs should tell you why the RPU failed, if it failed.
  • Other 2 hosts reboot when 1 host in HA enabled pool is powered off

    0 Votes
    10 Posts
    1k Views
    olivierlambert
    It's impossible to answer right off the bat without knowing in more detail what's going on. HA is a complex beast, and combined with HCI it requires a lot of knowledge to find what's causing your issue, between both xha and XOSTOR. In other words, it is very demanding to analyze all the logs and try to make sense of them. However, I can give you some clues: The HA log is at /var/log/xha.log. When you shut down a host, you should be able to watch (on each host) what HA is deciding to do. My gut feeling: there's maybe a XOSTOR issue making the heartbeat SR unavailable, so all hosts autofence. Then you need to read the XOSTOR logs to understand why the cluster wasn't doing what was expected. My best advice: remove HA first, and only then investigate XOSTOR. Kill one node (not the master) and check if your VMs are still able to start/snapshot/write.
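    The steps above could be sketched with standard XCP-ng tooling (illustrative commands, run on the hosts themselves; `xe pool-ha-disable` is the usual way to remove HA):

    ```
    # Watch HA decisions on each host while you power one off
    tail -f /var/log/xha.log

    # Remove HA before investigating XOSTOR
    xe pool-ha-disable
    ```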
  • PXE Boot from new VM not working

    0 Votes
    2 Posts
    437 Views
    bleader
    @JBlessing As it does seem to start, the networking side looks to be working, at least at first. Just for debugging purposes, you could try switching that VM to BIOS instead of UEFI if possible; maybe it is related to what the PXE is starting in the VM. You could also try switching the VM between the Realtek and e1000 NICs: at this stage, PV drivers are not there, so it is using an emulated NIC, and maybe the image your PXE starts doesn't like the one you're using and gets stuck somehow. As you're already using it with VMware, I assume you know how to size your VM, but if you went for a tight RAM value, you could try giving it more RAM to see if that is related, as everything has to fit in RAM at some point; we may be using more at startup than VMware… Hope one of these helps.
  • Can't get slave out of maintenance mode after yum updates

    0 Votes
    3 Posts
    372 Views
    olivierlambert
    About xsconsole: sometimes it doesn't refresh. You can try to get access to the console and type "xsconsole"; it will start, and you should see that it works. You must have the master up to date if you want your slave to connect again. I never tried to elect a new master in the middle of an upgrade, and I would discourage it. Better to shut down some VMs on the master, upgrade it, and you are automatically back on track.
  • 0 Votes
    35 Posts
    5k Views
    olivierlambert
    Then try to find anything happening around that time on other hosts, equipment, storage and so on.
  • XCP-NG Kubernetes micro8k

    0 Votes
    3 Posts
    692 Views
    nathanael-h
    Hello @msupport, we published a step-by-step guide; read more in the announcement here: https://xcp-ng.org/forum/post/94268
  • NFS multipathing configuration

    xcp-ng nfs xenorchestra
    0 Votes
    9 Posts
    2k Views
    B
    Great, thank you!
  • 0 Votes
    3 Posts
    261 Views
    F
    @Danp Yes, I ran "yum update" to be sure, but there was "nothing to upgrade" on the pool master. I tried with the storage (iSCSI) NIC configured and without it, but the pool join freezes. It seems that some "SESSION" persists (maybe referring to the previously configured slave host?), or there is some incoherence in the pool database... From /var/log/xensource on the slave host when trying to join the pool: "session_check D:520c5b4e5b36 failed with exception Server_error(SESSION_INVALID, "
  • Automating VM configurations after mass VMware imports

    0 Votes
    9 Posts
    931 Views
    olivierlambert
    Thanks, this is helpful. We'll discuss that with @Team-DevOps and try to get things implemented!
  • Commvault backups failing for a VM with large disks

    0 Votes
    2 Posts
    483 Views
    olivierlambert
    To me it sounds like a Commvault issue. If you want some investigation on the Vates side, I would recommend opening a support ticket.
  • How to Re-attach an SR

    Solved
    0 Votes
    20 Posts
    3k Views
    tjkreidl
    @olivierlambert Agreed. The Citrix forum used to be very active, but especially since Citrix was taken over, https://community.citrix.com has had far less activity, sadly. It's still gratifying that a lot of the functionality is still common to both platforms, although as XCP-ng evolves there will be less and less commonality.
  • Rolling Pool Update - not possible to resume a failed RPU

    0 Votes
    13 Posts
    1k Views
    Tristis Oris
    @olivierlambert During an RPU, yes. I meant a manual update in case of failure.
  • Alpine Template Problem

    0 Votes
    7 Posts
    623 Views
    ?
    For anything older than the branches still shown on https://pkgs.alpinelinux.org (from v3.0 to v3.12), the packages should be downloaded from Alpine's CDN: https://dl-cdn.alpinelinux.org/alpine But as mentioned above, anything older than 3 releases behind the latest current one (v3.21) is end of life and should not be used for more than testing.
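    For illustration, pointing an old Alpine VM at that CDN would look like this in /etc/apk/repositories (branch v3.12 chosen here purely as an example of an end-of-life release, for testing only):

    ```
    # /etc/apk/repositories -- example for an end-of-life Alpine branch
    https://dl-cdn.alpinelinux.org/alpine/v3.12/main
    https://dl-cdn.alpinelinux.org/alpine/v3.12/community
    ```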