Subcategories

  • VMs, hosts, pools, networks and all other usual management tasks.

    451 Topics
    3k Posts
    nikade
    @DustyArmstrong that's super strange, I actually have the same setup at home: 2 HP Z240 machines running XCP-ng in a small pool. xcp1 is always up and running, xcp2 is powered down when I don't need it. Everything important is running on xcp1, so maybe that's why I don't run into these issues.
  • ACLs, Self-service, Cloud-init, Load balancing...

    100 Topics
    832 Posts
    Tristis Oris
    Ok, my syntax is outdated. This one works:

      resize_rootfs: true
      growpart:
        mode: auto
        devices: ['/dev/xvda3']
        ignore_growroot_disabled: false
      runcmd:
        - pvresize /dev/xvda3
        - lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv || true

    final_message doesn't support any macro like %(uptime) or %(UPTIME).
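    On the final_message point, a hedged note: going by the cloud-init docs, the substitutions there are $-style rather than %()-style, so something like the following should render (untested here):

      final_message: "cloud-init finished at $timestamp, up $uptime seconds"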
  • All XO backup features: full and incremental, replication, mirrors...

    464 Topics
    5k Posts
    DustyArmstrong
    @florent No problem, just thought it would be fun. Thanks for your work anyway!
  • Everything related to Xen Orchestra's REST API

    77 Topics
    583 Posts
    R
    @Pilow tags can work and the path to them is much more succinct. Thanks!
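    For anyone landing here, a hedged example of what reading tags over the REST API can look like (host and token are placeholders; the /rest/v0 shape and the fields parameter follow the XO REST API docs):

      curl -b authenticationToken=<token> \
        'https://xo.example.org/rest/v0/vms?fields=name_label,tags'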
  • Terraform, Packer or any other tool for IaC

    49 Topics
    455 Posts
    T
    @nathanael-h The image pull was successful to my local computer using the same classic personal access token I generated and set as the regcred secret.
    [image: 1771466660300-2602_ghcr_xocsi.png]
    Looking at the documentation again, and since I am not using MicroK8s, I tried something different, but the result was the same (the pods never transitioned to a running state). This time, prior to executing the install script, I updated the kubelet-registration-path in the csi-xenorchestra-node-single.yaml and csi-xenorchestra-node.yaml files. (I believe this would be an opportunity to update the README for clarity on what to update based on the Kubernetes platform, i.e. MicroK8s vs non-MicroK8s -- I can submit a PR for this, if you like.) E.g.:

      - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock
      #- --kubelet-registration-path=/var/snap/microk8s/common/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock

    On the control plane:

      [root@xxxx kubelet]# pwd
      /var/lib/kubelet
      [root@xxxx kubelet]# tree plugins
      plugins
      └── csi.xenorchestra.vates.tech

      kgp -nkube-system | grep csi
      csi-xenorchestra-controller-748db9b45b-w4zk4   2/3   ImagePullBackOff   19 (12s ago)     41m
      csi-xenorchestra-node-6zzv8                    1/3   CrashLoopBackOff   11 (3m51s ago)   41m
      csi-xenorchestra-node-8r4ml                    1/3   CrashLoopBackOff   11 (3m59s ago)   41m
      csi-xenorchestra-node-btrsb                    1/3   CrashLoopBackOff   11 (4m11s ago)   41m
      csi-xenorchestra-node-w69pc                    1/3   CrashLoopBackOff   11 (4m3s ago)    41m

    Excerpt from /var/log/messages:

      Feb 18 22:21:44 xxx kubelet[50541]: I0218 22:21:44.474317 50541 scope.go:117] "RemoveContainer" containerID="26d29856a551fe7dfd873a3f8124584d400d1a88d77cdb4c1797a9726fa85408"
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.475900036-05:00" level=info msg="Checking image status: ghcr.io/vatesfr/xenorchestra-csi-driver:edge" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476149865-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476188202-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx kubelet[50541]: E0218 22:21:44.476862 50541 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-xenorchestra-node-btrsb_kube-system(433e69c9-2da9-4e23-b92b-90918bd36248)\", failed to \"StartContainer\" for \"xenorchestra-csi-driver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/vatesfr/xenorchestra-csi-driver:edge\\\"\"]" pod="kube-system/csi-xenorchestra-node-btrsb" podUID="433e69c9-2da9-4e23-b92b-90918bd36248"

    Any other suggestions in the meantime, or if I can collect more information, let me know.
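    One thing worth double-checking given the ImagePullBackOff, as a sketch rather than the driver's documented setup (the secret name regcred and the kube-system namespace come from the post above; the imagePullSecrets stanza is an assumption about the manifests):

      # Recreate the pull secret in the namespace the pods run in:
      kubectl -n kube-system create secret docker-registry regcred \
        --docker-server=ghcr.io \
        --docker-username=<github-user> \
        --docker-password=<classic-personal-access-token>
      # ...and confirm the pod spec actually references it, e.g.:
      #   spec:
      #     imagePullSecrets:
      #       - name: regcred
      kubectl -n kube-system get pod csi-xenorchestra-node-6zzv8 \
        -o jsonpath='{.spec.imagePullSecrets}'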
  • Reboot XOA

    9
    0 Votes
    9 Posts
    6k Views
    olivierlambert
    It's hard to tell. Let's say next time it happens, please report it.
  • Ubuntu 18.04 Cloud-Init Networking Problem

    13
    0 Votes
    13 Posts
    7k Views
    fohdeesha
    @slynch OK, that's very good news then! Debian was still having issues where any configuration applied via cloud-init would just be added to the existing default DHCP config, unless you took care to build the VM without the default interfaces config beforehand.
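    A sketch of the cleanup implied above, for building a Debian template without the default interfaces config (the eth0 stanza shown is the typical installer default; adjust to what your /etc/network/interfaces actually contains):

      # Drop the installer's DHCP stanza so the config cloud-init
      # renders is the only network configuration left:
      sed -i '/allow-hotplug eth0/d; /iface eth0 inet dhcp/d' /etc/network/interfaces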
  • PCI Passthru Error not working on 8.2 but was 8.1

    5
    0 Votes
    5 Posts
    2k Views
    L
    @olivierlambert said in PCI Passthru Error not working on 8.2 but was 8.1:
      Hi! IOMMU should be enabled in the BIOS. Double check that. Also please share your grub config to see if it's correctly written.
    After playing with it more, the issue appears to be passing multiple devices of the same type, in this case the Radeon WX 7100. If I take it down to one card, it works as expected. If I add more than one, any quantity, I run into the issue. The issue goes away once I reboot the host, and then I can assign a card, but the second the VM reboots the error comes back.
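    For reference, the dom0 side of hiding devices for passthrough on XCP-ng usually looks like the following (the PCI addresses are placeholders; look yours up with lspci, and reboot the host afterwards):

      /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:04:00.0)(0000:05:00.0)"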
  • Error: Expected "actual" to be strictly unequal to: false

    Solved
    5
    0 Votes
    5 Posts
    586 Views
    olivierlambert
    This can happen from time to time on master (even though we keep all "broken" code in dedicated branches); sometimes you can have a surprise. That's why it's important, before anything else, to pull the latest commit and rebuild, to see if the issue persists. If it does, then reporting the problem will be helpful.
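    The rebuild cycle referred to above, per the XO from-the-sources docs (run from the xen-orchestra checkout):

      git pull --ff-only
      yarn
      yarn build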
  • Auto coalescing of disks

    coalesce storage
    14
    0 Votes
    14 Posts
    8k Views
    C
    @olivierlambert thanks - I'll continue to look into it.
  • XOA Deployment issue

    Solved
    9
    0 Votes
    9 Posts
    3k Views
    olivierlambert
    It's always DNS [image: SRVKUe2MGSCBVB56FwBzwFZE6uxGaNZ_Vknx5vioVAw.png]
  • XO managing XCP pool behind NAT

    2
    0 Votes
    2 Posts
    852 Views
    olivierlambert
    The best way would be to use XO Proxies as "reverse HTTP proxies" (or any reverse proxy in each DC) and then tell XO to connect to those proxies. This way, each DC will have only one entry point exposed to the outside, and you could manage that with your central XO. This is a subject we plan to work on in the coming months. If you have a support subscription, please open a ticket so we can do an initial test inside your infrastructure.
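    A minimal sketch of the interim "any reverse proxy in each DC" option, using nginx's stream module as one example (the hostname is a placeholder; XO talks to the pool master over port 443, which this passes through as raw TCP):

      stream {
          server {
              listen 443;
              proxy_pass pool-master.dc1.example:443;
          }
      }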
  • How to change what disk to boot from in XOA?

    1
    0 Votes
    1 Post
    490 Views
    No one has replied
  • Restore exported snapshot to VM

    Unsolved
    4
    0 Votes
    4 Posts
    1k Views
    olivierlambert
    In XOA, go to Import, drag and drop the XVA, and that's it.
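    The CLI equivalent, if restoring from a host shell instead (xe is XCP-ng's built-in CLI; the path and SR UUID are placeholders):

      xe vm-import filename=/path/to/export.xva sr-uuid=<target-sr-uuid>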
  • XOA won't deploy. Getting a DNS and INTERNAL related errors

    2
    0 Votes
    2 Posts
    779 Views
    gskger
    @magtech If you want to do an XOA quick deploy, you need a working internet connection, and outbound port tcp/8888 must be open to the internet. The topic "New install of XCP-ng, can't complete Quick Deploy" might help.
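    A quick way to confirm the outbound tcp/8888 requirement from the host console (the target here is a placeholder; substitute whatever endpoint the deploy wizard reports it cannot reach):

      nc -zv updates.example.org 8888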
  • Problems with latest XOA update - failed to start xo-server [SOLVED]

    Solved
    4
    0 Votes
    4 Posts
    2k Views
    olivierlambert
    Maybe slow disk then? Hard to know from here.
  • Backup & Snapshot Fail - SR_BACKEND_FAILURE_82

    5
    0 Votes
    5 Posts
    2k Views
    N
    Hi, 2 to 3 hours after migrating the VM to the master, the coalesce disappeared. Thanks for the help.
  • Is it possible to reset to full backup at the desired time?

    4
    0 Votes
    4 Posts
    621 Views
    Gheppy
    Yes, that's what I want to do. But at the moment it creates a snapshot for each cron schedule I made, and I want to have only one snapshot and a reset to full backup on Saturday. In essence, I want only one snapshot per VM no matter how many cron schedules I have attached to the replication. In this case, I want to reset the Continuous Replication chain when the server is least busy, not after a fixed number of replications.
  • Migration failing but not failing

    1
    0 Votes
    1 Post
    220 Views
    No one has replied
  • Plugin transport-email (v0.6.0) broken?

    Solved
    20
    1 Vote
    20 Posts
    4k Views
    gskger
    @julien-f yes, works well (even with the 3rd party script). Nice job and fast as usual. Thanks @Alexander-0 for helping to pin it down.
  • Transfer log missing

    2
    0 Votes
    2 Posts
    211 Views
    olivierlambert
    This is a known issue that will be fixed next week. @julien-f is working on it
  • From the sources build now requires >2GB RAM

    4
    0 Votes
    4 Posts
    1k Views
    M
    I also encountered this problem. If you are using Debian, you can use pre-packaged XO from sources: https://github.com/mathiswolff/xen-orchestra
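    One common workaround if you'd rather keep your existing VM (an assumption, not from the linked repo): add temporary swap so the build survives in 2 GB of RAM.

      fallocate -l 2G /swapfile
      chmod 600 /swapfile
      mkswap /swapfile
      swapon /swapfile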
  • Error on Delta Backup - cannot read property "length" of undefined

    16
    0 Votes
    16 Posts
    2k Views
    julien-f
    @mbt Great! Thanks for your report and testing
  • Delta Backup job timeout does not get respected

    5
    0 Votes
    5 Posts
    819 Views
    mkrumbholz
    @olivierlambert I tested this a bit more now, and it works with the traffic limiter. Your last updates optimised the backup system so much for my NFS storage that the timeout somehow froze (XO is running on local storage). And you are right, it is doing it per VM, but I think in an older version it was the other way around (which would be more important to me, though maybe not for others). It would still be great if I could somehow get an even distribution of the full backups.
  • Disaster Recovery Storage

    4
    0 Votes
    4 Posts
    1k Views
    olivierlambert
    Ideally, you need to keep the exact same structure so you don't lose any data. But it's not a big deal: just copy/rsync whatever you like to the destination, and that's it.
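    The copy/rsync approach sketched above, with paths and host as placeholders (-a preserves the directory structure the restore expects):

      rsync -a /mnt/backup-repo/ root@dr-host:/mnt/dr-repo/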