XCP-ng
    ThasianXi
    • Profile
    • Following 0
    • Followers 0
    • Topics 2
    • Posts 22
    • Groups 0

    Posts

    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      šŸ Just a follow-up that the PV and PVC creation was successful.
      All pods stable since previous post. āœ”

      k get pv
      NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                        STORAGECLASS          VOLUMEATTRIBUTESCLASS   REASON   AGE
      dtw-6m            2Gi        RWO            Retain           Bound       kube-system/xo-csi-test      csi-xenorchestra-sc   <unset>                          10h
      
      
      k get pvc -nkube-system
      NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
      xo-csi-test   Bound    dtw-6m   2Gi        RWO            csi-xenorchestra-sc   <unset>                 9h
      
      kgp -nkube-system | grep csi
      csi-xenorchestra-controller-b5b695fb-ts4b9               3/3     Running   0          43h
      csi-xenorchestra-node-27qzg                              3/3     Running   0          43h
      csi-xenorchestra-node-4bflf                              3/3     Running   0          43h
      csi-xenorchestra-node-8tb5m                              3/3     Running   0          43h
      csi-xenorchestra-node-t9m78                              3/3     Running   0          43h
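
For reference, a claim like the xo-csi-test one above can be declared with a manifest along these lines (the name, namespace, size, access mode, and StorageClass are taken from the listing; the exact spec used is my reconstruction, not the original file):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xo-csi-test
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce        # matches the RWO shown in the listing
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-xenorchestra-sc
```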
      
      posted in Infrastructure as Code
      T
      ThasianXi
    • RE: very slow disk ssd support all vms xcp-ng8.2.1

      @comdirect
      Use this command: (replace sda in the command below with the relevant device)
      cat /sys/block/sda/queue/scheduler
      The active scheduler will be enclosed in brackets. e.g. noop deadline [cfq]

      For multiple drives use:
      grep "" /sys/block/*/queue/scheduler
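
If you want the active scheduler in a script, the bracketed entry can be extracted with sed. A minimal sketch (sample input shown inline, since the /sys path differs per machine):

```shell
# The active scheduler is the bracketed word; pull it out with sed.
# On a real host, pipe `cat /sys/block/sda/queue/scheduler` in instead.
echo "noop deadline [cfq]" | sed -n 's/.*\[\(.*\)\].*/\1/p'   # prints: cfq
```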

      posted in Hardware
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      @jmara Thank you for the input. All pods are running with caveats. ⚠

      Prior to executing the installation, I updated the image name to ghcr.io/vatesfr/xenorchestra-csi:edge in the manifests.
      After executing the install, I had to manually edit the image name in the DaemonSet, from ghcr.io/vatesfr/xenorchestra-csi-driver:edge to ghcr.io/vatesfr/xenorchestra-csi:edge.
      After editing the DaemonSet, the node pods restarted and transitioned to running.

      However, the controller pod was still attempting to pull this image: ghcr.io/vatesfr/xenorchestra-csi-driver:edge and never transitioned to running.
      To correct that, I edited the image name in the Deployment, from ghcr.io/vatesfr/xenorchestra-csi-driver:edge to ghcr.io/vatesfr/xenorchestra-csi:edge.

      Thus after editing the DaemonSet and Deployment, the pods transitioned to running. ⛳

      kgp -nkube-system | grep csi
      csi-xenorchestra-controller-b5b695fb-ts4b9               3/3     Running   0          4m8s
      csi-xenorchestra-node-27qzg                              3/3     Running   0          6m21s
      csi-xenorchestra-node-4bflf                              3/3     Running   0          6m20s
      csi-xenorchestra-node-8tb5m                              3/3     Running   0          6m20s
      csi-xenorchestra-node-t9m78                              3/3     Running   0          6m20s
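
The up-front image rename can also be scripted so the DaemonSet and Deployment never need hand-editing. A sketch on a scratch file (the path and file name here are illustrative, not from the repo):

```shell
# Demo on a scratch file; in practice, run the same sed across the
# repo's manifests before executing the install.
printf 'image: ghcr.io/vatesfr/xenorchestra-csi-driver:edge\n' > /tmp/csi-demo.yaml
sed -i 's#xenorchestra-csi-driver:edge#xenorchestra-csi:edge#g' /tmp/csi-demo.yaml
cat /tmp/csi-demo.yaml   # image: ghcr.io/vatesfr/xenorchestra-csi:edge
```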
      
      posted in Infrastructure as Code
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      @nathanael-h
      šŸ The image pull was successful to my local computer using the same classic personal access token I generated and set as the regcred secret.
      2602_ghcr_xocsi.png


      Looking at the documentation again, and since I am not using MicroK8s, I tried something different, but the result was the same: the pods never transitioned to a running state.

      This time, prior to executing the install script, I updated the kubelet-registration-path and the volume path in the csi-xenorchestra-node-single.yaml and csi-xenorchestra-node.yaml files.
      (I believe this is an opportunity to update the README to clarify what to change based on the Kubernetes platform, i.e. MicroK8s vs. non-MicroK8s -- I can submit a PR for this if you like.)
      excerpts:

       - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock
       #- --kubelet-registration-path=/var/snap/microk8s/common/var/lib/kubelet/plugins/csi.xenorchestra.vates.tech/csi.sock
      -------------------------
       volumes:
              - hostPath:
                  path: /var/lib/kubelet/plugins/csi.xenorchestra.vates.tech
                  type: DirectoryOrCreate
                name: socket-dir
      

      On the control-plane:

      [root@xxxx kubelet]# pwd
      /var/lib/kubelet
      [root@xxxx  kubelet]# tree plugins
      plugins
      └── csi.xenorchestra.vates.tech
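
When deciding between the default and MicroK8s paths above, it helps to confirm which root directory the running kubelet actually uses. A small sketch (assumes a procps-style ps; falls back to noting the default when no flag or no kubelet process is found):

```shell
# Print the kubelet's --root-dir flag if one is set; otherwise note
# the compiled-in default path.
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--root-dir' \
  || echo "no --root-dir flag; default is /var/lib/kubelet"
```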
      
       kgp -nkube-system | grep csi
      csi-xenorchestra-controller-748db9b45b-w4zk4             2/3     ImagePullBackOff   19 (12s ago)     41m
      csi-xenorchestra-node-6zzv8                              1/3     CrashLoopBackOff   11 (3m51s ago)   41m
      csi-xenorchestra-node-8r4ml                              1/3     CrashLoopBackOff   11 (3m59s ago)   41m
      csi-xenorchestra-node-btrsb                              1/3     CrashLoopBackOff   11 (4m11s ago)   41m
      csi-xenorchestra-node-w69pc                              1/3     CrashLoopBackOff   11 (4m3s ago)    41m
      

      Excerpt from /var/log/messages:

      Feb 18 22:21:44 xxx kubelet[50541]: I0218 22:21:44.474317   50541 scope.go:117] "RemoveContainer" containerID="26d29856a551fe7dfd873a3f8124584d400d1a88d77cdb4c1797a9726fa85408"
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.475900036-05:00" level=info msg="Checking image status: ghcr.io/vatesfr/xenorchestra-csi-driver:edge" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476149865-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx crio[734]: time="2026-02-18 22:21:44.476188202-05:00" level=info msg="Image ghcr.io/vatesfr/xenorchestra-csi-driver:edge not found" id=308f8922-453b-481f-804d-3d85b489b933 name=/runtime.v1.ImageService/ImageStatus
      Feb 18 22:21:44 xxx kubelet[50541]: E0218 22:21:44.476862   50541 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"node-driver-registrar\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=node-driver-registrar pod=csi-xenorchestra-node-btrsb_kube-system(433e69c9-2da9-4e23-b92b-90918bd36248)\", failed to \"StartContainer\" for \"xenorchestra-csi-driver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/vatesfr/xenorchestra-csi-driver:edge\\\"\"]" pod="kube-system/csi-xenorchestra-node-btrsb" podUID="433e69c9-2da9-4e23-b92b-90918bd36248"
      

      If you have any other suggestions in the meantime, or if I can collect more information, let me know.

      posted in Infrastructure as Code
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      My validation of this was not successful; I used the Quick Start PoC.
      Pods eventually went into CrashLoopBackOff after ErrImagePull and ImagePullBackOff.
      I created a GitHub token with the public_repo and read:packages scopes. I also tried a token with broader permissions (which made no difference), but I figured it required at least the aforementioned scopes.

      I have since uninstalled via the script but captured the following events from the controller and one of the node pods:

      kgp -nkube-system | grep csi
      
      csi-xenorchestra-controller-748db9b45b-z26h6             1/3     CrashLoopBackOff   31 (2m31s ago)   77m
      csi-xenorchestra-node-4jw9z                              1/3     CrashLoopBackOff   18 (42s ago)     77m
      csi-xenorchestra-node-7wcld                              1/3     CrashLoopBackOff   18 (58s ago)     77m
      csi-xenorchestra-node-8jrlq                              1/3     CrashLoopBackOff   18 (34s ago)     77m
      csi-xenorchestra-node-hqwjj                              1/3     CrashLoopBackOff   18 (50s ago)     77m
      

      Pod events:
      csi-xenorchestra-controller-748db9b45b-z26h6

      Normal  BackOff  3m48s (x391 over 78m)  kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
      

      csi-xenorchestra-node-4jw9z

      Normal   BackOff  14m (x314 over 79m)    kubelet  Back-off pulling image "ghcr.io/vatesfr/xenorchestra-csi-driver:edge"
      Warning  BackOff  4m21s (x309 over 78m)  kubelet  Back-off restarting failed container node-driver-registrar in pod csi-xenorchestra-node-4jw9z_kube-system(b533c28b-1f28-488a-a31e-862117461964)
      

      I can deploy again and capture more information if needed.

      posted in Infrastructure as Code
    • RE: Feedback: XO Cloud Controller Manager (CCM)

      @nathanael-h I have not yet tried the XO CSI driver; if I do, I will leave feedback on the relevant thread. (I found the one started in November 2025.)

      posted in Infrastructure as Code
    • RE: Feedback: XO Cloud Controller Manager (CCM)

      @nathanael-h You're welcome. I deployed CCM to an existing cluster and only checked that the labels were applied; I did not use CCM to initialise nodes.
      If you have any other questions about my setup, let me know.

      posted in Infrastructure as Code
    • Feedback: XO Cloud Controller Manager (CCM)

      My deployment of the XO CCM (v1.0.0-rc.1) in my K8s cluster was successful; I validated that the nodes were labeled as expected. I attached screenshots from Rancher and the CLI.
      In my lab, I use OCNE 1.9 (Oracle Cloud Native Environment) on Oracle Linux 9.7.

      In my setup, I edited the kubeadm-config ConfigMap and added the kubelet argument for the cloud provider on all nodes.
      I then restarted the kubelet on all nodes and deployed the CCM via Method 1 per the documentation.

      From my setup, to set the cloud-provider:

      kubectl edit cm kubeadm-config -n kube-system

      apiServer:
        extraArgs:
          cloud-provider: external
      controllerManager:
        extraArgs:
          cloud-provider: external
      

      sudo vim /etc/sysconfig/kubelet

      KUBELET_EXTRA_ARGS="--fail-swap-on=false --cloud-provider=external"
      

      sudo systemctl daemon-reload && sudo systemctl restart kubelet
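
For context on why the CCM must come up promptly: with --cloud-provider=external set, kubelets register new nodes with the taint below, and the CCM removes it once it has initialised the node. This is standard external cloud-provider behaviour in Kubernetes, not something specific to the XO CCM:

```yaml
# Fragment of a Node object: present until an external CCM initialises
# the node, then removed automatically.
spec:
  taints:
    - key: node.cloudprovider.kubernetes.io/uninitialized
      value: "true"
      effect: NoSchedule
```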

      Rancher_XO_CCM_labels.png

      OCNE_XO-CCM_pod.png

      posted in Infrastructure as Code
    • RE: XO: Multiple VM creation - Uncaught TypeError

      This is now solved. I rebuilt the XO VM and can create multiple VMs as expected.

      posted in Management
    • XO: Multiple VM creation - Uncaught TypeError

      I am using the current build of XO [commit: 1c01f] and there appears to be an issue when using the Multiple VMs creation feature.

      I noticed the following TypeError in the browser console when either recalculating the VM number or refreshing the VM name. The goal is to create three VMs; however, only two are created, and the name does not update.
      I use this feature when rebuilding my lab and previously have not had any issue.

      I can work around this but wanted to report it here. The behaviour is the same in two browsers, Firefox and Brave. I have not seen any other reports of this on the forum. Let me know if this can be replicated or if I can capture more information; thanks.

      index.js:780 Uncaught TypeError: Cannot read properties of undefined (reading 'forEach')
          at a._buildTemplate (index.js:780:28)
          at index.js:771:25
          at index.js:65:1
          at index.js:27:1
          at a.u [as _buildVmsNameTemplate] (index.js:76:1)
          at index.js:821:29
          at Object.o (ReactErrorUtils.js:24:1)
          at s (EventPluginUtils.js:83:1)
          at Object.executeDispatchesInOrder (EventPluginUtils.js:106:1)
          at d (EventPluginHub.js:41:1)
      (anonymous)	@	index.js:780
      (anonymous)	@	index.js:771
      (anonymous)	@	index.js:65
      (anonymous)	@	index.js:27
      u	@	index.js:76
      (anonymous)	@	index.js:821
      o	@	ReactErrorUtils.js:24
      s	@	EventPluginUtils.js:83
      executeDispatchesInOrder	@	EventPluginUtils.js:106
      d	@	EventPluginHub.js:41
      f	@	EventPluginHub.js:52
      t.exports	@	forEachAccumulated.js:22
      processEventQueue	@	EventPluginHub.js:250
      (anonymous)	@	ReactEventEmitterMixin.js:15
      handleTopLevel	@	ReactEventEmitterMixin.js:25
      f	@	ReactEventListener.js:70
      perform	@	Transaction.js:141
      batchedUpdates	@	ReactDefaultBatchingStrategy.js:60
      batchedUpdates	@	ReactUpdates.js:95
      dispatchEvent	@	ReactEventListener.js:145
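
The failure mode is the usual one for this error: something iterates a value that can be undefined. An illustrative defensive pattern (this is not the actual XO source; the function name and shape are made up to show the guard):

```javascript
// Illustrative guard: only iterate when the template parts are actually
// present, so a missing value cannot throw on .forEach.
function buildTemplate(parts) {
  const names = [];
  (parts ?? []).forEach((p) => names.push(String(p)));
  return names;
}

buildTemplate(undefined);   // returns [] instead of throwing
buildTemplate(["vm", 1]);   // returns ['vm', '1']
```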
      
      posted in Management
    • RE: VM Boot Order via XO?

      @cichy This can be accomplished using vApps via the CLI.
      Check out this forum post in which a member shared the process: XCP-ng Pool Boot Order

      posted in Migrate to XCP-ng
    • RE: Shipping System Logs to a Remote Syslog Server

      @kagbasi-ngc You are welcome.

      posted in Management
    • RE: Shipping System Logs to a Remote Syslog Server

      @kagbasi-ngc Yes. I have XO configured to send system logs to Graylog via UDP.
      It is mentioned in the last section of this blog post for Xen Orchestra 5.76.
      Also see this forum post: XO logs to external syslog

      However, to set up a remote syslog server for an XCP-ng host, that can be configured on the host's Advanced tab in Xen Orchestra.

      posted in Management
    • RE: Weird behavior on cpu usage

      @rtjdamen Yes. See this forum post: CPU Stats bottoming out to Zero every five minutes

      posted in Management
    • RE: Is Intel Gold Sapphire Rapids CPU supported?

      @gecant Yes. That hotfix was released with the XCP-ng March 2023 Security Update

      posted in Hardware
    • RE: Installation of XCP Guest tool frivers on RHEL9

      @Denson You're welcome; glad to help.

      posted in Compute
    • RE: Installation of XCP Guest tool frivers on RHEL9

      @Denson You can mount from the CD in Xen Orchestra on the Console tab or install from the EPEL repo. (I always install from EPEL)

      EPEL install:
      dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm -y

      Tool install:
      dnf install xe-guest-utilities-latest -y
      Be sure to enable and start the service after installing: systemctl enable --now xe-linux-distribution

      Doc ref: https://xcp-ng.org/docs/guests.html#linux

      posted in Compute
    • RE: Xen Orchestra netbox sync error

      @sb2014 In the 5.85 release of XO, the Netbox plugin was updated and now requires three objects in the UUID field.
      https://xen-orchestra.com/blog/xen-orchestra-5-85/

      Here are the ones you need:

      Virtualization > cluster
      Virtualization > virtual machine 
      Virtualization > interface
      
      posted in Advanced features
    • RE: Xen Orchestra netbox sync error

      I recall a forum thread about this error. It should be fixed if you are using the latest XOA build or a recent commit from source:
      https://xcp-ng.org/forum/topic/4810/netbox-plugin-error-ipaddr-the-address-has-neither-ipv6-nor-ipv4-format

      If you are on the latest, then there may be an issue with an IP in the pool.

      posted in Advanced features
    • RE: Enabling and using NBD backups

      @MrNaz I recommend reading the XO blog; updates are published monthly.

      NBD was introduced in the 5.76 release, and the ability to enable it in the Xen Orchestra UI was introduced in 5.79:
      https://xen-orchestra.com/blog/xen-orchestra-5-76/
      https://xen-orchestra.com/blog/xen-orchestra-5-79/

      Also, enhancements were released in 5.81:
      https://xen-orchestra.com/blog/xen-orchestra-5-81/

      posted in Xen Orchestra