XCP-ng Forum: Popular Topics

    • VM exports sometimes invalid / truncated
      Xen Orchestra · 0 Votes · 2 Posts · 72 Views
      Last post by olivierlambert: Ping @florent
    • XO Lite - network management "coming soon"
      XO Lite · 0 Votes · 2 Posts · 114 Views
      Last post by olivierlambert: Hi, somewhere between Q1 and Q2 next year; hard to be more precise.
    • Script suddenly stops working (TLS error)
      Infrastructure as Code (Solved) · 0 Votes · 5 Posts · 448 Views
      Last post by olivierlambert: Excellent news! Thanks for the feedback.
    • XCP-NG & XOA trial extension
      Xen Orchestra · 0 Votes · 2 Posts · 81 Views
      Last post by olivierlambert: Hi, as the message said, please reach out to us; you can use the contact form at https://vates.tech/contact. We'll be happy to discuss and assist in your evaluation.
    • Feedback on immutability
      Backup · 2 Votes · 56 Posts · 16k Views
      Last post by olivierlambert: Sadly, Backblaze often has issues on S3 (timeouts, unreliability, etc.). We are updating our doc to cover "tiering" support.
    • Mitigations and impact of CVE-2025-49844 (Redis)
      Management · 0 Votes · 2 Posts · 208 Views
      Last post by olivierlambert: Hi, to start, it's good to read https://docs.vates.tech/security/, especially https://docs.vates.tech/security/#contact--disclosure. Then I can answer here directly: we are not affected, since Redis only listens locally and is therefore not exposed outside XO. There's nothing interesting to do with that CVE, because in order to use it you must already be a privileged user.
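The "listening locally" point above can be checked mechanically. A minimal sketch, assuming a redis.conf-style `bind` directive; the helper name and parsing rules here are mine, not XO's:

```python
# Decide whether a redis.conf "bind" line keeps Redis loopback-only,
# i.e. not exposed outside the XO appliance as described above.
LOOPBACK = {"127.0.0.1", "::1", "-::1", "localhost"}

def is_loopback_only(bind_line: str) -> bool:
    """True if every address on the 'bind' line is a loopback address."""
    parts = bind_line.split()
    if not parts or parts[0] != "bind":
        return False  # no bind directive: Redis listens on all interfaces
    return all(addr in LOOPBACK for addr in parts[1:])

print(is_loopback_only("bind 127.0.0.1 -::1"))  # True: local only
print(is_loopback_only("bind 0.0.0.0"))         # False: exposed
```

Run it against the `bind` line of the Redis config on your XO host to confirm the exposure claim for your own deployment.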
    • Feature request: add "open in new tab" to the XO GUI
      Xen Orchestra · 0 Votes · 9 Posts · 347 Views
      Last post by marcoi: @tc-atwork thanks for the post; I wasn't able to get XO 6 set up as I haven't had time to play with it. I look forward to when it's released. For now I just keep using XO Lite to open the console window.
    • XCP-ng Center 25.04 Released
      News · 6 Votes · 25 Posts · 17k Views
      Last post by M: @uberiain At this point, while uninstalling the old XCP-ng Center software and installing the new MSI, I realized XCP-ng Center keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
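The settings-folder cleanup described above can be scripted. A hedged sketch: `reset_xcp_center_settings` is a name I made up, and only the `XCP-ng` folder name under Roaming comes from the post.

```python
import shutil
from pathlib import Path

def reset_xcp_center_settings(appdata: str, dry_run: bool = True) -> Path:
    """Locate the XCP-ng Center settings folder under %APPDATA%;
    delete it only when dry_run=False (consider backing it up first)."""
    settings = Path(appdata) / "XCP-ng"
    if not dry_run and settings.exists():
        shutil.rmtree(settings)
    return settings

# On Windows you would call, e.g.:
#   reset_xcp_center_settings(os.environ["APPDATA"], dry_run=False)
print(reset_xcp_center_settings(r"C:\Users\user\AppData\Roaming"))
```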
    • What Are You All Doing for Disaster Recovery of XOA Itself?
      Xen Orchestra · 0 Votes · 8 Posts · 948 Views
      Last post by W: @probain I would be interested in your Ansible role for deploying XO from sources.
    • cleanVm: incorrect backup size in metadata
      Xen Orchestra · 0 Votes · 15 Posts · 2k Views
      Last post by K: I am seeing this message too. This is the first time I've tried delta backups; I've only done one backup so far, so it's a full backup. Storage is NFS.
    • Hey XCP-NG! How's my setup?
      Share your setup! · 1 Vote · 11 Posts · 1k Views
      Last post by T: [Images: rack-configuration.drawio.png, authentication.drawio.png] I've made a lot of changes to the homelab, always have new projects, and have started on the diagrams again. A couple of other reference diagrams aren't finished, but here are two that are done: a Physical Equipment Reference sheet that expands on the hardware used and how, and an Authentication Stack that gives a reference of service access and authentication.
    • XO Community Edition: LDAP plugin not working?
      Xen Orchestra · 0 Votes · 56 Posts · 11k Views
      Last post by C: Just a reminder for myself, or for other people in need in the future (thanks again to everyone who helped me understand this). I had to reinstall my entire XCP-ng system and almost forgot how to configure the LDAP plugin to only allow my admin account to log in. So here is my LDAP plugin configuration, allowing only admin users (members of a specific group) to log in. My AD is a Windows Server 2019 machine running Active Directory, without SSL.
      URI: ldap://dc.domain.net:389, no certificate info.
      Base: dc=domain,dc=net
      Credentials: Fill = ticked; DN = full DN of the service user (CN=xen,OU=service_account,DC=domain,DC=net); password = that account's password. It's a simple account with no special rights; it can only read AD and log in.
      User filter (where it can get stuck): (&(sAMAccountName={{name}})(memberOf=CN=SG-XCP_Admin,OU=service_account,DC=domain,DC=net)). In reality my OUs have spaces in their names; it works anyway. SG-XCP_Admin is a security group containing my admin users.
      ID attribute: sAMAccountName. And that's all.
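For anyone adapting the user filter above: the plugin substitutes the login for the {{name}} placeholder before searching AD. A small sketch of that expansion (build_user_filter is a hypothetical helper, not part of the plugin; the filter shape is the poster's):

```python
def build_user_filter(login: str, group_dn: str) -> str:
    """Expand the {{name}} placeholder in a group-restricted AD user filter."""
    template = "(&(sAMAccountName={{name}})(memberOf=%s))" % group_dn
    return template.replace("{{name}}", login)

print(build_user_filter("xen", "CN=SG-XCP_Admin,OU=service_account,DC=domain,DC=net"))
# (&(sAMAccountName=xen)(memberOf=CN=SG-XCP_Admin,OU=service_account,DC=domain,DC=net))
```

Only members of the group named in `memberOf` will match the filter, which is what restricts login to the admin group.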
    • Unknown error: this pool is already connected
      Xen Orchestra · 0 Votes · 15 Posts · 4k Views
      Last post by P: This is an old thread, but I ran into this myself recently. While there is a link about deleting the entire XO configuration, I think I fixed it with a less drastic solution. Remember: I'm just a random dude on the internet posting dangerous commands to try; it worked for me, but your mileage may vary. I run Xen Orchestra in a container (commit e8733 at the time of writing), so I got a command line in the container with docker exec -it xoa bash, then ran redis-cli to get a Redis command prompt. I typed KEYS * to get a list of keys, and one key I saw was xo:server_host:172.30.0.214, the IP of the host I was trying to join (the master of a single-host pool). So I ran del xo:server_host:172.30.0.214, then restarted my container with docker restart xoa. After that, I was able to successfully add the host to Xen Orchestra. Maybe this will help someone else; it got me working again.
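The key-hunting step above is easy to get wrong by deleting too much. A sketch of the matching logic, assuming the xo:server_host:&lt;ip&gt; naming seen in the post (find_stale_server_keys is a hypothetical helper; it only selects keys, it does not delete them):

```python
def find_stale_server_keys(keys: list[str], host_ip: str) -> list[str]:
    """Among keys listed by KEYS *, return the xo:server_host entries
    that point at the host being re-added."""
    return [k for k in keys if k == f"xo:server_host:{host_ip}"]

keys = ["xo:server", "xo:server_host:172.30.0.214", "xo:user:abc"]
print(find_stale_server_keys(keys, "172.30.0.214"))
# ['xo:server_host:172.30.0.214']
```

Each returned key would then be removed with `del <key>` in redis-cli, followed by a container restart, as the poster did.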
    • DevOps Megathread: what you need and how we can help!
      Infrastructure as Code · 4 Votes · 49 Posts · 9k Views
      Last post by Cyrille: Terraform Provider v0.36.0 and Pulumi Provider v2.3.0:
      • Read and expose boot_firmware on the template data source, by @sakaru in #381.
      • Fixes VM creation from multi-disk templates: all existing disks in the template are used if they are declared in the plan; unused template disks are deleted to avoid inconsistency between the plan and the actual state; it is no longer possible to resize existing template disks to a smaller size (fixes a potential source of data loss); the order of existing disks matches the declaration order in the plan.
      • Terraform provider release: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.36.0
      • Pulumi provider release: https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.3.0
    • Remote desktop on Gnome hangs randomly
      Hardware · 0 Votes · 1 Post · 73 Views
      No one has replied.
    • What to do about Realtek RTL8125 / RTL8126 / RTL8127 drivers
      XCP-ng · 0 Votes · 13 Posts · 3k Views
      Last post by A: I have updated the drivers for the Realtek RTL812x 2.5/5/10G cards, and so far they are working correctly. There are a few minor issues that Realtek needs to fix (in the next version, they say), and the new Realtek firmware has not been added to XCP-ng (but it's not required). The standard 8125 driver included in XCP-ng 8.3 is not updated; to use the new driver, install the new "alt" version of the 8125 driver. To support the 8126, install the required 8125 alt version first and then the new 8126 driver. An 8127 driver is also available for the new 10G chips (I just got a production PCIe card for testing). The first issue I see with this card is that it is only a PCIe x1 card, so for full performance you need PCIe 4.0; other 8127 chips support x2, so they will fare better on PCIe 3.0. Realtek will keep releasing new versions of these chips that require driver updates to function correctly; even current versions of Linux need updates to support the newer chips.
    • REST API: mount a CD-ROM to a VM
      REST API · 0 Votes · 7 Posts · 650 Views
      Last post by S: I was curious whether there had been any updates on mounting ISOs via the API. Thanks.
    • xenstore: could not write path attr/eth0.401/ip
      Compute · 0 Votes · 1 Post · 102 Views
      No one has replied.
    • The emulator required to run this VM failed to start..?
      Xen Orchestra · 0 Votes · 14 Posts · 11k Views
      Last post by M: Bit of a necro-post, but we're mid-migration of storage/VM hosts/pools. I have a single Windows 11 VM that's UEFI (not Secure Boot). It runs just fine on the old 8.1 server we're migrating off of, but if I provision a new UEFI VM on the new pool and point it at the shared-storage VHD, it refuses to boot with the same "The emulator required to run this VM..." error the OP posted. I've tried restarting the toolstack and "updating the Secure Boot certs" on the host server. Any guidance is very much welcome; I was unaware that this VM wasn't booting on the new pool (the VM is off a lot of the time), and we're a couple of weeks from this project completing (and the old server being WEEEd).
    • XOSTOR hyperconvergence preview
      XOSTOR · 6 Votes · 458 Posts · 758k Views
      Last post by J: I have amazing news! After the upgrade to XCP-ng 8.3, I retested Velero backup, and it all just works.

      Completed backup:

        jonathon@jonathon-framework:~$ velero --kubeconfig k8s_configs/production.yaml backup describe grafana-test
        Name:         grafana-test
        Namespace:    velero
        Labels:       objectset.rio.cattle.io/hash=c2b5f500ab5d9b8ffe14f2c70bf3742291df565c
                      velero.io/storage-location=default
        Annotations:  objectset.rio.cattle.io/applied=H4sIAAAAAAAA/4SSQW/bPgzFvwvPtv9OajeJj/8N22HdBqxFL0MPlEQlWmTRkOhgQ5HvPsixE2yH7iji8ffIJ74CDu6ZYnIcoIMTeYpcOf7vtIICji4Y6OB/1MdxgAJ6EjQoCN0rYAgsKI5Dyk9WP0hLIqmi40qjiKfMcRlAq7pBY+py26qmbEi15a5p78vtaqe0oqbVVsO5AI+K/Ju4A6YDdKDXqrVtXaNqzU5traVVY9d6Uyt7t2nW693K2Pa+naABe4IO9hEtBiyFksClmgbUdN06a9NAOtvr5B4DDunA8uR64lGgg7u6rxMUYMji6OWZ/dhTeuIPaQ6os+gTFUA/tR8NmXd+TELxUfNA5hslHqOmBN13OF16ZwvNQShIqpZClYQj7qk6blPlGF5uzC/L3P+kvok7MB9z0OcCXPiLPLHmuLLWCfVfB4rTZ9/iaA5zHovNZz7R++k6JI50q89BXcuXYR5YT0DolkChABEPHWzW9cK+rPQx8jgsH/KQj+QT/frzXCdduc/Ca9u1Y7aaFvMu5Ang5Xz+HQAA//8X7Fu+/QIAAA
                      objectset.rio.cattle.io/id=e104add0-85b4-4eb5-9456-819bcbe45cfc
                      velero.io/resource-timeout=10m0s
                      velero.io/source-cluster-k8s-gitversion=v1.33.4+rke2r1
                      velero.io/source-cluster-k8s-major-version=1
                      velero.io/source-cluster-k8s-minor-version=33

        Phase:  Completed

        Namespaces:
          Included:  grafana
          Excluded:  <none>

        Resources:
          Included cluster-scoped:    <none>
          Excluded cluster-scoped:    volumesnapshotcontents.snapshot.storage.k8s.io
          Included namespace-scoped:  *
          Excluded namespace-scoped:  volumesnapshots.snapshot.storage.k8s.io

        Label selector:  <none>
        Or label selector:  <none>

        Storage Location:  default

        Velero-Native Snapshot PVs:  true
        Snapshot Move Data:          true
        Data Mover:                  velero

        TTL:  720h0m0s

        CSISnapshotTimeout:    30m0s
        ItemOperationTimeout:  4h0m0s

        Hooks:  <none>

        Backup Format Version:  1.1.0

        Started:    2025-10-15 15:29:52 -0700 PDT
        Completed:  2025-10-15 15:31:25 -0700 PDT

        Expiration:  2025-11-14 14:29:52 -0800 PST

        Total items to be backed up:  35
        Items backed up:              35

        Backup Item Operations:  1 of 1 completed successfully, 0 failed (specify --details for more information)

        Backup Volumes:
          Velero-Native Snapshots: <none included>
          CSI Snapshots:
            grafana/central-grafana:
              Data Movement: included, specify --details for more information
          Pod Volume Backups: <none included>

        HooksAttempted:  0
        HooksFailed:     0

      Completed restore:

        jonathon@jonathon-framework:~$ velero --kubeconfig k8s_configs/production.yaml restore describe restore-grafana-test --details
        Name:         restore-grafana-test
        Namespace:    velero
        Labels:       objectset.rio.cattle.io/hash=252addb3ed156c52d9fa9b8c045b47a55d66c0af
        Annotations:  objectset.rio.cattle.io/applied=H4sIAAAAAAAA/3yRTW7zIBBA7zJrO5/j35gzfE2rtsomymIM45jGBgTjbKLcvaKJm6qL7kDwnt7ABdDpHfmgrQEBZxrJ25W2/85rSOCkjQIBrxTYeoIEJmJUyAjiAmiMZWRtTYhb232Q5EC88tquJDKPFEU6GlpUG5UVZdpUdZ6WZZ+niOtNWtR1SypvqC8buCYwYkfjn7oBwwAC8ipHpbqC1LqqZZWrtse228isrLqywapSdS0z7KPU4EQgwN+mSI8eezSYMgWG22lwKOl7/MgERzJmdChPs9veDL9IGfSbQRcGy+96IjszCCiyCRLQRo6zIrVd5AHEfuHhkIBmmp4d+a/3e9Dl8LPoCZ3T5hg7FvQRcR8nxt6XL7sAgv1MCZztOE+01P23cvmnPYzaxNtwuF4/AwAA//8k6OwC/QEAAA
                      objectset.rio.cattle.io/id=9ad8d034-7562-44f2-aa18-3669ed27ef47

        Phase:  Completed

        Total items to be restored:  33
        Items restored:              33

        Started:    2025-10-15 15:35:26 -0700 PDT
        Completed:  2025-10-15 15:36:34 -0700 PDT

        Warnings:
          Velero:   <none>
          Cluster:  <none>
          Namespaces:
            grafana-restore:  could not restore, ConfigMap:elasticsearch-es-transport-ca-internal already exists. Warning: the in-cluster version is different than the backed-up version
                              could not restore, ConfigMap:kube-root-ca.crt already exists. Warning: the in-cluster version is different than the backed-up version

        Backup:  grafana-test

        Namespaces:
          Included:  grafana
          Excluded:  <none>

        Resources:
          Included:        *
          Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io
          Cluster-scoped:  auto

        Namespace mappings:  grafana=grafana-restore

        Label selector:  <none>
        Or label selector:  <none>

        Restore PVs:  true

        CSI Snapshot Restores:
          grafana-restore/central-grafana:
            Data Movement:
              Operation ID:   dd-ffa56e1c-9fd0-44b4-a8bb-8163f40a49e9.330b82fc-ca6a-423217ee5
              Data Mover:     velero
              Uploader Type:  kopia

        Existing Resource Policy:  <none>
        ItemOperationTimeout:      4h0m0s

        Preserve Service NodePorts:  auto

        Restore Item Operations:
          Operation for persistentvolumeclaims grafana-restore/central-grafana:
            Restore Item Action Plugin:  velero.io/csi-pvc-restorer
            Operation ID:                dd-ffa56e1c-9fd0-44b4-a8bb-8163f40a49e9.330b82fc-ca6a-423217ee5
            Phase:                       Completed
            Progress:                    856284762 of 856284762 complete (Bytes)
            Progress description:        Completed
            Created:                     2025-10-15 15:35:28 -0700 PDT
            Started:                     2025-10-15 15:36:06 -0700 PDT
            Updated:                     2025-10-15 15:36:26 -0700 PDT

        HooksAttempted:  0
        HooksFailed:     0

        Resource List:
          apps/v1/Deployment:
            - grafana-restore/central-grafana(created)
            - grafana-restore/grafana-debug(created)
          apps/v1/ReplicaSet:
            - grafana-restore/central-grafana-5448b9f65(created)
            - grafana-restore/central-grafana-56887c6cb6(created)
            - grafana-restore/central-grafana-56ddd4f497(created)
            - grafana-restore/central-grafana-5f4757844b(created)
            - grafana-restore/central-grafana-5f69f86c85(created)
            - grafana-restore/central-grafana-64545dcdc(created)
            - grafana-restore/central-grafana-69c66c54d9(created)
            - grafana-restore/central-grafana-6c8d6f65b8(created)
            - grafana-restore/central-grafana-7b479f79ff(created)
            - grafana-restore/central-grafana-bc7d96cdd(created)
            - grafana-restore/central-grafana-cb88bd49c(created)
            - grafana-restore/grafana-debug-556845ff7b(created)
            - grafana-restore/grafana-debug-6fb594cb5f(created)
            - grafana-restore/grafana-debug-8f66bfbf6(created)
          discovery.k8s.io/v1/EndpointSlice:
            - grafana-restore/central-grafana-hkgd5(created)
          networking.k8s.io/v1/Ingress:
            - grafana-restore/central-grafana(created)
          rbac.authorization.k8s.io/v1/Role:
            - grafana-restore/central-grafana(created)
          rbac.authorization.k8s.io/v1/RoleBinding:
            - grafana-restore/central-grafana(created)
          v1/ConfigMap:
            - grafana-restore/central-grafana(created)
            - grafana-restore/elasticsearch-es-transport-ca-internal(failed)
            - grafana-restore/kube-root-ca.crt(failed)
          v1/Endpoints:
            - grafana-restore/central-grafana(created)
          v1/PersistentVolume:
            - pvc-e3f6578f-08b2-4e79-85f0-76bbf8985b55(skipped)
          v1/PersistentVolumeClaim:
            - grafana-restore/central-grafana(created)
          v1/Pod:
            - grafana-restore/central-grafana-cb88bd49c-fc5br(created)
          v1/Secret:
            - grafana-restore/fpinfra-net-cf-cert(created)
            - grafana-restore/grafana(created)
          v1/Service:
            - grafana-restore/central-grafana(created)
          v1/ServiceAccount:
            - grafana-restore/central-grafana(created)
            - grafana-restore/default(skipped)
          velero.io/v2alpha1/DataUpload:
            - velero/grafana-test-nw7zj(skipped)

      [Image: working restore pod, with the correct data in the PV.]

      Velero installed from Helm (https://vmware-tanzu.github.io/helm-charts), chart version velero:11.1.0, with these values:

        ---
        image:
          repository: velero/velero
          tag: v1.17.0
        # Whether to deploy the restic daemonset.
        deployNodeAgent: true
        initContainers:
          - name: velero-plugin-for-aws
            image: velero/velero-plugin-for-aws:latest
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - mountPath: /target
                name: plugins
        configuration:
          defaultItemOperationTimeout: 2h
          features: EnableCSI
          defaultSnapshotMoveData: true
          backupStorageLocation:
            - name: default
              provider: aws
              bucket: velero
              config:
                region: us-east-1
                s3ForcePathStyle: true
                s3Url: https://s3.location
          # Destination VSL points to LINSTOR snapshot class
          volumeSnapshotLocation:
            - name: linstor
              provider: velero.io/csi
              config:
                snapshotClass: linstor-vsc
        credentials:
          useSecret: true
          existingSecret: velero-user
        metrics:
          enabled: true
          serviceMonitor:
            enabled: true
          prometheusRule:
            enabled: true
            # Additional labels to add to deployed PrometheusRule
            additionalLabels: {}
            # PrometheusRule namespace. Defaults to Velero namespace.
            # namespace: ""
            # Rules to be deployed
            spec:
              - alert: VeleroBackupPartialFailures
                annotations:
                  message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partially failed backups.
                expr: |-
                  velero_backup_partial_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
                for: 15m
                labels:
                  severity: warning
              - alert: VeleroBackupFailures
                annotations:
                  message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} failed backups.
                expr: |-
                  velero_backup_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
                for: 15m
                labels:
                  severity: warning

      Also create the following VolumeSnapshotClass:

        apiVersion: snapshot.storage.k8s.io/v1
        kind: VolumeSnapshotClass
        metadata:
          name: linstor-vsc
          labels:
            velero.io/csi-volumesnapshot-class: "true"
        driver: linstor.csi.linbit.com
        deletionPolicy: Delete

      We are using the Piraeus operator (https://github.com/piraeusdatastore/piraeus-operator.git, version v2.9.1) to use XOSTOR in k8s, with these values:

        ---
        operator:
          resources:
            requests:
              cpu: 250m
              memory: 500Mi
            limits:
              memory: 1Gi
        installCRDs: true
        imageConfigOverride:
          - base: quay.io/piraeusdatastore
            components:
              linstor-satellite:
                image: piraeus-server
                tag: v1.29.0
        tls:
          certManagerIssuerRef:
            name: step-issuer
            kind: StepClusterIssuer
            group: certmanager.step.sm

      Then we just connect to the XOSTOR cluster like an external LINSTOR controller.