XCP-ng
    Popular

    • PoloGTIJaune

      Build number cloud vs Build number 8.3.0

      Solved French (Français)
      1 Votes
      11 Posts
      109 Views
      olivierlambert
      Ah, excellent news! I'm marking the topic as solved!
    • J

      Backup Error - Invalid RFC7231 date-time value

      Backup
      0 Votes
      4 Posts
      42 Views
      J
      I have found that creating a new remote pointing at the same bucket but a different directory gave the same error. I also tried restarting Xen Orchestra in case an old task was stuck. I did manage to get this backup working via the same newer remote, but targeting a new bucket with a single directory inside. Odd, because the jobs against the other, older bucket are still working.

      Worth noting some other potential factors on the older bucket: it has a lifecycle policy in place on the Wasabi side, which pruned Wasabi's versioned files within a time window where no available restores were listed in Xen Orchestra. We have object lock and versioning turned on for immutable backups in Wasabi. The backups in that bucket were not being pruned automatically by Xen Orchestra, and the size had grown past 100 TB even though retention was 15 backups total and the older files should have started being dropped. The bucket size has since come down to about 40 TB, and we're looking to further optimise the backups across our whole infrastructure.

      If there are other suggestions I can test them later; for now I'm just noting what we're doing and how I've worked around the issue to keep the backup running when hitting this error.
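      For anyone wanting to double-check the bucket-side settings mentioned above (lifecycle policy, versioning, object lock), here is a minimal sketch using the generic AWS CLI against Wasabi's S3-compatible endpoint; the bucket name, profile name, and endpoint URL are placeholders to adapt:

          # Dump lifecycle, versioning and object-lock configuration for the bucket
          aws s3api get-bucket-lifecycle-configuration --bucket my-backup-bucket \
              --profile wasabi --endpoint-url https://s3.wasabisys.com
          aws s3api get-bucket-versioning --bucket my-backup-bucket \
              --profile wasabi --endpoint-url https://s3.wasabisys.com
          aws s3api get-object-lock-configuration --bucket my-backup-bucket \
              --profile wasabi --endpoint-url https://s3.wasabisys.com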
    • G

      Alternative to XCP-NG Plugin for Veeam Backup & Replication Public BETA

      Unsolved Backup
      0 Votes
      4 Posts
      85 Views
      P
      @julienXOvates Sounds promising! @gashorus In the last Veeam webinar they announced that the XCP-NG plugin will be out of beta with Veeam v13.1 (currently 13.0.1), targeted for June 2026.
    • M

      XAPI sr-create ignores name-description parameter

      Compute
      0 Votes
      4 Posts
      62 Views
      M
      @psafont Thank you for the quick response. I also found a similar issue: the other-config:auto-scan=true parameter is not being applied during xe sr-create either. As with the name-description parameter, the workaround is to add it separately afterwards using xe sr-param-add.
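      For reference, a minimal sketch of that workaround; the NFS SR, its device-config values, and $HOST_UUID below are placeholders, not taken from this thread:

          # Create the SR; on the affected builds, name-description and other-config
          # values passed to sr-create may be silently ignored.
          SR_UUID=$(xe sr-create host-uuid=$HOST_UUID name-label="NFS SR" type=nfs \
              content-type=user shared=true \
              device-config:server=192.0.2.10 device-config:serverpath=/export/sr)

          # Workaround: apply the ignored parameters afterwards.
          xe sr-param-set uuid=$SR_UUID name-description="Created from the CLI"
          xe sr-param-add uuid=$SR_UUID param-name=other-config auto-scan=true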
    • J

      Tag-Based Automation: Manage VM CPU Priority via assigned tag.

      Management
      0 Votes
      3 Posts
      34 Views
      J
      @johnnezero said:

      WHAT: Automatically assigns CPU weights and I/O priorities based on the assigned VM tag (i.e. replicating what vCenter did via resource pools etc.).

      HOW: Run via cron for regular enforcement.

      WHY: Automatically apply performance settings to all pool VMs (and prevent configuration drift if settings are accidentally changed).

      TAGS: The performance tiering concept is a 4-tier system with a naming convention that sorts logically in XO:

          TAG        CPU WEIGHT   I/O PRIORITY   USE CASE
          0-core     2048         7 (highest)    Domain Controllers, DNS, DHCP, Core DBs
          1-high     1024         7              Critical App Servers
          2-normal   256          4              Standard Workloads
          3-low      128          1              Dev/Test, Noisy Neighbors

      Why the "0-" prefix? It forces core VMs to the top of the VM list in XO for easy visibility and management.

      Important: CPU weights only matter during contention. When the host is under-utilized, all VMs get the performance they need regardless of weight. These are an insurance policy.

      Script: set-performance.sh

          #!/bin/bash
          # ============================================
          # XCP-ng set-performance.sh script
          # Tags: 0-core, 1-high, 2-normal, 3-low
          # ============================================

          # --- CONFIGURATION (customize these for your environment) ---
          CORE_TAG="0-core"
          CORE_WEIGHT="2048"
          CORE_IO_PRI="7"

          HIGH_TAG="1-high"
          HIGH_WEIGHT="1024"
          HIGH_IO_PRI="7"

          NORMAL_TAG="2-normal"
          NORMAL_WEIGHT="256"
          NORMAL_IO_PRI="4"

          LOW_TAG="3-low"
          LOW_WEIGHT="128"
          LOW_IO_PRI="1"
          LOW_QOS_KBPS="100000" # 100 Mbps cap for noisy neighbors (defined but not yet applied below)

          # --- CORE CRITICAL VMs ---
          echo "=== Applying $CORE_TAG CPU & I/O Priority ==="
          xe vm-list tags:contains="$CORE_TAG" --minimal | tr ',' '\n' | while read uuid; do
              [ -z "$uuid" ] && continue
              xe vm-param-set uuid=$uuid VCPUs-params:weight=$CORE_WEIGHT
              xe vm-param-set uuid=$uuid other-config:sched-pri=$CORE_IO_PRI
              echo "CORE CRITICAL priority applied: $uuid"
          done

          # --- HIGH PRIORITY VMs ---
          echo "=== Applying $HIGH_TAG CPU & I/O Priority ==="
          xe vm-list tags:contains="$HIGH_TAG" --minimal | tr ',' '\n' | while read uuid; do
              [ -z "$uuid" ] && continue
              xe vm-param-set uuid=$uuid VCPUs-params:weight=$HIGH_WEIGHT
              xe vm-param-set uuid=$uuid other-config:sched-pri=$HIGH_IO_PRI
              echo "HIGH priority applied: $uuid"
          done

          # --- NORMAL PRIORITY VMs ---
          echo "=== Applying $NORMAL_TAG CPU & I/O Priority ==="
          xe vm-list tags:contains="$NORMAL_TAG" --minimal | tr ',' '\n' | while read uuid; do
              [ -z "$uuid" ] && continue
              xe vm-param-set uuid=$uuid VCPUs-params:weight=$NORMAL_WEIGHT
              xe vm-param-set uuid=$uuid other-config:sched-pri=$NORMAL_IO_PRI
              echo "NORMAL priority applied: $uuid"
          done

          # --- LOW PRIORITY VMs (a network QoS cap is planned but not applied yet) ---
          echo "=== Applying $LOW_TAG CPU & I/O Priority ==="
          xe vm-list tags:contains="$LOW_TAG" --minimal | tr ',' '\n' | while read uuid; do
              [ -z "$uuid" ] && continue
              xe vm-param-set uuid=$uuid VCPUs-params:weight=$LOW_WEIGHT
              xe vm-param-set uuid=$uuid other-config:sched-pri=$LOW_IO_PRI
              echo "LOW priority applied: $uuid"
          done

          echo "=== Performance Tuning Complete! ==="

      How to deploy:

      1. Upload the script:

          # Copy to your pool master
          scp set-performance.sh root@your-pool-master:/usr/local/bin/
          chmod +x /usr/local/bin/set-performance.sh

      2. Add to crontab (runs hourly):

          0 * * * * root /usr/local/bin/set-performance.sh >> /var/log/set-performance.log 2>&1

      3. Test it manually:

          /usr/local/bin/set-performance.sh

      It would be even better if the configuration section were split off into its own conf file. That would make it easier to manage, especially if this ends up being used by Vates in the Vates VMS software: there could then be a vendor-recommended configuration alongside an optional customer-specific, workflow-based configuration (see the sketch below).
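      A minimal sketch of what that split could look like, assuming a hypothetical /etc/set-performance.conf path and the same variable names as the script above (neither the path nor the layout is an existing Vates convention):

          # /etc/set-performance.conf -- plain shell variable assignments, no logic
          CORE_TAG="0-core";     CORE_WEIGHT="2048"; CORE_IO_PRI="7"
          HIGH_TAG="1-high";     HIGH_WEIGHT="1024"; HIGH_IO_PRI="7"
          NORMAL_TAG="2-normal"; NORMAL_WEIGHT="256"; NORMAL_IO_PRI="4"
          LOW_TAG="3-low";       LOW_WEIGHT="128";   LOW_IO_PRI="1"

      The script would then replace its CONFIGURATION block with something like:

          # Load vendor defaults, then optional local overrides if present
          . /etc/set-performance.conf
          [ -r /etc/set-performance.local.conf ] && . /etc/set-performance.local.conf

      A vendor-shipped defaults file plus a separate, locally managed override file would keep customer changes intact across upgrades.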
    • A

      XenOrchestra not showing VM disks on a pool (works on a single server) - XCP-ng Center shows them

      Xen Orchestra
      0 Votes
      14 Posts
      281 Views
      C
      Dug a little deeper. For a VM where the disks are not shown, the following XO API call fails:

          /rest/v0/vms/a519e879-3971-9210-51b6-7df14336e7b7/vdis

          {
            "error": "no such VDI ac37700d-3157-4df7-b8e8-e1799a994591",
            "data": {
              "id": "ac37700d-3157-4df7-b8e8-e1799a994591",
              "type": [ "VDI" ]
            }
          }

      Also, the VDI cannot be retrieved over the XO API:

          /rest/v0/vms/a519e879-3971-9210-51b6-7df14336e7b7

          ...
          "$VBDs": [
            "4ea8a3cd-0d1b-dc60-4d9c-fd70e060f06c",
            "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba"
          ],
          ...

          /rest/v0/vbds/9f4ca686-9fc2-35a9-c3e9-c871c9f68aba

          {
            "type": "VBD",
            "attached": false,
            "bootable": false,
            "device": "xvda",
            "is_cd_drive": false,
            "position": "0",
            "read_only": false,
            "VDI": "ac37700d-3157-4df7-b8e8-e1799a994591",
            "VM": "a519e879-3971-9210-51b6-7df14336e7b7",
            "id": "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba",
            "uuid": "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba",
            "$pool": "93d361b7-f549-53b7-a3aa-c9695bf0abe4",
            "$poolId": "93d361b7-f549-53b7-a3aa-c9695bf0abe4",
            "_xapiRef": "OpaqueRef:1d424d94-f540-2eb4-9e52-2a9b21ec0a19"
          }

          /rest/v0/vdis/ac37700d-3157-4df7-b8e8-e1799a994591

          {
            "error": "no such VDI ac37700d-3157-4df7-b8e8-e1799a994591",
            "data": {
              "id": "ac37700d-3157-4df7-b8e8-e1799a994591",
              "type": "VDI"
            }
          }

      However, the VDI can be listed using the xe CLI:

          $ xe vm-list uuid=a519e879-3971-9210-51b6-7df14336e7b7
          uuid ( RO)           : a519e879-3971-9210-51b6-7df14336e7b7
               name-label ( RW): XXX
              power-state ( RO): halted

          $ xe vbd-list vm-uuid=a519e879-3971-9210-51b6-7df14336e7b7
          uuid ( RO)             : 4ea8a3cd-0d1b-dc60-4d9c-fd70e060f06c
                   vm-uuid ( RO) : a519e879-3971-9210-51b6-7df14336e7b7
             vm-name-label ( RO) : XXX
                  vdi-uuid ( RO) : <not in database>
                     empty ( RO) : true
                    device ( RO) : xvdd

          uuid ( RO)             : 9f4ca686-9fc2-35a9-c3e9-c871c9f68aba
                   vm-uuid ( RO) : a519e879-3971-9210-51b6-7df14336e7b7
             vm-name-label ( RO) : XXX
                  vdi-uuid ( RO) : ac37700d-3157-4df7-b8e8-e1799a994591
                     empty ( RO) : false
                    device ( RO) : xvda

          $ xe vdi-list uuid=ac37700d-3157-4df7-b8e8-e1799a994591
          uuid ( RO)                : ac37700d-3157-4df7-b8e8-e1799a994591
                    name-label ( RW): XXX Disk 0
              name-description ( RW): Created by XO
                       sr-uuid ( RO): 977b7e63-bb84-57b2-3e0d-206afea553bf
                  virtual-size ( RO): 34359738368
                      sharable ( RO): false
                     read-only ( RO): false

      Seems almost like something changed in the XCP-ng API which XO cannot consume.
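      If someone wants to script the same cross-check, here is a minimal sketch against the XO REST API; it assumes curl and jq are available and that XO_HOST and XO_TOKEN (an XO authentication token) are placeholders you set yourself:

          #!/bin/bash
          # Walk a VM's VBDs via the XO REST API and flag any referenced VDI
          # that XO cannot resolve (mirrors the manual checks above).
          XO_HOST="${XO_HOST:?set to your XO host, e.g. xo.example.lan}"
          XO_TOKEN="${XO_TOKEN:?set to an XO authentication token}"
          VM_UUID="$1"

          # -k accepts a self-signed certificate; drop it if XO has a trusted one
          api() { curl -sk -b "authenticationToken=$XO_TOKEN" "https://$XO_HOST$1"; }

          for vbd in $(api "/rest/v0/vms/$VM_UUID" | jq -r '."$VBDs"[]'); do
              vdi=$(api "/rest/v0/vbds/$vbd" | jq -r '.VDI // empty')
              [ -z "$vdi" ] && continue   # e.g. empty CD drives have no VDI
              if api "/rest/v0/vdis/$vdi" | jq -e 'has("error")' > /dev/null; then
                  echo "VBD $vbd references VDI $vdi, which XO cannot resolve"
              fi
          done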
    • maximsachs

      XCP-ng 8.3: Broadcom BCM57414 `bnxt_en` Driver Fails to Probe on HPE DL380a Gen12

      Hardware
      2 Votes
      8 Posts
      557 Views
      T
      Hi @maximsachs,

      Sorry for the delay. From what you describe, the driver seems to be probed in all cases, but I have a doubt regarding the driver from XCP-ng 8.2, so I rebuilt it specifically for 8.3. To completely rule out a driver issue, can you try this RPM? From the host, this can be done by running the following commands:

          $ wget https://nextcloud.vates.tech/public.php/dav/files/R33Dwpt5gjy6CCr/broadcom-bnxt-en-1.10.0_216.0.119.1-1.0.82srcs.0.xcpng8.3.x86_64.rpm
          $ yum update ./broadcom-bnxt-en-1.10.0_216.0.119.1-1.0.82srcs.0.xcpng8.3.x86_64.rpm

      Also, since this xcp-ng 8.2 release seems to have support for device IDs that have been removed from the 8.3 one, can you give the output of the following shell commands:

          $ lspci -nn -s 0001:86:00.0
          $ lspci -nn -s 0001:86:00.1

      Regards,
      Thierry