XCP-ng
Popular topics
    • stormi

      XCP-ng 8.3 updates announcements and testing

      News
      1 Votes
      511 Posts
      225k Views
      A
      @stormi I'm also getting errors on some VMs while trying to export a disk, and even when starting some VMs from NFS (ones that were fine before).
      xo-server[565]: 2026-05-13T02:53:15.746Z xo:api WARN admin | vm.start(...) [2s] =!> XapiError: INTERNAL_ERROR(xenopsd internal error: Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]]))
      xo-server[565]: 2026-05-13T02:53:40.652Z xo:api WARN admin | vm.start(...) [3s] =!> XapiError: SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=VDI 399734eb-5965-4799-ac36-f6dd774db867 not detached cleanly], )
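For anyone hitting the same SR_BACKEND_FAILURE_46, here is a minimal sketch of a first diagnostic step: pull the offending VDI UUID out of the log line, then check which VBDs still reference it on dom0. The sed pattern and variable names are mine; the commented xe call is the standard XCP-ng CLI, shown only as a pointer, not an official troubleshooting procedure.

```shell
#!/bin/sh
# Extract the VDI UUID from the "not detached cleanly" error, then point at
# the xe command you could run on dom0 to see which VBDs still hold it.
# The log line is taken from the post above; everything else is a sketch.
line="XapiError: SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=VDI 399734eb-5965-4799-ac36-f6dd774db867 not detached cleanly], )"

# Pull the UUID out of the opterr message
vdi_uuid=$(printf '%s\n' "$line" | sed -n 's/.*opterr=VDI \([0-9a-f-]*\) not detached cleanly.*/\1/p')
echo "offending VDI: $vdi_uuid"

# On dom0, you could then inspect which VBDs still reference it, e.g.:
#   xe vbd-list vdi-uuid="$vdi_uuid" params=vm-name-label,currently-attached
```

With the UUID in hand, the commented xe query shows whether a VBD is still marked attached, which is usually the next thing support asks for.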
    • olivierlambert

      🛰️ XO 6: dedicated thread for all your feedback!

      Xen Orchestra
      7 Votes
      226 Posts
      33k Views
      julienXOvates
      @jr-m4 Thank you for this feedback. We'll try to improve it based on this!
    • acebmxer

      Backups with qcow2 enabled

      Backup
      0 Votes
      28 Posts
      2k Views
      acebmxer
      So the scheduled backup at 1pm this afternoon ran with no issue. As stated previously, the coalesces are still adding up and now show 2 per VM.
      edit -
      Apr 21 11:22:55 xo-ce xo-server[15617]: },
      Apr 21 11:22:55 xo-ce xo-server[15617]: summary: { duration: '6m', cpuUsage: '4%', memoryUsage: '29.01 MiB' }
      Apr 21 11:22:55 xo-ce xo-server[15617]: }
      Apr 21 13:00:00 xo-ce xo-server[16924]: 2026-04-21T17:00:00.320Z xo:backups:worker INFO starting backup
      Apr 21 13:00:00 xo-ce sudo[16938]: xo-service : PWD=/opt/xen-orchestra/packages/xo-server ; USER=root ; COMMAND=/usr/bin/mount -o -t nf>
      Apr 21 13:00:00 xo-ce sudo[16938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=996)
      Apr 21 13:00:00 xo-ce sudo[16938]: pam_unix(sudo:session): session closed for user root
      Apr 21 13:00:06 xo-ce xo-server[16924]: 2026-04-21T17:00:06.941Z xo:backups:MixinBackupWriter INFO deleting unused VHD {
      Apr 21 13:00:06 xo-ce xo-server[16924]: path: '/xo-vm-backups/138538a8-ef52-4d0a-4433-5ebb31d7e152/vdis/9f7daac4-80a1-41f9-8af0-99b6fe>
      Apr 21 13:00:06 xo-ce xo-server[16924]: }
      Apr 21 13:00:19 xo-ce xo-server[16924]: 2026-04-21T17:00:19.536Z xo:backups:MixinBackupWriter INFO deleting unused VHD {
      Apr 21 13:00:19 xo-ce xo-server[16924]: path: '/xo-vm-backups/6d733582-0728-b67c-084b-56abe6047bfc/vdis/9f7daac4-80a1-41f9-8af0-99b6fe>
      Apr 21 13:00:19 xo-ce xo-server[16924]: }
      Apr 21 13:01:23 xo-ce xo-server[16924]: 2026-04-21T17:01:23.971Z xo:backups:worker INFO backup has ended
      Apr 21 13:01:23 xo-ce sudo[16971]: xo-service : PWD=/opt/xen-orchestra/packages/xo-server ; USER=root ; COMMAND=/usr/bin/umount /run/xo->
      Apr 21 13:01:23 xo-ce sudo[16971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=996)
      Apr 21 13:01:23 xo-ce sudo[16971]: pam_unix(sudo:session): session closed for user root
      Apr 21 13:01:24 xo-ce xo-server[16924]: 2026-04-21T17:01:24.029Z xo:backups:worker INFO process will exit {
      Apr 21 13:01:24 xo-ce xo-server[16924]: duration: 83708608,
      Apr 21 13:01:24 xo-ce xo-server[16924]: exitCode: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: resourceUsage: {
      Apr 21 13:01:24 xo-ce xo-server[16924]: userCPUTime: 31147905,
      Apr 21 13:01:24 xo-ce xo-server[16924]: systemCPUTime: 14319055,
      Apr 21 13:01:24 xo-ce xo-server[16924]: maxRSS: 65888,
      Apr 21 13:01:24 xo-ce xo-server[16924]: sharedMemorySize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: unsharedDataSize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: unsharedStackSize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: minorPageFault: 367979,
      Apr 21 13:01:24 xo-ce xo-server[16924]: majorPageFault: 1,
      Apr 21 13:01:24 xo-ce xo-server[16924]: swappedOut: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: fsRead: 5648,
      Apr 21 13:01:24 xo-ce xo-server[16924]: fsWrite: 16101272,
      Apr 21 13:01:24 xo-ce xo-server[16924]: ipcSent: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: ipcReceived: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: signalsCount: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: voluntaryContextSwitches: 65983,
      Apr 21 13:01:24 xo-ce xo-server[16924]: involuntaryContextSwitches: 7745
      Apr 21 13:01:24 xo-ce xo-server[16924]: },
      Apr 21 13:01:24 xo-ce xo-server[16924]: summary: { duration: '1m', cpuUsage: '54%', memoryUsage: '64.34 MiB' }
      Apr 21 13:01:24 xo-ce xo-server[16924]: }
    • johnnezero

      Tag-Based Automation: Manage VM CPU Priority via assigned tag.

      Management
      1 Votes
      23 Posts
      288 Views
      J
      @tjkreidl said: @john.c Not found with the Wayback Machine, alas. Still not finding it anywhere else, but will keep looking! It's a crying shame Citrix didn't preserve the treasure trove of old community blogs.
      I did a bit of digging with the aid of AI and I've managed to uncover the original three blog posts on NUMA and references to UMA. If you do some more digging you may be able to uncover the rest, so it can be rewritten and/or updated, then hosted somewhere that won't go down, or be lost, so easily.
      If you do an update or rewrite, may I suggest switching the images to WebP or AVIF? That will seriously reduce file size while maintaining quality (or even leave room for higher quality). Consider switching to SVG for diagrams rather than raster formats (or at least as the default). I'd also suggest considering Mermaid for diagrams (https://mermaid.ai/open-source/?utm_medium=hero&utm_campaign=variant_a&utm_source=mermaid_js). And if you do a rewrite, maybe use Markdown and something like Hugo to generate the site from the files, hosted on GitHub Pages or some other Pages-providing service (e.g. GitLab Pages or Codeberg Pages).
      https://web.archive.org/web/20220527221535/https://www.mycugc.org/blogs/tobias-kreidl/2019/03/07/tale-of-two-servers-bios-settings-affect-apps-gpu
      https://web.archive.org/web/20220527213026/https://www.mycugc.org/blogs/tobias-kreidl/2019/04/30/a-tale-of-two-servers-part-2
      https://web.archive.org/web/20220527215004/https://www.mycugc.org/blogs/tobias-kreidl/2019/04/30/a-tale-of-two-servers-part-3
      https://community.citrix.com/forums/topic/235895-xenserver-vm-citrix-worker-sizing-question/
      https://xcp-ng.org/forum/topic/9359/cpu-provisioning
      https://community.citrix.com/forums/topic/237493-memory-and-cpus-assigning-to-vms-in-order-to-obtain-maximum-performance-according-to-numa-topology/
      https://community.citrix.com/forums/topic/241553-bios-power-performance-settings/
      https://community.citrix.com/forums/topic/243640-citrix-hypervisor-performance-tips/
    • AlexanderK

      Nested Virtualization of Windows Hyper-V on XCP-ng

      Compute
      0 Votes
      133 Posts
      121k Views
      C
      Thanks for that information. I will make this message short because @stormi is busy but I want to say thanks to Vates and XCP-ng for all their work done to support Windows on the Xen platform. This includes TPM2 and secure boot support and Microsoft-signed pv drivers. Well done!
    • acebmxer

      XOA - Memory Usage

      Xen Orchestra
      0 Votes
      45 Posts
      3k Views
      florent
      @acebmxer not yet (a little underwater with the release patch, but we will do it)
    • J

      (Windows) guest IPv6 address doesn't collapse zeroes -> Long IPv6 addresses

      Xen Orchestra
      0 Votes
      16 Posts
      462 Views
      poddingue
      Thanks for the ping and for narrowing it down; that live-migration repro is a really useful signal. I don't know enough about how the guest tools report IPs back through XAPI to say where the canonicalisation should happen, but it sounds like something @Team-Hypervisor-Kernel might want to look at since the trigger is on the agent side after migration. If it turns out to be reproducible on another Windows guest version (2022, 2019), that might help narrow it further; no pressure though, you've already done the hard part.
    • A

      XenOrchestra not showing VM Disks on Pool (on single Server working) - XCP-ng Center is showing them

      Xen Orchestra
      0 Votes
      14 Posts
      326 Views
      C
      Dug a little deeper. For a VM where the disks are not shown, the following XO API call fails:
      /rest/v0/vms/a519e879-3971-9210-51b6-7df14336e7b7/vdis
      { "error": "no such VDI ac37700d-3157-4df7-b8e8-e1799a994591", "data": { "id": "ac37700d-3157-4df7-b8e8-e1799a994591", "type": [ "VDI" ] } }
      Also, the VDI cannot be retrieved over the XO API:
      /rest/v0/vms/a519e879-3971-9210-51b6-7df14336e7b7
      ... "$VBDs": [ "4ea8a3cd-0d1b-dc60-4d9c-fd70e060f06c", "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba" ], ...
      /rest/v0/vbds/9f4ca686-9fc2-35a9-c3e9-c871c9f68aba
      { "type": "VBD", "attached": false, "bootable": false, "device": "xvda", "is_cd_drive": false, "position": "0", "read_only": false, "VDI": "ac37700d-3157-4df7-b8e8-e1799a994591", "VM": "a519e879-3971-9210-51b6-7df14336e7b7", "id": "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba", "uuid": "9f4ca686-9fc2-35a9-c3e9-c871c9f68aba", "$pool": "93d361b7-f549-53b7-a3aa-c9695bf0abe4", "$poolId": "93d361b7-f549-53b7-a3aa-c9695bf0abe4", "_xapiRef": "OpaqueRef:1d424d94-f540-2eb4-9e52-2a9b21ec0a19" }
      /rest/v0/vdis/ac37700d-3157-4df7-b8e8-e1799a994591
      { "error": "no such VDI ac37700d-3157-4df7-b8e8-e1799a994591", "data": { "id": "ac37700d-3157-4df7-b8e8-e1799a994591", "type": "VDI" } }
      However, the VDI can be listed using the xe CLI:
      $ xe vm-list uuid=a519e879-3971-9210-51b6-7df14336e7b7
      uuid ( RO): a519e879-3971-9210-51b6-7df14336e7b7
      name-label ( RW): XXX
      power-state ( RO): halted
      $ xe vbd-list vm-uuid=a519e879-3971-9210-51b6-7df14336e7b7
      uuid ( RO): 4ea8a3cd-0d1b-dc60-4d9c-fd70e060f06c
      vm-uuid ( RO): a519e879-3971-9210-51b6-7df14336e7b7
      vm-name-label ( RO): XXX
      vdi-uuid ( RO): <not in database>
      empty ( RO): true
      device ( RO): xvdd
      uuid ( RO): 9f4ca686-9fc2-35a9-c3e9-c871c9f68aba
      vm-uuid ( RO): a519e879-3971-9210-51b6-7df14336e7b7
      vm-name-label ( RO): XXX
      vdi-uuid ( RO): ac37700d-3157-4df7-b8e8-e1799a994591
      empty ( RO): false
      device ( RO): xvda
      $ xe vdi-list uuid=ac37700d-3157-4df7-b8e8-e1799a994591
      uuid ( RO): ac37700d-3157-4df7-b8e8-e1799a994591
      name-label ( RW): XXX Disk 0
      name-description ( RW): Created by XO
      sr-uuid ( RO): 977b7e63-bb84-57b2-3e0d-206afea553bf
      virtual-size ( RO): 34359738368
      sharable ( RO): false
      read-only ( RO): false
      It seems almost like something changed in the XCP-ng API that XO cannot consume.
    • stormi

      Second (and final) Release Candidate for QCOW2 image format support

      News
      5 Votes
      16 Posts
      2k Views
      bogikornel
      @pkgw I tested it with a cluster size of 2 megabytes. I got similar results to those with the default size.
    • C

      Error while scanning disk

      Backup
      0 Votes
      14 Posts
      689 Views
      poddingue
      Thanks, Florent, for the explanation.
    • PoloGTIJaune

      Build number cloud vs Build number 8.3.0

      French (Français) (Solved)
      1 Votes
      11 Posts
      152 Views
      olivierlambert
      Ah, excellent news! I'm marking the topic as solved!
    • J

      Building from source, now introduces local changes in typed-router.d.ts?

      Xen Orchestra
      0 Votes
      11 Posts
      408 Views
      J
      @MathieuRA I noticed you merged https://github.com/vatesfr/xen-orchestra/pull/9787. I just tried it, and it does seem to fix my original issue! Thank you! I am always impressed by you guys, making testing and reporting upstream a good experience! (PR #9787 by Elise-FZI in vatesfr/xen-orchestra: "fix(xo6): remove dev routes from prod", now closed.)
    • acebmxer

      Latest commit breaks install

      Management
      0 Votes
      19 Posts
      896 Views
      acebmxer
      @gregbinsd let us know. If you do use my script, note that it pulls Node.js from NodeSource, so it may not install the latest 24.15.0 LTS. If you specify 24.15.0, it will install that version. If you need to change the Node version with my script, use the rebuild option.
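As a tiny companion sketch (purely illustrative, not part of the script discussed above): before rebuilding, it can help to compare the Node.js version actually in use against the version you pinned. The function name `check_node_pin` and the version numbers are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical helper: compare the Node.js version in use against a pinned
# version before rebuilding XO from source. Names and versions are
# illustrative; this is not part of acebmxer's actual script.
check_node_pin() {
  pinned="$1"   # e.g. 24.15.0
  actual="$2"   # e.g. "$(node --version | tr -d v)" on a real system
  if [ "$pinned" = "$actual" ]; then
    echo "OK: node $actual matches pin"
  else
    echo "MISMATCH: pinned $pinned, found $actual (use the rebuild option)"
  fi
}

# Example: the first call matches the pin, the second does not
check_node_pin 24.15.0 24.15.0
check_node_pin 24.15.0 24.14.0
```

On a real machine you would pass the live version, e.g. `check_node_pin 24.15.0 "$(node --version | tr -d v)"`.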
    • V

      Question about pools

      XCP-ng
      0 Votes
      10 Posts
      274 Views
      P
      @vlamincktr XO Proxy built from source is pretty reliable, at no cost. Use either @acebmxer's script or @ronivay's. Here is a quick tutorial based on an Ubuntu VM: https://omnibox.huducloud.com/shared_article/QJ9y1bRSPj9VTbWp6NKaV7yn/installation-xoa-a-partir-des-sources-github-ronivay (the first part covers XO CE, the second part XO Proxy CE). Beware: since you delegate some jobs to XO Proxy, always upgrade XO Proxy whenever you upgrade XOA, so that they run the same backup mechanisms/code.
    • A

      Lost connection to ISO Repository

      Xen Orchestra
      0 Votes
      10 Posts
      435 Views
      A
      @Pilow Apologies for the late reply. Thank you for sharing the workaround. I have tried it and confirmed that it works. I hope they can find a proper solution for this issue.
    • rvreugde

      XOA vulnerability to "copy fail" and "dirty frag" bug

      XCP-ng
      0 Votes
      8 Posts
      330 Views
      R
      Quick update now that Vates has published their official advisory. First, kudos to the Vates security team for the thorough and timely response. VSA-2026-014 is well-documented and covers the full picture, including a third CVE I had not covered in my earlier posts.
      VSA-2026-014 confirms what I outlined above: XCP-ng is affected by CVE-2026-43284 (XFRM-ESP) and is NOT affected by CVE-2026-43500 (no RxRPC support). The CVE I had missed: CVE-2026-46300 ("Fragnesia") also affects XCP-ng via the XFRM ESP-in-TCP subsystem. The same esp4/esp6 blacklist mitigation applies, with the same caveat @semarie raised: it will break encrypted private networks on XCP-ng.
      Now that the VSA and official mitigation guidance are public, I'm releasing the diagnostic script I built. It's Python 3.6, no external dependencies, safe to run on production dom0. It tests whether an unprivileged process can engage the esp4 engine via the XFRM interface inside a user namespace, without touching any exploit code. Since both CVE-2026-43284 and CVE-2026-46300 (Fragnesia) require esp4 or esp6 to be reachable from an unprivileged namespace, and share the same mitigation, a positive result confirms exposure to both. Blacklist esp4/esp6, then run the script again; ACCESS DENIED means both CVEs are mitigated.
      One important note before running it: please read the code before executing it on any of your systems. This is good practice with any script from the internet, regardless of the source. The code is intentionally short and straightforward, so you can review it quickly and satisfy yourself that it does exactly what it says.
      VSA-2026-014: https://docs.vates.tech/security/advisories/2026/vates-sa-2026-014/
      Diagnostic tool: https://github.com/grabesec/XCP_ng_CVE-2026-43284_tester
      A kernel patch from Vates is in progress. Apply it as soon as it lands.
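The blacklist step mentioned above can be sketched with the standard modprobe.d blacklist mechanism. The file name is my choice, and the scratch-directory default for TARGET is there so the sketch can be exercised safely; on a real dom0 you would run it as root with TARGET=/etc/modprobe.d. This is an illustration, not Vates' official mitigation procedure, and remember the caveat from the thread: it breaks encrypted private networks on XCP-ng.

```shell
#!/bin/sh
# Sketch of the esp4/esp6 blacklist mitigation via /etc/modprobe.d.
# TARGET defaults to a scratch directory so the sketch is safe to run
# anywhere; set TARGET=/etc/modprobe.d (as root) on a real host.
TARGET="${TARGET:-./modprobe.d-sketch}"
mkdir -p "$TARGET"

cat > "$TARGET/blacklist-esp.conf" <<'EOF'
# Mitigation for CVE-2026-43284 / CVE-2026-46300 (see VSA-2026-014)
blacklist esp4
blacklist esp6
EOF

echo "wrote $TARGET/blacklist-esp.conf"
# Blacklisting only prevents automatic loading; if esp4/esp6 are already
# loaded, check with `lsmod | grep -E 'esp[46]'` and plan a reboot.
```

After writing the file and rebooting, re-running the diagnostic script should report ACCESS DENIED if the mitigation took effect.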
    • R

      Date format on web interface: Only US format available?

      Compute
      1 Votes
      8 Posts
      320 Views
      R
      @julienXOvates Excellent, thanks for looking at this Julien! Rob
    • C

      XOSTOR appears to be broken on the new XCP-NG May 2026 update

      XOSTOR
      0 Votes
      8 Posts
      424 Views
      G
      @dthenot said:
      @ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts:
      yum update sm sm-fairlock
      Then you should be able to re-plug the SR on the master and proceed with the RPU.
      Hello, I had the same problem, and the command resolved the issue. It needs to be run on every host. Everything is working fine again. However, I had to complete the pool update manually.
    • JamfoFL

      Xen Orchestra has stopped updating commits

      Xen Orchestra
      0 Votes
      34 Posts
      10k Views
      florent
      @ducatijosh did you do a yarn build?
    • M

      CR backup with retention > 4

      Backup
      0 Votes
      8 Posts
      469 Views
      P
      @McHenry I think it depends on whether the copy is a fast clone (which depends on the full chain length, with 13 points behind) or a full copy of its own, which will be independent.