XCP-ng
    • stormi

      XCP-ng 8.3 updates announcements and testing

      News
      1 Votes
      495 Posts
      207k Views
      olivierlambert
      Congrats to everyone on those, it was a huge amount of work and testing, and, as usual, great community feedback!
    • olivierlambert

      🛰️ XO 6: dedicated thread for all your feedback!

      Xen Orchestra
      7 Votes
      216 Posts
      27k Views
      acebmxer
      @julienXOvates OK, that was in XO from sources, and yes, the issue is fixed... Confirmed it is working in XOA 6.4.1 [image: 1777984456488-screenshot-2026-05-05-083337.png]
    • M

      Too many snapshots

      Backup
      0 Votes
      44 Posts
      2k Views
      M
      @Pilow I did check this and it definitely completes within the hour. I am testing a lower value for CR retention to see if that resolves it.
    • acebmxer

      XOA - Memory Usage

      Xen Orchestra
      0 Votes
      43 Posts
      2k Views
      florent
      @acebmxer said: @florent I didn't set up SSH on the work XOA; I just set the password, but I need to reboot it for that to take effect. The tunnel is still open if you don't mind doing it; otherwise I will need to reboot XOA to get in myself. I can do it on Monday.
    • acebmxer

      Backups with qcow2 enabled

      Backup
      0 Votes
      28 Posts
      989 Views
      acebmxer
      So the scheduled backup at 1pm this afternoon ran with no issue. As stated previously, the coalesces are working and now show 2 for the VM.
      edit -
      Apr 21 11:22:55 xo-ce xo-server[15617]: },
      Apr 21 11:22:55 xo-ce xo-server[15617]: summary: { duration: '6m', cpuUsage: '4%', memoryUsage: '29.01 MiB' }
      Apr 21 11:22:55 xo-ce xo-server[15617]: }
      Apr 21 13:00:00 xo-ce xo-server[16924]: 2026-04-21T17:00:00.320Z xo:backups:worker INFO starting backup
      Apr 21 13:00:00 xo-ce sudo[16938]: xo-service : PWD=/opt/xen-orchestra/packages/xo-server ; USER=root ; COMMAND=/usr/bin/mount -o -t nf>
      Apr 21 13:00:00 xo-ce sudo[16938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=996)
      Apr 21 13:00:00 xo-ce sudo[16938]: pam_unix(sudo:session): session closed for user root
      Apr 21 13:00:06 xo-ce xo-server[16924]: 2026-04-21T17:00:06.941Z xo:backups:MixinBackupWriter INFO deleting unused VHD {
      Apr 21 13:00:06 xo-ce xo-server[16924]: path: '/xo-vm-backups/138538a8-ef52-4d0a-4433-5ebb31d7e152/vdis/9f7daac4-80a1-41f9-8af0-99b6fe>
      Apr 21 13:00:06 xo-ce xo-server[16924]: }
      Apr 21 13:00:19 xo-ce xo-server[16924]: 2026-04-21T17:00:19.536Z xo:backups:MixinBackupWriter INFO deleting unused VHD {
      Apr 21 13:00:19 xo-ce xo-server[16924]: path: '/xo-vm-backups/6d733582-0728-b67c-084b-56abe6047bfc/vdis/9f7daac4-80a1-41f9-8af0-99b6fe>
      Apr 21 13:00:19 xo-ce xo-server[16924]: }
      Apr 21 13:01:23 xo-ce xo-server[16924]: 2026-04-21T17:01:23.971Z xo:backups:worker INFO backup has ended
      Apr 21 13:01:23 xo-ce sudo[16971]: xo-service : PWD=/opt/xen-orchestra/packages/xo-server ; USER=root ; COMMAND=/usr/bin/umount /run/xo->
      Apr 21 13:01:23 xo-ce sudo[16971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=996)
      Apr 21 13:01:23 xo-ce sudo[16971]: pam_unix(sudo:session): session closed for user root
      Apr 21 13:01:24 xo-ce xo-server[16924]: 2026-04-21T17:01:24.029Z xo:backups:worker INFO process will exit {
      Apr 21 13:01:24 xo-ce xo-server[16924]: duration: 83708608,
      Apr 21 13:01:24 xo-ce xo-server[16924]: exitCode: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: resourceUsage: {
      Apr 21 13:01:24 xo-ce xo-server[16924]: userCPUTime: 31147905,
      Apr 21 13:01:24 xo-ce xo-server[16924]: systemCPUTime: 14319055,
      Apr 21 13:01:24 xo-ce xo-server[16924]: maxRSS: 65888,
      Apr 21 13:01:24 xo-ce xo-server[16924]: sharedMemorySize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: unsharedDataSize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: unsharedStackSize: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: minorPageFault: 367979,
      Apr 21 13:01:24 xo-ce xo-server[16924]: majorPageFault: 1,
      Apr 21 13:01:24 xo-ce xo-server[16924]: swappedOut: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: fsRead: 5648,
      Apr 21 13:01:24 xo-ce xo-server[16924]: fsWrite: 16101272,
      Apr 21 13:01:24 xo-ce xo-server[16924]: ipcSent: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: ipcReceived: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: signalsCount: 0,
      Apr 21 13:01:24 xo-ce xo-server[16924]: voluntaryContextSwitches: 65983,
      Apr 21 13:01:24 xo-ce xo-server[16924]: involuntaryContextSwitches: 7745
      Apr 21 13:01:24 xo-ce xo-server[16924]: },
      Apr 21 13:01:24 xo-ce xo-server[16924]: summary: { duration: '1m', cpuUsage: '54%', memoryUsage: '64.34 MiB' }
      Apr 21 13:01:24 xo-ce xo-server[16924]: }
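A quick way to pull only the backup-worker lines out of a journal dump like the one above is a plain grep on the logger tag. This is a minimal sketch; the sample lines are paraphrased from the excerpt and the tag name is taken from it, not verified against any particular XO install:

```shell
# Keep only the xo:backups:worker lines from a saved journal excerpt.
printf '%s\n' \
  'Apr 21 13:00:00 xo-ce xo-server[16924]: xo:backups:worker INFO starting backup' \
  'Apr 21 13:00:00 xo-ce sudo[16938]: pam_unix(sudo:session): session opened for user root' \
  'Apr 21 13:01:23 xo-ce xo-server[16924]: xo:backups:worker INFO backup has ended' \
  | grep 'xo:backups:worker'
```

The same filter works on a live system by piping the journal into it instead of the hard-coded sample lines.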
    • P

      clean-vm (end) is stalling?

      Backup
      0 Votes
      15 Posts
      483 Views
      simonp
      @Pilow Thanks for the heads-up. You should be able to set concurrency back to its previous value and get performance similar to before the refactoring.
    • C

      Error while scanning disk

      Backup
      0 Votes
      14 Posts
      465 Views
      poddingue
      Thanks, Florent, for the explanation.
    • jerry1333

      CPU Usage of empty server

      XCP-ng
      0 Votes
      14 Posts
      394 Views
      P
      @jerry1333 said: There is nothing else on that host and this is the only host in the pool, but it's using 30% of CPU all the time? It's not using 30% of CPU; you are looking at a graph of cumulated core consumption (the switch is on) across your 32 cores. Never switch this on: it adds up like 32 x 1% = 32%, wrongly suggesting you are at roughly 30% CPU usage.
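The arithmetic behind that cumulated view can be sketched in a few lines of shell. The 32-core and 1%-per-core figures are taken from the post above, not measured:

```shell
# Illustrative numbers from the post: 32 cores, each about 1% busy.
cores=32
per_core_pct=1
cumulated=$((cores * per_core_pct))
echo "cumulated graph shows: ${cumulated}%"    # reads like a loaded host
echo "actual per-core load:  ${per_core_pct}%" # the host is nearly idle
```

In other words, the cumulated switch sums per-core percentages, so an almost idle 32-core host can look like it is running at 30%+ load.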
    • stormi

      Second (and final) Release Candidate for QCOW2 image format support

      News
      5 Votes
      12 Posts
      1k Views
      bogikornel
      @stormi XCP-ng QCOW2 vs. VHD performance feedback on NVMe
      First of all, I would like to thank the team for all the hard work in bringing QCOW2 support to a production-ready state. It is a very welcome feature. I have performed some quick I/O benchmarks comparing the new QCOW2 format against the traditional VHD. In my tests, QCOW2 appears significantly slower than VHD on my hardware.
      Test environment:
      Hypervisor: Dell PowerEdge R420
      CPU: Intel Xeon E5-2470 v2
      Storage: Intel SSDPELKX010T8 NVMe
      VM OS: Debian 13
      VM specs: 2 vCPUs, 1 GB RAM
      Setup: one 10 GB VHD and one 10 GB QCOW2 disk, both pre-filled from /dev/random.
      Methodology: I used a custom test suite available here: https://vm01.unsoft.hu/~ventura/fio/fio_test_20250408.tar.gz
      [image: 1778009249525-vhd_bandwidth_summary.png] [image: 1778009256705-vhd_latency_summary.png] [image: 1778009281033-qcow2_bandwidth_summary.png] [image: 1778009286521-qcow2_latency_summary.png]
      I also ran a simple fio loop with the following results:
      VHD:
      root@Debian-13-CloudInit-20250810:/mnt/vhd# for mode in read write; do for jobs in 1 16; do for bs in 4 64; do for t in "" rand; do printf "%2i qd %2ik % 4s " $jobs $bs $t; fio --name=random-write --rw=$t$mode --bs=${bs}k --numjobs=1 --size=1g --iodepth=$jobs --runtime=10 --time_based --direct=1 --ioengine=libaio|grep -e BW -e runt ; done; done; done; done
      1 qd 4k read: IOPS=9625, BW=37.6MiB/s (39.4MB/s)(376MiB/10001msec)
      1 qd 4k rand read: IOPS=5414, BW=21.2MiB/s (22.2MB/s)(212MiB/10001msec)
      1 qd 64k read: IOPS=2657, BW=166MiB/s (174MB/s)(1661MiB/10001msec)
      1 qd 64k rand read: IOPS=2575, BW=161MiB/s (169MB/s)(1610MiB/10001msec)
      16 qd 4k read: IOPS=45.7k, BW=178MiB/s (187MB/s)(1785MiB/10001msec)
      16 qd 4k rand read: IOPS=45.9k, BW=179MiB/s (188MB/s)(1794MiB/10001msec)
      16 qd 64k read: IOPS=16.7k, BW=1041MiB/s (1092MB/s)(10.2GiB/10001msec)
      16 qd 64k rand read: IOPS=16.7k, BW=1042MiB/s (1093MB/s)(10.2GiB/10001msec)
      1 qd 4k write: IOPS=8842, BW=34.5MiB/s (36.2MB/s)(345MiB/10001msec); 0 zone resets
      1 qd 4k rand write: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(347MiB/10001msec); 0 zone resets
      1 qd 64k write: IOPS=6095, BW=381MiB/s (399MB/s)(3810MiB/10001msec); 0 zone resets
      1 qd 64k rand write: IOPS=6006, BW=375MiB/s (394MB/s)(3755MiB/10001msec); 0 zone resets
      16 qd 4k write: IOPS=49.3k, BW=193MiB/s (202MB/s)(1928MiB/10001msec); 0 zone resets
      16 qd 4k rand write: IOPS=47.3k, BW=185MiB/s (194MB/s)(1848MiB/10001msec); 0 zone resets
      16 qd 64k write: IOPS=14.3k, BW=891MiB/s (934MB/s)(8910MiB/10001msec); 0 zone resets
      16 qd 64k rand write: IOPS=15.5k, BW=966MiB/s (1013MB/s)(9663MiB/10001msec); 0 zone resets
      QCOW2:
      root@Debian-13-CloudInit-20250810:/mnt/qcow2# for mode in read write; do for jobs in 1 16; do for bs in 4 64; do for t in "" rand; do printf "%2i qd %2ik % 4s " $jobs $bs $t; fio --name=random-write --rw=$t$mode --bs=${bs}k --numjobs=1 --size=1g --iodepth=$jobs --runtime=10 --time_based --direct=1 --ioengine=libaio|grep -e BW -e runt ; done; done; done; done
      1 qd 4k read: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(229MiB/10001msec)
      1 qd 4k rand read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(156MiB/10001msec)
      1 qd 64k read: IOPS=2229, BW=139MiB/s (146MB/s)(1394MiB/10001msec)
      1 qd 64k rand read: IOPS=2161, BW=135MiB/s (142MB/s)(1351MiB/10001msec)
      16 qd 4k read: IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)(662MiB/10001msec)
      16 qd 4k rand read: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(688MiB/10001msec)
      16 qd 64k read: IOPS=7244, BW=453MiB/s (475MB/s)(4529MiB/10002msec)
      16 qd 64k rand read: IOPS=6994, BW=437MiB/s (458MB/s)(4372MiB/10002msec)
      1 qd 4k write: IOPS=5551, BW=21.7MiB/s (22.7MB/s)(217MiB/10001msec); 0 zone resets
      1 qd 4k rand write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(202MiB/10001msec); 0 zone resets
      1 qd 64k write: IOPS=4024, BW=252MiB/s (264MB/s)(2515MiB/10001msec); 0 zone resets
      1 qd 64k rand write: IOPS=4027, BW=252MiB/s (264MB/s)(2517MiB/10001msec); 0 zone resets
      16 qd 4k write: IOPS=14.5k, BW=56.8MiB/s (59.6MB/s)(568MiB/10002msec); 0 zone resets
      16 qd 4k rand write: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(547MiB/10001msec); 0 zone resets
      16 qd 64k write: IOPS=6360, BW=398MiB/s (417MB/s)(3976MiB/10002msec); 0 zone resets
      16 qd 64k rand write: IOPS=6090, BW=381MiB/s (399MB/s)(3807MiB/10002msec); 0 zone resets
      I would be interested to know if I'm overlooking something, or if the QCOW2 format simply provides lower performance compared to VHD for the time being.
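To put the gap in one number, here is a back-of-the-envelope ratio using two of the posted qd16 4k random-read results (179 MiB/s for VHD vs 68.8 MiB/s for QCOW2). The figures are copied from the post above, not re-measured, and other block sizes and queue depths show different ratios:

```shell
# Back-of-the-envelope: QCOW2 vs VHD throughput at qd16 / 4k random read.
awk 'BEGIN {
  vhd = 179.0    # MiB/s, from the VHD run above
  qcow2 = 68.8   # MiB/s, from the QCOW2 run above
  printf "QCOW2 reaches %.0f%% of VHD throughput in this case\n", 100 * qcow2 / vhd
}'
```

At large sequential blocks the gap narrows (roughly 40-45% of VHD in the 64k runs), so the penalty is workload-dependent.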
    • F

      [SOLVED] Just FYI: current update seems to break NUT dependencies

      XCP-ng
      0 Votes
      29 Posts
      2k Views
      F
      Hi, I just wanted to confirm that the provided packages work on all my servers. Thank you!
    • J

      Building from source, now introduces local changes in typed-router.d.ts?

      Xen Orchestra
      0 Votes
      11 Posts
      199 Views
      J
      @MathieuRA I noticed you merged https://github.com/vatesfr/xen-orchestra/pull/9787 and I just tried it. It does seem to fix my original issue! Thank you! I am always impressed by you guys, making testing and reporting upstream a good experience! (PR #9787 in vatesfr/xen-orchestra, "fix(xo6): remove dev routes from prod", opened by Elise-FZI, now closed.)
    • acebmxer

      Latest commit breaks install

      Management
      0 Votes
      19 Posts
      647 Views
      acebmxer
      @gregbinsd let us know. If you do use my script, note that it pulls Node.js from NodeSource, so it may not install the latest 24.15.0 LTS. If you specify 24.15.0, it will install that version. If you need to change the Node version with my script, use the rebuild option.
    • O

      When the XCP-ng host restarts, it comes back up running directly instead of in maintenance mode

      Compute
      0 Votes
      17 Posts
      764 Views
      764 Views
      P
      Perhaps "in the context of an ongoing RPU, do not start halted VMs"? Or "boot only halted VMs that have HA enabled"? But I can imagine corner cases where this is not wanted; it is a bit of a chicken-and-egg problem.
    • A

      Lost connection to ISO Repository

      Xen Orchestra
      0 Votes
      10 Posts
      300 Views
      A
      @Pilow Apologies for the late reply. Thank you for sharing the workaround. I have tried it and confirmed that it works. I hope they can find a solution for this issue.
    • P

      xo-disk-cli on latest XOA node.js problem

      Management
      0 Votes
      10 Posts
      270 Views
      M
      @Andrew Yeah, but XOA is still using it, hence my interest in aligning my XO-CE instance with XOA as closely as possible.
    • V

      Question about pools

      XCP-ng
      0 Votes
      9 Posts
      42 Views
      D
      @vlamincktr said: @acebmxer I may just need to re-evaluate our backup strategy and adjust it so there is more time for the backups. I could also just run the daily deltas; the main issue is the weekly fulls that I run as a precaution. I'm always paranoid about something happening with the daily delta chain and having an unusable backup, so I also pull dedicated weekly full backups, which take a lot of time to run. I've also considered running the full backups on different days to spread them out more. It sounds like one of those is my best option rather than adding more cost/complexity.
      I would absolutely change this backup plan to monthly full backups (weekly full backups are overkill for most). The backup mechanism in XO has improved a ton since launch. Without more detail (types of VMs, workloads, etc.) it's really difficult for anyone to offer a perfect answer, but most people here would likely agree that weekly fulls aren't a benefit here. Changing the window on your backups is also an option, as you mentioned, but that only shifts when the work is performed, not the type of work performed. If you have a 1 TB server and you're backing it up daily with deltas and weekly with full backups, you're backing up something like 1300 GB every week (this depends, of course, on your delta data change).
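That 1300 GB figure follows from simple arithmetic. A sketch under assumed numbers: a 1 TB full image plus six daily deltas, with an illustrative 5% daily change rate (the change rate is an assumption for the example, not stated in the post):

```shell
# Illustrative weekly backup volume: one weekly full plus six daily deltas.
full_gb=1000        # 1 TB VM
daily_change_pct=5  # assumed daily data change rate
delta_gb=$((full_gb * daily_change_pct / 100))
weekly_gb=$((full_gb + 6 * delta_gb))
echo "approx ${weekly_gb} GB transferred per week"
```

Switching the full to monthly under the same assumptions drops the typical week to six deltas (about 300 GB), which is the point of the advice above.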
    • B

      New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client

      News
      9 Votes
      41 Posts
      6k Views
      Tristis Oris
      @benapetr said in New project - XenAdminQt - a cross-platform GNU/Linux, macOS, Windows native thick client: you can switch between them. Nope, those are just 2 of multiple NICs; you need to press "add ip address" to add the other NICs. Now I see how it works.
    • M

      CR backup with retention > 4

      Backup
      0 Votes
      8 Posts
      362 Views
      P
      @McHenry I think it depends on whether the copy is a fast clone (dependent on the full chain, with 13 points behind it) or a full copy of its own that will be independent.
    • K

      Question about Continuous Replication / Backups always doing Full Backups

      Backup
      0 Votes
      16 Posts
      608 Views
      K
      @tsukraw No worries! Just glad that we can all help each other out!
    • F

      Xen Orchestra 6.3.2 Random Replication Failure

      Backup
      0 Votes
      8 Posts
      302 Views
      florent
      @flakpyro That's good news (and at least one other user saw this). We are currently testing the branch to make sure the fix doesn't create other issues.