<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Popular Topics]]></title><description><![CDATA[A list of topics that are sorted by post count]]></description><link>https://xcp-ng.org/forum/popular/alltime</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 04:51:25 GMT</lastBuildDate><atom:link href="https://xcp-ng.org/forum/popular/alltime.rss" rel="self" type="application/rss+xml"/><pubDate>Thu, 22 Jun 2023 16:23:50 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[XCP-ng 8.3 betas and RCs feedback 🚀]]></title><description><![CDATA[This is the end for this nice and useful thread, as XCP-ng 8.3 is no longer a beta nor an RC: it's a supported release now.
However, we still need your feedback, as we publish update candidates ahead of their official release, for users to test them.
Right now, there's a security update candidate waiting to be tested.
I strongly invite everyone currently subscribed to this thread to subscribe to the new, dedicated thread: XCP-ng 8.3 updates announcements and testing, and to verify that their settings allow notification e-mails and/or other forms of notification.
]]></description><link>https://xcp-ng.org/forum/topic/7464/xcp-ng-8-3-betas-and-rcs-feedback</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/7464/xcp-ng-8-3-betas-and-rcs-feedback</guid><dc:creator><![CDATA[stormi]]></dc:creator><pubDate>Thu, 22 Jun 2023 16:23:50 GMT</pubDate></item><item><title><![CDATA[XCP-ng 8.2 updates announcements and testing]]></title><description><![CDATA[XCP-ng 8.2 has just reached its end of life, but the adventure continues with XCP-ng 8.3 (and other versions to come). You can read our announcement about this on our blog: https://xcp-ng.org/blog/2025/09/16/xcp-ng-8-2-lts-reached-its-end-of-life/
To continue benefiting from updates and developments, we invite you, if you haven't already done so, to upgrade your systems to XCP-ng 8.3.
If you want to participate in early testing of the updates, the dedicated thread has been around for quite some time: https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing/
]]></description><link>https://xcp-ng.org/forum/topic/365/xcp-ng-8-2-updates-announcements-and-testing</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/365/xcp-ng-8-2-updates-announcements-and-testing</guid><dc:creator><![CDATA[gduperrey]]></dc:creator><pubDate>Thu, 30 Aug 2018 11:50:38 GMT</pubDate></item><item><title><![CDATA[XOSTOR hyperconvergence preview]]></title><description><![CDATA[@ronan-a we need to have CBT to use 3rd party backup &amp; replication solution such as Veeam.
On the other hand, XOSTOR with VHD disks might be OK using Xen Orchestra's backup &amp; replication features, but it needs more testing on our end to make sure it works well enough.
One big question is about the snapshot chain. Coming from the VMware world, where keeping snapshots may impact I/O performance, we're not very comfortable with it.
Also, we need to find a solution for low-RPO replication. On our VMware infrastructure we use Zerto DR for this, but with XOA being more like Veeam, using snapshots, low RPO will be tough to achieve (unless taking snapshots doesn't affect performance at all?).
]]></description><link>https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview</guid><dc:creator><![CDATA[snk33]]></dc:creator><pubDate>Mon, 20 Dec 2021 09:39:54 GMT</pubDate></item><item><title><![CDATA[CBT: the thread to centralize your feedback]]></title><description><![CDATA[Okay, I thought the autoscan was only for like 10 minutes or so, but hey I'm not deep down in the stack anymore 
]]></description><link>https://xcp-ng.org/forum/topic/9268/cbt-the-thread-to-centralize-your-feedback</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/9268/cbt-the-thread-to-centralize-your-feedback</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Fri, 28 Jun 2024 07:40:36 GMT</pubDate></item><item><title><![CDATA[XCP-ng 8.3 updates announcements and testing]]></title><description><![CDATA[@rzr (edit) After upgrading two main pools, I'm having CR delta backup issues. Everything was working before the XCP-ng update; now every VM fails with the same "Backup fell back to a full" error. Using XO master db9c4, but the same XO setup was working just fine before the XCP-ng update.
(edit 2) XO logs
Apr 15 22:55:40 xo1 xo-server[1409]: 2026-04-16T02:55:40.613Z xo:backups:worker INFO starting backup
Apr 15 22:55:42 xo1 xo-server[1409]: 2026-04-16T02:55:42.006Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.093Z xo:xapi:vdi INFO  OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e was already destroyed {
Apr 15 22:55:44 xo1 xo-server[1409]:   vdiRef: 'OpaqueRef:49ff18d3-5c18-176c-4930-0163c6727c2b',
Apr 15 22:55:44 xo1 xo-server[1409]:   vbdRef: 'OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e'
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.839Z xo:xapi:vdi INFO  OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82 has been disconnected from dom0 {
Apr 15 22:55:44 xo1 xo-server[1409]:   vdiRef: 'OpaqueRef:02f9ba92-1ee2-88eb-f660-a2cf3eeb287d',
Apr 15 22:55:44 xo1 xo-server[1409]:   vbdRef: 'OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82'
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.910Z xo:xapi:vm WARN _assertHealthyVdiChain, could not fetch VDI {
Apr 15 22:55:44 xo1 xo-server[1409]:   error: XapiError: UUID_INVALID(VDI, 8f233bfc-9deb-4a06-aa07-0510de7496a1)
Apr 15 22:55:44 xo1 xo-server[1409]:       at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
Apr 15 22:55:44 xo1 xo-server[1409]:       at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/transports/json-rpc.mjs:38:21
Apr 15 22:55:44 xo1 xo-server[1409]:       at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
Apr 15 22:55:44 xo1 xo-server[1409]:     code: 'UUID_INVALID',
Apr 15 22:55:44 xo1 xo-server[1409]:     params: [ 'VDI', '8f233bfc-9deb-4a06-aa07-0510de7496a1' ],
Apr 15 22:55:44 xo1 xo-server[1409]:     call: { duration: 3, method: 'VDI.get_by_uuid', params: [Array] },
Apr 15 22:55:44 xo1 xo-server[1409]:     url: undefined,
Apr 15 22:55:44 xo1 xo-server[1409]:     task: undefined
Apr 15 22:55:44 xo1 xo-server[1409]:   }
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.732Z xo:xapi:xapi-disks INFO Error in openNbdCBT XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )
Apr 15 22:55:46 xo1 xo-server[1409]:     at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
Apr 15 22:55:46 xo1 xo-server[1409]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_getTaskResult.mjs:13:29)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1078:24)
Apr 15 22:55:46 xo1 xo-server[1409]:     at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1112:14
Apr 15 22:55:46 xo1 xo-server[1409]:     at Array.forEach (&lt;anonymous&gt;)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1102:12)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1275:14)
Apr 15 22:55:46 xo1 xo-server[1409]:     at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
Apr 15 22:55:46 xo1 xo-server[1409]:   code: 'SR_BACKEND_FAILURE_460',
Apr 15 22:55:46 xo1 xo-server[1409]:   params: [
Apr 15 22:55:46 xo1 xo-server[1409]:     '',
Apr 15 22:55:46 xo1 xo-server[1409]:     'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
Apr 15 22:55:46 xo1 xo-server[1409]:     ''
Apr 15 22:55:46 xo1 xo-server[1409]:   ],
Apr 15 22:55:46 xo1 xo-server[1409]:   call: undefined,
Apr 15 22:55:46 xo1 xo-server[1409]:   url: undefined,
Apr 15 22:55:46 xo1 xo-server[1409]:   task: task {
Apr 15 22:55:46 xo1 xo-server[1409]:     uuid: '8fae41b4-de82-789c-980a-5ff2d490d2d8',
Apr 15 22:55:46 xo1 xo-server[1409]:     name_label: 'Async.VDI.list_changed_blocks',
Apr 15 22:55:46 xo1 xo-server[1409]:     name_description: '',
Apr 15 22:55:46 xo1 xo-server[1409]:     allowed_operations: [],
Apr 15 22:55:46 xo1 xo-server[1409]:     current_operations: {},
Apr 15 22:55:46 xo1 xo-server[1409]:     created: '20260416T02:55:46Z',
Apr 15 22:55:46 xo1 xo-server[1409]:     finished: '20260416T02:55:46Z',
Apr 15 22:55:46 xo1 xo-server[1409]:     status: 'failure',
Apr 15 22:55:46 xo1 xo-server[1409]:     resident_on: 'OpaqueRef:7b987b11-ada0-99ce-d831-6e589bf34b50',
Apr 15 22:55:46 xo1 xo-server[1409]:     progress: 1,
Apr 15 22:55:46 xo1 xo-server[1409]:     type: '&lt;none/&gt;',
Apr 15 22:55:46 xo1 xo-server[1409]:     result: '',
Apr 15 22:55:46 xo1 xo-server[1409]:     error_info: [
Apr 15 22:55:46 xo1 xo-server[1409]:       'SR_BACKEND_FAILURE_460',
Apr 15 22:55:46 xo1 xo-server[1409]:       '',
Apr 15 22:55:46 xo1 xo-server[1409]:       'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
Apr 15 22:55:46 xo1 xo-server[1409]:       ''
Apr 15 22:55:46 xo1 xo-server[1409]:     ],
Apr 15 22:55:46 xo1 xo-server[1409]:     other_config: {},
Apr 15 22:55:46 xo1 xo-server[1409]:     subtask_of: 'OpaqueRef:NULL',
Apr 15 22:55:46 xo1 xo-server[1409]:     subtasks: [],
Apr 15 22:55:46 xo1 xo-server[1409]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_utils.ml)(line 150))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))'
Apr 15 22:55:46 xo1 xo-server[1409]:   }
Apr 15 22:55:46 xo1 xo-server[1409]: }
Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.735Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.115Z xo:xapi:vdi WARN invalid HTTP header in response body {
Apr 15 22:55:48 xo1 xo-server[1409]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'content-length: 318\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'content-type: text/html\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'connection: close\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'cache-control: no-cache, no-store\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     '\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     '&lt;html&gt;&lt;body&gt;&lt;h1&gt;HTTP 500 internal server error&lt;/h1&gt;An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.&lt;h1&gt; Additional information &lt;/h1&gt;VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac; CBT metadata ]&lt;/body&gt;&lt;/html&gt;'
Apr 15 22:55:48 xo1 xo-server[1409]: }
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.124Z xo:xapi:xapi-disks WARN can't compute delta OpaqueRef:e7de1446-34fd-1ae8-4680-351b1e72b2dd from OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac, fallBack to a full {
Apr 15 22:55:48 xo1 xo-server[1409]:   error: Error: invalid HTTP header in response body
Apr 15 22:55:48 xo1 xo-server[1409]:       at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:37:19)
Apr 15 22:55:48 xo1 xo-server[1409]:       at process.processTicksAndRejections (node:internal/process/task_queues:104:5)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:261:5)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiVhdStreamSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #openExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:182:7)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:97:22)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiDiskSource.openSource (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:258:18)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/backups/_incrementalVm.mjs:66:5
Apr 15 22:55:48 xo1 xo-server[1409]: }
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.126Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.047Z xo:backups:worker INFO backup has ended
Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.231Z xo:backups:worker INFO process will exit {
Apr 15 22:56:24 xo1 xo-server[1409]:   duration: 43618102,
Apr 15 22:56:24 xo1 xo-server[1409]:   exitCode: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:   resourceUsage: {
Apr 15 22:56:24 xo1 xo-server[1409]:     userCPUTime: 45307253,
Apr 15 22:56:24 xo1 xo-server[1409]:     systemCPUTime: 6674413,
Apr 15 22:56:24 xo1 xo-server[1409]:     maxRSS: 30928,
Apr 15 22:56:24 xo1 xo-server[1409]:     sharedMemorySize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     unsharedDataSize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     unsharedStackSize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     minorPageFault: 287968,
Apr 15 22:56:24 xo1 xo-server[1409]:     majorPageFault: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     swappedOut: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     fsRead: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     fsWrite: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     ipcSent: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     ipcReceived: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     signalsCount: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     voluntaryContextSwitches: 14665,
Apr 15 22:56:24 xo1 xo-server[1409]:     involuntaryContextSwitches: 962
Apr 15 22:56:24 xo1 xo-server[1409]:   },
Apr 15 22:56:24 xo1 xo-server[1409]:   summary: { duration: '44s', cpuUsage: '119%', memoryUsage: '30.2 MiB' }
Apr 15 22:56:24 xo1 xo-server[1409]: }

]]></description><link>https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing</guid><dc:creator><![CDATA[Andrew]]></dc:creator><pubDate>Wed, 13 Nov 2024 15:44:18 GMT</pubDate></item><item><title><![CDATA[VMware migration tool: we need your feedback!]]></title><description><![CDATA[On VMware you would also need vCenter for this kind of feature. And as you can easily deploy an empty XOA, why would this be an issue?
]]></description><link>https://xcp-ng.org/forum/topic/6714/vmware-migration-tool-we-need-your-feedback</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/6714/vmware-migration-tool-we-need-your-feedback</guid><dc:creator><![CDATA[rtjdamen]]></dc:creator><pubDate>Mon, 19 Dec 2022 09:04:37 GMT</pubDate></item><item><title><![CDATA[Epyc VM to VM networking slow]]></title><description><![CDATA[@Maelstrom96 said in Epyc VM to VM networking slow:

What is the exact kernel patch that is required for the xen-platform-pci-bar-uc=false fix to work on a Linux guest?  We're looking at potentially compiling our own kernel with the xen-netfront.c patch, and we would like to see about adding the other part of the Kernel code needed for the Grant table fix.

The patch has been in Linux since 5.19-rc. You can also find it in some stable branches like 5.15.
Otherwise, you can check this patch https://lore.kernel.org/xen-devel/ea4945df138527ed63e711cb77e3b333f7b3a4c9.1751633056.git.teddy.astie@vates.tech/
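For guests on distro kernels, a quick sanity check is comparing the running kernel version against 5.19, where the fix landed upstream; a minimal sketch (the `version_ge` helper is mine, assumes GNU `sort -V`; stable-branch backports like 5.15.y still need checking against your distro's changelog):

```shell
#!/bin/sh
# version_ge A B: succeeds when version A >= B (GNU sort -V does the ordering)
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Strip the distro suffix, e.g. "6.1.0-18-amd64" -> "6.1.0"
kver=$(uname -r | cut -d- -f1)
if version_ge "$kver" "5.19"; then
    echo "kernel $kver: upstream fix should be included"
else
    echo "kernel $kver: predates 5.19, check for stable backports (e.g. 5.15.y)"
fi
```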
]]></description><link>https://xcp-ng.org/forum/topic/7815/epyc-vm-to-vm-networking-slow</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/7815/epyc-vm-to-vm-networking-slow</guid><dc:creator><![CDATA[TeddyAstie]]></dc:creator><pubDate>Tue, 03 Oct 2023 12:14:53 GMT</pubDate></item><item><title><![CDATA[XCP-ng 8.3 public alpha 🚀]]></title><description><![CDATA[We just released XCP-ng 8.3 beta 1 !
I opened a new thread for us to discuss it and for you to provide feedback: https://xcp-ng.org/forum/topic/7464/xcp-ng-8-3-beta
Thanks for all the feedback already provided here, and see you on this new thread!
In order not to miss anything (and, let's be honest, for me to be sure that messages on the new thread reach you all), the best course of action is: open the new thread right now and use the "watch" button.
[image: 1687457779384-53fac025-6e0c-465b-97ab-5ca73a97bd93-image.png]
And let's answer this common and legitimate question: how to upgrade from alpha to beta? Well, there's nothing to do, just update as usual. In fact, you might already be in beta state. However, as indicated in the blog post, we need a lot of testing of the installer, so it's also an option to start from the installation ISO again.
]]></description><link>https://xcp-ng.org/forum/topic/6578/xcp-ng-8-3-public-alpha</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/6578/xcp-ng-8-3-public-alpha</guid><dc:creator><![CDATA[stormi]]></dc:creator><pubDate>Thu, 17 Nov 2022 14:05:47 GMT</pubDate></item><item><title><![CDATA[Alert: Control Domain Memory Usage]]></title><description><![CDATA[Its not solving it, but you can run
echo 3 &gt;  /proc/sys/vm/drop_caches
to release some of the cache again, without interfering with running processes.
[root@host2 ~]# free -m
total        used        free      shared  buff/cache   available
Mem:          15958        3308         158           8       12491        2355
Swap:          1023         177         846
[root@host2 ~]# echo 3 &gt;  /proc/sys/vm/drop_caches
[root@host2 ~]# free -m
total        used        free      shared  buff/cache   available
Mem:          15958        3308        2598          10       10051        2751
Swap:          1023         177         846
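If you want to check those numbers in a script rather than eyeballing free -m, the same values come from /proc/meminfo; a minimal sketch (the `meminfo_mib` helper is mine; it's shown against sample data with values similar to the output above, point it at the real /proc/meminfo on a host):

```shell
#!/bin/sh
# meminfo_mib KEY [FILE]: print a /proc/meminfo value, converted from kB to MiB
meminfo_mib() {
    awk -v key="$1:" '$1 == key { print int($2 / 1024) }' "${2:-/proc/meminfo}"
}

# Sample data for illustration; drop the FILE argument on a real host
cat > /tmp/meminfo.sample <<'EOF'
MemTotal:       16341992 kB
MemFree:          162304 kB
Cached:         12435456 kB
EOF

meminfo_mib MemTotal /tmp/meminfo.sample   # -> 15958
meminfo_mib Cached   /tmp/meminfo.sample   # -> 12144
```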
]]></description><link>https://xcp-ng.org/forum/topic/2507/alert-control-domain-memory-usage</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/2507/alert-control-domain-memory-usage</guid><dc:creator><![CDATA[frankz]]></dc:creator><pubDate>Sat, 25 Jan 2020 16:58:58 GMT</pubDate></item><item><title><![CDATA[Introduce yourself!]]></title><description><![CDATA[@john-c said in Introduce yourself!:

@TS79 said in Introduce yourself!:

Hi. I'm a cloud solutions architect, with around 25 years of working experience in servers, storage, networking (your typical infrastructure stuff) and about 20 years of virtualisation. I started up a homelab many years ago, and through (too) many evolutions, I've ended up with Lenovo M710q mini PCs running XCP-ng, with another mini PC providing NFS storage (with backup and replication to cater for problems and failures).
Absolutely love XCP-ng and am promoting it wherever I can. I've architected and kicked off a project at my employer to replace VMware with XCP-ng, so I'm keen to use the forum to read other people's real-world experiences with storage and host specs, hurdles to avoid, and any tips &amp; tricks.
Looking forward to interacting with the community more and more.

When checking out Xen Orchestra, make sure to look at both the Host Maintenance Mode and the SR Maintenance Mode. I came up with the idea for the SR Maintenance Mode during the Covid-19 lockdown in the UK. The Vates staff developed and implemented the idea; I pitched it as a useful tool for large infrastructures.
The reason being that pools (especially large ones) can have multiple shared storage implemented as SRs. SR maintenance mode permits some SRs to be put into maintenance when their backing bare-metal hardware is being serviced, while keeping the others active. So you're less likely to need to put a whole host into maintenance mode, which benefits pools using HA and increases uptime further. Once the VMs have been migrated to another SR, they aren't affected, which helps reduce their downtime.

This is a great feature, but I haven't used it. How does it work?
Is it something like:

You put the SR in maintenance mode
VM's are migrated to another shared SR
Notification about SR maintenance completed?

]]></description><link>https://xcp-ng.org/forum/topic/2/introduce-yourself</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/2/introduce-yourself</guid><dc:creator><![CDATA[nikade]]></dc:creator><pubDate>Sat, 07 Apr 2018 16:44:58 GMT</pubDate></item><item><title><![CDATA[🛰️ XO 6: dedicated thread for all your feedback!]]></title><description><![CDATA[Let me ping @Team-XO-Frontend
]]></description><link>https://xcp-ng.org/forum/topic/11604/xo-6-dedicated-thread-for-all-your-feedback</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/11604/xo-6-dedicated-thread-for-all-your-feedback</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Mon, 24 Nov 2025 12:39:08 GMT</pubDate></item><item><title><![CDATA[New Rust Xen guest tools]]></title><description><![CDATA[@yann Item Opened on Gitlab.
]]></description><link>https://xcp-ng.org/forum/topic/7974/new-rust-xen-guest-tools</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/7974/new-rust-xen-guest-tools</guid><dc:creator><![CDATA[Andrew]]></dc:creator><pubDate>Sat, 18 Nov 2023 10:40:08 GMT</pubDate></item><item><title><![CDATA[Netdata package is now available in XCP-ng]]></title><description><![CDATA[@grapesmc at one point I had the idea to set up an XCP-ng build environment and build Netdata in there, then simply copy it over to the XCP-ng hosts. Unfortunately I haven't been able to dedicate the time to this so far.
]]></description><link>https://xcp-ng.org/forum/topic/2288/netdata-package-is-now-available-in-xcp-ng</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/2288/netdata-package-is-now-available-in-xcp-ng</guid><dc:creator><![CDATA[Forza]]></dc:creator><pubDate>Fri, 29 Nov 2019 16:33:41 GMT</pubDate></item><item><title><![CDATA[Our future backup code: test it!]]></title><description><![CDATA[I created a new CR job for another VM and it worked. However, it didn't work with the XO VM.
So maybe the root cause of the problem is that the old CR copies have disappeared. Maybe they still exist, but I can't find them?
]]></description><link>https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it</guid><dc:creator><![CDATA[Tristis Oris]]></dc:creator><pubDate>Wed, 26 Mar 2025 19:00:12 GMT</pubDate></item><item><title><![CDATA[XCP-ng 8.0.0 Beta now available!]]></title><description><![CDATA[When you boot in UEFI mode, press "e" to edit the boot command line; there's a line where you can change the memory for dom0.
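For reference, the value in question is Xen's dom0_mem option on that boot line; a sketch of the relevant fragment (illustrative values and layout; the exact entry depends on the installer version):

```
# Illustrative Xen boot entry fragment: dom0_mem sets the control domain's
# memory, here 4 GiB both initial and maximum.
multiboot2 /boot/xen.gz dom0_mem=4096M,max:4096M console=vga
```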
]]></description><link>https://xcp-ng.org/forum/topic/1370/xcp-ng-8-0-0-beta-now-available</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/1370/xcp-ng-8-0-0-beta-now-available</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Wed, 22 May 2019 09:08:04 GMT</pubDate></item><item><title><![CDATA[backup mail report says INTERRUPTED but it&#x27;s not ?]]></title><description><![CDATA[I updated to branch "mra-fix-rest-memory-leak".
I will look at backup job results tomorrow and report back.
]]></description><link>https://xcp-ng.org/forum/topic/11721/backup-mail-report-says-interrupted-but-it-s-not</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/11721/backup-mail-report-says-interrupted-but-it-s-not</guid><dc:creator><![CDATA[MajorP93]]></dc:creator><pubDate>Fri, 26 Dec 2025 09:20:17 GMT</pubDate></item><item><title><![CDATA[Non-server CPU compatibility - Ryzen and Intel]]></title><description><![CDATA[Hi, I don't know if my question can be part of this thread. I apologize in advance...
I bought a new system based on "AMD Ryzen 9 9950X" installed on "ASUS PRIME B850-PLUS-CSM".
My target is to install two VM:

Linux-based Ubuntu;
Windows 11

In a possible future, I would like to install a graphics card for Windows 11 CAD applications...
I would like to ask if XCP-ng can run this environment...
Thank you
Claudio
]]></description><link>https://xcp-ng.org/forum/topic/6896/non-server-cpu-compatibility-ryzen-and-intel</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/6896/non-server-cpu-compatibility-ryzen-and-intel</guid><dc:creator><![CDATA[lonejack]]></dc:creator><pubDate>Mon, 30 Jan 2023 16:16:12 GMT</pubDate></item><item><title><![CDATA[Nested Virtualization of Windows Hyper-V on XCP-ng]]></title><description><![CDATA[@stormi said in Nested Virtualization of Windows Hyper-V on XCP-ng:

Actually, Xen never officially supported Nested Virtualization. It was experimental, and broke when other needed changes were made to Xen. Now there's work to be done to make it fully supported, and this won't happen before the final release of XCP-ng 8.3. This will be documented in the release notes.
This is also an issue for us internally as we create a lot of virtual pools for our tests.

I read through a lot of the earlier posts and finally started scrolling to find this, which is the answer I was looking for. Why do I care? There is a Microsoft evaluation learning lab for things like Intune that runs in Hyper-V, basically a bunch of VHD(X) files that get spawned as needed, for applications I need to teach myself. I'm running XCP-ng 8.3 with current updates for this lab.
If it doesn't happen, then I'll just need to throw an eval version of Windows Server on something else like an HP T740 to run these labs, not the biggest issue for me.
Link for the labs if anyone is curious (free with an email registration like all the evals):
https://www.microsoft.com/en-us/evalcenter/evaluate-mem-evaluation-lab-kit
I'd think direct Docker support would be a higher priority than nested virtualization with a focus on Hyper-V. But that's just me.
]]></description><link>https://xcp-ng.org/forum/topic/4643/nested-virtualization-of-windows-hyper-v-on-xcp-ng</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/4643/nested-virtualization-of-windows-hyper-v-on-xcp-ng</guid><dc:creator><![CDATA[Greg_E]]></dc:creator><pubDate>Sat, 29 May 2021 11:53:51 GMT</pubDate></item><item><title><![CDATA[XOA Error when installing]]></title><description><![CDATA[I did the manual installation via the XVA. It worked for me.
]]></description><link>https://xcp-ng.org/forum/topic/2680/xoa-error-when-installing</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/2680/xoa-error-when-installing</guid><dc:creator><![CDATA[Gerhard Roediger]]></dc:creator><pubDate>Sun, 01 Mar 2020 06:45:57 GMT</pubDate></item><item><title><![CDATA[Nvidia Quadro P400 not working on Ubuntu server via GPU&#x2F;PCIe passthrough]]></title><description><![CDATA[I'm having a similar issue with an A400 on XCP-ng 8.3.
The proprietary driver fails with the following message when running nvidia-smi:
NVRM: GPU 0000:00:05.0: RmInitAdapter failed! (0x24:0x72:1568)
[   44.619030] NVRM: GPU 0000:00:05.0: rm_init_adapter failed, device minor number 0
[   45.095040] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
[   45.144703] nvidia-uvm: Loaded the UVM driver, major device number 241.
The system is actually loading the driver:
[    6.026970] xen: --&gt; pirq=88 -&gt; irq=36 (gsi=36)
[    6.027485] nvidia 0000:00:05.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
[    6.029010] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  550.144.03  Mon Dec 30 17:44:08 UTC 2024
[    6.063945] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  550.144.03  Mon Dec 30 17:10:10 UTC 2024
[    6.118261] [drm] [nvidia-drm] [GPU ID 0x00000005] Loading driver
[    6.118265] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:05.0 on minor 1
xl pci-assignable-list gives:
0000:43:00.0
0000:43:00.1
and the GPU is assigned as passthrough, but when listing the test VM I have an empty list of devices:
[23:06 epycrep ~]# xl pci-list Avideo-nvidia
[23:35 epycrep ~]#
Not sure if I want to try more before switching the GPU to something else. Any hint where to look?
The server is a Gigabyte G292-Z20 with an EPYC 7402P and a single GPU for testing. IOMMU is enabled.
]]></description><link>https://xcp-ng.org/forum/topic/5072/nvidia-quadro-p400-not-working-on-ubuntu-server-via-gpu-pcie-passthrough</link><guid isPermaLink="true">https://xcp-ng.org/forum/topic/5072/nvidia-quadro-p400-not-working-on-ubuntu-server-via-gpu-pcie-passthrough</guid><dc:creator><![CDATA[bajtec]]></dc:creator><pubDate>Sun, 17 Oct 2021 09:32:27 GMT</pubDate></item></channel></rss>