XCP-ng 8.3 updates announcements and testing
-
@rzr Installed on test machines with some warnings:
Updating : blktap-3.55.5-6.4.xcpng8.3.x86_64   9/87
cat: /usr/lib/udev/rules.d/65-md-incremental.rules: No such file or directory
warning: %triggerin(blktap-3.55.5-6.4.xcpng8.3.x86_64) scriptlet failed, exit status 1
Non-fatal <unknown> scriptlet failure in rpm package blktap-3.55.5-6.4.xcpng8.3.x86_64
Updating : sm-fairlock-3.2.12-17.2.xcpng8.3.x86_64   32/87
Warning: fairlock@devicemapper.service changed on disk. Run 'systemctl daemon-reload' to reload units.
-
Installed on a handful of test machines; not as many as usual, as I'm being very cautious with this one for now. Everything rebooted and VMs started OK afterwards. Using VHD for everything currently.
-
@rzr Installed on test machines with some warnings:
Updating : blktap-3.55.5-6.4.xcpng8.3.x86_64   9/87
cat: /usr/lib/udev/rules.d/65-md-incremental.rules: No such file or directory
warning: %triggerin(blktap-3.55.5-6.4.xcpng8.3.x86_64) scriptlet failed, exit status 1
Non-fatal <unknown> scriptlet failure in rpm package blktap-3.55.5-6.4.xcpng8.3.x86_64
Yes, this was reported under "Known issues":
On blktap update, a non-blocking error is reported; the fix is ongoing and will be delivered soon.
Updating : sm-fairlock-3.2.12-17.2.xcpng8.3.x86_64   32/87
Warning: fairlock@devicemapper.service changed on disk. Run 'systemctl daemon-reload' to reload units.
I observed this too; maybe it should be documented as well. A reboot will also work.
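For reference, the non-reboot route is just the reload that the warning itself suggests. A minimal sketch, to be run on the updated host:

```shell
# Reload systemd unit definitions so the updated
# fairlock@devicemapper.service file on disk is picked up
systemctl daemon-reload

# Optionally confirm the unit is no longer flagged as changed on disk
systemctl status 'fairlock@devicemapper.service' --no-pager
```

As noted, a full host reboot achieves the same result.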
-
@rzr Always a reboot after big updates, as instructed/required.
-
Upgraded my usual pool. VM migrations during reboot worked without issues. So far everything works
-
Now installed on my test systems and all seems to be working so far.
-
Tested, but not much.
Seems fine so far. -
@rzr (edit) After upgrading two main pools, I'm having CR delta backup issues. Everything was working before the XCP-ng update; now every VM has the same error:
Backup fell back to a full. Using XO master db9c4, but the same XO setup was working just fine before the XCP-ng update. (edit 2) XO logs:
Apr 15 22:55:40 xo1 xo-server[1409]: 2026-04-16T02:55:40.613Z xo:backups:worker INFO starting backup
Apr 15 22:55:42 xo1 xo-server[1409]: 2026-04-16T02:55:42.006Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.093Z xo:xapi:vdi INFO OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e was already destroyed {
Apr 15 22:55:44 xo1 xo-server[1409]:   vdiRef: 'OpaqueRef:49ff18d3-5c18-176c-4930-0163c6727c2b',
Apr 15 22:55:44 xo1 xo-server[1409]:   vbdRef: 'OpaqueRef:07ac67ab-05cf-a066-5924-f28e15642d4e'
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.839Z xo:xapi:vdi INFO OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82 has been disconnected from dom0 {
Apr 15 22:55:44 xo1 xo-server[1409]:   vdiRef: 'OpaqueRef:02f9ba92-1ee2-88eb-f660-a2cf3eeb287d',
Apr 15 22:55:44 xo1 xo-server[1409]:   vbdRef: 'OpaqueRef:e5fa3d00-f629-6983-6ff2-841e9edacf82'
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:44 xo1 xo-server[1409]: 2026-04-16T02:55:44.910Z xo:xapi:vm WARN _assertHealthyVdiChain, could not fetch VDI {
Apr 15 22:55:44 xo1 xo-server[1409]:   error: XapiError: UUID_INVALID(VDI, 8f233bfc-9deb-4a06-aa07-0510de7496a1)
Apr 15 22:55:44 xo1 xo-server[1409]:       at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
Apr 15 22:55:44 xo1 xo-server[1409]:       at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/transports/json-rpc.mjs:38:21
Apr 15 22:55:44 xo1 xo-server[1409]:       at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
Apr 15 22:55:44 xo1 xo-server[1409]:     code: 'UUID_INVALID',
Apr 15 22:55:44 xo1 xo-server[1409]:     params: [ 'VDI', '8f233bfc-9deb-4a06-aa07-0510de7496a1' ],
Apr 15 22:55:44 xo1 xo-server[1409]:     call: { duration: 3, method: 'VDI.get_by_uuid', params: [Array] },
Apr 15 22:55:44 xo1 xo-server[1409]:     url: undefined,
Apr 15 22:55:44 xo1 xo-server[1409]:     task: undefined
Apr 15 22:55:44 xo1 xo-server[1409]:   }
Apr 15 22:55:44 xo1 xo-server[1409]: }
Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.732Z xo:xapi:xapi-disks INFO Error in openNbdCBT XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )
Apr 15 22:55:46 xo1 xo-server[1409]:     at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_XapiError.mjs:16:12)
Apr 15 22:55:46 xo1 xo-server[1409]:     at default (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/_getTaskResult.mjs:13:29)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1078:24)
Apr 15 22:55:46 xo1 xo-server[1409]:     at file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1112:14
Apr 15 22:55:46 xo1 xo-server[1409]:     at Array.forEach (<anonymous>)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1102:12)
Apr 15 22:55:46 xo1 xo-server[1409]:     at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202604151415/packages/xen-api/index.mjs:1275:14)
Apr 15 22:55:46 xo1 xo-server[1409]:     at process.processTicksAndRejections (node:internal/process/task_queues:104:5) {
Apr 15 22:55:46 xo1 xo-server[1409]:   code: 'SR_BACKEND_FAILURE_460',
Apr 15 22:55:46 xo1 xo-server[1409]:   params: [
Apr 15 22:55:46 xo1 xo-server[1409]:     '',
Apr 15 22:55:46 xo1 xo-server[1409]:     'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
Apr 15 22:55:46 xo1 xo-server[1409]:     ''
Apr 15 22:55:46 xo1 xo-server[1409]:   ],
Apr 15 22:55:46 xo1 xo-server[1409]:   call: undefined,
Apr 15 22:55:46 xo1 xo-server[1409]:   url: undefined,
Apr 15 22:55:46 xo1 xo-server[1409]:   task: task {
Apr 15 22:55:46 xo1 xo-server[1409]:     uuid: '8fae41b4-de82-789c-980a-5ff2d490d2d8',
Apr 15 22:55:46 xo1 xo-server[1409]:     name_label: 'Async.VDI.list_changed_blocks',
Apr 15 22:55:46 xo1 xo-server[1409]:     name_description: '',
Apr 15 22:55:46 xo1 xo-server[1409]:     allowed_operations: [],
Apr 15 22:55:46 xo1 xo-server[1409]:     current_operations: {},
Apr 15 22:55:46 xo1 xo-server[1409]:     created: '20260416T02:55:46Z',
Apr 15 22:55:46 xo1 xo-server[1409]:     finished: '20260416T02:55:46Z',
Apr 15 22:55:46 xo1 xo-server[1409]:     status: 'failure',
Apr 15 22:55:46 xo1 xo-server[1409]:     resident_on: 'OpaqueRef:7b987b11-ada0-99ce-d831-6e589bf34b50',
Apr 15 22:55:46 xo1 xo-server[1409]:     progress: 1,
Apr 15 22:55:46 xo1 xo-server[1409]:     type: '<none/>',
Apr 15 22:55:46 xo1 xo-server[1409]:     result: '',
Apr 15 22:55:46 xo1 xo-server[1409]:     error_info: [
Apr 15 22:55:46 xo1 xo-server[1409]:       'SR_BACKEND_FAILURE_460',
Apr 15 22:55:46 xo1 xo-server[1409]:       '',
Apr 15 22:55:46 xo1 xo-server[1409]:       'Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]',
Apr 15 22:55:46 xo1 xo-server[1409]:       ''
Apr 15 22:55:46 xo1 xo-server[1409]:     ],
Apr 15 22:55:46 xo1 xo-server[1409]:     other_config: {},
Apr 15 22:55:46 xo1 xo-server[1409]:     subtask_of: 'OpaqueRef:NULL',
Apr 15 22:55:46 xo1 xo-server[1409]:     subtasks: [],
Apr 15 22:55:46 xo1 xo-server[1409]:     backtrace: '(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_utils.ml)(line 150))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))'
Apr 15 22:55:46 xo1 xo-server[1409]:   }
Apr 15 22:55:46 xo1 xo-server[1409]: }
Apr 15 22:55:46 xo1 xo-server[1409]: 2026-04-16T02:55:46.735Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.115Z xo:xapi:vdi WARN invalid HTTP header in response body {
Apr 15 22:55:48 xo1 xo-server[1409]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'content-length: 318\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'content-type: text/html\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'connection: close\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     'cache-control: no-cache, no-store\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     '\r\n' +
Apr 15 22:55:48 xo1 xo-server[1409]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac; CBT metadata ]</body></html>'
Apr 15 22:55:48 xo1 xo-server[1409]: }
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.124Z xo:xapi:xapi-disks WARN can't compute delta OpaqueRef:e7de1446-34fd-1ae8-4680-351b1e72b2dd from OpaqueRef:3b37047e-11dd-f836-ebed-acfaff2072ac, fallBack to a full {
Apr 15 22:55:48 xo1 xo-server[1409]:   error: Error: invalid HTTP header in response body
Apr 15 22:55:48 xo1 xo-server[1409]:       at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:37:19)
Apr 15 22:55:48 xo1 xo-server[1409]:       at process.processTicksAndRejections (node:internal/process/task_queues:104:5)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/vdi.mjs:261:5)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiVhdStreamSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #openExportStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:182:7)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:97:22)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiDiskSource.openSource (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/xapi/disks/Xapi.mjs:258:18)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
Apr 15 22:55:48 xo1 xo-server[1409]:       at async file:///opt/xo/xo-builds/xen-orchestra-202604151415/@xen-orchestra/backups/_incrementalVm.mjs:66:5
Apr 15 22:55:48 xo1 xo-server[1409]: }
Apr 15 22:55:48 xo1 xo-server[1409]: 2026-04-16T02:55:48.126Z xo:xapi:xapi-disks INFO export through vhd
Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.047Z xo:backups:worker INFO backup has ended
Apr 15 22:56:24 xo1 xo-server[1409]: 2026-04-16T02:56:24.231Z xo:backups:worker INFO process will exit {
Apr 15 22:56:24 xo1 xo-server[1409]:   duration: 43618102,
Apr 15 22:56:24 xo1 xo-server[1409]:   exitCode: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:   resourceUsage: {
Apr 15 22:56:24 xo1 xo-server[1409]:     userCPUTime: 45307253,
Apr 15 22:56:24 xo1 xo-server[1409]:     systemCPUTime: 6674413,
Apr 15 22:56:24 xo1 xo-server[1409]:     maxRSS: 30928,
Apr 15 22:56:24 xo1 xo-server[1409]:     sharedMemorySize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     unsharedDataSize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     unsharedStackSize: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     minorPageFault: 287968,
Apr 15 22:56:24 xo1 xo-server[1409]:     majorPageFault: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     swappedOut: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     fsRead: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     fsWrite: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     ipcSent: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     ipcReceived: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     signalsCount: 0,
Apr 15 22:56:24 xo1 xo-server[1409]:     voluntaryContextSwitches: 14665,
Apr 15 22:56:24 xo1 xo-server[1409]:     involuntaryContextSwitches: 962
Apr 15 22:56:24 xo1 xo-server[1409]:   },
Apr 15 22:56:24 xo1 xo-server[1409]:   summary: { duration: '44s', cpuUsage: '119%', memoryUsage: '30.2 MiB' }
Apr 15 22:56:24 xo1 xo-server[1409]: }
-
Thanks @Andrew. They'll have a close look.
-
Edit - If @olivierlambert wants to make this post its own thread, I'm OK with that.
Updated home lab, and live conversion to qcow2 seems to work.
The process was a little long (about 30 minutes), but it worked and did not fail.
Kubuntu 26.04 LTS beta.

Tried a Windows VM and got an error - 2026-04-16T13_04_47.720Z - XO.txt
It gives "VDI has CBT enabled"...
Windows 11 VM.

From backup job...

Just tried another VM that was powered off: a regular Ubuntu 24.04, successful...



-
@acebmxer Hello,
The error
VDI_CBT_ENABLED means that XAPI doesn't want to move the VDI, so as not to break the CBT chain.
You can disable CBT on the VDI before migrating it, but if you have snapshots with CBT enabled it can be complicated, and it may be necessary to remove them before moving the VDI.
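A minimal sketch of that with the standard xe CLI, run from the pool coordinator (the VDI UUID is a placeholder; as noted, snapshots with CBT enabled may need to be removed first):

```shell
# Placeholder UUID -- find yours with: xe vdi-list name-label=<disk name>
VDI_UUID=<vdi-uuid>

# Check whether CBT is currently enabled on the VDI
xe vdi-param-get uuid="$VDI_UUID" param-name=cbt-enabled

# Disable CBT before migrating the VDI
xe vdi-disable-cbt uuid="$VDI_UUID"
```

After the migration, the next backup run with CBT will re-enable tracking and start a new chain.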
We have changes planned to improve the CBT handling in this kind of case. -
@Andrew Hello Andrew,
Thank you for reporting.
It appears that CBT on FileSR-based SRs is not working, in addition to data-destroy (the option that allows removing the VDI content while keeping only the CBT metadata).
Can you confirm that you are using a FileSR (ext or nfs)?
Is it possible to disable purge data on the CR job? -
@dthenot Yes, VHD files on both local disk and NFS, same problem.
Testing one VM, I removed the snapshot, the disk CBT setting, and the destination replica. The first CR run does a full backup without issue (same NBD/CBT/purge enabled). The second run has the same problem (fell back to full). So clearing things out does not fix it (with the same original setup).
Testing several combinations, just disabling the backup's purge snapshot option makes the delta CR backup work again (NBD/CBT still enabled). It does a full backup on the first run (fell back to full), but then does deltas after that.
-
I have been able to migrate all VMs over to qcow2. I think shutting down the VMs and booting them back up helped. Also, something from this thread might have had an impact: https://xcp-ng.org/forum/topic/12087/backups-with-qcow2-enabled/9
-
@Andrew Hello,
I have been able to find the problem and make a fix; it's in the process of being packaged.
I can confirm it only happens for file-based SRs when using purge snapshots.
For some reason, the VDI type of the CBT metadata is cbtlog for FileSR, but stays the image format it was for LVMSR,
which makes a condition fail during the list_changed_blocks call. -
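For anyone who wants to see this from dom0, a sketch (the SR UUID is a placeholder):

```shell
# List the VDIs on an SR together with their type; on a file-based SR,
# the CBT metadata VDIs are the ones reported with a CBT-related type
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,type
```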
Nice catch @dthenot !
-
@dthenot Great! I'm happy I was able to help test it. I look forward to the update release.
Interesting note, CR is faster when the snapshots are not deleted.... or CR is faster because of the update, I'll test again after the fix.
-
Feature fixes, security and maintenance update candidates for you to test!
This release batch contains fixes for the major storage feature previously announced;
read the RC2 announcement for QCOW2 image format support for 2TiB+ images. The whole platform has been hardened by back-porting security patches from the latest version of OpenSSH.
An additional driver fix is part of this minor package set.
What changed
Storage
QCOW2 image format support is the major feature of this release batch;
check the related announcement on the forum. Some fixes have been applied for issues found during the testing phase. Many thanks go to @Andrew, who found a CBT-related bug on file-based SRs!
sm: 3.2.12-17.5
- Fix a regression in CBT (Changed Block Tracking) on file-based SRs (EXT, NFS, ...), which caused backup jobs using the "purge snapshot data when using CBT" option to create full backups each time instead of deltas.
- Deactivate the unused LVM snapshot base before deletion to prevent an LVM leak. This fix is not related to the QCOW2 feature, but is important and localized enough for us to provide it in addition to the other changes.
- Minor fix that prevents a warning when updating the package.
blktap: 3.55.5-6.5
- Fix an install warning when triggering mdadm to generate a udev rule.
Network
openssh: Update to 9.8p1-1.2.3
- Two vulnerabilities disclosed along with the OpenSSH 10.3 release have been fixed.
- In authorized_keys, when principals="" was defined along with a common CA, an interpretation error occurred, which could lead to unauthorized access.
- When one ECDSA algorithm was active, it activated all others regardless of their configuration. (By default, all ECDSA algorithms are active.)
- For more details please track the upcoming Vates Security Advisories.
Drivers updates
More information about drivers and current versions is maintained on the drivers wiki page.
qlogic-fastlinq-alt: 8.74.6.0-1
- Fixes two issues in the qede module driver:
- Driver does not retain configured MAC and MTU post reset recovery
- Driver does not recover from TX timeout error
Versions:
blktap: 3.55.5-6.4.xcpng8.3 -> 3.55.5-6.5.xcpng8.3
openssh: 9.8p1-1.2.2.xcpng8.3 -> 9.8p1-1.2.3.xcpng8.3
sm: 3.2.12-17.2.xcpng8.3 -> 3.2.12-17.5.xcpng8.3
Optional packages:
qlogic-fastlinq-alt: 8.70.12.0-1.xcpng8.3 -> 8.74.6.0-1.xcpng8.3
Test on XCP-ng 8.3
If you are using XOSTOR, please refer to our documentation for the update method.
yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
reboot
The usual update rules apply: pool coordinator first, etc.
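After updating, a quick sanity check that the expected versions landed (a sketch; package names are taken from the version list above):

```shell
# Confirm the updated package versions on the host
rpm -q blktap openssh sm

# Only if the optional driver package is installed:
rpm -q qlogic-fastlinq-alt
```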
What to test
The most important change is related to storage: adding QCOW2 support also affects the codebase managing VHD disks. What matters here is, above all, to detect any regression on VHD support (we tested it deeply, but on this matter there's no such thing as too much testing). Of course, you are also welcome to test the QCOW2 image format support.
See the dedicated thread for more information.
Other significant changes requiring attention:
- SSH connectivity
And, as usual, normal use and anything else you want to test.
Test window before official release of the updates
~4 days
We would like to thank users who reported feedback on the QCOW RC2 release: @acebmxer, @andrew, @bufanda, @flakpyro, @jeffberntsen, @ph7
-
Installed updates; will report back.
Update - I had migrated VMs back over to VHD prior to the update release. I have migrated two VMs back over to qcow2, and the initial backup ran successfully. Ran a second delta backup, and that was also successful, without issues. Backups happen very quickly now, and it appears the % and progress bar are working.
When CBT is enabled on the VM's VDI, the VDIs show up as needing to be coalesced. For VMs without CBT enabled, the VDIs are coalesced.

Will continue to monitor.
Once the coalescence count hits 2 for the VM, the VM is skipped from future backups until it is cleared. (Shutting down the VM will allow the coalescence to happen.)
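If you would rather nudge the coalesce along than shut the VM down, a sketch (the SR UUID is a placeholder, and whether this helps in the CBT case described above is an assumption):

```shell
# Trigger an SR scan, which kicks off the garbage-collection/coalesce pass
xe sr-scan uuid=<sr-uuid>

# Watch the storage manager log on the host for coalesce activity
tail -f /var/log/SMlog
```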
2026-04-23T19_52_34.694Z - backup NG.txt

-
@rzr XCP 8.3 pools updated and running.
CR delta backup snapshot problem corrected and working now.
SSH from an old system to XCP-ng displays the warning (per documentation).
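For anyone else seeing that warning from an older client, a sketch of how to inspect and work around it client-side (the host name is a placeholder, and which algorithm is needed depends on the exact warning shown):

```shell
# List the algorithms this ssh client supports
ssh -Q key   # host key algorithms
ssh -Q kex   # key exchange algorithms

# Explicitly allow an extra host key algorithm for a single connection
ssh -o HostKeyAlgorithms=+ssh-rsa root@<xcp-host>
```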