Error: invalid HTTP header in response body
-
@olivierlambert Have you heard of any progress on this issue? I found that the error went away once I disabled "Purge snapshot data when using CBT" on all my backup jobs, so the backups are no longer failing. The only exception is the Docker VM I mentioned in another thread ("Success or failure", or similar): it reports failure but restores successfully, which is weird.
-
I'm not aware of similar issues from our XOA users.
@florent does it ring a bell?
-
Today, the same issue again:
Snapshot
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
50G-Backup-Delta
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
Duration: a few seconds
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
Duration: a few seconds
Error: invalid HTTP header in response body
Type: full
The VM is a pfSense, with the template "Other install media".
Management agent 6.2.0-76888 detected
Hardware virtualization with paravirtualization drivers enabled (PVHVM)
-
@markxc if you are on master, you should have something in the XO logs, starting with
invalid HTTP header in response body
Can you check and tell us what error message is attached?
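For reference, something along these lines should pull the relevant entries out of the journal (a rough sketch only: it assumes xo-server runs as a systemd unit named xo-server, as it appears to in your setup, so adjust the unit name and time window as needed):

journalctl -u xo-server --since "2025-06-12 07:15" | grep -A 40 "invalid HTTP header in response body"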
-
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 2025-06-12T05:16:23.729Z xo:xapi:vdi WARN invalid HTTP header in response body {
Jun 12 07:16:23 xenorchestra xo-server[1679463]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     'content-length: 266\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     'content-type:text/html\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     'connection:close\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     'cache-control:no-cache, no-store\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     '\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a whi>
Jun 12 07:16:23 xenorchestra xo-server[1679463]: }
Jun 12 07:16:28 xenorchestra xo-server[1679463]: 2025-06-12T05:16:28.522Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 12 07:16:28 xenorchestra xo-server[1679463]:   path: '/xo-vm-backups/04d4ce05-81fd-e2b6-ef0d-4e9b91f3ffb1/20250610T051506Z.json',
Jun 12 07:16:28 xenorchestra xo-server[1679463]:   actual: 1116023296,
Jun 12 07:16:28 xenorchestra xo-server[1679463]:   expected: 1116023808
Jun 12 07:16:28 xenorchestra xo-server[1679463]: }
Jun 12 07:16:29 xenorchestra xo-server[1675344]: 2025-06-12T05:16:29.954Z xo:backups:mergeWorker INFO merge in progress {
Jun 12 07:16:29 xenorchestra xo-server[1675344]:   done: 59198,
Jun 12 07:16:29 xenorchestra xo-server[1675344]:   parent: '/xo-vm-backups/eda7fcd9-484f-7f19-b5ae-0cfd94ca2207/vdis/89511625-67f3-46be-98fe-8d7a7584386>
Jun 12 07:16:29 xenorchestra xo-server[1675344]:   progress: 83,
Jun 12 07:16:29 xenorchestra xo-server[1675344]:   total: 71442
Jun 12 07:16:29 xenorchestra xo-server[1675344]: }
Jun 12 07:16:39 xenorchestra xo-server[1675344]: 2025-06-12T05:16:39.954Z xo:backups:mergeWorker INFO merge in progress {
Jun 12 07:16:39 xenorchestra xo-server[1675344]:   done: 59467,
Jun 12 07:16:39 xenorchestra xo-server[1675344]:   parent: '/xo-vm-backups/eda7fcd9-484f-7f19-b5ae-0cfd94ca2207/vdis/89511625-67f3-46be-98fe-8d7a7584386>
Jun 12 07:16:39 xenorchestra xo-server[1675344]:   progress: 83,
Jun 12 07:16:39 xenorchestra xo-server[1675344]:   total: 71442
Jun 12 07:16:39 xenorchestra xo-server[1675344]: }
Jun 12 07:16:46 xenorchestra xo-server[1679463]: 2025-06-12T05:16:46.783Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: CBT is disabled
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/d>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/disks>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/di>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/_incrementalVm.>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async Promise.all (index 1)
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/ba>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xe>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:   code: 'CBT_DISABLED'
Jun 12 07:16:46 xenorchestra xo-server[1679463]: }
Jun 12 07:16:46 xenorchestra xo-server[1679463]: 2025-06-12T05:16:46.863Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: CBT is disabled
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/d>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/disks>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/di>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/_incrementalVm.>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async Promise.all (index 2)
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/ba>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:     at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xe>
Jun 12 07:16:46 xenorchestra xo-server[1679463]:   code: 'CBT_DISABLED'
Jun 12 07:16:46 xenorchestra xo-server[1679463]: }
-
@markxc thank you, that is the part I wanted.
I think someone on the XAPI side should look into it.
-
Hello everyone,
this type of error has now popped up in my delta backups as well.
Some specs:
OS: Debian 12 patched
Xen Orchestra: as of 20250616 0005
XCP-ng: 8.3, latest patches applied
Job type: Delta backup with 8 VMs, backing up to a remote Synology NAS
The job ran for quite a few weeks without any problems. But I have to admit that I cannot say what exactly triggered the problem, since I both updated Xen Orchestra and modified the backup job (I removed 2 VMs, moved them to another machine, added additional disks, and re-added them to the same backup job). Manually triggering the backup via "Restart VM backup" in the failure dialog runs the backup successfully.
I get the following error in the log:
...
Jun 18 02:41:28 xoa xo-server[258369]: 2025-06-18T00:41:28.527Z xo:backups:MixinBackupWriter INFO merge in progress {
Jun 18 02:41:28 xoa xo-server[258369]:   done: 6895,
Jun 18 02:41:28 xoa xo-server[258369]:   parent: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/vdis/8c0477b9-b6e8-45ca-bcac-b78549e05b6f/ab2c3be9-bec5-4361-9ad2-81acfc14c16e/20250611T005140Z.vhd',
Jun 18 02:41:28 xoa xo-server[258369]:   progress: 97,
Jun 18 02:41:28 xoa xo-server[258369]:   total: 7132
Jun 18 02:41:28 xoa xo-server[258369]: }
Jun 18 02:41:38 xoa xo-server[258369]: 2025-06-18T00:41:38.528Z xo:backups:MixinBackupWriter INFO merge in progress {
Jun 18 02:41:38 xoa xo-server[258369]:   done: 7073,
Jun 18 02:41:38 xoa xo-server[258369]:   parent: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/vdis/8c0477b9-b6e8-45ca-bcac-b78549e05b6f/ab2c3be9-bec5-4361-9ad2-81acfc14c16e/20250611T005140Z.vhd',
Jun 18 02:41:38 xoa xo-server[258369]:   progress: 99,
Jun 18 02:41:38 xoa xo-server[258369]:   total: 7132
Jun 18 02:41:38 xoa xo-server[258369]: }
Jun 18 02:41:46 xoa xo-server[258369]: 2025-06-18T00:41:46.228Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 18 02:41:46 xoa xo-server[258369]:   path: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/20250617T235823Z.json',
Jun 18 02:41:46 xoa xo-server[258369]:   actual: 108580044800,
Jun 18 02:41:46 xoa xo-server[258369]:   expected: 108606965248
Jun 18 02:41:46 xoa xo-server[258369]: }
Jun 18 02:46:20 xoa xo-server[258369]: 2025-06-18T00:46:20.182Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 18 02:46:20 xoa xo-server[258369]:   path: '/xo-vm-backups/9960fd34-ad5a-8854-6a90-3b1e88c1398f/20250618T004205Z.json',
Jun 18 02:46:20 xoa xo-server[258369]:   actual: 12184453120,
Jun 18 02:46:20 xoa xo-server[258369]:   expected: 12190142976
Jun 18 02:46:20 xoa xo-server[258369]: }
Jun 18 02:46:41 xoa xo-server[258369]: 2025-06-18T00:46:41.281Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
Jun 18 02:46:41 xoa xo-server[258369]:     at connectNbdClientIfPossible (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/utils.mjs:23:19)
Jun 18 02:46:41 xoa xo-server[258369]:     at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 18 02:46:41 xoa xo-server[258369]:     at async XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
Jun 18 02:46:41 xoa xo-server[258369]:     at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
Jun 18 02:46:41 xoa xo-server[258369]:     at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
Jun 18 02:46:41 xoa xo-server[258369]:     at async file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_incrementalVm.mjs:65:5
Jun 18 02:46:41 xoa xo-server[258369]:     at async Promise.all (index 1)
Jun 18 02:46:41 xoa xo-server[258369]:     at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_cancelableMap.mjs:11:12)
Jun 18 02:46:41 xoa xo-server[258369]:     at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_incrementalVm.mjs:28:3)
Jun 18 02:46:41 xoa xo-server[258369]:     at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:38:25) {
Jun 18 02:46:41 xoa xo-server[258369]:   code: 'NO_NBD_AVAILABLE'
Jun 18 02:46:41 xoa xo-server[258369]: }
Jun 18 02:46:43 xoa xo-server[258369]: 2025-06-18T00:46:43.098Z xo:xapi:vdi WARN invalid HTTP header in response body {
Jun 18 02:46:43 xoa xo-server[258369]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     'content-length: 318\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     'content-type: text/html\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     'connection: close\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     'cache-control: no-cache, no-store\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     '\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:1ed06eb9-ed6f-d8f0-25a4-647a4ff22ca6; CBT metadata ]</body></html>'
Jun 18 02:46:43 xoa xo-server[258369]: }
...
Does anyone have any ideas?
-
@FritzGerald Do you have "Purge snapshot data when using CBT" enabled?
I had it enabled, and since I disabled it, the error hasn't happened.
-
Hi @manilx.
Thank you for the quick reply. I do indeed have it enabled.
I am just surprised, since the exact same job worked before, and it does not quite make sense to me why keeping the option would be a problem. Nevertheless, I have to admit that my knowledge of these types of snapshots and their impact on storage usage is limited. Just FYI, there is this discussion I found: https://xcp-ng.org/forum/topic/10400/purge-snapshot-data-when-using-cbt-why-wouldn-t-i-enable-this.
However, your point is well taken. I have just disabled it in order to validate your theory. If it holds, I think that based on our systems' behavior we could report this as some sort of bug. Did your backups fail from the start as well, or did it also happen only after "some sort of modifications"?
-
@FritzGerald This error happened only on my backups to a remote location (a bit slower, but also mounted via NFS like the local one). It started out of the blue after running perfectly, and it didn't happen every time. No rhyme or reason.
I read about disabling this option in this thread and tried it. It seems to have worked, and it is most definitely a bug.
-
@manilx Okay, I will wait and see. The backup runs tonight.
-
@manilx said in Error: invalid HTTP header in response body:
@peo I don't have this setting set. The errors appear inconsistently.
So you confirmed my "solution" to the problem, even though you did not have that setting enabled when I suggested turning it off.
It's good that more people are hitting this problem and that it gets resolved with the same "solution". I was starting to think I had imagined the problems week after week before I first reported them.
-
@peo I had the setting turned on and turned it off! Seems to have helped.
-
Hi everyone, just FYI: during my delta backup testing I ran out of space on my NAS (although it should have had enough?!?). It must have created more data than I expected. Since I had other priorities, I removed the backups and set up new delta backups, so I could not dig any further into the problem.
-
Hi everyone again. I am pretty much back to square one. What I can observe is that all of my VMs to which I added additional disks run into the error above. So the VMs with 1 disk each work fine for 6 backups, while the two other VMs (one with 2 disks, one with 5 disks) fail. Do CBT-based delta backups only work when no additional disk is attached? I really appreciate any help.
-
@FritzGerald It has nothing to do with the number of disks attached to the VM. It just fails every second time:
https://xcp-ng.org/forum/post/93508
The "solution" (until there is a real solution) is in the reply below the linked one: turn off "Purge snapshot data when using CBT" under the advanced settings of all backup jobs.
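If you also want to verify on the XCP-ng side whether CBT is still active on the disks, a rough check from a host console could look like this (a sketch only: the xe field names are from memory, so double-check them on your version before relying on them):

# list VDIs that still have changed block tracking enabled
xe vdi-list cbt-enabled=true params=uuid,name-label,sr-name-label
# CBT can also be switched off per disk if you want to go back to plain deltas
xe vdi-disable-cbt uuid=<vdi-uuid>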
-
@peo Hi, thank you for your quick reply. Since I ran into these storage issues after disabling it, I am a little careful now. My knowledge of CBT-based backups is quite limited; can you tell me what this means in terms of storage use? As I understand it, the snapshots will be kept and thereby significantly increase space usage, or am I missing something? And have you heard whether the bug is officially known and being worked on?
-
@FritzGerald The snapshots (left on the disk until the next backup) will only consist of the differences between the previous backup and the current one.
BUT... when you do the first backup of a machine, the snapshot will use the full (used) size of each disk attached to the machine (this might be what happened on your first attempt). If you have the space for it, just do one backup at a time with snapshot deletion disabled, then do another one when it has finished. The snapshots will then shrink to only the difference between the first and second backup.
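If you want to see how much space those retained snapshots actually occupy on the SR, a quick look from a host console could be something like the following (again just a sketch; the parameter names are the xe CLI ones as I remember them, so verify them first):

# list snapshot VDIs with their real space usage on the SR
xe vdi-list is-a-snapshot=true params=uuid,name-label,virtual-size,physical-utilisation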
-
@peo
Hi, thank you. You are most likely right about the backup storage overflow. Just two more questions:
- Have you ever experienced this problem on a VM with only one disk attached? I am asking since on my site only the delta backups of VMs with additional disks attached fail.
- Is this already being tracked as a bug, or should I officially report one, since quite a few users are now experiencing this problem?
-
@FritzGerald I have not had any problems like this since I disabled the deletion of the in-between-backup snapshots. For example, I have a couple of machines with 50 GB+ disks (one with a 100+ GB disk, mostly unused now, so the snapshot between the backups takes less than a MB).
All backups were failing (most of my VMs have more than one disk, a trick I use to lock them to a specific host) until I disabled the deletion of the snapshot. Not all at once, but more and more of them until it was all of them. I still have the other "imaginary problem" with my Docker VM (but that is a completely different problem which has not yet been acknowledged: backups "fail", yet I'm able to restore them to a fully working new VM).