Error: invalid HTTP header in response body
-
@manilx As you describe it, it might be a problem with the remote NFS storage, but still, it would be nice if that error message could be a bit clearer.
-
@peo Nope. Remote has no issue whatsoever.
The issue is with the backup code, because that is what changed. It worked for many months without issues before.
And it's still working at the office on stable XOA.
-
Dear all, I am facing the same issue:
Error: invalid HTTP header in response body
{
  "data": { "mode": "delta", "reportWhen": "failure" },
  "id": "1749107336350",
  "jobId": "89511625-67f3-46be-98fe-8d7a7584386f",
  "jobName": "All-2-50G-Delta",
  "message": "backup",
  "scheduleId": "405be84c-8b49-408a-871e-b21c211ee326",
  "start": 1749107336350,
  "status": "failure",
  "infos": [
    { "data": { "vms": [ "20e7d42f-bf69-186d-ec65-82e08230e557" ] }, "message": "vms" }
  ],
  "tasks": [
    {
      "data": {
        "type": "VM",
        "id": "20e7d42f-bf69-186d-ec65-82e08230e557",
        "name_label": "PfSense_Ip_92_61_60_23_EXT_172_16_16_99_INT_2022_03_03_07_24_57"
      },
      "id": "1749107339106",
      "message": "backup VM",
      "start": 1749107339106,
      "status": "failure",
      "tasks": [
        { "id": "1749107339113", "message": "clean-vm", "start": 1749107339113, "status": "success", "end": 1749107339118, "result": { "merge": false } },
        { "id": "1749107339665", "message": "snapshot", "start": 1749107339665, "status": "success", "end": 1749107341927, "result": "02828209-8cc6-8694-8fdd-af5aa04d5b90" },
        {
          "data": { "id": "455eee8b-49ca-4552-9431-ebb417fbd8f8", "isFull": true, "type": "remote" },
          "id": "1749107341927:0",
          "message": "export",
          "start": 1749107341927,
          "status": "success",
          "tasks": [
            { "id": "1749107347491", "message": "clean-vm", "start": 1749107347491, "status": "success", "end": 1749107347494, "result": { "merge": false } }
          ],
          "end": 1749107347496
        }
      ],
      "end": 1749107347496,
      "result": {
        "message": "invalid HTTP header in response body",
        "name": "Error",
        "stack": "Error: invalid HTTP header in response body\n at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/xapi/vdi.mjs:40:19)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/xapi/vdi.mjs:326:7)\n at async file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_incrementalVm.mjs:56:34\n at async Promise.all (index 1)\n at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_incrementalVm.mjs:25:3)\n at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:396:9)\n at async file:///opt/xo/xo-builds/xen-orchestra-202503211145/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
      }
    }
  ],
  "end": 1749107347497
}
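(For context: the stack trace points at checkVdiExport in @xen-orchestra/xapi/vdi.mjs, which inspects the beginning of the VDI export stream. The snippet below is only a rough sketch of that kind of check in Node.js, not XO's actual code; the function name and the error.body property are illustrative.)

```js
import { once } from 'node:events'

// Rough sketch only -- NOT the real checkVdiExport from @xen-orchestra/xapi/vdi.mjs.
// Idea: a healthy VDI export starts with raw disk data, but when XAPI fails it can
// write an inline HTTP error document ("HTTP/1.1 500 Internal Error ...") into the
// body instead, which is then surfaced as "invalid HTTP header in response body".
async function checkVdiExportBody(stream) {
  const [firstChunk] = await once(stream, 'data') // peek at the first bytes
  stream.pause() // let the caller resume normal consumption afterwards

  const head = firstChunk.toString('latin1', 0, 512)
  const status = head.match(/^HTTP\/1\.\d (\d{3})/)
  if (status !== null && status[1] !== '200') {
    const error = new Error('invalid HTTP header in response body')
    error.body = head // keep the body so the log reveals the underlying XAPI error
    throw error
  }
  return firstChunk // hand the peeked data back to the caller
}
```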
-
@olivierlambert Have you heard of any progress on this case? I found out that I got rid of the error when I disabled "Purge snapshot data when using CBT" on all my backups, so the backups are no longer failing, except for the Docker VM I have mentioned in another thread ("Success or failure", or similar), which reports failure but restores successfully (which is weird).
-
I'm not aware of similar issues for our XOA users.
@florent, does it ring a bell?
-
Today the same issue again:
Snapshot
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
50G-Backup-Delta
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
Duration: a few seconds
Start: 2025-06-12 07:16
End: 2025-06-12 07:16
Duration: a few seconds
Error: invalid HTTP header in response body
Type: full
The VM is a pfSense, with template "Other install media".
Management agent 6.2.0-76888 detected.
Hardware virtualization with paravirtualization drivers enabled (PVHVM).
-
@markxc If you are on master, you should have something in the XO logs starting with
invalid HTTP header in response body
Can you check and tell us what error message is attached?
-
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 2025-06-12T05:16:23.729Z xo:xapi:vdi WARN invalid HTTP header in response body {
Jun 12 07:16:23 xenorchestra xo-server[1679463]: body: 'HTTP/1.1 500 Internal Error\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 'content-length: 266\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 'content-type:text/html\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 'connection:close\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: 'cache-control:no-cache, no-store\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: '\r\n' +
Jun 12 07:16:23 xenorchestra xo-server[1679463]: '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a whi>
Jun 12 07:16:23 xenorchestra xo-server[1679463]: }
Jun 12 07:16:28 xenorchestra xo-server[1679463]: 2025-06-12T05:16:28.522Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 12 07:16:28 xenorchestra xo-server[1679463]: path: '/xo-vm-backups/04d4ce05-81fd-e2b6-ef0d-4e9b91f3ffb1/20250610T051506Z.json',
Jun 12 07:16:28 xenorchestra xo-server[1679463]: actual: 1116023296,
Jun 12 07:16:28 xenorchestra xo-server[1679463]: expected: 1116023808
Jun 12 07:16:28 xenorchestra xo-server[1679463]: }
Jun 12 07:16:29 xenorchestra xo-server[1675344]: 2025-06-12T05:16:29.954Z xo:backups:mergeWorker INFO merge in progress {
Jun 12 07:16:29 xenorchestra xo-server[1675344]: done: 59198,
Jun 12 07:16:29 xenorchestra xo-server[1675344]: parent: '/xo-vm-backups/eda7fcd9-484f-7f19-b5ae-0cfd94ca2207/vdis/89511625-67f3-46be-98fe-8d7a7584386>
Jun 12 07:16:29 xenorchestra xo-server[1675344]: progress: 83,
Jun 12 07:16:29 xenorchestra xo-server[1675344]: total: 71442
Jun 12 07:16:29 xenorchestra xo-server[1675344]: }
Jun 12 07:16:39 xenorchestra xo-server[1675344]: 2025-06-12T05:16:39.954Z xo:backups:mergeWorker INFO merge in progress {
Jun 12 07:16:39 xenorchestra xo-server[1675344]: done: 59467,
Jun 12 07:16:39 xenorchestra xo-server[1675344]: parent: '/xo-vm-backups/eda7fcd9-484f-7f19-b5ae-0cfd94ca2207/vdis/89511625-67f3-46be-98fe-8d7a7584386>
Jun 12 07:16:39 xenorchestra xo-server[1675344]: progress: 83,
Jun 12 07:16:39 xenorchestra xo-server[1675344]: total: 71442
Jun 12 07:16:39 xenorchestra xo-server[1675344]: }
Jun 12 07:16:46 xenorchestra xo-server[1679463]: 2025-06-12T05:16:46.783Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: CBT is disabled
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/d>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/disks>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/di>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/_incrementalVm.>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async Promise.all (index 1)
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/ba>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xe>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: code: 'CBT_DISABLED'
Jun 12 07:16:46 xenorchestra xo-server[1679463]: }
Jun 12 07:16:46 xenorchestra xo-server[1679463]: 2025-06-12T05:16:46.863Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: CBT is disabled
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/d>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/xapi/disks>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/di>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/_incrementalVm.>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async Promise.all (index 2)
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/backups/>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xen-orchestra/ba>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202506050918/@xe>
Jun 12 07:16:46 xenorchestra xo-server[1679463]: code: 'CBT_DISABLED'
Jun 12 07:16:46 xenorchestra xo-server[1679463]: }
-
@markxc Thank you, that is the part I wanted.
I think someone on the XAPI side should look into it.
-
Hello everyone,
this type of error has now popped up in my delta backups as well.
Some specs:
OS: Debian 12 patched
Xen Orchestra: as of 20250616 0005
XCP-ng: 8.3, latest patches applied
Job type: Delta backup with 8 VMs backing up to a remote Synology NAS
The job ran for quite a few weeks without any problems. But I have to admit that I cannot say what exactly induced the problem, since I both updated Xen Orchestra and modified the backup job (I removed 2 VMs, moved them to another machine, added additional disks and re-added them to the same backup job). Manually triggering the backup via "Restart VM backup" in the failure dialog runs the backup successfully.
I get the following error in the log:
...
Jun 18 02:41:28 xoa xo-server[258369]: 2025-06-18T00:41:28.527Z xo:backups:MixinBackupWriter INFO merge in progress {
Jun 18 02:41:28 xoa xo-server[258369]: done: 6895,
Jun 18 02:41:28 xoa xo-server[258369]: parent: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/vdis/8c0477b9-b6e8-45ca-bcac-b78549e05b6f/ab2c3be9-bec5-4361-9ad2-81acfc14c16e/20250611T005140Z.vhd',
Jun 18 02:41:28 xoa xo-server[258369]: progress: 97,
Jun 18 02:41:28 xoa xo-server[258369]: total: 7132
Jun 18 02:41:28 xoa xo-server[258369]: }
Jun 18 02:41:38 xoa xo-server[258369]: 2025-06-18T00:41:38.528Z xo:backups:MixinBackupWriter INFO merge in progress {
Jun 18 02:41:38 xoa xo-server[258369]: done: 7073,
Jun 18 02:41:38 xoa xo-server[258369]: parent: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/vdis/8c0477b9-b6e8-45ca-bcac-b78549e05b6f/ab2c3be9-bec5-4361-9ad2-81acfc14c16e/20250611T005140Z.vhd',
Jun 18 02:41:38 xoa xo-server[258369]: progress: 99,
Jun 18 02:41:38 xoa xo-server[258369]: total: 7132
Jun 18 02:41:38 xoa xo-server[258369]: }
Jun 18 02:41:46 xoa xo-server[258369]: 2025-06-18T00:41:46.228Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 18 02:41:46 xoa xo-server[258369]: path: '/xo-vm-backups/924b4cf4-c8b3-18ab-5f78-d1daa77bc3fc/20250617T235823Z.json',
Jun 18 02:41:46 xoa xo-server[258369]: actual: 108580044800,
Jun 18 02:41:46 xoa xo-server[258369]: expected: 108606965248
Jun 18 02:41:46 xoa xo-server[258369]: }
Jun 18 02:46:20 xoa xo-server[258369]: 2025-06-18T00:46:20.182Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
Jun 18 02:46:20 xoa xo-server[258369]: path: '/xo-vm-backups/9960fd34-ad5a-8854-6a90-3b1e88c1398f/20250618T004205Z.json',
Jun 18 02:46:20 xoa xo-server[258369]: actual: 12184453120,
Jun 18 02:46:20 xoa xo-server[258369]: expected: 12190142976
Jun 18 02:46:20 xoa xo-server[258369]: }
Jun 18 02:46:41 xoa xo-server[258369]: 2025-06-18T00:46:41.281Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
Jun 18 02:46:41 xoa xo-server[258369]: at connectNbdClientIfPossible (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/utils.mjs:23:19)
Jun 18 02:46:41 xoa xo-server[258369]: at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jun 18 02:46:41 xoa xo-server[258369]: at async XapiVhdCbtSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
Jun 18 02:46:41 xoa xo-server[258369]: at async #openNbdCbt (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
Jun 18 02:46:41 xoa xo-server[258369]: at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
Jun 18 02:46:41 xoa xo-server[258369]: at async file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_incrementalVm.mjs:65:5
Jun 18 02:46:41 xoa xo-server[258369]: at async Promise.all (index 1)
Jun 18 02:46:41 xoa xo-server[258369]: at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_cancelableMap.mjs:11:12)
Jun 18 02:46:41 xoa xo-server[258369]: at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_incrementalVm.mjs:28:3)
Jun 18 02:46:41 xoa xo-server[258369]: at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202506160005/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:38:25) {
Jun 18 02:46:41 xoa xo-server[258369]: code: 'NO_NBD_AVAILABLE'
Jun 18 02:46:41 xoa xo-server[258369]: }
Jun 18 02:46:43 xoa xo-server[258369]: 2025-06-18T00:46:43.098Z xo:xapi:vdi WARN invalid HTTP header in response body {
Jun 18 02:46:43 xoa xo-server[258369]: body: 'HTTP/1.1 500 Internal Error\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: 'content-length: 318\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: 'content-type: text/html\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: 'connection: close\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: 'cache-control: no-cache, no-store\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: '\r\n' +
Jun 18 02:46:43 xoa xo-server[258369]: '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:1ed06eb9-ed6f-d8f0-25a4-647a4ff22ca6; CBT metadata ]</body></html>'
Jun 18 02:46:43 xoa xo-server[258369]: }
...
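The interesting bit is the VDI_INCOMPATIBLE_TYPE: [ ...; CBT metadata ] at the very end: XAPI refuses to export a VDI that only holds CBT metadata, i.e. a snapshot whose data has been purged. A rough sketch to list VDIs that are in this CBT-metadata-only state, assuming the xen-api client that XO itself uses (URL and credentials are placeholders):

```js
import { createClient } from 'xen-api'

// Rough sketch, not an official tool: list VDIs whose data has been purged and
// only CBT metadata remains (type === 'cbt_metadata' in XAPI).
const xapi = createClient({
  url: 'https://my-xcp-ng-host',              // placeholder
  auth: { user: 'root', password: 'secret' }, // placeholder
  allowUnauthorized: true,
})
await xapi.connect()

const vdis = await xapi.call('VDI.get_all_records')
for (const [ref, vdi] of Object.entries(vdis)) {
  if (vdi.type === 'cbt_metadata') {
    console.log(ref, vdi.uuid, vdi.name_label, '-> CBT metadata only, data purged')
  }
}

await xapi.disconnect()
```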
Does anyone have any ideas?
-
@FritzGerald Do you have "Purge snapshot data when using CBT" enabled?
I had it enabled, and since I disabled it, the error hasn't happened.
-
Hi @manilx.
Thank you for the quick reply. I do indeed have it enabled.
I am just surprised, since the exact same job worked previously, and it does not make sense to me that I should have to turn it off. Nevertheless, I have to admit that my knowledge of these types of snapshots and their impact on storage usage is limited. Just FYI, there is this discussion I found: https://xcp-ng.org/forum/topic/10400/purge-snapshot-data-when-using-cbt-why-wouldn-t-i-enable-this.
However, your point is very good. I have just disabled it in order to validate your thesis. If it holds, then based on our systems' behavior we could report some sort of "bug". Did your backup fail from the start as well, or only after "some sort of modifications"?
-
@FritzGerald This error happened only on my backups to a remote (a slightly slower location, but also mounted as NFS, like the local one). It started out of the blue after running perfectly, and it didn't happen every time. No rhyme or reason.
I read about disabling this option in this thread and tried it. It seems to have worked, and it is most definitely a bug.
-
@manilx okay. I will wait and see. Backup runs tonight.
-
@manilx said in Error: invalid HTTP header in response body:
@peo I don't have this setting set. The errors appear inconsistently.
So you confirmed my "solution" to the problem, even though you did not have that setting enabled when I suggested turning it off.
It's great that more people have this problem and that it gets resolved with the same "solution". I was starting to think I had imagined the problems, week after week, before I first reported them.
-
@peo I had the setting turned on and turned it off! Seems to have helped.
-
Hi everyone, just FYI: during my delta backup testing, I ran out of space on my NAS (although it should have had enough?!?). It must have created more data than I expected. Since I had other priorities, I removed the backups and set up new delta backups, so I could not dive further into exploring the problem.
-
Hi everyone again. I am pretty much back to square one. What I can observe is that all the VMs where I added additional disks run into the above error. So the VMs with a single disk work fine (6 backups), while the two other VMs (one with 2 disks, one with 5 disks) fail. Do CBT-based delta backups only work if there is no additional disk attached? I really appreciate any help.
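In case it helps to compare the original disk with the added ones, a rough sketch along these lines (assuming the xen-api client used by XO; host, credentials and VM name are placeholders) would print the VDI type and CBT state of every disk attached to a VM:

```js
import { createClient } from 'xen-api'

// Rough sketch: print CBT state and VDI type for every disk of a VM,
// to see whether freshly added disks differ from the original one.
const xapi = createClient({
  url: 'https://my-xcp-ng-host',              // placeholder
  auth: { user: 'root', password: 'secret' }, // placeholder
  allowUnauthorized: true,
})
await xapi.connect()

const [vmRef] = await xapi.call('VM.get_by_name_label', 'my-vm') // placeholder VM name
for (const vbdRef of await xapi.call('VM.get_VBDs', vmRef)) {
  if ((await xapi.call('VBD.get_type', vbdRef)) !== 'Disk') continue // skip CD drives
  const vdiRef = await xapi.call('VBD.get_VDI', vbdRef)
  const vdi = await xapi.call('VDI.get_record', vdiRef)
  console.log(vdi.name_label, { type: vdi.type, cbt_enabled: vdi.cbt_enabled })
}

await xapi.disconnect()
```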
-
@FritzGerald It has nothing to do with the number of disks attached to the VM. It just fails every second time:
https://xcp-ng.org/forum/post/93508
The "solution" (until there is a real solution) is in the reply below the linked one: turn off "Purge snapshot data when using CBT" under advanced settings for all backup jobs.
-
@peo Hi, thank you for your quick reply. Since I had these storage issues after disabling it, I am a little bit careful. My knowledge of CBT-based backups is really limited; can you tell me what it means in terms of storage use? To my understanding, it will keep the snapshots and thereby significantly increase space usage, or am I missing something? And have you heard whether the bug is officially known and being worked on?
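As far as I understand it, with the purge option disabled XO keeps the full snapshot data on the SR between runs instead of only the CBT metadata, so the extra cost is roughly the size of the retained snapshot VDIs. A rough sketch to gauge that, with the same assumptions as the earlier snippets (xen-api client, placeholder credentials):

```js
import { createClient } from 'xen-api'

// Rough sketch: sum the physical utilisation of snapshot VDIs, to get an idea of
// how much SR space retained snapshot data occupies when purging is disabled.
const xapi = createClient({
  url: 'https://my-xcp-ng-host',              // placeholder
  auth: { user: 'root', password: 'secret' }, // placeholder
  allowUnauthorized: true,
})
await xapi.connect()

let total = 0
const vdis = await xapi.call('VDI.get_all_records')
for (const vdi of Object.values(vdis)) {
  if (vdi.is_a_snapshot) {
    const size = Number(vdi.physical_utilisation)
    total += size
    console.log(vdi.name_label, (size / 1024 ** 3).toFixed(1), 'GiB')
  }
}
console.log('total snapshot data:', (total / 1024 ** 3).toFixed(1), 'GiB')

await xapi.disconnect()
```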