backup using http instead of https
-
@florent said in backup using http instead of https:
@KPS it's an easy win, so here is your christmas present
https://github.com/vatesfr/xen-orchestra/pull/6596
the message should show whether NBD is really activated or not in XO, whether there is an NBD-enabled network on the host, and whether the connection is successful
-
Just wondering, could NBD also be used in the future for speeding up live migrations with local storage?
-
It's possible we (with XenServer) decide to use NBD everywhere (i.e. in SMAPIv3 for storage migration, since unlike SMAPIv1 there's no dedicated format such as VHD at all stages)
-
@florent
My problem seems to be that NBD is not getting activated/tried:

Dec 27 09:35:59 XOA xo-server[6477]: 2022-12-27T08:35:59.951Z xo:backups:DeltaBackupWriter INFO use nbd is NOT activated
...although:
- useNbd = true is set
- the backup remote is an SMB share with the setting "Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up TB but allows faster merge)"
- an NBD network is activated on an interface that both XO and XCP-ng can work with
Is there any other dependency, that stops NBD from getting used?
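For reference, one way to verify from the pool master that a network is actually flagged for NBD (commands as described in the XAPI/XCP-ng NBD documentation; the network UUID is a placeholder):

```shell
# List networks and their "purpose" field; XAPI only offers NBD exports
# on networks whose purpose includes "nbd" (or "insecure_nbd").
xe network-list params=uuid,name-label,purpose

# Flag a network for TLS-secured NBD -- replace <network-uuid> with yours.
xe network-param-add uuid=<network-uuid> param-name=purpose param-key=nbd
```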
-
@florent could tell, I think
-
@florent
Thank you for the update. NBD seems to be used, but there is an error after that which I do not understand (VDI_IN_USE):

Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.954Z xo:backups:DeltaBackupWriter INFO use nbd is activated
Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.995Z xo:backups:DeltaBackupWriter INFO got nbd info {
Jan 2 11:32:53 XOA xo-server[166886]:   nbdInfo: {
Jan 2 11:32:53 XOA xo-server[166886]:     exportname: '/ed614338-1f35-4faf-b679-cc24bcd6f566?session_id=OpaqueRef:c9997f76-072c-415d-8624-91da8bd95489',
Jan 2 11:32:53 XOA xo-server[166886]:     address: '172.31.0.143',
Jan 2 11:32:53 XOA xo-server[166886]:     port: 10809,
Jan 2 11:32:53 XOA xo-server[166886]:     cert: '-----BEGIN CERTIFICATE-----\n' +
Jan 2 11:32:53 XOA xo-server[166886]:       'xxxx +
Jan 2 11:32:53 XOA xo-server[166886]:       'xxxx' +
Jan 2 11:32:53 XOA xo-server[166886]:       '' +
Jan 2 11:32:53 XOA xo-server[166886]:       '-----END CERTIFICATE-----',
Jan 2 11:32:53 XOA xo-server[166886]:     subject: 'xenpool4host3'
Jan 2 11:32:53 XOA xo-server[166886]:   },
Jan 2 11:32:53 XOA xo-server[166886]:   id: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7'
Jan 2 11:32:53 XOA xo-server[166886]: }
Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.997Z xo:backups:DeltaBackupWriter INFO nbd client instantiated { vdi: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7' }
Jan 2 11:32:57 XOA xo-server[166886]: 2023-01-02T10:32:57.876Z xo:backups:DeltaBackupWriter INFO nbd client connected { vdi: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7' }
Jan 2 11:33:03 XOA xo-server[166886]: 2023-01-02T10:33:03.061Z xo:xapi WARN retry {
Jan 2 11:33:03 XOA xo-server[166886]:   attemptNumber: 0,
Jan 2 11:33:03 XOA xo-server[166886]:   delay: 5000,
Jan 2 11:33:03 XOA xo-server[166886]:   error: XapiError: VDI_IN_USE(OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d, destroy)
Jan 2 11:33:03 XOA xo-server[166886]:       at XapiError.wrap (/opt/xen-orchestra/packages/xen-api/dist/_XapiError.js:21:12)
Jan 2 11:33:03 XOA xo-server[166886]:       at _default (/opt/xen-orchestra/packages/xen-api/dist/_getTaskResult.js:18:38)
Jan 2 11:33:03 XOA xo-server[166886]:       at Xapi._addRecordToCache (/opt/xen-orchestra/packages/xen-api/dist/index.js:748:51)
Jan 2 11:33:03 XOA xo-server[166886]:       at /opt/xen-orchestra/packages/xen-api/dist/index.js:781:14
Jan 2 11:33:03 XOA xo-server[166886]:       at Array.forEach (<anonymous>)
Jan 2 11:33:03 XOA xo-server[166886]:       at Xapi._processEvents (/opt/xen-orchestra/packages/xen-api/dist/index.js:769:12)
Jan 2 11:33:03 XOA xo-server[166886]:       at Xapi._watchEvents (/opt/xen-orchestra/packages/xen-api/dist/index.js:901:14)
Jan 2 11:33:03 XOA xo-server[166886]:       at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
Jan 2 11:33:03 XOA xo-server[166886]:     code: 'VDI_IN_USE',
Jan 2 11:33:03 XOA xo-server[166886]:     params: [ 'OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d', 'destroy' ],
Jan 2 11:33:03 XOA xo-server[166886]:     call: undefined,
Jan 2 11:33:03 XOA xo-server[166886]:     url: undefined,
Jan 2 11:33:03 XOA xo-server[166886]:     task: task {
Jan 2 11:33:03 XOA xo-server[166886]:       uuid: '644e680d-74ed-32cb-8ffd-ee5f29c47fed',
Jan 2 11:33:03 XOA xo-server[166886]:       name_label: 'Async.VDI.destroy',
Jan 2 11:33:03 XOA xo-server[166886]:       name_description: '',
Jan 2 11:33:03 XOA xo-server[166886]:       allowed_operations: [],
Jan 2 11:33:03 XOA xo-server[166886]:       current_operations: {},
Jan 2 11:33:03 XOA xo-server[166886]:       created: '20230102T10:33:02Z',
Jan 2 11:33:03 XOA xo-server[166886]:       finished: '20230102T10:33:02Z',
Jan 2 11:33:03 XOA xo-server[166886]:       status: 'failure',
Jan 2 11:33:03 XOA xo-server[166886]:       resident_on: 'OpaqueRef:1c3a1cae-b9be-980d-09f6-2ed35b498411',
Jan 2 11:33:03 XOA xo-server[166886]:       progress: 1,
Jan 2 11:33:03 XOA xo-server[166886]:       type: '<none/>',
Jan 2 11:33:03 XOA xo-server[166886]:       result: '',
Jan 2 11:33:03 XOA xo-server[166886]:       error_info: [Array],
Jan 2 11:33:03 XOA xo-server[166886]:       other_config: {},
Jan 2 11:33:03 XOA xo-server[166886]:       subtask_of: 'OpaqueRef:NULL',
Jan 2 11:33:03 XOA xo-server[166886]:       subtasks: [],
Jan 2 11:33:03 XOA xo-server[166886]:       backtrace: '(((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4367))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))'
Jan 2 11:33:03 XOA xo-server[166886]:     }
Jan 2 11:33:03 XOA xo-server[166886]:   },
Jan 2 11:33:03 XOA xo-server[166886]:   fn: 'destroy',
Jan 2 11:33:03 XOA xo-server[166886]:   arguments: [ 'OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d' ],
Jan 2 11:33:03 XOA xo-server[166886]:   pool: {
Jan 2 11:33:03 XOA xo-server[166886]:     uuid: '001cacae-1717-bad3-c2e4-078d64efe4f7',
Jan 2 11:33:03 XOA xo-server[166886]:     name_label: 'XenPool4'
Jan 2 11:33:03 XOA xo-server[166886]:   }
Jan 2 11:33:03 XOA xo-server[166886]: }
Jan 2 11:33:09 XOA xo-server[166903]: 2023-01-02T10:33:09.326Z xo:backups:mergeWorker INFO starting
Jan 2 11:33:09 XOA xo-server[166903]: 2023-01-02T10:33:09.378Z xo:backups:mergeWorker INFO merging VHD chain {
Jan 2 11:33:09 XOA xo-server[166903]:   chain: [
Jan 2 11:33:09 XOA xo-server[166903]:     '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20221231T230016Z.alias.vhd',
Jan 2 11:33:09 XOA xo-server[166903]:     '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20230101T230012Z.alias.vhd'
Jan 2 11:33:09 XOA xo-server[166903]:   ]
Jan 2 11:33:09 XOA xo-server[166903]: }
Is this related to NBD in any way?
Thank you and best wishes
KPS
-
@KPS VDI_IN_USE is not NBD-related. It may happen, but the retry should handle it correctly.
Thank you for your help
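The retry mentioned here is visible in the log above (`xo:xapi WARN retry` with `delay: 5000` before re-attempting the `VDI.destroy`). The general pattern looks roughly like this (a standalone Python sketch for illustration, not XO's actual JavaScript code; all names are made up):

```python
import time

class XapiError(Exception):
    """Minimal stand-in for xen-api's XapiError (illustrative only)."""
    def __init__(self, code, params):
        super().__init__(f"{code}({', '.join(params)})")
        self.code = code

def retry_on_vdi_in_use(fn, attempts=3, delay=0.01):
    """Retry a call while it fails with VDI_IN_USE, in the spirit of
    the WARN retry log entry (XO itself waits 5 s between attempts)."""
    for attempt in range(attempts):
        try:
            return fn()
        except XapiError as e:
            # Re-raise anything else, or give up after the last attempt.
            if e.code != 'VDI_IN_USE' or attempt == attempts - 1:
                raise
            time.sleep(delay)

# Simulated VDI.destroy that is still busy on the first two calls.
calls = {'n': 0}
def destroy_vdi():
    calls['n'] += 1
    if calls['n'] < 3:
        raise XapiError('VDI_IN_USE', ['OpaqueRef:...', 'destroy'])
    return 'destroyed'

print(retry_on_vdi_in_use(destroy_vdi))  # succeeds on the third attempt
```

The key point is that a transient VDI_IN_USE during cleanup is expected and non-fatal as long as a later attempt succeeds.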
-
One strange thing: according to the docs, NBD should only be used when the backup remote has "Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up TB but allows faster merge)" enabled.
But there is also an "INFO nbd client connected" message for remotes without that option.
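As an aside, the "faster merge" in that option's description comes from the layout: with one small file per data block, merging a delta into its parent only rewrites the changed blocks instead of the whole VHD. A toy Python illustration (my own simplification, not Xen Orchestra's real on-disk format):

```python
# Toy model of a block-per-file backup layout: block index -> content.
parent = {0: b'A', 1: b'B', 2: b'C'}  # full backup
delta = {1: b'B2'}                    # incremental: only block 1 changed

def merge_delta(parent_blocks, delta_blocks):
    """Merge a delta into its parent by overwriting only the changed
    block files; untouched blocks are never rewritten."""
    merged = dict(parent_blocks)
    merged.update(delta_blocks)
    return merged

merged = merge_delta(parent, delta)
print(merged)  # block 1 replaced, blocks 0 and 2 untouched
```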
-
@florent
Sorry, but next issue:
Backup size is always shown as 122 KiB, although 4 GB are saved to the remote. If I save to a remote without the "many files" setting, 4 GiB is shown.
-
@KPS that is a second bug
I will look into it this afternoon.
Edit: confirmed, I'm working on it