backup using http instead of https
-
@olivierlambert
NBD sounds great. I did activate it, but is there any possibility to see if it is used for a backup?
-
I think @florent added some debug in the xo-server output. Did you make some benchmarks?
-
@olivierlambert
I want to do some benchmarking
-
@olivierlambert @KPS You need to activate debug by setting filter='xo:backups:DeltaBackupWriter' in the [logs] part of the config. Don't forget to set useNbd to true in your [backups] config. If NBD works, you will have messages like "got nbd connection"; if there is an NBD error, you will have "can't connect to nbd server or no server available".
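In config.toml that amounts to something like the sketch below (only the two settings described above; the file location depends on your install, e.g. ~/.config/xo-server/config.toml for XO from sources is an assumption here, and other entries are omitted):

```toml
# Sketch of the two settings described above; file location is install-dependent.

[logs]
# show the DeltaBackupWriter debug messages
filter = 'xo:backups:DeltaBackupWriter'

[backups]
# enable NBD for delta backup disk exports
useNbd = true
```

Restart xo-server afterwards so the changes are picked up.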
-
@florent
I added the filter entry and already had "useNbd" in place, and did a restart, but there is nothing about NBD in syslog...
Can you give me a hint as to what I did wrong? Using XOCE with xo-server 5.107.4.
This is my log:
Dec 23 14:31:38 XOA kernel: [ 266.374438] CIFS: Attempting to mount //xxx\xxx/XOAnbd
Dec 23 14:31:48 XOA xo-server[758]: 2022-12-23T13:31:48.382Z xo:backups:mergeWorker INFO starting
Dec 23 14:31:48 XOA xo-server[758]: 2022-12-23T13:31:48.434Z xo:backups:mergeWorker INFO merging VHD chain {
Dec 23 14:31:48 XOA xo-server[758]: chain: [
Dec 23 14:31:48 XOA xo-server[758]: '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20221223T125553Z.alias.vhd',
Dec 23 14:31:48 XOA xo-server[758]: '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20221223T132728Z.alias.vhd'
Dec 23 14:31:48 XOA xo-server[758]: ]
Dec 23 14:31:48 XOA xo-server[758]: }
Dec 23 14:31:52 XOA xo-server[758]: 2022-12-23T13:31:52.358Z xo:backups:mergeWorker FATAL ENOENT: no such file or directory, rename '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/_20221223T133147Z-oj2uwisl1vr' -> '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/__20221223T133147Z-oj2uwisl1vr' {
Dec 23 14:31:52 XOA xo-server[758]: error: [Error: ENOENT: no such file or directory, rename '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/_20221223T133147Z-oj2uwisl1vr' -> '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/__20221223T133147Z-oj2uwisl1vr'] {
Dec 23 14:31:52 XOA xo-server[758]: errno: -2,
Dec 23 14:31:52 XOA xo-server[758]: code: 'ENOENT',
Dec 23 14:31:52 XOA xo-server[758]: syscall: 'rename',
Dec 23 14:31:52 XOA xo-server[758]: path: '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/_20221223T133147Z-oj2uwisl1vr',
Dec 23 14:31:52 XOA xo-server[758]: dest: '/run/xo-server/mounts/cebefbdf-8fba-49ed-b821-47cd7525987b/xo-vm-backups/.queue/clean-vm/__20221223T133147Z-oj2uwisl1vr',
Dec 23 14:31:52 XOA xo-server[758]: syncStack: 'Error\n' +
Dec 23 14:31:52 XOA xo-server[758]: ' at LocalHandler.addSyncStackTrace [as _addSyncStackTrace] (/opt/xen-orchestra/@xen-orchestra/fs/dist/local.js:20:26)\n' +
Dec 23 14:31:52 XOA xo-server[758]: ' at LocalHandler._rename (/opt/xen-orchestra/@xen-orchestra/fs/dist/local.js:152:17)\n' +
Dec 23 14:31:52 XOA xo-server[758]: ' at #rename (/opt/xen-orchestra/@xen-orchestra/fs/dist/abstract.js:284:49)\n' +
Dec 23 14:31:52 XOA xo-server[758]: ' at #rename (/opt/xen-orchestra/@xen-orchestra/fs/dist/abstract.js:292:28)'
Dec 23 14:31:52 XOA xo-server[758]: }
Dec 23 14:31:52 XOA xo-server[758]: }
The backup is a delta backup to an SMB remote with "multiple files" enabled.
-
@KPS The NBD part is during the transfer; this part of the log is during the merge.
There can be a merge before the transfer, and another after.
Is it possible for you to test a new branch based on master? I can make a quick branch with more information on the NBD state before my holidays.
-
@florent
That's strange... There is nothing else in the logs. The first entry after starting the job is the "mount" entry... I can update immediately if you want to push this prior to the holiday, but, to be fair, this is currently just a proof of concept for a new installation in 2023, so no stress.
-
@KPS it's an easy win, so here is your Christmas present
https://github.com/vatesfr/xen-orchestra/pull/6596
The message should show if NBD is really activated or not in XO, if there is an NBD-enabled network on the host, and if the connection is successful.
-
@florent
Hi Florent!
I just installed the patch...
Updated to version 5.107.5 / 5.109.0... but still nothing in the logs about NBD. I have no idea what I did wrong.
The log still only contains the information about the merge and the mounts.
-
@florent
Please do not invest any time in my debugging. I first have to really understand why my changes to config.toml seem to be inactive; I think I did something wrong. As I will buy XOA soon, this does not matter for the future, but my current testing only makes sense if I understand the build process of my XOCE...
-
@KPS
Strange.
Does the job succeed? No error message in the UI?
That's strange, you should have something like this:
2022-12-23T14:35:00.387Z xo:backups:MixinBackupWriter INFO merging VHD chain { chain: [ '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/e8c486de-293a-46b4-8964-34537383a9fd/20221221T135820Z.alias.vhd', '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/e8c486de-293a-46b4-8964-34537383a9fd/20221221T140054Z.alias.vhd' ] }
2022-12-23T14:35:00.413Z xo:backups:MixinBackupWriter INFO merging VHD chain { chain: [ '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/f1c4f818-6b6f-490b-9955-4920046ba695/20221221T135820Z.alias.vhd', '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/f1c4f818-6b6f-490b-9955-4920046ba695/20221221T140054Z.alias.vhd' ] }
2022-12-23T14:35:17.119Z xo:backups:DeltaBackupWriter INFO use nbd is activated
2022-12-23T14:35:17.120Z xo:backups:DeltaBackupWriter INFO use nbd is activated
2022-12-23T14:35:17.359Z xo:backups:DeltaBackupWriter INFO got nbd info { nbdInfo: { exportname: '/38ebcc3e-04eb-44ad-be6d-2cec55ef6557?session_id=OpaqueRef:43de5d35-6d78-44ba-8143-fc39eda691ed', address: '172.16.210.11', port: 10809, cert: '-----BEGIN CERTIFICATE-----\n' + 'MIIC0DCCAbigAwIBAgIJAJYT6F6eRgKmMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV\n' + <REDACTED CERTIFICATE> 'eJqPRd+mowBDpbf4O3Av7ZkmiUZkHOHEQJan5w/0KQwEkXmwPAJPuxlGHxtOXq0+\n' + 'SsFaZA==\n' + '-----END CERTIFICATE-----', subject: 'R620-1' }, id: 'OpaqueRef:1551d934-9d2f-4e12-9b50-a3ba0518c94c' }
2022-12-23T14:35:17.359Z xo:backups:DeltaBackupWriter INFO nbd client instantiated { vdi: 'OpaqueRef:1551d934-9d2f-4e12-9b50-a3ba0518c94c' }
2022-12-23T14:35:17.365Z xo:backups:DeltaBackupWriter INFO got nbd info { nbdInfo: { exportname: '/319a266c-9909-4210-aae4-41041747d607?session_id=OpaqueRef:43de5d35-6d78-44ba-8143-fc39eda691ed', address: '172.16.210.11', port: 10809, cert: '-----BEGIN CERTIFICATE-----\n' + < REDACTED CERTIFICATE > 'SsFaZA==\n' + '-----END CERTIFICATE-----', subject: 'R620-1' }, id: 'OpaqueRef:01ac600a-8bb5-4fda-83ca-ee8635c631ab' }
2022-12-23T14:35:17.365Z xo:backups:DeltaBackupWriter INFO nbd client instantiated { vdi: 'OpaqueRef:01ac600a-8bb5-4fda-83ca-ee8635c631ab' }
2022-12-23T14:35:17.451Z xo:backups:DeltaBackupWriter WARN can't connect to nbd server or no server available { error: Error: stream has ended without data at readChunkStrict (/home/florent/Documents/xen-orchestra/@vates/read-chunk/index.js:39:11) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async NbdClient.#handshake (/home/florent/Documents/xen-orchestra/@vates/nbd-client/index.js:110:13) at async NbdClient.connect (/home/florent/Documents/xen-orchestra/@vates/nbd-client/index.js:84:5) at async /home/florent/Documents/xen-orchestra/@xen-orchestra/backups/writers/DeltaBackupWriter.js:212:15 at async Promise.all (index 1) at async /home/florent/Documents/xen-orchestra/@xen-orchestra/backups/writers/DeltaBackupWriter.js:175:7, vdi: 'OpaqueRef:01ac600a-8bb5-4fda-83ca-ee8635c631ab' }
2022-12-23T14:35:18.594Z xo:backups:DeltaBackupWriter INFO nbd client connected { vdi: 'OpaqueRef:1551d934-9d2f-4e12-9b50-a3ba0518c94c' }
2022-12-23T14:35:33.466Z xo:backups:mergeWorker INFO starting
2022-12-23T14:35:33.536Z xo:backups:mergeWorker INFO merging VHD chain { chain: [ '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/e8c486de-293a-46b4-8964-34537383a9fd/20221221T140054Z.alias.vhd', '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/e8c486de-293a-46b4-8964-34537383a9fd/20221223T142751Z.alias.vhd' ] }
2022-12-23T14:35:33.545Z xo:backups:mergeWorker INFO merging VHD chain { chain: [ '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/f1c4f818-6b6f-490b-9955-4920046ba695/20221221T140054Z.alias.vhd', '/xo-vm-backups/93454bd8-d763-96f7-d230-50b6545122be/vdis/18e8af20-1235-469c-a417-b5dcd754d933/f1c4f818-6b6f-490b-9955-4920046ba695/20221223T142751Z.alias.vhd' ] }
-
@florent said in backup using http instead of https:
@KPS it's an easy win, so here is your Christmas present
https://github.com/vatesfr/xen-orchestra/pull/6596
the message should show if NBD is really activated or not in XO, if there is an NBD-enabled network on the host, and if the connection is successful

Just wondering, could NBD also be used in the future for speeding up live migrations with local storage?
-
It's possible we (with XenServer) decide to use NBD everywhere (i.e. in SMAPIv3 for storage migration, since there's no dedicated format at all stages, unlike SMAPIv1 with VHD).
-
@florent
My problem seems to be that NBD is not getting activated/tried:
Dec 27 09:35:59 XOA xo-server[6477]: 2022-12-27T08:35:59.951Z xo:backups:DeltaBackupWriter INFO use nbd is NOT activated
...although:
- useNbd = true is set
- Backup remote is an SMB share with the setting "Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up TB but allows faster merge)" enabled
- NBD network is activated on one interface that both XO and XCP-ng can work with (see the sketch below)
Is there any other dependency that stops NBD from getting used?
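For context, this is roughly how the NBD purpose on a host network is usually checked and enabled with xe; a sketch only, and <network-uuid> is a placeholder, not a value from this setup:

```sh
# list networks with their purposes to see whether NBD is already enabled
xe network-list params=uuid,name-label,purpose

# add the NBD purpose to the chosen network (replace <network-uuid> with the real UUID)
xe network-param-add uuid=<network-uuid> param-name=purpose param-key=nbd
```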
-
@florent could tell, I think
-
@florent
Thank you for the update. NBD seems to be used, but there is an error after that which I do not understand (VDI_IN_USE):
Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.954Z xo:backups:DeltaBackupWriter INFO use nbd is activated
Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.995Z xo:backups:DeltaBackupWriter INFO got nbd info {
Jan 2 11:32:53 XOA xo-server[166886]: nbdInfo: {
Jan 2 11:32:53 XOA xo-server[166886]: exportname: '/ed614338-1f35-4faf-b679-cc24bcd6f566?session_id=OpaqueRef:c9997f76-072c-415d-8624-91da8bd95489',
Jan 2 11:32:53 XOA xo-server[166886]: address: '172.31.0.143',
Jan 2 11:32:53 XOA xo-server[166886]: port: 10809,
Jan 2 11:32:53 XOA xo-server[166886]: cert: '-----BEGIN CERTIFICATE-----\n' +
Jan 2 11:32:53 XOA xo-server[166886]: 'xxxx +
Jan 2 11:32:53 XOA xo-server[166886]: 'xxxx' +
Jan 2 11:32:53 XOA xo-server[166886]: '' +
Jan 2 11:32:53 XOA xo-server[166886]: '-----END CERTIFICATE-----',
Jan 2 11:32:53 XOA xo-server[166886]: subject: 'xenpool4host3'
Jan 2 11:32:53 XOA xo-server[166886]: },
Jan 2 11:32:53 XOA xo-server[166886]: id: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7'
Jan 2 11:32:53 XOA xo-server[166886]: }
Jan 2 11:32:53 XOA xo-server[166886]: 2023-01-02T10:32:53.997Z xo:backups:DeltaBackupWriter INFO nbd client instantiated { vdi: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7' }
Jan 2 11:32:57 XOA xo-server[166886]: 2023-01-02T10:32:57.876Z xo:backups:DeltaBackupWriter INFO nbd client connected { vdi: 'OpaqueRef:1dba9109-30fa-454e-86d1-1a89d6d3d6c7' }
Jan 2 11:33:03 XOA xo-server[166886]: 2023-01-02T10:33:03.061Z xo:xapi WARN retry {
Jan 2 11:33:03 XOA xo-server[166886]: attemptNumber: 0,
Jan 2 11:33:03 XOA xo-server[166886]: delay: 5000,
Jan 2 11:33:03 XOA xo-server[166886]: error: XapiError: VDI_IN_USE(OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d, destroy)
Jan 2 11:33:03 XOA xo-server[166886]: at XapiError.wrap (/opt/xen-orchestra/packages/xen-api/dist/_XapiError.js:21:12)
Jan 2 11:33:03 XOA xo-server[166886]: at _default (/opt/xen-orchestra/packages/xen-api/dist/_getTaskResult.js:18:38)
Jan 2 11:33:03 XOA xo-server[166886]: at Xapi._addRecordToCache (/opt/xen-orchestra/packages/xen-api/dist/index.js:748:51)
Jan 2 11:33:03 XOA xo-server[166886]: at /opt/xen-orchestra/packages/xen-api/dist/index.js:781:14
Jan 2 11:33:03 XOA xo-server[166886]: at Array.forEach (<anonymous>)
Jan 2 11:33:03 XOA xo-server[166886]: at Xapi._processEvents (/opt/xen-orchestra/packages/xen-api/dist/index.js:769:12)
Jan 2 11:33:03 XOA xo-server[166886]: at Xapi._watchEvents (/opt/xen-orchestra/packages/xen-api/dist/index.js:901:14)
Jan 2 11:33:03 XOA xo-server[166886]: at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
Jan 2 11:33:03 XOA xo-server[166886]: code: 'VDI_IN_USE',
Jan 2 11:33:03 XOA xo-server[166886]: params: [ 'OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d', 'destroy' ],
Jan 2 11:33:03 XOA xo-server[166886]: call: undefined,
Jan 2 11:33:03 XOA xo-server[166886]: url: undefined,
Jan 2 11:33:03 XOA xo-server[166886]: task: task {
Jan 2 11:33:03 XOA xo-server[166886]: uuid: '644e680d-74ed-32cb-8ffd-ee5f29c47fed',
Jan 2 11:33:03 XOA xo-server[166886]: name_label: 'Async.VDI.destroy',
Jan 2 11:33:03 XOA xo-server[166886]: name_description: '',
Jan 2 11:33:03 XOA xo-server[166886]: allowed_operations: [],
Jan 2 11:33:03 XOA xo-server[166886]: current_operations: {},
Jan 2 11:33:03 XOA xo-server[166886]: created: '20230102T10:33:02Z',
Jan 2 11:33:03 XOA xo-server[166886]: finished: '20230102T10:33:02Z',
Jan 2 11:33:03 XOA xo-server[166886]: status: 'failure',
Jan 2 11:33:03 XOA xo-server[166886]: resident_on: 'OpaqueRef:1c3a1cae-b9be-980d-09f6-2ed35b498411',
Jan 2 11:33:03 XOA xo-server[166886]: progress: 1,
Jan 2 11:33:03 XOA xo-server[166886]: type: '<none/>',
Jan 2 11:33:03 XOA xo-server[166886]: result: '',
Jan 2 11:33:03 XOA xo-server[166886]: error_info: [Array],
Jan 2 11:33:03 XOA xo-server[166886]: other_config: {},
Jan 2 11:33:03 XOA xo-server[166886]: subtask_of: 'OpaqueRef:NULL',
Jan 2 11:33:03 XOA xo-server[166886]: subtasks: [],
Jan 2 11:33:03 XOA xo-server[166886]: backtrace: '(((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4367))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))'
Jan 2 11:33:03 XOA xo-server[166886]: }
Jan 2 11:33:03 XOA xo-server[166886]: },
Jan 2 11:33:03 XOA xo-server[166886]: fn: 'destroy',
Jan 2 11:33:03 XOA xo-server[166886]: arguments: [ 'OpaqueRef:d7a06e27-cba7-475c-b309-0a72e889444d' ],
Jan 2 11:33:03 XOA xo-server[166886]: pool: {
Jan 2 11:33:03 XOA xo-server[166886]: uuid: '001cacae-1717-bad3-c2e4-078d64efe4f7',
Jan 2 11:33:03 XOA xo-server[166886]: name_label: 'XenPool4'
Jan 2 11:33:03 XOA xo-server[166886]: }
Jan 2 11:33:03 XOA xo-server[166886]: }
Jan 2 11:33:09 XOA xo-server[166903]: 2023-01-02T10:33:09.326Z xo:backups:mergeWorker INFO starting
Jan 2 11:33:09 XOA xo-server[166903]: 2023-01-02T10:33:09.378Z xo:backups:mergeWorker INFO merging VHD chain {
Jan 2 11:33:09 XOA xo-server[166903]: chain: [
Jan 2 11:33:09 XOA xo-server[166903]: '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20221231T230016Z.alias.vhd',
Jan 2 11:33:09 XOA xo-server[166903]: '/xo-vm-backups/4f54db87-ddec-3ecb-333e-150514683c0b/vdis/d4984b22-8028-49ae-9f76-849406fa4d83/2b88b90c-4f44-40e2-bcb2-b15248dd12ce/20230101T230012Z.alias.vhd'
Jan 2 11:33:09 XOA xo-server[166903]: ]
Jan 2 11:33:09 XOA xo-server[166903]: }
Is this related to NBD in any way?
Thank you and best wishes
KPS
-
@KPS VDI_IN_USE is not NBD related; it may happen, but the retry should handle this correctly.
Thank you for your help.
-
One strange thing: according to the docs, NBD should only be used when there is a remote storage with "Store backup as multiple data blocks instead of a whole VHD file. (creates 500-1000 files per backed up TB but allows faster merge)" enabled.
But there is also the message "INFO nbd client connected" for a remote without that option.
-
@florent
Sorry, but next issue:
The backup size is always shown as 122 KiB although there are 4 GB saved to the remote. If I save to a remote without the "multiple files" setting, 4 GiB is shown.