-
@ronan-a thanks. I've already deployed it with the script from the first post, and it seems to be working. I've opted to use redundancy=3 in a 3-host setup. It 'wastes' a lot of resources, but it seems to be the best option for performance and reliability.
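For a rough sense of the trade-off: with redundancy=3 on 3 hosts, every block is replicated to all three nodes, so usable capacity is the raw pool size divided by the replica count. A minimal sketch (the 1000 GiB per-host figure is hypothetical, just for illustration):

```shell
# Hypothetical numbers: 3 hosts each contributing 1000 GiB to the SR.
raw_gib=$(( 3 * 1000 ))
replicas=3
echo $(( raw_gib / replicas ))   # usable GiB with redundancy=3 -> 1000
```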
May I ask about a licensing issue now: if we upgrade to Vates VM, is the deployment method from the first message considered supported, or will everything need to be redone from XOA?
Thanks.
-
@ferrao said in XOSTOR hyperconvergence preview:
May I ask about a licensing issue now: if we upgrade to Vates VM, is the deployment method from the first message considered supported, or will everything need to be redone from XOA?
Regarding XOSTOR support licenses: in general, we prefer our users to start with a trial license through XOA, and then subscribe to a commercial license if they are interested.
To be more precise: the manual steps in this thread are still valid for configuring a LINSTOR SR; there is no difference from the XOA commands. However, if you wish to subscribe to a support license for a pool without XOA or a trial license, we are quite strict about the infrastructure being in a stable state. -
Anyone else getting a 301 error?
http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently
Trying other mirror.
-
@lover said in XOSTOR hyperconvergence preview:
Anyone else getting a 301 error?
http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently
Trying other mirror.
A 301 is not an error (in the sense of a failure); it's a redirect. Here it redirects correctly to a nearby mirror. In my case: https://mirror.uepg.br/xcp-ng/8/8.2/base/x86_64/repodata/repomd.xml
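You can see where the redirect points by looking at the Location header of the 301 response. A small sketch parsing a captured response (the header text below is illustrative; a live check would be `curl -sI` against the mirror URL):

```shell
# Extract the redirect target from a captured 301 response.
# The response text here is illustrative, not a live capture.
resp='HTTP/1.1 301 Moved Permanently
Location: https://mirror.uepg.br/xcp-ng/8/8.2/base/x86_64/repodata/repomd.xml'
printf '%s\n' "$resp" | awk '/^Location:/ {print $2}'
```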
-
See /etc/yum.repos.d/xcp-ng.repo and update all references from http:// to https://
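That edit can be scripted. A sketch on a scratch copy (on a real host you would run the same sed, after taking a backup, against /etc/yum.repos.d/xcp-ng.repo, then `yum clean metadata`; the repo stanza below is a minimal illustrative sample):

```shell
# Demo on a temp file; the stanza is a minimal sample, not the full repo file.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[xcp-ng-base]
name=XCP-ng Base Repository
baseurl=http://mirrors.xcp-ng.org/8/8.2/base/x86_64/
EOF
sed -i 's|=http://|=https://|' "$repo"   # flip every URL to https
grep '^baseurl' "$repo"                  # -> baseurl=https://mirrors.xcp-ng.org/8/8.2/base/x86_64/
rm -f "$repo"
```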
-
Hello all!
I have an issue with backing up to S3. I am hoping someone can point out the mistake I am making.
Our xcp-ng hosts are all up to date.

jonathon@jonathon-framework:~$ linstor --controller 10.2.0.10 remote l
╭───────────────────────────────────────────────────────────────────────╮
┊ Name                 ┊ Type ┊ Info                                      ┊
╞═══════════════════════════════════════════════════════════════════════╡
┊ linbit-velero-backup ┊ S3   ┊ us-east-1.s3.wasabisys.com/velero-preprod ┊
╰───────────────────────────────────────────────────────────────────────╯

jonathon@jonathon-framework:~$ linstor --controller 10.2.0.10 backup create linbit-velero-backup pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d
SUCCESS: Suspended IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-vtest-k8s01-worker02' for snapshot
SUCCESS: Suspended IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen02' for snapshot
SUCCESS: Suspended IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen03' for snapshot
SUCCESS: Suspended IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen01' for snapshot
SUCCESS: Took snapshot of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-vtest-k8s01-worker02'
SUCCESS: Took snapshot of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen02'
SUCCESS: Took snapshot of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen03'
SUCCESS: Took snapshot of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen01'
SUCCESS: Resumed IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-vtest-k8s01-worker02' after snapshot
SUCCESS: Resumed IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen01' after snapshot
SUCCESS: Resumed IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen02' after snapshot
SUCCESS: Resumed IO of '[pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d]' on 'ovbh-pprod-xen03' after snapshot
INFO: Generated snapshot name for backup of resource pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d to remote linbit-velero-backup
INFO: Shipping of resource pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d to remote linbit-velero-backup in progress.
SUCCESS: Started shipping of resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d'
SUCCESS: Started shipping of resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d'
SUCCESS: Started shipping of resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d'

jonathon@jonathon-framework:~$ linstor --controller 10.2.0.10 snapshot l
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                             ┊ SnapshotName         ┊ NodeNames                                            ┊ Volumes   ┊ CreatedOn           ┊ State    ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d ┊ back_20241002_191139 ┊ ovbh-pprod-xen01, ovbh-pprod-xen02, ovbh-pprod-xen03 ┊ 0: 50 GiB ┊ 2024-10-02 16:11:40 ┊ Shipping ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

jonathon@jonathon-framework:~$ linstor --controller 10.2.0.10 backup list linbit-velero-backup
╭────────────────────────────────────────────────────────╮
┊ Resource ┊ Snapshot ┊ Finished at ┊ Based On ┊ Status ┊
╞════════════════════════════════════════════════════════╡
╰────────────────────────────────────────────────────────╯

jonathon@jonathon-framework:~$ linstor --controller 10.2.0.10 snapshot l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                             ┊ SnapshotName         ┊ NodeNames                                            ┊ Volumes   ┊ CreatedOn           ┊ State      ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d ┊ back_20241002_191139 ┊ ovbh-pprod-xen01, ovbh-pprod-xen02, ovbh-pprod-xen03 ┊ 0: 50 GiB ┊ 2024-10-02 16:11:40 ┊ Successful ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Nothing shows up on S3.
And after enabling logs by modifying /usr/share/linstor-server/lib/conf/logback.xml, I see the following:

[19:15 ovbh-pprod-xen01 ~]# tail /var/log/linstor-satellite/linstor-Satellite.log -n 20
2024_10_02 19:11:41.511 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - stdErr: Device read short 40960 bytes remaining
2024_10_02 19:11:41.512 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - stdErr: Device read short 40960 bytes remaining
2024_10_02 19:11:41.513 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - stdErr: Device read short 40960 bytes remaining
2024_10_02 19:11:41.516 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - stdErr: Device read short 82432 bytes remaining
2024_10_02 19:11:41.543 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - End DeviceManager cycle 42
2024_10_02 19:11:41.543 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Begin DeviceManager cycle 43
2024_10_02 19:11:41.552 [MainWorkerPool-5] INFO  LINSTOR/Satellite - SYSTEM - Snapshot 'back_20241002_191139' of resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d' registered.
2024_10_02 19:11:41.553 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Aligning /dev/linstor_group/pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000 size from 52440040 KiB to 52441088 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2024_10_02 19:11:41.615 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d' [DRBD] adjusted.
2024_10_02 19:11:41.781 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - End DeviceManager cycle 43
2024_10_02 19:11:41.781 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Begin DeviceManager cycle 44
2024_10_02 19:11:47.220 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - stdErr: Incomplete copy_data, 4194304 bytes missing.
2024_10_02 19:11:47.295 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] WARN  LINSTOR/Satellite - SYSTEM - Exception occurred while checking for support of requester-pays on remote linbit-velero-backup. Defaulting to false
2024_10_02 19:11:47.307 [MainWorkerPool-7] INFO  LINSTOR/Satellite - SYSTEM - Snapshot 'back_20241002_191139' of resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d' registered.
2024_10_02 19:11:47.309 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Aligning /dev/linstor_group/pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000 size from 52440040 KiB to 52441088 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2024_10_02 19:11:47.312 [shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139] ERROR LINSTOR/Satellite - SYSTEM - [Report number 66FDD1AE-3AE91-000000]
2024_10_02 19:11:47.398 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d' [DRBD] adjusted.
2024_10_02 19:11:47.561 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - End DeviceManager cycle 44
2024_10_02 19:11:47.561 [DeviceManager] INFO  LINSTOR/Satellite - SYSTEM - Begin DeviceManager cycle 45
The error report:

[19:12 ovbh-pprod-xen01 ~]# cat /var/log/linstor-satellite/ErrorReport-66FDD1AE-3AE91-000000.log
ERROR REPORT 66FDD1AE-3AE91-000000

============================================================

Application:          LINBIT® LINSTOR
Module:               Satellite
Version:              1.26.1
Build ID:             12746ac9c6e7882807972c3df56e9a89eccad4e5
Build time:           2024-02-22T05:27:50+00:00
Error time:           2024-10-02 19:11:47
Node:                 ovbh-pprod-xen01
Thread:               shipping_pvc-7746af6f-d37e-4c5d-9f44-9616f2f9b33d_00000_back_20241002_191139

============================================================

Reported error:
===============

Category:             RuntimeException
Class name:           AbortedException
Class canonical name: com.amazonaws.AbortedException
Generated at:         Method 'handleInterruptedException', Source file 'AmazonHttpClient.java', Line #880

Error message:

Call backtrace:

    Method                       Native Class:Line number
    handleInterruptedException   N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:880
    execute                      N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:757
    access$500                   N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:715
    execute                      N      com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl:697
    execute                      N      com.amazonaws.http.AmazonHttpClient:561
    execute                      N      com.amazonaws.http.AmazonHttpClient:541
    invoke                       N      com.amazonaws.services.s3.AmazonS3Client:5516
    invoke                       N      com.amazonaws.services.s3.AmazonS3Client:5463
    abortMultipartUpload         N      com.amazonaws.services.s3.AmazonS3Client:3620
    abortMultipart               N      com.linbit.linstor.api.BackupToS3:199
    threadFinished               N      com.linbit.linstor.backupshipping.BackupShippingS3Daemon:320
    run                          N      com.linbit.linstor.backupshipping.BackupShippingS3Daemon:298
    run                          N      java.lang.Thread:829

Caused by:
==========

Category:             Exception
Class name:           SdkInterruptedException
Class canonical name: com.amazonaws.http.timers.client.SdkInterruptedException
Generated at:         Method 'checkInterrupted', Source file 'AmazonHttpClient.java', Line #935

Call backtrace:

    Method                       Native Class:Line number
    checkInterrupted             N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:935
    checkInterrupted             N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:921
    executeHelper                N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:1115
    doExecute                    N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:814
    executeWithTimer             N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:781
    execute                      N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:755
    access$500                   N      com.amazonaws.http.AmazonHttpClient$RequestExecutor:715
    execute                      N      com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl:697
    execute                      N      com.amazonaws.http.AmazonHttpClient:561
    execute                      N      com.amazonaws.http.AmazonHttpClient:541
    invoke                       N      com.amazonaws.services.s3.AmazonS3Client:5516
    invoke                       N      com.amazonaws.services.s3.AmazonS3Client:5463
    abortMultipartUpload         N      com.amazonaws.services.s3.AmazonS3Client:3620
    abortMultipart               N      com.linbit.linstor.api.BackupToS3:199
    threadFinished               N      com.linbit.linstor.backupshipping.BackupShippingS3Daemon:320
    run                          N      com.linbit.linstor.backupshipping.BackupShippingS3Daemon:298
    run                          N      java.lang.Thread:829

END OF ERROR REPORT.
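As an aside (my suggestion, not from the thread): rather than reading satellite log files directly, the report ID printed in the log can be fed back to the controller CLI (`linstor error-reports show`). A sketch, with the live call left commented since it needs a reachable controller:

```shell
# Extract the error-report ID from a satellite log line, then (on a live
# system) fetch the full report through the controller.
line='2024_10_02 19:11:47.312 [shipping] ERROR LINSTOR/Satellite - SYSTEM - [Report number 66FDD1AE-3AE91-000000]'
id=$(printf '%s\n' "$line" | sed -n 's/.*\[Report number \([^]]*\)\].*/\1/p')
echo "$id"   # -> 66FDD1AE-3AE91-000000
# linstor --controller 10.2.0.10 error-reports show "$id"
```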
-
Ok, so, it turns out this is because of the thin-send-recv package I built from https://github.com/LINBIT/thin-send-recv/tree/master. I just swapped the version I built for the last one I was able to get online, to test, and it works.
The last version I was able to get from any repository before they went 403 was thin-send-recv-1.0.1-1.x86_64.rpm.txt; I got it from https://piraeus.daocloud.io/linbit/rpms/7/x86_64/thin-send-recv-1.0.1-1.x86_64.rpm. FYI, https://packages.linbit.com/yum/sles12-sp2/drbd-9.0/x86_64/Packages/ returns 403s too, so no point looking for it there even if they have it hosted.
I built thin-send-recv-1.1.2-1.xcpng8.2.x86_64.rpm.txt using a doc I put together, thin-send-recv.txt. But this package I built results in the error posted previously.
So I am a bit at a loss: I want to be able to use Velero to back up PVs that are not managed by an operator with backup capabilities, but I do not want to be stuck on this old version that I cannot update.
Any advice would be greatly appreciated!
-
Ok great, I manually built 1.0.1 and it works just like the package I got online, which means that what I am doing works and the build process is correct.
The bad news is that there is a breaking change in v1.1.2, and I think I am potentially SOL.
I am going to build and test v1.1.0 and v1.1.1 to see which ones work. Never mind, v1.1.0 is also broken. So the change that breaks it is in here: https://github.com/LINBIT/thin-send-recv/compare/6b7c9002cd7716ff6ef93f5a5e8908032b81f853...e44f566ea0c975e2baa475868ebc176065a5b22d
v1.0.1 might just be the version that works with this version of LINSTOR, and whenever LINSTOR gets updated it might call for a newer version of thin-send-recv.
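Until a compatible newer build exists, one way to keep a routine yum update from replacing the working 1.0.1 build is to exclude the package in the yum configuration. This is a suggestion on my part, not something covered in the thread; a minimal fragment:

```ini
# /etc/yum.conf — keep the locally built thin-send-recv 1.0.1 in place;
# remove this line once a compatible newer build is available.
exclude=thin-send-recv
```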
-
@ronan-a might take a look if he finds a free minute (which is complicated right now).