OK, we have debugged and improved this process, so I'm including it here in case it helps anyone else.
How to migrate resources between XOSTOR (LINSTOR) clusters. This also works with piraeus-operator, which we use for k8s.
Manually moving a LINSTOR resource with thin_send_recv
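Before starting, confirm that thin_send_recv is installed on both ends and that root SSH from the source node to the target works non-interactively. A quick sanity check (the hostnames are the ones used in the examples below):

#check the tools on both ends (run from the source node)
which thin_send
ssh root@new-xostor-server-01 'which thin_recv'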
Migration of data
Commands
# PV: pvc-6408a214-6def-44c4-8d9a-bebb67be5510
# snapshot: pgdata-snapshot
# size: 10741612544B
#get size
lvs --noheadings --units B -o lv_size linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000
#prep
lvcreate -V 10741612544B --thinpool linstor_group/thin_device -n pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000 linstor_group
#create snapshot
linstor --controller original-xostor-server s create pvc-6408a214-6def-44c4-8d9a-bebb67be5510 pgdata-snapshot
#send
thin_send linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000_pgdata-snapshot 2>/dev/null | ssh root@new-xostor-server-01 thin_recv linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000 2>/dev/null
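The same steps as a parameterized sketch, run from the source node. This assumes the linstor client is available there and the same linstor_group/thin_device pool names as above; the variable names are illustrative, not part of the original procedure.

#!/bin/bash
set -euo pipefail

PV=pvc-6408a214-6def-44c4-8d9a-bebb67be5510   # LINSTOR resource / PV name
SNAP=pgdata-snapshot                          # snapshot name
DST=new-xostor-server-01                      # target node

#get the exact size in bytes from the source LV
SIZE=$(lvs --noheadings --units B -o lv_size "linstor_group/${PV}_00000" | tr -d ' ')

#prep: create a thin LV of the same size on the target
ssh "root@${DST}" "lvcreate -V ${SIZE} --thinpool linstor_group/thin_device -n ${PV}_00000 linstor_group"

#create snapshot on the old cluster
linstor --controller original-xostor-server s create "${PV}" "${SNAP}"

#send (stderr must be discarded on both ends)
thin_send "linstor_group/${PV}_00000_${SNAP}" 2>/dev/null | ssh "root@${DST}" "thin_recv linstor_group/${PV}_00000" 2>/dev/null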
Walk-through
Prep migration
[13:29 original-xostor-server ~]# lvs --noheadings --units B -o lv_size linstor_group/pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000
26851934208B
[13:53 new-xostor-server-01 ~]# lvcreate -V 26851934208B --thinpool linstor_group/thin_device -n pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000 linstor_group
Logical volume "pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000" created.
Create snapshot
[15:35:03] jonathon@jonathon-framework:~$ linstor --controller original-xostor-server s create pvc-12aca72c-d94a-4c09-8102-0a6646906f8d s_test
SUCCESS:
Description:
New snapshot 's_test' of resource 'pvc-12aca72c-d94a-4c09-8102-0a6646906f8d' registered.
Details:
Snapshot 's_test' of resource 'pvc-12aca72c-d94a-4c09-8102-0a6646906f8d' UUID is: 3a07d2fd-6dc3-4994-b13f-8c3a2bb206b8
SUCCESS:
Suspended IO of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'ovbh-vprod-k8s04-worker02' for snapshot
SUCCESS:
Suspended IO of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'original-xostor-server' for snapshot
SUCCESS:
Took snapshot of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'ovbh-vprod-k8s04-worker02'
SUCCESS:
Took snapshot of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'original-xostor-server'
SUCCESS:
Resumed IO of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'ovbh-vprod-k8s04-worker02' after snapshot
SUCCESS:
Resumed IO of '[pvc-12aca72c-d94a-4c09-8102-0a6646906f8d]' on 'original-xostor-server' after snapshot
Migration
[13:53 original-xostor-server ~]# thin_send /dev/linstor_group/pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000_s_test 2>/dev/null | ssh root@new-xostor-server-01 thin_recv linstor_group/pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000 2>/dev/null
Errors have to be discarded (2>/dev/null) on both ends of the command or it will fail.
The data-migration process is the same for replica-1 and replica-3. For replica-3 you can target new-xostor-server-01 each time; for replica-1, be sure to spread the volumes across the new nodes appropriately.
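Because stderr is thrown away on both ends, there is no progress feedback. If pv happens to be installed on the source node (optional, not part of the original procedure), it can be spliced into the pipe to watch throughput:

thin_send /dev/linstor_group/pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000_s_test 2>/dev/null | pv | ssh root@new-xostor-server-01 thin_recv linstor_group/pvc-12aca72c-d94a-4c09-8102-0a6646906f8d_00000 2>/dev/null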
Replica-3 Setup
Explanation
thin_send only copies the data to new-xostor-server-01, so you will need to run commands afterwards to force a sync of the data to the other replicas.
Commands
# PV: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
# snapshot: snipeit-snapshot
# size: 21483225088B
#get size
lvs --noheadings --units B -o lv_size linstor_group/pvc-96cbebbe-f827-4a47-ae95-38b078e0d584_00000
#prep
lvcreate -V 21483225088B --thinpool linstor_group/thin_device -n pvc-96cbebbe-f827-4a47-ae95-38b078e0d584_00000 linstor_group
#create snapshot
linstor --controller original-xostor-server s create pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 snipeit-snapshot
linstor --controller original-xostor-server s l | grep -e 'snipeit-snapshot'
#send
thin_send linstor_group/pvc-96cbebbe-f827-4a47-ae95-38b078e0d584_00000_snipeit-snapshot 2>/dev/null | ssh root@new-xostor-server-01 thin_recv linstor_group/pvc-96cbebbe-f827-4a47-ae95-38b078e0d584_00000 2>/dev/null
#linstor setup
linstor --controller new-xostor-server-01 resource-definition create pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 --resource-group sc-74e1434b-b435-587e-9dea-fa067deec898
linstor --controller new-xostor-server-01 volume-definition create pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 21483225088B --storage-pool xcp-sr-linstor_group_thin_device
linstor --controller new-xostor-server-01 resource create --storage-pool xcp-sr-linstor_group_thin_device --providers LVM_THIN new-xostor-server-01 pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
linstor --controller new-xostor-server-01 resource create --auto-place +1 pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
#Run the following on the node with the data. This is the preferred command
drbdadm invalidate-remote pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
#Run the following on the node without the data. This is just for reference
drbdadm invalidate pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
linstor --controller new-xostor-server-01 r l | grep -e 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584'
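With the data transferred and the LINSTOR resource replicated, re-register the volume in Kubernetes by applying a PV/PVC pair like the following (the name, namespace, size, and storage class are from this example; adjust them to your volume):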
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
  annotations:
    pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
  finalizers:
    - external-provisioner.volume.kubernetes.io/finalizer
    - kubernetes.io/pv-protection
    - external-attacher/linstor-csi-linbit-com
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi # Ensure this matches the actual size of the LINSTOR volume
  persistentVolumeReclaimPolicy: Retain
  storageClassName: linstor-replica-three # Adjust to the storage class you want to use
  volumeMode: Filesystem
  csi:
    driver: linstor.csi.linbit.com
    fsType: ext4
    volumeHandle: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
    volumeAttributes:
      linstor.csi.linbit.com/mount-options: ''
      linstor.csi.linbit.com/post-mount-xfs-opts: ''
      linstor.csi.linbit.com/uses-volume-context: 'true'
      linstor.csi.linbit.com/remote-access-policy: 'true'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
    volume.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
  finalizers:
    - kubernetes.io/pvc-protection
  name: pp-snipeit-pvc
  namespace: snipe-it
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: linstor-replica-three
  volumeMode: Filesystem
  volumeName: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
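Apply the two manifests and confirm the claim binds to the pre-created PV (standard kubectl; the file name is illustrative):

kubectl apply -f pv-pvc-snipeit.yaml
kubectl get pv pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
kubectl get pvc pp-snipeit-pvc -n snipe-it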
Walk-through
jonathon@jonathon-framework:~$ linstor --controller new-xostor-server-01 resource-definition create pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 --resource-group sc-74e1434b-b435-587e-9dea-fa067deec898
SUCCESS:
Description:
New resource definition 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' created.
Details:
Resource definition 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' UUID is: 772692e2-3fca-4069-92e9-2bef22c68a6f
jonathon@jonathon-framework:~$ linstor --controller new-xostor-server-01 volume-definition create pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 21483225088B --storage-pool xcp-sr-linstor_group_thin_device
SUCCESS:
Successfully set property key(s): StorPoolName
SUCCESS:
New volume definition with number '0' of resource definition 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' created.
jonathon@jonathon-framework:~$ linstor --controller new-xostor-server-01 resource create --storage-pool xcp-sr-linstor_group_thin_device --providers LVM_THIN new-xostor-server-01 pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
SUCCESS:
Successfully set property key(s): StorPoolName
INFO:
Updated pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 DRBD auto verify algorithm to 'crct10dif-pclmul'
SUCCESS:
Description:
New resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on node 'new-xostor-server-01' registered.
Details:
Resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on node 'new-xostor-server-01' UUID is: 3072aaae-4a34-453e-bdc6-facb47809b3d
SUCCESS:
Description:
Volume with number '0' on resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on node 'new-xostor-server-01' successfully registered
Details:
Volume UUID is: 52b11ef6-ec50-42fb-8710-1d3f8c15c657
SUCCESS:
Created resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-01'
jonathon@jonathon-framework:~$ linstor --controller new-xostor-server-01 resource create --auto-place +1 pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
SUCCESS:
Successfully set property key(s): StorPoolName
SUCCESS:
Successfully set property key(s): StorPoolName
SUCCESS:
Description:
Resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' successfully autoplaced on 2 nodes
Details:
Used nodes (storage pool name): 'new-xostor-server-02 (xcp-sr-linstor_group_thin_device)', 'new-xostor-server-03 (xcp-sr-linstor_group_thin_device)'
INFO:
Resource-definition property 'DrbdOptions/Resource/quorum' updated from 'off' to 'majority' by auto-quorum
INFO:
Resource-definition property 'DrbdOptions/Resource/on-no-quorum' updated from 'off' to 'suspend-io' by auto-quorum
SUCCESS:
Added peer(s) 'new-xostor-server-02', 'new-xostor-server-03' to resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-01'
SUCCESS:
Created resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-02'
SUCCESS:
Created resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-03'
SUCCESS:
Description:
Resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-03' ready
Details:
Auto-placing resource: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
SUCCESS:
Description:
Resource 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584' on 'new-xostor-server-02' ready
Details:
Auto-placing resource: pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
At this point all three resources report UpToDate, but only new-xostor-server-01 actually holds the data (note the allocated sizes):
jonathon@jonathon-framework:~$ linstor --controller new-xostor-server-01 v l | grep -e 'pvc-96cbebbe-f827-4a47-ae95-38b078e0d584'
| new-xostor-server-01 | pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 | xcp-sr-linstor_group_thin_device | 0 | 1032 | /dev/drbd1032 | 9.20 GiB | Unused | UpToDate |
| new-xostor-server-02 | pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 | xcp-sr-linstor_group_thin_device | 0 | 1032 | /dev/drbd1032 | 112.73 MiB | Unused | UpToDate |
| new-xostor-server-03 | pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 | xcp-sr-linstor_group_thin_device | 0 | 1032 | /dev/drbd1032 | 112.73 MiB | Unused | UpToDate |
To force the sync, run the following command on the node with the data
drbdadm invalidate-remote pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
This will kick it to get the data re-synced.
[14:51 new-xostor-server-01 ~]# drbdadm invalidate-remote pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
[14:51 new-xostor-server-01 ~]# drbdadm status pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 role:Secondary
disk:UpToDate
new-xostor-server-02 role:Secondary
replication:SyncSource peer-disk:Inconsistent done:1.14
new-xostor-server-03 role:Secondary
replication:SyncSource peer-disk:Inconsistent done:1.18
[14:51 new-xostor-server-01 ~]# drbdadm status pvc-96cbebbe-f827-4a47-ae95-38b078e0d584
pvc-96cbebbe-f827-4a47-ae95-38b078e0d584 role:Secondary
disk:UpToDate
new-xostor-server-02 role:Secondary
peer-disk:UpToDate
new-xostor-server-03 role:Secondary
peer-disk:UpToDate
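To keep an eye on the resync without re-running the status command by hand, watch works (optional, not part of the original procedure):

watch -n5 drbdadm status pvc-96cbebbe-f827-4a47-ae95-38b078e0d584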
See: https://github.com/LINBIT/linstor-server/issues/389
Replica-1 Setup
# PV: pvc-6408a214-6def-44c4-8d9a-bebb67be5510
# snapshot: pgdata-snapshot
# size: 10741612544B
#get size
lvs --noheadings --units B -o lv_size linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000
#prep
lvcreate -V 10741612544B --thinpool linstor_group/thin_device -n pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000 linstor_group
#create snapshot
linstor --controller original-xostor-server s create pvc-6408a214-6def-44c4-8d9a-bebb67be5510 pgdata-snapshot
#send
thin_send linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000_pgdata-snapshot 2>/dev/null | ssh root@new-xostor-server-01 thin_recv linstor_group/pvc-6408a214-6def-44c4-8d9a-bebb67be5510_00000 2>/dev/null
#linstor setup
linstor --controller new-xostor-server-01 resource-definition create pvc-6408a214-6def-44c4-8d9a-bebb67be5510 --resource-group sc-b066e430-6206-5588-a490-cc91ecef53d6
linstor --controller new-xostor-server-01 volume-definition create pvc-6408a214-6def-44c4-8d9a-bebb67be5510 10741612544B --storage-pool xcp-sr-linstor_group_thin_device
linstor --controller new-xostor-server-01 resource create new-xostor-server-01 pvc-6408a214-6def-44c4-8d9a-bebb67be5510
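Then re-register the volume in Kubernetes. For replica-1 the PV pins the volume to the node that holds the data, via the remote-access-policy attribute and nodeAffinity (values below are from this example; adjust to your node and storage class):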
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-6408a214-6def-44c4-8d9a-bebb67be5510
  annotations:
    pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
  finalizers:
    - external-provisioner.volume.kubernetes.io/finalizer
    - kubernetes.io/pv-protection
    - external-attacher/linstor-csi-linbit-com
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi # Ensure this matches the actual size of the LINSTOR volume
  persistentVolumeReclaimPolicy: Retain
  storageClassName: linstor-replica-one-local # Adjust to the storage class you want to use
  volumeMode: Filesystem
  csi:
    driver: linstor.csi.linbit.com
    fsType: ext4
    volumeHandle: pvc-6408a214-6def-44c4-8d9a-bebb67be5510
    volumeAttributes:
      linstor.csi.linbit.com/mount-options: ''
      linstor.csi.linbit.com/post-mount-xfs-opts: ''
      linstor.csi.linbit.com/uses-volume-context: 'true'
      linstor.csi.linbit.com/remote-access-policy: |
        - fromSame:
            - xcp-ng/node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: xcp-ng/node
              operator: In
              values:
                - new-xostor-server-01
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
    volume.kubernetes.io/selected-node: ovbh-vtest-k8s01-worker01
    volume.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
  finalizers:
    - kubernetes.io/pvc-protection
  name: acid-merch-2
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linstor-replica-one-local
  volumeMode: Filesystem
  volumeName: pvc-6408a214-6def-44c4-8d9a-bebb67be5510
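As before, apply and verify the binding; pods using this claim will be pinned to new-xostor-server-01 by the node affinity (file name illustrative):

kubectl apply -f pv-pvc-acid-merch-2.yaml
kubectl get pvc acid-merch-2 -n default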