How to Re-attach an SR
-
Hey all!
I'll try to keep this short. I re-installed XCP-ng 8.3 because I wanted to run a dual-stack IPv4 and IPv6 setup.
I have three drives in my system.
SSD - /dev/sda (XCP-ng)
NVMe1 - /dev/nvme0n1 (SR)
NVMe2 - /dev/nvme1n1 (SR)
I know I should have checked where exactly the VMs are stored and I should have made a backup, but I didn't. So, that's on me.
lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1       259:5    0 931.5G  0 disk
nvme1n1       259:0    0   477G  0 disk
├─nvme1n1p4   259:4    0   706M  0 part
├─nvme1n1p2   259:2    0    16M  0 part
├─nvme1n1p3   259:3    0 463.1G  0 part
└─nvme1n1p1   259:1    0   100M  0 part
sda             8:0    0 465.8G  0 disk
├─sda2          8:2    0    18G  0 part
├─sda5          8:5    0     4G  0 part /var/log
├─sda3          8:3    0   512M  0 part /boot/efi
├─sda1          8:1    0    18G  0 part /
└─sda6          8:6    0     1G  0 part [SWAP]
After I re-installed XCP-ng, I don't have any SRs available. I am pretty sure that I had both NVMe drives set up as SRs... how can I re-attach them?
I haven't been able to find anything solid online.
I MAY have had VMs installed on the SSD drive... are those gone because I reinstalled XCP-ng on that drive?
-
No metadata backup? All that info should be contained within it. Otherwise, it could be a long, painful process. Are they LVM drives, or how were they initially created?
I assume "xe sr-list" shows nothing, or?
-
@tjkreidl said in How to Re-attach an SR:
xe sr-list
I don't completely remember... I would have kept things as default or basic as possible. LVM is probably how I created them.
xe sr-list
uuid ( RO) : a08b6338-3855-1da2-b108-ad9c5b553001
    name-label ( RW): Removable storage
    name-description ( RW):
    host ( RO): vm
    type ( RO): udev
    content-type ( RO): disk

uuid ( RO) : dd4907eb-707d-1f1a-ee05-5ea838bca63a
    name-label ( RW): XCP-ng Tools
    name-description ( RW): XCP-ng Tools ISOs
    host ( RO): vm
    type ( RO): iso
    content-type ( RO): iso

uuid ( RO) : 84f169c5-4c4f-455a-ae1a-7629e9ce0b85
    name-label ( RW): DVD drives
    name-description ( RW): Physical DVD drives
    host ( RO): vm
    type ( RO): udev
    content-type ( RO): iso
-
@Chrome OK, now try "xe sr-introduce" (check the documentation for the full syntax you need), depending on what your connection type is:
xe sr-introduce uuid=<device uuid> shared=true type=lvmohba name-label=<name>
xe sr-introduce uuid=<device uuid> shared=true type=lvmoiscsi name-label=<name>
xe sr-introduce uuid=<device uuid> shared=true type=nfs name-label=<name>
If you are lucky and the interface still exists by which the SR was attached, that might do the trick.
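If the old SR UUID is unknown, one way to recover it on a local LVM setup is from the volume group name, since XCP-ng embeds the SR UUID in the VG it creates (a sketch, assuming the LVM metadata survived the reinstall):

# A local LVM SR's VG is named VG_XenStorage-<SR-UUID>;
# a local EXT SR's VG is named XSLocalEXT-<SR-UUID>
vgs -o vg_name

# The suffix after the prefix is the SR UUID to feed to sr-introduce
# (local SRs are not shared, so shared=true is omitted)
xe sr-introduce uuid=<sr uuid from VG name> type=lvm name-label=LocalSR content-type=user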
-
@tjkreidl How do I get the UUID for the nvme1n1 drive? It's not listed in xe sr-list...
-
@tjkreidl OK... I figured out how to get the UUID...
I used this command:
xe sr-introduce uuid=ONK72A-tDcE-rfAs-P78e-3hlk-wSE5-OVmyiL shared=true type=LVM2_member name-label=LocalSR
Response:
ONK72A-tDcE-rfAs-P78e-3hlk-wSE5-OVmyiL
Which I figured was good.
Then:
xe sr-list
Returns:
uuid ( RO) : a08b6338-3855-1da2-b108-ad9c5b553001
    name-label ( RW): Removable storage
    name-description ( RW):
    host ( RO): vm
    type ( RO): udev
    content-type ( RO): disk

uuid ( RO) : ONK72A-tDcE-rfAs-P78e-3hlk-wSE5-OVmyiL
    name-label ( RW): LocalSR
    name-description ( RW):
    host ( RO): <not in database>
    type ( RO): lvm2_member
    content-type ( RO):

uuid ( RO) : dd4907eb-707d-1f1a-ee05-5ea838bca63a
    name-label ( RW): XCP-ng Tools
    name-description ( RW): XCP-ng Tools ISOs
    host ( RO): vm
    type ( RO): iso
    content-type ( RO): iso

uuid ( RO) : 84f169c5-4c4f-455a-ae1a-7629e9ce0b85
    name-label ( RW): DVD drives
    name-description ( RW): Physical DVD drives
    host ( RO): vm
    type ( RO): udev
    content-type ( RO): iso
So, it looks like the SR was added back?
-
@Chrome Try "xe vm-list params=all"
Do you only have local storage, or did you have any attached storage that's not showing up?
-
Here's the output:
xe vm-list params=all
uuid ( RO) : 6dccd15c-2682-460c-9049-bdd210288c74
    name-label ( RW): Control domain on host: vm
    name-description ( RW): The domain which manages physical devices and manages other domains
    user-version ( RW): 1
    is-a-template ( RW): false
    is-default-template ( RW): false
    is-a-snapshot ( RO): false
    snapshot-of ( RO): <not in database>
    snapshots ( RO):
    snapshot-time ( RO): 19700101T00:00:00Z
    snapshot-info ( RO):
    parent ( RO): <not in database>
    children ( RO):
    is-control-domain ( RO): true
    power-state ( RO): running
    memory-actual ( RO): 8589934592
    memory-target ( RO): <expensive field>
    memory-overhead ( RO): 84934656
    memory-static-max ( RW): 8589934592
    memory-dynamic-max ( RW): 8589934592
    memory-dynamic-min ( RW): 8589934592
    memory-static-min ( RW): 8589934592
    suspend-VDI-uuid ( RW): <not in database>
    suspend-SR-uuid ( RW): <not in database>
    VCPUs-params (MRW):
    VCPUs-max ( RW): 16
    VCPUs-at-startup ( RW): 16
    actions-after-shutdown ( RW): Destroy
    actions-after-softreboot ( RW): Soft reboot
    actions-after-reboot ( RW): Destroy
    actions-after-crash ( RW): Destroy
    console-uuids (SRO): ab3d576c-6b9d-8829-f4c1-c67d9873b03b; 030565eb-6b01-d076-81e8-3e2186eb79e9
    hvm ( RO): false
    platform (MRW):
    allowed-operations (SRO): metadata_export; changing_static_range; changing_dynamic_range
    current-operations (SRO):
    blocked-operations (MRW):
    allowed-VBD-devices (SRO): <expensive field>
    allowed-VIF-devices (SRO): <expensive field>
    possible-hosts ( RO): <expensive field>
    domain-type ( RW): pv
    current-domain-type ( RO): pv
    HVM-boot-policy ( RW):
    HVM-boot-params (MRW):
    HVM-shadow-multiplier ( RW): 1.000
    PV-kernel ( RW):
    PV-ramdisk ( RW):
    PV-args ( RW):
    PV-legacy-args ( RW):
    PV-bootloader ( RW):
    PV-bootloader-args ( RW):
    last-boot-CPU-flags ( RO):
    last-boot-record ( RO): <expensive field>
    resident-on ( RO): 9cbf654a-d889-4317-a37a-aa2f96ea3b69
    affinity ( RW): 9cbf654a-d889-4317-a37a-aa2f96ea3b69
    other-config (MRW): storage_driver_domain: OpaqueRef:47cb73c1-4143-f216-5c97-fc3977f291d9; is_system_domain: true; perfmon: <config><variable><name value="fs_usage"/><alarm_trigger_level value="0.9"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable><variable><name value="mem_usage"/><alarm_trigger_level value="0.95"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable><variable><name value="log_fs_usage"/><alarm_trigger_level value="0.9"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable></config>
    dom-id ( RO): 0
    recommendations ( RO):
    xenstore-data (MRW):
    ha-always-run ( RW) [DEPRECATED]: false
    ha-restart-priority ( RW):
    blobs ( RO):
    start-time ( RO): 19700101T00:00:00Z
    install-time ( RO): 19700101T00:00:00Z
    VCPUs-number ( RO): 16
    VCPUs-utilisation (MRO): <expensive field>
    os-version (MRO): <not in database>
    netbios-name (MRO): <not in database>
    PV-drivers-version (MRO): <not in database>
    PV-drivers-up-to-date ( RO) [DEPRECATED]: <not in database>
    memory (MRO): <not in database>
    disks (MRO): <not in database>
    VBDs (SRO):
    networks (MRO): <not in database>
    PV-drivers-detected ( RO): <not in database>
    other (MRO): <not in database>
    live ( RO): <not in database>
    guest-metrics-last-updated ( RO): <not in database>
    can-use-hotplug-vbd ( RO): <not in database>
    can-use-hotplug-vif ( RO): <not in database>
    cooperative ( RO) [DEPRECATED]: <expensive field>
    tags (SRW):
    appliance ( RW): <not in database>
    groups ( RW):
    snapshot-schedule ( RW): <not in database>
    is-vmss-snapshot ( RO): false
    start-delay ( RW): 0
    shutdown-delay ( RW): 0
    order ( RW): 0
    version ( RO): 0
    generation-id ( RO):
    hardware-platform-version ( RO): 0
    has-vendor-device ( RW): false
    requires-reboot ( RO): false
    reference-label ( RO):
    bios-strings (MRO):
    pending-guidances ( RO):
    vtpms ( RO):
    pending-guidances-recommended ( RO):
    pending-guidances-full ( RO):
-
@Chrome Then just do an "xe vm-list" and see if you recognize any VMs other than the dom0 instance of XCP-ng.
If there is nothing else showing up, you will need to try to find your other LVM storage.
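A quick way to keep dom0 out of that listing is the standard xe filter on is-control-domain (a sketch, with the params list trimmed to the essentials):

# List only guest VMs, hiding the control domain
xe vm-list is-control-domain=false params=uuid,name-label,power-state

Any guest VMs whose metadata survived would show up here; an empty result means the XAPI DB has no record of them.
-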
This is what showed up:
uuid ( RO) : 6dccd15c-2682-460c-9049-bdd210288c74
    name-label ( RW): Control domain on host: vm
    power-state ( RO): running
I think I may have had some VMs on the SSD drive...but I guess the drive was wiped during the re-install?
Also, when I log in to Xen Orchestra, the SR I just re-attached shows as "disconnected"... so I'm guessing that's why I don't see any VMs.
-
VM metadata isn't stored in the SR but in the XAPI DB. If you removed it, then you lost all the VM metadata (VM name, description, number of disks, CPU, RAM, etc.).
However, if you didn't format the SR itself, you should be able to find the actual data, then "just" recreate the VM and attach each disk to your recreated VM.
Now the question is: did you format your SR? If yes, you also lost the data, not just the metadata. If not, you need to re-introduce the SR and then recreate the associated PBD (the PBD is the "link" between your host and the SR, telling it how to access the data, e.g. the path of the local drive in your case).
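For a local drive like this, that sequence would look roughly as follows (a minimal sketch; the SR UUID, host UUID and PBD UUID are placeholders you'd substitute with your own values):

# Re-introduce the SR record (local SRs use type=lvm or type=ext and are not shared)
xe sr-introduce uuid=<sr uuid> type=lvm name-label=LocalSR content-type=user

# Look up this host's UUID
xe host-list

# Recreate the PBD linking the host to the device backing the SR
xe pbd-create sr-uuid=<sr uuid> host-uuid=<host uuid> device-config:device=/dev/nvme0n1

# Plug the PBD so the SR becomes connected
xe pbd-plug uuid=<pbd uuid returned by pbd-create>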
-
@Chrome As M. Lambert says, you may be able to use pbd-plug to re-attach the SR if you can sr-introduce the old SR back into the system.
If not, and if your LVM configuration has not been wiped out, here are some steps to try to recover it (it's an ugly process!):
1. Identify the LVM configuration:
   - Check for Backups: Look for LVM metadata backups in /etc/lvm/archive/ or /etc/lvm/backup/.
   - Use vgscan: This command will search for volume groups and their metadata.
   - Use pvscan: This command will scan for physical volumes.
   - Use lvs: This command will list logical volumes and their status.
   - Use vgs: This command will list volume groups.
2. Restore from Backup (if available):
   - Find the Backup: Locate the LVM metadata backup file (e.g., /etc/lvm/backup/<vg_name>).
   - Boot into Rescue Mode: If you're unable to access the system, boot into a rescue environment.
   - Restore Metadata: Use vgcfgrestore to restore the LVM configuration (see the command sketch after this list).
3. Recreate LVM Configuration (if no backup):
   - Identify PVs: Use pvscan to list available physical volumes.
   - Identify VGs: Use vgscan to identify volume groups if they are present.
   - Recreate PVs: If necessary, use pvcreate to create physical volumes.
   - Create VGs: If necessary, use vgcreate to create a new volume group.
   - Create LVs: If necessary, use lvcreate to create logical volumes.
4. Mount and Verify:
   - Mount the Logical Volumes: Mount the restored LVM volumes to their respective mount points.
   - Verify Data: Check the integrity of the data on the restored LVM volumes.
5. Extend LVM (if adding capacity):
   - Add a new disk: Ensure the new disk is recognized by the system.
   - Create a PV: Use pvcreate on the new disk.
   - Add the PV to the VG: Use vgextend to add the PV to the volume group.
   - Extend the LV: Use lvextend to extend the size of an existing logical volume.
   - Extend the Filesystem: Use resize2fs to extend an ext3/ext4 filesystem on the LV.
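To make step 2 concrete, here is a minimal sketch of the restore sequence, assuming a single local VG whose metadata backup sits under /etc/lvm/backup (the VG name is a placeholder for whatever vgscan reports):

# See what LVM can still find on the disks
vgscan
pvscan

# If the VG metadata is damaged or missing, restore it from the backup file
vgcfgrestore <vg_name>

# Activate the volume group so its logical volumes become visible
vgchange -ay <vg_name>

# Confirm the logical volumes are back
lvs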
-
@olivierlambert said in How to Re-attach an SR:
VM metadata isn't stored in the SR but in the XAPI DB. If you removed it, then you lost all the VM metadata (VM name, description, number of disks, CPU, RAM, etc.).
However, if you didn't format the SR itself, you should be able to find the actual data, then "just" recreate the VM and attach each disk to your recreated VM.
Now the question is: did you format your SR? If yes, you also lost the data, not just the metadata. If not, you need to re-introduce the SR and then recreate the associated PBD (the PBD is the "link" between your host and the SR, telling it how to access the data, e.g. the path of the local drive in your case).
Thanks for your reply! I wanted to get back to you yesterday, but had family obligations.
I did not format the SR; in fact, I was hoping that after the re-install of XCP-ng (I re-installed to get IPv6 enabled), XCP-ng would just "pick up" the SR. I obviously didn't understand that there's much more to it than that.
-
Thank you for the detailed reply; I will need much hand-holding through this process! I wanted to respond yesterday to your helpful instructions, but I was with the family. So, I am going to try these commands now:
/etc/lvm/backup:
-rw------- 1 root root 1330 May 30 19:53 XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d

/etc/lvm/archive/: Nothing in this directory

vgscan:
Reading all physical volumes. This may take a while...
Found volume group "XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d" using metadata type lvm2

pvscan:
PV /dev/nvme0n1   VG XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d   lvm2 [931.50 GiB / 0 free]
Total: 1 [931.50 GiB] / in use: 1 [931.50 GiB] / in no VG: 0 [0]

lvs:
LV                                   VG                                              Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
de0e7bd7-e938-78a8-1c1f-2eac2639298d XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d -wi------- 931.50g

vgs:
VG                                              #PV #LV #SN Attr   VSize   VFree
XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d   1   1   0 wz--n- 931.50g    0
So, I got this far... and then I booted off the INSTALLER USB drive to see if I could get into rescue mode... but I noticed that there's a "restore" option, so I decided to try that.
To my delight, it restored the connection to the nvme SR, which looks like it had 4 VMs on it... and they appear to be up and running now. Sweet!
The other VMs I had... it looks like I stored them on the SSD, which I reinstalled XCP-ng on... so I guess those are gone? Lesson learned there.
-
@Chrome Fantastic! Please mark my post as helpful if you found it as such. Was traveling much of today, hence the late response.
BTW, it's always good to make a backup and/or archive of your LVM configuration anytime you change it, as the restore option is the cleanest way to deal with connectivity issues if there is some sort of corruption. It's saved my rear end before, I can assure you!
Yeah, if the SSD drive got wiped, there's no option to get those back unless you made a backup somewhere of all that before you installed XCP-ng onto it.
BTW, another very useful command for LVM is "vgchange -ay", which activates all volume groups and can bring a seemingly missing VG back online.
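A minimal sketch of both tips (vgcfgbackup with no arguments snapshots every VG it can see):

# Archive the current LVM metadata; copies land under /etc/lvm/backup
vgcfgbackup

# Activate all volume groups, e.g. when a VG has stopped showing up
vgchange -ay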
-
@tjkreidl Yes, thank you so much for all your help! I will be diving into backups... even something on a schedule. I appreciate your patience and your teachings!
All the best to you.
It was a pleasure. I hope I correctly marked your post as helpful; it really was.
-
@Chrome Cheers -- always glad to help out. I put in many thousands of posts on the old Citrix XenServer site, and am happy to share whatever knowledge I still have, as long as it's still relevant! In a few years, it probably won't be, so carpe diem!
-
Hehe, another great example of why a community is fantastic!! (It's a bit sad that Citrix never realized it.)
-
@olivierlambert Agreed. The Citrix forum used to be very active, but especially since Citrix was taken over, https://community.citrix.com has had far less activity, sadly.
It's still gratifying that a lot of the functionality is still common to both platforms, although as XCP-ng evolves, there will be progressively less commonality.