    How to Re-attach an SR

    • tjkreidl Ambassador @Chrome

      @Chrome Try "xe vm-list params=all"
      Do you have only local storage, or did you have any attached storage that's not showing up?
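
      To see which SRs and PBDs the host currently knows about, something along these lines should also work (just a sketch using standard xe list filters):

      xe sr-list params=uuid,name-label,type,content-type
      xe pbd-list params=uuid,sr-uuid,host-uuid,currently-attached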

      • Chrome @tjkreidl

        @tjkreidl

        Here's the output:

        xe vm-list params=all
        uuid ( RO)                                  : 6dccd15c-2682-460c-9049-bdd210288c74
                                    name-label ( RW): Control domain on host: vm
                              name-description ( RW): The domain which manages physical devices and manages other domains
                                  user-version ( RW): 1
                                 is-a-template ( RW): false
                           is-default-template ( RW): false
                                 is-a-snapshot ( RO): false
                                   snapshot-of ( RO): <not in database>
                                     snapshots ( RO): 
                                 snapshot-time ( RO): 19700101T00:00:00Z
                                 snapshot-info ( RO): 
                                        parent ( RO): <not in database>
                                      children ( RO): 
                             is-control-domain ( RO): true
                                   power-state ( RO): running
                                 memory-actual ( RO): 8589934592
                                 memory-target ( RO): <expensive field>
                               memory-overhead ( RO): 84934656
                             memory-static-max ( RW): 8589934592
                            memory-dynamic-max ( RW): 8589934592
                            memory-dynamic-min ( RW): 8589934592
                             memory-static-min ( RW): 8589934592
                              suspend-VDI-uuid ( RW): <not in database>
                               suspend-SR-uuid ( RW): <not in database>
                                  VCPUs-params (MRW): 
                                     VCPUs-max ( RW): 16
                              VCPUs-at-startup ( RW): 16
                        actions-after-shutdown ( RW): Destroy
                      actions-after-softreboot ( RW): Soft reboot
                          actions-after-reboot ( RW): Destroy
                           actions-after-crash ( RW): Destroy
                                 console-uuids (SRO): ab3d576c-6b9d-8829-f4c1-c67d9873b03b; 030565eb-6b01-d076-81e8-3e2186eb79e9
                                           hvm ( RO): false
                                      platform (MRW): 
                            allowed-operations (SRO): metadata_export; changing_static_range; changing_dynamic_range
                            current-operations (SRO): 
                            blocked-operations (MRW): 
                           allowed-VBD-devices (SRO): <expensive field>
                           allowed-VIF-devices (SRO): <expensive field>
                                possible-hosts ( RO): <expensive field>
                                   domain-type ( RW): pv
                           current-domain-type ( RO): pv
                               HVM-boot-policy ( RW): 
                               HVM-boot-params (MRW): 
                         HVM-shadow-multiplier ( RW): 1.000
                                     PV-kernel ( RW): 
                                    PV-ramdisk ( RW): 
                                       PV-args ( RW): 
                                PV-legacy-args ( RW): 
                                 PV-bootloader ( RW): 
                            PV-bootloader-args ( RW): 
                           last-boot-CPU-flags ( RO): 
                              last-boot-record ( RO): <expensive field>
                                   resident-on ( RO): 9cbf654a-d889-4317-a37a-aa2f96ea3b69
                                      affinity ( RW): 9cbf654a-d889-4317-a37a-aa2f96ea3b69
                                  other-config (MRW): storage_driver_domain: OpaqueRef:47cb73c1-4143-f216-5c97-fc3977f291d9; is_system_domain: true; perfmon: <config><variable><name value="fs_usage"/><alarm_trigger_level value="0.9"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable><variable><name value="mem_usage"/><alarm_trigger_level value="0.95"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable><variable><name value="log_fs_usage"/><alarm_trigger_level value="0.9"/><alarm_trigger_period value="60"/><alarm_auto_inhibit_period value="3600"/></variable></config>
                                        dom-id ( RO): 0
                               recommendations ( RO): 
                                 xenstore-data (MRW): 
                    ha-always-run ( RW) [DEPRECATED]: false
                           ha-restart-priority ( RW): 
                                         blobs ( RO): 
                                    start-time ( RO): 19700101T00:00:00Z
                                  install-time ( RO): 19700101T00:00:00Z
                                  VCPUs-number ( RO): 16
                             VCPUs-utilisation (MRO): <expensive field>
                                    os-version (MRO): <not in database>
                                  netbios-name (MRO): <not in database>
                            PV-drivers-version (MRO): <not in database>
            PV-drivers-up-to-date ( RO) [DEPRECATED]: <not in database>
                                        memory (MRO): <not in database>
                                         disks (MRO): <not in database>
                                          VBDs (SRO): 
                                      networks (MRO): <not in database>
                           PV-drivers-detected ( RO): <not in database>
                                         other (MRO): <not in database>
                                          live ( RO): <not in database>
                    guest-metrics-last-updated ( RO): <not in database>
                           can-use-hotplug-vbd ( RO): <not in database>
                           can-use-hotplug-vif ( RO): <not in database>
                      cooperative ( RO) [DEPRECATED]: <expensive field>
                                          tags (SRW): 
                                     appliance ( RW): <not in database>
                                        groups ( RW): 
                             snapshot-schedule ( RW): <not in database>
                              is-vmss-snapshot ( RO): false
                                   start-delay ( RW): 0
                                shutdown-delay ( RW): 0
                                         order ( RW): 0
                                       version ( RO): 0
                                 generation-id ( RO): 
                     hardware-platform-version ( RO): 0
                             has-vendor-device ( RW): false
                               requires-reboot ( RO): false
                               reference-label ( RO): 
                                  bios-strings (MRO): 
                             pending-guidances ( RO): 
                                         vtpms ( RO): 
                 pending-guidances-recommended ( RO): 
                        pending-guidances-full ( RO): 
        
        • tjkreidl Ambassador @Chrome

          @Chrome Then just do an "xe vm-list" and see if you recognize any VMs other than the dom0 instance of XCP-ng.
          If nothing else shows up, you will need to try to find your other LVM storage.

          • Chrome @tjkreidl

            @tjkreidl

            This is what showed up:

            uuid ( RO)           : 6dccd15c-2682-460c-9049-bdd210288c74
                 name-label ( RW): Control domain on host: vm
                power-state ( RO): running
            

            I think I may have had some VMs on the SSD drive...but I guess the drive was wiped during the re-install?

            Also, when I log in to Xen Orchestra, the SR I just re-attached shows as "disconnected"... so I guess that's why I don't see any VMs.

            • olivierlambert Vates 🪐 Co-Founder CEO

              VM metadata isn't stored in the SR but in the XAPI DB. If you removed it, then you lost all the VM metadata (VM name, description, number of disks, CPU, RAM, etc.).

              However, if you didn't format the SR itself, you should be able to find the actual data, then "just" recreate the VM and attach each disk to your recreated VM.

              Now the question is: did you format your SR? If yes, you also lost data, not just metadata. If not, you need to re-introduce the SR and then recreate the associated PBD (the PBD is the "link" between your host and the SR, telling it how to access the data, e.g. the path of the local drive in your case).
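
              For a local SR, the re-introduce flow looks roughly like this. This is only a sketch: the SR type "ext", the device path and every UUID below are placeholders, not values taken from this thread.

              # Sketch only -- substitute your real SR UUID, host UUID and device path.
              xe sr-introduce uuid=<sr-uuid> type=ext name-label="Local storage" content-type=user
              xe host-list params=uuid,name-label            # note the host UUID
              xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/<device>
              xe pbd-plug uuid=<pbd-uuid>                    # <pbd-uuid> is printed by pbd-create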

              • tjkreidl Ambassador @Chrome

                @Chrome As M. Lambert says, you may be able to use pbd-plug to re-attach the SR if you can sr-introduce the old SR back into the system.
                If not, and if your LVM configuration has not been wiped out, here are some steps to try to recover it (it's an ugly process!); there is a command sketch after the list:

                1. Identify the LVM configuration:
                  Check for Backups: Look for LVM metadata backups in /etc/lvm/archive/ or /etc/lvm/backup/.
                  Use vgscan: This command will search for volume groups and their metadata.
                  Use pvscan: This command will scan for physical volumes.
                  Use lvs: This command will list logical volumes and their status.
                  Use vgs: This command will list volume groups.
                2. Restore from Backup (if available):
                  Find the Backup: Locate the LVM metadata backup file (e.g., /etc/lvm/backup/<vg_name>).
                  Boot into Rescue Mode: If you're unable to access the system, boot into a rescue environment.
                  Restore Metadata: Use vgcfgrestore to restore the LVM configuration.
                3. Recreate LVM Configuration (if no backup):
                  Identify PVs: Use pvscan to list available physical volumes.
                  Identify VGs: Use vgscan to identify volume groups if they are present.
                  Recreate PVs: If necessary, use pvcreate to create physical volumes.
                  Create VGs: If necessary, use vgcreate to create a new volume group.
                  Create LVs: If necessary, use lvcreate to create logical volumes.
                4. Mount and Verify:
                  Mount the Logical Volumes: Mount the restored LVM volumes to their respective mount points.
                  Verify Data: Check the integrity of the data on the restored LVM volumes.
                5. Extend LVM (if adding capacity):
                  Add a new disk: Ensure the new disk is recognized by the system.
                  Create PV: Use pvcreate on the new disk.
                  Add PV to VG: Use vgextend to add the PV to the volume group.
                  Extend LV: Use lvextend to extend the size of an existing logical volume.
                  Extend Filesystem: Use resize2fs (works on ext3 and ext4) to extend the filesystem on the LV.
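
                Here is the condensed command sketch for steps 1, 2 and 4 above. It assumes a metadata backup exists; <vg_name> and <lv_name> are placeholders, not values from this thread.

                  vgscan                                               # find volume groups and their metadata
                  pvscan                                               # list physical volumes
                  vgcfgrestore -f /etc/lvm/backup/<vg_name> <vg_name>  # restore metadata from the backup file
                  vgchange -ay <vg_name>                               # activate the logical volumes
                  lvs <vg_name>                                        # confirm the LVs are back
                  mount /dev/<vg_name>/<lv_name> /mnt                  # mount and verify the data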
                • Chrome @olivierlambert

                  @olivierlambert said in How to Re-attach an SR:

                  VM metadata isn't stored in the SR but in the XAPI DB. If you removed it, then you lost all the VM metadata (VM name, description, number of disks, CPU, RAM, etc.).

                  However, if you didn't format the SR itself, you should be able to find the actual data, then "just" recreate the VM and attach each disk to your recreated VM.

                  Now the question is: did you format your SR? If yes, you also lost data, not just metadata. If not, you need to re-introduce the SR and then recreate the associated PBD (the PBD is the "link" between your host and the SR, telling it how to access the data, e.g. the path of the local drive in your case).

                  Thanks for your reply! I wanted to get back to you yesterday, but had family obligations.

                  I did not format the SR. In fact, I was hoping that after the re-install of XCP-ng (I re-installed to get IPv6 enabled), XCP-ng would just "pick up" the SR. I obviously didn't understand that there's much more to it than that.

                    • Chrome @olivierlambert

                      @tjkreidl

                      Thank you for the detailed reply; I will need much hand-holding through this process! I wanted to respond yesterday to your helpful instructions, but I was with the family. So, I am going to try these commands now:

                      /etc/lvm/backup:
                      -rw------- 1 root root 1330 May 30 19:53 XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d
                      
                      /etc/lvm/archive/:
                      Nothing in this directory
                      
                      vgscan:
                        Reading all physical volumes.  This may take a while...
                        Found volume group "XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d" using metadata type lvm2
                      
                      pvscan:
                        PV /dev/nvme0n1   VG XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d   lvm2 [931.50 GiB / 0    free]
                        Total: 1 [931.50 GiB] / in use: 1 [931.50 GiB] / in no VG: 0 [0   ]
                      
                      lvs:
                       LV                                   VG                                              Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
                        de0e7bd7-e938-78a8-1c1f-2eac2639298d XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d -wi------- 931.50g              
                      
                      vgs:
                        VG                                              #PV #LV #SN Attr   VSize   VFree
                        XSLocalEXT-de0e7bd7-e938-78a8-1c1f-2eac2639298d   1   1   0 wz--n- 931.50g    0 
                      

                      So, I got this far... and then I booted off the installer USB drive to see if I could get into rescue mode, but I noticed that there's a "restore" option, so I decided to try that.

                      To my delight, it restored the connection to the NVMe SR, which looks like it had 4 VMs on it... and they appear to be up and running now. Sweet!

                      The other VMs I had... it looks like I stored them on the SSD, which I reinstalled XCP-ng on... so I guess those are gone? Lesson learned there.

                      • tjkreidl Ambassador @Chrome

                        @Chrome Fantastic! Please mark my post as helpful if you found it as such. I was traveling much of today, hence the late response.

                        BTW, it's always good to make a backup and/or archive of your LVM configuration anytime you change it, as the restore option is the cleanest way to deal with connectivity issues if there is some sort of corruption. It's saved my rear end before, I can assure you!

                        Yeah, if the SSD drive got wiped, there's no option to get those back unless you made a backup somewhere of all that before you installed XCP-ng onto it.

                        BTW, another very useful command for LVM is "vgchange -ay", which activates the logical volumes in any volume groups it can find and can help when a VG seems to be missing or inactive.
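
                        As a minimal sketch of that routine (nothing here is specific to this thread):

                        vgcfgbackup                      # write current LVM metadata to /etc/lvm/backup/<vg_name>
                        vgchange -ay                     # activate logical volumes in every VG that can be found
                        lvs -o lv_name,vg_name,lv_attr   # check that the LVs now show as active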

                        • Chrome @tjkreidl

                          @tjkreidl Yes, thank you so much for all your help! I will be diving into backups... maybe even something on a schedule. I appreciate your patience and your teachings! 🙂 All the best to you. 🙂 It was a pleasure. I hope I correctly marked your post as helpful; it really was.

                          • tjkreidl Ambassador @Chrome

                            @Chrome Cheers -- always glad to help out. I put in many thousands of posts on the old Citrix XenServer site, and am happy to share whatever knowledge I still have, as long as it's still relevant! In a few years, it probably won't be, so carpe diem!

                            • olivierlambert marked this topic as a question
                            • olivierlambert marked this topic as solved
                            • olivierlambert Vates 🪐 Co-Founder CEO

                              Hehe, another great example of why a community is fantastic! (It's a bit sad that Citrix never realized it.)

                              • tjkreidl Ambassador @olivierlambert

                                @olivierlambert Agreed. The Citrix forum used to be very active, but especially since Citrix was taken over, https://community.citrix.com has had way less activity, sadly.
                                It's still gratifying that a lot of the functionality is still common to both platforms, although as XCP-ng evolves there will be less and less commonality.
