
    VDI not showing in XO 5 from Source.

    • florent (Vates 🪐 XO Team)

      @wmazren said in VDI not showing in XO 5 from Source.:

             is-a-snapshot ( RO): false
               snapshot-of ( RO): d31f2db0-be21-4794-b858-1bea357869c8
      

      This disk is now recognized as a snapshot by XO.
      Is it a disk from a restored VM?
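
      For anyone wanting to check a suspect disk for the same mismatch from dom0, a minimal sketch (the UUID is a placeholder; both params are standard xe VDI fields):

          # a VDI that XO 5 misreads this way reports is-a-snapshot=false
          # while still carrying a snapshot-of reference
          xe vdi-param-get uuid=<vdi-uuid> param-name=is-a-snapshot
          xe vdi-param-get uuid=<vdi-uuid> param-name=snapshot-of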

      • abudef

        Hi,
        I’m afraid I’ve encountered the same problem:

        • Xen Orchestra, commit 0b52a (screenshot attached)

        • v6 (screenshot attached)

        • Pilow @abudef

          😕

          I am also impacted on some SRs, on the latest XOA (6.0.3).

          In the VM's DISKS tab, no disk is shown (screenshot attached).

          The VM is running OK; snapshots and backups work.

          On the impacted SR (local RAID5 for this one) I noted that the green usage bar is gone, as if the SR were empty, yet it still shows 47 VDIs and the correct occupied space (screenshot attached).

          The DISKS tab of the SR does show the VDIs of the running VMs (screenshot attached).

          As @danp asked, here is a check of the params on one impacted VDI and its VBD:

          # xe vdi-param-list uuid=a81ecd87-3788-4528-819d-7d7c03aa6c61
          uuid ( RO)                    : a81ecd87-3788-4528-819d-7d7c03aa6c61
                        name-label ( RW): xxx-xxx-xxxxxx_sda
                  name-description ( RW):
                     is-a-snapshot ( RO): false
                       snapshot-of ( RO): f9cbd30f-a261-4b95-97db-b6846147634a
                         snapshots ( RO): cb65b96a-bed9-4e9a-82d3-e73b5aed546d
                     snapshot-time ( RO): 20250912T17:38:57Z
                allowed-operations (SRO): snapshot; clone
                current-operations (SRO):
                           sr-uuid ( RO): b1b80611-7223-c829-8953-6aa2bf5865b3
                     sr-name-label ( RO): xxx-xx-xxxxxxx RAID5 Local
                         vbd-uuids (SRO): 51bb1797-c6c7-50f0-13a9-dfaad4c99d90
                   crashdump-uuids (SRO):
                      virtual-size ( RO): 68719476736
              physical-utilisation ( RO): 30686765056
                          location ( RO): a81ecd87-3788-4528-819d-7d7c03aa6c61
                              type ( RO): User
                          sharable ( RO): false
                         read-only ( RO): false
                      storage-lock ( RO): false
                           managed ( RO): true
               parent ( RO) [DEPRECATED]: <not in database>
                           missing ( RO): false
                      is-tools-iso ( RO): false
                      other-config (MRW):
                     xenstore-data (MRO):
                         sm-config (MRO): vhd-parent: daeee201-3891-443e-8bdb-b00ed1051279; host_OpaqueRef:3e7283ba-5a42-1881-958a-9f96b71fb98f: RW; read-caching-enabled-on-f2868da5-4509-43d7-9ef9-2bb3857e1ba5: true
                           on-boot ( RW): persist
                     allow-caching ( RW): false
                   metadata-latest ( RO): false
                  metadata-of-pool ( RO): <not in database>
                              tags (SRW):
                       cbt-enabled ( RO): true
          
          
          # xe vbd-param-list uuid=51bb1797-c6c7-50f0-13a9-dfaad4c99d90
          uuid ( RO)                        : 51bb1797-c6c7-50f0-13a9-dfaad4c99d90
                               vm-uuid ( RO): 108ad69b-1fa5-d80b-fb16-a62509ad642a
                         vm-name-label ( RO): xxx-xxx-xxxxxx
                              vdi-uuid ( RO): a81ecd87-3788-4528-819d-7d7c03aa6c61
                        vdi-name-label ( RO): xxx-xxx-xxxxxx_sda
                    allowed-operations (SRO): attach; unpause; pause
                    current-operations (SRO):
                                 empty ( RO): false
                                device ( RO): xvda
                            userdevice ( RW): 0
                              bootable ( RW): false
                                  mode ( RW): RW
                                  type ( RW): Disk
                           unpluggable ( RW): false
                    currently-attached ( RO): true
                            attachable ( RO): true
                          storage-lock ( RO): false
                           status-code ( RO): 0
                         status-detail ( RO):
                    qos_algorithm_type ( RW):
                  qos_algorithm_params (MRW):
              qos_supported_algorithms (SRO):
                          other-config (MRW): owner:
                           io_read_kbs ( RO): 0.000
                          io_write_kbs ( RO): 93.752
          
          

          The VDI is seen as a snapshot...
          In XO 6, the VDI appears OK.
          We have an internal webapp that accesses the API, and there the disk appears OK too, like in XO 6.
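
          To cross-check what the API reports versus the XO 5 UI, a minimal sketch with xo-cli (URL, credentials and the grepped UUID are placeholders, assuming current xo-cli syntax):

              # one-time registration against the XO instance
              xo-cli --register https://xo.example.org admin@example.org <password>

              # dump VDI objects as XO sees them and look for the impacted disk
              xo-cli --list-objects type=VDI | grep -B 2 -A 10 <vdi-uuid>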

          It seems to be rooted in the SR, not the VMs, since the entire SR is impacted. Not all SRs in this XOA instance are affected (we have another local RAID5 SR and iSCSI SRs that are fine).
          All VMs hosted on this SR have invisible disks in XO 5.

          We have an XO CE instance attached to the same servers, and it shows the same behavior: invisible disks (screenshot attached).
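
          To confirm the SR-wide pattern from dom0, a minimal sketch that lists every VDI on the impacted SR with its snapshot fields (the SR UUID is taken from the param dump above; a line with is-a-snapshot=false but a real snapshot-of matches the mismatch):

              # enumerate managed VDIs on the SR and print their snapshot flags
              SR=b1b80611-7223-c829-8953-6aa2bf5865b3
              for vdi in $(xe vdi-list sr-uuid=$SR managed=true --minimal | tr ',' ' '); do
                  snap=$(xe vdi-param-get uuid=$vdi param-name=is-a-snapshot)
                  of=$(xe vdi-param-get uuid=$vdi param-name=snapshot-of)
                  echo "$vdi is-a-snapshot=$snap snapshot-of=$of"
              done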

          Edit:
          On the GENERAL tab of an impacted VM (screenshot attached) we can see a 0-byte VDI, yet there is disk activity (screenshot attached).

          • Pilow

            I tried to deploy a NEW VM for testing purposes on an impacted SR (screenshot attached),

            and its disk appears! It is the only VDI visible; the OTHER VMs on this same SR still have invisible disks.

            For the new VM, everything looks as if nothing were wrong (screenshot attached).

            I guess we could migrate the impacted VMs out of this SR and back, and that would correct the issue!
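
            If someone wants to try that route, a minimal sketch using live storage migration of a single disk (UUIDs are placeholders; note that the migrated VDI may come back under a new UUID):

                # move the VDI to another SR while the VM keeps running
                xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<temporary-sr-uuid>

                # the migrated disk can get a new UUID, so look it up again...
                xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid

                # ...then move it back to the original SR
                xe vdi-pool-migrate uuid=<new-vdi-uuid> sr-uuid=<original-sr-uuid>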

            Does that help?!

            • olivierlambert (Vates 🪐 Co-Founder & CEO)

              @anthoineb or someone from @team-storage, you might want to take a look (IDK if it's a known problem internally).

              • anthoineb (Vates 🪐 XCP-ng Team) @olivierlambert

                @olivierlambert Yes, we have seen this before; we are investigating.

                • olivierlambert (Vates 🪐 Co-Founder & CEO)

                  Thanks!

                  • wilsonqanda

                    Hello all,

                    I found some roundabout solutions you may want to try:
                    https://xcp-ng.org/forum/post/101370

                    • Pilow @wilsonqanda

                      @wilsonqanda I tried your workaround on a halted VM, and it worked!

                      If I snapshot -> the disk is still invisible
                      If I delete the snapshot -> the disk is still invisible

                      but

                      If I snapshot -> the disk is still invisible
                      If I revert to the snapshot (with the "take a snapshot" option) -> the disk APPEARS again
                      If I delete the two snapshots -> the disk is still there

                      Edit: it also works without taking a snapshot before the revert; I tried it on another VM.
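
                      For reference, the same sequence from dom0, as a minimal sketch (the VM UUID is a placeholder; the test above was done on a halted VM):

                          # snapshot the impacted VM and keep the snapshot's UUID
                          snap=$(xe vm-snapshot vm=<vm-uuid> new-name-label=vdi-visibility-fix)

                          # revert the VM to that snapshot; after this step,
                          # the disk reappeared in XO 5 in the test above
                          xe snapshot-revert snapshot-uuid=$snap

                          # once the disk is visible again, drop the snapshot and its disks
                          xe snapshot-uninstall snapshot-uuid=$snap force=true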

                      • wilsonqanda @Pilow

                        @Pilow Lol, glad I was able to help. It's fine if you only have a few VMs; having hundreds will be a nightmare... good luck to those that have that issue. In the meantime I will use the method I mentioned. 🙂
