XCP-ng

    XOSTOR hyperconvergence preview

    446 Posts 47 Posters 481.1k Views 48 Watching
    • gb.123 @BenjiReis

      @BenjiReis said in XOSTOR hyperconvergence preview:

      @gb-123 are you using XOA or XO from sources?
      If from sources the issue might come from having a different openssl when creating the sdn-controller certificates.
      You can either try with an XOA or generate manually your certificates.

      Using XO from sources. I just turned on "Over-ride certificates" and reinstalled the whole XO virtual machine. Seems to work fine now.

      My only question was why it suddenly stopped working when I installed LINSTOR, since installing LINSTOR should not have any impact on this. That's why I reported it in this thread. 🙂

      • BenjiReis Vates 🪐 XCP-ng Team @gb.123

        @gb-123 I see, thanks 🙂

        Just bad timing, IMHO: LINSTOR doesn't touch this part of the host, and the openssl issue more likely comes from the environment where you run your XO.

        Anyway, glad it's working now!

        • gb.123 @ronan-a

          @ronan-a

          Any idea why I am getting the error while running the linstor command?

          Thanks

          • ronan-a Vates 🪐 XCP-ng Team @gb.123

            @gb-123 What command and error exactly?

            • Maelstrom96 @Maelstrom96

              @Maelstrom96 said in XOSTOR hyperconvergence preview:

              Is there a procedure on how we can update our current 8.2 XCP-ng cluster to 8.3? My understanding is that if I update the host using the ISO, it will effectively wipe all changes that were made to dom0, including the linstor/sm-linstor packages.

              Any input on this @ronan-a?

              • olivierlambert Vates 🪐 Co-Founder CEO

                @ronan-a is on holiday for the next 2 weeks 🙂

                However, I can answer: I'm currently using XOSTOR on a test cluster running on a fresh 8.3 and it works 🙂 Can't tell for the upgrade, though.

                • gb.123 @ronan-a

                  @ronan-a

                  @ronan-a said in XOSTOR hyperconvergence preview:

                  @gb-123 What command and error exactly?

                  I was referring to our earlier conversation:

                  @gb-123 said in XOSTOR hyperconvergence preview:

                  @ronan-a

                  Running the linstor command on both hosts gives the following error:

                  Traceback (most recent call last):
                    File "/usr/bin/linstor", line 21, in <module>
                      import linstor_client_main
                  ImportError: No module named linstor_client_main

                  @ronan-a said in XOSTOR hyperconvergence preview:

                  Regarding this error, can you tell me your XCP-ng version? And what's the output of: yum list installed | grep -i linstor?

                  Output of yum list installed | grep -i linstor:

                  drbd.x86_64                     9.22.0-1.el7               @xcp-ng-linstor      
                  drbd-bash-completion.x86_64     9.22.0-1.el7               @xcp-ng-linstor      
                  drbd-pacemaker.x86_64           9.22.0-1.el7               @xcp-ng-linstor      
                  drbd-reactor.x86_64             1.0.0-1                    @xcp-ng-linstor      
                  drbd-udev.x86_64                9.22.0-1.el7               @xcp-ng-linstor      
                  drbd-utils.x86_64               9.22.0-1.el7               @xcp-ng-linstor      
                  drbd-xen.x86_64                 9.22.0-1.el7               @xcp-ng-linstor      
                  kmod-drbd.x86_64                9.2.2+ptf.1_4.19.0+1-1     @xcp-ng-linstor      
                  linstor-client.noarch           1.18.0-1                   @xcp-ng-linstor      
                  linstor-common.noarch           1.21.1-1.el7               @xcp-ng-linstor      
                  linstor-controller.noarch       1.21.1-1.el7               @xcp-ng-linstor      
                  linstor-satellite.noarch        1.21.1-1.el7               @xcp-ng-linstor      
                  python-linstor.noarch           1.18.0-1                   @xcp-ng-linstor      
                  xcp-ng-linstor.noarch           1.2-1.xcpng8.3             @xcp-ng-linstor      
                  xcp-ng-release-linstor.noarch   1.4-1.xcpng8.3             @xcp-ng-base
                  

                  @ronan-a said in XOSTOR hyperconvergence preview:

                  Do you have XCP-ng 8.3? And did you run the beta installation script on it?

                  I installed XCP-ng 8.3 beta from ISO and then applied all the patches that were available. To install linstor, I followed the 1st Post of this topic.

                  • ronan-a Vates 🪐 XCP-ng Team @gb.123

                    @gb-123 If you run LINSTOR on 8.3 (FYI: the linstor packages on this version are considered unstable), you must invoke the linstor command like this:

                    > python3 /usr/bin/linstor 
                    

                    By default python2 is invoked, and it is not compatible with this version of LINSTOR:

                    > python2 /usr/bin/linstor resource list
                    Traceback (most recent call last):
                      File "/usr/bin/linstor", line 21, in <module>
                        import linstor_client_main
                    ImportError: No module named linstor_client_main
                    
                    • ronan-a Vates 🪐 XCP-ng Team @Maelstrom96

                      @Maelstrom96 We must update our documentation for that. This will probably require executing commands manually during an upgrade.

                      • gb.123 @ronan-a

                        @ronan-a

                        Are there any other packages in xcp-ng requiring python2?
                        Would it be advisable to change the default python to python3 (by adding an alias in bashrc)?
                        Or do you think I should create a script which in turn runs the python3 command only for linstor?

                        PS: Executing:

                        python3 /usr/bin/linstor 
                        

                        Result:

                        Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
                        

                        Although through XO, I can see the VM on the LINSTOR SR running fine.

                        • ronan-a Vates 🪐 XCP-ng Team @gb.123

                          @gb-123 said in XOSTOR hyperconvergence preview:

                          Are there any other packages in xcp-ng requiring python2?

                          Probably yes, for a few packages.

                          Would it be advisable to change the default python to python3 (by adding an alias in bashrc)?

                          I don't recommend doing that.

                          Or do you think I should create a script which in turn runs the python3 command only for linstor?

                          It's probably the best solution for the moment. Our ideal goal would obviously be to have only Python 3:

                          • But currently XCP-ng 8.3 is still in development.
                          • And there are still things to check before thinking about globally changing the default Python interpreter.
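                          A wrapper along the lines gb.123 describes can be sketched in two lines of shell. The /tmp/linstor3 path and name are placeholders for illustration (a real install would more likely go in /usr/local/bin):

```shell
# Sketch: a wrapper that forces python3 for the linstor client only,
# leaving the system-wide default interpreter untouched.
# /tmp/linstor3 is a placeholder path for this example.
printf '%s\n' '#!/bin/sh' 'exec python3 /usr/bin/linstor "$@"' > /tmp/linstor3
chmod +x /tmp/linstor3
head -1 /tmp/linstor3
```

                          Calling /tmp/linstor3 resource list would then behave like python3 /usr/bin/linstor resource list, without changing the default python anywhere else.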
                          • gb.123 @ronan-a

                            @ronan-a

                            Any idea why I am getting:

                            Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
                            

                            I installed LINSTOR with replication set to 2 and the VM is running fine. I am executing this command from the dom0 of the pool master.

                            • Swen @gb.123

                              @gb-123 The pool master does not need to be the LINSTOR controller. Try this command on the other hosts; it will only work on the node which is currently the LINSTOR controller.

                              • olivierlambertO olivierlambert moved this topic from Feedback and requests on
                              • gb.123 @Swen

                                @Swen

                                Thanks so much. It seems that the controller is running on Node 2.

                                Btw, is there any way to control which node the controller runs on (without losing the data)? I remember installing on Node 1 first and then on Node 2, but it seems that Node 2 is actually running the controller.

                                I know the above is a noob question, but I just wanted to be sure that the LINSTOR controller actually moves to another node when one is not available.

                                • B bbruun referenced this topic on
                                • Maelstrom96 @ronan-a

                                  @ronan-a said in XOSTOR hyperconvergence preview:

                                  @Maelstrom96 We must update our documentation for that. This will probably require executing commands manually during an upgrade.

                                  Any news on that? We're still pretty much blocked until that's figured out.

                                  Also, any news on when it will be officially released?

                                  • limezest

                                    I have a 3-node cluster in my lab running 8.2.1 with thin-provisioned LVM on SATA SSDs.

                                    I changed the 10G management interfaces from untagged to VLAN tagged, and I did an emergency network reset following the changeover.
                                    Each node has only one 10G interface, which is used for all management, VM and storage traffic.

                                    Following the network reset, the cluster did not automatically recover, so I restarted my three nodes.

                                    Now the cluster is up and the nodes claim to be connected to the XOSTOR SR, but I cannot take snapshots or clone VMs, and some VMs fail to start with the following error:

                                    SR_BACKEND_FAILURE_1200(, Empty dev path for 19f26a13-a09e-4c38-8219-b0b6b2d4dc26, but definition "seems" to exist, )
                                    

                                    Any troubleshooting guidance is appreciated. Thanks in advance.

                                    [edit1]
                                    I am only able to run linstor commands such as 'linstor node list' on the node that is currently the linstor controller.

                                    If I try linstor node list (or any linstor command) on the satellite nodes, I get the error:

                                    Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
                                    

                                    linstor node interface list host1 gives me:

                                    host2       |   NetInterface     | IP            | Port   | EncryptionType
                                    + StltCon   |   default          | 10.10.10.11   | 3366   | PLAIN
                                    

                                    I am able to use 'linstor-kv-tool' to find the linstor volume that maps to each of my virtual disk images, but only on the controller.

                                    linstor-controller.service is running on host 2.
                                    linstor-satellite.service is running on hosts 0 and 1, but I don't see any process listening on 3366 in the output of netstat -tulpn.
                                    [/edit1]
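                                    For reference, the listener checks above can be scripted; 3366 is the satellite port visible in the StltCon output and 3370 is the controller port from the client error. A sketch using ss (counts will simply be 0 on a machine without LINSTOR running):

```shell
# Count TCP listeners on LINSTOR's default ports:
# 3366 (satellite, PLAIN) and 3370 (controller client API).
for port in 3366 3370; do
  listeners=$(ss -tln 2>/dev/null | grep -c ":$port " || true)
  echo "port $port: $listeners listener(s)"
done
```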

                                    • ronan-a Vates 🪐 XCP-ng Team @limezest

                                      @limezest Are you sure that you only have one linstor controller running?
                                      What's the output of linstor resource list? Same question for the /var/lib/linstor mountpoint on each host.

                                      Note: it's not surprising to get this error: Cannot assign requested address.
                                      You must specify linstor --controllers=<ips> <cmd> to execute a command from any host.
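                                      For example, with the controller on 10.10.10.11 (the StltCon address quoted earlier in this thread; substitute your own controller IP), the invocation would look like this. The snippet only builds and prints the command rather than executing it, since it is just a sketch:

```shell
# Build the client invocation for an explicit controller address.
# 10.10.10.11 is an example IP taken from earlier in the thread.
CONTROLLER_IP=10.10.10.11
LINSTOR_CMD="linstor --controllers=${CONTROLLER_IP} node list"
echo "$LINSTOR_CMD"
```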

                                      • ronan-a Vates 🪐 XCP-ng Team @Maelstrom96

                                        @Maelstrom96 The XCP-ng 8.3 LINSTOR version is not updated often, and we are fully focused on the stable 8.2 version.
                                        As a reminder, XCP-ng 8.3 is still in beta, so we can't write documentation now for updating LINSTOR between these versions: we still have important issues to fix and improvements to add that could impact and/or invalidate a migration process.

                                        • limezest @ronan-a

                                          @ronan-a Thanks for the reply.

                                          Node2 is currently the controller. All three nodes are currently running the satellite and monitor service.

                                          On nodes 0 and 1 (the satellite nodes) I see:

                                          /var/lib/linstor is not a mountpoint
                                          

                                          On node 2, the current controller node, I see:

                                          /var/lib/linstor is a mountpoint
                                          

                                          Currently, some VDIs are accessible, others are not.

                                          For example, when I try to start my XOA VM I get the following error. I get the same error no matter which node I try to start the VM on:

                                          XOSTOR: POST_ATTACH_SCAN_FAILED","2","Failed to scan SR 7c0374c1-17d4-a52b-7c2a-a5ca74e1db66 after attaching, error The SR is not available [opterr=Database is not mounted]
                                          

                                          There is no entry for a UUID beginning with 7c03 in the output of linstor-kv-tool.

                                          
                                           ~~node0~~
                                           [11:44 node0 ~]# systemctl status linstor*
                                           ● linstor-monitor.service - LINSTOR Monitor
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                                              Active: active (running) since Wed 2023-11-15 22:07:50 EST; 13h ago
                                            Main PID: 1867 (linstor-monitor)
                                              CGroup: /system.slice/linstor-monitor.service
                                                      └─1867 /opt/xensource/libexec/linstor-monitord
                                           
                                           ● linstor-satellite.service - LINSTOR Satellite Service
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                                             Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                                      └─override.conf
                                              Active: active (running) since Wed 2023-11-15 22:07:59 EST; 13h ago
                                            Main PID: 4786 (java)
                                              CGroup: /system.slice/linstor-satellite.service
                                                      ├─4786 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                                      ├─5342 drbdsetup events2 all
                                                      └─6331 /usr/sbin/dmeventd
                                           
                                           ~~node1~~
                                           [11:44 node1 ~]# systemctl status linstor*
                                           ● linstor-satellite.service - LINSTOR Satellite Service
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                                             Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                                      └─override.conf
                                              Active: active (running) since Wed 2023-11-15 15:59:10 EST; 19h ago
                                            Main PID: 5035 (java)
                                              CGroup: /system.slice/linstor-satellite.service
                                                      ├─5035 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                                      └─5585 drbdsetup events2 all
                                           
                                           ● linstor-monitor.service - LINSTOR Monitor
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                                              Active: active (running) since Wed 2023-11-15 15:57:35 EST; 19h ago
                                            Main PID: 1825 (linstor-monitor)
                                              CGroup: /system.slice/linstor-monitor.service
                                                      └─1825 /opt/xensource/libexec/linstor-monitord
                                           
                                           
                                           ~~node2~~
                                           [11:38 node2 ~]# systemctl status linstor*
                                           ● linstor-satellite.service - LINSTOR Satellite Service
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                                             Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                                      └─override.conf
                                              Active: active (running) since Wed 2023-11-15 15:49:43 EST; 19h ago
                                            Main PID: 5212 (java)
                                              CGroup: /system.slice/linstor-satellite.service
                                                      ├─5212 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                                      └─5439 drbdsetup events2 all
                                           
                                           ● linstor-monitor.service - LINSTOR Monitor
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                                              Active: active (running) since Wed 2023-11-15 15:48:11 EST; 19h ago
                                            Main PID: 1830 (linstor-monitor)
                                              CGroup: /system.slice/linstor-monitor.service
                                                      └─1830 /opt/xensource/libexec/linstor-monitord
                                           
                                           ● linstor-controller.service - drbd-reactor controlled linstor-controller
                                              Loaded: loaded (/usr/lib/systemd/system/linstor-controller.service; disabled; vendor preset: disabled)
                                             Drop-In: /run/systemd/system/linstor-controller.service.d
                                                      └─reactor.conf
                                              Active: active (running) since Wed 2023-11-15 22:04:11 EST; 13h ago
                                            Main PID: 1512 (java)
                                              CGroup: /system.slice/linstor-controller.service
                                                      └─1512 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Controller --logs=/var/log/linstor-controller --config-directo...
                                          
                                          [11:37 node2 ~]# linstor resource list
                                           ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
                                           ┊ ResourceName                                    ┊ Node  ┊ Port ┊ Usage  ┊ Conns                     ┊      State ┊ CreatedOn           ┊
                                           ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
                                           ┊ xcp-persistent-database                         ┊ node0 ┊ 7000 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-08-30 13:53:54 ┊
                                           ┊ xcp-persistent-database                         ┊ node1 ┊ 7000 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-08-30 13:53:49 ┊
                                           ┊ xcp-persistent-database                         ┊ node2 ┊ 7000 ┊ InUse  ┊ Ok                        ┊   UpToDate ┊ 2023-08-30 13:53:54 ┊
                                           ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node0 ┊ 7004 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:53:24 ┊
                                           ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node1 ┊ 7004 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:27 ┊
                                           ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node2 ┊ 7004 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:27 ┊
                                           ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node0 ┊ 7012 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 16:13:15 ┊
                                           ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node1 ┊ 7012 ┊ InUse  ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:20 ┊
                                           ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node2 ┊ 7012 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:20 ┊
                                           ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node0 ┊ 7005 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:53:31 ┊
                                           ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node1 ┊ 7005 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:35 ┊
                                           ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node2 ┊ 7005 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:35 ┊
                                           ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node0 ┊ 7022 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-11-14 13:00:37 ┊
                                           ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node1 ┊ 7022 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-11-14 13:00:37 ┊
                                           ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node2 ┊ 7022 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-11-14 13:00:33 ┊
                                           ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node0 ┊ 7009 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 15:28:56 ┊
                                           ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node1 ┊ 7009 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:00 ┊
                                           ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node2 ┊ 7009 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:00 ┊
                                           ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node0 ┊ 7017 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:28:26 ┊
                                           ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node1 ┊ 7017 ┊        ┊                           ┊    Unknown ┊ 2023-09-07 22:28:23 ┊
                                           ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node2 ┊ 7017 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:28:26 ┊
                                           ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node0 ┊ 7013 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 16:13:24 ┊
                                           ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node1 ┊ 7013 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:28 ┊
                                           ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node2 ┊ 7013 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:28 ┊
                                           ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node0 ┊ 7003 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:28:20 ┊
                                           ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node1 ┊ 7003 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 11:28:16 ┊
                                           ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node2 ┊ 7003 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:28:20 ┊
                                           ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node0 ┊ 7016 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:25:53 ┊
                                           ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node1 ┊ 7016 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-09-07 22:25:50 ┊
                                           ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node2 ┊ 7016 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:25:53 ┊
                                           ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node0 ┊ 7014 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-06 14:55:13 ┊
                                           ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node1 ┊ 7014 ┊        ┊                           ┊    Unknown ┊ 2023-09-06 14:55:08 ┊
                                           ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node2 ┊ 7014 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-06 14:55:13 ┊
                                           ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node0 ┊ 7023 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-31 11:53:28 ┊
                                           ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node1 ┊ 7023 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-31 11:53:28 ┊
                                           ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node2 ┊ 7023 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-10-31 11:53:24 ┊
                                           ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node0 ┊ 7026 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-11-14 13:00:44 ┊
                                           ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node1 ┊ 7026 ┊ InUse  ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-11-14 13:00:44 ┊
                                           ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node2 ┊ 7026 ┊        ┊                           ┊    Unknown ┊ 2023-11-14 13:00:40 ┊
                                           ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node0 ┊ 7020 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-30 09:56:20 ┊
                                           ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node1 ┊ 7020 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-30 09:56:20 ┊
                                           ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node2 ┊ 7020 ┊        ┊                           ┊    Unknown ┊ 2023-10-30 09:56:16 ┊
                                           ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node0 ┊ 7007 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:54:00 ┊
                                           ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node1 ┊ 7007 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:54:04 ┊
                                           ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node2 ┊ 7007 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:54:04 ┊
                                          โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node0 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:55:33 โ”Š
                                          โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node1 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:55:38 โ”Š
                                          โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node2 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:55:38 โ”Š
                                          โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node0 โ”Š 7019 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-05 10:41:37 โ”Š
                                          โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node1 โ”Š 7019 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-05 10:41:37 โ”Š
                                          โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node2 โ”Š 7019 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-10-05 10:41:33 โ”Š
                                          โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node0 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:28:35 โ”Š
                                          โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node1 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-09-07 22:28:32 โ”Š
                                          โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node2 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:28:35 โ”Š
                                          โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node0 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:26:57 โ”Š
                                          โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node1 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-09-05 11:26:53 โ”Š
                                          โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node2 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:26:57 โ”Š
                                          โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node0 โ”Š 7015 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:25:46 โ”Š
                                          โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node1 โ”Š 7015 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-09-07 22:25:42 โ”Š
                                          โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node2 โ”Š 7015 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:25:46 โ”Š
                                          โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node0 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-30 09:56:28 โ”Š
                                          โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node1 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-30 09:56:28 โ”Š
                                          โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node2 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-10-30 09:56:24 โ”Š
                                          โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node0 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 15:29:02 โ”Š
                                          โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node1 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:06 โ”Š
                                          โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node2 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:06 โ”Š
                                          โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node0 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:53:52 โ”Š
                                          โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node1 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:56 โ”Š
                                          โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node2 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:56 โ”Š
                                          โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node0 โ”Š 7011 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 16:12:48 โ”Š
                                          โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node1 โ”Š 7011 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:12:52 โ”Š
                                          โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node2 โ”Š 7011 โ”Š InUse  โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:12:52 โ”Š
                                          โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node0 โ”Š 7001 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-08-30 16:16:32 โ”Š
                                          โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node1 โ”Š 7001 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-08-30 16:16:35 โ”Š
                                          โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node2 โ”Š 7001 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-08-30 16:16:30 โ”Š
                                          โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
                                          

                                          [edit1]
                                          More interesting results:

                                          One of my VMs has multiple VDIs. The OS disk mounts fine. The second disk can only be mounted read-only, via mount -o ro,noload /dev/xvdb /mnt/example

                                          The second disk, xcp-volume-5283a6e0..., shows status Unknown in the linstor resource list output above

                                          ~~xvda~~
                                          "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"distro_OS\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                                          "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/not-exists": "0", 
                                          "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/volume-name": "xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26", 
                                          
                                          ~~xvdb~~ 
                                          "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"type\": \"user\", \"name_description\": \"\", \"name_label\": \"distro_repos\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}"
                                          "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/not-exists": "0", 
                                          "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/volume-name": "xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3", 
                                          

                                          [/edit1]

                                          [edit2]

                                          I can create VMs that have no disks.
                                          I cannot create new VMs with VDIs on our XOSTOR SR.
                                          I cannot add XOSTOR-hosted disks to existing VMs.
                                          I cannot take snapshots.
                                          I cannot revert to earlier snapshots.
                                          I get a not-particularly-helpful error from XCP-ng Center: "The attempt to create a VDI failed." Any recommendations on where I should look for related logs?

                                          [/edit2]
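                                          For the record, the places I know to check first on an XCP-ng host (a rough sketch; the two paths below are the standard XCP-ng log locations):

```shell
# Rough sketch: standard XCP-ng log locations worth grepping after a failed
# VDI create. SMlog is the storage manager's log; xensource.log is XAPI's.
logs="/var/log/SMlog /var/log/xensource.log"
for f in $logs; do
    if [ -r "$f" ]; then
        echo "== $f =="
        grep -i 'vdi\|error' "$f" | tail -n 20
    else
        echo "== $f (not readable on this machine) =="
    fi
done
# LINSTOR additionally keeps its own reports:
#   linstor error-reports list
```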

                                          [edit3]
                                          In case it's relevant, here are the currently installed versions:

                                          # yum list installed | grep -i linstor
                                          drbd.x86_64                     9.25.0-1.el7                @xcp-ng-linstor
                                          drbd-bash-completion.x86_64     9.25.0-1.el7                @xcp-ng-linstor
                                          drbd-pacemaker.x86_64           9.25.0-1.el7                @xcp-ng-linstor
                                          drbd-reactor.x86_64             1.2.0-1                     @xcp-ng-linstor
                                          drbd-udev.x86_64                9.25.0-1.el7                @xcp-ng-linstor
                                          drbd-utils.x86_64               9.25.0-1.el7                @xcp-ng-linstor
                                          drbd-xen.x86_64                 9.25.0-1.el7                @xcp-ng-linstor
                                          java-11-openjdk-headless.x86_64 1:11.0.20.0.8-1.el7_9       @xcp-ng-linstor
                                                                                                      @xcp-ng-linstor
                                          linstor-client.noarch           1.19.0-1                    @xcp-ng-linstor
                                          linstor-common.noarch           1.24.2-1.el7                @xcp-ng-linstor
                                          linstor-controller.noarch       1.24.2-1.el7                @xcp-ng-linstor
                                          linstor-satellite.noarch        1.24.2-1.el7                @xcp-ng-linstor
                                          python-linstor.noarch           1.19.0-1                    @xcp-ng-linstor
                                          sm.x86_64                       2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                                                      @xcp-ng-linstor
                                          sm-rawhba.x86_64                2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                                                      @xcp-ng-linstor
                                          tzdata.noarch                   2023c-1.el7                 @xcp-ng-linstor
                                          tzdata-java.noarch              2023c-1.el7                 @xcp-ng-linstor
                                          xcp-ng-linstor.noarch           1.1-3.xcpng8.2              @xcp-ng-updates
                                          xcp-ng-release-linstor.noarch   1.3-1.xcpng8.2              @xcp-ng-updates
                                          

                                          [/edit3]

                                          • L Offline
                                            limezest
                                            last edited by limezest

                                            So, controller failover works. I used the instructions here to test drbd-reactor failover: https://linbit.com/blog/drbd-reactor-promoter/

                                            I'm seeing an error in linstor error-reports list that has to do with how LINSTOR queries free space on thin-provisioned LVM storage. It traces back to this ticket: https://github.com/LINBIT/linstor-server/issues/80

                                            ERROR REPORT 65558791-33400-000000
                                            
                                            ============================================================
                                            
                                            Application:                        LINBITยฎ LINSTOR
                                            Module:                             Satellite
                                            Version:                            1.24.2
                                            Build ID:                           adb19ca96a07039401023410c1ea116f09929295
                                            Build time:                         2023-08-30T05:15:08+00:00
                                            Error time:                         2023-11-15 22:08:11
                                            Node:                               node0
                                            
                                            ============================================================
                                            
                                            Reported error:
                                            ===============
                                            
                                            Description:
                                                Expected 3 columns, but got 2
                                            Cause:
                                                Failed to parse line:   thin_device;23044370202624;
                                            Additional information:
                                                External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                                            
                                            Category:                           LinStorException
                                            Class name:                         StorageException
                                            Class canonical name:               com.linbit.linstor.storage.StorageException
                                            Generated at:                       Method 'getThinFreeSize', Source file 'LvmUtils.java', Line #399
                                            
                                            Error message:                      Unable to parse free thin sizes
                                            
                                            ErrorContext:   Description: Expected 3 columns, but got 2
                                              Cause:       Failed to parse line:   thin_device;23044370202624;
                                              Details:     External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                                            
                                            
                                            Call backtrace:
                                            
                                                Method                                   Native Class:Line number
                                                getThinFreeSize                          N      com.linbit.linstor.layer.storage.lvm.utils.LvmUtils:399
                                                getSpaceInfo                             N      com.linbit.linstor.layer.storage.lvm.LvmThinProvider:406
                                                getStoragePoolSpaceInfo                  N      com.linbit.linstor.layer.storage.StorageLayer:441
                                                getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1116
                                                getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:1816
                                                getStoragePoolSpaceInfo                  N      com.linbit.linstor.core.apicallhandler.StltApiCallHandlerUtils:325
                                                applyChanges                             N      com.linbit.linstor.core.apicallhandler.StltStorPoolApiCallHandler:274
                                                applyFullSync                            N      com.linbit.linstor.core.apicallhandler.StltApiCallHandler:330
                                                execute                                  N      com.linbit.linstor.api.protobuf.FullSync:113
                                                executeNonReactive                       N      com.linbit.linstor.proto.CommonMessageProcessor:534
                                                lambda$execute$14                        N      com.linbit.linstor.proto.CommonMessageProcessor:509
                                                doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:149
                                                lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:76
                                                call                                     N      reactor.core.publisher.MonoCallable:72
                                                trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:127
                                                subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
                                                subscribe                                N      reactor.core.publisher.Flux:8759
                                                onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:195
                                                request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2545
                                                onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
                                                subscribe                                N      reactor.core.publisher.MonoJust:55
                                                subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
                                                subscribe                                N      reactor.core.publisher.Flux:8773
                                                onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                                                slowPath                                 N      reactor.core.publisher.FluxArray$ArraySubscription:127
                                                request                                  N      reactor.core.publisher.FluxArray$ArraySubscription:100
                                                onSubscribe                              N      reactor.core.publisher.FluxFlatMap$FlatMapMain:371
                                                subscribe                                N      reactor.core.publisher.FluxMerge:70
                                                subscribe                                N      reactor.core.publisher.Flux:8773
                                                onComplete                               N      reactor.core.publisher.FluxConcatArray$ConcatArraySubscriber:258
                                                subscribe                                N      reactor.core.publisher.FluxConcatArray:78
                                                subscribe                                N      reactor.core.publisher.InternalFluxOperator:62
                                                subscribe                                N      reactor.core.publisher.FluxDefer:54
                                                subscribe                                N      reactor.core.publisher.Flux:8773
                                                onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                                                drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:453
                                                drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:724
                                                onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:256
                                                drainFused                               N      reactor.core.publisher.SinkManyUnicast:319
                                                drain                                    N      reactor.core.publisher.SinkManyUnicast:362
                                                tryEmitNext                              N      reactor.core.publisher.SinkManyUnicast:237
                                                tryEmitNext                              N      reactor.core.publisher.SinkManySerialized:100
                                                processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:392
                                                doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:227
                                                lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:164
                                                onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:185
                                                runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:440
                                                run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:527
                                                call                                     N      reactor.core.scheduler.WorkerTask:84
                                                call                                     N      reactor.core.scheduler.WorkerTask:37
                                                run                                      N      java.util.concurrent.FutureTask:264
                                                run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
                                                runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1128
                                                run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:628
                                                run                                      N      java.lang.Thread:829
                                            
                                            
                                            END OF ERROR REPORT.
                                            
                                            

                                            Note that the line LINSTOR failed to parse, thin_device;23044370202624;, has an empty third (data_percent) column — presumably the trailing empty field is dropped when the line is split, leaving only 2 of the expected 3 columns. I also think the vgs query is improperly formatted for this version of device-mapper-persistent-data:

                                            [12:31 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator ;
                                            vgs: option '--separator' requires an argument
                                              Error during parsing of command line.
                                            

                                            but it works if formatted like this:

                                            [12:32 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator=";"
                                              MGT;4194304B;
                                              VHD-d959f7a9-2bd1-4ac5-83af-1724336a73d0;532676608B;
                                              thin_device;23044370202624B;6.96
                                              xcp-persistent-database_00000;1077936128B;13.85
                                              xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a_00000;86083895296B;25.20
                                              xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e_00000;215197155328B;2.35
                                              xcp-volume-43467341-30c8-4fec-b807-81334d0dd309_00000;215197155328B;2.52
                                              xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3_00000;2194921226240B;69.30
                                              xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92_00000;86088089600B;0.60
                                              xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028_00000;86088089600B;0.06
                                              xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206_00000;86092283904B;24.81
                                              xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d_00000;86092283904B;0.04
                                              xcp-volume-81809c66-5763-4558-919a-591b864d3f22_00000;215197155328B;4.66
                                              xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7_00000;86096478208B;0.04
                                              xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460_00000;215205543936B;6.12
                                              xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213_00000;215209738240B;0.02
                                              xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627_00000;215209738240B;2.19
                                            
                                            

                                            In fact, vgs --separator accepts pretty much any character except a bare semicolon — though that turns out to be the shell's doing, not vgs's: an unquoted ; terminates the command, so --separator never receives its argument. Maybe the parse failure itself is a problem with this version of LVM2?
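                                            The missing-argument error is easy to demonstrate without vgs at all — it's plain shell word-splitting:

```shell
# An unquoted ';' terminates the command, so the option loses its argument:
set -- --separator ;   # the ';' ends 'set' right here
echo "unquoted: $# arg(s): $*"    # → unquoted: 1 arg(s): --separator

# Quoting the semicolon (or using --separator=';') keeps it as an argument:
set -- --separator ';'
echo "quoted:   $# arg(s): $*"    # → quoted:   2 arg(s): --separator ;
```

(LINSTOR execs vgs directly rather than through a shell, so this only bites when re-running the command by hand.)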

                                            [12:37 node0 ~]# yum info device-mapper-persistent-data.x86_64
                                            Loaded plugins: fastestmirror
                                            Loading mirror speeds from cached hostfile
                                            Excluding mirror: updates.xcp-ng.org
                                             * xcp-ng-base: mirrors.xcp-ng.org
                                            Excluding mirror: updates.xcp-ng.org
                                             * xcp-ng-updates: mirrors.xcp-ng.org
                                            Installed Packages
                                            Name        : device-mapper-persistent-data
                                            Arch        : x86_64
                                            Version     : 0.7.3
                                            Release     : 3.el7
                                            Size        : 1.2 M
                                            Repo        : installed
                                            From repo   : install
                                            
                                            