XCP-ng

    XOSTOR hyperconvergence preview

    • gb.123 @Swen (last edited by gb.123)

      @Swen

      Thanks so much. It seems that the controller is running on Node 2.

      Btw, is there any way to control which node the controller is installed on (without losing the data)? I remember installing on Node 1 first and then on Node 2, but it seems that Node 2 is actually running the controller.

      I know the above is a noob question, but I just wanted to be sure whether the linstor controller actually moves to another node when one is not available.
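
      A quick way to check which node currently runs the controller (a minimal sketch, assuming root SSH between the hosts; the node names are placeholders):

          # Prints "active" on the host that runs the linstor controller,
          # "inactive" or "unknown" on the others.
          for h in node1 node2; do
              echo -n "$h: "
              ssh "$h" systemctl is-active linstor-controller
          done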

    • Maelstrom96 @ronan-a (last edited by Maelstrom96)

        @ronan-a said in XOSTOR hyperconvergence preview:

        @Maelstrom96 We must update our documentation for that. This will probably require executing commands manually during an upgrade.

        Any news on that? We're still pretty much blocked until that's figured out.

        Also, any news on when it will be officially released?

        • limezest (last edited by limezest)

          I have a 3 node cluster in my lab running 8.2.1 with thin provisioned LVM on SATA SSDs.

          I changed the 10G management interfaces from untagged to VLAN tagged, and I did an emergency network reset following the changeover.
          Each node has only one 10G interface, which is used for all management, VM and storage traffic.

          Following the network reset, the cluster did not automatically recover, so I restarted my three nodes.

          Now the cluster is up and the nodes claim to be connected to the XOSTOR SR, but I cannot take snapshots or clone VMs, and some VMs fail to start with the following error:

          SR_BACKEND_FAILURE_1200(, Empty dev path for 19f26a13-a09e-4c38-8219-b0b6b2d4dc26, but definition "seems" to exist, )
          

          Any troubleshooting guidance is appreciated. Thanks in advance.

          [edit1]
          I am only able to run linstor commands such as 'linstor node list' on the node that is currently the linstor controller.

          If I try linstor node list (or any linstor command) on the satellite nodes, I get the error:

          Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
          

          linstor node interface list host1 gives me

          host2       |   NetInterface     | IP            | Port   | EncryptionType
          + StltCon   |   default          | 10.10.10.11   | 3366   | PLAIN
          

          I am able to use 'linstor-kv-tool' to find the linstor volume that maps to each of my virtual disk images, but only on the controller.

          linstor-controller.service is running on host 2
          linstor-satellite.service is running on hosts 0 and 1, but I don't see any process listening on 3366 in netstat -tulpn
          [/edit1]

          • ronan-a (Vates 🪐 XCP-ng Team) @limezest

            @limezest Are you sure that you only have one linstor controller running?
            What's the output of linstor resource list? Also, is /var/lib/linstor a mountpoint on each host?

            Note: that error (Cannot assign requested address) is not a surprise.
            You must specify linstor --controllers=<ips> <cmd> to execute a command from any host.
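
            For example (a sketch; the controller IP is taken from the interface listing above):

                # Run any linstor command from a satellite node by pointing the client at the controller
                linstor --controllers=10.10.10.11 node list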

            • ronan-a (Vates 🪐 XCP-ng Team) @Maelstrom96

              @Maelstrom96 The XCP-ng 8.3 LINSTOR version is not updated often, and we are totally focused on the stable 8.2 version.
              As a reminder, XCP-ng 8.3 is still in beta, so we can't write documentation yet for updating LINSTOR between these versions: we still have important issues to fix and improvements to add that could impact and/or invalidate a migration process.

              • limezest @ronan-a (last edited by limezest)

                @ronan-a Thanks for the reply.

                Node2 is currently the controller. All three nodes are currently running the satellite and monitor service.

                On nodes 0 and 1 (the satellite nodes) I see:

                /var/lib/linstor is not a mountpoint
                

                On node 2, the current controller node, I see:

                /var/lib/linstor is a mountpoint
                

                Currently, some VDIs are accessible; others are not.

                For example, when I try to start my XOA VM I get the following error, no matter which node I try to start the VM on:

                XOSTOR: POST_ATTACH_SCAN_FAILED","2","Failed to scan SR 7c0374c1-17d4-a52b-7c2a-a5ca74e1db66 after attaching, error The SR is not available [opterr=Database is not mounted]
                

                There is no entry for this UUID beginning with 7c03 in the output of linstor-kv-tool.
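
                The [opterr=Database is not mounted] part points at the shared database volume. A sanity check (a sketch; in this setup the xcp-persistent-database resource backs /var/lib/linstor on the controller node):

                    # Should be InUse on exactly one node: the controller
                    linstor resource list -r xcp-persistent-database

                    # On the controller node this should report "is a mountpoint"
                    mountpoint /var/lib/linstor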

                
                ~~node0~~
                [11:44 node0 ~]# systemctl status linstor*
                โ— linstor-monitor.service - LINSTOR Monitor
                   Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                   Active: active (running) since Wed 2023-11-15 22:07:50 EST; 13h ago
                 Main PID: 1867 (linstor-monitor)
                   CGroup: /system.slice/linstor-monitor.service
                           โ””โ”€1867 /opt/xensource/libexec/linstor-monitord
                
                โ— linstor-satellite.service - LINSTOR Satellite Service
                   Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                  Drop-In: /etc/systemd/system/linstor-satellite.service.d
                           โ””โ”€override.conf
                   Active: active (running) since Wed 2023-11-15 22:07:59 EST; 13h ago
                 Main PID: 4786 (java)
                   CGroup: /system.slice/linstor-satellite.service
                           โ”œโ”€4786 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                           โ”œโ”€5342 drbdsetup events2 all
                           โ””โ”€6331 /usr/sbin/dmeventd
                
                ~~node1~~
                [11:44 node1 ~]# systemctl status linstor*
                โ— linstor-satellite.service - LINSTOR Satellite Service
                   Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                  Drop-In: /etc/systemd/system/linstor-satellite.service.d
                           โ””โ”€override.conf
                   Active: active (running) since Wed 2023-11-15 15:59:10 EST; 19h ago
                 Main PID: 5035 (java)
                   CGroup: /system.slice/linstor-satellite.service
                           โ”œโ”€5035 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                           โ””โ”€5585 drbdsetup events2 all
                
                โ— linstor-monitor.service - LINSTOR Monitor
                   Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                   Active: active (running) since Wed 2023-11-15 15:57:35 EST; 19h ago
                 Main PID: 1825 (linstor-monitor)
                   CGroup: /system.slice/linstor-monitor.service
                           โ””โ”€1825 /opt/xensource/libexec/linstor-monitord
                
                
                ~~node2~~
                [11:38 node2 ~]# systemctl status linstor*
                โ— linstor-satellite.service - LINSTOR Satellite Service
                   Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                  Drop-In: /etc/systemd/system/linstor-satellite.service.d
                           โ””โ”€override.conf
                   Active: active (running) since Wed 2023-11-15 15:49:43 EST; 19h ago
                 Main PID: 5212 (java)
                   CGroup: /system.slice/linstor-satellite.service
                           โ”œโ”€5212 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                           โ””โ”€5439 drbdsetup events2 all
                
                โ— linstor-monitor.service - LINSTOR Monitor
                   Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                   Active: active (running) since Wed 2023-11-15 15:48:11 EST; 19h ago
                 Main PID: 1830 (linstor-monitor)
                   CGroup: /system.slice/linstor-monitor.service
                           โ””โ”€1830 /opt/xensource/libexec/linstor-monitord
                
                โ— linstor-controller.service - drbd-reactor controlled linstor-controller
                   Loaded: loaded (/usr/lib/systemd/system/linstor-controller.service; disabled; vendor preset: disabled)
                  Drop-In: /run/systemd/system/linstor-controller.service.d
                           โ””โ”€reactor.conf
                   Active: active (running) since Wed 2023-11-15 22:04:11 EST; 13h ago
                 Main PID: 1512 (java)
                   CGroup: /system.slice/linstor-controller.service
                           โ””โ”€1512 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Controller --logs=/var/log/linstor-controller --config-directo...
                
                
                [11:37 node2 ~]# linstor resource list
                โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
                โ”Š ResourceName                                    โ”Š Node  โ”Š Port โ”Š Usage  โ”Š Conns                     โ”Š      State โ”Š CreatedOn           โ”Š
                โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก
                โ”Š xcp-persistent-database                         โ”Š node0 โ”Š 7000 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-08-30 13:53:54 โ”Š
                โ”Š xcp-persistent-database                         โ”Š node1 โ”Š 7000 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-08-30 13:53:49 โ”Š
                โ”Š xcp-persistent-database                         โ”Š node2 โ”Š 7000 โ”Š InUse  โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-08-30 13:53:54 โ”Š
                โ”Š xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd โ”Š node0 โ”Š 7004 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:53:24 โ”Š
                โ”Š xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd โ”Š node1 โ”Š 7004 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:27 โ”Š
                โ”Š xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd โ”Š node2 โ”Š 7004 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:27 โ”Š
                โ”Š xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 โ”Š node0 โ”Š 7012 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 16:13:15 โ”Š
                โ”Š xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 โ”Š node1 โ”Š 7012 โ”Š InUse  โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:13:20 โ”Š
                โ”Š xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 โ”Š node2 โ”Š 7012 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:13:20 โ”Š
                โ”Š xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 โ”Š node0 โ”Š 7005 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:53:31 โ”Š
                โ”Š xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 โ”Š node1 โ”Š 7005 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:35 โ”Š
                โ”Š xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 โ”Š node2 โ”Š 7005 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:35 โ”Š
                โ”Š xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 โ”Š node0 โ”Š 7022 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-11-14 13:00:37 โ”Š
                โ”Š xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 โ”Š node1 โ”Š 7022 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-11-14 13:00:37 โ”Š
                โ”Š xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 โ”Š node2 โ”Š 7022 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-11-14 13:00:33 โ”Š
                โ”Š xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 โ”Š node0 โ”Š 7009 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 15:28:56 โ”Š
                โ”Š xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 โ”Š node1 โ”Š 7009 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:00 โ”Š
                โ”Š xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 โ”Š node2 โ”Š 7009 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:00 โ”Š
                โ”Š xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 โ”Š node0 โ”Š 7017 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:28:26 โ”Š
                โ”Š xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 โ”Š node1 โ”Š 7017 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-09-07 22:28:23 โ”Š
                โ”Š xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 โ”Š node2 โ”Š 7017 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:28:26 โ”Š
                โ”Š xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b โ”Š node0 โ”Š 7013 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 16:13:24 โ”Š
                โ”Š xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b โ”Š node1 โ”Š 7013 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:13:28 โ”Š
                โ”Š xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b โ”Š node2 โ”Š 7013 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:13:28 โ”Š
                โ”Š xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 โ”Š node0 โ”Š 7003 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:28:20 โ”Š
                โ”Š xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 โ”Š node1 โ”Š 7003 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 11:28:16 โ”Š
                โ”Š xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 โ”Š node2 โ”Š 7003 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:28:20 โ”Š
                โ”Š xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 โ”Š node0 โ”Š 7016 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:25:53 โ”Š
                โ”Š xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 โ”Š node1 โ”Š 7016 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-09-07 22:25:50 โ”Š
                โ”Š xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 โ”Š node2 โ”Š 7016 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:25:53 โ”Š
                โ”Š xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 โ”Š node0 โ”Š 7014 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-06 14:55:13 โ”Š
                โ”Š xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 โ”Š node1 โ”Š 7014 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-09-06 14:55:08 โ”Š
                โ”Š xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 โ”Š node2 โ”Š 7014 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-06 14:55:13 โ”Š
                โ”Š xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 โ”Š node0 โ”Š 7023 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-31 11:53:28 โ”Š
                โ”Š xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 โ”Š node1 โ”Š 7023 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-31 11:53:28 โ”Š
                โ”Š xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 โ”Š node2 โ”Š 7023 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-10-31 11:53:24 โ”Š
                โ”Š xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 โ”Š node0 โ”Š 7026 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-11-14 13:00:44 โ”Š
                โ”Š xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 โ”Š node1 โ”Š 7026 โ”Š InUse  โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-11-14 13:00:44 โ”Š
                โ”Š xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 โ”Š node2 โ”Š 7026 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-11-14 13:00:40 โ”Š
                โ”Š xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 โ”Š node0 โ”Š 7020 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-30 09:56:20 โ”Š
                โ”Š xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 โ”Š node1 โ”Š 7020 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-30 09:56:20 โ”Š
                โ”Š xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 โ”Š node2 โ”Š 7020 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-10-30 09:56:16 โ”Š
                โ”Š xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe โ”Š node0 โ”Š 7007 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:54:00 โ”Š
                โ”Š xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe โ”Š node1 โ”Š 7007 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:54:04 โ”Š
                โ”Š xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe โ”Š node2 โ”Š 7007 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:54:04 โ”Š
                โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node0 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:55:33 โ”Š
                โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node1 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:55:38 โ”Š
                โ”Š xcp-volume-776758d5-503c-4dac-9d83-169be6470075 โ”Š node2 โ”Š 7008 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:55:38 โ”Š
                โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node0 โ”Š 7019 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-05 10:41:37 โ”Š
                โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node1 โ”Š 7019 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-10-05 10:41:37 โ”Š
                โ”Š xcp-volume-81809c66-5763-4558-919a-591b864d3f22 โ”Š node2 โ”Š 7019 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-10-05 10:41:33 โ”Š
                โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node0 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:28:35 โ”Š
                โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node1 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-09-07 22:28:32 โ”Š
                โ”Š xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d โ”Š node2 โ”Š 7018 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-07 22:28:35 โ”Š
                โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node0 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:26:57 โ”Š
                โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node1 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-09-05 11:26:53 โ”Š
                โ”Š xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e โ”Š node2 โ”Š 7002 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 11:26:57 โ”Š
                โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node0 โ”Š 7015 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:25:46 โ”Š
                โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node1 โ”Š 7015 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-09-07 22:25:42 โ”Š
                โ”Š xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 โ”Š node2 โ”Š 7015 โ”Š Unused โ”Š Connecting(node1)         โ”Š   UpToDate โ”Š 2023-09-07 22:25:46 โ”Š
                โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node0 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-30 09:56:28 โ”Š
                โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node1 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-10-30 09:56:28 โ”Š
                โ”Š xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 โ”Š node2 โ”Š 7021 โ”Š Unused โ”Š Ok                        โ”Š TieBreaker โ”Š 2023-10-30 09:56:24 โ”Š
                โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node0 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 15:29:02 โ”Š
                โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node1 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:06 โ”Š
                โ”Š xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe โ”Š node2 โ”Š 7010 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 15:29:06 โ”Š
                โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node0 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 14:53:52 โ”Š
                โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node1 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:56 โ”Š
                โ”Š xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 โ”Š node2 โ”Š 7006 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 14:53:56 โ”Š
                โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node0 โ”Š 7011 โ”Š Unused โ”Š Ok                        โ”Š   Diskless โ”Š 2023-09-05 16:12:48 โ”Š
                โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node1 โ”Š 7011 โ”Š Unused โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:12:52 โ”Š
                โ”Š xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 โ”Š node2 โ”Š 7011 โ”Š InUse  โ”Š Ok                        โ”Š   UpToDate โ”Š 2023-09-05 16:12:52 โ”Š
                โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node0 โ”Š 7001 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-08-30 16:16:32 โ”Š
                โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node1 โ”Š 7001 โ”Š Unused โ”Š Connecting(node2)         โ”Š   UpToDate โ”Š 2023-08-30 16:16:35 โ”Š
                โ”Š xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a โ”Š node2 โ”Š 7001 โ”Š        โ”Š                           โ”Š    Unknown โ”Š 2023-08-30 16:16:30 โ”Š
                โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
                

                [edit1]
                More interesting results:

                One of my VMs has multiple VDIs. The OS disk loads fine. The second disk can only be mounted read-only, via mount -o ro,noload /dev/xvdb /mnt/example.

                The second disk, xcp-volume-5283a6e0..., has status Unknown in linstor resource list.

                ~~xvda~~
                "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"distro_OS\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/not-exists": "0", 
                "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/volume-name": "xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26", 
                
                ~~xvdb~~ 
                "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"type\": \"user\", \"name_description\": \"\", \"name_label\": \"distro_repos\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}"
                "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/not-exists": "0", 
                "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/volume-name": "xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3", 
                

                [/edit1]

                [edit2]

                I can make VMs that have no disks.
                I cannot create new VMs with VDIs on our XOSTOR SR.
                I cannot create XOSTOR-hosted disks on existing VMs.
                I cannot make snapshots.
                I cannot revert to earlier snapshots.
                I get a not particularly helpful error from XCP-ng Center: "The attempt to create a VDI failed." Any recommendations on where I should look for related logs?

                [/edit2]
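
                On XCP-ng the storage manager logs to SMlog, which is usually the first place to look for failed VDI operations; the LINSTOR log directories below are the ones the systemd units above point at (general pointers, not specific to this bug):

                    # SM (storage manager) log on the host that handled the operation
                    tail -f /var/log/SMlog

                    # LINSTOR controller / satellite logs
                    ls /var/log/linstor-controller /var/log/linstor-satellite

                    # LINSTOR's own error reports, from the controller node
                    linstor error-reports list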

                [edit3]
                In case it's relevant, here are the currently installed versions:

                # yum list installed | grep -i linstor
                drbd.x86_64                     9.25.0-1.el7                @xcp-ng-linstor
                drbd-bash-completion.x86_64     9.25.0-1.el7                @xcp-ng-linstor
                drbd-pacemaker.x86_64           9.25.0-1.el7                @xcp-ng-linstor
                drbd-reactor.x86_64             1.2.0-1                     @xcp-ng-linstor
                drbd-udev.x86_64                9.25.0-1.el7                @xcp-ng-linstor
                drbd-utils.x86_64               9.25.0-1.el7                @xcp-ng-linstor
                drbd-xen.x86_64                 9.25.0-1.el7                @xcp-ng-linstor
                java-11-openjdk-headless.x86_64 1:11.0.20.0.8-1.el7_9       @xcp-ng-linstor
                                                                            @xcp-ng-linstor
                linstor-client.noarch           1.19.0-1                    @xcp-ng-linstor
                linstor-common.noarch           1.24.2-1.el7                @xcp-ng-linstor
                linstor-controller.noarch       1.24.2-1.el7                @xcp-ng-linstor
                linstor-satellite.noarch        1.24.2-1.el7                @xcp-ng-linstor
                python-linstor.noarch           1.19.0-1                    @xcp-ng-linstor
                sm.x86_64                       2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                            @xcp-ng-linstor
                sm-rawhba.x86_64                2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                            @xcp-ng-linstor
                tzdata.noarch                   2023c-1.el7                 @xcp-ng-linstor
                tzdata-java.noarch              2023c-1.el7                 @xcp-ng-linstor
                xcp-ng-linstor.noarch           1.1-3.xcpng8.2              @xcp-ng-updates
                xcp-ng-release-linstor.noarch   1.3-1.xcpng8.2              @xcp-ng-updates
                

                [/edit3]

                • limezest (last edited by limezest)

                  So, controller failover works. I used the instructions here to test drbd-reactor failover: https://linbit.com/blog/drbd-reactor-promoter/
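
                  To see which node the promoter currently runs the controller target on (a sketch; drbd-reactorctl ships with drbd-reactor, and the available subcommands vary by release):

                      # Show the promoter plugin status on this host
                      drbd-reactorctl status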

                  I'm seeing an error in linstor error-reports list that has to do with how LINSTOR queries free space on thin-provisioned LVM storage. It traces back to this ticket: https://github.com/LINBIT/linstor-server/issues/80

                  ERROR REPORT 65558791-33400-000000
                  
                  ============================================================
                  
                  Application:                        LINBITยฎ LINSTOR
                  Module:                             Satellite
                  Version:                            1.24.2
                  Build ID:                           adb19ca96a07039401023410c1ea116f09929295
                  Build time:                         2023-08-30T05:15:08+00:00
                  Error time:                         2023-11-15 22:08:11
                  Node:                               node0
                  
                  ============================================================
                  
                  Reported error:
                  ===============
                  
                  Description:
                      Expected 3 columns, but got 2
                  Cause:
                      Failed to parse line:   thin_device;23044370202624;
                  Additional information:
                      External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                  
                  Category:                           LinStorException
                  Class name:                         StorageException
                  Class canonical name:               com.linbit.linstor.storage.StorageException
                  Generated at:                       Method 'getThinFreeSize', Source file 'LvmUtils.java', Line #399
                  
                  Error message:                      Unable to parse free thin sizes
                  
                  ErrorContext:   Description: Expected 3 columns, but got 2
                    Cause:       Failed to parse line:   thin_device;23044370202624;
                    Details:     External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                  
                  
                  Call backtrace:
                  
                      Method                                   Native Class:Line number
                      getThinFreeSize                          N      com.linbit.linstor.layer.storage.lvm.utils.LvmUtils:399
                      getSpaceInfo                             N      com.linbit.linstor.layer.storage.lvm.LvmThinProvider:406
                      getStoragePoolSpaceInfo                  N      com.linbit.linstor.layer.storage.StorageLayer:441
                      getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1116
                      getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:1816
                      getStoragePoolSpaceInfo                  N      com.linbit.linstor.core.apicallhandler.StltApiCallHandlerUtils:325
                      applyChanges                             N      com.linbit.linstor.core.apicallhandler.StltStorPoolApiCallHandler:274
                      applyFullSync                            N      com.linbit.linstor.core.apicallhandler.StltApiCallHandler:330
                      execute                                  N      com.linbit.linstor.api.protobuf.FullSync:113
                      executeNonReactive                       N      com.linbit.linstor.proto.CommonMessageProcessor:534
                      lambda$execute$14                        N      com.linbit.linstor.proto.CommonMessageProcessor:509
                      doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:149
                      lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:76
                      call                                     N      reactor.core.publisher.MonoCallable:72
                      trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:127
                      subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
                      subscribe                                N      reactor.core.publisher.Flux:8759
                      onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:195
                      request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2545
                      onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
                      subscribe                                N      reactor.core.publisher.MonoJust:55
                      subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
                      subscribe                                N      reactor.core.publisher.Flux:8773
                      onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                      slowPath                                 N      reactor.core.publisher.FluxArray$ArraySubscription:127
                      request                                  N      reactor.core.publisher.FluxArray$ArraySubscription:100
                      onSubscribe                              N      reactor.core.publisher.FluxFlatMap$FlatMapMain:371
                      subscribe                                N      reactor.core.publisher.FluxMerge:70
                      subscribe                                N      reactor.core.publisher.Flux:8773
                      onComplete                               N      reactor.core.publisher.FluxConcatArray$ConcatArraySubscriber:258
                      subscribe                                N      reactor.core.publisher.FluxConcatArray:78
                      subscribe                                N      reactor.core.publisher.InternalFluxOperator:62
                      subscribe                                N      reactor.core.publisher.FluxDefer:54
                      subscribe                                N      reactor.core.publisher.Flux:8773
                      onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                      drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:453
                      drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:724
                      onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:256
                      drainFused                               N      reactor.core.publisher.SinkManyUnicast:319
                      drain                                    N      reactor.core.publisher.SinkManyUnicast:362
                      tryEmitNext                              N      reactor.core.publisher.SinkManyUnicast:237
                      tryEmitNext                              N      reactor.core.publisher.SinkManySerialized:100
                      processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:392
                      doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:227
                      lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:164
                      onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:185
                      runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:440
                      run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:527
                      call                                     N      reactor.core.scheduler.WorkerTask:84
                      call                                     N      reactor.core.scheduler.WorkerTask:37
                      run                                      N      java.util.concurrent.FutureTask:264
                      run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
                      runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1128
                      run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:628
                      run                                      N      java.lang.Thread:829
                  
                  
                  END OF ERROR REPORT.
                  
                  

                  I think the vgs query is improperly formatted for this version of device-mapper-persistent-data

                  [12:31 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator ;
                  vgs: option '--separator' requires an argument
                    Error during parsing of command line.
                  

                  but it works if formatted like this:

                  [12:32 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator=";"
                    MGT;4194304B;
                    VHD-d959f7a9-2bd1-4ac5-83af-1724336a73d0;532676608B;
                    thin_device;23044370202624B;6.96
                    xcp-persistent-database_00000;1077936128B;13.85
                    xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a_00000;86083895296B;25.20
                    xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e_00000;215197155328B;2.35
                    xcp-volume-43467341-30c8-4fec-b807-81334d0dd309_00000;215197155328B;2.52
                    xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3_00000;2194921226240B;69.30
                    xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92_00000;86088089600B;0.60
                    xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028_00000;86088089600B;0.06
                    xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206_00000;86092283904B;24.81
                    xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d_00000;86092283904B;0.04
                    xcp-volume-81809c66-5763-4558-919a-591b864d3f22_00000;215197155328B;4.66
                    xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7_00000;86096478208B;0.04
                    xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460_00000;215205543936B;6.12
                    xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213_00000;215209738240B;0.02
                    xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627_00000;215209738240B;2.19
                  
                  

                  In fact, vgs --separator accepts pretty much any character; the bare semicolon fails only because an unquoted ; is consumed by the shell as a command separator, which is why the quoted form above works. Maybe it's a problem with this version of LVM2?

                  [12:37 node0 ~]# yum info device-mapper-persistent-data.x86_64
                  Loaded plugins: fastestmirror
                  Loading mirror speeds from cached hostfile
                  Excluding mirror: updates.xcp-ng.org
                   * xcp-ng-base: mirrors.xcp-ng.org
                  Excluding mirror: updates.xcp-ng.org
                   * xcp-ng-updates: mirrors.xcp-ng.org
                  Installed Packages
                  Name        : device-mapper-persistent-data
                  Arch        : x86_64
                  Version     : 0.7.3
                  Release     : 3.el7
                  Size        : 1.2 M
                  Repo        : installed
                  From repo   : install
                  
                  
                  • jmm

                    Hi team,
                    I'm currently testing XOSTOR on a three-node xcp-8.2.1 pool.
                    Before adding any new VM, I replaced a node (xcp-hc3).
                    Since everything seemed to be OK, I've added two VMs.
                    But I think that a diskless resource is missing for "xcp-persistent-database".
                    Is there a way to resolve this situation?

                    [10:23 xcp-hc1 ~]# linstor resource list
                    โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
                    โ”Š ResourceName โ”Š Node โ”Š Port โ”Š Usage โ”Š Conns โ”Š State โ”Š CreatedOn โ”Š
                    โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก
                    โ”Š xcp-persistent-database โ”Š xcp-hc1 โ”Š 7000 โ”Š InUse โ”Š Ok โ”Š UpToDate โ”Š 2023-12-18 15:47:37 โ”Š
                    โ”Š xcp-persistent-database โ”Š xcp-hc2 โ”Š 7000 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-18 15:47:37 โ”Š
                    โ”Š xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 โ”Š xcp-hc1 โ”Š 7004 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-18 17:41:41 โ”Š
                    โ”Š xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 โ”Š xcp-hc2 โ”Š 7004 โ”Š InUse โ”Š Ok โ”Š Diskless โ”Š 2023-12-18 17:41:41 โ”Š
                    โ”Š xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 โ”Š xcp-hc3 โ”Š 7004 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-18 17:41:42 โ”Š
                    โ”Š xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b โ”Š xcp-hc1 โ”Š 7002 โ”Š InUse โ”Š Ok โ”Š Diskless โ”Š 2023-12-19 10:17:15 โ”Š
                    โ”Š xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b โ”Š xcp-hc2 โ”Š 7002 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-19 09:49:35 โ”Š
                    โ”Š xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b โ”Š xcp-hc3 โ”Š 7002 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-19 09:49:35 โ”Š
                    โ”Š xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 โ”Š xcp-hc1 โ”Š 7005 โ”Š Unused โ”Š Ok โ”Š Diskless โ”Š 2023-12-19 10:17:16 โ”Š
                    โ”Š xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 โ”Š xcp-hc2 โ”Š 7005 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-19 09:49:45 โ”Š
                    โ”Š xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 โ”Š xcp-hc3 โ”Š 7005 โ”Š Unused โ”Š Ok โ”Š UpToDate โ”Š 2023-12-19 09:49:45 โ”Š

                    • jmm @jmm

                      @jmm Self-answer:
                      linstor resource create xcp-hc3 xcp-persistent-database --drbd-diskless

                      🙂
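
                      To confirm (a sketch using the client's node filter):

                          # xcp-persistent-database should now appear on xcp-hc3 as Diskless
                          linstor resource list -n xcp-hc3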

                      • gb.123

                        I am getting:

                          WARNING: Pool zeroing and 1.00 MiB large chunk size slows down thin provisioning.
                          WARNING: Consider disabling zeroing (-Zn) or using smaller chunk size (<512.00 KiB).
                        

                        How do I change the chunk size and/or zeroing?

                        Can this be done 'on the fly' (without losing data)?
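
                        For reference: LVM can toggle zeroing on an existing thin pool without touching data (a sketch, assuming the pool is linstor_group/thin_device as in earlier posts). The chunk size, by contrast, is fixed when the thin pool is created (lvcreate --chunksize) and can't be changed in place:

                            # Disable zeroing of newly provisioned chunks on the thin pool
                            lvchange --zero n linstor_group/thin_device

                            # Check the current setting
                            lvs -o lv_name,zero linstor_group/thin_device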

                          • BHellman (3rd party vendor)

                            This thread has grown quite large and has a lot of information in it. Is there an official documentation chapter on XOSTOR available anywhere?

                            • olivierlambert (Vates 🪐 Co-Founder & CEO)

                              For now it's within this thread 🙂 Feel free to tell us what's missing in the first post!

                              • ronan-a (Vates 🪐 XCP-ng Team) @BHellman

                                @BHellman The first post has a FAQ that I update each time I meet users with a common/recurring problem. 😉

                                • BHellman (3rd party vendor)

                                  Thanks for the replies. My issues are currently with the GUI, so I don't know if that applies here. This is all from the GUI, so please let me know if that's outside the scope of this post and I can post elsewhere.

                                  One issue: upon creating a new XOSTOR SR, the packages are installed, but the SR creation fails because one of the packages, sm-rawhba, needs updating. You have to apply patches through the GUI and reboot the node, or execute xe-toolstack-restart on each node. You can then go back and create a new SR, but only after wiping the disks that you originally tried to create the SR on: vgremove and pvremove.
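
                                  (The wipe being the standard LVM cleanup, sketched here; the volume group name matches the earlier vgs output, and the device name is an example:)

                                      vgremove linstor_group
                                      pvremove /dev/sdX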

                                  I'm planning on doing some more testing; please let me know if GUI issues are appropriate to post here.

                                  • ronan-a (Vates 🪐 XCP-ng Team) @BHellman

                                    @BHellman It's fine to post simple issues in this thread. For complex problems a ticket is probably better. 🙂

                                    One issue: upon creating a new XOSTOR SR, the packages are installed, but the SR creation fails because one of the packages, sm-rawhba, needs updating.

                                    Not quite: sm-rawhba is added to the list because the UI installs a modified version of sm with LINSTOR support.
                                    The real issue is that xe-toolstack-restart is not called during the initial setup; a method is missing in our updater plugin to check whether a package is present. I will add this method for the XOA team. 😉

                                    • BHellman (3rd party vendor)

                                      I'm not sure what the expected behavior is, but...

                                      I have xcp1, xcp2, xcp3 as hosts in my XOSTOR pool, using an XOSTOR repository. I had a VM running on xcp2, unplugged the power from it, and left it unplugged for about 5 minutes. The VM remained "running" according to XOA; however, it wasn't.

                                      What is the expected behavior when this happens, and how do you go about recovering from a temporarily failed/powered-off node?

                                      My expectation was that my VM would move to xcp1 (where there is a replica) and start, then outdate xcp2. I have "auto start" enabled under Advanced on the VM.

                                      • limezest @BHellman

                                        @BHellman
                                        "Auto start" means that the VM will be started automatically when you power up the cluster or host node.

                                        I think you're describing high availability, which needs to be enabled at the pool level. Then you need to define an HA policy for the VM.

                                        • ronan-a (Vates 🪐 XCP-ng Team) @limezest

                                          @limezest Exactly. The auto start feature is only checked during host boot.

                                          @BHellman To automatically restart a VM in case of failure:

                                          xe vm-param-set uuid=<VM_UUID> ha-restart-priority=restart order=1 
                                          xe pool-ha-enable heartbeat-sr-uuids=<SR_UUID> 
                                          
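                                          And to roll HA back after testing, the matching xe command:

                                              xe pool-ha-disable
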
                                          • BHellman (3rd party vendor) @ronan-a

                                            @ronan-a @limezest

                                            Thank you for the replies 🙂

                                            Sorry for all the newb questions - I'm diving into this when time permits. Appreciate the help and understanding.
