XCP-ng

    XOSTOR hyperconvergence preview

    • ronan-a (Vates 🪐 XCP-ng Team) @gb.123

      @gb-123 said in XOSTOR hyperconvergence preview:

      Are there any other packages in xcp-ng requiring python2?

      Probably yes, for a few packages.

      Would it be advisable to change the default python to python3 (by adding an alias in bashrc)?

      I don't recommend doing that.

      Or do you think I should create a script which in turn runs the python3 command only for linstor?

      It's probably the best solution for the moment. Our ideal goal would obviously be to have only Python 3:

      • But currently XCP-ng 8.3 is still in development.
      • And there are still things to check before thinking about globally changing the default Python interpreter.
      • gb.123 @ronan-a

        @ronan-a

        Any idea why I am getting:

        Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
        

        I installed LINSTOR with replication set to 2 and the VM is running fine. I am executing this command from the dom0 of the pool master.

        • Swen @gb.123

          @gb-123 The pool master does not need to be the LINSTOR controller. Try this command on the other hosts; it will only work on the node which is currently the LINSTOR controller.
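          A quick way to see which host currently holds that role (a minimal sketch; host1/host2/host3 are placeholders for your pool members, and it relies on the linstor-controller.service unit shown later in this thread):

            # prints "active" only on the node currently running the controller
            for h in host1 host2 host3; do
                echo -n "$h: "
                ssh "$h" systemctl is-active linstor-controller
            done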

          • gb.123 @Swen

            @Swen

            Thanks so much. It seems that the controller is running on Node 2.

            Btw, is there any way to control which node the controller is installed on (without losing data)? I remember installing on Node 1 first and then on Node 2, but it seems that Node 2 is actually running the controller.

            I know the above is a noob question, but I just wanted to be sure whether the LINSTOR controller actually moves to another node when one is not available.

            • Maelstrom96 @ronan-a

              @ronan-a said in XOSTOR hyperconvergence preview:

              @Maelstrom96 We must update our documentation for that. This will probably require executing commands manually during an upgrade.

              Any news on that? We're still pretty much blocked until that's figured out.

              Also, any news on when it will be officially released?

              • limezest

                I have a 3-node cluster in my lab running 8.2.1 with thin-provisioned LVM on SATA SSDs.

                I changed the 10G management interfaces from being untagged to being VLAN tagged, and I did an emergency network reset following the changeover.
                Each node has only one 10G interface, which is used for all management, VM and storage traffic.

                Following the network reset, the cluster did not automatically recover, so I restarted my three nodes.

                Now the cluster is up and the nodes claim to be connected to the XOSTOR SR, but I cannot take snapshots or clone VMs, and some VMs fail to start with the following error:

                SR_BACKEND_FAILURE_1200(, Empty dev path for 19f26a13-a09e-4c38-8219-b0b6b2d4dc26, but definition "seems" to exist, )
                

                Any troubleshooting guidance is appreciated. Thanks in advance.

                [edit1]
                I am only able to run linstor commands such as 'linstor node list' on the node that is currently the linstor controller.

                If I try linstor node list (or any linstor command) on the satellite nodes, I get the error:

                Error: Unable to connect to linstor://localhost:3370: [Errno 99] Cannot assign requested address
                

                linstor node interface list host1 gives me:

                host2       |   NetInterface     | IP            | Port   | EncryptionType
                + StltCon   |   default          | 10.10.10.11   | 3366   | PLAIN
                

                I am able to use 'linstor-kv-tool' to find the linstor volume that maps to each of my virtual disk images, but only on the controller.

                linstor-controller.service is running on host 2.
                linstor-satellite.service is running on hosts 0 and 1, but I don't see any process listening on 3366 from netstat -tulpn.
                [/edit1]

                • ronan-a (Vates 🪐 XCP-ng Team) @limezest

                  @limezest Are you sure that you only have one LINSTOR controller running?
                  What's the output of linstor resource list? Also check whether /var/lib/linstor is a mountpoint on each host.

                  Note: the Cannot assign requested address error is not a surprise.
                  You must specify linstor --controllers=<ips> <cmd> to execute a command from any host.
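                  For example (a sketch; the controller IPs are placeholders for your hosts' addresses, and the client will try each one in turn):

                    # works from any host, not just the current controller node
                    linstor --controllers=10.10.10.10,10.10.10.11,10.10.10.12 resource list

                    # quick check of the database mountpoint mentioned above
                    mountpoint /var/lib/linstor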

                  • ronan-a (Vates 🪐 XCP-ng Team) @Maelstrom96

                    @Maelstrom96 The XCP-ng 8.3 LINSTOR version is not updated often, and we are totally focused on the stable 8.2 version.
                    As a reminder, XCP-ng 8.3 is still in beta, so we can't write documentation now for updating LINSTOR between these versions; we still have important issues to fix and improvements to add that could impact and/or invalidate a migration process.

                    • limezest @ronan-a

                      @ronan-a Thanks for the reply.

                      Node2 is currently the controller. All three nodes are currently running the satellite and monitor services.

                      On nodes 0 and 1 (the satellite nodes) I see:

                      /var/lib/linstor is not a mountpoint
                      

                      On node 2, the current controller node, I see:

                      /var/lib/linstor is a mountpoint
                      

                      Currently, some VDIs are accessible, others are not.

                      For example, when I try to start my XOA VM I get the following error. I get the same error no matter which node I try to start the VM on:

                      XOSTOR: POST_ATTACH_SCAN_FAILED","2","Failed to scan SR 7c0374c1-17d4-a52b-7c2a-a5ca74e1db66 after attaching, error The SR is not available [opterr=Database is not mounted]
                      

                      There is no entry for this UUID beginning with 7c03 in the output of linstor-kv-tool.

                      
                      ~~node0~~
                      [11:44 node0 ~]# systemctl status linstor*
                      ● linstor-monitor.service - LINSTOR Monitor
                         Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                         Active: active (running) since Wed 2023-11-15 22:07:50 EST; 13h ago
                       Main PID: 1867 (linstor-monitor)
                         CGroup: /system.slice/linstor-monitor.service
                                 └─1867 /opt/xensource/libexec/linstor-monitord
                      
                      ● linstor-satellite.service - LINSTOR Satellite Service
                         Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                        Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                 └─override.conf
                         Active: active (running) since Wed 2023-11-15 22:07:59 EST; 13h ago
                       Main PID: 4786 (java)
                         CGroup: /system.slice/linstor-satellite.service
                                 ├─4786 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                 ├─5342 drbdsetup events2 all
                                 └─6331 /usr/sbin/dmeventd
                      
                      ~~node1~~
                      [11:44 node1 ~]# systemctl status linstor*
                      ● linstor-satellite.service - LINSTOR Satellite Service
                         Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                        Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                 └─override.conf
                         Active: active (running) since Wed 2023-11-15 15:59:10 EST; 19h ago
                       Main PID: 5035 (java)
                         CGroup: /system.slice/linstor-satellite.service
                                 ├─5035 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                 └─5585 drbdsetup events2 all
                      
                      ● linstor-monitor.service - LINSTOR Monitor
                         Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                         Active: active (running) since Wed 2023-11-15 15:57:35 EST; 19h ago
                       Main PID: 1825 (linstor-monitor)
                         CGroup: /system.slice/linstor-monitor.service
                                 └─1825 /opt/xensource/libexec/linstor-monitord
                      
                      
                      ~~node2~~
                      [11:38 node2 ~]# systemctl status linstor*
                      ● linstor-satellite.service - LINSTOR Satellite Service
                         Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
                        Drop-In: /etc/systemd/system/linstor-satellite.service.d
                                 └─override.conf
                         Active: active (running) since Wed 2023-11-15 15:49:43 EST; 19h ago
                       Main PID: 5212 (java)
                         CGroup: /system.slice/linstor-satellite.service
                                 ├─5212 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory...
                                 └─5439 drbdsetup events2 all
                      
                      ● linstor-monitor.service - LINSTOR Monitor
                         Loaded: loaded (/usr/lib/systemd/system/linstor-monitor.service; enabled; vendor preset: disabled)
                         Active: active (running) since Wed 2023-11-15 15:48:11 EST; 19h ago
                       Main PID: 1830 (linstor-monitor)
                         CGroup: /system.slice/linstor-monitor.service
                                 └─1830 /opt/xensource/libexec/linstor-monitord
                      
                      ● linstor-controller.service - drbd-reactor controlled linstor-controller
                         Loaded: loaded (/usr/lib/systemd/system/linstor-controller.service; disabled; vendor preset: disabled)
                        Drop-In: /run/systemd/system/linstor-controller.service.d
                                 └─reactor.conf
                         Active: active (running) since Wed 2023-11-15 22:04:11 EST; 13h ago
                       Main PID: 1512 (java)
                         CGroup: /system.slice/linstor-controller.service
                                 └─1512 /usr/lib/jvm/jre-11/bin/java -Xms32M -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Controller --logs=/var/log/linstor-controller --config-directo...
                      
                      
                      [11:37 node2 ~]# linstor resource list
                      ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
                      ┊ ResourceName                                    ┊ Node  ┊ Port ┊ Usage  ┊ Conns                     ┊      State ┊ CreatedOn           ┊
                      ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
                      ┊ xcp-persistent-database                         ┊ node0 ┊ 7000 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-08-30 13:53:54 ┊
                      ┊ xcp-persistent-database                         ┊ node1 ┊ 7000 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-08-30 13:53:49 ┊
                      ┊ xcp-persistent-database                         ┊ node2 ┊ 7000 ┊ InUse  ┊ Ok                        ┊   UpToDate ┊ 2023-08-30 13:53:54 ┊
                      ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node0 ┊ 7004 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:53:24 ┊
                      ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node1 ┊ 7004 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:27 ┊
                      ┊ xcp-volume-00345120-0b6c-4ebd-abf9-96722640e5cd ┊ node2 ┊ 7004 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:27 ┊
                      ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node0 ┊ 7012 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 16:13:15 ┊
                      ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node1 ┊ 7012 ┊ InUse  ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:20 ┊
                      ┊ xcp-volume-0877abd7-5665-4c00-8d16-1f13603c7328 ┊ node2 ┊ 7012 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:20 ┊
                      ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node0 ┊ 7005 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:53:31 ┊
                      ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node1 ┊ 7005 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:35 ┊
                      ┊ xcp-volume-14f1acb1-1b8f-4bc6-8a42-7e5047807d07 ┊ node2 ┊ 7005 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:35 ┊
                      ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node0 ┊ 7022 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-11-14 13:00:37 ┊
                      ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node1 ┊ 7022 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-11-14 13:00:37 ┊
                      ┊ xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213 ┊ node2 ┊ 7022 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-11-14 13:00:33 ┊
                      ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node0 ┊ 7009 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 15:28:56 ┊
                      ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node1 ┊ 7009 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:00 ┊
                      ┊ xcp-volume-295d43ed-f520-4752-8e65-6118f608a097 ┊ node2 ┊ 7009 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:00 ┊
                      ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node0 ┊ 7017 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:28:26 ┊
                      ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node1 ┊ 7017 ┊        ┊                           ┊    Unknown ┊ 2023-09-07 22:28:23 ┊
                      ┊ xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206 ┊ node2 ┊ 7017 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:28:26 ┊
                      ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node0 ┊ 7013 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 16:13:24 ┊
                      ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node1 ┊ 7013 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:28 ┊
                      ┊ xcp-volume-3ccd3499-d635-4ddb-9878-c86f5852a33b ┊ node2 ┊ 7013 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:13:28 ┊
                      ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node0 ┊ 7003 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:28:20 ┊
                      ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node1 ┊ 7003 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 11:28:16 ┊
                      ┊ xcp-volume-43467341-30c8-4fec-b807-81334d0dd309 ┊ node2 ┊ 7003 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:28:20 ┊
                      ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node0 ┊ 7016 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:25:53 ┊
                      ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node1 ┊ 7016 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-09-07 22:25:50 ┊
                      ┊ xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028 ┊ node2 ┊ 7016 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:25:53 ┊
                      ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node0 ┊ 7014 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-06 14:55:13 ┊
                      ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node1 ┊ 7014 ┊        ┊                           ┊    Unknown ┊ 2023-09-06 14:55:08 ┊
                      ┊ xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3 ┊ node2 ┊ 7014 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-06 14:55:13 ┊
                      ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node0 ┊ 7023 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-31 11:53:28 ┊
                      ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node1 ┊ 7023 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-31 11:53:28 ┊
                      ┊ xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460 ┊ node2 ┊ 7023 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-10-31 11:53:24 ┊
                      ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node0 ┊ 7026 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-11-14 13:00:44 ┊
                      ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node1 ┊ 7026 ┊ InUse  ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-11-14 13:00:44 ┊
                      ┊ xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627 ┊ node2 ┊ 7026 ┊        ┊                           ┊    Unknown ┊ 2023-11-14 13:00:40 ┊
                      ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node0 ┊ 7020 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-30 09:56:20 ┊
                      ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node1 ┊ 7020 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-30 09:56:20 ┊
                      ┊ xcp-volume-702e10ee-6621-4d12-8335-ca2d43553597 ┊ node2 ┊ 7020 ┊        ┊                           ┊    Unknown ┊ 2023-10-30 09:56:16 ┊
                      ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node0 ┊ 7007 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:54:00 ┊
                      ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node1 ┊ 7007 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:54:04 ┊
                      ┊ xcp-volume-7294b09b-6267-4696-a547-57766c08d8fe ┊ node2 ┊ 7007 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:54:04 ┊
                      ┊ xcp-volume-776758d5-503c-4dac-9d83-169be6470075 ┊ node0 ┊ 7008 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:55:33 ┊
                      ┊ xcp-volume-776758d5-503c-4dac-9d83-169be6470075 ┊ node1 ┊ 7008 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:55:38 ┊
                      ┊ xcp-volume-776758d5-503c-4dac-9d83-169be6470075 ┊ node2 ┊ 7008 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:55:38 ┊
                      ┊ xcp-volume-81809c66-5763-4558-919a-591b864d3f22 ┊ node0 ┊ 7019 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-05 10:41:37 ┊
                      ┊ xcp-volume-81809c66-5763-4558-919a-591b864d3f22 ┊ node1 ┊ 7019 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-10-05 10:41:37 ┊
                      ┊ xcp-volume-81809c66-5763-4558-919a-591b864d3f22 ┊ node2 ┊ 7019 ┊        ┊                           ┊    Unknown ┊ 2023-10-05 10:41:33 ┊
                      ┊ xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d ┊ node0 ┊ 7018 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:28:35 ┊
                      ┊ xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d ┊ node1 ┊ 7018 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-09-07 22:28:32 ┊
                      ┊ xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d ┊ node2 ┊ 7018 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-07 22:28:35 ┊
                      ┊ xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e ┊ node0 ┊ 7002 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:26:57 ┊
                      ┊ xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e ┊ node1 ┊ 7002 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-09-05 11:26:53 ┊
                      ┊ xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e ┊ node2 ┊ 7002 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 11:26:57 ┊
                      ┊ xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 ┊ node0 ┊ 7015 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:25:46 ┊
                      ┊ xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 ┊ node1 ┊ 7015 ┊        ┊                           ┊    Unknown ┊ 2023-09-07 22:25:42 ┊
                      ┊ xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92 ┊ node2 ┊ 7015 ┊ Unused ┊ Connecting(node1)         ┊   UpToDate ┊ 2023-09-07 22:25:46 ┊
                      ┊ xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 ┊ node0 ┊ 7021 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-30 09:56:28 ┊
                      ┊ xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 ┊ node1 ┊ 7021 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-10-30 09:56:28 ┊
                      ┊ xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7 ┊ node2 ┊ 7021 ┊ Unused ┊ Ok                        ┊ TieBreaker ┊ 2023-10-30 09:56:24 ┊
                      ┊ xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe ┊ node0 ┊ 7010 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 15:29:02 ┊
                      ┊ xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe ┊ node1 ┊ 7010 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:06 ┊
                      ┊ xcp-volume-b24e6e82-d1a4-4935-99ae-dc25df5e8cbe ┊ node2 ┊ 7010 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 15:29:06 ┊
                      ┊ xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 ┊ node0 ┊ 7006 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 14:53:52 ┊
                      ┊ xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 ┊ node1 ┊ 7006 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:56 ┊
                      ┊ xcp-volume-d6163cb3-95b1-4126-8767-0b64ad35abc9 ┊ node2 ┊ 7006 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 14:53:56 ┊
                      ┊ xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 ┊ node0 ┊ 7011 ┊ Unused ┊ Ok                        ┊   Diskless ┊ 2023-09-05 16:12:48 ┊
                      ┊ xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 ┊ node1 ┊ 7011 ┊ Unused ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:12:52 ┊
                      ┊ xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26 ┊ node2 ┊ 7011 ┊ InUse  ┊ Ok                        ┊   UpToDate ┊ 2023-09-05 16:12:52 ┊
                      ┊ xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a ┊ node0 ┊ 7001 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-08-30 16:16:32 ┊
                      ┊ xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a ┊ node1 ┊ 7001 ┊ Unused ┊ Connecting(node2)         ┊   UpToDate ┊ 2023-08-30 16:16:35 ┊
                      ┊ xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a ┊ node2 ┊ 7001 ┊        ┊                           ┊    Unknown ┊ 2023-08-30 16:16:30 ┊
                      ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
                      

                      [edit1]
                      More interesting results:

                      One of my VMs has multiple VDIs. The OS disk loads fine. The second disk can only be mounted read-only via mount -o ro,noload /dev/xvdb /mnt/example

                      The second disk, xcp-volume-5283a6e0..., has status "Unknown" in linstor resource list

                      ~~xvda~~
                      "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"name_label\": \"distro_OS\", \"name_description\": \"Created by XO\", \"type\": \"user\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}", 
                      "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/not-exists": "0", 
                      "da5187e4-fdab-4d1b-a5ac-ca1ca383cc70/volume-name": "xcp-volume-ec956c38-cb1b-4d5d-94d4-fa1c3b754c26", 
                      
                      ~~xvdb~~ 
                      "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/metadata": "{\"read_only\": false, \"snapshot_time\": \"\", \"vdi_type\": \"vhd\", \"snapshot_of\": \"\", \"type\": \"user\", \"name_description\": \"\", \"name_label\": \"distro_repos\", \"metadata_of_pool\": \"\", \"is_a_snapshot\": false}"
                      "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/not-exists": "0", 
                      "19f26a13-a09e-4c38-8219-b0b6b2d4dc26/volume-name": "xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3", 
                      

                      [/edit1]

                      [edit2]

                      I can make VMs that have no disks.
                      I cannot create new VMs with VDIs on our XOSTOR SR.
                      I cannot create XOSTOR-hosted disks on existing VMs.
                      I cannot make snapshots.
                      I cannot revert to earlier snapshots.
                      I get a not particularly helpful error from XCP-ng Center: The attempt to create a VDI failed. Any recommendations about where I should look for related logs?

                      [/edit2]

                      [edit3]
                      In case it's relevant, here are the currently installed versions:

                      # yum list installed | grep -i linstor
                      drbd.x86_64                     9.25.0-1.el7                @xcp-ng-linstor
                      drbd-bash-completion.x86_64     9.25.0-1.el7                @xcp-ng-linstor
                      drbd-pacemaker.x86_64           9.25.0-1.el7                @xcp-ng-linstor
                      drbd-reactor.x86_64             1.2.0-1                     @xcp-ng-linstor
                      drbd-udev.x86_64                9.25.0-1.el7                @xcp-ng-linstor
                      drbd-utils.x86_64               9.25.0-1.el7                @xcp-ng-linstor
                      drbd-xen.x86_64                 9.25.0-1.el7                @xcp-ng-linstor
                      java-11-openjdk-headless.x86_64 1:11.0.20.0.8-1.el7_9       @xcp-ng-linstor
                                                                                  @xcp-ng-linstor
                      linstor-client.noarch           1.19.0-1                    @xcp-ng-linstor
                      linstor-common.noarch           1.24.2-1.el7                @xcp-ng-linstor
                      linstor-controller.noarch       1.24.2-1.el7                @xcp-ng-linstor
                      linstor-satellite.noarch        1.24.2-1.el7                @xcp-ng-linstor
                      python-linstor.noarch           1.19.0-1                    @xcp-ng-linstor
                      sm.x86_64                       2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                                  @xcp-ng-linstor
                      sm-rawhba.x86_64                2.30.8-7.1.0.linstor.2.xcpng8.2
                                                                                  @xcp-ng-linstor
                      tzdata.noarch                   2023c-1.el7                 @xcp-ng-linstor
                      tzdata-java.noarch              2023c-1.el7                 @xcp-ng-linstor
                      xcp-ng-linstor.noarch           1.1-3.xcpng8.2              @xcp-ng-updates
                      xcp-ng-release-linstor.noarch   1.3-1.xcpng8.2              @xcp-ng-updates
                      

                      [/edit3]

                      • limezest

                        So, controller failover works. I used the instructions here to test drbd-reactor failover: https://linbit.com/blog/drbd-reactor-promoter/

                        I'm seeing an error in linstor error-reports list that has to do with how LINSTOR queries free space on thin-provisioned LVM storage. It traces back to this ticket: https://github.com/LINBIT/linstor-server/issues/80

                        ERROR REPORT 65558791-33400-000000
                        
                        ============================================================
                        
                        Application:                        LINBIT® LINSTOR
                        Module:                             Satellite
                        Version:                            1.24.2
                        Build ID:                           adb19ca96a07039401023410c1ea116f09929295
                        Build time:                         2023-08-30T05:15:08+00:00
                        Error time:                         2023-11-15 22:08:11
                        Node:                               node0
                        
                        ============================================================
                        
                        Reported error:
                        ===============
                        
                        Description:
                            Expected 3 columns, but got 2
                        Cause:
                            Failed to parse line:   thin_device;23044370202624;
                        Additional information:
                            External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                        
                        Category:                           LinStorException
                        Class name:                         StorageException
                        Class canonical name:               com.linbit.linstor.storage.StorageException
                        Generated at:                       Method 'getThinFreeSize', Source file 'LvmUtils.java', Line #399
                        
                        Error message:                      Unable to parse free thin sizes
                        
                        ErrorContext:   Description: Expected 3 columns, but got 2
                          Cause:       Failed to parse line:   thin_device;23044370202624;
                          Details:     External command: vgs --config devices { filter=['a|/dev/sdn|','a|/dev/sdk|','a|/dev/sdj|','a|/dev/sdm|','a|/dev/sdl|','a|/dev/sdg|','a|/dev/sdf|','a|/dev/sdi|','a|/dev/sdh|','a|/dev/sdc|','a|/dev/sde|','a|/dev/sdd|','r|.*|'] } -o lv_name,lv_size,data_percent --units b --separator ; --noheadings --nosuffix linstor_group/thin_device
                        
                        
                        Call backtrace:
                        
                            Method                                   Native Class:Line number
                            getThinFreeSize                          N      com.linbit.linstor.layer.storage.lvm.utils.LvmUtils:399
                            getSpaceInfo                             N      com.linbit.linstor.layer.storage.lvm.LvmThinProvider:406
                            getStoragePoolSpaceInfo                  N      com.linbit.linstor.layer.storage.StorageLayer:441
                            getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1116
                            getSpaceInfo                             N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:1816
                            getStoragePoolSpaceInfo                  N      com.linbit.linstor.core.apicallhandler.StltApiCallHandlerUtils:325
                            applyChanges                             N      com.linbit.linstor.core.apicallhandler.StltStorPoolApiCallHandler:274
                            applyFullSync                            N      com.linbit.linstor.core.apicallhandler.StltApiCallHandler:330
                            execute                                  N      com.linbit.linstor.api.protobuf.FullSync:113
                            executeNonReactive                       N      com.linbit.linstor.proto.CommonMessageProcessor:534
                            lambda$execute$14                        N      com.linbit.linstor.proto.CommonMessageProcessor:509
                            doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:149
                            lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:76
                            call                                     N      reactor.core.publisher.MonoCallable:72
                            trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:127
                            subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
                            subscribe                                N      reactor.core.publisher.Flux:8759
                            onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:195
                            request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2545
                            onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
                            subscribe                                N      reactor.core.publisher.MonoJust:55
                            subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
                            subscribe                                N      reactor.core.publisher.Flux:8773
                            onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                            slowPath                                 N      reactor.core.publisher.FluxArray$ArraySubscription:127
                            request                                  N      reactor.core.publisher.FluxArray$ArraySubscription:100
                            onSubscribe                              N      reactor.core.publisher.FluxFlatMap$FlatMapMain:371
                            subscribe                                N      reactor.core.publisher.FluxMerge:70
                            subscribe                                N      reactor.core.publisher.Flux:8773
                            onComplete                               N      reactor.core.publisher.FluxConcatArray$ConcatArraySubscriber:258
                            subscribe                                N      reactor.core.publisher.FluxConcatArray:78
                            subscribe                                N      reactor.core.publisher.InternalFluxOperator:62
                            subscribe                                N      reactor.core.publisher.FluxDefer:54
                            subscribe                                N      reactor.core.publisher.Flux:8773
                            onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:427
                            drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:453
                            drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:724
                            onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:256
                            drainFused                               N      reactor.core.publisher.SinkManyUnicast:319
                            drain                                    N      reactor.core.publisher.SinkManyUnicast:362
                            tryEmitNext                              N      reactor.core.publisher.SinkManyUnicast:237
                            tryEmitNext                              N      reactor.core.publisher.SinkManySerialized:100
                            processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:392
                            doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:227
                            lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:164
                            onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:185
                            runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:440
                            run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:527
                            call                                     N      reactor.core.scheduler.WorkerTask:84
                            call                                     N      reactor.core.scheduler.WorkerTask:37
                            run                                      N      java.util.concurrent.FutureTask:264
                            run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
                            runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1128
                            run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:628
                            run                                      N      java.lang.Thread:829
                        
                        
                        END OF ERROR REPORT.
                        
                        

                        I think the vgs query is improperly formatted for this version of device-mapper-persistent-data.

                        [12:31 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator ;
                        vgs: option '--separator' requires an argument
                          Error during parsing of command line.
                        

                        but it works if formatted like this:

                        [12:32 node0 ~]# vgs -o lv_name,lv_size,data_percent --units b --noheadings --separator=";"
                          MGT;4194304B;
                          VHD-d959f7a9-2bd1-4ac5-83af-1724336a73d0;532676608B;
                          thin_device;23044370202624B;6.96
                          xcp-persistent-database_00000;1077936128B;13.85
                          xcp-volume-fda3d913-47cc-4a8d-8a54-3364c8ae722a_00000;86083895296B;25.20
                          xcp-volume-8ddb8f7e-a549-4c53-a9d5-9b2e40d3810e_00000;215197155328B;2.35
                          xcp-volume-43467341-30c8-4fec-b807-81334d0dd309_00000;215197155328B;2.52
                          xcp-volume-5283a6e0-4e95-4aca-b5e1-7eb3fea7fcd3_00000;2194921226240B;69.30
                          xcp-volume-907e72d1-4389-4425-8e1e-e53a4718cb92_00000;86088089600B;0.60
                          xcp-volume-4c368a33-d0af-4f1d-9f7d-486a1df1d028_00000;86088089600B;0.06
                          xcp-volume-2bd88964-3feb-401a-afc1-c88c790cc206_00000;86092283904B;24.81
                          xcp-volume-833eba2a-a70b-4787-b78a-afef8cc0e14d_00000;86092283904B;0.04
                          xcp-volume-81809c66-5763-4558-919a-591b864d3f22_00000;215197155328B;4.66
                          xcp-volume-9fa2ec95-9bea-45ae-a583-6f1941a614e7_00000;86096478208B;0.04
                          xcp-volume-5dbfaef0-cc83-43a8-bba1-469d65bc3460_00000;215205543936B;6.12
                          xcp-volume-1e2dd480-a505-46fc-a6e8-ac8d4341a213_00000;215209738240B;0.02
                          xcp-volume-603ac344-edf1-43d7-8c27-eecfd7e6d627_00000;215209738240B;2.19
                        
                        

                        In fact, vgs --separator accepts pretty much any character except a semicolon. Maybe it's a problem with this version of LVM2?
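                        For what it's worth, the interactive failure above may simply be the shell consuming the unquoted semicolon (so vgs never receives a separator argument); that wouldn't affect LINSTOR itself, which execs vgs directly. A quick check, reusing the pool name from the error report:

                          # quoting the semicolon reproduces the argument LINSTOR passes
                          vgs -o lv_name,lv_size,data_percent --units b --noheadings --nosuffix --separator ';' linstor_group/thin_device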

                        [12:37 node0 ~]# yum info device-mapper-persistent-data.x86_64
                        Loaded plugins: fastestmirror
                        Loading mirror speeds from cached hostfile
                        Excluding mirror: updates.xcp-ng.org
                         * xcp-ng-base: mirrors.xcp-ng.org
                        Excluding mirror: updates.xcp-ng.org
                         * xcp-ng-updates: mirrors.xcp-ng.org
                        Installed Packages
                        Name        : device-mapper-persistent-data
                        Arch        : x86_64
                        Version     : 0.7.3
                        Release     : 3.el7
                        Size        : 1.2 M
                        Repo        : installed
                        From repo   : install
                        
                        
                        • jmm

                          Hi team,
                          I'm currently testing XOSTOR on a three-node XCP-ng 8.2.1 pool.
                          Before adding any new VM, I replaced a node (xcp-hc3).
                          Since everything seemed to be OK, I added two VMs.
                          But I think that a diskless resource is missing for "xcp-persistent-database".
                          Is there a way to resolve this situation?

                          [10:23 xcp-hc1 ~]# linstor resource list
                          ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
                          ┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ Conns ┊ State ┊ CreatedOn ┊
                          ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
                          ┊ xcp-persistent-database ┊ xcp-hc1 ┊ 7000 ┊ InUse ┊ Ok ┊ UpToDate ┊ 2023-12-18 15:47:37 ┊
                          ┊ xcp-persistent-database ┊ xcp-hc2 ┊ 7000 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-18 15:47:37 ┊
                          ┊ xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 ┊ xcp-hc1 ┊ 7004 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-18 17:41:41 ┊
                          ┊ xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 ┊ xcp-hc2 ┊ 7004 ┊ InUse ┊ Ok ┊ Diskless ┊ 2023-12-18 17:41:41 ┊
                          ┊ xcp-volume-17208381-56c0-4d8a-9c16-0a2000a45e56 ┊ xcp-hc3 ┊ 7004 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-18 17:41:42 ┊
                          ┊ xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b ┊ xcp-hc1 ┊ 7002 ┊ InUse ┊ Ok ┊ Diskless ┊ 2023-12-19 10:17:15 ┊
                          ┊ xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b ┊ xcp-hc2 ┊ 7002 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-19 09:49:35 ┊
                          ┊ xcp-volume-94af3c03-91b4-46ea-bf51-d0c50a085e6b ┊ xcp-hc3 ┊ 7002 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-19 09:49:35 ┊
                          ┊ xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 ┊ xcp-hc1 ┊ 7005 ┊ Unused ┊ Ok ┊ Diskless ┊ 2023-12-19 10:17:16 ┊
                          ┊ xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 ┊ xcp-hc2 ┊ 7005 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-19 09:49:45 ┊
                          ┊ xcp-volume-a395bb01-76a2-4e9a-a082-f18b3287afb2 ┊ xcp-hc3 ┊ 7005 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-12-19 09:49:45 ┊

                          • jmm @jmm

                            @jmm Self-answer:
                            linstor resource create xcp-hc3 xcp-persistent-database --drbd-diskless

                            🙂

                            • gb.123

                              I am getting:

                                WARNING: Pool zeroing and 1.00 MiB large chunk size slows down thin provisioning.
                                WARNING: Consider disabling zeroing (-Zn) or using smaller chunk size (<512.00 KiB).
                              

                              How do I change the chunk size and/or zeroing?

                              Can this be done 'on the fly' (without losing data)?
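                              A sketch, assuming the warning comes from the linstor_group/thin_device thin pool used elsewhere in this thread: zeroing can be toggled on an existing pool in place, while the chunk size is fixed when the pool is created:

                                # disable zeroing of newly provisioned chunks on the existing thin pool
                                lvchange -Zn linstor_group/thin_device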

                                • BHellman (3rd party vendor)

                                  This thread has grown quite large and has a lot of information in it. Is there an official documentation chapter on XOSTOR available anywhere?

                                  • olivierlambert (Vates 🪐 Co-Founder CEO)

                                    For now it's within this thread 🙂 Feel free to tell us what's missing in the first post!

                                    • ronan-a (Vates 🪐 XCP-ng Team) @BHellman

                                      @BHellman The first post has a FAQ that I update each time I meet users with a common/recurring problem. 😉

                                      • BHellman (3rd party vendor)

                                        Thanks for the replies. My issues are currently with the GUI, so I don't know if that applies here; please let me know if that's outside the scope of this thread and I can post elsewhere.

                                        One issue: upon creating a new XOSTOR SR, the packages are installed, but the SR creation fails because one of the packages, sm-rawhba, needs updating. You have to apply patches through the GUI and then reboot the node, or execute "xe-toolstack-restart" on each node. You can then go back and create a new SR, but only after wiping the disks that you originally tried to create the SR on (vgremove and pvremove).
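                                        For reference, a sketch of that wipe step, assuming the failed attempt left behind the volume group name used by XOSTOR (linstor_group) and /dev/sdX stands for the disk(s) originally selected:

                                          # remove the leftover VG and PV before retrying SR creation
                                          vgremove -f linstor_group
                                          pvremove /dev/sdX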

                                        I'm planning on doing some more testing; please let me know if GUI issues are appropriate to post here.

                                        • ronan-a (Vates 🪐 XCP-ng Team) @BHellman

                                          @BHellman It's fine to post simple issues in this thread. For complex problems a ticket is probably better. 🙂

                                          One issue is upon creating a new XOSTOR SR, the packages are installed, however the SR creation fails due to one of the packages, sm-rawhba, that needs updating.

                                          Not exactly: sm-rawhba is added to the list because the UI installs a modified version of sm with LINSTOR support.
                                          The real issue is that xe-toolstack-restart is not called during the initial setup; a method is missing in our updater plugin to check whether a package is present or not. I will add this method for the XOA team. 😉

                                          • BHellman (3rd party vendor)

                                            I'm not sure what the expected behavior is, but...

                                            I have xcp1, xcp2, and xcp3 as hosts in my XOSTOR pool, using an XOSTOR repository. I had a VM running on xcp2, unplugged the power from it, and left it unplugged for about 5 minutes. The VM remained "running" according to XOA, but it wasn't.

                                            What is the expected behavior when this happens and how do you go about recovering from a temporarily failed/powered off node?

                                            My expectation was that my VM would move to xcp1 (where there is a replica) and start, then outdate xcp2. I have "auto start" enabled under Advanced on the VM.
