    Unable to enable HA with XOSTOR

ronan-a (Vates 🪐 XCP-ng Team):

      @dslauter Can you check the SMlog/kernel.log/daemon.log traces? Without these details, it is not easy to investigate. Thanks!
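      On XCP-ng these logs live directly in dom0 on each host, so something along these lines (paths assumed to be the standard XCP-ng ones) should surface the relevant traces:

          # run over SSH on each host (dom0)
          less /var/log/SMlog        # storage manager (SM) traces
          less /var/log/daemon.log   # xapi and other daemons
          less /var/log/kern.log     # kernel messages
          # e.g. narrow down to statefile/HA-related lines
          grep -i statefile /var/log/SMlog | tail -n 50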

dslauter:

@ronan-a Sorry, this is my first time pulling logs from XOA. I looked in /var/logs but didn't see anything. Where are the log files located, or do I need to pull them from one of the hosts?

dslauter:

@ronan-a I found the logs. I also tried to enable HA via the CLI; here are the results:

          [08:14 xcp-ng-node01 log]# xe pool-ha-enable heartbeat-sr-uuids=3b8c8b54-8943-c596-f4c8-0f2be93bf58a
          This operation cannot be performed because this VDI could not be properly attached to the VM.
          vdi: f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118 (Statefile for HA)
          
          
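          As an aside, the statefile VDI named in that error can be inspected directly with xe; the UUID below is simply the one reported above:

              xe vdi-param-list uuid=f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118
              # or only a few fields of interest
              xe vdi-list uuid=f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118 params=name-label,sr-uuid,virtual-size,type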

Here is the output from SMlog:

          Nov 12 08:25:42 xcp-ng-node01 SM: [3943570] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:25:42 xcp-ng-node01 SM: [3943602] lock: acquired /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:25:42 xcp-ng-node01 SM: [3943602] sr_update {'host_ref': 'OpaqueRef:bc3d66ff-541d-b169-2a9e-8f34900b290e', 'command': 'sr_update', 'args': [], 'device_config': {'SRmaster': 'true', 'provisioning': 'thin', 'redundancy': '2', 'group-name': 'linstor_group/thin_device'}, 'session_ref': 'OpaqueRef:ec892d30-6783-58d3-7d24-1f867e0a16f7', 'sr_ref': 'OpaqueRef:987a867d-ca6f-f342-6bb8-fd50c5456ca0', 'sr_uuid': '3b8c8b54-8943-c596-f4c8-0f2be93bf58a', 'subtask_of': 'DummyRef:|4e89a859-b42a-aae9-460d-7e4b19b4b36b|SR.stat', 'local_cache_sr': '3767df0a-ccbe-86c0-57b7-a9be1b05da9e'}
          Nov 12 08:25:42 xcp-ng-node01 SM: [3943602] Synchronize metadata...
          Nov 12 08:25:42 xcp-ng-node01 SM: [3943602] LinstorSR.update for 3b8c8b54-8943-c596-f4c8-0f2be93bf58a
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] Got sm-config for *217fdead(25.000G??): {'vhd-blocks': 'eJxrYMAPFjDY//////3/BgLqhitoAMKBtR8CLP6DQItP/X/soAGPGcQAdPP2g4j5////g8q8OrV5RX31/3qIXdkfKLSOaACzfz8OfxMLGki0F6bvH8N7MH0X6IJ9D2TqqGU+IXsR5p6Rx+2rHwyMYLqRYnfYY5id/p+Bsc7+H4aLiAcWfygPjw9k2k9uOKCDBgbqmjf4ACP5EQwEAOHNyWQ=', 'vhd-parent': '3a229b20-674e-442f-a6e0-48bc7af5dc50'}
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] Got sm-config for *3a229b20(25.000G??): {'vhd-blocks': 'eJzFlGEKgCAMRuvmu1kexRvU74i+qHShoNmG64GMifo2BZehDrBekbJ5bd4KQByxCQ8R1APmbWUfcv8Ushkw9rssT6Hu/tsQ+/edfOU69qQeMvfzu9dQe6jgjR17tUFG9Lvf/C5E2/fn61f8eW2M56BnfNt8AAO+vBw='}
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] Got sm-config for *6c940c52(32.000G??): {'vhd-blocks': 'eJz7/7+BgYFBwf4/BNz/jwTa/w8x8IMBBJgH2hmjAAT+MVADMA60NxDgM1GqHsj//8APzlSjYBSMglEwkABeLH1gbgBy6zGKKwDI8hy6'}
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] No work, exiting
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] GC process exiting, no work left
          Nov 12 08:25:42 xcp-ng-node01 SM: [3943570] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/gc_active
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] In cleanup
          Nov 12 08:25:42 xcp-ng-node01 SMGC: [3943570] SR 3b8c ('xostor') (13 VDIs in 7 VHD trees): no changes
          Nov 12 08:25:43 xcp-ng-node01 SM: [3943602] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:25:43 xcp-ng-node01 SM: [3943602] lock: closed /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943974] sr_scan {'host_ref': 'OpaqueRef:bc3d66ff-541d-b169-2a9e-8f34900b290e', 'command': 'sr_scan', 'args': [], 'device_config': {'SRmaster': 'true', 'location': '/media', 'legacy_mode': 'true'}, 'session_ref': 'OpaqueRef:a22ce0f0-ae6f-b049-8fef-3fdb2604ec6f', 'sr_ref': 'OpaqueRef:9d6d60ca-c423-fd4c-6aeb-a93ee7d7c007', 'sr_uuid': '8b3ae612-eab2-c567-06c2-de53f77e1b11', 'subtask_of': 'DummyRef:|ef109410-b9b3-2af5-5e4d-9cfe9c8a8a68|SR.scan', 'local_cache_sr': '3767df0a-ccbe-86c0-57b7-a9be1b05da9e'}
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] lock: acquired /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] sr_scan {'host_ref': 'OpaqueRef:bc3d66ff-541d-b169-2a9e-8f34900b290e', 'command': 'sr_scan', 'args': [], 'device_config': {'SRmaster': 'true', 'provisioning': 'thin', 'redundancy': '2', 'group-name': 'linstor_group/thin_device'}, 'session_ref': 'OpaqueRef:c1d3ce7d-568c-7c82-4cbf-1b2aaa6b4c90', 'sr_ref': 'OpaqueRef:987a867d-ca6f-f342-6bb8-fd50c5456ca0', 'sr_uuid': '3b8c8b54-8943-c596-f4c8-0f2be93bf58a', 'subtask_of': 'DummyRef:|2474fdb9-0afe-9fe7-aa4a-c9cd655489d2|SR.scan', 'local_cache_sr': '3767df0a-ccbe-86c0-57b7-a9be1b05da9e'}
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943996] sr_update {'host_ref': 'OpaqueRef:bc3d66ff-541d-b169-2a9e-8f34900b290e', 'command': 'sr_update', 'args': [], 'device_config': {'SRmaster': 'true', 'location': '/media', 'legacy_mode': 'true'}, 'session_ref': 'OpaqueRef:d4b9fa5f-21d2-1deb-c8fe-ed530cd2ab47', 'sr_ref': 'OpaqueRef:9d6d60ca-c423-fd4c-6aeb-a93ee7d7c007', 'sr_uuid': '8b3ae612-eab2-c567-06c2-de53f77e1b11', 'subtask_of': 'DummyRef:|006baf72-e046-ce5b-dafb-554feb5c905c|SR.stat', 'local_cache_sr': '3767df0a-ccbe-86c0-57b7-a9be1b05da9e'}
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-403e2109-8d27-4220-a3dd-df5c182926fa/0']
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 6c940c52-1cdf-47e9-bc72-344d8050e7f6 loaded! (path=/dev/drbd/by-res/xcp-volume-403e2109-8d27-4220-a3dd-df5c182926fa/0, hidden=1)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-dbc6010f-494a-42b3-a139-e6652b26c712/0']
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 217fdead-5688-4c0c-b5cd-4d1c610aaefc loaded! (path=/dev/drbd/by-res/xcp-volume-dbc6010f-494a-42b3-a139-e6652b26c712/0, hidden=1)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-b55499f0-9dcd-42ad-a8d2-bfac924aac1f/0']
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 43b0ecaf-898b-4ee3-9300-501dee5bd943 loaded! (path=/dev/drbd/by-res/xcp-volume-b55499f0-9dcd-42ad-a8d2-bfac924aac1f/0, hidden=0)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-9187ccc0-b6dc-4e17-834f-e8806541a4d2/0']
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 93cef280-1894-48f1-a237-d14a6743c6be loaded! (path=/dev/drbd/by-res/xcp-volume-9187ccc0-b6dc-4e17-834f-e8806541a4d2/0, hidden=0)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-71f834dc-4fa0-42ba-b35b-bf2bc0c340aa/0']
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 0c63e270-5705-4ff3-8d09-036583fa1fd5 loaded! (path=/dev/drbd/by-res/xcp-volume-71f834dc-4fa0-42ba-b35b-bf2bc0c340aa/0, hidden=0)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI 36083326-c85b-464f-aa6e-92fc29f22624 loaded! (path=/dev/drbd/by-res/xcp-persistent-redo-log/0, hidden=0)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] VDI f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118 loaded! (path=/dev/drbd/by-res/xcp-persistent-ha-statefile/0, hidden=0)
          Nov 12 08:26:10 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-4dff30b5-0e20-4cc8-aeba-bece4ac5f4e7/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI f6cbbb5c-01c9-4842-90ea-4adba5063591 loaded! (path=/dev/drbd/by-res/xcp-volume-4dff30b5-0e20-4cc8-aeba-bece4ac5f4e7/0, hidden=0)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-d7f1e380-8388-4d3e-9fc8-bf25361d7179/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI 3a229b20-674e-442f-a6e0-48bc7af5dc50 loaded! (path=/dev/drbd/by-res/xcp-volume-d7f1e380-8388-4d3e-9fc8-bf25361d7179/0, hidden=1)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-5cfeecb4-a32f-4e18-8ff9-f296eecce08d/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI 0a3e72c7-4e3a-408e-942b-848c7503d724 loaded! (path=/dev/drbd/by-res/xcp-volume-5cfeecb4-a32f-4e18-8ff9-f296eecce08d/0, hidden=0)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-69ad1b0b-6143-4dcb-9f26-77d393074590/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI 43ccc9dd-96a3-4b2d-b952-73fdce6301c2 loaded! (path=/dev/drbd/by-res/xcp-volume-69ad1b0b-6143-4dcb-9f26-77d393074590/0, hidden=0)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-2ab66919-f055-47b2-8c65-7ff7d7f48f28/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI 47353755-f3b8-48c7-a001-69b68ebb4750 loaded! (path=/dev/drbd/by-res/xcp-volume-2ab66919-f055-47b2-8c65-7ff7d7f48f28/0, hidden=0)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] unable to execute `getVHDInfo` locally, retry using a readable host... (cause: local diskless + in use or not up to date)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] call-plugin (getVHDInfo with {'devicePath': '/dev/drbd/by-res/xcp-volume-987a2379-1c58-40cb-a17d-a01a6def1903/0', 'groupName': 'linstor_group/thin_device', 'includeParent': 'True', 'resolveParent': 'False'}) returned: {"uuid": "4af7379f-8b88-4472-a8d2-a0386877e75a", "sizeVirt": 21474836480, "sizePhys": 6110552576, "hidden": 0, "sizeAllocated": 2906, "path": "/dev/drbd/by-res/xcp-volume-987a2379-1c58-40cb-a17d-a01a6def1903/0"}
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] VDI 4af7379f-8b88-4472-a8d2-a0386877e75a loaded! (path=/dev/drbd/by-res/xcp-volume-987a2379-1c58-40cb-a17d-a01a6def1903/0, hidden=0)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] Undoing all journal transactions...
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] Synchronize metadata...
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] LinstorSR.scan for 3b8c8b54-8943-c596-f4c8-0f2be93bf58a
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/running
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: tried lock /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/running, acquired: True (exists: True)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/running
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] Kicking GC
          Nov 12 08:26:11 xcp-ng-node01 SMGC: [3943972] === SR 3b8c8b54-8943-c596-f4c8-0f2be93bf58a: gc ===
          Nov 12 08:26:11 xcp-ng-node01 SMGC: [3944118] Will finish as PID [3944119]
          Nov 12 08:26:11 xcp-ng-node01 SMGC: [3943972] New PID [3944118]
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/running
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/gc_active
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: closed /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/running
          Nov 12 08:26:11 xcp-ng-node01 SM: [3943972] lock: closed /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: acquired /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: tried lock /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/gc_active, acquired: True (exists: True)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] lock: tried lock /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr, acquired: True (exists: True)
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-403e2109-8d27-4220-a3dd-df5c182926fa/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-dbc6010f-494a-42b3-a139-e6652b26c712/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-b55499f0-9dcd-42ad-a8d2-bfac924aac1f/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-9187ccc0-b6dc-4e17-834f-e8806541a4d2/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-71f834dc-4fa0-42ba-b35b-bf2bc0c340aa/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-4dff30b5-0e20-4cc8-aeba-bece4ac5f4e7/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944151] lock: opening lock file /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944151] Failed to lock /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr on first attempt, blocked by PID 3944119
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-d7f1e380-8388-4d3e-9fc8-bf25361d7179/0']
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:11 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-5cfeecb4-a32f-4e18-8ff9-f296eecce08d/0']
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-69ad1b0b-6143-4dcb-9f26-77d393074590/0']
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119] ['/usr/bin/vhd-util', 'query', '--debug', '-vsafpu', '-n', '/dev/drbd/by-res/xcp-volume-2ab66919-f055-47b2-8c65-7ff7d7f48f28/0']
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119]   pread SUCCESS
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119] unable to execute `getVHDInfo` locally, retry using a readable host... (cause: local diskless + in use or not up to date)
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119] call-plugin (getVHDInfo with {'devicePath': '/dev/drbd/by-res/xcp-volume-987a2379-1c58-40cb-a17d-a01a6def1903/0', 'groupName': 'linstor_group/thin_device', 'includeParent': 'True', 'resolveParent': 'False'}) returned: {"uuid": "4af7379f-8b88-4472-a8d2-a0386877e75a", "sizeVirt": 21474836480, "sizePhys": 6110552576, "hidden": 0, "sizeAllocated": 2906, "path": "/dev/drbd/by-res/xcp-volume-987a2379-1c58-40cb-a17d-a01a6def1903/0"}
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119] SR 3b8c ('xostor') (13 VDIs in 7 VHD trees):
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         *6c940c52(32.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]             43b0ecaf(32.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]             f6cbbb5c(32.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         93cef280(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         36083326(??)[RAW]
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         f83ed9c9(??)[RAW]
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         *3a229b20(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]             *217fdead(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]                 0a3e72c7(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]                 47353755(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]             0c63e270(25.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         43ccc9dd(4.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]         4af7379f(20.000G??)
          Nov 12 08:26:12 xcp-ng-node01 SMGC: [3944119]
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944119] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944151] lock: acquired /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
          Nov 12 08:26:12 xcp-ng-node01 SM: [3944151] sr_update {'host_ref': 'OpaqueRef:bc3d66ff-541d-b169-2a9e-8f34900b290e', 'command': 'sr_update', 'args': [], 'device_config': {'SRmaster': 'true', 'provisioning': 'thin', 'redundancy': '2', 'group-name': 'linstor_group/thin_device'}, 'session_ref': 'OpaqueRef:38ec5b0c-1810-002b-10e3-a18f698741a4', 'sr_ref': 'OpaqueRef:987a867d-ca6f-f342-6bb8-fd50c5456ca0', 'sr_uuid': '3b8c8b54-8943-c596-f4c8-0f2be93bf58a', 'subtask_of': 'DummyRef:|d189fd68-16a9-80c9-410e-2bc724ce6a97|SR.stat', 'local_cache_sr': '3767df0a-ccbe-86c0-57b7-a9be1b05da9e'}
          
ronan-a (Vates 🪐 XCP-ng Team):

@dslauter I don't see any error here. Can you check the other hosts? And how did you create the SR? Is the shared=true flag set?
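To check that from the CLI, something like the following should work; the UUID is the XOSTOR SR UUID seen in the logs above:

    xe sr-param-get uuid=3b8c8b54-8943-c596-f4c8-0f2be93bf58a param-name=shared
    # HA's statefile SR is expected to be shared across the pool, so this should return "true"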

dslauter:

@ronan-a I created it with the WebUI in XOA. Where would I check the shared flag? I was able to find the error: it looks like it's failing to start an http-disk-server.

              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] lock: acquired /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
              Nov 13 09:04:23 xcp-ng-node01 fairlock[3237]: /run/fairlock/devicemapper acquired
              Nov 13 09:04:23 xcp-ng-node01 fairlock[3237]: /run/fairlock/devicemapper sent '1386563 - 313847.383599871#013[�'
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] ['/sbin/vgchange', '-ay', 'linstor_group']
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   pread SUCCESS
              Nov 13 09:04:23 xcp-ng-node01 fairlock[3237]: /run/fairlock/devicemapper released
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] VDI f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118 loaded! (path=/dev/http-nbd/xcp-persistent-ha-statefile, hidden=False)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] lock: released /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] vdi_attach_from_config {'device_config': {'SRmaster': 'true', 'provisioning': 'thin', 'redundancy': '2', 'group-name': 'linstor_group/thin_device'}, 'sr_uuid': '3b8c8b54-8943-c596-f4c8-0f2be93bf58a', 'vdi_uuid': 'f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118', 'sr_sm_config': {}, 'command': 'vdi_attach_from_config', 'vdi_path': '/dev/http-nbd/xcp-persistent-ha-statefile'}
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] LinstorVDI.attach_from_config for f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] LinstorVDI.attach for f83ed9c9-e2fc-4ddb-b4c1-5c8aae176118
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] Starting http-disk-server on port 8076...
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] Raising exception [46, The VDI is not available [opterr=Failed to start http-server: cannot use a string pattern on a bytes-like object]]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] ***** LinstorVDI.attach_from_config: EXCEPTION <class 'xs_errors.SROSError'>, The VDI is not available [opterr=Failed to start http-server: cannot use a string pattern on a bytes-like object]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 2064, in attach_from_config
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     return self.attach(sr_uuid, vdi_uuid)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 1834, in attach
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     return self._attach_using_http_nbd()
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 2806, in _attach_using_http_nbd
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     self._start_persistent_http_server(volume_name)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 2599, in _start_persistent_http_server
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     opterr='Failed to start http-server: {}'.format(e)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] Raising exception [47, The SR is not available [opterr=Unable to attach from config]]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] ***** generic exception: vdi_attach_from_config: EXCEPTION <class 'xs_errors.SROSError'>, The SR is not available [opterr=Unable to attach from config]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 111, in run
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     return self._run_locked(sr)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 161, in _run_locked
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     rv = self._run(sr, target)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 300, in _run
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     ret = target.attach_from_config(self.params['sr_uuid'], self.vdi_uuid)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 2069, in attach_from_config
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     opterr='Unable to attach from config'
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'xs_errors.SROSError'>, The SR is not available [opterr=Unable to attach from config]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 385, in run
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     ret = cmd.run(sr)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 111, in run
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     return self._run_locked(sr)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 161, in _run_locked
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     rv = self._run(sr, target)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/SRCommand.py", line 300, in _run
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     ret = target.attach_from_config(self.params['sr_uuid'], self.vdi_uuid)
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]   File "/opt/xensource/sm/LinstorSR", line 2069, in attach_from_config
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]     opterr='Unable to attach from config'
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563]
              Nov 13 09:04:23 xcp-ng-node01 SM: [1386563] lock: closed /var/lock/sm/3b8c8b54-8943-c596-f4c8-0f2be93bf58a/sr
              
              
ronan-a (Vates 🪐 XCP-ng Team):

@dslauter Are you using XCP-ng 8.3? If that's the case, I think there is a porting problem concerning Python 3...
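For anyone curious about the "cannot use a string pattern on a bytes-like object" message: that is the usual Python 3 symptom of matching a str regex against bytes (for example, undecoded subprocess output). A minimal illustration of the failure mode and its two common fixes, not the actual sm code:

    import re
    import subprocess

    # In Python 3, subprocess output is bytes unless text=True is passed.
    out = subprocess.check_output(['echo', 'http-disk-server listening on 8076'])

    try:
        re.search(r'listening', out)        # str pattern applied to bytes
    except TypeError as exc:
        print(exc)  # -> cannot use a string pattern on a bytes-like object

    # Fix: decode the bytes first...
    assert re.search(r'listening', out.decode())
    # ...or use a bytes pattern.
    assert re.search(rb'listening', out)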

dslauter:

@ronan-a Yes, I'm on 8.3.

ronan-a (Vates 🪐 XCP-ng Team):

@dslauter Just for your information, I will update http-nbd-transfer + sm in a few weeks. I have fixed many issues regarding HA activation in 8.3 that were caused by a bad migration of specific Python code from version 2 to version 3.

dslauter:

                      @ronan-a thank you for the update!

ronan-a (Vates 🪐 XCP-ng Team):

@dslauter FYI, you can test the new RPMs from the testing repository: sm-3.2.3-1.14.xcpng8.3 and http-nbd-transfer-1.5.0-1.xcpng8.3.
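A rough sketch of pulling those from the testing repository (the repo id xcp-ng-testing is assumed here, per the usual XCP-ng update instructions), then re-testing HA:

    # on each host (dom0)
    yum update sm http-nbd-transfer --enablerepo=xcp-ng-testing
    # restart the toolstack so the updated SM code is picked up
    xe-toolstack-restart
    # then retry on the pool master
    xe pool-ha-enable heartbeat-sr-uuids=3b8c8b54-8943-c596-f4c8-0f2be93bf58a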
