XCP-ng

    dumarjo (@dumarjo)
    Reputation: 4 · Profile views: 7 · Posts: 17 · Followers: 0 · Following: 0

    Best posts made by dumarjo

    • RE: XOSTOR hyperconvergence preview

      @ronan-a Hi,

      I tested your branch, and hosts newly added to the pool are now attached to the XOSTOR SR. This is nice!

      I have looked at the code, but I'm not sure whether, in the current state of your branch, we can add a disk on the new host and update the replication. I think not... but just to be sure.
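
      For now, I assume the manual way to do that would look something like this (a rough sketch based on the commands later in this thread; the new host name and the final replica count are only examples):

      # on the new host: prepare the thin pool, same as the install script does
      [xcp-ng-03 ~]# ./install --disks /dev/sdb --thin
      # from the new host, pointing at the controller: register a storage pool there, then raise the replica count
      [xcp-ng-03 ~]# linstor --controllers=10.33.33.40 storage-pool create lvmthin xcp-ng-03 xcp-sr-linstor_group_thin_device linstor_group/thin_device
      [xcp-ng-03 ~]# linstor --controllers=10.33.33.40 rg modify --place-count 3 xcp-sr-linstor_group_thin_device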

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      Hi,

      OK, I figured out how to do it and got it working on 2 or more nodes. Here is the process:

      [xcp-ng-01 ~]# wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/26d1db55fafa4622af2d9ee29a48f6756b8b11a3/gistfile1.txt -O install && chmod +x install
      [xcp-ng-01 ~]# ./install --disks /dev/sdb --thin
      [xcp-ng-01 ~]# vgchange -a y linstor_group
      [xcp-ng-01 ~]# xe sr-create type=linstor name-label=XOSTOR host-uuid=71324aae-aff1-4323-bb0b-2c5f858b223e device-config:hosts=xcp-ng-01 device-config:group-name=linstor_group/thin_device device-config:redundancy=1 shared=true device-config:provisioning=thin
      

      Now the SR is available to create VMs. For simplicity, I won't create a VM now.
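
      (For anyone who wants a quick sanity check of the new SR anyway, something like this should do it; the name-label and size are arbitrary, and <sr-uuid> is the UUID returned by the sr-create above:)

      [xcp-ng-01 ~]# xe vdi-create sr-uuid=<sr-uuid> name-label=test-disk type=user virtual-size=10GiB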

      [xcp-ng-02 ~]# wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/26d1db55fafa4622af2d9ee29a48f6756b8b11a3/gistfile1.txt -O install && chmod +x install
      [xcp-ng-02 ~]# ./install --disks /dev/sdb --thin
      [xcp-ng-02 ~]# vgchange -a y linstor_group
      

      On both hosts, I modified /etc/hosts to add both hosts with their IPs, to work around the driver bug.
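
      For reference, the entries look roughly like this (the addresses here are examples, use the real ones):

      # /etc/hosts on both nodes
      10.33.33.40  xcp-ng-01
      10.33.33.41  xcp-ng-02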

      Start the services on node 2:

      systemctl enable minidrbdcluster.service
      systemctl enable linstor-satellite.service
      systemctl start linstor-satellite.service
      systemctl start minidrbdcluster.service
      

      Open the iptables ports on node 2:

      /etc/xapi.d/plugins/firewall-port open 3366
      /etc/xapi.d/plugins/firewall-port open 3370
      /etc/xapi.d/plugins/firewall-port open 3376
      /etc/xapi.d/plugins/firewall-port open 3377
      /etc/xapi.d/plugins/firewall-port open 8076
      /etc/xapi.d/plugins/firewall-port open 8077
      /etc/xapi.d/plugins/firewall-port open 7000:8000
      
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 node create --node-type combined $HOSTNAME
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 storage-pool create lvmthin $HOSTNAME xcp-sr-linstor_group_thin_device linstor_group/thin_device
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 resource create $HOSTNAME xcp-persistent-database --storage-pool xcp-sr-linstor_group_thin_device
      

      After that, you end up in a DRBD split-brain. I have no idea why; my knowledge isn't good enough to figure it out right now. But I know how to fix it.
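
      A quick way to see it (just a check, the fix is below): drbdadm status should report the connection as StandAlone on at least one of the nodes.

      [xcp-ng-01 ~]# drbdadm status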

      On node 2, run these commands:

      drbdadm secondary all
      drbdadm disconnect all
      drbdadm -- --discard-my-data connect all
      

      On node 1, run these commands:

      drbdadm primary all
      drbdadm disconnect all
      drbdadm connect all
      

      Now LINSTOR/DRBD are in good shape and should have all the resources.
      For the sake of fun, I changed the place count from 1 to 2 on the LINSTOR controller:

      [xcp-ng-01 ~]# linstor rg modify --place-count 2 xcp-sr-linstor_group_thin_device
      

      Now the replication is working.
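
      To confirm that each resource now has a replica on both nodes, a simple check (nothing more than a listing):

      [xcp-ng-01 ~]# linstor resource list
      [xcp-ng-01 ~]# linstor resource-group list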

      Now on node 1, I unplug the PBD of xcp-ng-02, destroy it and create a new one with 2 hosts:
      [xcp-ng-01 ~]# xe pbd-unplug uuid=6295519d-1071-2127-4313-f14c9615f244
      [xcp-ng-01 ~]# xe pbd-destroy uuid=6295519d-1071-2127-4313-f14c9615f244
      [xcp-ng-01 ~]# xe pbd-create host-uuid=eb48f91d-9916-4542-9cf4-4a718abdc451  sr-uuid=505c1928-d39d-421c-1556-143f82770ff5  device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02
      [xcp-ng-01 ~]# xe pbd-plug uuid=774971b4-dd03-18c8-92e5-32cac9bdc1e3
      

      Do the same thing with the second PBD and everything is connected together.
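
      To verify, both PBDs should now show currently-attached = true (the SR UUID is the one used in the pbd-create above):

      [xcp-ng-01 ~]# xe pbd-list sr-uuid=505c1928-d39d-421c-1556-143f82770ff5 params=host-uuid,currently-attached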

      Not an easy task!

      Imagine if I have 30 VMs... a lot of resources to be created...

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      To be able to recreate and reconnect all the PBDs, I modified the /etc/hosts file to manually add each host with its IP. I know that @ronan-a is working on fixing the hostname addressing in the driver, but at least I can continue to test the scalability.

      Looks promising !

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      Hi,
      @ronan-a said in XOSTOR hyperconvergence preview:

      @dumarjo Could you open a ticket with a tunnel please? I can take a look. Also: I started a script this week to simplify the management of LINSTOR with add/remove commands. 🙂

      Ticket opened with Vates.

      posted in XOSTOR

    Latest posts made by dumarjo

    • RE: Import from vmware esxi 6.0 or 6.5

      @olivierlambert Not an urgent thing, just a first test of this.

      posted in Xen Orchestra
    • Import from vmware esxi 6.0 or 6.5

      Hi,

      I just tried to import a simple VM from ESXi 6.0 or 6.5 (I have both hosts). I just updated everything to the latest versions (XCP-ng, XO and XOA). When I try to import the VM I get an instant error message:

      vm.importMultipleFromEsxi
      {
        "concurrency": 2,
        "host": "10.0.1.9",
        "network": "0029d5bd-a537-0d88-862c-edb0ff89948e",
        "password": "* obfuscated *",
        "sr": "a4c92ff0-5388-12f6-7b30-021b76f6bbb9",
        "sslVerify": false,
        "stopOnError": true,
        "stopSource": true,
        "thin": true,
        "user": "root",
        "vms": [
          "5"
        ]
      }
      {
        "succeeded": {},
        "message": "Property description must be an object: undefined",
        "name": "TypeError",
        "stack": "TypeError: Property description must be an object: undefined
          at Function.defineProperty (<anonymous>)
          at Task.onProgress (/etc/xen-orchestra/@vates/task/combineEvents.js:51:16)
          at Task.#emit (/etc/xen-orchestra/@vates/task/index.js:126:21)
          at Task.#maybeStart (/etc/xen-orchestra/@vates/task/index.js:133:17)
          at Task.runInside (/etc/xen-orchestra/@vates/task/index.js:152:21)
          at Task.run (/etc/xen-orchestra/@vates/task/index.js:138:31)
          at asyncEach.concurrency.concurrency (file:///etc/xen-orchestra/packages/xo-server/src/api/vm.mjs:1372:58)
          at next (/etc/xen-orchestra/@vates/async-each/index.js:90:37)"
      } 
      

      I tried via the web interface of both XO and XOA.
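
      In case it helps to reproduce outside the UI, I suppose the same call could be made with xo-cli (untested on my side; the parameters are copied from the log above, with the password replaced):

      xo-cli vm.importMultipleFromEsxi host=10.0.1.9 user=root password=xxx sslVerify=json:false \
        sr=a4c92ff0-5388-12f6-7b30-021b76f6bbb9 network=0029d5bd-a537-0d88-862c-edb0ff89948e \
        stopOnError=json:true stopSource=json:true thin=json:true concurrency=json:2 vms=json:'["5"]'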

      Anything I can do?

      Regards

      posted in Xen Orchestra
    • RE: XOSTOR hyperconvergence preview

      @ronan-a

      Imagine if I have 30 VMs... a lot of resources to be created...

      I'm not sure I understand the link with VMs? ^^"

      From what I understand, I have to recreate all the resources manually, since each VM disk creates a resource. Maybe I'm wrong again.
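
      If it really comes to that, I suppose a small loop could do it (a rough, untested sketch; the resource names besides xcp-persistent-database and the target node are only examples):

      # create each existing resource on the newly added node
      for res in xcp-persistent-database xcp-volume-aaaa xcp-volume-bbbb; do
          linstor resource create xcp-ng-03 "$res" --storage-pool xcp-sr-linstor_group_thin_device
      done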

      I'm very interested in testing this new add_host functionality. The fun part is that I now have a better understanding of what is going on under the hood!

      Regards,

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      Any input appreciated

      Regards

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @ronan-a
      Hi,

      I did some experiments... and cannot find the missing piece of the puzzle to add my xcp-ng-04 host.

      From my last status, my new xcp-ng-04 host is part of the pool and I have installed all the LINSTOR tools.

      I checked the satellite and controller services on xcp-ng-04 and they are not running. I have no idea if I need to start something manually or not.
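
      For reference, this is how they were enabled and started on the other nodes earlier in this thread (I'm not sure whether starting them manually is the right move here):

      systemctl status linstor-satellite.service minidrbdcluster.service
      systemctl enable --now linstor-satellite.service minidrbdcluster.service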

      Here is my SMlog on a freshly booted xcp-ng-04:

      Mar 22 11:54:14 xcp-ng-04 SM: [2685] sr_attach {'sr_uuid': '38c2baa3-bc8f-fbc5-ef5a-42461db92c51', 'subtask_of': 'DummyRef:|b4ea936f-80bb-4a6b-98b9-66a95f86b006|SR.attach', 'args': [],$
      Mar 22 11:54:14 xcp-ng-04 SMGC: [2685] === SR 38c2baa3-bc8f-fbc5-ef5a-42461db92c51: abort ===
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: opening lock file /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/running
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: opening lock file /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/gc_active
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: tried lock /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/gc_active, acquired: True (exists: True)
      Mar 22 11:54:14 xcp-ng-04 SMGC: [2685] abort: releasing the process lock
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: released /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/gc_active
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: opening lock file /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/sr
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: acquired /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/running
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: acquired /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/sr
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] RESET for SR 38c2baa3-bc8f-fbc5-ef5a-42461db92c51 (master: True)
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: released /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/sr
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] lock: released /var/lock/sm/38c2baa3-bc8f-fbc5-ef5a-42461db92c51/running
      Mar 22 11:54:14 xcp-ng-04 SM: [2685] set_dirty 'OpaqueRef:ea31ae92-4207-4c8b-8db6-da901c6a00a8' succeeded
      Mar 22 11:54:14 xcp-ng-04 SM: [2709] sr_update {'sr_uuid': '38c2baa3-bc8f-fbc5-ef5a-42461db92c51', 'subtask_of': 'DummyRef:|ae4505d5-b9fe-4f39-92dc-67ab9d0b579b|SR.stat', 'args': [], '$
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: opening lock file /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/sr
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: acquired /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/sr
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] sr_attach {'sr_uuid': 'bef191f3-e976-94ec-6bb7-d87529a72dbb', 'subtask_of': 'DummyRef:|ae624d96-e91a-4a12-afd6-35593be4ce51|SR.attach', 'args': [],$
      Mar 22 11:54:15 xcp-ng-04 SMGC: [2725] === SR bef191f3-e976-94ec-6bb7-d87529a72dbb: abort ===
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: opening lock file /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/running
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: opening lock file /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/gc_active
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: tried lock /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/gc_active, acquired: True (exists: True)
      Mar 22 11:54:15 xcp-ng-04 SMGC: [2725] abort: releasing the process lock
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: released /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/gc_active
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: acquired /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/running
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] RESET for SR bef191f3-e976-94ec-6bb7-d87529a72dbb (master: False)
      Mar 22 11:54:15 xcp-ng-04 SM: [2725] lock: released /var/lock/sm/bef191f3-e976-94ec-6bb7-d87529a72dbb/running
      Mar 22 11:54:16 xcp-ng-04 SM: [2725] Got exception: Error: Unable to connect to any of the given controller hosts: ['linstor://xcp-ng-02']. Retry number: 0
      Mar 22 11:54:19 xcp-ng-04 SM: [2725] Got exception: Error: Unable to connect to any of the given controller hosts: ['linstor://xcp-ng-02']. Retry number: 1
      
      

      On xcp-ng-02 (the LINSTOR controller):

      [11:42 xcp-ng-02 ~]# linstor node list
      ╭────────────────────────────────────────────────────────────╮
      ┊ Node      ┊ NodeType ┊ Addresses                  ┊ State  ┊
      ╞════════════════════════════════════════════════════════════╡
      ┊ xcp-ng-01 ┊ COMBINED ┊ 192.168.2.221:3366 (PLAIN) ┊ Online ┊
      ┊ xcp-ng-02 ┊ COMBINED ┊ 192.168.2.222:3366 (PLAIN) ┊ Online ┊
      ┊ xcp-ng-03 ┊ COMBINED ┊ 192.168.2.223:3366 (PLAIN) ┊ Online ┊
      ╰────────────────────────────────────────────────────────────╯
      
      

      I list all the LINSTOR PBDs on the hosts:

      [11:49 xcp-ng-01 ~]# xe pbd-list | grep linstor -3
      uuid ( RO)                  : e75fdc51-29a4-aa57-bc44-459f80a0d230
                   host-uuid ( RO): ad95c6ca-612d-42af-8909-d4e9dc7645bb
                     sr-uuid ( RO): bef191f3-e976-94ec-6bb7-d87529a72dbb
               device-config (MRO): provisioning: thin; redundancy: 2; group-name: linstor_group/thin_device; hosts: xcp-ng-01,xcp-ng-02,xcp-ng-03
          currently-attached ( RO): true
      
      
      --
      uuid ( RO)                  : 063db650-55b1-a3e0-9d9a-e94ce938988d
                   host-uuid ( RO): 5747f145-0dc2-4987-a6b9-b6c5a7ed0505
                     sr-uuid ( RO): bef191f3-e976-94ec-6bb7-d87529a72dbb
               device-config (MRO): hosts: xcp-ng-01,xcp-ng-02,xcp-ng-03; group-name: linstor_group/thin_device; redundancy: 2; provisioning: thin
          currently-attached ( RO): false
      
      
      --
      uuid ( RO)                  : a1f876b1-0568-71ac-9ffb-720e626cb4ab
                   host-uuid ( RO): e286a04a-69bf-4d59-a0c8-e7338e8c1831
                     sr-uuid ( RO): bef191f3-e976-94ec-6bb7-d87529a72dbb
               device-config (MRO): provisioning: thin; redundancy: 2; group-name: linstor_group/thin_device; hosts: xcp-ng-01,xcp-ng-02,xcp-ng-03
          currently-attached ( RO): true
      
      
      uuid ( RO)                  : 08564ab5-a518-f709-8527-f592c2592d14
                   host-uuid ( RO): eb48f91d-9916-4542-9cf4-4a718abdc451
                     sr-uuid ( RO): bef191f3-e976-94ec-6bb7-d87529a72dbb
               device-config (MRO): provisioning: thin; redundancy: 2; group-name: linstor_group/thin_device; hosts: xcp-ng-01,xcp-ng-02,xcp-ng-03
          currently-attached ( RO): true
      

      After that, I removed all the PBDs from all hosts, to be able to recreate them on all hosts including the new xcp-ng-04:

      xe pbd-create host-uuid=e286a04a-69bf-4d59-a0c8-e7338e8c1831 sr-uuid=bef191f3-e976-94ec-6bb7-d87529a72dbb device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02,xcp-ng-03,xcp-ng-04
      xe pbd-create host-uuid=ad95c6ca-612d-42af-8909-d4e9dc7645bb sr-uuid=bef191f3-e976-94ec-6bb7-d87529a72dbb device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02,xcp-ng-03,xcp-ng-04
      xe pbd-create host-uuid=eb48f91d-9916-4542-9cf4-4a718abdc451 sr-uuid=bef191f3-e976-94ec-6bb7-d87529a72dbb device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02,xcp-ng-03,xcp-ng-04
      xe pbd-create host-uuid=5747f145-0dc2-4987-a6b9-b6c5a7ed0505 sr-uuid=bef191f3-e976-94ec-6bb7-d87529a72dbb device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02,xcp-ng-03,xcp-ng-04
      

      All succeed, no errors.

      After that, I try to plug all the PBDs back into the SR:

      [11:13 xcp-ng-01 ~]# xe pbd-plug uuid=7d588c37-a152-9666-175e-91b2d48c150f
      [11:13 xcp-ng-01 ~]# xe pbd-plug uuid=99f76235-1b1a-e5fa-bb19-3883737fcc6d
      [11:13 xcp-ng-01 ~]# xe pbd-plug uuid=df727345-f475-b929-ecc1-b506f0053361
      [11:13 xcp-ng-01 ~]# xe pbd-plug uuid=8b4ddc6f-e25a-1942-a435-345ccc93551a
      Error code: SR_BACKEND_FAILURE_47
      Error parameters: , The SR is not available [opterr=Error: Unable to connect to any of the given controller hosts: ['linstor://xcp-ng-02']],
      

      From there, I'm a bit lost...

      1- Do I need to add xcp-ng-04 as a LINSTOR satellite before creating all the PBDs? (see the sketch below)
      2- Should I start any of the services on xcp-ng-04 before doing all this?
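
      If 1- is the answer, I guess the sequence would be something like this (a sketch based on the commands used earlier in this thread; the IP of xcp-ng-04 is an example):

      # on xcp-ng-04: start the satellite services
      systemctl enable --now linstor-satellite.service minidrbdcluster.service
      # from the controller (xcp-ng-02): register the new node
      linstor node create --node-type combined xcp-ng-04 192.168.2.224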

      Regards

      posted in XOSTOR
    • RE: XOSTOR hyperconvergence preview

      @ronan-a

      Is my understanding correct? An XCP-ng host cannot use a shared SR (based on LINSTOR) if it's not part of the LINSTOR nodes?

      How can this new host be part of the nodes if I don't want to add an HDD/SSD to it? Can it be done?
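
      From what I read in the LINSTOR documentation, a diskless node might be the way (an untested guess on my side; the storage-pool name is just an example):

      # register the host as a satellite, then give it only a diskless storage pool
      linstor node create --node-type combined xcp-ng-04
      linstor storage-pool create diskless xcp-ng-04 xcp-sr-diskless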

      Again, I want to know the limitations before thinking about using this promising new technology!

      posted in XOSTOR