XCP-ng

    Posts

    • RE: XOSTOR hyperconvergence preview

      @ronan-a Hi,

      I tested your branch, and hosts newly added to the pool are now attached to the XOSTOR SR. This is nice!

      I have looked at the code, but I'm not sure whether, in the current state of your branch, we can add a disk on the new host and update the replication. I think not... but just to be sure.

      posted in XOSTOR
      dumarjo
    • RE: XOSTOR hyperconvergence preview

      Hi,

      OK, I figured out how to do it and got it working on 2 or more nodes. Here is the process:

      [xcp-ng-01 ~]# wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/26d1db55fafa4622af2d9ee29a48f6756b8b11a3/gistfile1.txt -O install && chmod +x install
      [xcp-ng-01 ~]# ./install --disks /dev/sdb --thin
      [xcp-ng-01 ~]# vgchange -a y linstor_group
      [xcp-ng-01 ~]# xe sr-create type=linstor name-label=XOSTOR host-uuid=71324aae-aff1-4323-bb0b-2c5f858b223e device-config:hosts=xcp-ng-01 device-config:group-name=linstor_group/thin_device device-config:redundancy=1 shared=true device-config:provisioning=thin
      

      Now the SR is available for creating VMs. For simplicity, I won't create a VM now.

      [xcp-ng-02 ~]# wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/26d1db55fafa4622af2d9ee29a48f6756b8b11a3/gistfile1.txt -O install && chmod +x install
      [xcp-ng-02 ~]# ./install --disks /dev/sdb --thin
      [xcp-ng-02 ~]# vgchange -a y linstor_group
      

      On both hosts, I modified /etc/hosts to add entries for both hosts with their IPs, to work around the driver bug.
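      For illustration, the /etc/hosts entries look something like this on each host (a sketch: 10.33.33.40 is the controller address used later in this post, and the xcp-ng-02 address is an assumption):

```
10.33.33.40   xcp-ng-01
10.33.33.41   xcp-ng-02   # assumed address for node 2
```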

      Start the services on node 2:

      systemctl enable minidrbdcluster.service
      systemctl enable linstor-satellite.service
      systemctl start linstor-satellite.service
      systemctl start minidrbdcluster.service
      

      Open the iptables ports on node 2:

      /etc/xapi.d/plugins/firewall-port open 3366
      /etc/xapi.d/plugins/firewall-port open 3370
      /etc/xapi.d/plugins/firewall-port open 3376
      /etc/xapi.d/plugins/firewall-port open 3377
      /etc/xapi.d/plugins/firewall-port open 8076
      /etc/xapi.d/plugins/firewall-port open 8077
      /etc/xapi.d/plugins/firewall-port open 7000:8000
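      The seven firewall-port calls above can be collapsed into a loop (a sketch; the port list is exactly the one used above, with 7000:8000 being the DRBD port range):

```shell
# Ports needed by LINSTOR/DRBD, as opened individually above.
PORTS="3366 3370 3376 3377 8076 8077 7000:8000"
# Print each firewall-port command; pipe the output to sh to run them.
for port in $PORTS; do
    echo "/etc/xapi.d/plugins/firewall-port open $port"
done
```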
      
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 node create --node-type combined $HOSTNAME
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 storage-pool create lvmthin $HOSTNAME xcp-sr-linstor_group_thin_device linstor_group/thin_device
      [xcp-ng-02 ~]# linstor --controllers=10.33.33.40 resource create $HOSTNAME xcp-persistent-database --storage-pool xcp-sr-linstor_group_thin_device
      

      After that, you should be in split-brain. I have no idea why; my knowledge isn't good enough to figure it out right now. But I know how to fix it.

      On node 2, run these commands:

      drbdadm secondary all
      drbdadm disconnect all
      drbdadm -- --discard-my-data connect all
      

      On node 1, run these commands:

      drbdadm primary all
      drbdadm disconnect all
      drbdadm connect all
      

      Now LINSTOR/DRBD are in good shape and should have all the resources.
      For the sake of fun, I changed the place count from 1 to 2 on the LINSTOR controller:

      [xcp-ng-01 ~]# linstor rg modify --place-count 2 xcp-sr-linstor_group_thin_device
      

      Now the replication is working.

      Now on node 1, I unplug the PBD of xcp-ng-02, destroy it, and create a new one with 2 hosts:
      [xcp-ng-01 ~]# xe pbd-unplug uuid=6295519d-1071-2127-4313-f14c9615f244
      [xcp-ng-01 ~]# xe pbd-destroy uuid=6295519d-1071-2127-4313-f14c9615f244
      [xcp-ng-01 ~]# xe pbd-create host-uuid=eb48f91d-9916-4542-9cf4-4a718abdc451  sr-uuid=505c1928-d39d-421c-1556-143f82770ff5  device-config:provisioning=thin device-config:redundancy=2 device-config:group-name=linstor_group/thin_device device-config:hosts=xcp-ng-01,xcp-ng-02
      [xcp-ng-01 ~]# xe pbd-plug uuid=774971b4-dd03-18c8-92e5-32cac9bdc1e3
      

      Do the same thing with the second PBD, and everything is connected together.

      Not an easy task!

      Imagine if I have 30 VMs... a lot of resources to be created.
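      The per-VM work could be scripted. A hedged sketch (the resource names here are hypothetical; in practice they would come from the controller's resource-definition list):

```shell
CONTROLLER=10.33.33.40                  # controller IP from this post
POOL=xcp-sr-linstor_group_thin_device   # storage pool created above
NODE=$(hostname)                        # node that needs the resources
# Hypothetical resource names; list the real ones with:
#   linstor --controllers=$CONTROLLER resource-definition list
RESOURCES="xcp-volume-0001 xcp-volume-0002"
for res in $RESOURCES; do
    # Print each command; drop 'echo' to actually create the resources.
    echo linstor --controllers="$CONTROLLER" resource create "$NODE" "$res" --storage-pool "$POOL"
done
```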

      posted in XOSTOR
      dumarjo
    • RE: XOSTOR hyperconvergence preview

      To be able to recreate and reconnect all the PBDs, I manually modified the /etc/hosts file to add each host with its IP. I know that @ronan-a is working on fixing the hostname addressing in the driver, but at least I can continue to test the scalability.

      Looks promising !

      posted in XOSTOR
      dumarjo
    • RE: XOSTOR hyperconvergence preview

      Hi,
      @ronan-a said in XOSTOR hyperconvergence preview:

      @dumarjo Could you open a ticket with a tunnel please? I can take a look. Also: I started a script this week to simplify the management of LINSTOR with add/remove commands. 🙂

      Ticket opened with Vates.

      posted in XOSTOR
      dumarjo