XCP-ng

    olivierlambert

    @olivierlambert

    Vates 🪐 Co-Founder CEO

    Xen Orchestra and XCP-ng founder. Vates CEO and co-founder.

    Reputation: 3.0k
    Profile views: 5.3k
    Posts: 9.7k
    Topics: 63
    Followers: 75
    Following: 0
    Groups: 6
    Website: vates.fr
    Location: Grenoble, France

    Co-Founder CEO, Pro Support Team, Vates 🪐, XCP-ng Team, Admin

    Best posts made by olivierlambert

    • XCP-ng issue 1: closed!

      This is not just a powerful symbol: by closing issue n°1, we are really reaffirming our independence and our capacity to build everything without using any binary provided by Citrix.

      It's really great news for XCP-ng's future!

      https://github.com/xcp-ng/xcp/issues/1

      🎊 🎊 🎊

      olivierlambert created this issue in xcp-ng/xcp: Build components from the sources #1 (closed)

      posted in News
    • XCP-ng team is growing

      Since last week, we have a new team member 🙂 @ronan-a is now part of the Vates XCP-ng team, with @stormi and myself.

      @r1 and @johnelse are still with us as external contributors.

      @ronan-a, feel free to introduce yourself here (and tell us what you are working on)

      posted in News
    • 2 weeks break for me

      Hey everyone,

      Just to let you know I'm finally taking a 2-week break, and it's really needed 😆 (thanks Broadcom for the extra work these last months 👼)

      I will make sure that everyone with admin powers reviews the registration waiting list often. For the rest, it's up to you, the community, to demonstrate you can help each other without having me around. I don't doubt it, but I wanted to tell you why you won't see me for the next 2 weeks 😉

      Enjoy and see you then!

      posted in News
    • New XCP-ng documentation!

      https://xcp-ng.org/docs/

      It's far from complete, but it's a great first step in improving XCP-ng's existing documentation, and also its visibility/credibility!

      Everyone can still contribute (link at the bottom of each page), even if it's a bit more difficult than the wiki.

      Let me know what you think about it 🙂

      What about the wiki? We'll probably continue to have very specific content there, and "promote" content to the docs when it "fits" for enough people.

      posted in News
    • 100,000 unique downloads for XCP-ng

      https://xcp-ng.org/blog/2020/04/17/100-000-downloads-for-xcp-ng/

      So you probably spotted the news, feel free to comment here 😄

      posted in News
    • RE: French government initiative to support

      Maybe it wasn't clear? The French government won't do anything in the project; they just support it as a research and development project boosting innovation, indirectly contributing to the creation of jobs that can't easily be replicated elsewhere.

      There's no "control" over how we decide to do the R&D, as long as we put resources into it as we said.

      XCP-ng is a free and community-driven project, and everybody who wants to contribute can do so. There's no "link" between us and any government.

      posted in News
    • RE: New XCP-ng "theme" in shell

      What about this one?

      [image: 4.png]

      posted in News
    • Merging XO forum here

      To avoid splitting the community, we'll merge the XO forum into this category. It doesn't seem possible (or at least not trivial) to import the previous XO threads here, so there will probably be a transition period until we redirect the XO forum URL directly to this category.

      posted in Xen Orchestra
    • XOSTOR hyperconvergence preview

      XOSTOR - Tech preview

      ⚠ Installation script is compatible with XCP-ng 8.2 and 8.3 ⚠

      ⚠ UPDATE to sm-2.30.7-1.3.0.linstor.7 ⚠

      Please read this: https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview/224?_=1679390249707

      ⚠ UPDATE from an older version (before sm-2.30.7-1.3.0.linstor.3) ⚠

      Please read this: https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview/177?_=1667938000897


      XOSTOR is a "disaggregated hyperconvergence storage solution". In plain English: you can assemble the local storage of multiple hosts into one "fake" shared storage.

      The key to fast hyperconvergence is to try a different approach. We used GlusterFS for XOSAN, and it wasn't really fast for small random blocks (due to the nature of the global filesystem). But in XOSTOR, there's a catch: unlike traditional hyperconvergence, it won't create a global clustered and shared filesystem. This time, when you create a VM disk, it will create a "resource" that is replicated "n" times across multiple hosts (e.g. 2 or 3 times).

      So in the end, the number of resources depends on the number of VM disks (and snapshots).

      The technology is not invented from scratch: we are using LINSTOR from LINBIT, itself based on DRBD. See https://linbit.com/linstor/

      For you, it will be (ideally) transparent.

      Ultimate goals

      Our first goal here is to validate the technology at scale. If it works as we expect, we'll then add a fully automated solution and UI on top of it, and sell pro support for people who want a "turnkey" supported solution (à la XOA).

      The manual/shell script installation as described here is meant to stay fully open/accessible with community support 🙂

      Now I'm letting @ronan-a write the rest of this message 🙂 Thanks a lot for your hard work 😉

      [image: How-it-works-Isometric-Deck-5Integrations-VersionV2-1024x554.png]


      ⚠ Important ⚠

      Despite intensive testing with this technology over the last 2 years (!), it was really HARD to integrate it cleanly into SMAPIv1 (the legacy storage stack of XCP-ng), especially when you have to test all potential cases.

      The goal of this tech preview is to scale our testing to a LOT of users.

      Right now, this version should be installed on pools with 3 or 4 hosts. We plan to publish another test version in one month to remove this limitation. Also, in order to ensure data integrity, using at least 3 hosts is strongly recommended.

      How to install XOSTOR on your pool?

      1. Download installation script

      First, make sure you have at least one free disk (or more) on each host of your pool.
      Then you can download the installation script using this command:

      wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/052b3dfff9c06b1765e51d8de72c90f2f90f475b/gistfile1.txt -O install && chmod +x install
      

      2. Install

      Then, on each host, you must execute the script with the disks to use, for example with a single disk:

      ./install --disks /dev/sdb
      
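      If all your hosts are reachable over SSH as root, a small loop can save some typing. This is just a convenience sketch (hypothetical host names, and it assumes the same disk name on every host), not part of the official procedure:

      # Copy and run the installer on every host of the pool
      for host in xcp-host1 xcp-host2 xcp-host3; do
        scp install root@$host:/root/install
        ssh root@$host '/root/install --disks /dev/sdb'
      done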

      If you have several disks you can use them all, BUT for optimal use, the total disk capacity should be the same on each host:

      ./install --disks /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3
      

      By default, thick provisioning is used; pass --thin to use thin provisioning instead:

      ./install --disks /dev/sdb --thin
      

      Note: you can use the --force flag to override an existing VG or PV on your hosts:

      ./install --disks /dev/sdb --thin --force
      
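      Before using --force, it may be worth checking what LVM state already exists on the target disk; pvs and vgs are standard LVM commands, nothing XOSTOR-specific:

      # List existing physical volumes and volume groups on this host
      pvs
      vgs
      # Inspect a specific disk before overriding it
      pvs /dev/sdb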

      3. Verify config

      With thin option

      On each host, lsblk should return output similar to:

      > lsblk
      NAME                                                                              MAJ:MIN  RM   SIZE RO TYPE  MOUNTPOINT
      ...
      sdb                                                                                 8:16    0   1.8T  0 disk
      └─36848f690df82210028c2364008358dd7                                               253:0     0   1.8T  0 mpath
        ├─linstor_group-thin_device_tmeta                                               253:1     0   120M  0 lvm
        │ └─linstor_group-thin_device-tpool                                             253:3     0   1.8T  0 lvm
        └─linstor_group-thin_device_tdata                                               253:2     0   1.8T  0 lvm
          └─linstor_group-thin_device-tpool                                             253:3     0   1.8T  0 lvm
      ...
      

      With thick option

      No LVM volume is created; only a new volume group should now be present, visible with the vgs command:

      > vgs
        VG                                                 #PV #LV #SN Attr   VSize   VFree  
        ...
        linstor_group                                        1   0   0 wz--n- 931.51g 931.51g
      

      And you must have the linstor versions of the sm and xha packages:

      > rpm -qa | grep -E "^(sm|xha)-.*linstor.*"
      sm-2.30.4-1.1.0.linstor.8.xcpng8.2.x86_64
      xha-10.1.0-2.2.0.linstor.1.xcpng8.2.x86_64
      

      4. Create the SR

      If you use thick provisioning:

      xe sr-create type=linstor name-label=<SR_NAME> host-uuid=<MASTER_UUID> device-config:group-name=linstor_group device-config:redundancy=<REDUNDANCY> shared=true device-config:provisioning=thick
      

      Otherwise with thin provisioning:

      xe sr-create type=linstor name-label=<SR_NAME> host-uuid=<MASTER_UUID> device-config:group-name=linstor_group/thin_device device-config:redundancy=<REDUNDANCY> shared=true device-config:provisioning=thin
      

      So for example if you have 4 hosts, a thin config and you want a replication of 3 for each disk:

      xe sr-create type=linstor name-label=XOSTOR host-uuid=bc3cd3af-3f09-48cf-ae55-515ba21930f5 device-config:group-name=linstor_group/thin_device device-config:redundancy=3 shared=true device-config:provisioning=thin
      
      
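      If you don't have the pool master UUID at hand, a standard xe query can retrieve it (shown here as a hint, not part of the original instructions):

      # Print the UUID of the pool master host
      xe pool-list params=master --minimal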

      5. Verification

      After that, you should see an XOSTOR SR in XOA with all PBDs attached.
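      As a sketch, you can also verify this from the CLI with standard xe commands (replace <SR_UUID> with the UUID of the new SR):

      # One entry per host; each PBD should report currently-attached: true
      xe pbd-list sr-uuid=<SR_UUID> params=host-uuid,currently-attached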

      6. Update

      If you want to update LINSTOR and the other packages, you can run the install script on each host like this:

      ./install --update-only
      

      F.A.Q.

      How is the SR capacity calculated? 🤔

      If you can't create a VDI greater than the displayed size in the XO SR view, don't worry:

      • There are two important things to remember: the maximum size of a VDI that can be created is not necessarily equal to the capacity of the SR, and in the XOSTOR context the SR capacity is the maximum size that can be used to store all VDI data.
      • Exception: if the replication count is equal to the number of hosts, the SR capacity is equal to the max VDI size, i.e. the capacity of the smallest disk in the pool.

      We use this formula to compute the SR capacity:

      sr_capacity = smallest_host_disk_capacity * host_count / replication_count
      

      For example, if you have a pool of 3 hosts with a replication count of 2 and a 200 GiB disk on each host, the formula gives an SR capacity of 300 GiB (see the quick arithmetic check after these notes). Notes:

      • You can't create a VDI greater than 200 GiB, because the replication is not block-based but volume-based.
      • If you create a volume of 200 GiB, 400 of the 600 GiB are physically used, and the remaining space can't be used because it becomes impossible to replicate it on two different disks.
      • If you create 3 volumes of 100 GiB, the SR becomes completely full: you have 300 GiB of unique data and 300 GiB of replicated data.
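      Here is the example above as plain shell arithmetic, just to make the computation concrete (values in GiB):

      # smallest disk = 200 GiB, 3 hosts, replication count of 2
      smallest_disk=200; host_count=3; replication_count=2
      echo $(( smallest_disk * host_count / replication_count ))   # prints 300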

      How to properly destroy the SR after an SR.forget call?

      If you used a command like SR.forget, the SR is not actually removed properly. To clean it up, you can execute these commands:

      # Create new UUID for the SR to reintroduce it.
      uuidgen
      
      # Reintroduce the SR.
      xe sr-introduce uuid=<UUID_of_uuidgen> type=linstor shared=true name-label="XOSTOR" content-type=user
      
      # Get host list to recreate PBD
      xe host-list
      ...
      
      # For each host, you must execute a `xe pbd-create` call.
      # Don't forget to use the correct SR/host UUIDs, and device-config parameters.
      xe pbd-create host-uuid=<host_uuid> sr-uuid=<UUID_of_uuidgen> device-config:provisioning=thick device-config:redundancy=<redundancy> device-config:group-name=<group_name>
      
      # After this point you can now destroy the SR properly using xe or XOA.
      

      Node auto-eviction and how to restore?

      By default, if a node is inactive for 60 minutes, it's automatically evicted (this behavior can be changed).
      There is an advantage to auto-eviction: if there are enough nodes in your cluster, LINSTOR will create new replicas of your disks.

      See: https://linbit.com/blog/linstors-auto-evict/

      Now, if you want to re-add your node, it's not automatic. You can use a LINSTOR command to remove it (linstor node lost) and then recreate it. If there was no disk issue and it was just a network problem or similar, run a single command instead: linstor node restore. A sketch of both cases follows.
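      A minimal sketch of both cases, with <NODE_NAME> as a placeholder for the evicted node:

      # Disks are gone for good: remove the node from the cluster, then recreate it
      linstor node lost <NODE_NAME>

      # Disks are intact (e.g. a transient network problem): just bring the node back
      linstor node restore <NODE_NAME>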

      How to use a specific network for storage?

      You can run a few specific LINSTOR commands to configure new NICs to use. By default, the XAPI management interface is used.

      For more info: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-managing_network_interface_cards

      In case of failure of the preferred NIC, the default interface is used.
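      For reference, the LINSTOR guide linked above configures a dedicated storage NIC roughly like this (node name, NIC name and IP are illustrative placeholders):

      # Declare a new interface on a node, then prefer it for a given storage pool
      linstor node interface create <NODE_NAME> nic1 192.168.100.10
      linstor storage-pool set-property <NODE_NAME> <POOL_NAME> PrefNic nic1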

      How to replace drives?

      Take a look at the official documentation: https://kb.linbit.com/how-do-i-replace-a-failed-d

      XAPI plugin: linstor-manager

      It's possible to perform low-level tasks using the linstor-manager plugin.

      It can be executed using the following command:

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=<FUNCTION> args:<ARG_NAME_1>=<VALUE_1> args:<ARG_NAME_2>=<VALUE_2> ...
      

      Many functions are not documented here and are reserved for internal use by the smapi driver (LinstorSR).

      For each command, HOST_UUID is the UUID of any host of your pool, master or not.

      Add a new host to an existing LINSTOR SR

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=addHost args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      This command creates a new PBD on the SR and a new node in the LINSTOR database, and starts what is necessary for the driver.
      After running it, it's up to you to set up a new storage pool in the LINSTOR database, with the same name used by the other nodes.
      So, again: use pvcreate/vgcreate and then a basic linstor storage-pool create, as in the sketch below.
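      A minimal sketch of that last step, assuming a thick setup on /dev/sdb and placeholder node/pool names consistent with the rest of your cluster:

      # On the new host: prepare the LVM volume group used by XOSTOR
      pvcreate /dev/sdb
      vgcreate linstor_group /dev/sdb

      # From a host that can reach the LINSTOR controller: declare the storage pool
      linstor storage-pool create lvm <NODE_NAME> <POOL_NAME> linstor_group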

      Remove a host from an existing LINSTOR SR

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=removeHost args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      Check if the linstor controller is currently running on a specific host

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=hasControllerRunning
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=hasControllerRunning
      False
      

      Check if a DRBD volume is currently used by a process on a specific host

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=getDrbdOpeners args:resourceName=<RES_NAME> args:volume=0
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=getDrbdOpeners args:resourceName=xcp-volume-a10809db-bb40-43bd-9dee-22d70d781c45 args:volume=0
      {}
      

      List DRBD volumes

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=listDrbdVolumes args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=listDrbdVolumes args:groupName=linstor_group/thin_device
      {"linstor_group": [1000, 1005, 1001, 1007, 1006]}
      

      Force destruction of DRBD volumes

      Warning: in principle, the volumes created by the smapi driver (LinstorSR) must be destroyed using the XAPI or XOA. Only use these functions if you know what you are doing; otherwise, forget them.

      # To destroy one volume:
      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=destroyDrbdVolume args:minor=<MINOR>
      
      # To destroy all volumes:
      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=destroyDrbdVolumes args:groupName=<THIN_OR_THICK_POOL_NAME>
      
      posted in XOSTOR
    • RE: EOL: XCP-ng Center has come to an end (New Maintainer!)

      Thanks for all your work @borzel! We are doing our best to improve our capacity to quickly review and structure external contributions in XO Lite: we are currently working on a dedicated CONTRIBUTING.md with links to the Figma UX design and such, so everything can be developed in the open by everyone.

      posted in News

    Latest posts made by olivierlambert

    • RE: PCI device doesn't show in XO or xe pci-list

      Hi,

      Just to be sure I understand: you can see the device with lspci (as the CX23887/8 Broadcast thing), but it's not listed in the XO web UI, right?

      posted in Compute
    • RE: 🚨 AI on XCP-ng 8.3: Not Ready for Prime Time? FlashAttention/ROCm Passthrough Stalls vs Proxmox & Bare-Metal

      Hi,

      Small side note: lately I've noticed that some posts look like they were generated by LLMs. This can actually make it harder for the community to help, because the text is often long, unclear, or missing the basic details we really need to assist.

      I'd really encourage everyone to write posts in their own words and share as much relevant information as possible. The real value of this community is people helping each other directly 🙂

      posted in Compute
    • RE: Intel ARC 310 Problem

      Hmm, are you sure it's correctly ignored before trying to pass it through? PCI passthrough can only succeed if the device is hidden from the Dom0.
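      For reference, hiding a PCI device from the Dom0 on XCP-ng is typically done with the xen-cmdline helper, followed by a reboot (the PCI address below is a placeholder; double-check yours with lspci):

      /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"
      reboot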

      posted in Hardware
    • RE: Delta backup stuck on "Clean VM Directory" for a long time

      Adding @florent in the loop

      posted in Backup
    • RE: Delta backup stuck on "Clean VM Directory" for a long time

      Hi,

      Sadly, even if MinIO is probably better than others (e.g. Backblaze), they can all suffer from various issues. There's a reason why we only provide official pro support on AWS.

      Would it be possible to test another provider to double check it's not XO related?

      posted in Backup
    • RE: XCP-ng - XOA vs. XOCE

      Hi,

      It's not relevant for our most critical users (i.e. paid customers), because they don't want a complex matrix. They all come from VMware and they need something simple to explain.

      If you are a home labber, you can just check the xcp-ng.org website and https://docs.xen-orchestra.com/installation, that's enough 🙂

      posted in Management
    • RE: Intel ARC 310 Problem

      Can you do an lspci in the Dom0? I'm not sure the audio device is an addressable device (unlike NVIDIA cards, for example).

      posted in Hardware
    • RE: XO Lite - Storage button greyed out on fresh install

      Each month we release new features, so it's hard to tell; it depends on which feature exactly you are expecting 🙂

      posted in XO Lite
    • RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs

      Upstream inclusion is very recent, but you can always add the drivers yourself (still requiring a relatively recent kernel, IIRC). Anyway, we have the hardware, and exploring it will at least be on the roadmap 🙂

      posted in Hardware
    • RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs

      Yes, but in order to see the VFs I need a working driver in the Dom0 first 😄 Probably a lot easier with a more recent kernel in XCP-ng 9.0.

      posted in Hardware