XCP-ng Forum

Posts by abufrejoval
    • RE: XOSTOR hyperconvergence preview

      @ronan-a

      That brings me to the topic of observability:

      I can't say I have been entirely happy observing what was going on in Gluster on oVirt, but depending on whether you used the chunking mode (the oVirt storage overlay) or the pure file mode, you had a rather granular overview of what was going on: what was healthy, what needed healing, and just how far behind synchronization might be.

      With DRBD I feel like I'm flying blind again, mostly because it's a block layer, not a file layer. From what I've seen in the DRBD and LINSTOR manuals, I'll be able to query the replication state and whether or not replicas are in sync. When they are not, and have been taken offline because the (limited?) update queue has overflowed, it seems you may have to re-create the replica. Yet there is also a checksumming mode, which might be able to "resilver" a replica even if the update queue isn't complete. I guess that's where LINBIT wants to sell consulting or support...
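
      For reference, the kind of state queries I mean look something like this (a sketch from the manuals, untested; the resource name is made up):

        drbdadm status vdi-critical    # connection and disk state per replica
        drbdadm verify vdi-critical    # online checksum verification
        linstor resource list          # LINSTOR's view of all replicas
        linstor volume list            # per-volume state, incl. sync progress

      The checksumming mode I mentioned appears to be the csums-alg setting in the resource's net section, which makes a resync transfer only the blocks whose checksums differ instead of everything.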

      So when you suggest control over replication at the VDI level, I wonder how that happens, since without another layer in between I can only imagine replication control at the SR level, using distinct DRBD resources. Some explanation of how XCP-ng SRs are supposed to correlate with DRBD resources and volumes would be helpful.
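
      If each VDI really does map to its own LINSTOR resource (my assumption, not something I've verified), then per-VDI replication control would just mean a different placement count per resource, e.g.:

        # hypothetical names; --auto-place lets LINSTOR pick nodes/storage pools
        linstor resource-definition create vdi-critical
        linstor volume-definition create vdi-critical 20G
        linstor resource create vdi-critical --auto-place 3   # 3 replicas

        linstor resource-definition create vdi-scratch
        linstor volume-definition create vdi-scratch 20G
        linstor resource create vdi-scratch --auto-place 2    # 2 are enough here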

      In my edge-oriented HCI setups I'd just use a triple-replica setup, because it's a nice compromise between write amplification and redundancy. Yes, having a (pop-up?) arbiter that helps maintain quorum while you're doing maintenance on one node wouldn't be bad to have, but I haven't been too happy with 2-replica + 1-arbiter Glusters on oVirt: you're really standing on one leg when doing maintenance or handling faults. I used that layout on the 2.5Gbit nodes because write amplification was too expensive there; on the 10Gbit nodes with NVMe I prefer 3 replicas, if only to reduce the chance of mistakes.
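
      DRBD 9 does seem to have the quorum machinery for that, with a diskless node able to act as a tiebreaker; a config sketch from my reading of the manual (untested, resource name made up):

        resource vdi-critical {
          options {
            quorum majority;        # writes require a majority of nodes
            on-no-quorum io-error;  # fail I/O when quorum is lost
          }
        }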

      For the additional compute nodes I prefer to go diskless, also because I shut them down to save power when load is low.
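
      That much LINSTOR seems to support directly: a node can attach a resource over the network without holding a replica, along the lines of the following (hypothetical names, and the flag may be called --drbd-diskless in current clients):

        linstor resource create compute-node-4 vdi-critical --diskless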

      But that's the home lab. The corporate lab (which is what I am actually testing this for) is more like a dozen machines, some storage-heavy (recycled), some compute-heavy (GPGPU), with both populations changing, sometimes by choice, sometimes because machines fail.

      Now, since erasure coding isn't native to LINSTOR, having to use staggered replicas in distinct SRs to manage the fault-tolerance/write-amplification/storage-efficiency trade-off will quickly become a real burden. I'd love to know how much intelligence you're willing to put into XOA to help manage redistributions (which requires observability). At least in theory Gluster was vastly superior there, not that I've actually tried transforming terabytes of dispersed volumes from, say, a 6+2 to a 12+3 configuration.
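
      By "staggered replicas" I mean something like one LINSTOR resource group per redundancy level, which as far as I can tell is how you would have to express it today (group and resource names are mine):

        linstor resource-group create rg-replica-3 --place-count 3
        linstor volume-group create rg-replica-3
        linstor resource-group create rg-replica-2 --place-count 2
        linstor volume-group create rg-replica-2
        linstor resource-group spawn-resources rg-replica-3 vdi-critical 20G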

      And to be quite honest: I'm still struggling to understand the abstraction topology of DRBD/LINSTOR/Pacemaker and now their new LINBIT VSAN. Everybody is so focused on producing videos or 'getting started' tutorials that they completely forget to write a proper concepts & architecture guide.

      posted in XOSTOR
    • RE: Help building kernel & kernel-alt, please

      @olivierlambert
      Thanks Olivier (& Stormi), I found the issue was sitting in front of the computer (again): I had forgotten to clone the repos before starting run.py.

      Seems to be working now, at least it's compiling a ton of stuff...

      posted in Development
    • RE: Clonezilla not recognising network adaptor

      @fred974

      On the virtual machine's advanced attributes tab you can choose between a Realtek RTL8139 and an Intel e1000 NIC (both emulated devices). It defaults to the Realtek one; the suggestion is to change it to the Intel one and retry booting the VM.
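
      If you prefer the CLI, my understanding (treat the key name as an assumption and verify against xe vm-param-list) is that the emulated NIC model is a platform key on the VM:

        # <vm-uuid> is a placeholder; key name unverified on my part
        xe vm-param-set uuid=<vm-uuid> platform:nic_type=e1000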

      The NIC on the host (X520) does not matter at all for this operation.

      It's been a while, but I can confirm that moving images between (in my case) oVirt and XCP-ng VMs via Clonezilla works just fine.

      I also did it between VMware and XCP-ng, btw.

      posted in Compute