XCP-ng

    Posts by cg

    • RE: Windows 2025 Standard 24H2.11 (iso release of sept 25) crash on reboot with "INACCESSIBLE BOOT DEVICE 0x7B" in XCP 8.2.1 and XCP 8.3

      @dinhngtu I can't say for the XCP-ng side, but it's likely linked to the August patch (and the following ones), as Microsoft changed something in the NVMe stack.

      e.g.
      https://learn.microsoft.com/en-us/answers/questions/5536733/potential-ssd-detection-bug-in-windows-11-24h2-fol

      Google turns up a lot about it. It seems that it most likely doesn't kill NVMe drives, but it can cause trouble.
      We have a few PCs that became more unstable (BSODs) or even very slow after that upgrade.

      posted in XCP-ng
      cg
    • RE: Debian 9 virtual machine does not start in xcp-ng 8.3

      I've often wondered what the general purpose of that option is.
      As I only have 1-2 socket servers, I always choose 1 socket with x cores (mostly 2-8, not exceeding 1 physical CPU).
      Also for historic reasons: sockets have been limited, but cores haven't.

      Does it generally make any difference on the Xen side/backend?
      The VM OS might handle it differently due to NUMA optimizations.
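
      In case it helps, here's roughly how I set the topology from the CLI - a minimal sketch assuming an 8-vCPU VM, the standard xe parameters and a placeholder UUID (do this while the VM is halted):

      # present 8 vCPUs as 1 socket with 8 cores (numbers are just an example)
      xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
      xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8
      xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=8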

      posted in XCP-ng
      cg
    • Upgrade 8.2.1 -> 8.3 failed (manually fixed)

      After upgrading my first server successfully, I recently upgraded another one (different environments, no pool), but it failed.

      I remembered from the forum to check the installer logs, so I copied the whole directory (in case it contains useful info) and switched to another console (ALT+F3?) to see where it failed.
      I don't know if it's documented somewhere, but following that console is pretty informative, rather than just watching a progress bar.
      Older Windows installers didn't hide what they were doing either. It's a bit sad that XCP-ng "hides" that.

      tl;dr: The problem seemed to be:
      STANDARD ERROR:

      cp: error reading '/tmp/primary-jqbXmQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so': Input/output error
      cp: failed to extend '/tmp/backup-TbutMQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so': Input/output error
      

      As it was during the backup phase, nothing was broken and I could just retry... only to end up with the same problem.
      Since it looked like some Hong Kong codec module, I just removed the file and tried again: with success.
      The backup ran through and the install/upgrade went fine. The box has been running since.
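
      For reference, roughly what I did from the installer shell (a sketch from memory - the dd/dmesg checks are what I'd suggest today rather than an exact transcript; the path comes from the error above):

      # confirm the read error on the source file
      dd if=/tmp/primary-jqbXmQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so of=/dev/null bs=1M
      # check the kernel log for the underlying I/O error
      dmesg | tail
      # remove the unreadable codec module so the backup copy can proceed
      rm /tmp/primary-jqbXmQ/usr/lib64/python2.7/lib-dynload/_codecs_hk.so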

      I didn't back the file up, but in "ls" it looked just like everything else, and nobody ever touched that file. I can't say why it happened, but I wanted to drop it here for archival purposes; maybe someone else stumbles over the same or a similar problem.

      As the logs and the other terminal give quite a lot of information about the current actions, debugging was actually somewhat fun, and it was interesting to dig a bit into what the installer is really doing. A big pro over Microsoft... which is often a big pain to debug.

      If you want the whole install-log directory: I still have it, but will delete it in the next few days if nobody asks.

      Greetings

      • Christof
      posted in XCP-ng
      cg
    • RE: First SMAPIv3 driver is available in preview

      @nikade I found out the HPE MSA2060 has an all-flash bundle option which is surprisingly cheap, so our SAN has 3.84 TB SAS SSDs - a rebuild is done within a few hours - but our backup server has a RAID6 with 10 TB HDDs.

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @nikade It's also about RAS. The risk of a second disk failing during a rebuild is a lot higher than usual.
      Our B2D2T server needs about 24 hours for such a rebuild.

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @olivierlambert said in First SMAPIv3 driver is available in preview:

      Hi,

      8.3 release is eating a lot of resources, so that's the opposite: when it's out, this will leave more time to move forward on SMAPIv3 🙂

      Lots of work means lots of changes, which means: I'm excited about it. It also sounds more like a 9.0 if that much work is going into it. 😉

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @hsnyder AFAIK every RAID controller - even the not-so-modern ones - can do a 'verification read', 'disk scrubbing' or whatever they call it. It won't fix bitrot with single parity, but it can fix single-bit and detect double-bit failures.
      That's why the only option for our SAN is RAID6, or any dual-parity algorithm for that matter.

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @Paolo If it's only for that: any HW RAID with dual parity should do the job (in case you don't go fully for SW RAID).

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @bufanda RAM is mostly "required" if you go for things like online deduplication, as that has to be handled in RAM.
      I configured our servers with 12 GB of memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
      IIRC 8 GB is generally recommended nowadays (unless you have a rather small environment).
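
      For reference, this is how I bumped dom0 memory on our hosts - a sketch based on the usual XCP-ng/XenServer tooling (12288M matches the 12 GB above, adjust to taste; needs a host reboot):

      # set the dom0 memory target in the Xen boot line, then reboot the host
      /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=12288M,max:12288M
      # verify what is currently configured
      /opt/xensource/libexec/xen-cmdline --get-xen dom0_mem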

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @olivierlambert I know the problem of a shared FS; the question I had was rather: does qcow2 or VHDX have advantages over the other? What are the pros/cons of choosing one?
      Does it matter at all?

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @still_at_work the question is technically the wrong one. It depends less on SMAPI and more on the "drivers" that it'll be able to use.
      Someone needs to implement something for thin-provisioned shared storage that can handle it,
      e.g. via GFS2 or something else.

      You could write your own "adapter"/"driver" (I forgot what they call it) for it, like they did with ZFS.

      posted in Development
      cg
    • RE: First SMAPIv3 driver is available in preview

      @john-c as well as FC. Basically all shared storage that is production-ready.

      What are the up/downsides of qcow2 vs. VHDX?

      posted in Development
      cg
    • RE: XCP-ng Windows PV Drivers 8.2.2

      @ThierryC01 said in XCP-ng Windows PV Drivers 8.2.2:

      Why is there no video drivers for Windows, I have a Windows 10 Pro VM and the video lags a bit compared to my Linux Desktop VMs despite having 8GO RAM and 16MO (?16 max???) video memory.

      Because it's rudimentary GFX and has no acceleration at all. It's not made for what you might expect.

      posted in Development
      cg
    • RE: NUMA-impact - Xeon/Epyc - 1P vs 2P

      Also something to keep in mind: it's not only about NUMA (which works differently since the 2nd Epyc generation, as all memory channels sit on the I/O die and only the caches are split now), it's also about memory bandwidth!

      So it adds more complexity and depends on the needs of your workload.
      If it benefits from high memory bandwidth, a 2nd socket (technically) doubles it!
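
      If you want to see what Xen actually exposes, something like this shows the host's NUMA layout and where a VM's vCPUs currently sit - a sketch assuming the standard xl tooling in dom0, with a placeholder domain name:

      # host topology, including NUMA nodes and free memory per node
      xl info -n
      # vCPU-to-pCPU placement and affinity for one VM
      xl vcpu-list <domain-name>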

      posted in Compute
      cg
    • RE: Backup solutions for XCP-ng

      I've set up a test environment and indeed it worked. Commvault was (is?) checking for a specific version/agent string when connecting to XS/CHV/XCP-ng and in older versions refused to connect to XCP-ng.

      I successfully installed the agent on a proxy VM (it uses a proxy to read-only-mount the VM VHDs and back up their content), connected the pool, got the VM inventory and also ran a successful backup:
      [screenshot: 171c0956-e3e2-4fb9-a93b-48a2be6751bb-grafik.png]

      In other words: if you're using Commvault (11.28+), you're no longer locked to Citrix.

      posted in News
      cg
    • RE: NFS nconnect support

      @reinvtv said in NFS nconnect support:

      As far as I understand it, this will give us multiple tcp streams over an LACP link, truely aggregating traffic on multiple interfaces. (Until now, you needed to use iSCSI multipathing for this, which isnt able to thin provision.).

      You're neither right nor wrong: LACP is more complex than just bundling NICs, and in many cases it will NOT give you any benefit.
      Why? Simple: for that, NFS needs to use multiple ports while staying on the same MAC and IP, which means your LACP balancing algorithm needs to be set to (and support) L4 (see the example at the end of this post).
      Many setups just go L2 or L3 (meaning they balance on MAC or IP address).
      A quick search didn't answer this for me: if nconnect doesn't actually spread its connections over multiple source ports, it won't help you at all with LACP in typical environments.

      Someone may well correct my post.

      Also interesting: https://vastdata.com/blog/meet-your-need-for-speed-with-nfs/
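
      If you do want L4 balancing on an XCP-ng LACP bond, this is roughly the knob, if memory serves (bond UUID is a placeholder, and the switch side has to hash on L4 as well):

      # find the bond
      xe bond-list
      # set the bond's hashing algorithm to L4 (source/destination IP and port)
      xe bond-param-set uuid=<bond-uuid> properties:hashing_algorithm=tcpudp_ports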

      posted in Development
      cg
    • RE: Backup solutions for XCP-ng

      @florent said in Backup solutions for XCP-ng:

      @cg we are envisaging various way, from using iSCSI to access tape from the VM, to using an agent on the tape (but here we'll have to support physical hardware patching , updating ) . There is also a lot of work to ensure we write sequentially without concurrency and to make it work with the futur dedup and to keep a catalog of backups / tapes

      I don't know every product, but so far I've never seen a tape drive/library using iSCSI.
      iSCSI is usually only spoken by storage systems, not by tape drives or libraries.
      Common interfaces are either SAS or - especially in larger environments - Fibre Channel. So the way to go is probably to pass through an HBA.

      posted in News
      cg
    • RE: Backup solutions for XCP-ng

      @olivierlambert Sure it is a different thing; that's why I recommended using your connections to HPE to offer a bundle, or at least a version that runs on a (more or less specific) model of one of their servers. As it only makes sense once the environment reaches a certain size, it would make sense to pick a DL380/385 series/generation, which offers a good range of performance and capacity.
      E.g. we use a DL385 with 10x 10 TB HDDs + a few SSDs for cache and database.

      IMHO it's okay to say: we support bare metal on platform X. Lots of configuration options don't matter for your support, as more memory, bigger CPUs or more storage behind the same controller don't touch the needed drivers/evaluations.

      posted in News
      cg
    • RE: Backup solutions for XCP-ng

      @florent IIRC OpenZFS 2 uses zstd and/or lz4 as efficient algorithms, and they do a pretty good job (quick example at the end of this post). So far I only know brotli from web servers.

      How do you connect the tape if it's virtualized?
      Putting it on bare metal would also address that (aside from the performance benefits and dropping the restrictions on backup size due to VHD limits).
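
      Just to illustrate the compression point: enabling it on a ZFS dataset is a one-liner - a sketch assuming OpenZFS 2.x and a made-up pool/dataset name:

      # enable zstd compression on the backup dataset (lz4 would be the lighter alternative)
      zfs set compression=zstd tank/backups
      # check the achieved ratio after some data has been written
      zfs get compressratio tank/backups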

      posted in News
      cg
    • RE: Backup solutions for XCP-ng

      @olivierlambert
      Okay, so basically what I (and other people I know who are in similar environments) need:

      • Tape backup is mandatory, as we're talking about (deduplicated, compressed) dozens of TB
      • Deduplication and good compression: raw (duplicated, uncompressed) data ends up in the PB range
      • All of that needs to scale! E.g. Commvault scales to 16 threads here even with only 1-2 tasks. (Can't say whether it would use even more without a CPU upgrade.)
        (We need chains, of course, to go B2D2T -> dumping the dedup store to tape as disaster recovery)
      • Application awareness (which can be done via an agent - agentless is not always king, nor very important):
        -- MS Exchange: recovering datastores, mailboxes and even single mail items
        -- SQL servers: MS SQL, MySQL (MariaDB)...
        -- Windows Active Directory items

      Over here I also need the option to do backups only inside the VM via an agent, as I can't snapshot them.
      You might have gotten me wrong: AFAIK quiesced backups always made a VM snapshot, submitted the request to the VSS writers as well, and then the VM/VHDs were read. In this case that will always fail, as there's no space on the storage for it. Also, Citrix discontinued quiesced snapshots in their VM tools! XCP-ng still lacks proper tools maintenance with corresponding releases, which would be the only chance to keep that feature.

      As an idea for XOA:

      • Make it run on physical hardware, as VMs are too small for this
      • You already cooperate with HPE - let them run it bare metal on ProLiants!
      • ZFS(oL) could be your way to go to implement dedup and compression

      ...feel free to add, comment... whatever.

      posted in News
      cg