XCP-ng
    Chemikant784
    • Following 0
    • Followers 0
    • Topics 2
    • Posts 24
    • Groups 0

    Topics

    • "Download System logs" tgz-file does not work

      Xen Orchestra
      0 Votes
      17 Posts
      1k Views
      @gthvn1 Yeah, that works. I can get the logs both from the same host and from the other one in the pool.
    • Windows Server 2025 on XCP-ng

      Compute
      0 Votes
      60 Posts
      19k Views
      @Chemikant784 It's likely the fix is a combination of both the Microsoft and Xen tools; I doubt XCP-ng itself has anything to do with this issue. I never had time to check the XCP-ng guest tools for Windows to see if the same thing happened there, so my guess is no, or at least not tested. All my hosts are now on XCP-ng 8.3, and I don't see any point in testing 8.2 since it is EOL.

      That said, I'm no further along in my Server 2025 testing; too many other things are going on to think about it right now. If I find the time, I need to burn down the vSphere portion of my lab and install either Harvester HCI or Windows Server for Hyper-V. Broadcom is (seemingly) going out of their way to prevent people like me from learning their products and using them in our labs to that end. I've explained this several times to VMUG Advantage managers, but they seem so tied up in clawing out some continuing relationship with Broadcom that they will not "rock the boat". I've said the same in Broadcom webcasts as well; it's always a run-around with no answers. Sorry for the rant.

      All that said, I'm eagerly awaiting XCP-ng 9, though I suspect the Alpha or Beta may wait until XO 6 is finished (just a guess). The updated kernel brings some storage changes I really want to test, NFS nconnect=XX being one of them, to see if I can get a little better performance to and from the disks. The ESXi default was nconnect=4, and the VMs were slightly faster to and from their disks (all thin provisioned). The 4k "block" size and smaller is what I want to improve in all this.
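
      A minimal sketch of what the nconnect mount option mentioned above looks like on a Linux NFS client (kernel 5.3 or newer); the server name, export path, and mount point here are hypothetical, and this is a plain manual test mount rather than anything XCP-ng-specific:

          # Hypothetical test mount: open 4 TCP connections to the NFS server
          mount -t nfs -o vers=4.1,nconnect=4 nfs-server.example.lan:/export/vmstore /mnt/vmstore

          # Check the active mount options for the new mount
          grep vmstore /proc/mounts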