• Slow Live Migration for VMs with large memory

    0 Votes
    7 Posts
    146 Views
    I get ~510 MiB/s on a DL380 Gen10, 2.6 GHz, 40 GbE ConnectX-3, XCP-ng 8.3.
  • Windows Server 2025 on XCP-ng

    0 Votes
    60 Posts
    18k Views
    @Chemikant784 The fix is likely a combination of Microsoft updates and the Xen tools; I doubt XCP-ng itself has anything to do with this issue. I never had time to check whether the XCP-ng guest tools for Windows showed the same behaviour, so my guess is no, or at least it's untested. All my hosts are now on XCP-ng 8.3 and I don't see any point in testing 8.2 since it is EOL.

    That said, I'm no further along in my Server 2025 testing; too many other things going on to think about it right now. If I find time, I need to burn the vSphere portion of my lab down and install either Harvester HCI or Windows Server for Hyper-V. Broadcom is (seemingly) going out of their way to prevent people like me from learning their products and using them in our labs. I've explained this several times to VMUG Advantage managers, but they seem so tied up in trying to keep some continuing relationship with Broadcom that they will not "rock the boat". I've raised it in Broadcom webcasts as well, and it's always a run-around with no answers. Sorry for the rant.

    All that said, I'm eagerly awaiting XCP-ng 9, although I think the alpha or beta may wait until XO 6 is finished (just a guess). The updated kernel brings some storage changes that I really want to test, NFS nconnect=XX being one of them, to see if I can get a little better performance to/from the disks. The ESXi default was nconnect=4 and the VMs were slightly faster to/from their disks (all thin provisioned). The 4k "block" size and smaller is what I want to improve in all this.
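    For context, nconnect is a client-side NFS mount option (Linux kernel 5.3 and newer) that opens several TCP connections to the same server. A minimal sketch of trying it from a recent Linux system; the server name, export path, mount point, and connection count below are placeholders:

        # Mount an NFS export with 4 parallel TCP connections (names/paths are placeholders)
        mount -t nfs -o vers=4.1,nconnect=4 nas.example.lan:/export/vmstore /mnt/vmstore

        # Confirm the option was accepted by the client
        grep nconnect /proc/mounts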
  • Epyc VM to VM networking slow

    0 Votes
    260 Posts
    171k Views
    Forza
    @dinhngtu said in Epyc VM to VM networking slow: @Forza There's a new script here that will help you check the VM's status wrt. the Fix 1.

    Thank you. It does indeed look like the EPYC fix is active in XOA:

        [07:25 22] xoa:~$ python3 epyc-fix-check.py
        'xen-platform-pci' PCI IO mem address is 0xFB000000
        Grant table cacheability fix is ACTIVE.

    Has Vates checked whether a newer kernel would help the network performance with XOA? The current kernel is:

        linux-image-amd64/oldstable,now 6.1.148-1 amd64 [installed]

    Trying to install any of the newer kernels (6.12.43-*) immediately fails the dependency check:

        [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-
        linux-image-6.12.43+deb12-amd64              linux-image-6.12.43+deb12-cloud-amd64-unsigned
        linux-image-6.12.43+deb12-amd64-dbg          linux-image-6.12.43+deb12-rt-amd64
        linux-image-6.12.43+deb12-amd64-unsigned     linux-image-6.12.43+deb12-rt-amd64-dbg
        linux-image-6.12.43+deb12-cloud-amd64        linux-image-6.12.43+deb12-rt-amd64-unsigned
        linux-image-6.12.43+deb12-cloud-amd64-dbg

        [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-amd64
        Reading package lists... Done
        Building dependency tree... Done
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         linux-image-6.12.43+deb12-amd64 : PreDepends: linux-base (>= 4.12~) but 4.9 is to be installed
        E: Unable to correct problems, you have held broken packages.
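    The unmet dependency above is only the kernel's PreDepends on a newer linux-base than the one installed. Assuming the 6.12 packages come from a backports-style suite (a guess based on the +deb12 version; the suite name below is a placeholder), one way to check and resolve it could be:

        # See which repository provides each candidate version
        apt policy linux-base linux-image-6.12.43+deb12-amd64

        # Pull linux-base from the same suite as the kernel, then retry the kernel
        # (suite name is an assumption; adjust to whatever 'apt policy' reports)
        apt install -t bookworm-backports linux-base
        apt install linux-image-6.12.43+deb12-amd64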
  • Windows 10 VM Reboots

    0 Votes
    4 Posts
    183 Views
    olivierlambert
    Try disconnecting/shutting down XO and see if your Windows VM still restarts every 10-15 minutes.
  • Host time and XOA time are not consistent with each other

    Solved
    0 Votes
    6 Posts
    303 Views
    olivierlambert
    @edisoninfo Great! This warning is a lifesaver and did its job perfectly, letting you discover a hidden issue. Glad you found the root cause!
  • [dedicated thread] Dell Open Manage Appliance (OME)

    Solved
    0 Votes
    82 Posts
    22k Views
    @AtaxyaNetwork I did try to change those files and add the missing drivers, but I couldn't get it to work with plugins. I think I will just have a look at your image when I have it and see what I am missing.
  • CPU assignment

    0 Votes
    6 Posts
    368 Views
    olivierlambert
    You are welcome! Enjoy XCP-ng/XO!
  • PCI device doesn't show in XO or xe pci-list

    0 Votes
    26 Posts
    1k Views
    marcoi
    So it seems that when I enable iGPU passthrough, I see the first lines of the XCP-ng boot but it never gets to the console screen, which is what I expect. When I add the PCIe devices (iGPU / HDMI audio) to a VM and start it up, the screen updates and shows a red number counting, which is part of the console. So it seems that when the VM re-initializes the iGPU, dom0/XCP-ng takes it back over. I confirmed that by using the keyboard to move around: the console reappears and fills in the missing details as I move. So why is dom0/XCP-ng retaking the video device when it should be blocked from accessing it? [image: 1756568880819-img_3348.jpeg]
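    For reference, the usual way to keep dom0 off a passed-through device on XCP-ng is to hide it from dom0 at boot with xen-pciback. A minimal sketch, where the PCI addresses stand in for the actual iGPU and HDMI audio functions:

        # Locate the iGPU and audio functions (addresses below are placeholders)
        lspci | grep -i -e vga -e audio

        # Hide both functions from dom0 so it cannot rebind them after the VM touches the device
        /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:00:02.0)(0000:00:1f.3)"

        # The host must be rebooted for the new dom0 command line to apply
        reboot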
  • Bash script to work with pci passthrough on host

    0 Votes
    1 Post
    188 Views
    No one has replied
  • Passthrough Contention Problems with Console and Linux VM

    0 Votes
    10 Posts
    544 Views
    TeddyAstie
    @chicagomed said in Passthrough Contention Problems with Console and Linux VM: @TeddyAstie great, will take a look this weekend. Is there anything in particular you want us to test / check out?

    What works / doesn't work, and overall performance.
  • 0 Votes
    3 Posts
    527 Views
    olivierlambert
    Hi, small side note: lately I've noticed that some posts look like they were generated by LLMs. This can actually make it harder for the community to help, because the text is often long, unclear, or missing the basic details we really need in order to assist. I'd really encourage everyone to write posts in their own words and share as much relevant information as possible. The real value of this community is people helping each other directly.
  • Recovery after power outage

    Solved
    0 Votes
    8 Posts
    562 Views
    Danp
    @roaringsilence Glad you got it all sorted out.
  • CPU Scheduler

    0 Votes
    6 Posts
    564 Views
    @olivierlambert I 100% agree it's vague; I even told the person that. As I was saying, I was mostly looking for feedback from anyone who has run a cluster with those other scheduler settings. To be a bit more specific: would socket or core be better for VMs that are NUMA sensitive, such as database servers or the like?
  • Set passthrough GPU as a primary graphics card for a VM

    0 Votes
    18 Posts
    2k Views
    psafont
    @kubuntu-newbie I've looked at the code. The Intel passthrough path seems not to care about the value of the vga key in platform; instead it expects the key igd_passthrough to be set to true. https://github.com/xapi-project/xen-api/blob/9eb5f9f9f3c742c0c0b691098e1dafc02e40c856/ocaml/xapi/xapi_xenops.ml#L390
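    A minimal sketch of setting that key through xe, with a placeholder VM UUID; whether this alone is sufficient depends on the rest of the IGD passthrough configuration:

        # Set the platform flag the linked code path checks (UUID is a placeholder)
        xe vm-param-set uuid=<vm-uuid> platform:igd_passthrough=true

        # Verify the platform keys on the VM
        xe vm-param-get uuid=<vm-uuid> param-name=platform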
  • Linux VM (Ubuntu 22.04) - Grub-Menu invisible

    0 Votes
    11 Posts
    749 Views
    GuillaumeHullin
    @KPS Glad to hear it!
  • Win11 24H2 install fails consistently

    0 Votes
    15 Posts
    1k Views
    @markr All, I am currently running Windows 11 24H2 on an Intel Haswell i7-4700HQ laptop and it runs fine. What's more, it was in-place upgraded from Windows 11 23H2 using the (still working) setup.exe /product server switch compatibility bypass. (Simpler than all the LabConfig registry tweaks.) The only hard requirement change I know of in 24H2 is that it needs a processor capability that has been available since first-generation Intel Core processors, so it should run on your hardware in a VM. Seems like something else is going on.
  • Can't boot a VM with 1TB memory / 128 CPUs

    0 Votes
    14 Posts
    753 Views
    @olivierlambert said in Can't boot a VM with 1TB memory / 128 CPUs: We are testing some machines with 6TiB RAM

    My god
  • Large "steal time" inside VMs but host CPU is not overloaded

    0 Votes
    22 Posts
    3k Views
    Thanks for your feedback, @TeddyAstie and @gecant. From your answers and the discussion in Non-server CPU compatibility - Ryzen and Intel, I conclude that the difference between the 8 V-Cache cores and the 8 turbo cores is simply irrelevant here. To Xen they are transparent and not really different in terms of performance. Hence, in a typical server environment we just let Xen choose the cores and don't think about different "performance cores" that COULD be faster in specific applications like games.
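    For completeness, if one did want to steer a VM onto a specific subset of physical cores rather than letting Xen pick, vCPU affinity can be set per VM. A minimal sketch with a placeholder UUID and core list; the right core numbers depend on the topology reported by xl info:

        # Inspect the physical CPU layout and current vCPU placement
        xl info
        xl vcpu-list

        # Pin the VM's vCPUs to pCPUs 0-7 (UUID and core list are placeholders; applies after the VM restarts)
        xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1,2,3,4,5,6,7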
  • Non-server CPU compatibility - Ryzen and Intel

    0 Votes
    115 Posts
    83k Views
    Hey @olivierlambert and @andyhhp, thanks for your quick replies. That means it's not a problem at all, but we can't use all of the CPU's performance features either. However, those would only be used for specific computing operations (e.g., games) and would have little to no relevance for standard server applications, right?
  • Vm.migrate Operation blocked

    0 Votes
    7 Posts
    2k Views
    @fanuelsen I was having a similar problem just now with XCP-ng 8.3 LTS and the latest XO. I was unable to migrate an MCS-created VM using XO (I was doing a host migrate within the pool only; no storage migration). Oddly, I was able to do the host migration using XCP-ng Center.

    This was in a production pool that I had recently set up, and this time I had been given a dedicated 10 Gbps team with VLANs for migration and host management, and two separate 10 Gbps teams for storage and for VMs, respectively. To get live migration of my MCS-created VMs to work, I had to delete the Default Migration Network on the pool's Advanced tab. I don't see a downside to doing this, as all my NICs are 10 Gbps, so all NICs should operate at roughly the same speed.

    ETA: With the Default Migration Network deleted, I have confirmed that migration traffic defaults to going over the management NICs, where it is desired, rather than over the storage or VM NICs.