• Alternative to memory ballooning

    0 Votes
    2 Posts
    330 Views
    olivierlambert
    Not sure this will be compatible with the level of memory isolation provided by Xen (vs the more "open bar" model in KVM). Worth taking a look, though. Thanks for the link.
  • qemu-dp vs qemu

    0 Votes
    1 Post
    209 Views
    No one has replied
  • could not build xen from xcp-ng-build-env

    0 Votes
    10 Posts
    525 Views
    olivierlambert
    We'll be pleased to review your future contributions
  • CEPHFS - why not CEPH RBD as SR?

    0 Votes
    15 Posts
    3k Views
    P
    Hello, just a follow-up: I figured out a probable fix for the performance issues. (The locking issue seems to have disappeared on its own; I suspect it only happened due to the upgrade process, as the pool contained a mix of 8.0 and 8.2 hosts.)
    It was caused by very slow executions of basic LVM commands - pvs, lvs, etc. took many seconds. When started with debug options, it appeared that an excessive amount of time was spent scanning iSCSI volumes in /dev/mapper, as well as the individual LVs that were also presented in /dev/mapper as if they were PVs. LVM subsequently ignored them, but in my case there were hundreds of LVs, and each had to be opened to check metadata and size.
    I fixed it by adding this to /etc/lvm/master/lvm.conf:
    # /dev/sd.* is there to avoid scanning RAW disks that are used via UDEV for storage backend
    filter = [ "r|/dev/sd.*|", "r|/dev/mapper/.*|" ]
    Performance of LVM commands improved from ~5 seconds to less than 0.1 second, and the issue with slow startup / shutdown / snapshot of VMs (they sometimes took almost 10 minutes) was resolved.
    Of course, this filter needs to be adjusted to the specific needs of the given situation. In my case, neither /dev/sd* nor /dev/mapper devices are EVER used by LVM-backed SRs (all my LVM SRs are from /dev/rbd/), so it was safe to ignore them.
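    A minimal sketch of applying and verifying this kind of fix (the lvm.conf path is the XCP-ng-specific one from the post; the filter values must match your own SR layout):

      # Measure how long LVM scanning currently takes
      time pvs >/dev/null
      time lvs >/dev/null

      # Add the reject rules to the devices section of the config file the
      # post mentions, e.g.:
      #   filter = [ "r|/dev/sd.*|", "r|/dev/mapper/.*|" ]
      vi /etc/lvm/master/lvm.conf

      # Re-run to confirm the improvement (~5 s down to <0.1 s in the post)
      time pvs >/dev/null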
  • XOSTOR Progress & Questions

    0 Votes
    4 Posts
    3k Views
    AtaxyaNetwork
    @JacobS Hi! You can find it here: https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview (PS: I'm using XOSTOR with 3 nodes, and it works well!)
  • How to control AMD GPU?

    0 Votes
    5 Posts
    916 Views
    splastunov
    Some time later I ran into a problem where VMs (Linux and Windows) couldn't correctly start the GPU (AMD MxGPU). In the Windows device manager there was error #43. I solved this error without a host reboot by reloading the gim module. Hope this will help somebody else.
    rmmod gim gim_api
    modprobe gim gim_api
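    A rough sketch of that recovery with a verification step (module names as given in the post; modprobe -a loads both modules in one call):

      # Unload and reload the AMD GIM modules without rebooting the host
      rmmod gim gim_api
      modprobe -a gim gim_api

      # Confirm the modules are back and check the kernel log for GPU errors
      lsmod | grep gim
      dmesg | tail -n 20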
  • Enhancement: Virtual OpenGL support (Virgl)

    0 Votes
    7 Posts
    3k Views
    Forza
    It might be happening after all: https://www.phoronix.com/news/AMD-Xen-GPU-For-Cars https://www.phoronix.com/news/AMD-GPU-Xen-Hypervisor-S3 Having a virtio-gpu with GL/Vulkan/D3D support would be really interesting.
  • Reminder, Centos 7 EOL 1 year away

    1 Vote
    5 Posts
    819 Views
    olivierlambert
    You shouldn't make assumptions about the distro used as a base, but you can safely assume it is RPM-based
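    In practice that means probing for RPM itself rather than for a distro name; a hedged sketch (the xcp-ng-release package name is an assumption, check what the target actually ships):

      # Portable check: is this an RPM-based platform at all?
      if command -v rpm >/dev/null 2>&1; then
          echo "RPM-based, safe to ship .rpm packages"
      fi

      # Avoid parsing /etc/centos-release or similar (the base distro may
      # change); query an installed package instead, e.g.:
      rpm -q xcp-ng-release 2>/dev/null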
  • Management Agent on pesky Sangoma 7

    0 Votes
    4 Posts
    338 Views
    olivierlambert
    Yes
  • vGPU - which graphics card supported?

    vgpu
    1 Vote
    55 Posts
    38k Views
    pedroalvesbatista
    @ScarfAntennae At a glance, there are these links:
    Nvidia:
    https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html
    https://www.nvidia.com/pt-br/data-center/graphics-cards-for-virtualization/
    https://docs.nvidia.com/grid/13.0/product-support-matrix/index.html
    AMD:
    Introducing the AMD FirePro(TM) S7100X, the Industry's First and Only Hardware-Virtualized GPU for Blade Servers
    https://www.semiaccurate.com/2018/08/26/amd-v-series-virtualization-gpus-launch-with-the-v340/
    https://techreport.com/news/29666/firepro-s7100-graphics-cards-bring-hardware-gpu-virtualization-to-life
    https://web.archive.org/web/20160507042500/http://www.tomsitpro.com/articles/amd-firepro-s1750-firepro-s7150x2-hardware-virtualization,1-3129.html
    https://www.amd.com/pt/graphics/workstation-virtual-graphics
    https://www.amd.com/system/files/documents/gpu-consistency-security-whitepaper.pdf
    https://www.techtarget.com/searchvirtualdesktop/opinion/Comparing-AMD-vs-Nvidia-for-virtual-desktop-GPU-cards
    https://www.reddit.com/r/VFIO/comments/nqf749/gpu_splitting_on_consumer_amd_gpus_vgpu_mxgpu/
    https://github.com/DualCoder/vgpu_unlock - a hack to make consumer GPUs able to provide vGPU
    So, coming from all those links, we would have to:
    - Add entries to the docs, or create a separate wiki page on this subject
    - Design a "test bed" with some guidance so people can test their setups and GPUs
    - Design some methods (and choose the tools) to run benchmarks and find possible bottlenecks regarding drivers, synchronization, pass-through issues, etc.
    We would also have to dig into the specs of each vendor's GPU families and series to find anything that could lead us to a more "precise" judgment on which ones provide support for vGPU.
    @olivierlambert Could we create a dedicated GPU section in the docs, and perhaps provide a GPU HCL (Hardware Compatibility List) for vGPU and whatever else is relevant? Am I missing something here?
  • New guest tools ISO for Linux and FreeBSD. Can you help with the tests?

    2 Votes
    62 Posts
    29k Views
    A
    @Pierre-Briec , @stormi I had a look at getting the xe-guest-utilities working on IPFire v2 (core 173, the latest version). Using a new /usr/sbin/xe-linux-distribution script, as suggested here, allows it to detect IPFire. I then manually copied the binaries and scripts from the Linux tar file into the corresponding folders on IPFire, since the install script did not seem to handle IPFire properly.
    When starting the daemon using /etc/init.d/xe-linux-distribution, the next problem was that the "action" function does not exist in the /etc/init.d/functions file on IPFire. So I edited the script, replacing the "else" with a "fi" in the if-test where the functions file is sourced, so that the locally defined action function is used. Then the agent started fine.
    Then I also saw the issue of the IP address not being reported. In my setup, there are two reasons for this. One is that IPFire uses "red0", "green0", "blue0" etc. as interface names, which xe-guest-utilities will not consider. The other is that I pass 3 network cards through to IPFire via PCI passthrough, and hence do not use the "vif" interface/network that XCP-ng makes available to it, although "green0" is really on the same network as the "vif" in my setup.
    This was using the 7.30 tar file from the XCP ISO, I think. I then cloned the 7.33 / master version of xe-guest-utilities from GitHub and used that thereafter. I manually changed and rebuilt it, adding "red", "green", "blue" to the list of interface prefixes that get considered, but it did not help. I suspect the reason is that these interfaces do not have a /sys/class/net/<iface>/device/nodename entry, which contains a reference to the "vif" that XCP-ng knows about, as I understand it. So /sys/class/net/eth0/device/nodename exists, but eth0 is not assigned any IP address since IPFire does not use it, while there is no /sys/class/net/green0/device/nodename entry at all. I am not sure what "creates" this "nodename" entry, but I suspect it is Xen, and I suspect it is missing because green0 has no relationship with dom0 at all.
    But then I also have more questions around what is actually meant to be displayed as "network" info in the XOA web UI. Is it only the network between dom0 and domU, or ideally all networks defined on domU (i.e. red0, blue0 and orange0)? I also think I spotted a bug on the "Stats" page of XOA: under "Disk throughput", it seems "xvda" and "xvdd" are always displayed, even if the VM only has one disk, "xvda". But I should report that as a bug if I do not find it already reported / known.
    While playing with this, I also noticed that the management agent version was not properly displayed, i.e. not at all. This seems to be caused by the version constants not being replaced while building the daemon. I am not a Go build expert, so I'll investigate a bit more, but it seems I'm not the only one with that issue, because the same problem appears to exist with the xe-guest-utilities that are part of the Alpine Linux 3.17 distribution.
    I do not think many people run IPFire on XCP-ng/Xen. I've been briefly involved in some pull requests against IPFire, so I might look at making one to get xe-guest-utilities into IPFire itself, but since usage is low, I doubt it makes much sense. Thanks for a great tool in XCP-ng, I enjoy using it in my home setup. Regards, Alf Høgemark
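    To see which interfaces the agent can report on, a small sketch based on the nodename mechanism described above (no assumptions beyond the /sys paths already mentioned):

      # List each guest NIC and whether Xen exposes a backend nodename for it;
      # interfaces without one (e.g. PCI-passthrough NICs like green0) are
      # invisible to xe-guest-utilities
      for ifpath in /sys/class/net/*; do
          ifname=$(basename "$ifpath")
          if [ -e "$ifpath/device/nodename" ]; then
              echo "$ifname -> $(cat "$ifpath/device/nodename")"
          else
              echo "$ifname -> no nodename entry"
          fi
      done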
  • [Windows Guest Tools] Cleaner Tool

    3 Votes
    10 Posts
    10k Views
    S
    @borzel Does this still work with current versions of the guest tools?
  • Cannot export VM in .ova file

    Solved
    0 Votes
    31 Posts
    18k Views
    olivierlambert
    It depends a lot on what you have available. IMHO, I consider the system disposable, since you can export the metadata, reinstall, and import the metadata again. Or back up your VMs, reinstall, and restore. Or migrate the VMs out to another host, reinstall, and migrate them back. There are many possibilities; it all depends on your setup.
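    For the metadata route, a minimal sketch with the stock XAPI CLI (the file path is a placeholder; try the dry run before relying on it):

      # On the existing installation: dump the pool database
      # (VM, SR and network metadata)
      xe pool-dump-database file-name=/root/pool-db.dump

      # After reinstalling: restore it (XAPI restarts afterwards)
      xe pool-restore-database file-name=/root/pool-db.dump dry-run=true
      xe pool-restore-database file-name=/root/pool-db.dump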
  • Expand a virtual disk while vm is running

    0 Votes
    4 Posts
    364 Views
    olivierlambert
    That's the right way to do it in the future, I suppose. However, since it's not done, I suspect it was decided back in the early "Citrix times" that it wasn't easy to do with the SMAPIv1 structure. Hopefully, since SMAPIv3 splits things up in a better way, it might be easier to make it real
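    Until then, the supported path is to resize offline; a minimal sketch (the VM name and target size are placeholders):

      # Find the VDI behind the VM's disk, shut the VM down, then resize
      xe vbd-list vm-name-label=my-vm params=vdi-uuid,device
      xe vm-shutdown vm=my-vm
      xe vdi-resize uuid=<vdi-uuid> disk-size=100GiB
      xe vm-start vm=my-vm
      # ...then grow the partition/filesystem inside the guest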
  • Windows Server 2022 Essentials

    0 Votes
    6 Posts
    1k Views
    fohdeesha
    @olivierlambert I've never done it myself, but this is indeed exactly what the "Copy host BIOS strings to VM" feature was intended for, as @Andrew mentioned. Hopefully the BIOS strings this feature copies are enough for the ROK installer to recognize the "authorized" Dell hardware
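    For reference, a sketch of that feature via the CLI (the template name is a placeholder; the host UUID comes from xe host-list):

      # Create the VM from a template while copying the host's BIOS strings,
      # so the ROK installer sees the "authorized" Dell hardware
      xe vm-install template="Windows Server 2022 (64-bit)" \
          new-name-label=ws2022-essentials \
          copy-bios-strings-from=<host-uuid>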
  • Centos 9 . why nobody use this OS?!

    0 Votes
    8 Posts
    5k Views
    C
    @poltushima Technically there is no CentOS 9, only CentOS Stream. CentOS died a premature death when support for version 8 was pulled in 2021. We moved all of our workloads to Rocky Linux and haven't looked back. Stream is basically a rolling beta and not something I would use unless you're just mucking around with it.
  • XCP-NG / XO Packer Plugin

    2 Votes
    9 Posts
    1k Views
    pedroalvesbatista
    Very cool! It will be a great move, relying on a single entry point.
  • Pre-Freeze-/Post-Thaw-Script with xe-guest-utilities

    0 Votes
    9 Posts
    681 Views
    julien-f
    @rjt You can customize the hook timeout by adding this setting to your xo-server config:
    [xapiOptions]
    # Timeout in milliseconds
    syncHookTimeout = 300e3 # 5 minutes
  • Nehalem cpu power management

    0 Votes
    14 Posts
    1k Views
    A
    This is unconditional for a reason. The C-state errata in Nehalem are crippling - IIRC a core going in and out of a deep C-state does not maintain cache coherency correctly, resulting in arbitrary memory corruption. You really do care about not hitting this errata, even on a test/hobby server.
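    To inspect what the hypervisor is actually doing with C-states, a small sketch (xenpm ships with Xen; the max_cstate knob is only relevant on hardware without this unconditional cap):

      # Show the C-states Xen uses per CPU and how often each is entered
      xenpm get-cpuidle-states

      # On other hardware, deep C-states could be capped manually via the
      # Xen command line, e.g.: max_cstate=1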
  • Host CPU Statistics

    0 Votes
    3 Posts
    390 Views
    JSylvia007
    @splastunov said in Host CPU Statistics:
    You can get it with such command
    That command doesn't give me the aggregate; it only gives me the CPU usage for dom0. I'd like to get that aggregate number.
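    One hedged way to chase that aggregate from the CLI is XAPI's host RRDs (the cpu_avg data source name is an assumption; pick whatever the list actually shows):

      # List the data sources recorded for the host...
      xe host-data-source-list host=<host-uuid>

      # ...then query a host-wide CPU metric rather than dom0's own usage
      xe host-data-source-query host=<host-uuid> data-source=cpu_avg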