XCP-ng

    tuxen

    @tuxen

    Reputation: 56
    Profile views: 1199
    Posts: 30
    Followers: 1
    Following: 0

    Best posts made by tuxen

    • RE: Centos 8 is EOL in 2021, what will xcp-ng do?

      @indyj said in Centos 8 is EOL in 2021, what will xcp-ng do?:

      @jefftee I prefer Alpine Linux. ✌

      +1

      Low resource footprint, no bloatware... They even have a pre-built Xen Hypervisor ISO flavor 😉

      posted in News
    • RE: New XCP-ng documentation!

      I liked as well. Easy to find the topics and good layout 👍

      posted in News
    • RE: 100,000 unique downloads for XCP-ng

      @olivierlambert congrats to the team and also to this great community! 👏

      posted in News
    • RE: 1M Euros for XCP-ng innovation

      Kudos to XCP-ng team! 👏

      posted in News
    • RE: Dedicated CPU topology

      @fred974 Yep, see the docs about NUMA/core affinity (soft/hard pinning):

      https://docs.xcp-ng.org/compute/#advanced-xen
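
      For a quick illustration (a sketch based on those docs; the UUID and CPU numbers are placeholders), hard-pinning a VM's vCPUs to physical CPUs 0-3 would look like:

      xe vm-param-set uuid=<VM UUID> VCPUs-params:mask=0,1,2,3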

      posted in Compute
    • RE: HPC with 2x64core (256 threads) possible with XCP-ng?

      @Forza Take a look:

      https://xcp-ng.org/forum/post/49400

      At the time of that topic, I remember asking a coworker to boot a CentOS 7.9 VM with more than 64 vCPUs on a 48C/96T Xeon server. The VM started normally, but it didn't recognize the vCPUs beyond 64.

      I haven't tested the VM param platform:acpi=0 as a possible solution, nor its trade-offs. In the past, some old RHEL 5.x VMs without ACPI support would simply power off (like pulling the power cord) instead of doing a clean shutdown on a vm-shutdown command.
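
      For reference (untested, as I said; the UUID is a placeholder), disabling ACPI for a VM would be something like:

      xe vm-param-set uuid=<VM UUID> platform:acpi=0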

      Regarding that CFD software, does it support a worker/farm design? vGPU offload? I'm not an HPC expert, but considering the EPYC MCM architecture, instead of one big VM, spreading the workload across many workers pinned to each CCD (or to each NUMA node in an NPS4 config) may be interesting.

      Before buying those monsters, I would ask AMD to deploy a PoC using the target server model. For such demands, it's very important to do some sort of certification/validation.

      posted in Compute
    • RE: No free virtual function found vGPU S7150

      @erfant Probably not, because the NVMe driver is loaded and there are no NVMe errors in the logs.

      @olivierlambert thank you and your team for this great project and community! It's a nice place to share knowledge and learn new stuff. I learn a lot here! 👍

      posted in Compute
    • RE: No free virtual function found vGPU S7150

      @erfant After seeing your uploaded dmesg, the boot options from steps 2 & 3 can be put aside for a while, because the error isn't the same as in the other topics.

      The log is showing MxGPU driver probe/initialization errors. After some digging, it could be a case of the GPU firmware being incompatible with UEFI. Do you have any spare server to test booting XCP-ng in legacy/BIOS mode with this GPU?

      [  119.418930]        gim error:(gim_probe:123) gim_probe(08:00.0)
      [  121.145663]        gim error:(wait_cmd_complete:2387)  wait_cmd_complete -- time out after 0.003044131 sec
      [  121.145719]        gim error:(wait_cmd_complete:2390)   Cmd = 0x17, Status = 0x0, cmd_Complete=0
      [  121.145984]        gim error:(init_register_init_state:4643) Failed to INIT PF for initial register 'init-state'
      

      Edited for clarification.

      posted in Compute
    • RE: Strange issue with booting XCP-NG

      @Appollonius said in Strange issue with booting XCP-NG:

      Its only when I install the GPU and dont connect it to a monitor that it will not boot properly.

      Maybe because, when there's a GPU installed but no monitor attached, the motherboard POST fails at the EDID probe? As stated, some boards/BIOSes require an explicit setting in order to boot without a monitor/keyboard/mouse plugged in, e.g.:

      https://www.supermicro.com/support/faqs/faq.cfm?faq=11902

      [screenshot: headless.png]

      posted in Compute
    • RE: xcp-ng CPU low performance issue

      The incorrect clock results mean that Xen isn't in charge of frequency scaling management. Set the CPU Power Management to Performance Per Watt (OS) and re-run the previous xenpm command, this time under watch for a real-time view:

      # watch 'xenpm start 1 | grep -i "avg freq"'
      

      Start a VM boot storm (or a stress test inside one or more VMs) in order to generate some CPU load.
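
      For example, assuming stress-ng is installed inside the guest, this loads 8 vCPUs for two minutes:

      stress-ng --cpu 8 --timeout 120s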

      posted in Compute

    Latest posts made by tuxen

    • RE: Dedicated CPU topology

      @fred974 Yep, see the docs about NUMA/core affinity (soft/hard pinning):

      https://docs.xcp-ng.org/compute/#advanced-xen
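
      For a quick illustration (a sketch based on those docs; the UUID and CPU numbers are placeholders), hard-pinning a VM's vCPUs to physical CPUs 0-3 would look like:

      xe vm-param-set uuid=<VM UUID> VCPUs-params:mask=0,1,2,3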

      posted in Compute
    • RE: error -104

      @ptunstall when the GPU was pushed back to dom0, did you also remove the PCI address from the VM config?

      What's the output of:

      xe vm-param-get uuid=<...> param-name=other-config

      ?
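
      If the passthrough entry is still set there (assuming it was configured via the usual other-config:pci key), it can be dropped with:

      xe vm-param-remove uuid=<VM UUID> param-name=other-config param-key=pci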

      posted in Xen Orchestra
    • RE: Proper way to set default CPU Governor?

      @sluflyer06 To persist across reboots, you must set the cpufreq Xen boot option. There's no need to rebuild the grub config because the change happens at the Xen level (instead of dom0):

      /opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:ondemand
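
      If I remember correctly, the same helper can read the value back, which is handy for double-checking before a reboot:

      /opt/xensource/libexec/xen-cmdline --get-xen cpufreq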
      

      After that, change the System power profile to Performance Per Watt (OS) in BIOS.

      Verifying the config:

      Check if the attribute current_governor is set to ondemand:

      xenpm get-cpufreq-para
      

      Check the clock scaling:

      xenpm start 1 | grep "Avg freq"
      

      More info:
      https://support.citrix.com/article/CTX200390/power-settings-in-citrix-hypervisor-cstates-turbo-and-cpu-frequency-scaling

      posted in Compute
    • RE: HPC with 2x64core (256 threads) possible with XCP-ng?

      @Forza Take a look:

      https://xcp-ng.org/forum/post/49400

      At the time of that topic, I remember asking a coworker to boot a CentOS 7.9 VM with more than 64 vCPUs on a 48C/96T Xeon server. The VM started normally, but it didn't recognize the vCPUs beyond 64.

      I haven't tested the VM param platform:acpi=0 as a possible solution, nor its trade-offs. In the past, some old RHEL 5.x VMs without ACPI support would simply power off (like pulling the power cord) instead of doing a clean shutdown on a vm-shutdown command.
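
      For reference (untested, as I said; the UUID is a placeholder), disabling ACPI for a VM would be something like:

      xe vm-param-set uuid=<VM UUID> platform:acpi=0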

      Regarding that CFD software, does it support a worker/farm design? vGPU offload? I'm not an HPC expert, but considering the EPYC MCM architecture, instead of one big VM, spreading the workload across many workers pinned to each CCD (or to each NUMA node in an NPS4 config) may be interesting.

      Before buying those monsters, I would ask AMD to deploy a PoC using the target server model. For such demands, it's very important to do some sort of certification/validation.

      posted in Compute
    • RE: Accedentally set up a pool on an xcp-ng server

      It could be. From a user's point of view, a single-host pool wouldn't make any sense, so they created the "implicit/explicit" concept and treat everything as a pool internally.

      posted in Xen Orchestra
    • RE: Accedentally set up a pool on an xcp-ng server

      That's a question for the Citrix dev team 😉

      posted in Xen Orchestra
    • RE: Accedentally set up a pool on an xcp-ng server

      Just FYI guys, XenCenter/XCP-ng Center have the menu option Pool > Make into standalone server. As pointed out by other members, every standalone host is in a pool, but that option reverts it to an "implicit" one.

      Hope this helps.

      posted in Xen Orchestra
    • RE: XCP 8.2 VCPUs-max settings

      @jeff In order to create a virtual NUMA topology and expose it to the guest, the vNUMA feature needs to be implemented at the hypervisor level and be accessible through the XAPI. I'm not sure that feature is fully supported at the moment; maybe @olivierlambert can confirm?

      You could try adding the cores-per-socket attribute following the physical NUMA topology (96 / 4 nodes = 24):

      xe vm-param-set platform:cores-per-socket=24 uuid=<VM UUID>
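
      As a sanity check (my suggestion; I haven't verified it on that exact host), the guest-visible topology can be inspected from inside a Linux VM:

      lscpu | grep -iE 'socket|core|numa'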
      

      Let me know if it works.

      posted in Compute
    • RE: Centos 8 is EOL in 2021, what will xcp-ng do?

      @indyj said in Centos 8 is EOL in 2021, what will xcp-ng do?:

      @jefftee I prefer Alpine Linux. ✌

      +1

      Low resource footprint, no bloatware... They even have a pre-built Xen Hypervisor ISO flavor 😉

      posted in News
      T
      tuxen
    • RE: VDI_IO_ERROR(Device I/O errors) when you run scheduled backup

      This got my attention:

      Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||import] Caught exception in import handler: VDI_IO_ERROR: [ Device I/O errors ]
      Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] VDI.import D:378e6880299b failed with exception Unix.Unix_error(Unix.EPIPE, "single_write", "")
      Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] Raised Unix.Unix_error(Unix.EPIPE, "single_write", "")
      

      This Unix.EPIPE error on the remote target means that the pipe stream is being closed before VDI.import receives all the data. The outcome is a VDI I/O error due to a broken, partially sent/received VDI.

      Since a remote, over-the-internet link is more prone to latency/intermittency issues, it might be necessary to adjust the remote NFS soft-mount timeout/retries, or to mount the target with the hard option.
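
      Something along these lines (the host, export path and values are placeholders; tune them to your link):

      mount -t nfs -o hard,timeo=600,retrans=2 <backup host>:/export/backups /mnt/backup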

      I would also check whether the remote target is running out of space during the backup process.

      posted in Xen Orchestra