XCP-ng

    bullerwins

    @bullerwins


    Best posts made by bullerwins

    • RE: Intel iGPU passthrough

      olivierlambert I tried, but I'm getting this error when turning on the VM:

      INTERNAL_ERROR(xenopsd internal error: (Failure
      "Error from xenguesthelper: Populate on Demand and PCI Passthrough are mutually exclusive"))

      Not sure what it means.

      EDIT: after googling, it seems static and dynamic memory have to be set to the same value: (screenshot of the memory settings)
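      In case it helps anyone hitting the same error, the same change can be made from the CLI; a minimal sketch, assuming the VM is halted and you want 8 GiB with ballooning effectively disabled (the values are examples, not from this thread):

      # Set static and dynamic memory limits to the same value so Populate on Demand is not used.
      # The VM must be halted to change the static limits; plain byte values also work instead of GiB.
      xe vm-memory-limits-set uuid=<vm uuid> \
        static-min=8GiB static-max=8GiB \
        dynamic-min=8GiB dynamic-max=8GiB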

      posted in Hardware
    • RE: Continuous Replication health check fails

      Just updated to the latest commit: https://github.com/vatesfr/xen-orchestra/commit/4bd5b38aeb9d063e9666e3a174944cbbfcb92721

      It's fixed now. Working wonderfully. Thanks!


      fbeauchamp committed to vatesfr/xen-orchestra:
      fix(backups): fix health check task during CR (#6830)
      
      Fixes https://xcp-ng.org/forum/post/62073
      
      `healthCheck` is launched after `cleanVm`, therefore it should be closing the parent task, not `cleanVm`.
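      For anyone else running XO from the sources, picking up a fix like this is just a pull and rebuild; a rough sketch, assuming a standard source checkout (the directory name and restart method are assumptions, not from this thread):

      cd xen-orchestra        # your xen-orchestra git checkout
      git pull --ff-only      # or check out the specific commit you want
      yarn                    # refresh dependencies
      yarn build              # rebuild XO
      # then restart xo-server however you normally run it (systemd unit, forever, pm2, ...)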
      posted in Xen Orchestra
    • RE: Intel iGPU passthrough

      olivierlambert

      I got it working!

      Pic while transcoding with Plex: (screenshot)

      Plex info: (screenshot)

      Detected: (screenshot)

      For future reference, in case anyone finds this via Google:

      I had to " sudo chmod -R 777 /dev/dri" inside the Ubuntu VM, otherwise it didnt work.

      I'm using the binhex Plex Docker image, so I had to add the device to the container. It's really easy with Portainer:

      The "/dev/dri/" part is the important
      96f5a51e-3281-4c01-aafc-803850a4ff8b-image.png
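      For reference, the same device mapping can be done without Portainer; a hedged sketch of the docker run flag involved (the image name and other options are placeholders, not my exact command):

      # expose the iGPU nodes from the VM to the Plex container
      docker run -d --name plex \
        --device /dev/dri:/dev/dri \
        binhex/arch-plexpass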

      PS: olivierlambert there is a typo in the XCP-ng docs; a space is missing in Step 5:
      https://docs.xcp-ng.org/compute/#5-put-this-pci-device-into-your-vm
      This copy/pastes as:
      xe vm-param-set other-config:pci=0/0000:04:01.0uuid=<vm uuid>

      It should be:
      xe vm-param-set other-config:pci=0/0000:04:01.0 uuid=<vm uuid>

      posted in Hardware
    • RE: Continuous Replication health check fails

      florent said in Continuous Replication health check fails:

      bullerwins nice catch, the fix is trivial with such a detailed error: https://github.com/vatesfr/xen-orchestra/pull/6830

      Nice! As soon as it's merged I'll update and check. I'm glad if this report helped you guys in any way!

      posted in Xen Orchestra
    • RE: XOA Proxy Error when updating

      julien-f said in XOA Proxy Error when updating:

      bullerwins XO Proxy Appliance deployment is not supported in XO built from sources.

      My bad, I believe it used to work a few months back.

      olivierlambert said in XOA Proxy Error when updating:

      I missed that in the original message, indeed 🙂

      FYI bullerwins there's no such thing as XOA built from sources; you have either:

    • XOA, the Xen Orchestra virtual Appliance: the VM we distribute with pro support, a QA/stable channel, the updater and such
    • XO from the sources, from GitHub: community support instead of pro support, no QA, no stable version, no updater and such

      Thanks, I got confused by the naming; I'm using the second option.

      As a workaround I used a VPN to manage the remote XCP-ng host, thanks!

      posted in Xen Orchestra
    • RE: XO Proxy not working

      julien-f Thank you so much, I restarted the Ubuntu VM running XO from the sources and now it works! (How could I not have thought of this before? The #1 rule in IT: turn it off and on again.)
      Thanks a lot for the support.

      posted in Xen Orchestra
    • RE: Remote XCP-NG Connection Doesn't Show Console in XO

      Soarin Hi! Would you mind sharing what config you used for OpenVPN?
      Did you install OpenVPN on the XCP-ng host, or in a VM?

      posted in Xen Orchestra

    Latest posts made by bullerwins

    • RE: Epyc VM to VM networking slow

      probain said in Epyc VM to VM networking slow:

      I ran these tests now that newer updates have been released for 8.3-beta.
      Results are as below:

      • iperf-sender -> iperf-receiver: 5.06Gbit/s
      • iperf-sender -> iperf-receiver -P4: 7.53Gbit/s
      • host -> iperf-receiver: 7.83Gbit/s
      • host -> iperf-receiver -P4: 13.0Gbit/s

      Host (dom0):

      • CPU: AMD EPYC 7302P
      • Sockets: 1
      • RAM: 6.59GB (dom0) / 112GB for VMs
      • MotherBoard: H12SSL-i
      • NIC: X540-AT2 (rev 01)

      xl info -n

      host                   : xcp
      release                : 4.19.0+1
      version                : #1 SMP Mon Jun 24 17:20:04 CEST 2024
      machine                : x86_64
      nr_cpus                : 32
      max_cpu_id             : 31
      nr_nodes               : 1
      cores_per_socket       : 16
      threads_per_core       : 2
      cpu_mhz                : 2999.997
      hw_caps                : 178bf3ff:7ed8320b:2e500800:244037ff:0000000f:219c91a9:00400004:00000780
      virt_caps              : pv hvm hvm_directio pv_directio hap gnttab-v1 gnttab-v2
      total_memory           : 114549
      free_memory            : 62685
      sharing_freed_memory   : 0
      sharing_used_memory    : 0
      outstanding_claims     : 0
      free_cpus              : 0
      cpu_topology           :
      cpu:    core    socket     node
        0:       0        0        0
        1:       0        0        0
        2:       1        0        0
        3:       1        0        0
        4:       4        0        0
        5:       4        0        0
        6:       5        0        0
        7:       5        0        0
        8:       8        0        0
        9:       8        0        0
       10:       9        0        0
       11:       9        0        0
       12:      12        0        0
       13:      12        0        0
       14:      13        0        0
       15:      13        0        0
       16:      16        0        0
       17:      16        0        0
       18:      17        0        0
       19:      17        0        0
       20:      20        0        0
       21:      20        0        0
       22:      21        0        0
       23:      21        0        0
       24:      24        0        0
       25:      24        0        0
       26:      25        0        0
       27:      25        0        0
       28:      28        0        0
       29:      28        0        0
       30:      29        0        0
       31:      29        0        0
      device topology        :
      device           node
      No device topology data available
      numa_info              :
      node:    memsize    memfree    distances
         0:    115955      62685      10
      xen_major              : 4
      xen_minor              : 17
      xen_extra              : .4-3
      xen_version            : 4.17.4-3
      xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
      xen_scheduler          : credit
      xen_pagesize           : 4096
      platform_params        : virt_start=0xffff800000000000
      xen_changeset          : d530627aaa9b, pq 7587628e7d91
      xen_commandline        : dom0_mem=6752M,max:6752M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=256M,below=4G console=vga vga=mode-0x0311
      cc_compiler            : gcc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
      cc_compile_by          : mockbuild
      cc_compile_domain      : [unknown]
      cc_compile_date        : Thu Jun 20 18:17:10 CEST 2024
      build_id               : 9497a1ec7ec99f5075421732b0ec37781ba739a9
      xend_config_format     : 4
      

      VMs - Sender and Receiver

      • Distro: Ubuntu 24.04
      • Kernel: 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC
      • vCPUs: 32
      • RAM: 4GB

      Have you tested without the 8.3 updates? The results still seem low. Any improvement?
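      In case it helps with a rerun, a minimal sketch of the iperf3 commands behind numbers like these, assuming iperf3 on both ends (the exact flags used in the quoted tests aren't shown):

      # on the receiver VM
      iperf3 -s
      # on the sender VM (or on the host for the host -> VM case): single stream, then 4 parallel streams
      iperf3 -c <receiver ip> -t 30
      iperf3 -c <receiver ip> -t 30 -P 4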

      posted in Compute
    • RE: Epyc VM to VM networking slow

      Hi olivierlambert! It's the nomenclature bleader used in the report table, sorry for the misunderstanding:
      https://xcp-ng.org/forum/post/67750

      v2m 1 thread: throughput / cpu usage from xentop³
      v2m 4 threads: throughput / cpu usage from xentop³
      h2m 1 thread: throughput / cpu usage from xentop³
      h2m 4 threads: throughput / cpu usage from xentop³
      

      It's VM to VM and host (dom0) to VM.

      By the way, I'm happy to run any other tests that could help: different kernels, OSes, XCP-ng versions... whatever you need.

      PS: VM to host resulted in an unreachable host, even though I could ping from the VM to the host just fine. I checked that iptables blocks the iperf port but allows ping, and I didn't want to mess with dom0.
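      For the CPU usage columns, the values come from xentop in dom0; a small sketch of grabbing them non-interactively with the stock xentop (the exact options are my assumption):

      # batch mode: print 5 samples, one per second, instead of the interactive screen
      xentop -b -d 1 -i 5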

      posted in Compute
    • RE: Epyc VM to VM networking slow

      Posting my results:

      XCP-ng stable 8.2, up to date.
      EPYC 7402 (1 socket)
      512 GB RAM @ 3200 MHz
      Supermicro H12SSL-i

      No CPU pinning

      VMs: Ubuntu 22.04, kernel 6.5.0-41-generic

      v2m 1 thread: 3.5 Gb/s - Dom0 140%, vm1 60%, vm2 55%
      v2m 4 threads: 9.22 Gb/s - Dom0 555%, vm1 320%, vm2 380%
      h2m 1 thread: 10.4 Gb/s - Dom0 183%, vm1 180%, vm2 0%
      h2m 4 threads: 18.0 Gb/s - Dom0 510%, vm1 490%, vm2 0%
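      Since I ran with no pinning, here is a hedged sketch of how pinning could be tried for a comparison run with xl from dom0 (the VM name and CPU numbers are placeholders):

      # show the current vCPU -> pCPU placement for the VM
      xl vcpu-list "iperf-sender"
      # pin vCPU 0 and vCPU 1 of the VM to physical CPUs 2 and 3
      xl vcpu-pin "iperf-sender" 0 2
      xl vcpu-pin "iperf-sender" 1 3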

      host                   : xcp-ng-7402
      release                : 4.19.0+1
      version                : #1 SMP Tue Jan 23 14:12:55 CET 2024
      machine                : x86_64
      nr_cpus                : 48
      max_cpu_id             : 47
      nr_nodes               : 1
      cores_per_socket       : 24
      threads_per_core       : 2
      cpu_mhz                : 2800.047
      hw_caps                : 178bf3ff:7ed8320b:2e500800:244037ff:0000000f:219c91a9:00400004:00000500
      virt_caps              : pv hvm hvm_directio pv_directio hap shadow
      total_memory           : 524149
      free_memory            : 39528
      sharing_freed_memory   : 0
      sharing_used_memory    : 0
      outstanding_claims     : 0
      free_cpus              : 0
      cpu_topology           :
      cpu:    core    socket     node
        0:       0        0        0
        1:       0        0        0
        2:       1        0        0
        3:       1        0        0
        4:       2        0        0
        5:       2        0        0
        6:       4        0        0
        7:       4        0        0
        8:       5        0        0
        9:       5        0        0
       10:       6        0        0
       11:       6        0        0
       12:       8        0        0
       13:       8        0        0
       14:       9        0        0
       15:       9        0        0
       16:      10        0        0
       17:      10        0        0
       18:      12        0        0
       19:      12        0        0
       20:      13        0        0
       21:      13        0        0
       22:      14        0        0
       23:      14        0        0
       24:      16        0        0
       25:      16        0        0
       26:      17        0        0
       27:      17        0        0
       28:      18        0        0
       29:      18        0        0
       30:      20        0        0
       31:      20        0        0
       32:      21        0        0
       33:      21        0        0
       34:      22        0        0
       35:      22        0        0
       36:      24        0        0
       37:      24        0        0
       38:      25        0        0
       39:      25        0        0
       40:      26        0        0
       41:      26        0        0
       42:      28        0        0
       43:      28        0        0
       44:      29        0        0
       45:      29        0        0
       46:      30        0        0
       47:      30        0        0
      device topology        :
      device           node
      No device topology data available
      numa_info              :
      node:    memsize    memfree    distances
         0:    525554      39528      10
      xen_major              : 4
      xen_minor              : 13
      xen_extra              : .5-9.40
      xen_version            : 4.13.5-9.40
      xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
      xen_scheduler          : credit
      xen_pagesize           : 4096
      platform_params        : virt_start=0xffff800000000000
      xen_changeset          : 708e83f0e7d1, pq 9a787e7255bc
      xen_commandline        : dom0_mem=8192M,max:8192M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=256M,below=4G console=vga vga=mode-0x0311
      cc_compiler            : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
      cc_compile_by          : mockbuild
      cc_compile_domain      : [unknown]
      cc_compile_date        : Thu Apr 11 18:03:32 CEST 2024
      build_id               : fae5f46d8ff74a86c439a8b222c4c8d50d11eb0a
      xend_config_format     : 4
      
      posted in Compute
    • RE: Intel iGPU passthrough

      xerxist have you tried the 8.3 beta of XCP-ng? I believe it has a newer kernel, which might help.

      posted in Hardware
    • RE: Intel iGPU passthrough

      xerxist my Ubuntu 22.04 install came with kernel 5.15. I keep it updated regularly, but that doesn't seem to update the kernel, whereas newer fresh installs of Ubuntu 22.04 come with a newer kernel. I'll check whether the kernel needs to be updated manually.
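      In case someone else needs it, moving an existing 22.04 install from the 5.15 GA kernel to the newer HWE kernel series should be a couple of commands; a sketch, assuming the standard Ubuntu HWE meta-package (I haven't verified this on this exact VM):

      # install the hardware-enablement kernel on Ubuntu 22.04, then reboot into it
      sudo apt update
      sudo apt install --install-recommends linux-generic-hwe-22.04
      sudo reboot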

      posted in Hardware
    • RE: Intel iGPU passthrough

      xerxist in BIOS mode; I'd say it was the default for my Ubuntu VM.

      posted in Hardware
    • RE: Intel iGPU passthrough

      xerxist are you using Plex in Docker or as a native install?

      posted in Hardware
    • RE: Intel iGPU passthrough

      adriangabura Gigabyte B760M Gaming X / DDR4 / MicroATX

      posted in Hardware
    • RE: Intel iGPU passthrough

      Update: the VM recognized the Intel iGPU:

      intel_gpu_top: (screenshot)

      lspci -k | grep -EA2 'VGA|3D': (screenshot)

      I will test with Plex encoding next.
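      For anyone repeating these checks inside the VM, a quick sketch of the two commands above (intel_gpu_top comes from the intel-gpu-tools package on Ubuntu; that package name is my assumption here):

      # install the Intel GPU monitoring tool and confirm the iGPU is visible in the VM
      sudo apt install intel-gpu-tools
      sudo intel_gpu_top
      lspci -k | grep -EA2 'VGA|3D'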

      posted in Hardware