XCP-ng

    jbamford's Posts

    • RE: Dell R720 | 620 PCI-E Pass Through

      Good afternoon,

      I have resolved the problem. Dom0 was trying to use the network controller. I fixed it by running:

      /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:44:00.0)(0000:44:00.1)(0000:45:00.0)(0000:45:00.1)"
      
      reboot
      
      xl pci-assignable-list
      

      Xen Orchestra now allows the PCI network card to be passed through to the VM.
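
      In case it helps anyone else, this is roughly how you can find and verify the BDF addresses to hide (just a sketch; the 44:00.0 below assumes the same card as above):

      lspci | grep -i ethernet    # find the NIC functions to feed into xen-pciback.hide
      lspci -k -s 44:00.0         # after the reboot, dom0 should list no kernel driver in use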

      Regards

      posted in Hardware
      jbamford
    • RE: Dell R720 | 620 PCI-E Pass Through

      @TeddyAstie Hi, thanks for your message.

      This is the output of xl info on the R620:

      host                   : RHS-XCP-Host
      release                : 4.19.0+1
      version                : #1 SMP Thu Jan 13 12:55:45 CET 2022
      machine                : x86_64
      nr_cpus                : 24
      max_cpu_id             : 47
      nr_nodes               : 2
      cores_per_socket       : 6
      threads_per_core       : 2
      cpu_mhz                : 2600.017
      hw_caps                : bfebfbff:77bee3ff:2c100800:00000001:00000001:00000281:00000000:00000100
      virt_caps              : pv hvm hvm_directio pv_directio hap shadow iommu_hap_pt_share
      total_memory           : 262080
      free_memory            : 147511
      sharing_freed_memory   : 0
      sharing_used_memory    : 0
      outstanding_claims     : 0
      free_cpus              : 0
      xen_major              : 4
      xen_minor              : 13
      xen_extra              : .4-9.19.1
      xen_version            : 4.13.4-9.19.1
      xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
      xen_scheduler          : credit
      xen_pagesize           : 4096
      platform_params        : virt_start=0xffff800000000000
      xen_changeset          : 6e2fc128eb1a, pq dd3d13f0a45e
      xen_commandline        : dom0_mem=7584M,max:7584M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=256M,below=4G console=vga vga=mode-0x0311
      cc_compiler            : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
      cc_compile_by          : mockbuild
      cc_compile_domain      : [unknown]
      cc_compile_date        : Wed Feb  9 12:07:47 CET 2022
      build_id               : b709b0bb6a0cad9906689853f5bc629ba4b3e23f
      xend_config_format     : 4
      

      The R620 is running 8.2.1; I plan to upgrade it, though.

      This is the output from the R720, which is running 8.3:

      host                   : r720vm
      release                : 4.19.0+1
      version                : #1 SMP Tue May 6 15:24:43 CEST 2025
      machine                : x86_64
      nr_cpus                : 24
      max_cpu_id             : 47
      nr_nodes               : 2
      cores_per_socket       : 6
      threads_per_core       : 2
      cpu_mhz                : 1999.999
      hw_caps                : bfebfbff:1fbee3ff:2c100800:00000001:00000001:00000000:00000000:00000100
      virt_caps              : pv hvm hvm_directio pv_directio hap gnttab-v1 gnttab-v2
      total_memory           : 131007
      free_memory            : 98858
      sharing_freed_memory   : 0
      sharing_used_memory    : 0
      outstanding_claims     : 0
      free_cpus              : 0
      xen_major              : 4
      xen_minor              : 17
      xen_extra              : .5-13
      xen_version            : 4.17.5-13
      xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
      xen_scheduler          : credit
      xen_pagesize           : 4096
      platform_params        : virt_start=0xffff800000000000
      xen_changeset          : 430ce6cd9365, pq 3941a9ecb541
      xen_commandline        : dom0_mem=7584M,max:7584M watchdog ucode=scan dom0_max_vcpus=1-16 crashkernel=256M,below=4G console=vga vga=mode-0x0311
      cc_compiler            : gcc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)
      cc_compile_by          : mockbuild
      cc_compile_domain      : [unknown]
      cc_compile_date        : Tue May 13 11:56:07 CEST 2025
      build_id               : 276c39c6465df3ee400d28199a52e4760162470e
      xend_config_format     : 4
      

      Output from xl dmesg on the R720: I cannot post the full xl dmesg, but with passthrough to the VM the host is reporting the following lines relating to VT-d:

      (XEN) [    5.749249] Brought up 24 CPUs
      (XEN) [    5.763020] Testing NMI watchdog on all CPUs:ok
      (XEN) [    5.876546] Scheduling granularity: cpu, 1 CPU per sched-resource
      (XEN) [    5.890556] mcheck_poll: Machine check polling timer started.
      (XEN) [    5.924606] NX (Execute Disable) protection active
      (XEN) [    5.938382] d0 has maximum 3416 PIRQs
      (XEN) [    5.952084] csched_alloc_domdata: setting dom 0 as the privileged domain
      (XEN) [    5.965967] *** Building a PV Dom0 ***
      (XEN) [    6.298965]  Xen  kernel: 64-bit, lsb
      (XEN) [    6.312489]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x302c000
      (XEN) [    6.327086] PHYSICAL MEMORY ARRANGEMENT:
      (XEN) [    6.340730]  Dom0 alloc.:   0000001ff0000000->0000001ff4000000 (1920491 pages to be allocated)
      (XEN) [    6.355230]  Init. ramdisk: 000000203edeb000->000000203ffff70b
      (XEN) [    6.369315] VIRTUAL MEMORY ARRANGEMENT:
      (XEN) [    6.383053]  Loaded kernel: ffffffff81000000->ffffffff8302c000
      (XEN) [    6.396886]  Phys-Mach map: 0000008000000000->0000008000ed0000
      (XEN) [    6.410703]  Start info:    ffffffff8302c000->ffffffff8302c4b8
      (XEN) [    6.424553]  Page tables:   ffffffff8302d000->ffffffff8304a000
      (XEN) [    6.438320]  Boot stack:    ffffffff8304a000->ffffffff8304b000
      (XEN) [    6.452059]  TOTAL:         ffffffff80000000->ffffffff83400000
      (XEN) [    6.465789]  ENTRY ADDRESS: ffffffff8242b180
      (XEN) [    6.481017] Dom0 has maximum 16 VCPUs
      (XEN) [    6.586427] Masked UR signaling on 0000:00:00.0
      (XEN) [    6.599996] Found masked UR signaling on 0000:00:01.0
      (XEN) [    6.613599] Found masked UR signaling on 0000:00:01.1
      (XEN) [    6.627188] Found masked UR signaling on 0000:00:02.0
      (XEN) [    6.640830] Found masked UR signaling on 0000:00:02.2
      (XEN) [    6.654539] Found masked UR signaling on 0000:00:03.0
      (XEN) [    6.668125] Found masked UR signaling on 0000:00:03.2
      (XEN) [    6.681745] Masked VT-d error signaling on 0000:00:05.0
      (XEN) [    6.713524] Found masked UR signaling on 0000:40:01.0
      (XEN) [    6.727164] Found masked UR signaling on 0000:40:02.0
      (XEN) [    6.740897] Found masked UR signaling on 0000:40:03.0
      (XEN) [    6.754561] Found masked UR signaling on 0000:40:03.2
      (XEN) [    6.768225] Masked VT-d error signaling on 0000:40:05.0
      (XEN) [   16.159374] Initial low memory virq threshold set at 0x4000 pages.
      (XEN) [   16.173083] Scrubbing Free RAM in background
      (XEN) [   16.186689] Std. Loglevel: Errors, warnings and info
      (XEN) [   16.200411] Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
      (XEN) [   16.214412] ***************************************************
      (XEN) [   16.228050] Booted on L1TF-vulnerable hardware with SMT/Hyperthreading
      (XEN) [   16.241806] enabled.  Please assess your configuration and choose an
      (XEN) [   16.255669] explicit 'smt=<bool>' setting.  See XSA-273.
      (XEN) [   16.269483] ***************************************************
      (XEN) [   16.283372] Booted on MLPDS/MFBDS-vulnerable hardware with SMT/Hyperthreading
      (XEN) [   16.297418] enabled.  Mitigations will not be fully effective.  Please
      (XEN) [   16.311534] choose an explicit smt=<bool> setting.  See XSA-297.
      (XEN) [   16.325707] ***************************************************
      (XEN) [   16.339991] 3... 2... 1...
      (XEN) [   19.353800] Xen is relinquishing VGA console.
      (XEN) [   19.411460] *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
      (XEN) [   19.412014] Freed 2048kB init memory
      (XEN) [   28.498008] Found masked UR signaling on 0000:00:00.0
      (XEN) [   28.498616] Found masked UR signaling on 0000:00:01.0
      (XEN) [   28.499192] Found masked UR signaling on 0000:00:01.1
      (XEN) [   28.499878] Found masked UR signaling on 0000:00:02.0
      (XEN) [   28.500461] Found masked UR signaling on 0000:00:02.2
      (XEN) [   28.501128] Found masked UR signaling on 0000:00:03.0
      (XEN) [   28.501759] Found masked UR signaling on 0000:00:03.2
      (XEN) [   28.502227] Masked VT-d error signaling on 0000:00:05.0
      (XEN) [   28.556213] Found masked UR signaling on 0000:40:01.0
      (XEN) [   28.556880] Found masked UR signaling on 0000:40:02.0
      (XEN) [   28.557563] Found masked UR signaling on 0000:40:03.0
      (XEN) [   28.558191] Found masked UR signaling on 0000:40:03.2
      (XEN) [   28.558674] Masked VT-d error signaling on 0000:40:05.0
      
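      For reference, the VT-d and UR lines above can be filtered straight out of the hypervisor log, assuming a stock XCP-ng dom0:

      xl dmesg | grep -i -e 'VT-d' -e 'UR signaling'    # pull just the VT-d / Unsupported Request messages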

      Regards

      posted in Hardware
      jbamford
    • Dell R720 | 620 PCI-E Pass Through

      Good evening,

      So I am setting up a homelab with a friend. He recently bought a Dell R720 to run everything on, i.e. pfSense with PCIe passthrough as well as TrueNAS with an H200E passed through. Although the system is allowing PCIe passthrough to the VM, we are having issues where pfSense isn't taking full control of the network controller.

      After investigating, the BIOS on the R720 has no option for VT-d. I have tested the same situation on the R620, and both servers are showing the same: no VT-d in the BIOS, nothing to do with IOMMU. Would anyone have any ideas?

      Running the command dmesg | grep -e DMAR -e IOMMU returns nothing; the output is blank.

      Same with grep vmx /proc/cpuinfo.
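
      Side note, in case anyone reproduces this: on an XCP-ng dom0 both checks are expected to come back empty even when VT-d is working, because Xen owns the IOMMU (the DMAR messages land in the hypervisor's log, not dom0's) and the vmx flag is hidden from dom0. A Xen-side check looks more like this sketch:

      xl dmesg | grep -i -e 'I/O virtualisation' -e 'VT-d'    # IOMMU messages live in Xen's boot log
      xl info | grep virt_caps                                # hvm_directio listed here means passthrough is possible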

      Regards

      posted in Hardware
      jbamford
    • Copy existing Disk for other staging VMs

      Howdy folks, just out of curiosity: I have staging environments for production public VMs, and I like to test upgrades and changes on them. Is it possible to copy an existing disk to use on another staging virtual machine without messing with the existing VM?
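
      In case it helps frame what I'm after, a minimal CLI sketch (placeholder UUIDs, and "staging-vm-1" is just a hypothetical name; my understanding is that xe vdi-copy makes a fully independent copy, so the source VM is untouched):

      xe vbd-list vm-name-label=staging-vm-1 params=vdi-uuid    # find the source disk's VDI UUID
      xe vdi-copy uuid=<vdi-uuid> sr-uuid=<sr-uuid>             # copy it; prints the new VDI's UUID
      xe vbd-create vm-uuid=<other-vm-uuid> vdi-uuid=<new-vdi-uuid> device=1 bootable=false type=Disk    # attach the copy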

      Regards

      posted in Management
      jbamford
    • RE: Losing Windows Activation when migrating VM from ESXi 6.7

      @waveguide This makes sense. Windows will treat it as a hardware change because of the different boot environment and BIOS.

      It is no different to doing a motherboard swap on a Server or PC.

      Windows should reactivate though once connected to the Internet.

      posted in Migrate to XCP-ng
      jbamford
    • RE: XO 5.93 is out!

      @olivierlambert I would be open to building stuff into XCP-ng if I had the knowledge to do it.

      posted in News
      jbamford
    • RE: XO 5.93 is out!

      @olivierlambert Thanks for your reply. I will have a play over the weekend. I have a few days off, as it's a bank holiday, so I can get some things done. Are you going to implement the UPS API you did for me? I've chosen to go with PowerChute in the end over NUT. I will post some documentation once I have implemented and tested it, if that's okay.

      Regards.

      posted in News
      jbamford
    • RE: XO 5.93 is out!

      @olivierlambert Hope you have a great time away, sir 😄 Would it be possible to add dark mode to the next version of XO, 5.94? I know you mentioned it being in version 6. Anyway, have a good weekend.

      posted in News
      jbamford
    • RE: Vates and IONOS partnership

      @olivierlambert That's great, hope something great comes out of it. Good luck 😄

      posted in News
      jbamford
    • RE: Dell cancels VMWare contract after Broadcom purchase

      @olivierlambert said in Dell cancels VMWare contract after Broadcom purchase:

      Time to make a phone call to Dell 👼

      Yes sir, do it. Hopefully a contract between you / Vates and Dell is on the horizon. I really hope XCP-ng becomes the standard for virtualization, just like ESXi once was.

      posted in News
      jbamford
    • RE: Vates and IONOS partnership

      @olivierlambert That's great. Any luck with the Dell partnership?

      posted in News
      jbamford
    • RE: Veeam and XCP-ng

      @tc-atwork that's not entirely true. I've had a few clients that have had issues with Veeam where backups get stuck and never finish until you restart Veeam. When you're having problems like this, in my opinion it is not good enough for backups.

      My previous job had 8 hypervisors in a cluster which also used Veeam, and backups were getting stuck.

      posted in XCP-ng
      jbamford
    • RE: Veeam and XCP-ng

      @jasonnix there is having something faster, and there is having something that's going to be reliable. I'd rather compromise on backup speed and have the purpose-designed backup system.

      I'm not saying that Veeam isn't going to be reliable, but you are at more risk of backups not being reliable, and you will have to consult Veeam if you need help resolving an issue, whereas with XO you are not going to have those problems, as it's designed to work.

      Not only that, you need to think about the extra licensing too.

      If this is a production environment, I would go with XCP-ng and XO, because if something goes wrong you are going to be a laughing stock to the company; it will be embarrassing. If it's a homelab, then it doesn't really matter if it breaks.

      posted in XCP-ng
      jbamford
    • RE: Veeam and XCP-ng

      @nikade that is one of the downfalls: the limit on backup speed. I have a 10 Gbps iSCSI backbone with a 10 Gbps core network, but the maximum I am able to achieve is 80 Mbps. It turns out it's a known problem; according to Olivier, Citrix Hypervisor also has the same problem.

      posted in XCP-ng
      jbamford
    • RE: Veeam and XCP-ng

      @Andrew Meh, if you say so. I mean, it is up to you, but it will only end in tears.

      Forcing something to work for backups, which are so crucial, is a bad idea.

      posted in XCP-ng
      jbamford
    • RE: Veeam and XCP-ng

      @aurora-chase why would you want to use a 3rd-party solution? XO has it all built in. The question is: which feature are you wanting, and what doesn't XO have built in that Veeam has? Personally, my opinion is that XO has all the features Veeam has, and using Veeam means an extra VM and more resources.

      posted in XCP-ng
      jbamford
    • RE: Veeam and XCP-ng

      @jasonnix it's better to use XO for backups, tbh. But Veeam would work.

      posted in XCP-ng
      jbamford
    • RE: From VMware to XCP-ng

      @jasonnix Yes, XCP-ng has a vSAN equivalent; look here: https://xcp-ng.org/docs/storage.html#storage-types. It is called XOSAN.

      XCP-ng will work fine with EMC & QNAP. I am using two EMC iSCSI SANs with my 3-node cluster.
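
      For context, this is roughly how an iSCSI SR like mine gets created from the CLI (a sketch with placeholder target details; XO's "New SR" wizard drives the same mechanism):

      # create a shared LVM-over-iSCSI SR; the target, IQN and SCSIid below are placeholders
      xe sr-create name-label="EMC-SAN-1" shared=true type=lvmoiscsi \
          device-config:target=192.168.10.50 \
          device-config:targetIQN=iqn.1992-04.com.emc:storage.target0 \
          device-config:SCSIid=360a98000503371337133713371337133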

      posted in Migrate to XCP-ng
      jbamford
    • RE: From VMware to XCP-ng

      @jasonnix Depends on what the requirements are; extras like vSAN will require licensing, but Type 1 hypervisors are all like this, tbh. Personally, when it comes to shared storage, I would recommend iSCSI or NFS over vSAN, only because vSAN would require high-speed networking.

      Performance: in my testing between ESXi and XCP-ng there aren't many differences, but then again it all depends on the hardware being used. I've tested on R710s, R620s and R630s, and all performed quite well.

      Security: that is all down to the technician who deployed it. I wouldn't expose it to the public, for sure.

      posted in Migrate to XCP-ng
      jbamford
    • RE: Create Bond for management

      @nikade having no management VLAN looks a mess. Don't moan when you get compromised and you wonder why data breaches happen 😄

      posted in Management
      jbamford