XCP-ng Forum · Popular topics (all time, all categories)
    • TAGS in BACKUP/RESTORE
      Backup · 2 Votes · 2 Posts · 42 Views
      olivierlambert: Adding @gregoire in the loop.
    • Set default resolution for UEFI
      Xen Orchestra · started by Forza · 0 Votes · 5 Posts · 1k Views
      G: I haven't had much difficulty hitting the Esc key in time to get into the EFI config: click the Start VM button, quickly click away from the display area and back into it, then tap Esc until I see it take effect. I do have a couple of VMs running at 1920x1080, but getting there was actually kind of a pain; I only did it to try to get a larger RDP window. RDP may be limited by the original "monitor" resolution, although this might be fixed in later updates; that VM has been up for a few years. (Edit: yes, this has changed; on VMs with a 4:3 monitor, RDP now uses whatever resolution I have set.)
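      One way to make that Esc window easier to hit is to start the VM paused, open its console, and only then unpause it. A minimal sketch with the XenAPI Python bindings; the host address, credentials and VM name are placeholders:

          import XenAPI

          # Placeholder pool master and credentials; adjust for your setup.
          session = XenAPI.Session("https://192.0.2.10")
          session.xenapi.login_with_password("root", "secret", "1.0", "example")

          vm = session.xenapi.VM.get_by_name_label("my-uefi-vm")[0]
          # VM.start(vm, start_paused, force): with start_paused=True the VM is
          # built but its vCPUs do not run, so the firmware hasn't booted yet.
          session.xenapi.VM.start(vm, True, False)
          # ...open the VM console in Xen Orchestra, get ready on Esc, then:
          session.xenapi.VM.unpause(vm)
          session.xenapi.logout()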
    • Citrix or XCP-ng drivers for Windows Server 2022
      XCP-ng · started by Forza · 0 Votes · 16 Posts · 5k Views
      G: [Edit: I just realized this thread was from 2024.] So far I've switched only one of my Server 2022 VMs to the XCP-ng drivers and management agent. The only thing that happened was the NIC going back to DHCP, followed by a short struggle to set it back to static (I use the e1000 profile when I set up a VM). It was previously using the Xen 9.4.x drivers downloaded from XenServer. The only other glitch is that if you migrate the VM to another host in the pool, the management agent is no longer detected; additional migrations, rolling pool updates and reboots are not bothered by this, and I'm told it will be fixed in the next version. As for the drivers themselves, I don't see any issues on this VM, and I've had no problems with it. I think I have another 2022 VM in my lab with the XCP-ng tools installed; I'll have to check later. Both were installed from the new Tools ISO that ships by default with the latest XCP-ng 8.3.x.
    • Xen 4.21
      Compute · started by Forza · 0 Votes · 2 Posts · 84 Views
      G: @Forza I keep hoping to see an alpha or beta of XCP-ng 9 soon, but I think there is still a lot of work to be done on both the host side and the Xen Orchestra 6 side.
    • Wait for IP(v4) address similar to terraform
      Management · started by nick.lloyd · 0 Votes · 6 Posts · 489 Views
      olivierlambert: Ping @Team-Documentation-Knowledge-Management
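      For reference, a minimal polling sketch with the XenAPI Python bindings that waits until the guest tools report an IPv4 address, similar in spirit to the terraform provider's wait-for-IP behaviour. The timeout and interval values are illustrative, and the guest-metrics key layout ("0/ip", "0/ipv4/0") should be verified against your XAPI version:

          import time
          import XenAPI

          def wait_for_ipv4(session, vm_ref, timeout=300, interval=5):
              # Poll until the guest agent publishes an IPv4 address in the
              # VM_guest_metrics networks map (keys like "0/ip" or "0/ipv4/0").
              deadline = time.time() + timeout
              while time.time() < deadline:
                  gm = session.xenapi.VM.get_guest_metrics(vm_ref)
                  if gm != "OpaqueRef:NULL":  # NULL ref until tools report in
                      nets = session.xenapi.VM_guest_metrics.get_networks(gm)
                      for key, addr in nets.items():
                          if "/ip" in key and ":" not in addr:  # skip IPv6
                              return addr
                  time.sleep(interval)
              raise TimeoutError("VM never reported an IPv4 address")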
    • REQUEST / TAG on networks ?
      Management · 0 Votes · 2 Posts · 58 Views
      olivierlambert: I think it's planned for XO 6. Adding @gregoire in the loop.
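      Meanwhile, XAPI network objects already carry a tags field, so tags can be set at the API level today; whether XO 6 will surface them for networks is exactly what the thread asks. A sketch with the Python bindings, with placeholder connection details and network name:

          import XenAPI

          session = XenAPI.Session("https://192.0.2.10")  # placeholder master
          session.xenapi.login_with_password("root", "secret", "1.0", "example")

          net = session.xenapi.network.get_by_name_label(
              "Pool-wide network associated with eth0")[0]
          session.xenapi.network.add_tags(net, "dmz")   # tags is a string set
          print(session.xenapi.network.get_tags(net))   # e.g. ['dmz']
          session.xenapi.logout()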
    • Disable TX checksumming with API
      REST API · 0 Votes · 4 Posts · 464 Views
      S: Sorry for the necropost, but here is what I did:

      import XenAPI
      import ssl

      HOST_IP = "192.168.1.100"
      USERNAME = "root"
      PASSWORD = "hostpasswordsecret"
      VM_LIST = ('sms', 'firewall1a', 'firewall1b', 'firewall2a',
                 'firewall2b', 'firewall3a', 'firewall3b')

      def main():
          # disable https certificate checking
          if hasattr(ssl, '_create_unverified_context'):
              ssl._create_default_https_context = ssl._create_unverified_context

          url = f"https://{HOST_IP}"
          session = XenAPI.Session(url)
          try:
              print(f"Connecting to {HOST_IP}...")
              session.xenapi.login_with_password(USERNAME, PASSWORD, "1.0", "python-script")
          except XenAPI.Failure as e:
              print(f"XenAPI Error: {e}")
              return
          except Exception as e:
              print(f"General Error: {e}")
              return

          for vm in VM_LIST:
              print(f"Searching for VM: {vm}...")
              vms = session.xenapi.VM.get_by_name_label(vm)
              if len(vms) == 0:
                  print(f"Error: VM '{vm}' not found.")
                  continue
              vm_ref = vms[0]
              vif_refs = session.xenapi.VM.get_VIFs(vm_ref)
              if not vif_refs:
                  print("No network interfaces found on this VM.")
                  continue
              print(f"Found {len(vif_refs)} interface(s). Updating settings...")
              for vif in vif_refs:
                  device = session.xenapi.VIF.get_device(vif)
                  other_config = session.xenapi.VIF.get_other_config(vif)
                  # ethtool-tx   transmit checksum offload
                  # ethtool-tso  TCP segmentation offload
                  # ethtool-ufo  UDP fragmentation offload
                  # ethtool-gro  generic receive offload
                  if other_config.get('ethtool-tx') == 'off':
                      print(f"  Interface {device}: TX Checksumming already disabled.")
                  else:
                      print(f"Disabling TX checksumming for interface {device}")
                      other_config['ethtool-tx'] = 'off'
                      try:
                          session.xenapi.VIF.set_other_config(vif, other_config)
                          print(f"  - Interface {device}: TX Checksumming disabled (ethtool-tx: off)")
                          power_state = session.xenapi.VM.get_power_state(vm_ref)
                          if power_state == 'Running':
                              print("  [!] VM is RUNNING. A reboot is required for these changes to take effect.")
                          elif power_state == 'Halted':
                              print("  [i] VM is Halted. Changes will apply on next boot.")
                          else:
                              print(f"  [i] VM state is {power_state}.")
                          print("Note: You must reboot the VM or unplug/plug the VIFs for changes to take effect.")
                          print("")
                      except XenAPI.Failure as e:
                          print(f"XenAPI Error: {e}")
                      except Exception as e:
                          print(f"General Error: {e}")

          try:
              session.xenapi.logout()
          except Exception:
              pass  # ignore logout errors

      if __name__ == "__main__":
          main()
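      (For a single interface, the same other-config key can also be set from the host shell with xe vif-param-set uuid=<vif-uuid> other-config:ethtool-tx=off; as with the script above, the VIF must be replugged or the VM rebooted before the change takes effect.)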
    • Scheduled backup job stopped executing after an XO Sources migration. Fix?
      Backup (Unsolved) · 0 Votes · 1 Post · 21 Views
      No one has replied.
    • WiFi controller not recognized during XCP-NG install
      Hardware · started by rhkean · 0 Votes · 10 Posts · 469 Views
      yann: @hoehnp We're aiming to share a very first public version before the end of the year, but don't expect it to be anything complete or stable.
    • Every VM in a CR backup job creates an "Unhealthy VDI"
      Backup · 0 Votes · 19 Posts · 625 Views
      J: There are no other jobs. I've now spun up a completely separate, fresh install of XCP-ng 8.3 to test the symptoms mentioned in the OP. Steps taken:
      - Installed XCP-ng 8.3
      - Text console over SSH: xe host-disable; xe host-evacuate (not needed yet, of course, since it's a brand-new install); yum update; reboot
      - Text console over SSH again; created a local ISO SR:
        xe sr-create name-label="Local ISO" type=iso device-config:location=/opt/var/iso_repository device-config:legacy_mode=true content-type=iso
        cd /opt/var/iso_repository
        wget # ISO for Ubuntu Server
        xe sr-scan uuid=07dcbf24-761d-1332-9cd3-d7d67de1aa22
      - XO Lite: new VM, booted from the ISO, installed Ubuntu Server
      - Text console to the VM over SSH: apt update/upgrade, installed xe-guest-utilities
      - Installed XO from source (ronivay script)
      - XO: imported the ISO for Ubuntu MATE; new VM, booted from the ISO, installed MATE; apt update/upgrade; xe-guest-utilities
      - New CR backup job: nightly; VMs: 1 (MATE); retention: 15; full every 7
      Exact same behaviour: after the first (full) CR job run, each additional (incremental) run results in one more "unhealthy VDI". I've engaged in no other shenanigans: plain vanilla XCP-ng and XO. There are only two VMs on this host, the XO-from-source VM and a desktop-OS VM that is the only target of the CR job. There are zero exceptions in SMlog. What do you need to see?
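      One way to watch the symptom from the API side is to list the VDIs on the SR after each CR run and check whether the per-VM snapshot count keeps climbing instead of being coalesced away. A sketch with the XenAPI Python bindings; the connection details and SR name are placeholders, and the interpretation of a growing snapshot count as XO's "unhealthy VDI" report is an assumption to confirm against XO's health page:

          import XenAPI

          session = XenAPI.Session("https://192.0.2.10")  # placeholder master
          session.xenapi.login_with_password("root", "secret", "1.0", "example")

          sr = session.xenapi.SR.get_by_name_label("Local storage")[0]
          for vdi in session.xenapi.SR.get_VDIs(sr):
              rec = session.xenapi.VDI.get_record(vdi)
              # A snapshot count that grows by one per incremental run and
              # never shrinks suggests coalesce is not keeping up.
              kind = "snapshot" if rec["is_a_snapshot"] else "leaf"
              mib = int(rec["physical_utilisation"]) // 2**20
              print(f'{rec["name_label"]:40} {kind:8} {mib} MiB')
          session.xenapi.logout()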
    • 🛰️ XO 6: dedicated thread for all your feedback!
      Xen Orchestra · started by olivierlambert · 4 Votes · 1 Post · 82 Views
      No one has replied.
    • cleanVm: incorrect backup size in metadata
      Xen Orchestra · 0 Votes · 16 Posts · 3k Views
      U: @k11maris Same on my side: delta backups for all VMs show this message. [screenshot attached]
    • Ansible Role - Install XO from source - Now available
      Infrastructure as Code · 3 Votes · 4 Posts · 223 Views
      W: @probain No worries, I'm still learning/improving my Ansible/Terraform skills as well.
    • Install of VBoxGuestAdditions breaking XO
      Management · started by ideal · 0 Votes · 1 Post · 49 Views
      No one has replied.
    • MAP_DUPLICATE_KEY error in XOA backup: VMs won't start now!
      Backup (Solved) · 0 Votes · 36 Posts · 9k Views
      Bastien Nollet: Hi @jshiells, after some tests I don't think this can be caused by the load balancer. If the load balancer tries to migrate a VM that is being backed up, the migration instantly fails and nothing happens. Conversely, if a backup job starts while a VM is being migrated by the load balancer, the backup fails for that VM with the error "cannot backup a VM currently being migrated".
    • QCOW2 support on XOSTOR
      XOSTOR · 0 Votes · 3 Posts · 103 Views
      ronan-a: @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several things to make more robust (HA performance, potential race conditions, etc.). Other important points:
      - Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
      - The coalesce algorithm should be changed so it no longer depends on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
      - The coalesce behaviour is not exactly the same for QCOW2, and we believe this could currently hurt the API implementation in the LINSTOR case.
      In short: QCOW2 has brought changes that require a long-term investment in several areas before we can even consider supporting the format on XOSTOR.
      @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to robustify (HA performance, potential race conditions, etc.). Regarding other important points: Supporting volumes larger than 2TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes. The coalesce algorithm should be changed to no longer depend on the write speed to the SR in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR. The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API impl in the case of LINSTOR. In short: QCOW2 has led to changes that require a long-term investment in several topics before even considering supporting this format on XOSTOR.