Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng Center keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
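    For anyone who wants to script that cleanup, here is a minimal sketch, assuming the settings live in the default %APPDATA%\XCP-ng location mentioned above:

    import os
    import shutil

    # Remove the XCP-ng Center settings folder so a fresh install starts
    # clean. Assumes the default Roaming location; adjust if yours differs.
    settings_dir = os.path.join(os.environ["APPDATA"], "XCP-ng")

    if os.path.isdir(settings_dir):
        shutil.rmtree(settings_dir)  # deletes saved server registrations
        print(f"Removed {settings_dir}")
    else:
        print(f"Nothing to remove at {settings_dir}")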
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @florent Didn't see your message until now about applying the fix to only one client. I did a warm migration on the client with ticket #7748053, and it completed without issue! I tried a warm migration on the client with ticket #7747444 and it failed again, which sounds expected since that one doesn't have the patch yet. Can you push the patch to the client with ticket #7747444? Thanks!
  • 3k Topics
    26k Posts
    Sorry for the necropost, but here is what I did:

    import XenAPI
    import ssl

    HOST_IP = "192.168.1.100"
    USERNAME = "root"
    PASSWORD = "hostpasswordsecret"
    VM_LIST = ('sms', 'firewall1a', 'firewall1b', 'firewall2a',
               'firewall2b', 'firewall3a', 'firewall3b')

    def main():
        # disable https certificate checking
        if hasattr(ssl, '_create_unverified_context'):
            ssl._create_default_https_context = ssl._create_unverified_context

        url = f"https://{HOST_IP}"
        session = XenAPI.Session(url)
        try:
            print(f"Connecting to {HOST_IP}...")
            session.xenapi.login_with_password(USERNAME, PASSWORD, "1.0", "python-script")
        except XenAPI.Failure as e:
            print(f"XenAPI Error: {e}")
            return
        except Exception as e:
            print(f"General Error: {e}")
            return

        for vm in VM_LIST:
            print(f"Searching for VM: {vm}...")
            vms = session.xenapi.VM.get_by_name_label(vm)
            if len(vms) == 0:
                print(f"Error: VM '{vm}' not found.")
                continue
            vm_ref = vms[0]

            vif_refs = session.xenapi.VM.get_VIFs(vm_ref)
            if not vif_refs:
                print("No network interfaces found on this VM.")
                continue
            print(f"Found {len(vif_refs)} interface(s). Updating settings...")

            for vif in vif_refs:
                device = session.xenapi.VIF.get_device(vif)
                other_config = session.xenapi.VIF.get_other_config(vif)

                # ethtool-tx   transmit checksum offload
                # ethtool-tso  TCP segmentation offload
                # ethtool-ufo  UDP fragmentation offload
                # ethtool-gro  generic receive offload
                if other_config.get('ethtool-tx') == 'off':
                    print(f"  Interface {device}: TX checksumming already disabled.")
                else:
                    print(f"Disabling TX checksumming for interface {device}")
                    other_config['ethtool-tx'] = 'off'
                    try:
                        session.xenapi.VIF.set_other_config(vif, other_config)
                        print(f"  - Interface {device}: TX checksumming disabled (ethtool-tx: off)")
                        power_state = session.xenapi.VM.get_power_state(vm_ref)
                        if power_state == 'Running':
                            print("  [!] VM is RUNNING. A reboot is required for these changes to take effect.")
                        elif power_state == 'Halted':
                            print("  [i] VM is Halted. Changes will apply on next boot.")
                        else:
                            print(f"  [i] VM state is {power_state}.")
                        print("Note: You must reboot the VM or unplug/plug the VIFs for changes to take effect.")
                        print("")
                    except XenAPI.Failure as e:
                        print(f"XenAPI Error: {e}")
                    except Exception as e:
                        print(f"General Error: {e}")

        try:
            session.xenapi.logout()
        except Exception:
            pass

    if __name__ == "__main__":
        main()
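    The script's closing note can also be automated instead of rebooting: XenAPI exposes VIF.unplug and VIF.plug, which hot-unplug and re-plug a virtual interface on a running VM so the updated other_config is re-read. A minimal sketch, with a hypothetical helper name, assuming a session and vif reference obtained as in the script above (the interface briefly loses connectivity, and not every guest handles a hot unplug gracefully):

    def replug_vif(session, vif):
        # Hot-unplug then re-plug the VIF so the new other_config
        # (e.g. ethtool-tx=off) takes effect without a full reboot.
        # Warning: this briefly drops network traffic on the interface.
        if session.xenapi.VIF.get_currently_attached(vif):
            session.xenapi.VIF.unplug(vif)
        session.xenapi.VIF.plug(vif)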
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Regarding other important points:

    - Supporting volumes larger than 2 TB (the limit of the current VHD format) has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    - The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    - The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the LINSTOR implementation.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^