Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @benapetr you seem gifted with app development. Do you know RV Tools? https://www.dell.com/support/kbdoc/en-us/000325532/rvtools-4-7-1-installer This tool is pretty handy when auditing VMware infrastructures: it can connect to vCenter or directly to ESXi and dump the full infrastructure configuration to CSV/XLSX (every aspect of the config, be it VMs, hosts, networks, datastores, files, ...). I could see real production use for the same kind of tool for XCP-ng pools/hosts; it would be a great addition to XenAdmin!
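    A minimal sketch of what such an export could look like for XCP-ng, assuming the XenAPI Python bindings that ship with the XCP-ng/XenServer SDK and a reachable pool master. The URL, credentials, field list, and the rows_from_records helper are all placeholders for illustration, not part of any existing tool:

    ```python
    import csv

    # The XenAPI bindings ship with the XCP-ng/XenServer SDK; guarded here so
    # the pure helper below can still be used without them installed.
    try:
        import XenAPI
    except ImportError:
        XenAPI = None

    def rows_from_records(records: dict, fields: list) -> list:
        """Flatten XenAPI record dicts into CSV rows (hypothetical helper)."""
        return [[str(rec.get(f, "")) for f in fields] for rec in records.values()]

    def dump_vms_to_csv(url, user, password, path="vms.csv"):
        """Connect to a pool master and dump basic VM config to CSV (sketch)."""
        session = XenAPI.Session(url)
        session.xenapi.login_with_password(user, password)
        try:
            records = session.xenapi.VM.get_all_records()
            # Field names are real keys of XenAPI VM records; the selection
            # here is an arbitrary example.
            fields = ["name_label", "power_state", "memory_static_max", "uuid"]
            with open(path, "w", newline="") as f:
                w = csv.writer(f)
                w.writerow(fields)
                w.writerows(rows_from_records(records, fields))
        finally:
            session.xenapi.session.logout()
    ```

    A fuller version would iterate the same way over hosts, networks, and SRs, one sheet or CSV file per object class, much like RV Tools does per tab.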
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Update 4: Hello, after a few tries I reattempted to re-add the host to the pool, this time capturing the logs in another terminal, and this time it seems to have worked. I think I followed the same steps as the previous time, but maybe I did something different, like using pif-scan to introduce the reordered interfaces. This is how I managed to get it to work, with all hosts on version 8.2.1 and fully up to date: on the new host, disconnect the management interface, forget the PIFs, reorder the interfaces, reboot, add the interfaces manually (I used pif-introduce, not pif-scan), reboot, re-enable management, and join the pool:

        ifconfig eth2 down
        ifconfig eth3 down
        xe pif-list device=eth2
        xe pif-list device=eth3
        xe pif-forget uuid=9104e54c-6c82-5d83-b9fc-1d2b73d5d6f1
        xe pif-forget uuid=27abec81-6861-796e-9abe-3b2653444c8f
        interface-rename --update eth6=14:23:f2:24:5a:80 eth7=14:23:f2:24:5a:81
        reboot now
        interface-rename --list
        ifconfig eth6 up
        ifconfig eth7 up
        xe host-list
        xe pif-introduce device=eth6 host-uuid=edfaf68e-2c28-4486-8939-723bf2c72820 mac=14:23:f2:24:5a:80
        xe pif-introduce device=eth7 host-uuid=edfaf68e-2c28-4486-8939-723bf2c72820 mac=14:23:f2:24:5a:81
        reboot now

    I'm still baffled as to why it worked now; I had tried a few times with all hosts up to date and was planning to source a dual-NIC PCIe card to solve this issue. However, on this host the bond ID doesn't match the pool...

    On the pool: [image: 1772217597892-f68e687e-5896-459f-9463-8806ffe4bb63-image.png]
    On the new host: [image: 1772217689154-b58bac6f-f7fa-484e-be30-de2c4d4304bc-image.png]
    xensource_error_log.txt

    xensource.log also shows lots of errors, so I'm going to leave it running over the weekend and will try to add the necessary data for our SR on Monday. What is the best way to send the full log file?
  • 3k Topics
    27k Posts
    Hi @olivierlambert, finally got round to following up on this. I found a better solution involving DEFAULT_CHUNK_SIZE: it is normally set to 4 MB, and with MAX_PART_NUMBER being 1000, this limits the largest object backed up to third-party S3 implementations to 4 GB. Increasing DEFAULT_CHUNK_SIZE in the S3 code, or exposing it as a setting (either in a config file or in the web GUI), would allow backups of VMs larger than 4 GB without the memory required to track several thousand object chunks. Regards, Mark
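    The arithmetic behind that limit can be sketched as follows. DEFAULT_CHUNK_SIZE and MAX_PART_NUMBER are the constant names from the post; max_object_size is a hypothetical helper, not part of the Xen Orchestra codebase:

    ```python
    # Size ceiling of a fixed-chunk multipart upload, per the post above.
    DEFAULT_CHUNK_SIZE = 4 * 1024 * 1024   # 4 MiB per part (post's default)
    MAX_PART_NUMBER = 1000                 # part-count cap from the post

    def max_object_size(chunk_size: int, max_parts: int) -> int:
        """Largest object a fixed chunk size can cover within the part cap."""
        return chunk_size * max_parts

    # With the defaults: 4 MiB * 1000 parts = 4,194,304,000 bytes (~4 GB).
    print(max_object_size(DEFAULT_CHUNK_SIZE, MAX_PART_NUMBER))  # 4194304000

    # Doubling the chunk size doubles the ceiling with no extra parts to track.
    print(max_object_size(8 * 1024 * 1024, MAX_PART_NUMBER))     # 8388608000
    ```

    This is why raising the chunk size is cheaper than raising the part cap: memory to track in-flight parts scales with the part count, not the chunk size.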
  • Our hyperconverged storage solution

    41 Topics
    717 Posts
    Danp
    @tmnguyen You can open a support ticket and request that we reactivate your XOSTOR trial licenses to match your existing XOA trial.
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" when it comes to benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk being single-threaded: if you test on several different VMs, the aggregate will scale fairly steadily.
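    As a sketch, an iodepth sweep along those lines could be run with fio inside a test VM. The device path, block size, and runtime are placeholders to adjust for your setup; fio itself is a standard benchmark tool, not something XCP-ng ships:

    ```shell
    # Hypothetical fio sweep over queue depths on a dedicated test disk.
    # WARNING: writes are avoided here (randread), but /dev/xvdb is a
    # placeholder -- point it at a disk you can safely benchmark.
    for depth in 1 32 128 256; do
      fio --name=qd-test --filename=/dev/xvdb --direct=1 \
          --ioengine=libaio --rw=randread --bs=4k \
          --iodepth="$depth" --runtime=30 --time_based \
          --output="qd-${depth}.json" --output-format=json
    done
    ```

    Running the same job in several VMs at once, then summing the per-VM results, is how you would see the aggregate scaling past the single-tapdisk ceiling described above.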