Subcategories

  • All Xen related stuff

    584 Topics
    6k Posts
    olivierlambert
    @edisoninfo Great! This warning is a lifesaver and did its job perfectly, allowing you to discover a hidden issue. Glad you found the root cause!
  • The integrated web UI to manage XCP-ng

    23 Topics
    339 Posts
    C
    @lsouai-vates Great! Thanks for addressing this!
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    104 Topics
    1k Posts
    florent
    @idar21 said in Xen Orchestra 5.110 V2V not working:
      I don't intend to butt in, but the new migration tool isn't working as per the release notes. I had similar issues; there is no warm migration. My testing against ESXi v7 resulted in:
      - Abrupt power-off of the source VM on ESXi.
      - VM disks start copying; I can see disk copy progress in tasks.
      - The migration task fails, but multiple disks of the source VM keep copying.
      - When all the disks are copied, there is no VM with that name available in XCP-ng.
      - All disks are labeled orphaned under Health in XO.
      - Where is the pause/resume function stated in the release notes?
      I don't think the tool has been tested properly. The only difference from the older migration tool is the disk copy progress; otherwise nothing new. The old tool could only do cold migrations and had issues with VMs with multiple disks. The new one can also only do cold migrations and still has issues with multi-disk migrations.
    First, I would like to say again that latest can be fresh, and that we ask users on latest to accept some rough edges in exchange for faster features, even more so for users running from source. The documentation is still in the works and will certainly be ready before this reaches "XOA stable". The resume part doesn't have a dedicated interface: you do a first migration without enabling "stop source", and then, later, you launch the same migration with "stop source" enabled (or with the VM stopped) and it will reuse the already transferred data if the prerequisites are validated. Debugging a migration issue is quite complex, since it involves multiple systems, and we have no access to, or control over, the VMware side; it's even harder without a tunnel. I will need you to look at your journalctl output and check for errors during migration. Also, are the failing disks sharing some specific configuration? What storage do they use? Is there anything relevant on the XCP-ng side?
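    For the journalctl check requested above, a minimal sketch, assuming XO is installed from source and runs under a systemd unit named xo-server (the unit name is an assumption; adjust it to your setup):

      # Follow xo-server logs while the V2V migration runs
      journalctl -u xo-server -f

      # Or, after a failed run, pull the last hour of messages and keep likely-relevant lines
      journalctl -u xo-server --since "1 hour ago" --no-pager | grep -iE 'error|warn|esxi|vmdk'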
  • Hardware related section

    128 Topics
    1k Posts
    olivierlambert
    It is, if you don't use ZFS. ZFS is a memory hog.
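    Not mentioned in the post above, but if you do run ZFS in dom0, one common mitigation is to cap the ARC size via the zfs_arc_max module parameter; a minimal sketch, assuming ZFS on Linux with the zfs kernel module loaded (the 4 GiB value is just an example):

      # Cap the ZFS ARC at 4 GiB for the running system (takes effect immediately)
      echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

      # Make the cap persistent across reboots
      cat <<'EOF' > /etc/modprobe.d/zfs.conf
      options zfs zfs_arc_max=4294967296
      EOF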
  • The place to discuss new additions into XCP-ng

    244 Topics
    3k Posts
    DustyArmstrong
    Testing the agent out on Arch Linux (mainly due to the spotty "support" in the AUR and in general) and it is working fine - better than what I had before (which did not report VM info properly). I've set it up as a systemd service to replace the previous one I had, also working as expected. This would be fun to contribute towards.
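    For reference, a minimal sketch of such a unit; the binary name and install path are assumptions, so adjust them to wherever the guest agent actually lives on your system:

      # Hypothetical unit file for a Xen guest agent binary
      cat <<'EOF' > /etc/systemd/system/xen-guest-agent.service
      [Unit]
      Description=Xen guest agent
      After=network.target

      [Service]
      ExecStart=/usr/local/bin/xen-guest-agent
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target
      EOF
      systemctl daemon-reload
      systemctl enable --now xen-guest-agent.service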
  • Problems with existing pool, problems migrating to new pool

    Solved
    0 Votes
    12 Posts
    1k Views
    S
    @tjkreidl Yeah, thanks. 12 hours, 68 VDIs to coalesce down to 10. Quite the improvement.
  • Oops! We removed busybox

    0 Votes
    5 Posts
    490 Views
    olivierlambert
    Interpreting vulnerability scanner results is a hard task. They often scream about "common cases", but remember that XCP-ng is an appliance, so there are many cases where the findings do not apply. Happy to help you further via our pro support to address your concerns in detail.
  • Rebuild boot / OS drive

    community os rebild disaster-rec
    0 Votes
    3 Posts
    673 Views
    G
    @Danp So, that's the "best" way? Back up the metadata with XOA, rebuild the OS drive, add the "new" server into XOA and restore the metadata to the new install? (I'm not doubting it is, just wanting to be sure we're understanding each other fully.) It seems straightforward, but there's a ton of things I've done over the years that "seemed" pretty straightforward, turned out to be anything but, and at least occasionally left me with no way back.
  • Re-enabling NIC without rebooting host

    0 Votes
    2 Posts
    385 Views
    BenjiReis
    @mmancina Hi, you can try calling xe pif-scan host-uuid=<uuid of your host>; the NIC should appear after that.
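    A short usage sketch around that command; the UUIDs are placeholders:

      # Find the host UUID, re-scan its physical NICs, then verify the PIF shows up
      xe host-list params=uuid,name-label
      xe pif-scan host-uuid=<host-uuid>
      xe pif-list host-uuid=<host-uuid> params=device,MAC,currently-attached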
  • Endless Xapi#getResource /rrd_updates in tasks list

    0 Votes
    85 Posts
    39k Views
    1
    Spoke too soon. Still getting them... just much slower and farther apart, about 5-10 a day.
  • 8.3beta2 dom0 kernel panic, possibly triggered by over-mtu packet?

    0 Votes
    3 Posts
    329 Views
    W
    @bleader No, the opnsense box itself doesn't have wireguard (or anything else VPN-ish) running on it. It's mostly just a NAT with the normal variety of DHCP, DNS, ... services running on it.
  • After installing updates: 0 bytes free, Control domain memory = 0B

    0 Votes
    92 Posts
    42k Views
    nikade
    @Dataslak said in After installing updates: 0 bytes free, Control domain memory = 0B:
      @nikade @olivierlambert @stormi @Danp @yann Just wanted to say to you all: thank you for your contributions and kind, helpful assistance, which got me through this crisis. I would have been in deep trouble without you. I respect your expertise and deeply appreciate that you work so hard to help us dumb users. I have learned a lot, and hope one day to become skilled enough to at least help other new users on this forum. Best wishes, Aslak
    Happy everything worked out, this is what this community is all about. I've gotten a lot of help and given some too; it's all about helping out with the things that you can. With time you'll be able to help out more and more.
  • 8.3 beta, crashed Windows11 VM when trying to snapshot with memory

    0 Votes
    18 Posts
    1k Views
    stormi
    Using dynamic memory is known to cause occasional issues. Thanks for the feedback. Oh, by the way, regarding vTPM and snapshots, we finally established that it's fully supported by XenServer 8 and that the documentation was just out of date.
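    One way to rule out dynamic memory entirely is to pin the static and dynamic limits to the same value; a minimal sketch, with the UUID and size as placeholders:

      # Give the VM a fixed 8 GiB allocation so ballooning/dynamic memory is not used
      xe vm-memory-limits-set uuid=<vm-uuid> \
        static-min=8GiB dynamic-min=8GiB dynamic-max=8GiB static-max=8GiB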
  • cluster slave no connection to pool

    0 Votes
    1 Post
    229 Views
    No one has replied
  • Guest tools in nested XCP-ng

    Solved
    0 Votes
    2 Posts
    208 Views
    olivierlambert
    Hi, it's not possible.
  • XCP/Vates support hours

    support xcp-ng xostor
    0 Votes
    2 Posts
    301 Views
    olivierlambert
    Hi, no.
  • Hosts fencing after latest 8.2.1 update

    0 Votes
    4 Posts
    488 Views
    J
    Right, so yeah, I did just that - disabled HA and have been keeping an eye on the logs as well as general performance in the environment. Glad to know my actions align with your recommendations at least. There are a couple of hosts that get a lot of these sorts of messages in kern.log:
      Apr 25 08:15:33 oryx kernel: [276757.645457] vif vif-21-3 vif21.3: Guest Rx ready
      Apr 25 08:15:54 oryx kernel: [276778.735509] vif vif-22-3 vif22.3: Guest Rx stalled
      Apr 25 08:15:54 oryx kernel: [276778.735522] vif vif-21-3 vif21.3: Guest Rx stalled
      Apr 25 08:16:04 oryx kernel: [276788.780828] vif vif-21-3 vif21.3: Guest Rx ready
      Apr 25 08:16:04 oryx kernel: [276788.780836] vif vif-22-3 vif22.3: Guest Rx ready
    Am I wrong to attribute this to issues within specific VMs (i.e. not a hypervisor performance issue)? I know one of the VMs that causes these is a very old CentOS 5 testing VM one of my devs uses, and the messages stop when it's powered down. Is there any way to easily associate those vifs with the actual VMs they are attached to? My google-fu failed me on that. Other than that, I noticed my NIC firmware is a bit old on the X710-DA2s I use, so I'm going through and upgrading those, with no noticeable changes. I'm fairly hesitant to re-enable HA without tracking down the root cause.
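    On the question of mapping vifs to VMs: the vifX.Y naming encodes the Xen domain ID (X) and the virtual interface index (Y), so a quick sketch with stock dom0 tooling:

      # vif21.3 means Xen domain ID 21, virtual interface (device) 3.
      # List running VMs with their current domain IDs to find which VM owns it:
      xe vm-list power-state=running params=name-label,dom-id

      # Or get the same name-to-domain-ID mapping straight from the toolstack:
      xl list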
  • Stuck in maintenance mode after joining pool.

    0 Votes
    2 Posts
    693 Views
    Danp
    @DwightHat said in Stuck in maintenance mode after joining pool.:
      When I try to look at the host in Xen Orchestra it just says "An error has occurred".
    Check the browser's Dev Tools console when this happens; it will likely contain some additional details. You likely need to check the logs to find out why you are encountering this issue. Many times the "stuck in maintenance mode" issue is related to an unmountable storage repository. https://docs.xcp-ng.org/troubleshooting/log-files/
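    A quick way to check the unmountable-SR theory from dom0, using a stock xe call and the standard XCP-ng storage manager log path:

      # List storage plugs that failed to attach on any host
      xe pbd-list currently-attached=false params=uuid,host-uuid,sr-uuid

      # Then look at the storage manager log on the affected host for the reason
      grep -i error /var/log/SMlog | tail -n 50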
  • Watchdog support for Linux and Windows guests

    0 Votes
    4 Posts
    622 Views
    olivierlambert
    This post specifically: https://xcp-ng.org/forum/post/57441 The person had trouble writing in English but wanted to report that this configuration worked. You can use the xen_wdt backend to rely on a Xen operation to force-restart the VM. I have no idea how it works on Windows.
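    A minimal sketch of the Linux-guest side, assuming a systemd-based distro; the 30-second timeout is an arbitrary example:

      # Inside the Linux guest: load the Xen watchdog driver (provides /dev/watchdog)
      modprobe xen_wdt
      echo xen_wdt > /etc/modules-load.d/xen_wdt.conf

      # Let systemd pet the watchdog; if the guest hangs, Xen force-restarts it
      mkdir -p /etc/systemd/system.conf.d
      cat <<'EOF' > /etc/systemd/system.conf.d/10-watchdog.conf
      [Manager]
      RuntimeWatchdogSec=30s
      EOF
      systemctl daemon-reexec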
  • 0 Votes
    12 Posts
    1k Views
    A
    @lawrencesystems I think this is quite common when you need to test certain scenarios with multiple hypervisors (backup, migrations, etc.). You only need a couple of HVs with a few tiny running VMs. We have done this setup with nested esxi many times for testing purposes. And since e.g. Ubuntu and Windows work this way, the problem is probably specific to Debian (and maybe others?).
  • PXE Boot a VM and use HTTP for kernel/initrd download

    0 Votes
    4 Posts
    976 Views
    olivierlambert
    Question for @gduperrey or @stormi
  • USB pass-through device with wrong product and vendor identifiers on 8.2

    Unsolved
    0 Votes
    3 Posts
    339 Views
    I
    @infodavid Finally, I followed an existing topic and configured nut-server on the hypervisor to access the UPS via USB. I know that Olivier is not fully on board with modifying the host, but IMO it is an acceptable change on my XCP-ng host.
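    For reference, a minimal sketch of that kind of setup; the package and service names in dom0 are assumptions, and as noted above this does modify the host:

      # Install NUT in dom0 (package name/availability is an assumption on XCP-ng)
      yum install nut

      # Declare the USB UPS; usbhid-ups covers most USB HID UPS models
      cat <<'EOF' >> /etc/ups/ups.conf
      [myups]
        driver = usbhid-ups
        port = auto
      EOF

      upsdrvctl start                      # start the UPS driver
      systemctl enable --now nut-server    # start upsd (unit name may differ)
      upsc myups                           # query the UPS to confirm it responds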
  • VMs are abruptly getting shutdown

    0 Votes
    14 Posts
    1k Views
    J
    @lritinfra Something to consider: HPE Intelligent Provisioning is the main way, outside of HPE iLO, HPE SUM or HPE SPP, to update the server's hardware firmware, if you aren't using individual RPMs or SCEXE files for the task. HPE Intelligent Provisioning and HPE SPP can update both firmware and BIOS, and not all firmware updates will be in a format compatible with HPE iLO. I'm not sure if it has changed, but an Administrator Password set in the BIOS (at minimum) also locks out (disables) access to the Erase option in HPE Intelligent Provisioning. At least it does on my only HPE server running an up-to-date BIOS, HPE iLO and HPE Intelligent Provisioning. Thus a disabled HPE Intelligent Provisioning doesn't help with staying up to date enough to fix vulnerabilities and bugs at the hardware or firmware level.
  • 1 Vote
    6 Posts
    1k Views
    R
    @rjt Note to self about creating and managing appliances at the xe CLI:
      xe help --all | egrep -i '(appliance)'   # find xe appliance-related commands
    appliance-assert-can-be-recovered, appliance-create, appliance-destroy, appliance-list, appliance-param-clear, appliance-param-get, appliance-param-list, appliance-param-set, appliance-recover, appliance-shutdown, appliance-start
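    A short usage sketch to go with that list; the UUIDs are placeholders, and assigning a VM via vm-param-set assumes the VM's appliance field is writable from the CLI (as it is in the XenAPI):

      # Create a vApp (appliance) and note its UUID
      xe appliance-create name-label=my-vapp
      xe appliance-list params=uuid,name-label

      # Attach a VM to the appliance (sets the VM's 'appliance' field)
      xe vm-param-set uuid=<vm-uuid> appliance=<appliance-uuid>

      # Start or stop every VM in the appliance together
      xe appliance-start uuid=<appliance-uuid>
      xe appliance-shutdown uuid=<appliance-uuid>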