XCP-ng
    • Profile
    • Following 0
    • Followers 18
    • Topics 41
    • Posts 1,458
    • Groups 7

    Topics

    • stormi

      Second (and final) Release Candidate for QCOW2 image format support

      News
      2 Votes
      2 Posts
      84 Views
      stormi
      Here's a work-in-progress version of the FAQ that will go with the release.

      QCOW2 FAQ

      How much storage space do I need on my SR for large QCOW2 disks to support snapshots?
      Whether the SR is thin or thick provisioned, the answer is the same as for VHD. On a thin-provisioned SR a snapshot is almost free: just a bit of metadata for the few new VDIs. On a thick-provisioned SR, you need the space for the base copy, the snapshot and the active disk.

      Must I create new SRs to create large disks?
      No. Most existing SRs support QCOW2. LinstorSR and SMBSR (for VDIs) do not.

      Can we have VDIs of different types (VHD and QCOW2) on the same SR?
      Yes, it's supported. Any existing SR (unless unsupported, e.g. LinstorSR) will be able to create QCOW2 alongside VHD after installing the new sm package.

      What happens in live migration scenarios?
      The preferred-image-formats parameter on the PBD of the SR's master determines the destination format of a migration:

      source         | preferred-image-formats: VHD or unset   | preferred-image-formats: qcow2
      qcow2 > 2 TiB  | X (impossible: VHD is limited to 2 TiB) | qcow2
      qcow2 < 2 TiB  | vhd                                     | qcow2
      vhd            | vhd                                     | qcow2

      Can we create QCOW2 VDIs from XO?
      XO doesn't yet let you choose the image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR without any preferred-image-formats configuration, or if preferred-image-formats contains qcow2, it will create a QCOW2 VDI.

      Can we change the cluster size?
      Yes. On file-based SRs, you can create a QCOW2 VDI with a different cluster size with:

      qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
      xe sr-scan uuid=<SR UUID>   # to introduce it in the XAPI

      The qemu-img command will print the file name; the VDI is the <VDI UUID>.qcow2 from that output. We have not exposed the cluster size in any API call, which would allow you to create these VDIs more easily.

      Can you create an SR which only ever manages QCOW2 disks? How?
      Yes, you can, by setting the preferred-image-formats parameter to qcow2 only.
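      As an aside, the live-migration decision table above can be sketched as a tiny shell helper. This is purely illustrative: dest_format is my own function, not an XCP-ng command, and the "above/below 2 TiB" distinction is modelled with an integer size argument.

```shell
# Sketch of the live-migration format table (assumption: this helper is
# illustrative only, not part of XCP-ng). Arguments:
#   $1 = source image format (vhd | qcow2)
#   $2 = source virtual size in TiB (integer)
#   $3 = destination preferred-image-formats (vhd | qcow2), where "vhd"
#        also stands for "no format specified"
# Prints the destination format, or "X" when the migration is impossible
# because VHD cannot hold a disk larger than 2 TiB.
dest_format() {
    src=$1; size_tib=$2; pref=$3
    if [ "$pref" = "qcow2" ]; then
        echo qcow2      # destination prefers QCOW2: always QCOW2
    elif [ "$src" = "qcow2" ] && [ "$size_tib" -gt 2 ]; then
        echo X          # too big for VHD: migration impossible
    else
        echo vhd        # fits in VHD and destination prefers VHD
    fi
}

dest_format qcow2 3 vhd     # prints: X
dest_format qcow2 1 qcow2   # prints: qcow2
dest_format vhd 1 vhd       # prints: vhd
```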
      Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
      You can make an SR manage QCOW2 by modifying the preferred-image-formats parameter in the PBD's device-config. Modifying the PBD requires deleting it and recreating it with the new parameter. This implies stopping access to all of the SR's VDIs on the master (for a shared SR, you can migrate all VMs with VDIs to other hosts in the pool and temporarily unplug the master's PBD to recreate it; the parameter only needs to be set on the master's PBD). If the SR had VHDs, they will continue to exist and be usable, but they won't be automatically converted to QCOW2.

      Can I resize my VDI above 2 TiB?
      A disk in VHD format can't be resized above 2 TiB, and no automatic format change is implemented. It is technically possible to resize above 2 TiB after a migration that transferred the VDI to QCOW2.

      Is there anything to do to enable the new feature?
      Installing the updated packages that support QCOW2 is enough (packages: xapi, sm, blktap). Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.

      Can I create QCOW2 disks smaller than 2 TiB?
      Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure preferred image formats on the SR.

      Is QCOW2 the default format now? Is it the best practice?
      We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will be progressively deprecated.

      What's the maximum disk size?
      The current limit is set to 16 TiB. It's not a technical limit; it simply corresponds to what we have tested. We will raise it progressively in the future. We'll be able to go up to 64 TiB before meeting a new technical limit related to live migration support, which we will address at that point. The theoretical maximum is even higher.
      We're not limited by the image format anymore.

      Can I import my KVM QCOW2 disks into XCP-ng without modification?
      No. You can import them, but they need to be configured to boot with the right drivers, as described in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt (you can just skip the conversion to VHD). So whether it works out of the box depends on the guest's configuration.
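      To make the manual steps from the FAQ concrete, here is a hypothetical xe session. All UUIDs and sizes are placeholders; the parameter names sm-config:image-format and preferred-image-formats are taken from the FAQ above, and the exact syntax may differ in the final release.

```shell
# Create a QCOW2 VDI smaller than 2 TiB explicitly (per the FAQ, below the
# 2 TiB threshold the format must be requested via sm-config):
xe vdi-create sr-uuid=<SR UUID> name-label=my-qcow2-disk \
    virtual-size=500GiB sm-config:image-format=qcow2

# Make an SR prefer QCOW2 for new disks: recreate the *master's* PBD with
# the preference in device-config (also re-specify the SR's original
# device-config keys, which pbd-create does not carry over):
xe pbd-unplug uuid=<PBD UUID>
xe pbd-destroy uuid=<PBD UUID>
xe pbd-create host-uuid=<master host UUID> sr-uuid=<SR UUID> \
    device-config:preferred-image-formats=qcow2
xe pbd-plug uuid=<new PBD UUID>
```

      As the FAQ notes, for a shared SR you can migrate VMs to other hosts while the master's PBD is briefly unplugged; only the master's PBD needs the parameter.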
    • stormi

      XCP-ng 8.3 updates announcements and testing

      News
      1 Vote
      437 Posts
      179k Views
      A
      @rzr Always a reboot after big updates, as instructed/required.
    • stormi

      Xen 4.17 on XCP-ng 8.3!

      News
      3 Votes
      37 Posts
      16k Views
      Forza
      @olivierlambert thanks for confirming!
    • stormi

      XCP-ng 8.2.1 refreshed installation ISO - better hardware support

      News
      3 Votes
      9 Posts
      3k Views
      olivierlambert
      Thanks for your feedback @loopway! Happy to get your home lab running.
    • stormi

      XCP-ng 8.3 betas and RCs feedback 🚀

      News
      5 Votes
      792 Posts
      2m Views
      stormi
      This is the end for this nice and useful thread, as XCP-ng 8.3 is no longer a beta nor an RC: it's a supported release now. However, we still need your feedback, as we publish update candidates ahead of their official release, for users to test them. Right now, there's a security update candidate which is to be tested. I strongly invite everyone who is currently subscribed to this thread to now subscribe to the new, dedicated thread: XCP-ng 8.3 updates announcements and testing, and to verify that their settings allow sending notification e-mails and/or other forms of notification.
    • stormi

      Drivers for recent homelab NICs in XCP-ng 8.2

      Hardware
      5 Votes
      41 Posts
      22k Views
      A
      @bigdweeb Yes.
    • stormi

      XCP-ng 8.3 public alpha 🚀

      News
      7 Votes
      264 Posts
      347k Views
      stormi
      We just released XCP-ng 8.3 beta 1! I opened a new thread for us to discuss it and for you to provide feedback: https://xcp-ng.org/forum/topic/7464/xcp-ng-8-3-beta Thanks for all the feedback already provided here, and see you on this new thread! In order not to miss anything (and, let's be honest, for me to be sure that messages on the new thread reach you all), the best course of action is: open the new thread right now and use the "watch" button. [image: 1687457779384-53fac025-6e0c-465b-97ab-5ca73a97bd93-image.png] And let's answer this common and legitimate question: how to upgrade from alpha to beta? Well, there's nothing to do, just update as usual. In fact, you might already be in beta state. However, as indicated in the blog post, we need a lot of testing of the installer, so it's also an option to start from the installation ISO again.
    • stormi

      XCP-ng 8.2.1 (maintenance update) - final testing sprint

      News
      0 Votes
      40 Posts
      22k Views
      stormi
      @KPS Hi. This thread is now inactive, so please open a new thread dedicated to your issue.
    • stormi

      XCP-ng 8.2.1 (maintenance update) - ready for testing

      News
      4 Votes
      30 Posts
      15k Views
      stormi
      This discussion now continues here: https://xcp-ng.org/forum/topic/5594/xcp-ng-8-2-1-maintenance-update-final-testing-sprint
    • stormi

      Guest UEFI Secure Boot on XCP-ng

      Development
      1 Vote
      25 Posts
      21k Views
      A
      @stormi Nope.... my mistake. Now Ubuntu 20.04 and Windows 2016 boot with UEFI Secure Boot enabled.

      # secureboot-certs install
      No arguments provided to command install, default arguments will be used:
      - PK: default
      - KEK: default
      - db: default
      - dbx: latest
      Downloading https://www.microsoft.com/pkiops/certs/MicCorKEKCA2011_2011-06-24.crt...
      Downloading https://www.microsoft.com/pkiops/certs/MicCorUEFCA2011_2011-06-27.crt...
      Downloading https://www.microsoft.com/pkiops/certs/MicWinProPCA2011_2011-10-19.crt...
      Downloading https://uefi.org/sites/default/files/resources/dbxupdate_x64.bin...
      Successfully installed certificates to the XAPI DB for pool.
    • stormi

      Refreshed XCP-ng 8.2.0 ISOs: 8.2.0-2 - testing

      Development
      0 Votes
      14 Posts
      4k Views
      stormi
      @andrew Yes indeed, those are known shortcomings:
      - The installer does not wipe previous RAID setups.
      - It does not offer to create another RAID for data storage.
      There's an extensive guide about this at https://xcp-ng.org/docs/guides.html#software-raid-storage-repository
    • stormi

      New guest tools ISO for Linux and FreeBSD. Can you help with the tests?

      Development
      2 Votes
      62 Posts
      51k Views
      A
      @Pierre-Briec, @stormi I had a look at getting the xe-guest-utilities working on Ipfire v2 (core 173, the latest version).

      Using a new /usr/sbin/xe-linux-distribution script, like suggested here, allows it to detect the Ipfire. I then manually copied the binaries and scripts from the Linux tar file into the folders in Ipfire, since the install script did not seem to handle Ipfire properly.

      When starting the daemon using /etc/init.d/xe-linux-distribution, the next problem was that the "action" function does not exist in the /etc/init.d/functions file in Ipfire. So I just edited the script, replacing the "else" with a "fi" in the test where the functions file is sourced, so that the locally defined action method is used. Then the agent started fine.

      Then I also saw the issue of the IP address not being reported. In my setup, there are two reasons for this. One is that Ipfire uses "red0", "green0", "blue0" etc. as interface names, which the xe-guest-utilities will not consider. The other reason is that I do PCI passthrough of 3 network cards to the Ipfire, and hence do not use the "vif" interface/network that XCP-ng makes available to it, although the "green0" is really on the same network as the "vif" in my setup.

      This was using the 7.30 tar file from the XCP ISO, I think. I then cloned the 7.33 / master version of xe-guest-utilities from GitHub and used that thereafter. I manually changed and rebuilt the xe-guest-utilities, adding "red", "green" and "blue" to the list of interface prefixes that get considered, but it did not help. I suspect the reason is that these interfaces do not have a /sys/class/net/eth0/device/nodename entry, which contains a reference to the "vif" that XCP-ng knows about, as I understand it. So /sys/class/net/eth0/device/nodename exists, but eth0 is not assigned any IP address, since it is not used by Ipfire, while there is no /sys/class/net/green0/device/nodename entry.

      I am not sure who is "creating" this "nodename" entry, but I suspect it is Xen. And I suspect it is missing since the green0 interface has no real relationship with the dom0.

      But then I also got more questions around what is actually meant to be displayed as "network" info in the XOA web UI. Is it only the network between dom0 and domU? Or ideally all networks defined on the domU (i.e. red0, blue0 and orange0)?

      And I also think I spotted a bug on the "Stats" page of XOA, since under "Disk throughput" it seems like "xvda" and "xvdd" are always displayed, even if the host only has one disk, "xvda". But I should report that as a bug if I do not find it already reported / known.

      While playing with this, I also noticed that the management agent version was not properly displayed, i.e. not at all. This seems to be caused by the version constants not being replaced while building the daemon. I am not a Go build expert, so I'll investigate it a bit more. But it seems I'm not the only one with that issue, because the same problem seems to exist with the xe-guest-utilities that are part of the Alpine Linux 3.17 distribution.

      I do not think there are that many people running Ipfire on XCP-ng/Xen. I've been briefly involved in some pull requests against Ipfire, so I might look at making one for getting the xe-guest-utilities into Ipfire itself, but since usage is not high, I doubt it makes much sense.

      Thanks for a great tool in XCP-ng, I enjoy using it in my home setup.

      Regards
      Alf Høgemark
    • stormi

      A major security flaw in sudo

      News
      1 Vote
      6 Posts
      2k Views
      stormi
      The update is now available for everyone https://xcp-ng.org/blog/2021/01/28/security-issue-in-sudo/
    • stormi

      XCP-ng 8.2.0 RC now available!

      News
      5 Votes
      58 Posts
      37k Views
      olivierlambert
      Great news! Thanks for the feedback
    • stormi

      New UEFI implementation for VMs

      Development
      2 Votes
      3 Posts
      2k Views
      R
      Initial brief test seems OK. Will see if I can do more of the tests later... Updated from 8.1 via yum, which caused Windows 10 & Windows 2019 Server to hang on the Tiano logo. Interestingly, a Debian 10 UEFI VM worked fine... After the update to uefistored, both Windows VMs started in recovery and did whatever it is Windows does besides spin dots on your screen. After a reboot, both Windows 10 2004 and Windows 2019 Server booted just fine.
    • stormi

      XCP-ng 8.2.0 beta now available!

      News
      4 Votes
      42 Posts
      24k Views
      J
      @hoerup Hi, I can't remember too much to be honest. I created a Debian 10 VM with a 20GB disk and set up all the stuff that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method - so Docker etc. This would be my 'Ceph admin VM'. Once that was all sorted I cloned the VM 3 times for my actual Ceph pool and changed the hostname and static IP etc. I've got 3 hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph, following the documentation. The only issues I ran into, and any other tips, are in the other post I linked to. Now Ceph is all containerised it all seems a bit too easy! Hope they're not my famous last words!! It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me. Cheers.
    • stormi

      Looking for a temporary test host with a nVIDIA GPU

      Development
      0 Votes
      27 Posts
      13k Views
      stormi
      @stormi said in Looking for a temporary test host with a nVIDIA GPU: For the tests we had a need for 1 week ago, it's now fine; however, I'll probably update this thread once we have an ISO image that can be tested by users who have such hardware. Note: it's not about using the GPU itself, but simply about making sure that the hypervisor works well with the changes we made to replace the not-built-by-us gpumon tool, whose absence would make XCP-ng 8.1 unbootable (as we sadly found out after the release), with a dummy one built by us. Now it's time for the tests I was talking about earlier. XCP-ng 8.2 beta is now available with our dummy gpumon and we need users who have nVIDIA GPUs to test it and give us feedback. There may be situations we have not tested where our dummy gpumon is not enough to make the XAPI happy, despite the fact that we don't support nVIDIA vGPUs (that feature requires proprietary software from Citrix).
    • stormi

      "CROSSTalk" CPU vulnerabilty (cross-core data leak)

      News
      0 Votes
      29 Posts
      13k Views
      D
      @stormi Exactly. Must've been related to something other than just the latest packages.
    • stormi

      XCP-ng 7.6 end of life, 8.0 best-effort support

      News
      0 Votes
      1 Post
      453 Views
      No one has replied
    • stormi

      XCP-ng 8.1 Release Candidate now available!

      News
      8 Votes
      89 Posts
      78k Views
      olivierlambert
      I'm not sure Unitrends supports XCP-ng. Please refer to your backup vendor. Current backup solutions claiming to support XCP-ng: https://xcp-ng.org/docs/backup.html