XCP-ng

    XCP-ng 8.2.0 RC now available!

    News · 58 Posts · 14 Posters · 23.4k Views
    • r1 (XCP-ng Team) @jmccoy555

      @jmccoy555 said in XCP-ng 8.2.0 RC now available!:

      CephFS is working nicely, but the update deleted my previous secret in /etc and I had to reinstall the extra packages and recreate the SR and then obviously move the virtual disks back across and refresh

      Were you not able to attach the pre-existing SR on CephFS? I'll take a look at the documentation and the driver accordingly.
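
      For reference, re-attaching a pre-existing SR rather than recreating it is normally done with xe sr-introduce followed by a new PBD. This is only a sketch, assuming the CephFS driver registers as type cephfs and takes server/serverpath device-config keys — verify both against the actual driver before use:

      ```shell
      # Sketch: re-attach a pre-existing SR instead of recreating it.
      # Assumes the SR UUID survived and that the CephFS driver uses
      # type=cephfs with server/serverpath device-config keys --
      # check the driver source for the real keys.
      xe sr-introduce uuid=<sr uuid> type=cephfs \
        name-label="CephFS SR" content-type=user shared=true
      xe pbd-create sr-uuid=<sr uuid> host-uuid=<host uuid> \
        device-config:server=<mon address> device-config:serverpath=/
      xe pbd-plug uuid=<pbd uuid>
      ```

      If this works, the existing virtual disks reappear under the SR without copying them back across.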

    • jmccoy555 @r1

        @r1 Good question, I don't know!! 😖

        I'll probably find out when I update my main host shortly.

        • stormi (Vates 🪐 XCP-ng Team) @jmccoy555

          @jmccoy555 said in XCP-ng 8.2.0 RC now available!:

          @stormi Just tried restoring a backup from yesterday and still no luck. Also, I cannot reproduce the successful copy I thought happened the other day, so I can only assume that last time, when I thought it worked, I booted a VM that was on the host prior to the upgrade to 8.2. At least it appears to consistently not work 😖

          Ping something across if you want it testing.

          An update candidate is now available that should fix that backup restore / VM copy issue.

          Install it with:

          yum clean all --enablerepo=xcp-ng-testing
          yum update uefistored --enablerepo=xcp-ng-testing
          

          I don't think a reboot is needed, maybe not even a toolstack restart. If you don't see better behaviour after the update, try a toolstack restart first, then a reboot.
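
          If a toolstack restart does turn out to be needed, the usual commands on an XCP-ng host would be (shown only as a reminder; as the thread later confirms, neither was necessary here):

          ```shell
          # Confirm which uefistored build is installed after the update
          rpm -q uefistored

          # Restart the toolstack without rebooting the host
          xe-toolstack-restart
          ```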

          • jmccoy555 @stormi

            @stormi said in XCP-ng 8.2.0 RC now available!:

            yum update uefistored

            I could only get it (uefistored-0.2.6-1.xcpng8.2.x86_64) to update by running yum update uefistored --enablerepo=xcp-ng-testing

            But it has done the trick. No toolstack restart or reboot needed either.

            • stormi (Vates 🪐 XCP-ng Team) @jmccoy555

              @jmccoy555 you're right, I've fixed my post.

              • DeOccultist

                I see that installing XCP-ng 8.2.0 will create ext4 storage repositories by default. Why isn't dom0 also ext4?

                Filesystem Type Size Used Avail Use% Mounted on
                /dev/sda1 ext3 18G 2.2G 15G 13% /
                /dev/sda5 ext3 3.9G 20M 3.6G 1% /var/log

                • stormi (Vates 🪐 XCP-ng Team) @DeOccultist

                  @deoccultist To limit maintenance work, we don't diverge from what Citrix Hypervisor does unless it brings significant value, and they still install dom0 on ext3.

                  • jmccoy555 @r1

                    @r1 said in XCP-ng 8.2.0 RC now available!:

                    @jmccoy555 said in XCP-ng 8.2.0 RC now available!:

                    CephFS is working nicely, but the update deleted my previous secret in /etc and I had to reinstall the extra packages and recreate the SR and then obviously move the virtual disks back across and refresh

                    Were you not able to attach the pre-existing SR on CephFS? Accordingly, I'll take a look in the documentation or the driver.

                    No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.

                    I guess this is because the test driver presented the CephFS storage as an NFS type, so I had to forget the SR and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I assume this was just a one-off from moving off the test driver.

                    Anyway, all sorted now and back up and running with no CephFS issues! 🙂

                    • 1845solutions @olivierlambert

                      @olivierlambert I just wanted to point out how I solved this issue. It happened to me after a host was kicked from a pool while it was offline and I manually re-added it. Long story short: I was remote and didn't want to reinstall via IPMI. After joining the pool, no iSCSI IQN information was showing, either on the general page in XCP-ng Center or via this:

                      xe host-list uuid=<host uuid> params=other-config
                      

                      I left the pool and re-added the host via XOA. I think I ran into this because of a bad join; leaving and then rejoining rebuilt all of the appropriate configuration files.
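
                      As an alternative to leaving and rejoining the pool, the missing IQN can in principle be written back into other-config by hand. This is a hypothetical sketch, not what was done in the thread, and the IQN value is an invented example:

                      ```shell
                      # Inspect the host's other-config map, which normally
                      # holds the iscsi_iqn key
                      xe host-list uuid=<host uuid> params=other-config

                      # Hypothetical fix: set the IQN manually
                      # (example value shown; use your own)
                      xe host-param-set uuid=<host uuid> \
                        other-config:iscsi_iqn=iqn.2020-12.com.example:host1
                      ```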

                      All the best and massive kudos for such a great product.
                      -Travis

                      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                        Great news! Thanks for the feedback 🙂
