XCP-ng 8.2.0 RC now available!
-
Just a quick follow-up on my last post on restoring backups from an 8.1 to an 8.2.0 RC pool.
Successfully tested restoring Linux VMs (Debian 9/10) and Windows 10 VMs (but no UEFI) from 8.1 to 8.2. Tried newly created ("plain") VMs and production VMs (but not all). So apart from that one odd VM, backup and restore works on my homelab / playlab.
-
XCP-ng 8.2 officially released https://xcp-ng.org/blog/2020/11/18/xcp-ng-8-2-lts/
-
@gskger said in XCP-ng 8.2.0 RC now available!:
Just a quick follow-up on my last post on restoring backups from an 8.1 to an 8.2.0 RC pool.
Successfully tested restoring Linux VMs (Debian 9/10) and Windows 10 VMs (but no UEFI) from 8.1 to 8.2. Tried newly created ("plain") VMs and production VMs (but not all). So apart from that one odd VM, backup and restore works on my homelab / playlab.
We're still debugging this VM startup issue and will fix it as soon as possible.
-
@stormi said in XCP-ng 8.2.0 RC now available!:
XCP-ng 8.2 officially released https://xcp-ng.org/blog/2020/11/18/xcp-ng-8-2-lts/
And, of course, many thanks to all the pre-release testers!
-
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
CephFS is working nicely, but the update deleted my previous secret in /etc, so I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh
Were you not able to attach the pre-existing SR on CephFS? Depending on your answer, I'll take a look at the documentation or the driver.
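For reference, re-attaching a pre-existing shared SR normally comes down to re-introducing it and plugging a PBD on each host. This is only a sketch: the device-config keys (server, serverpath, options) and the secret file path are assumptions based on the CephFS driver behaving like the NFS one, so double-check against the CephFS SR documentation.
# Re-introduce the SR by its existing UUID
xe sr-introduce uuid=<sr uuid> type=cephfs shared=true content-type=user name-label=ceph
# Create and plug a PBD on each host (device-config keys are assumptions)
xe pbd-create sr-uuid=<sr uuid> host-uuid=<host uuid> \
  device-config:server=<monitor ip> device-config:serverpath=/ \
  device-config:options=name=admin,secretfile=/etc/ceph/admin.secret
xe pbd-plug uuid=<pbd uuid>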
-
@r1 Good question, I don't know!!
I'll probably find out when I update my main host shortly.
-
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
@stormi Just tried restoring a backup from yesterday and still no luck. Also, I cannot reproduce the successful copy I thought happened the other day, so I can only assume that last time, when I thought it worked, I booted a VM that was already on the host prior to the upgrade to 8.2. At least it appears to consistently not work.
Ping something across if you want it testing.
An update candidate is now available that should fix that backup restore / VM copy issue.
Install it with:
yum clean all --enablerepo=xcp-ng-testing
yum update uefistored --enablerepo=xcp-ng-testing
I don't think a reboot is needed, maybe not even a toolstack restart. If you don't see better behaviour with the update, then first try a toolstack restart and then a reboot.
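To confirm the package actually updated, and to restart the toolstack only if needed, something along these lines should do (the version shown in the comment is just an example):
rpm -q uefistored      # should now report a newer build pulled from the testing repo
xe-toolstack-restart   # only if VM copy / backup restore still misbehaves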
-
@stormi said in XCP-ng 8.2.0 RC now available!:
yum update uefistored
I could only get it (uefistored-0.2.6-1.xcpng8.2.x86_64) to update by
yum update uefistored --enablerepo=xcp-ng-testing
But it has done the trick. No toolstack restart or reboot needed either.
-
@jmccoy555 you're right, I've fixed my post.
-
I see that installing XCP-ng 8.2.0 will create ext4 storage repositories by default. Why isn't dom0 also ext4?
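The mount information below comes from dom0; an invocation along these lines should reproduce it (the exact command used is an assumption):
df -Th / /var/log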
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
/dev/sda1   ext3   18G  2.2G    15G   13%  /
/dev/sda5   ext3  3.9G   20M   3.6G    1%  /var/log
-
@deoccultist To limit the maintenance work, we're not diverging from what Citrix Hypervisor does unless this brings significant value, and they still install dom0 on ext3.
-
@r1 said in XCP-ng 8.2.0 RC now available!:
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
CephFS is working nicely, but the update deleted my previous secret in /etc, so I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh
Were you not able to attach the pre-existing SR on CephFS? Depending on your answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
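For anyone hitting the same thing, the forget / re-create sequence looks roughly like this. It is only a sketch: the device-config keys and the secret file path are assumptions, so check the CephFS SR documentation for the exact syntax.
# Forget the old SR record, then create it again with the CephFS type
xe sr-forget uuid=<old sr uuid>
xe sr-create type=cephfs shared=true name-label=ceph \
  device-config:server=<monitor ip> device-config:serverpath=/ \
  device-config:options=name=admin,secretfile=/etc/ceph/admin.secret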
Anyway all sorted now and back up and running with no CephFS issues!
-
@olivierlambert I just wanted to point out how I solved this issue. It happened to me when I had a host kicked from a pool while the host was offline, and I manually re-added it. Long story short, I was remote and didn't want to reinstall via IPMI. After joining the pool, I saw that no iSCSI IQN information was showing, either on the general page in XCP-ng Center or via this:
xe host-list uuid=<host uuid> params=other-config
I left the pool and re-added it via XOA. I think I ran into it because of a bad join, and leaving then re-joining rebuilt all of the appropriate configuration files.
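If leaving and re-joining the pool is not an option, it may also be possible to set the IQN directly in the host's other-config using the standard xe parameter commands; treat this as an untested sketch rather than a confirmed fix:
xe host-param-set uuid=<host uuid> other-config:iscsi_iqn=<desired iqn>
xe host-param-get uuid=<host uuid> param-name=other-config param-key=iscsi_iqn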
All the best and massive kudos for such a great product.
-Travis
-
Great news! Thanks for the feedback