XCP-ng 8.2.0 RC now available!
-
Updates are available on the 8.2 repositories for the recent "Platypus" security vulnerability.
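The usual update procedure applies, host by host (pool master first); this is just the generic sequence, nothing specific to this particular fix:
yum update
reboot    # when the update touches Xen, the kernel or microcode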
-
@jmccoy555 We think we managed to reproduce the issue. I'll let you know about the outcome.
-
@stormi Just tried restoring a backup from yesterday and still no luck. Also, I cannot reproduce the successful copy I thought happened the other day, so I can only assume that last time, when I thought it worked, I booted a VM that was already on the host prior to the upgrade to 8.2. At least it appears to consistently not work.
Ping something across if you want it tested.
-
I am a bit out of my knowledge zone here, so please excuse me if I post something unrelated or obvious.
EDIT - it looks like only this VM has a problem when restored from the 8.1 pool to the 8.2.0 RC pool (good pick). Did some more restores and those restored fine.
Restored the plain backup of a Debian 10 VM (based on the Debian 9 template) from my homelab (DELL R210 II, Xeon E3-1270 V2, 32GB RAM, 8.1 fully patched) to my playlab (DELL Optiplex 9010, i5-3550, 16GB RAM, 8.2.0 RC fully patched). My homelab and playlab have different (Synology) NFS SRs connected. I am using XO from source (xo-server 5.70.0, xo-web 5.74.0) to manage my homelab and playlab pools. The restore was successful and the VM does spin up after about 2 minutes, but with some issues during boot:
[ 0.764806] vbd vbd-5696: 19 xenbus_dev_probe on device/vbd/5696
followed by
Gave up waiting for suspend/resume device
and
[*** ] A start job is running for dev-disk-by\x2duuid-7b8f0998[uuid like string].device (8s / 1 min 30s)
and finally (after 1min 30s)
[ TIME ] Timed out for dev-disk-by\x2duuid-7b8f0998[some uuid, see above]
[DEPEND] Dependency failed for /dev/disk/by-uuid/[some uuid, see above]
[DEPEND] Dependency failed for SWAP.
The VM finally starts and is fully working but does the above after every reboot.
fdisk -l
shows the expected drive/partition/swap. Most likely related to the somewhat different host?
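In case it helps anyone seeing the same boot delay, a minimal way to check (inside the restored Debian guest; nothing here is specific to this VM) is to compare the UUIDs the restored disk actually has with the ones referenced for swap and resume:
blkid                                     # UUIDs the restored disk actually exposes
cat /etc/fstab                            # UUIDs referenced for / and swap
cat /etc/initramfs-tools/conf.d/resume    # RESUME= UUID baked into the initramfs
update-initramfs -u                       # regenerate the initramfs after fixing RESUME=
-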
Just a quick follow-up on my last post on restoring backups from an 8.1 to an 8.2.0 RC pool.
Successfully tested restoring Linux VMs (Debian 9/10) and Windows 10 VMs (but no UEFI) from 8.1 to 8.2. Tried newly created ("plain") VMs and production VMs (but not all). So apart from that one odd VM, backup and restore works on my homelab / playlab.
-
XCP-ng 8.2 officially released https://xcp-ng.org/blog/2020/11/18/xcp-ng-8-2-lts/
-
@gskger said in XCP-ng 8.2.0 RC now available!:
Just a quick follow-up on my last post on restoring backups from an 8.1 to an 8.2.0 RC pool.
Successfully tested restoring Linux VMs (Debian 9/10) and Windows 10 VMs (but no UEFI) from 8.1 to 8.2. Tried newly created ("plain") VMs and production VMs (but not all). So apart from that one odd VM, backup and restore works on my homelab / playlab.
We're still debugging this VM startup issue and will fix it as soon as possible.
-
@stormi said in XCP-ng 8.2.0 RC now available!:
XCP-ng 8.2 officially released https://xcp-ng.org/blog/2020/11/18/xcp-ng-8-2-lts/
And, of course, many thanks to all the pre-release testers!
-
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
CephFS is working nicely, but the update deleted my previous secret in /etc and I had to reinstall the extra packages and recreate the SR and then obviously move the virtual disks back across and refresh
Were you not able to attach the pre-existing CephFS SR? Depending on your answer, I'll take a look at the documentation or the driver.
-
@r1 Good question, I don't know!!
I'll probably find out when I update my main host shortly.
-
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
@stormi Just tried restoring a backup from yesterday and still no luck. Also, I cannot reproduce the successful copy I thought happened the other day, so I can only assume that last time, when I thought it worked, I booted a VM that was already on the host prior to the upgrade to 8.2. At least it appears to consistently not work.
Ping something across if you want it tested.
An update candidate is now available that should fix that backup restore / VM copy issue.
Install it with:
yum clean all --enablerepo=xcp-ng-testing
yum update uefistored --enablerepo=xcp-ng-testing
I don't think a reboot is needed, maybe not even a toolstack restart. If you don't see better behaviour with the update, then maybe try a toolstack restart first and then a reboot.
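For anyone who hasn't done it before, a toolstack restart can be triggered from dom0 with:
xe-toolstack-restart
It only restarts the management services, so running VMs are not affected.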
-
@stormi said in XCP-ng 8.2.0 RC now available!:
yum update uefistored
I could only get it (uefistored-0.2.6-1.xcpng8.2.x86_64) to update by
yum update uefistored --enablerepo=xcp-ng-testing
But it has done the trick. No toolstack restart or reboot needed either.
-
@jmccoy555 you're right, I've fixed my post.
-
I see that installing XCP-ng 8.2.0 will create ext4 storage repositories by default. Why isn't dom0 also ext4?
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext3 18G 2.2G 15G 13% /
/dev/sda5 ext3 3.9G 20M 3.6G 1% /var/log
-
@deoccultist To limit the maintenance work, we're not diverging from what Citrix Hypervisor does unless this brings significant value, and they still install dom0 on ext3.
-
@r1 said in XCP-ng 8.2.0 RC now available!:
@jmccoy555 said in XCP-ng 8.2.0 RC now available!:
CephFS is working nicely, but the update deleted my previous secret in /etc and I had to reinstall the extra packages and recreate the SR and then obviously move the virtual disks back across and refresh
Were you not able to attach the pre-existing CephFS SR? Depending on your answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
Anyway, all sorted now and back up and running with no CephFS issues!
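For the record, re-attaching an existing SR without recreating it usually follows the introduce / pbd-create / pbd-plug pattern. This is only a sketch: the device-config keys shown (server, serverpath) are what I'd expect the CephFS driver to use, so check them against the XCP-ng storage docs before relying on this.
xe sr-introduce uuid=<existing sr uuid> type=cephfs name-label="CephFS SR" content-type=user
xe pbd-create host-uuid=<host uuid> sr-uuid=<existing sr uuid> device-config:server=<mon host> device-config:serverpath=<path>
xe pbd-plug uuid=<new pbd uuid>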
-
@olivierlambert I just wanted to point out how I solved this issue. It happened to me when I had a host kicked from a pool while the host was offline, and I manually re-added it. Long story short, I was remote and didn't want to reinstall via IPMI. What I did, after joining the pool and seeing that no iSCSI IQN information was showing either on the General page in XCP-ng Center or via
xe host-list uuid=<host uuid> params=other-config
was to leave the pool and re-add the host via XOA. I think I ran into it because of a bad join, and leaving and re-joining rebuilt all of the appropriate configuration files.
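If someone hits the same symptom and can't easily rejoin the pool, the IQN should also be settable by hand, assuming the usual convention of storing it under the host's other-config:iscsi_iqn key (the IQN below is just an example placeholder):
xe host-param-set uuid=<host uuid> other-config:iscsi_iqn=iqn.2020-11.com.example:host1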
All the best and massive kudos for such a great product.
-Travis
-
Great news! Thanks for the feedback