-
Since there's an update for both XCP-ng 7.5 and 7.6, please all tell me which version you tested
-
@AllooTikeeChaat there was no performance change; it was the first use of that test host
@stormi XCP-ng 7.6, just HVM, no PV
-
If all goes well I'll push the updates to everyone on Monday. Meanwhile, please go on with the testing if you can.
-
@stormi XCP-ng 7.6
-
Thanks to everyone who tested the update candidate. I've now pushed the security update for XCP-ng 7.6. I'm holding back the XCP-ng 7.5 update until at least one person confirms that it's working for them.
-
Sorry, been away this weekend. @stormi XCP-ng 7.6
-
I've installed the updates on a pool. Now every time I migrate a pfSense HVM VM within the pool, the console is gone and memory usage is at 100%. I did do the following on the pool master: `yum update xenopsd xenopsd-xc xenopsd-xenlight --enablerepo='xcp-ng-updates_testing'`. The package version also differs from the other hosts (`0.66.0-1.1.xcpng` versus `0.66.0-1.el7.centos`). Would that be the cause?
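(For reference, the package versions on each host can be checked with a plain RPM query — standard `rpm` usage, nothing XCP-ng specific:)

```sh
# Show the installed xenopsd package versions on this host
rpm -q xenopsd xenopsd-xc xenopsd-xenlight
```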
-
@Ultra2D Could you downgrade to the previous version with `yum downgrade xenopsd xenopsd-xc xenopsd-xenlight`, restart the toolstack and tell us if migration works better? Also make sure that you already have the latest updates for `xcp-emu-manager`, which fixed many migration issues about two months ago. See https://github.com/xcp-ng/xcp/wiki/Updates-Howto#a-special-word-about-xcp-ng-75-76-and-live-migrations.
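The whole sequence on the affected host would look roughly like this (assuming `xe-toolstack-restart` is used to restart the toolstack, as is usual on an XCP-ng host):

```sh
# Roll the xenopsd packages back to the previously released version
yum downgrade xenopsd xenopsd-xc xenopsd-xenlight

# Restart the toolstack so the downgraded xenopsd is picked up
xe-toolstack-restart

# Make sure xcp-emu-manager is on its latest released version
yum update xcp-emu-manager
```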
-
@Ultra2D In fact, you may be right: differing versions of those packages may cause an issue, because the two hosts may behave differently regarding VMs that have no `platform:device-id` set. So another test to do would be to install the update candidate for those packages on all hosts and restart their toolstacks, then test migration of that VM again.
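On each host of the pool, that would be roughly the same commands as quoted earlier in this thread (testing repository name taken from the `yum` line above):

```sh
# Install the update candidates from the testing repository
yum update xenopsd xenopsd-xc xenopsd-xenlight --enablerepo='xcp-ng-updates_testing'

# Then restart the toolstack on this host
xe-toolstack-restart
```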
-
@stormi Thanks. Installing the update candidate on all hosts and restarting the toolstack works, but only after power cycling the VM once. `xcp-emu-manager` is version `0.0.9-1`. Is it advisable to stay on the testing repo until the next version? There are some more non-Windows HVM VMs.
-
@Ultra2D so you mean that installing the updated packages would "break" the first migration of such a VM unless it's been rebooted once? If that is so, then I'd advise reverting to the previous version (or making sure not to attempt a migration without power cycling the VMs once). Otherwise, it's your choice: if the updated packages bring a benefit to you, you can keep them, else revert to the previous ones.
-
@stormi I only tested with one VM. It crashed a couple of times when the pool master had updates from `xcp-ng-updates_testing` and the slaves had the updates that were released yesterday. After updating the last remaining slave to the updates from `xcp-ng-updates_testing`, moving the VM resulted in a stuck VM. So I don't think you can draw any conclusions from this, except maybe that you should install the same version on master and slaves.
-
Xen security updates pushed to everyone (7.6 yesterday, 7.5 today).
Blog post: https://xcp-ng.org/blog/2019/03/12/xcp-ng-security-bulletin-vulnerabilities-pv-guests/
-
I'm getting an error:
Running transaction test
Transaction check error:
installing package xen-dom0-tools-4.7.6-6.4.1.xcpng.x86_64 needs 1 inodes on the /var/log filesystem
-
@codedmind This is probably related to a bug in the openvswitch package and log rotation. We sent a newsletter about it a few days ago: https://mailchi.mp/7ed52f9a2151/important-noticeopenvswitch-issue
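To confirm that it is indeed inode exhaustion, you can check free inodes on the log filesystem (standard `df` usage):

```sh
# IUse% at 100% means no new files can be created on that filesystem
df -i /var/log
```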
-
Hmm, ok... but I cannot update... yes, I have that version... but as I cannot update, how can I solve it?
-
@codedmind when it got to the point that you ran out of inodes in `/var/log`, you need to remove some files before you can update. `find /var/log -name 'ovsdb-server.log.*.gz' -delete` should work (untested, try without `-delete` first). Else look at the other threads that covered the subject.
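A cautious way to do it, step by step (same path and filename pattern as above):

```sh
# Dry run first: list the rotated openvswitch logs that would be removed
find /var/log -name 'ovsdb-server.log.*.gz'

# If the list looks right, actually delete them
find /var/log -name 'ovsdb-server.log.*.gz' -delete

# Confirm that inodes were freed before retrying the update
df -i /var/log
```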
first). Else look at the other threads that covered the subject. -
Ok thanks!
-
Does the openvswitch bug cause the local storage to fill up, or just the log file?
Would this openvswitch issue potentially lead our XCP-ng server to crash?
-
Local storage is not touched. Only the `/var/log` partition, or the `/` partition if you have no separate `/var/log` partition. In the latter case, I suppose this could hang or crash the server. In the former case, it prevents some new tasks from being performed, but we haven't had reports of crashing servers at this stage.