Citrix Hypervisor 8.0 landed
-
@nuts23 did you use freshly installed Windows or "used" ones?
-
@Prilly I installed Citrix Hypervisor 8.0 on my Dell C6100, which is running an L5630 CPU, and it booted just fine.
-
@crash you can even try XCP-ng 8.0 now (still beta, but it will be useful to try)
-
@olivierlambert @prilly Just loaded XCP-ng 8.0 successfully on a Dell C6100 with 2 x L5630.
No errors during install, and boots up just fine for use.
-
Thank you guys for testing the L5630 CPU; this gave me the confidence to upgrade my Dell R610 with 2x X5675 CPUs to Hypervisor 8.0. The upgrade was done from an ISO burned to CD, the process went very smoothly, the server boots up and everything seems mostly fine.
I did notice it loads CPU microcode rev 1f on boot. I also noticed systemd throws an error on boot: systemd failed to load kernel modules. This has no impact and the host is running fine with no error other than that. I suspect the error might be related to the upgrade from 7.6, so I will try a fresh install of 8.0 and see if that clears the kernel modules issue.
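Before reinstalling, I'll probably check the microcode revision and which module actually fails; something like this from dom0 should do it (commands are just my guess at the right places to look):
grep microcode /proc/cpuinfo | sort -u
systemctl status systemd-modules-load.service
journalctl -b -u systemd-modules-load.service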
-
From what I see in https://www.intel.com/content/dam/www/public/us/en/documents/corporate-information/SA00233-microcode-update-guidance_05132019.pdf, X5675 CPUs are no longer supported by Intel itself, so there is no MDS mitigation for you.
And that's why no vendor can say they "support" them anymore, since no one can guarantee the security of anything running on them now.
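If the dom0 kernel is recent enough to know about MDS, it reports its own view of the mitigation status; note this only covers the dom0 kernel's side, not the hypervisor itself:
cat /sys/devices/system/cpu/vulnerabilities/mds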
-
@stormi as long as you don't have any untrusted VMs running on these CPUs, there is no security problem.
-
@Prilly You're fine if you're running trusted workloads. That includes the VMs themselves and everything that gets executed in them, including maybe JavaScript or WebAssembly from some not-so-trusted websites. It also means that a VM compromised through a security flaw, a bad configuration, or access obtained through social engineering can leverage the hardware security flaws to get access to sensitive data, not only from within that VM but also from other VMs.
So I agree with you, but we need to be careful about the definition of "trusted".
-
@cg said in Citrix Hypervisor 8.0 landed:
@maxcuttins said in Citrix Hypervisor 8.0 landed:
I took down one of my XCP-ng hosts to set up a non-nested-virtualized Xen 8 in order to test RBD speed. Performance is about 4x slower than it should be, but at least it runs almost like a standard local disk.
dd if=/dev/zero of=./test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.86156 s, 577 MB/s
1G is usually a really bad test, as pretty small things can influence the result massively.
You should run tests with 10 or, better, 100 - if you can.
That also diminishes the influence of any caches (on source and target!).
Not very good.
Here is 10M:
dd if=/dev/zero of=./test1.img bs=10M count=1 oflag=dsync
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.0545468 s, 192 MB/s
and here 100M:
dd if=/dev/zero of=./test1.img bs=100M count=1 oflag=dsync
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 0.266544 s, 393 MB/s
-
@maxcuttins did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?
Usually you set the blocksize to something useful, like 1M, and set count to e.g. 10000.
Of course you can change the blocksize to test a bit, but that's usually between something like 64k and maybe 4M.
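For example, something like this (size and flags purely illustrative, adapt to your storage):
dd if=/dev/zero of=./test1.img bs=1M count=10000 oflag=dsync
-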
@cg said in Citrix Hypervisor 8.0 landed:
@maxcuttins did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?
Usually you set the blocksize to something useful, like 1M, and set count to e.g. 10000.
Of course you can change the blocksize to test a bit, but that's usually between something like 64k and maybe 4M.
Ah, did you mean 10G instead of 10M?
-
First rule of all benchmarks: the longer and more often they run, the more precise they are.
If we're talking about 1G as the base, why would I switch to 10 or 100 M? That doesn't make any sense at all.
-
I heard in several places on this forum that fio would be a better benchmark than dd. Does it apply here too?
-
It's always better than dd, because it's closer to a real load.
-
@stormi dd stands for disk dump and does exactly that: it copies a stream of data.
fio, however, can be configured for precise workloads, read/write mixes, parallel workloads etc. So the former will only give you streaming benchmarks, which almost nobody cares about. The latter can simulate real-world (VM/database...) workloads, where (controller) caches and non-magnetic storage (flash, Optane, MRAM...) make the real difference.
Also use a big amount of data, since caches can influence small tests extremely. Don't get me wrong: we need them and they can make huge differences, but as long as your benchmark fully fits into them, it gives you nonsense/fake results. Also, (consumer) SSDs start throttling after some 10 to a very few 100 GB of data written: their caches fill up and they 'overheat'. You can spend days on benchmarks and on how to do them right.
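As a starting point, a mixed random read/write job could look something like this; every number here is just a placeholder to adapt to your setup:
fio --name=vm-sim --filename=./test.fio --size=10G --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=16 --numjobs=4 --direct=1 --runtime=120 --time_based --group_reporting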
Also use big amount of data, since caches can impact small ones extremely. Don't get me wrong: We need them and they can make huge differences, but as long as your benchmarks fully fit into them, it gives your nonsense/fake results. Also (consumer) SSDs start throttling after some 10 to a very few 100 GB of data written. Their caches fill up and they 'overheat'.You can spend days on benchmarks and how to do what.